# A new observable for cosmic shear
Jérémie Francfort, Ruth Durrer and Giulia Cusin
###### Abstract
In this paper we introduce a new observable to measure cosmic shear. We show
that if we can measure with good accuracy both the orientation of a galaxy
and the polarisation direction of its radio emission, the angle between them
is sensitive to the foreground cosmic shear. Even if the signal-to-noise ratio
for a single measurement is expected to be rather small, the fact that all
galaxies in a given pixel are subject to the same shear can be used to
overcome the noise. An additional advantage of this observable is that the
signal is not plagued by intrinsic alignment. We estimate the SNR for the
shear correlation functions $\zeta_{\pm}(\mu,z_{1},z_{2})$ measured in this
way with the future SKA II survey.
## 1 Introduction
Cosmic shear is the coherent deformation of the images of background galaxies
due to the gravitational field of the foreground matter distribution. It gives
us precious information about the total foreground matter density, as it is
sensitive to dark and luminous matter alike.
However, shear measurements are very difficult. The shear typically modifies the
ellipticity of a galaxy by about 1% or even less [1]. Furthermore, the shear
correlation function is affected by so-called intrinsic alignment, which can be
of the same order as the shear itself [2, 3]. Nevertheless, in recent years
several observational campaigns like KiDS (the Kilo-Degree Survey), DES (the Dark
Energy Survey) and HSC (the Hyper Suprime-Cam survey) have measured the shear
correlation function in different redshift bins, see e.g. [4, 5, 6, 7, 8, 9, 10].
The shear correlation function is a very important observable for measuring
cosmological parameters and, more importantly, for testing the consistency of the
cosmological standard model $\Lambda$CDM.
The shear from scalar perturbations is determined by the lensing potential,
$\phi({\boldsymbol{n}},z)=-\int_{0}^{r(z)}\mathrm{d}r\frac{r(z)-r}{r(z)r}\left[\Phi(r{\boldsymbol{n}},t_{0}-r)+\Psi(r{\boldsymbol{n}},t_{0}-r)\right]\,.$
(1.1)
Here $\Phi$ and $\Psi$ are the Bardeen potentials, $r(z)$ is the comoving
distance out to redshift $z$, $t=t_{0}-r$ is conformal time along the light
path and ${\boldsymbol{n}}$ is a direction in the sky. We neglect possible
contributions from tensor perturbations, i.e. gravitational waves, as well as
from vector perturbations since they are generally small [11, 12]. Also, Eq.
(1.1) assumes the so-called ’Born approximation’, i.e. we compute the lensing
potential along the straight, unlensed path, assuming that lensing is a small
perturbation. For non-relativistic matter and a cosmological constant the two
Bardeen potentials are equal and correspond to the Newtonian gravitational
potential. Light from a source at redshift $z$, seen in direction
${\boldsymbol{n}}$ is coming to us from the angular position
${\boldsymbol{n}}+{\boldsymbol{\nabla}}\phi({\boldsymbol{n}},z)$, where
${\boldsymbol{\nabla}}$ denotes the 2D gradient on the unit sphere and
${\boldsymbol{\nabla}}\phi({\boldsymbol{n}},z)$ is the deflection angle.
The shear $\gamma_{ij}$ is given by the traceless part of the second angular
derivatives of $\phi$. The convergence, given by the angular Laplacian, can be
measured by galaxy number counts, see e.g. [13, 14, 15] for theoretical
aspects and numerical simulations and [16, 17] for observations.
Usually, the shear is measured via the correlation of the direction of the
ellipticity of galaxies. This assumes that ellipticities are intrinsically
uncorrelated, which is evidently not true for galaxies at similar redshifts and
is even relevant for different redshifts; see e.g. [18] for a discussion of
intrinsic alignment. In this paper we derive a new observable which can be
used to measure the shear correlation function and which does not depend on
’intrinsic alignment’: It is well known that the polarisation of a photon is
parallel transported along its path. However, a small image of finite size is
Lie transported. This is in general described with the Jacobi map [19].
Therefore, if the light from a galaxy is polarised, which is usually the case
for radio galaxies, and if this polarisation is aligned with the ellipticity
of the galaxy, which is also typically the case, this alignment is affected by
foreground shear. Typically, the angle between the polarisation vector and the
axes of the galaxy is of the order of a few degrees, see [20] for more
details. It might also be useful to measure the galaxy shapes with near future
optical telescopes like LSST [21] or the Euclid satellite [22] but the
polarisation has to be measured in the radio since these are the wavelengths
of synchrotron radiation whose polarisation is correlated with the intrinsic
direction of the galaxy.
If the principal axes of the shear tensor and the intrinsic ellipticity of the
galaxy are not aligned, this leads to a slight rotation of the image with
respect to the polarisation, as we have shown in a previous paper [23]. In
that paper we have studied the effect considering galaxies as Schwarzschild
lenses. In this work, we use shear from linear cosmological perturbation
theory and want to outline how one can use the correlation of the orientation
of the image and the polarisation to measure the shear correlation function.
The class of sources we have in mind in this analysis are low-frequency radio
galaxies (typically 1-50 GHz, as lower frequencies are significantly
depolarised by Faraday rotation [24]), for which the dominant source of linear
polarisation is expected to be synchrotron radiation due to electrons moving
in the magnetic field of the galaxy. For these objects, the magnetic field is
dominantly in the galactic plane (the orthogonal component is very small) and
tends to be aligned with galaxy morphology, i.e. the semi-major axis of the
galaxy (see e.g. [25]). Then polarisation from synchrotron radiation is mainly
orthogonal to the magnetic field component (i.e. it is in the orbital plane).
Hence its projected component (on the observer’s screen) is normal to the
galaxy’s major axis.
Previous authors have exploited the fact that the polarisation position angle
is unaffected by lensing in order to measure gravitational lensing of distant
quasars, see [26, 27, 28]. In [29], the authors proposed to use the
polarisation information in radio galaxies as an indicator of the galaxy
intrinsic shape, with the goal of mitigating shot noise and intrinsic
alignment uncertainties in shear reconstruction. In [30], the same authors
extended this idea to reconstruct maps of the projected dark matter
distribution, or the lensing convergence field. The authors of [31] proposed
to use polarisation as a proxy for the intrinsic position angle of an observed
galaxy, and proposed techniques for cleanly separating weak gravitational
lensing signals from intrinsic alignment contamination in forthcoming radio
surveys. Finally,
in [32] it is shown that, thanks to polarisation information, radio weak
lensing surveys will be able to mitigate contamination by intrinsic
alignments, in a way similar but fully complementary to available self-
calibration methods based on position-shear correlations.
Unlike all these works, where the polarisation direction is used to have a
better handle on intrinsic alignment (inferred from the polarisation direction
itself), we propose to measure the offset between the observed polarisation
and galaxy morphology as a new observable in its own right. In other words,
although the idea of using polarisation information to access the galaxy
intrinsic orientation is widely explored in the literature, we believe that
this is the first time that a shear estimator is explicitly written down in
terms of the offset between the (observed) galaxy major axis and the
polarisation orientation. A first attempt to do weak lensing with radio
surveys is published in [33]. In this first work, however, polarisation was
not used.
In Ref. [34] the authors do consider rotation, but not the rotation induced by
shear which is considered in the present paper; rather, they consider the
rotation from an antisymmetric contribution to the Jacobi map, which is much
smaller than the shear as it appears only at second order in the perturbations
[35].
This paper is structured as follows. In the next section we develop the
theoretical expressions which determine the shear from a measured angle
$\delta\alpha$ by which the orientation of the galaxy and its polarisation
differ. In Section 3 we present a rough estimate of the error on the
measurement given a typical precision of measured angles. In Section 4 we
discuss our results and in Section 5 we conclude. Some useful properties of
Spin Weighted Spherical Harmonics are presented in Appendix A for
completeness. In Appendix B we derive in detail the error estimates used in
the main text.
Notations and conventions:
We use the signature $(-,+,+,+)$.
The usual spherical angles are $(\theta,\varphi)$, and the corresponding unit
vector is $\boldsymbol{n}$. The surface element of the sphere is denoted
$\mathrm{d}\Omega$. The lensing potential is called $\phi(\boldsymbol{n},z)$.
The Bardeen potentials are $\Phi$ and $\Psi$. The Spin Weighted Spherical
Harmonics are ${}_{s}Y_{\ell,m}$, while the ’usual’ Spherical Harmonics,
${}_{0}Y_{\ell,m}$, are simply denoted $Y_{\ell,m}$.
Figure 1: The general setup (in $2D$, seen from above): Two pixels are
considered, in directions $\boldsymbol{n}_{j}$ and at redshifts $z_{j}$. The
directions are separated by an angle $\varphi$ (or
$\boldsymbol{n}_{1}\cdot\boldsymbol{n}_{2}=\cos\varphi=\mu$). In each pixel,
one galaxy is chosen (represented by the dots). The computations are made in
the equatorial plane.
## 2 Theoretical development
The lensing potential given in Eq. (1.1) is a stochastic quantity which can be
decomposed into Spherical Harmonics as
$\phi(\boldsymbol{n},z)=\sum_{\ell,m}\phi_{\ell,m}(z)Y_{\ell,m}(\boldsymbol{n})\,,$
(2.1)
where the scalars $\phi_{\ell,m}(z)$ are also random variables. Assuming
statistical isotropy, different values of $\ell$ and $m$ are not correlated and
their two-point correlation spectrum is given by
$\langle\phi_{\ell_{1},m_{1}}(z_{1})\phi^{*}_{\ell_{2},m_{2}}(z_{2})\rangle=C_{\ell_{1}}(z_{1},z_{2})\delta_{\ell_{1},\ell_{2}}\delta_{m_{1},m_{2}}\,.$
(2.2)
The $C_{\ell}(z_{1},z_{2})$ are the lensing power spectra for different
redshifts $z_{1}$ and $z_{2}$. If the fluctuations are Gaussian, these power
spectra encode all the statistical information of the lensing potential. The
lensing potential contains very useful information e.g. about the matter
distribution in the Universe which is not plagued by the biasing problem of
galaxy number counts. Therefore estimating it using different measurements
with different systematics is very important.
In this section we present the main theoretical tools and formulas of the
article. More explanations and details can be found in the Appendix.
We consider radio galaxies which are polarised along their semi-major (or
minor) axis. This polarisation is parallel transported and hence its
components expressed in a parallel transported Sachs basis are constant. The
radio galaxy, represented by an ellipse, is sheared and magnified according to
the Jacobi map. If the principal axes of the shear are not aligned with the
principal axes of the galaxy, this leads to a rotation of the galaxy's
principal axes expressed in the Sachs basis. In our previous work [23] we have
calculated this rotation, which is given by
$\delta\alpha=\frac{\varepsilon^{2}}{2-\varepsilon^{2}}\left(\gamma_{2}\cos 2\alpha-\gamma_{1}\sin 2\alpha\right)\,.$ (2.3)
Here $\varepsilon$ is the eccentricity of the galaxy,
$(\gamma_{1},\gamma_{2})$ are the components of the shear matrix in the Sachs
basis,
$\boldsymbol{\Gamma}=\begin{pmatrix}-\gamma_{1}&-\gamma_{2}\\ -\gamma_{2}&+\gamma_{1}\end{pmatrix}\,,$ (2.4)
and $\alpha$ is the angle between the major-axis of the galaxy shape and the
first basis vector $\boldsymbol{e}_{1}$. We stress that the dependence of the
rotation angle (2.3) on the choice of the Sachs basis is only apparent: under
a rotation of the Sachs basis, the shear transformation compensates the
transformation of the position angle $\alpha$, see [23] for details.
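As a quick sanity check of this invariance, the following minimal Python sketch (our own illustration, not part of the original analysis) verifies numerically that $\delta\alpha$ of Eq. (2.3) is unchanged under a rotation of the Sachs basis, assuming the standard spin-2 transformation of the shear components and the corresponding shift of the position angle:

```python
import numpy as np

def delta_alpha(eps, alpha, gamma1, gamma2):
    # Eq. (2.3): rotation of the image with respect to the polarisation
    return eps**2/(2 - eps**2)*(gamma2*np.cos(2*alpha) - gamma1*np.sin(2*alpha))

rng = np.random.default_rng(1)
eps = 0.7                                      # eccentricity of the galaxy
gamma1, gamma2 = 1e-3*rng.standard_normal(2)   # fiducial shear components
alpha = rng.uniform(0.0, np.pi)                # position angle in the Sachs basis
psi = rng.uniform(0.0, 2*np.pi)                # rotation of the Sachs basis

# Assumed conventions: the shear transforms as a spin-2 quantity while the
# position angle shifts by the rotation angle.
g1p = gamma1*np.cos(2*psi) + gamma2*np.sin(2*psi)
g2p = -gamma1*np.sin(2*psi) + gamma2*np.cos(2*psi)

print(np.isclose(delta_alpha(eps, alpha, gamma1, gamma2),
                 delta_alpha(eps, alpha - psi, g1p, g2p)))   # True
```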
If the semi-major axis of the galaxy is aligned with the shear, $\delta\alpha$
vanishes. For example, if we choose $\boldsymbol{e}_{1}$ in the direction of
the semi-major axis of the galaxy such that $\alpha=0$, alignment with the
shear implies $\gamma_{2}=0$ and hence $\delta\alpha=0$. In this case, the
shear just enhances or reduces somewhat the ellipticity of the galaxy. In all
other situations it also generates a rotation by $\delta\alpha$. This rotation
has already been studied long ago as a possible origin of the anisotropy of
galaxy orientations [36].
An additional rotation is in principle also generated by the anti-symmetric
part of the Jacobi matrix. But this part is non-vanishing only at second order
in perturbation theory [35] and we neglect it here.
In addition to $\delta\alpha$, the angle between the polarisation direction
and the semi-major axis, also the eccentricity $\varepsilon$ and the direction
of the galaxy’s semi-major axis, parametrised by $\alpha$, are observables.
Similar to our previous work [23], we define an observable which we call the
’scaled rotation’ (some versions of the article have a sign mistake in this
definition; the formula given here is correct) by
$\Theta=\frac{2-\varepsilon^{2}}{\varepsilon^{2}}\delta\alpha\,.$ (2.5)
With (2.3) the scaled rotation is related to the shear as
$\Theta=\gamma_{2}\cos 2\alpha-\gamma_{1}\sin 2\alpha\,,$ (2.6)
which is actually simply the shear in the direction $\alpha-\pi/4$, see
Appendix A.4. We want to determine the correlation function
$\langle\Theta({\boldsymbol{n}}_{1},z_{1})\Theta({\boldsymbol{n}}_{2},z_{2})\rangle$
for two directions ${\boldsymbol{n}}_{1}$ and ${\boldsymbol{n}}_{2}$ in the
sky and two redshifts $z_{1}$, $z_{2}$. Our expression for the variable
$\Theta({\boldsymbol{n}},z)$ given in (2.6) in principle depends on our choice
for the Sachs basis via the angle $\alpha$ and via $\gamma_{1}$ and
$\gamma_{2}$. However, as explained in App. A.4, one can circumvent this
problem and define a correlation function that is explicitly coordinate
invariant by choosing $\boldsymbol{e}_{1}$ along the great circle
from ${\boldsymbol{n}}_{1}$ to ${\boldsymbol{n}}_{2}$, which is equivalent to
putting both galaxies on the ’Equator’, with coordinates $(\pi/2,0)$ and
$(\pi/2,\varphi)$ and
$\mu=\cos\varphi={\boldsymbol{n}}_{1}\cdot{\boldsymbol{n}}_{2}$. Note that
there is still a $\mathbb{Z}_{2}$ symmetry where one can swap both galaxies.
However, the correlation function does not depend on this choice. Given two
galaxies and the described setup, the correlation between their scaled
rotation is given by
$\displaystyle\langle{\Theta}(\boldsymbol{n}_{1},\alpha_{1},z_{1}){\Theta}(\boldsymbol{n}_{2},\alpha_{2},z_{2})\rangle$
$\displaystyle=\zeta_{+}(\mu,z_{1},z_{2})\cos(2(\alpha_{1}-\alpha_{2}))+\zeta_{-}(\mu,z_{1},z_{2})\cos(2(\alpha_{1}+\alpha_{2}))\,,$
(2.7)
with $\mu=\boldsymbol{n}_{1}\cdot\boldsymbol{n}_{2}=\cos\varphi$ and
$\zeta_{+}(\mu,z_{1},z_{2})$ and $\zeta_{-}(\mu,z_{1},z_{2})$ the two
coordinate independent shear correlation functions (see App. A for more
details). These correlation functions are related to the power spectrum of the
lensing potential $C_{\ell}(z_{1},z_{2})$ as
$\displaystyle\int_{-1}^{+1}\zeta_{+}(\mu,z_{1},z_{2})\tilde{P}_{\ell}(\mu)\,\mathrm{d}\mu$
$\displaystyle=\frac{1}{4\pi}C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}\,,$ (2.8)
$\displaystyle\int_{-1}^{+1}\zeta_{-}(\mu,z_{1},z_{2})\tilde{Q}_{\ell}(\mu)\,\mathrm{d}\mu$
$\displaystyle=\frac{1}{4\pi}C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}\,,$ (2.9)
$\displaystyle\nu^{2}_{\ell}$ $\displaystyle=\frac{(\ell+2)!}{(\ell-2)!}\,,$
(2.10)
where the polynomials $\tilde{P}_{\ell}(\mu)$ and $\tilde{Q}_{\ell}(\mu)$ are
defined, with $\mu=\cos\theta$, by
$-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{+2}Y_{\ell,+2}(\theta,\pi/2)=-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{-2}Y_{\ell,-2}(\theta,\pi/2)=\frac{2\ell+1}{16\pi}\tilde{Q}_{\ell}(\mu)\,,$ (2.11)
$-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{+2}Y_{\ell,-2}(\theta,\pi/2)=-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{-2}Y_{\ell,+2}(\theta,\pi/2)=\frac{2\ell+1}{16\pi}\tilde{P}_{\ell}(\mu)\,,$ (2.12)
and ${}_{s}Y_{\ell,m}$ are the Spin Weighted Spherical Harmonics. More
details, and the explicit expressions for $\ell=2,\dots,5$ are given in App.
A.
From the observable $\Theta$ we now construct an estimator for the coordinate
independent correlation functions $\zeta_{+}$ and $\zeta_{-}$. Since we want
to estimate two correlation functions, we need two couples of galaxies,
separated by the same angle $\varphi$. Schematically, as
$\Theta\sim\gamma_{1}+\gamma_{2}$, one needs two galaxies to invert this
relation and extract $\gamma_{1}$ and $\gamma_{2}$. Moreover, as the
correlation function is given by $\zeta\sim\langle\gamma\gamma\rangle$, we
need the value of $\gamma$ in two different pixels, which can be done by
considering $4$ galaxies in total.
More precisely, the estimator can be computed as follows. We consider two
couples of galaxies both separated by the same angle $\varphi$ and located at
the same redshifts (within the resolution of our survey). The galaxies of the
first couple have the directions and redshifts $(\boldsymbol{n}_{j},z_{j})$
with angles $\alpha_{j}$ ($j=1,2$) as defined above, while the second couple
of galaxies are located in different directions $\boldsymbol{n}^{\prime}_{j}$
and with different angles $\alpha^{\prime}_{j}$ but inside the same redshift
bins $z_{j}$. Note that we define the angles $\alpha_{j}$ and
$\alpha^{\prime}_{j}$ with respect to the great circle connecting
${\boldsymbol{n}}_{1}$ and ${\boldsymbol{n}}_{2}$, respectively
${\boldsymbol{n}}^{\prime}_{1}$ and ${\boldsymbol{n}}^{\prime}_{2}$, which can
be different for each couple. The two couples of galaxies, however, should be
separated by the same angle $\varphi$ (within our angular resolution), i.e.
$\boldsymbol{n}_{1}\cdot\boldsymbol{n}_{2}=\boldsymbol{n}^{\prime}_{1}\cdot\boldsymbol{n}^{\prime}_{2}=\cos\varphi=\mu$.
The two observables are the products of the scaled rotations, namely
$\displaystyle\Xi$ $\displaystyle={\Theta}(\boldsymbol{n}_{1},\alpha_{1},z_{1})\Theta(\boldsymbol{n}_{2},\alpha_{2},z_{2})\,,$ (2.13)
$\displaystyle\Xi^{\prime}$ $\displaystyle={\Theta}(\boldsymbol{n}_{1}^{\prime},\alpha_{1}^{\prime},z_{1})\Theta(\boldsymbol{n}_{2}^{\prime},\alpha_{2}^{\prime},z_{2})\,.$ (2.14)
From these, and using the theoretical expression of the correlation function
of the scaled rotations given by Eq. (2.7), replacing the expectation value by
the observables $\Xi$ and $\Xi^{\prime}$, we can extract the estimators
$\displaystyle\hat{\zeta}_{+}(\mu,z_{1},z_{2})$
$\displaystyle=\Xi\,F_{1}(\alpha_{1}^{\prime},\alpha_{2}^{\prime},\alpha_{1},\alpha_{2})+\Xi^{\prime}\,F_{1}(\alpha_{1},\alpha_{2},\alpha_{1}^{\prime},\alpha_{2}^{\prime})\,,$
(2.15) $\displaystyle\hat{\zeta}_{-}(\mu,z_{1},z_{2})$
$\displaystyle=\Xi\,F_{2}(\alpha_{1}^{\prime},\alpha_{2}^{\prime},\alpha_{1},\alpha_{2})+\Xi^{\prime}\,F_{2}(\alpha_{1},\alpha_{2},\alpha_{1}^{\prime},\alpha_{2}^{\prime})\,,$
(2.16)
with
$F_{1}(\alpha_{1},\alpha_{2},\alpha_{1}^{\prime},\alpha_{2}^{\prime})=\frac{\cos(2(\alpha_{1}+\alpha_{2}))}{\cos(2(\alpha_{1}^{\prime}-\alpha_{2}^{\prime}))\cos(2(\alpha_{1}+\alpha_{2}))-\cos(2(\alpha_{1}^{\prime}+\alpha_{2}^{\prime}))\cos(2(\alpha_{1}-\alpha_{2}))}\,,$
(2.17)
and
$F_{2}(\alpha_{1},\alpha_{2},\alpha_{1}^{\prime},\alpha_{2}^{\prime})=\frac{\cos(2(\alpha_{1}-\alpha_{2}))}{\cos(2(\alpha_{1}^{\prime}+\alpha_{2}^{\prime}))\cos(2(\alpha_{1}-\alpha_{2}))-\cos(2(\alpha_{1}^{\prime}-\alpha_{2}^{\prime}))\cos(2(\alpha_{1}+\alpha_{2}))}\,.$
(2.18)
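For concreteness, a direct transcription of Eqs. (2.15)-(2.18) into code reads as follows (a minimal sketch; the variable names are ours):

```python
import numpy as np

def F1(a1, a2, b1, b2):
    # Eq. (2.17) with arguments (alpha_1, alpha_2, alpha_1', alpha_2')
    return np.cos(2*(a1 + a2))/(np.cos(2*(b1 - b2))*np.cos(2*(a1 + a2))
                                - np.cos(2*(b1 + b2))*np.cos(2*(a1 - a2)))

def F2(a1, a2, b1, b2):
    # Eq. (2.18)
    return np.cos(2*(a1 - a2))/(np.cos(2*(b1 + b2))*np.cos(2*(a1 - a2))
                                - np.cos(2*(b1 - b2))*np.cos(2*(a1 + a2)))

def zeta_hat(Xi, Xi_p, a1, a2, a1p, a2p):
    """Eqs. (2.15)-(2.16): estimators of zeta_+ and zeta_- from one quadruplet,
    i.e. the two products of scaled rotations Xi, Xi' and the four angles."""
    z_plus = Xi*F1(a1p, a2p, a1, a2) + Xi_p*F1(a1, a2, a1p, a2p)
    z_minus = Xi*F2(a1p, a2p, a1, a2) + Xi_p*F2(a1, a2, a1p, a2p)
    return z_plus, z_minus
```

Feeding this routine the expectation values of Eq. (2.7) in place of $\Xi$ and $\Xi^{\prime}$ returns $\zeta_{\pm}$ exactly; for the degenerate angle configurations discussed in Section 3 the denominators vanish and the estimator is undefined.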
We observe that in Eqs. (2.15) and (2.16) there is no angle dependence on the
left-hand side. We used this notation to stress that, observationally, one
chooses a given Sachs frame and for each galaxy quadruplet in pixels
${\boldsymbol{n}}_{1}$ and ${\boldsymbol{n}}_{2}$, one builds the correlations
given in eqs. (2.15) and (2.16). Every single estimator depends on the frame
choice. However, their expectation value obtained by averaging over all
possible quadruplets in the two pixels is independent of the angles
$\alpha_{i}$ and $\alpha^{\prime}_{i}$. In other words, and by construction,
$\langle\hat{\zeta}_{\pm}(\mu,z_{1},z_{2})\rangle=\zeta_{\pm}(\mu,z_{1},z_{2})\,.$
(2.19)
Once an estimator for $\zeta_{\pm}$ is obtained, the estimator for the lensing
potential power spectrum $C_{\ell}(z_{1},z_{2})$ can be given by Eqs. (2.8)
and (2.9).
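As an illustration of this last step, the following sketch inverts Eq. (2.8) by Gauss-Legendre quadrature, using the polynomial $\tilde{P}_{2}$ of Table 1 in Appendix A; the toy correlation function is a placeholder built to contain a pure $\ell=2$ signal, not a cosmological prediction:

```python
import numpy as np
from math import factorial, pi

def C_ell_from_zeta_plus(zeta_plus, ell, Ptilde, n_quad=200):
    # Invert Eq. (2.8): C_ell = (4 pi/nu_ell^2) * Int zeta_+(mu) Ptilde_ell(mu) dmu
    mu, w = np.polynomial.legendre.leggauss(n_quad)
    nu_ell_sq = factorial(ell + 2)/factorial(ell - 2)   # Eq. (2.10)
    return 4*pi*np.sum(w*zeta_plus(mu)*Ptilde(mu))/nu_ell_sq

# Consistency check: a pure ell = 2 toy signal built from Eqs. (A.82)-(A.84)
P2 = lambda mu: (mu + 1.0)**2          # Ptilde_2 from Table 1
C2_true = 1e-9                         # placeholder amplitude
zeta_toy = lambda mu: (2*2 + 1)/(128*pi)*C2_true*24*P2(mu)   # nu_2^2 = 24

print(C_ell_from_zeta_plus(zeta_toy, 2, P2))   # recovers ~1e-9
```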
## 3 Error estimation
In this Section, we estimate the expected error (or signal-to-noise ratio) on
the lensing angular power spectrum extracted via Eqs. (2.8) and (2.9),
starting from our estimator for the correlation functions Eqs. (2.15) and
(2.16).
As explained in the previous section, given two couples of galaxies, each
couple being separated by an angle $\varphi$ (with $\mu=\cos\varphi$), an
estimator for the correlation functions $\zeta_{\pm}$ is given by Eq. (2.15)
and Eq. (2.16). Of course, to obtain a good estimator for
$\zeta_{\pm}(\mu,z_{1},z_{2})$ we need to have many pairs of galaxies at a
given angular separation $\varphi$ (with $\mu=\cos\varphi$) inside the two
redshift bins. Furthermore, we need a good measurement of the scaled rotation
for these pairs and a good measurement of the angles $\alpha_{j}$ and
$\alpha_{j}^{\prime}$. The expressions for $F_{1}$ and $F_{2}$ (see Eqs.
(2.17) and (2.18)) also tell us that for
$\alpha_{1}+\alpha_{2}=\alpha^{\prime}_{1}+\alpha^{\prime}_{2}=\pi/4$ we
cannot determine $\hat{\zeta}_{+}$, while for
$\alpha_{1}-\alpha_{2}=\alpha^{\prime}_{1}-\alpha^{\prime}_{2}=\pi/4$ we
cannot determine $\hat{\zeta}_{-}$. It follows that to obtain a well-defined
estimator of the correlation functions $\zeta_{\pm}$ we need to select
properly the angles $\alpha_{j}$ and $\alpha_{j}^{\prime}$, excluding galaxy
pairs with
$\alpha_{1}+\alpha_{2}=\alpha^{\prime}_{1}+\alpha^{\prime}_{2}=\pi/4$ or with
$\alpha_{1}-\alpha_{2}=\alpha^{\prime}_{1}-\alpha^{\prime}_{2}=\pi/4$. Note,
however, that it does not matter whether the angles
$\alpha_{j},~{}\alpha_{j}^{\prime}$ are correlated; hence intrinsic alignment,
the major concern for traditional shear measurements, is not an issue here.
What is important, however, is to have a good measurement of these angles and
of the small and more difficult-to-measure angle $\delta\alpha$ between the
image axis and the polarisation.
An optimal estimator can be built as explained in Appendix B, by combining the
information that can be extracted from all possible pairs of couples with the
same angular separation and redshifts. It is optimal to choose the weighting
of each measurement inversely proportional to its error. To determine the
associated signal-to-noise ratio (SNR), we use the results presented in
Appendix B. Let $q$ represent a pair of couples of galaxies (hence a
quadruplet). For each $q$, we compute an estimator $\hat{\zeta}_{\pm,q}(\mu)$
with its relative error $\tau_{\pm,q}$. The total signal-to-noise ratio for
the measurement of $\zeta_{\pm}(\mu,z_{1},z_{2})$ is given by Eq. (B.6)
${\rm
SNR}_{\pm}(\mu,z_{1},z_{2})=\sqrt{\sum_{q}\frac{1}{\tau^{2}_{\pm,q}}}\,.$
(3.1)
This sum can be computed explicitly if one is given a catalogue of
measurements. Here, we take a more heuristic approach and assume that the
relative error is roughly the same for all quadruplets (or we just consider an
average value),
$\tau_{\pm,q}\simeq\tau_{0}\,.$ (3.2)
Then, the signal-to-noise is estimated as
${\rm
SNR}_{\pm}(\mu,z_{1},z_{2})\approx\frac{\sqrt{N_{\mathrm{e}}(\mu,z_{1},z_{2})}}{\tau_{0}}\,,$
(3.3)
where $N_{\mathrm{e}}(\mu,z_{1},z_{2})$ is the number of estimators one can
extract by choosing two couples of galaxies separated by an angle $\varphi$.
The number of quadruplets is computed in Appendix B and is given by
$N_{\mathrm{e}}(\varphi,z_{1},z_{2})=\frac{N_{\mathrm{p}}(\varphi,z_{1},z_{2})(N_{\mathrm{p}}(\varphi,z_{1},z_{2})-1)}{2}\simeq\frac{1}{2}\left(N_{\mathrm{g}}(z_{1})N_{\mathrm{g}}(z_{2})\frac{8f_{\rm
sky}\sin\varphi}{\delta\theta^{3}}\right)^{2}\,,$ (3.4)
where $N_{\mathrm{g}}(z)$ is the number of galaxies in a pixel at redshift $z$
and $\delta\theta$ is the aperture of the angular resolution. Note that the
formula for $N_{\mathrm{e}}$ given here holds for two different redshifts, and
has to be divided by $4$ if the considered redshifts are equal.
The final result for the signal-to-noise ratio given by Eq. (3.3) shows that
even if the error on a single estimator is typically rather large, so that
$\tau_{0}>1$, the quality of the best estimator can still be good if we have
sufficiently many galaxies at our disposal.
Note that here we assumed that all the individual estimators are
statistically independent. In reality this is not the case, as the galaxies in
the same pixel can be expected to be somehow correlated (either in their shape
or in their orientation). Hence intrinsic alignment enters here in the error
estimate, but not in the signal. Furthermore, in the number of estimators given
in (3.4) the same couples of pixels are used multiple times. We therefore
prefer to use a more pessimistic estimation for the number of independent
estimators setting
$N_{\mathrm{e}}(\varphi,z_{1},z_{2})\simeq N_{c}(\varphi)=\frac{8f_{\rm
sky}\sin\varphi}{\delta\theta^{3}}\,,$ (3.5)
where $N_{\mathrm{c}}$ is the number of couples of pixels separated by an
angle $\varphi$. Here we take just one galaxy from each pixel. More details
can be found in Appendix B.
Finally, to conclude this section, another method would be to simply
compute the estimated shear field ${\boldsymbol{\gamma}}(\boldsymbol{n},z)$ in
every pixel using Eq. (2.6). By doing this, the signal-to-noise ratio for
every pixel would be given by $\sqrt{N_{\mathrm{g}}}/\tau_{0}$, where
$N_{\mathrm{g}}$ is the galaxy number in this specific pixel and $\tau_{0}$ is
the mean relative error on one measurement. In this way one could construct a
shear map in the sky for each redshift bin. From this map one can then extract
the power spectrum with its associated error. As we know e.g. from CMB lensing
maps [37], even if the map itself is noise dominated, we can obtain a good
estimator for its power spectrum. Note that to extract the shear in one pixel,
one needs to consider only a single pair of galaxies, as the shear has two real
components $\gamma_{1}$ and $\gamma_{2}$. However, to compute the shear
correlation function, one needs to know the shear in two pixels. In other
words, even in this context, it is necessary to have two pairs of galaxies to
build an estimator for the correlation function. The selling point of the
method we present here is that one could, in principle, construct a map of the
cosmic shear simply by considering pairs of galaxies, without taking into
account a potential intrinsic correlation.
## 4 Results and discussion
In Fig. 2, we show an example of the results we can obtain. As discussed in
the previous section, we assume $N_{\mathrm{g}}=1$ to take into account that
the galaxies in the same pixel are not independent from each other, and use
Eq. (3.5). The parameters are taken from SKA2, see [38] for more details. We
choose a sky fraction and a pixel size of
$\displaystyle f_{\rm sky}$ $\displaystyle\approx 0.7\,,$ (4.1)
$\displaystyle\delta\theta$ $\displaystyle=5^{\prime}\approx 1.4\times
10^{-3}\,.$ (4.2)
Moreover, the typical shear signal $\gamma$ will be of order $10^{-3}$. For a
precise estimate of the error per galaxy pair, we would need precise values
for the errors on the various quantities, which become available once a mission
is planned. To get a pessimistic rough estimate, we have realised several
simulations using an error of $\pi/5$ on the angles and $1/2$ on $\varepsilon$.
This leads to a conservative relative error per galaxy pair of the order of
$\tau_{0}\approx 10^{3}$. This estimate is pessimistic, as in real experiments
one can hope to make this error smaller. On the other hand, the assumption
that the polarisation is perfectly aligned with the main axes of the galaxy is
optimistic. The idea is that these two assumptions might roughly compensate
each other, leading to the right order of magnitude for the resulting estimate.
Of course this treatment is simplistic and for a real observational campaign,
detailed simulations will be necessary. Inserting these numbers in (3.5) and
(3.3) we obtain a signal-to-noise ratio of order
${\rm SNR}\approx 45\sqrt{\sin\varphi}\,.$ (4.3)
This is the signal-to-noise ratio for our estimator
$\hat{\zeta}_{\pm}(\varphi,z_{1},z_{2})$ in two redshift bins around $z_{1}$
and $z_{2}$ and within one angular bin. One also needs the estimated value of
$\zeta_{\pm}(\varphi)$, which would be obtained from a catalogue with the
method we describe in this paper. As we do not yet have such a catalogue, we
compute the theoretical value of the correlation function. We compute the
power spectrum of the lensing potential, $C^{\phi}_{\ell}(z_{1},z_{2})$ for
$(z_{1},z_{2})=(1,1),(1,2),(2,2)$ with CLASS [39, 40] using the default
parameters from the Planck 2018 data [41]
($h=0.6781\,,h^{2}\Omega_{\mathrm{cdm}}=0.0552278\,,h^{2}\Omega_{\mathrm{b}}=0.0102921\,,\log(10^{9}A_{\mathrm{s}})=0.742199\,,n_{\mathrm{s}}=0.9660499$).
To compute the correlation functions, one would need to invert the relations
(2.8) and (2.9), i.e. evaluate the sums in Eq. (A.82) and Eq. (A.83). However,
the polynomials $\tilde{P}$ and $\tilde{Q}$ are highly oscillating as $\ell$
gets large and the sums converge very poorly. Instead, we use the flat-sky
approximation, see [42] and [43] for more details, to approximate the
correlation functions as
$\displaystyle\zeta_{+}(z_{1},z_{2},\varphi)$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi}\int_{0}^{\infty}\;\ell
J_{0}(\ell\varphi)\frac{1}{4}\left(\ell(\ell+1)\right)^{2}C_{\ell}(z_{1},z_{2})\,\mathrm{d}\ell\,$
(4.4) $\displaystyle\zeta_{-}(z_{1},z_{2},\varphi)$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi}\int_{0}^{\infty}\;\ell
J_{4}(\ell\varphi)\frac{1}{4}\left(\ell(\ell+1)\right)^{2}C_{\ell}(z_{1},z_{2})\,\mathrm{d}\ell\,.$
(4.5)
Truncating the integral at $\ell=20^{\prime}000$ seems reasonable, as the
relative error is less than $10^{-3}$ in this case, which is much smaller than
the inverse signal-to-noise ratio.
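A minimal quadrature sketch of Eqs. (4.4)-(4.5) is given below; the input spectrum here is a smooth placeholder, not the CLASS $C_{\ell}$ used for Fig. 2:

```python
import numpy as np
from scipy.special import jv

def zeta_flat_sky(phi, C_ell, ell_max=20000):
    # Eqs. (4.4)-(4.5): the integral over ell is approximated by a sum over
    # integer ell, truncated at ell_max (here 20'000 as in the text).
    ell = np.arange(2, ell_max + 1, dtype=float)
    kernel = ell*0.25*(ell*(ell + 1.0))**2*C_ell(ell)/(2.0*np.pi)
    return np.sum(kernel*jv(0, ell*phi)), np.sum(kernel*jv(4, ell*phi))

# Placeholder lensing-potential spectrum (a steep power law, for illustration)
C_toy = lambda ell: 1e-8*ell**-7.0
zeta_p, zeta_m = zeta_flat_sky(np.radians(0.5), C_toy)
```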
In Fig. 2 we show the results for the correlation functions
$\zeta_{\pm}(\varphi)$ computed in the flat-sky approximation. The shaded
region around each curve represents the uncertainty computed with ${\rm
SNR}=40\sqrt{\sin\varphi}$. Different panels correspond to different redshift
bins. The result is not very sensitive to the thickness of the redshift bins.
In a true survey this is an advantage, as it allows us to enhance the number
of galaxies per bin.
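The order of magnitude quoted in Eq. (4.3) can be reproduced directly from Eqs. (3.3) and (3.5) with the numbers above; a minimal sketch:

```python
import numpy as np

f_sky, dtheta, tau0 = 0.7, 1.4e-3, 1e3   # Eqs. (4.1)-(4.2) and tau_0 ~ 10^3

def snr(phi):
    # Eq. (3.3) with N_e ~ N_c of Eq. (3.5), i.e. one galaxy per pixel
    N_c = 8.0*f_sky*np.sin(phi)/dtheta**3
    return np.sqrt(N_c)/tau0

print(snr(np.pi/2))   # ~45, consistent with Eq. (4.3)
```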
Figure 2: Correlation functions $\zeta_{\pm}(\varphi)$ computed with the
flat-sky approximation. The $C_{\ell}$ are taken from CLASS, and the sum is
truncated at $\ell=20^{\prime}000$. The error bars were computed with
$\mathrm{SNR}=40\sqrt{\sin\varphi}$. We consider redshift bins
$(z_{1},z_{2})=(1,1),(1,2)$ and $(2,2)$ (from top left to bottom panel).
## 5 Conclusions
In this paper we proposed a new method to extract the shear correlation
function, by measuring the correlation function of the angle between the image
major axis and the polarisation direction of radio galaxies. In particular, we
built an estimator for the shear correlation function given two couples of
galaxies separated by an angle $\varphi$, and estimated the error one gets by
combining all possible pairs separated by this angle.
The advantage of this method with respect to traditional shear measurements is
that we do not rely on the assumption that galaxy eccentricities are
uncorrelated; hence we do not have to deal with a parametrisation of intrinsic
alignment and its uncertainties, which are one of the major sources of error in
standard shear measurements in present and planned surveys [2, 3, 6, 7, 8,
10]. Even though our signal does not depend on intrinsic alignment, we have
seen that the error does, since intrinsic alignment correlates the measurements
from different galaxies, which therefore cannot be considered as independent
estimators. In the presented estimation of the signal-to-noise we have taken
this into account in a very conservative way, assuming that we can make only 1
independent measurement per pixel.
We find that even if the signal-to-noise ratio for a single measurement (i.e.
for a given galaxy quadruplet) is expected to be rather small, the fact that
all galaxies in a given pixel are subject to the same shear can be used to
overcome the noise. As a case study, we considered the specifications of SKA2:
the number of independent estimators for a given angular separation $\varphi$
and two redshifts $z_{1}$, $z_{2}$ is expected to scale as $\sim
10^{9}\sin\varphi$. As a consequence, the noise on a single measurement can
exceed the signal by a factor $10^{3}$ and still yield a signal-to-noise of
order 40, which is largely sufficient to detect the signal. Therefore, even if
the maps of $\delta\alpha$ measurements for each redshift bin will be largely
noise dominated, we will be able to obtain a good estimate of the shear
correlation function when combining all the measurements together.
We stress that the goal of the present paper was to present a new method to
reconstruct the shear correlation functions with a new observable, and to
build an estimator for it. Of course, the limiting factor of our forecasts is
that we had to assume some number for the precision with which the various
angles $\delta\alpha$ and $\alpha$ can be measured. However, as explained
above, our choice of errors is quite conservative, and the crucial factor
setting the signal-to-noise level of our estimator is the high statistics. For
this reason, we do not expect a more refined analysis to drastically change
the conclusions of our study.
Finally we point out that, while in this work we focused on the reconstruction
of the shear correlation function, our new observable can be used also to get
a shear sky map. This is another advantage of our method with respect to
standard shear reconstruction methods, which look at galaxy shapes only (from
the study of galaxy ellipticity it is not possible to get a shear map, but
only to extract correlation functions). A natural extension of our work is to
apply this method to simulated (or real) galaxy lensing and polarisation data.
This would provide us with a more realistic estimate of the uncertainties, and
allow us to compare this shear reconstruction method with traditional
LSST/Euclid techniques to measure the shear correlation function.
## Acknowledgements
We thank Richard Battye, Michael Brown, Charles Dalang, Ian Harrison, Alan
Heavens, Azadeh Moradinezhad Dizgah, Serge Parnovskii, Cyril Pitrou and Isaac
Tutusaus for useful discussions and comments. We are very grateful to
Francesca Lepori for her valuable help with CLASS. This work is supported by
the Swiss National Science Foundation.
## Appendix A Special functions
### A.1 Spin Weighted Spherical Harmonics
This appendix follows Refs. [44, 45]. Let $(\theta,\varphi)$ be the usual
spherical coordinates on the sphere. We define the Spin Weighted Spherical
Harmonics, ${}_{s}Y_{\ell,m}$, where $s$ represents the weight. The $s=0$
Spherical Harmonics are the usual Spherical Harmonics functions
$\tensor[_{0}]{Y}{}_{\ell,m}\equiv\tensor{Y}{}_{\ell,m}$, with the convention
$Y_{\ell,-m}=(-1)^{m}Y_{\ell,m}^{\star}\,.$ (A.1)
For a generic integer $s$, we first define the spin raising and spin lowering
operators, $\not{\partial}$ and $\not{\partial}^{\star}$, acting on a function
$\tensor[_{s}]{Y}{}_{\ell,m}$ with spin weight $s$ as
$\displaystyle\not{\partial}_{s}Y_{\ell,m}$
$\displaystyle=\left(s\cot\theta-\partial_{\theta}-\frac{\mathrm{i}}{\sin\theta}\partial_{\varphi}\right)\,_{s}Y_{\ell,m}\,,$
(A.2)
and
$\displaystyle{\not{\partial}^{\star}}_{s}Y_{\ell,m}$
$\displaystyle={\left(-s\cot\theta-\partial_{\theta}+\frac{\mathrm{i}}{\sin\theta}\partial_{\varphi}\right)}\,_{s}Y_{\ell,m}\,.$
(A.3)
The Spin Weighted Spherical Harmonics for generic $s\in\mathbb{Z}$ are
obtained recursively with the spin raising and spin lowering operators given
by Eq. (A.2) and Eq. (A.3) via
$\displaystyle\not{\partial}_{s}Y_{\ell,m}$
$\displaystyle=\sqrt{(\ell-s)(\ell+s+1)}\;_{s+1}Y_{\ell,m}\,,$ (A.4)
$\displaystyle{\not{\partial}^{\star}}_{s}Y_{\ell,m}$
$\displaystyle=-\sqrt{(\ell+s)(\ell-s+1)}\;_{s-1}Y_{\ell,m}\,,$ (A.5)
together with the starting point
${}_{0}\tensor{Y}{}_{\ell,m}\equiv\tensor{Y}{}_{\ell,m}$. Hence, the slashed
derivatives can be interpreted as spin raising/lowering operators. In
particular, for $s=\pm 2$, these definitions yield
$\displaystyle\not{\partial}^{2}{Y}{{}_{\ell,m}}$
$\displaystyle=\nu_{\ell}\;\;{{}_{2}\tensor{Y}{{}_{\ell,m}}}\,,$ (A.6)
$\displaystyle{\not{\partial}^{\star}}^{2}{Y}{{}_{\ell,m}}$
$\displaystyle=\nu_{\ell}\;\;{{}_{-2}\tensor{Y}{{}_{\ell,m}}}\,,$ (A.7)
$\displaystyle\mbox{where}~{}~{}\nu_{\ell}$
$\displaystyle=\sqrt{\frac{(\ell+2)!}{(\ell-2)!}}\,.$ (A.8)
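These relations can be checked symbolically. The sketch below applies the operator of Eq. (A.2) twice to $Y_{2,2}$ from Table 3 and compares with $\nu_{2}\,{}_{2}Y_{2,2}$, evaluating the difference at an arbitrary point:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)

def eth(f, s):
    # Spin raising operator of Eq. (A.2), acting on a spin-s quantity f
    return s*sp.cot(theta)*f - sp.diff(f, theta) - sp.I/sp.sin(theta)*sp.diff(f, phi)

# Y_{2,2} and 2Y_{2,2} from Table 3; nu_2 = sqrt(4!/0!) from Eq. (A.8)
Y22 = sp.Rational(1, 4)*sp.sqrt(sp.Rational(15, 2)/sp.pi)*sp.exp(2*sp.I*phi)*sp.sin(theta)**2
sY22 = sp.Rational(1, 8)*sp.sqrt(5/sp.pi)*sp.exp(2*sp.I*phi)*(1 - sp.cos(theta))**2
nu2 = sp.sqrt(sp.factorial(4))

lhs = eth(eth(Y22, 0), 1)   # raise the spin twice: 0 -> 1 -> 2, Eq. (A.6)
diff = sp.N((lhs - nu2*sY22).subs({theta: 0.7, phi: 0.3}))
print(abs(diff) < 1e-12)    # True
```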
The Spin Weighted Spherical Harmonics satisfy the orthogonality condition
(with $\mathrm{d}\Omega=\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi$)
$\int\;_{s}Y_{\ell_{1},m_{1}}(\theta,\varphi)\,_{s}Y^{\star}_{\ell_{2},m_{2}}(\theta,\varphi)\,\mathrm{d}\Omega=\delta_{\ell_{1},\ell_{2}}\delta_{m_{1},m_{2}}\,,$
(A.9)
and the conjugation relation
$_{-s}Y_{\ell,-m}=(-1)^{s+m}\,_{s}Y^{\star}_{\ell,m}(\theta,\varphi)\,.$
(A.10)
The Spin Weighted Spherical Harmonics also satisfy the following addition
theorem
$\sqrt{\frac{4\pi}{2\ell+1}}\sum_{m}\;{{}_{s_{1}}\tensor{Y}{{}_{\ell,m}}}(\theta_{1},\varphi_{1})\;{{}_{-s_{2}}\tensor{Y}{{}^{\star}_{\ell,m}}}(\theta_{2},\varphi_{2})=\;{{}_{s_{1}}\tensor{Y}{{}_{\ell,s_{2}}}}(\beta,\alpha)\mathrm{e}^{-\mathrm{i}s_{1}\gamma}\,,$
(A.11)
where the angles $(\alpha,\beta,\gamma)$ are defined through the implicit
relation
$R_{\mathrm{E}}(\alpha,\beta,\gamma)=R_{\mathrm{E}}(\varphi_{1},\theta_{1},0)^{-1}R_{\mathrm{E}}(\varphi_{2},\theta_{2},0)\,.$
(A.12)
Here $R_{\mathrm{E}}(\alpha,\beta,\gamma)$ is the rotation matrix with the
Euler angles $\alpha$, $\beta$ and $\gamma$. More precisely,
$R_{\mathrm{E}}(\alpha,\beta,\gamma)=\begin{pmatrix}\cos\alpha\cos\beta\cos\gamma-\sin\alpha\sin\gamma&-\cos\gamma\sin\alpha-\cos\alpha\cos\beta\sin\gamma&\cos\alpha\sin\beta\\ \cos\beta\cos\gamma\sin\alpha+\cos\alpha\sin\gamma&\cos\alpha\cos\gamma-\cos\beta\sin\alpha\sin\gamma&\sin\alpha\sin\beta\\ -\cos\gamma\sin\beta&\sin\beta\sin\gamma&\cos\beta\end{pmatrix}\,.$
(A.13)
Explicit expressions of the Spin Weighted Spherical Harmonics for $s=0,1,2$
and $\ell\leq 2$ are given in Tables 2 and 3. Note that the remaining cases
can be deduced from the conjugation relation given by Eq. (A.10).
We also introduce the auxiliary polynomials $\tilde{P}_{\ell}(\mu)$ and
$\tilde{Q}_{\ell}(\mu)$ which will be useful later. For $\mu=\cos\theta$ they
are defined as
$\frac{2\ell+1}{16\pi}\tilde{Q}_{\ell}(\mu)\equiv-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{+2}Y_{\ell,+2}(\theta,\pi/2)=-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{-2}Y_{\ell,-2}(\theta,\pi/2)\,,$ (A.14)
$\frac{2\ell+1}{16\pi}\tilde{P}_{\ell}(\mu)\equiv-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{+2}Y_{\ell,-2}(\theta,\pi/2)=-\sqrt{\frac{2\ell+1}{4\pi}}\;{}_{-2}Y_{\ell,+2}(\theta,\pi/2)\,.$ (A.15)
From the orthonormality condition Eq. (A.9), it is easy to see that
$\int_{-1}^{+1}\;\tilde{P}_{\ell_{1}}(\mu)\tilde{P}_{\ell_{2}}(\mu)\,\mathrm{d}\mu=\int_{-1}^{+1}\;\tilde{Q}_{\ell_{1}}(\mu)\tilde{Q}_{\ell_{2}}(\mu)\,\mathrm{d}\mu=\frac{32}{2\ell_{1}+1}\delta_{\ell_{1},\ell_{2}}\,.$
(A.16)
The explicit expressions for these polynomials for $\ell=2,\dots,5$ are given
in Table 1.
| $\ell$ | $\tilde{P}_{\ell}(\mu)$ | $\tilde{Q}_{\ell}(\mu)$ |
|---|---|---|
| $2$ | $(\mu+1)^{2}$ | $(\mu-1)^{2}$ |
| $3$ | $(\mu+1)^{2}(3\mu-2)$ | $(\mu-1)^{2}(3\mu+2)$ |
| $4$ | $(\mu+1)^{2}(7\mu^{2}-7\mu+1)$ | $(\mu-1)^{2}(7\mu^{2}+7\mu+1)$ |
| $5$ | $(\mu+1)^{2}(15\mu^{3}-18\mu^{2}+3\mu+1)$ | $(\mu-1)^{2}(15\mu^{3}+18\mu^{2}+3\mu-1)$ |

Table 1: The polynomials $\tilde{P}_{\ell}(\mu)$ and $\tilde{Q}_{\ell}(\mu)$.

| $m$ | $Y_{1,m}(\theta,\varphi)$ | ${}_{1}Y_{1,m}(\theta,\varphi)$ |
|---|---|---|
| $-1$ | $\frac{1}{2}\sqrt{\frac{3}{2\pi}}\mathrm{e}^{-\mathrm{i}\varphi}\sin\theta$ | $-\frac{1}{4}\sqrt{\frac{3}{\pi}}\mathrm{e}^{-\mathrm{i}\varphi}(1+\cos\theta)$ |
| $0$ | $\frac{1}{2}\sqrt{\frac{3}{\pi}}\cos\theta$ | $\frac{1}{2}\sqrt{\frac{3}{2\pi}}\sin\theta$ |
| $1$ | $-\frac{1}{2}\sqrt{\frac{3}{2\pi}}\mathrm{e}^{\mathrm{i}\varphi}\sin\theta$ | $\frac{1}{4}\sqrt{\frac{3}{\pi}}\mathrm{e}^{\mathrm{i}\varphi}(-1+\cos\theta)$ |

Table 2: Spherical Harmonics of Spin Weight $s=0,1$ and $\ell=1$.

| $m$ | $Y_{2,m}(\theta,\varphi)$ | ${}_{1}Y_{2,m}(\theta,\varphi)$ | ${}_{2}Y_{2,m}(\theta,\varphi)$ |
|---|---|---|---|
| $-2$ | $\frac{1}{4}\sqrt{\frac{15}{2\pi}}\mathrm{e}^{-2\mathrm{i}\varphi}\sin^{2}\theta$ | $-\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{-2\mathrm{i}\varphi}(1+\cos\theta)\sin\theta$ | $\frac{1}{8}\sqrt{\frac{5}{\pi}}\mathrm{e}^{-2\mathrm{i}\varphi}(1+\cos\theta)^{2}$ |
| $-1$ | $\frac{1}{2}\sqrt{\frac{15}{2\pi}}\mathrm{e}^{-\mathrm{i}\varphi}\sin\theta\cos\theta$ | $-\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{-\mathrm{i}\varphi}(2\cos^{2}\theta+\cos\theta-1)$ | $-\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{-\mathrm{i}\varphi}\sin\theta(1+\cos\theta)$ |
| $0$ | $\frac{1}{8}\sqrt{\frac{5}{\pi}}(1+3\cos(2\theta))$ | $\frac{1}{2}\sqrt{\frac{15}{2\pi}}\sin\theta\cos\theta$ | $\frac{1}{4}\sqrt{\frac{15}{2\pi}}\sin^{2}\theta$ |
| $1$ | $-\frac{1}{2}\sqrt{\frac{15}{2\pi}}\mathrm{e}^{\mathrm{i}\varphi}\sin\theta\cos\theta$ | $\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{\mathrm{i}\varphi}(2\cos^{2}\theta-\cos\theta-1)$ | $\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{\mathrm{i}\varphi}\sin\theta(-1+\cos\theta)$ |
| $2$ | $\frac{1}{4}\sqrt{\frac{15}{2\pi}}\mathrm{e}^{2\mathrm{i}\varphi}\sin^{2}\theta$ | $\frac{1}{4}\sqrt{\frac{5}{\pi}}\mathrm{e}^{2\mathrm{i}\varphi}\sin\theta(1-\cos\theta)$ | $\frac{1}{8}\sqrt{\frac{5}{\pi}}\mathrm{e}^{2\mathrm{i}\varphi}(1-\cos\theta)^{2}$ |

Table 3: Spherical Harmonics of Spin Weight $s=0,1,2$ and $\ell=2$.
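The normalisation of Table 1 can be verified numerically against Eq. (A.16); a short sketch (Gauss-Legendre quadrature is exact for these polynomials):

```python
import numpy as np

# Ptilde polynomials of Table 1 (the Qtilde column works analogously)
Pt = {2: lambda m: (m + 1)**2,
      3: lambda m: (m + 1)**2*(3*m - 2),
      4: lambda m: (m + 1)**2*(7*m**2 - 7*m + 1),
      5: lambda m: (m + 1)**2*(15*m**3 - 18*m**2 + 3*m + 1)}

mu, w = np.polynomial.legendre.leggauss(50)
for l1, P1 in Pt.items():
    for l2, P2 in Pt.items():
        integral = np.sum(w*P1(mu)*P2(mu))
        expected = 32.0/(2*l1 + 1) if l1 == l2 else 0.0
        assert np.isclose(integral, expected), (l1, l2)
print("Eq. (A.16) verified for ell = 2,...,5")
```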
### A.2 Expression of the shear
In this Appendix, we present useful relations involving spin Spherical
Harmonics. More details can be found in [44, 45]. This second reference is a
very useful PhD thesis covering the topic in depth. The interested reader is
referred to it for further details.
Let $(\boldsymbol{e}_{1},\boldsymbol{e}_{2})$ be an orthonormal basis on the
sphere associated with the usual spherical coordinates $(\theta,\varphi)$. We
define the $(+,-)$ basis
$\boldsymbol{e}_{\pm}=\frac{1}{\sqrt{2}}\left(\boldsymbol{e}_{1}\mp\mathrm{i}\boldsymbol{e}_{2}\right)\,.$
(A.53)
The spin raising and lowering operators are simply related to the covariant
derivatives in directions $\boldsymbol{e}_{\pm}$,
$\boldsymbol{\nabla}_{\boldsymbol{e}_{-}}=-\frac{1}{\sqrt{2}}\not{\partial}\,,\quad\boldsymbol{\nabla}_{\boldsymbol{e}_{+}}=-\frac{1}{\sqrt{2}}{\not{\partial}^{\star}}\,.$
(A.54)
With these identities, the relevant operators to compute the shear from the
lensing potential are
$\boldsymbol{\nabla}_{1}^{2}-\boldsymbol{\nabla}_{2}^{2}=\frac{1}{2}(\not{\partial}^{2}+{{\not{\partial}^{\star}}}^{2})\,,$
(A.55)
and
$\boldsymbol{\nabla}_{1}\boldsymbol{\nabla}_{2}=-\frac{\mathrm{i}}{4}(\not{\partial}^{2}-{{\not{\partial}^{\star}}}^{2})\,,$
(A.56)
where it is assumed that
$\not{\partial}{\not{\partial}^{\star}}={\not{\partial}^{\star}}\not{\partial}$,
which holds in this context as the operators act on the scalar lensing
potential $\phi$. The
definition of the shear in the $(\boldsymbol{e}_{1},\boldsymbol{e}_{2})$ basis
is
$\displaystyle\gamma_{1}$
$\displaystyle=-\frac{1}{2}(\boldsymbol{\nabla}_{1}^{2}-\boldsymbol{\nabla}_{2}^{2})\phi\,,$
(A.57) $\displaystyle\gamma_{2}$
$\displaystyle=-\boldsymbol{\nabla}_{1}\boldsymbol{\nabla}_{2}\phi\,,$ (A.58)
where $\phi$ is the lensing potential. This shows that the shear is a spin $2$
object. Using $\gamma^{\pm}=\gamma_{1}\pm\mathrm{i}\gamma_{2}$ and the
relations given above, the shear in the $(+,-)$ basis is given by the slashed
derivatives of the lensing potential as
$\displaystyle\gamma^{+}$
$\displaystyle=-\frac{1}{2}{\not{\partial}}^{2}\phi\,,$ (A.59)
$\displaystyle\gamma^{-}$
$\displaystyle=-\frac{1}{2}{{\not{\partial}^{\star}}}^{2}\phi\,.$ (A.60)
Hence $\gamma^{+}$ has helicity $+2$ while $\gamma^{-}$ has helicity $-2$.
Using the standard decomposition for the lensing potential
$\phi(\boldsymbol{n},z)=\sum_{\ell,m}\phi_{\ell,m}(z)Y_{\ell,m}(\boldsymbol{n})\,,$
(A.61)
and the squared raising/lowering operators given in Eq. (A.6), one obtains the
decomposition of the shear in the $(+,-)$ basis as
$\displaystyle\gamma^{+}(\boldsymbol{n},z)$
$\displaystyle=-\frac{1}{2}\sum_{\ell=2,m}\phi_{\ell,m}(z)\nu_{\ell}\;{}_{+2}{Y}{{}_{\ell,m}}(\boldsymbol{n})\,,$
(A.62) $\displaystyle\gamma^{-}(\boldsymbol{n},z)$
$\displaystyle=-\frac{1}{2}\sum_{\ell=2,m}\phi_{\ell,m}(z)\nu_{\ell}\;{}_{-2}{Y}{{}_{\ell,m}}(\boldsymbol{n})\,.$
(A.63)
The complex numbers $\phi_{\ell,m}(z)$ are random variables whose expectation
values define the angular power spectrum of the lensing potential,
$\langle\phi_{\ell_{1},m_{1}}(z_{1})\phi^{\star}_{\ell_{2},m_{2}}(z_{2})\rangle=C_{\ell}(z_{1},z_{2})\delta_{\ell_{1},\ell_{2}}\delta_{m_{1},m_{2}}\,.$
(A.64)
As the lensing potential $\phi(\boldsymbol{n},z)$ is real, they satisfy
$\phi_{\ell,m}^{\star}=(-1)^{m}\phi_{\ell,-m}\,.$ (A.65)
### A.3 Correlation functions on the equator
In this Section, we compute the correlation functions of the shear in the
$(+,-)$ and $(\boldsymbol{e}_{1},\boldsymbol{e}_{2})$ basis. Using the
decomposition given by Eq. (A.62) and Eq. (A.63) we find for the correlation
function of $\gamma^{+}$ and $\gamma^{-}$
$\displaystyle\langle\gamma^{+}(\boldsymbol{n}_{1},z_{1})\gamma^{-}(\boldsymbol{n}_{2},z_{2})\rangle$
$\displaystyle=\frac{1}{4}\sum\langle\phi_{\ell_{1},m_{1}}(z_{1})\phi_{\ell_{2},m_{2}}(z_{2})\rangle\nu_{\ell_{1}}\nu_{\ell_{2}}\;{}_{+2}{Y}{{}_{\ell_{1},m_{1}}}(\boldsymbol{n}_{1})\;_{-2}{Y}{{}_{\ell_{2},m_{2}}}(\boldsymbol{n}_{2})$
(A.66) $\displaystyle=\frac{1}{4}\sum
C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}(-1)^{m}\;_{+2}{Y}{{}_{\ell,m}}(\boldsymbol{n}_{1})\;_{-2}{Y}{{}_{\ell,-m}}(\boldsymbol{n}_{2})$
(A.67) $\displaystyle=\frac{1}{4}\sum
C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}\;{}_{+2}{Y}{{}_{\ell,m}}(\boldsymbol{n}_{1})\;_{+2}{Y}{{}^{\star}_{\ell,m}}(\boldsymbol{n}_{2})\,,$
(A.68)
where we used the conjugation properties Eq. (A.65) and Eq. (A.10). Using the
addition theorem Eq. (A.11) (with $s_{1}=+2$ and $s_{2}=-2$), the sum over $m$
reads
$\displaystyle\sum_{m}{{}_{+2}{Y}{{}_{\ell,m}}(\boldsymbol{n}_{1})}\;_{+2}Y^{\star}_{\ell,m}(\boldsymbol{n}_{2})$
$\displaystyle=-\sqrt{\frac{2\ell+1}{4\pi}}\;_{+2}\tensor{Y}{{}_{\ell,-2}}(\varphi,\pi/2)$
(A.69) $\displaystyle\equiv\frac{2\ell+1}{16\pi}\tilde{P}_{\ell}(\mu)\,.$
(A.70)
Finally, the correlation function is given by
$\langle\gamma^{+}(\boldsymbol{n}_{1},z_{1})\gamma^{-}(\boldsymbol{n}_{2},z_{2})\rangle=\sum_{\ell}\frac{2\ell+1}{64\pi}\nu_{\ell}^{2}C_{\ell}(z_{1},z_{2})\tilde{P}_{\ell}(\mu)\,.$
(A.71)
The two other correlations can be obtained following exactly the same steps
(the values of the Euler angles are the same), yielding
$\displaystyle\langle\gamma^{+}(\boldsymbol{n}_{1},z_{1})\gamma^{+}(\boldsymbol{n}_{2},z_{2})\rangle=\langle\gamma^{-}(\boldsymbol{n}_{1},z_{1})\gamma^{-}(\boldsymbol{n}_{2},z_{2})\rangle=\sum_{\ell}\frac{2\ell+1}{64\pi}\nu_{\ell}^{2}C_{\ell}(z_{1},z_{2})\tilde{Q}_{\ell}(\mu)\,.$
(A.72)
Inverting the relations $\gamma^{\pm}=\gamma_{1}\pm\mathrm{i}\gamma_{2}$
yields
$\displaystyle\gamma_{1}$
$\displaystyle=\frac{1}{2}(\gamma^{+}+\gamma^{-})\,,$ (A.73)
$\displaystyle\gamma_{2}$
$\displaystyle=\frac{\mathrm{i}}{2}(\gamma^{-}-\gamma^{+})\,.$ (A.74)
Using the correlations given above by Eq. (A.71) and Eq. (A.72) yields
$\displaystyle\langle\gamma_{1}(\boldsymbol{n}_{1},z_{1})\gamma_{1}(\boldsymbol{n}_{2},z_{2})\rangle$
$\displaystyle=\sum_{\ell}\frac{2\ell+1}{128\pi}\nu_{\ell}^{2}C_{\ell}(z_{1},z_{2})(\tilde{P}_{\ell}(\mu)+\tilde{Q}_{\ell}(\mu))\,,$
(A.75)
$\displaystyle\langle\gamma_{2}(\boldsymbol{n}_{1},z_{1})\gamma_{2}(\boldsymbol{n}_{2},z_{2})\rangle$
$\displaystyle=\sum_{\ell}\frac{2\ell+1}{128\pi}\nu_{\ell}^{2}C_{\ell}(z_{1},z_{2})(\tilde{P}_{\ell}(\mu)-\tilde{Q}_{\ell}(\mu))\,,$
(A.76)
$\displaystyle\langle\gamma_{1}(\boldsymbol{n}_{1},z_{1})\gamma_{2}(\boldsymbol{n}_{2},z_{2})\rangle$
$\displaystyle=0\,,$ (A.77)
where the points $\boldsymbol{n}_{1}$ and $\boldsymbol{n}_{2}$ lie on the
equator and subtend an angle $\varphi$ with $\mu=\cos\varphi$.
### A.4 Invariant correlation functions
Here, we compute the shear and its correlation functions in a coordinate
invariant way, see for example [46]. Let $(\theta,\varphi)$ be the spherical
coordinates and $(\boldsymbol{e}_{1},\boldsymbol{e}_{2})$ the associated
orthonormal frame. With such a basis, the shear is a 2-tensor of the form
$\boldsymbol{\Gamma}=\begin{pmatrix}-\gamma_{1}&-\gamma_{2}\\ -\gamma_{2}&\gamma_{1}\end{pmatrix}\,.$ (A.78)
For a generic tangent vector $\boldsymbol{e}=(\cos\alpha,\sin\alpha)$ in the
$(\boldsymbol{e}_{1},\boldsymbol{e}_{2})$ basis, the shear in direction
$\boldsymbol{e}$ is defined as
$\gamma_{\alpha}\equiv\gamma_{\boldsymbol{e}}\equiv
e^{a}e^{b}\Gamma_{ab}=-\gamma_{1}\cos(2\alpha)-\gamma_{2}\sin(2\alpha)\,.$
(A.79)
It is clear from the definition that $\gamma_{1,2}$ and the angle $\alpha$ do
depend on the coordinate system. However, for a fixed (physically defined)
vector $\boldsymbol{e}$, the shear in direction $\boldsymbol{e}$,
$\gamma_{\boldsymbol{e}}$ does not depend on the coordinates, which makes this
quantity a good candidate to study correlation functions. For two galaxies
located at $(\boldsymbol{n}_{1},z_{1})$ and $(\boldsymbol{n}_{2},z_{2})$, we
can define the geodesic joining them to be the equator of our system of
coordinates. As this process does not depend on the coordinates and is well-
defined for every pair of galaxies, the result that follows is also coordinate
independent. From this construction, we define the two invariant correlation
functions
$\displaystyle\zeta_{\mathrm{p}}(\mu,z_{1},z_{2})$
$\displaystyle=\langle\gamma_{0}(\boldsymbol{n}_{1},z_{1})\gamma_{\pi}(\boldsymbol{n}_{2},z_{2})\rangle=\langle\gamma_{1}(\boldsymbol{n}_{1},z_{1})\gamma_{1}(\boldsymbol{n}_{2},z_{2})\rangle\,,$
(A.80) $\displaystyle\zeta_{\mathrm{c}}(\mu,z_{1},z_{2})$
$\displaystyle=\langle\gamma_{-\pi/4}(\boldsymbol{n}_{1},z_{1})\gamma_{3\pi/4}(\boldsymbol{n}_{2},z_{2})\rangle=\langle\gamma_{2}(\boldsymbol{n}_{1},z_{1})\gamma_{2}(\boldsymbol{n}_{2},z_{2})\rangle\,,$
(A.81)
with $\mu=\boldsymbol{n}_{1}\cdot\boldsymbol{n}_{2}=\cos\varphi$. The last
equality is valid in the preferred system of coordinates, where both galaxies
lie on the equator. An illustration of this definition is shown in Fig. 3.
Using the results of Sec. A.3 yields
$\displaystyle\zeta_{\mathrm{p}}(\mu,z,z^{\prime})$
$\displaystyle=\sum_{\ell}\frac{2\ell+1}{128\pi}C_{\ell}(z,z^{\prime})\nu_{\ell}^{2}(\tilde{P}_{\ell}(\mu)+\tilde{Q}_{\ell}(\mu))\,,$
(A.82) $\displaystyle\zeta_{\mathrm{c}}(\mu,z,z^{\prime})$
$\displaystyle=\sum_{\ell}\frac{2\ell+1}{128\pi}C_{\ell}(z,z^{\prime})\nu_{\ell}^{2}(\tilde{P}_{\ell}(\mu)-\tilde{Q}_{\ell}(\mu))\,.$
(A.83)
Note that the sums start at $\ell=2$. Defining,
$\displaystyle\zeta_{\pm}$
$\displaystyle=\frac{1}{2}(\zeta_{\mathrm{p}}\pm\zeta_{\mathrm{c}})\,,$ (A.84)
and using the orthogonality properties of the polynomials $\tilde{P}_{\ell}$
and $\tilde{Q}_{\ell}$ given in Eq. (A.16), we have
$\displaystyle\int_{-1}^{+1}\zeta_{+}(\mu,z_{1},z_{2})\tilde{P}_{\ell}(\mu)\,\mathrm{d}\mu$
$\displaystyle=\frac{1}{4\pi}C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}\,,$ (A.85)
$\displaystyle\int_{-1}^{+1}\zeta_{-}(\mu,z_{1},z_{2})\tilde{Q}_{\ell}(\mu)\,\mathrm{d}\mu$
$\displaystyle=\frac{1}{4\pi}C_{\ell}(z_{1},z_{2})\nu_{\ell}^{2}\,.$ (A.86)
Note that the relations Eq. (A.85) and Eq. (A.86) relate only coordinate
independent observables. The correlation functions $\zeta_{\pm}$ can be
estimated by observations as explained in the main text. Via (A.85) we can
then use them to estimate the lensing power spectrum $C_{\ell}$.
Figure 3: The correlation functions $\zeta_{\mathrm{p}}$ and
$\zeta_{\mathrm{c}}$. Both galaxies are located on the ’Equator’ - represented
by the dotted lines - which defines a preferred system of coordinates. The
angles between the direction of the shear and the connecting line are
indicated. For a fixed separation angle, these correlations are intrinsically
given and do not depend on the coordinate system.
## Appendix B Error estimation
### B.1 Best estimator
Suppose we are given measurements $X_{j}$ of an observable $X$, each of them
with error $\delta X_{j}=\tau_{j}X_{j}$ ($\tau_{j}$ is the relative error). We
want to construct an estimator for $X$. We define
$\hat{X}=\sum w_{j}X_{j}\,,\qquad\sum w_{j}=1\,.$ (B.1)
In order to obtain the best possible estimator for $X$, we want to choose the
weights $w_{j}$ which yield the highest signal-to-noise ratio (SNR). We claim
$w_{j}=\frac{1}{Z}\frac{1}{X_{j}\tau_{j}^{2}}\qquad\mbox{where}\quad
Z=\sum\frac{1}{X_{j}\tau_{j}^{2}}\,.$ (B.2)
To see that this is the best choice, we note that the error on the estimator
is given by
$N^{2}=\sum w_{j}^{2}\delta X_{j}^{2}=\sum w_{j}^{2}\tau_{j}^{2}X_{j}^{2}\,.$
(B.3)
The square of the SNR which we want to maximise is the quantity
$A=\frac{\hat{X}^{2}}{N^{2}}.$ (B.4)
Using the Ansatz (B.2), one can verify directly that this choice of the
weights gives
$\frac{\partial A}{\partial w_{i}}=0\,,$ (B.5)
and that it is the only zero of the gradient of $A$ (with positive weights which
sum up to 1) and that it is a maximum. Hence the $w_{i}$ given above are the best
choice if one wants to maximise the SNR of an observable. The constant $Z$ is
determined by the requirement that
$\sum w_{j}=1\,.$
Computing $A$ explicitly one finds the well known result
${\rm SNR}=\sqrt{A}=\sqrt{\sum\frac{1}{\tau_{j}^{2}}}\,.$ (B.6)
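A small numerical illustration of this weighting scheme (with made-up measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
X_true = 1.0
tau = rng.uniform(0.05, 0.2, size=1000)               # relative errors
X = X_true*(1.0 + tau*rng.standard_normal(tau.size))  # noisy measurements

w = 1.0/(X*tau**2)    # weights of Eq. (B.2), before normalisation
w /= w.sum()          # the constant Z enforces sum(w) = 1

print(np.sum(w*X))                 # estimator X_hat, close to X_true
print(np.sqrt(np.sum(1/tau**2)))   # its SNR, Eq. (B.6)
```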
### B.2 Specific example
If we consider our estimator of the correlation function,
$\zeta_{\pm}(\mu,z_{1},z_{2})$ and denote the value obtained from two pairs of
galaxies by $\hat{\zeta}_{j}$ and the error by $\delta\hat{\zeta}_{j}$, then
we find
$A=\sum\left(\frac{\hat{\zeta}_{j}}{\delta\hat{\zeta}_{j}}\right)^{2}\,,$
(B.7)
and the optimal estimator for $\zeta_{\pm}(\mu,z_{1},z_{2})$ is
$\hat{\zeta}(\mu,z_{1},z_{2})=\sum\frac{1}{Z}\left(\frac{\hat{\zeta}_{j}}{\delta\hat{\zeta}_{j}}\right)^{2}\,,$
(B.8)
with
$Z=\sum\frac{\hat{\zeta}_{j}}{(\delta\hat{\zeta}_{j})^{2}}\,.$ (B.9)
### B.3 Counting the pairs of galaxies
Here we want to count the number of pairs of galaxies which can be used to
estimate $\xi_{\pm}(\mu,z_{1},z_{2})$. For this we need to estimate the number
of galaxies with fixed opening angle $\varphi$, $\mu=\cos\varphi$. We suppose
that we have pixels of angular aperture $\delta\theta$. The solid angle of a
cone with this opening angle is at lowest order
$\delta\Omega=\delta\theta^{2}\pi\,.$ (B.10)
Let us set the first pixel at the North Pole. We want to count the number of
pixels whose center is at an angle $\varphi\pm\delta\theta/2$ from this first
pixel. The solid angle covered by these pixels is
$\delta\Omega_{\varphi}=2\pi\int_{\varphi-\delta\theta/2}^{\varphi+\delta\theta/2}\;\sin\theta\,\mathrm{d}\theta=2\pi\delta\theta\sin\varphi\,.$
(B.11)
Here we assume that the full ring with angle $\varphi$ around the first pixel
is observed. For incomplete sky coverage this is not true for all values of
$\varphi$, but we neglect this in our treatment and take the sky coverage into
account as an overall factor $f_{\rm sky}$ which denotes the fraction of the
sky covered by the survey. Hence, the number of pixels forming such an angle
with the original pixel is given by
$N(\varphi)=\frac{\delta\Omega_{\varphi}}{\delta\Omega}=\frac{2\sin(\varphi)}{\delta\theta}\,.$
(B.12)
We also need the total number of pixels which we can choose as our first
pixel, given by
$N_{\rm tot}=f_{\rm sky}\frac{4\pi}{\delta\Omega}=\frac{4f_{\rm
sky}}{\delta\theta^{2}}\,.$ (B.13)
Here we have introduced $f_{\rm sky}$, the observed sky fraction. The total
number of couples separated by an angle $\varphi$ is
$N_{\mathrm{c}}(\varphi)=N_{\rm tot}\times N(\varphi)=\frac{8f_{\rm
sky}\sin\varphi}{\delta\theta^{3}}\,.$ (B.14)
If we consider auto-correlations, $z_{1}=z_{2}$, this number has to be be
divided by $2$ due to symmetry. Let us now denote the number of galaxies in a
pixel at redshift $z$ by $N_{\mathrm{g}}(z)$. For a given pair of pixels at
$z_{1}$ and $z_{2}$, one can choose
$N_{\mathrm{g}}(z_{1})N_{\mathrm{g}}(z_{2})$ pairs of galaxies. Hence, the
total number of pairs of galaxies which we can consider for the estimator
$\hat{\zeta}_{\pm}(\varphi,z_{1},z_{2})$ is
$N_{\mathrm{p}}(\varphi,z_{1},z_{2})=N_{\mathrm{g}}(z_{1})N_{\mathrm{g}}(z_{2})N_{\mathrm{c}}(\varphi)\,.$
(B.15)
To compute an estimator $\hat{\zeta}_{\pm}$, we need $4$ galaxies, or $2$
different pairs. The number of estimators we can form is therefore
$N_{\mathrm{e}}(\varphi,z_{1},z_{2})=\frac{N_{\mathrm{p}}(\varphi,z_{1},z_{2})(N_{\mathrm{p}}(\varphi,z_{1},z_{2})-1)}{2}\simeq\frac{1}{2}\left(N_{\mathrm{g}}(z_{1})N_{\mathrm{g}}(z_{2})\frac{8f_{\rm
sky}\sin\varphi}{\delta\theta^{3}}\right)^{2}\,.$ (B.16)
The division by $2$ of $N_{\mathrm{c}}$ becomes a division by $4$ of
$N_{\mathrm{e}}$ if we consider auto-correlations, $z_{1}=z_{2}$.
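The counting above is straightforward to evaluate; the following sketch (with illustrative survey numbers, angles in radians) implements Eqs. (B.12)–(B.16), including the extra factor for auto-correlations.

```python
import numpy as np

def pair_counts(phi, delta_theta, f_sky, Ng1, Ng2, auto=False):
    """Pixel-pair and galaxy-pair counting of Appendix B.3."""
    N_phi = 2.0 * np.sin(phi) / delta_theta          # Eq. (B.12)
    N_tot = 4.0 * f_sky / delta_theta**2             # Eq. (B.13)
    N_c = N_tot * N_phi                              # Eq. (B.14)
    if auto:                                         # auto-correlation z1 = z2
        N_c /= 2.0                                   # halves N_c, quarters N_e
    N_p = Ng1 * Ng2 * N_c                            # Eq. (B.15)
    N_e = 0.5 * N_p * (N_p - 1.0)                    # Eq. (B.16)
    return N_c, N_p, N_e

# Example: 1-degree separation, 0.1-degree pixels, half-sky coverage,
# 10 galaxies per pixel in each redshift bin (assumed numbers).
deg = np.pi / 180.0
print(pair_counts(phi=1.0 * deg, delta_theta=0.1 * deg,
                  f_sky=0.5, Ng1=10, Ng2=10))
```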
## References
* [1] M. Bartelmann and P. Schneider, Weak gravitational lensing, Phys. Rept. 340 (2001) 291–472, [astro-ph/9912508].
* [2] C. M. Hirata and U. Seljak, Intrinsic alignment-lensing interference as a contaminant of cosmic shear, Phys. Rev. D 70 (2004) 063526, [astro-ph/0406275]. [Erratum: Phys.Rev.D 82, 049901 (2010)].
* [3] D. Kirk, S. Bridle, and M. Schneider, The Impact of Intrinsic Alignments: Cosmological Constraints from a Joint Analysis of Cosmic Shear and Galaxy Survey Data, Mon. Not. Roy. Astron. Soc. 408 (2010) 1502–1515, [arXiv:1001.3787].
* [4] H. Aihara et al., First Data Release of the Hyper Suprime-Cam Subaru Strategic Program, Publ. Astron. Soc. Jap. 70 (2018) S8, [arXiv:1702.08449].
* [5] H. Aihara et al., The Hyper Suprime-Cam SSP Survey: Overview and Survey Design, Publ. Astron. Soc. Jap. 70 (2018) S4, [arXiv:1704.05858].
* [6] H. Hildebrandt et al., KiDS+VIKING-450: Cosmic shear tomography with optical and infrared data, Astron. Astrophys. 633 (2020) A69, [arXiv:1812.06076].
* [7] KiDS Collaboration, M. Asgari et al., KiDS-1000 Cosmology: Cosmic shear constraints and comparison between two point statistics, Astron. Astrophys. 645 (2021) A104, [arXiv:2007.15633].
* [8] DES Collaboration, C. Doux et al., Consistency of cosmic shear analyses in harmonic and real space, Mon. Not. Roy. Astron. Soc. 503 (2021), no. 3 3796–3817, [arXiv:2011.06469].
* [9] DES Collaboration, T. M. C. Abbott et al., Dark Energy Survey Year 3 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing, arXiv:2105.13549.
* [10] DES Collaboration, C. Doux et al., Dark Energy Survey Year 3 results: cosmological constraints from the analysis of cosmic shear in harmonic space, arXiv:2203.07128.
* [11] D. Yamauchi, T. Namikawa, and A. Taruya, Weak lensing generated by vector perturbations and detectability of cosmic strings, JCAP 10 (2012) 030, [arXiv:1205.2139].
* [12] J. Adamek, R. Durrer, and V. Tansella, Lensing signals from Spin-2 perturbations, JCAP 01 (2016) 024, [arXiv:1510.01566].
* [13] F. Montanari and R. Durrer, Measuring the lensing potential with tomographic galaxy number counts, JCAP 1510 (2015), no. 10 070, [arXiv:1506.01369].
* [14] F. Lepori, J. Adamek, and R. Durrer, Cosmological simulations of number counts, JCAP 12 (2021), no. 12 021, [arXiv:2106.01347].
* [15] V. Nistane, M. Jalilvand, J. Carron, R. Durrer, and M. Kunz, An Estimator for the lensing potential from galaxy number counts, arXiv:2201.04129.
* [16] SDSS Collaboration, R. Scranton et al., Detection of cosmic magnification with the Sloan Digital Sky Survey, Astrophys. J. 633 (2005) 589–602, [astro-ph/0504510].
* [17] X. Liu, D. Liu, Z. Gao, C. Wei, G. Li, L. Fu, T. Futamase, and Z. Fan, Detection of Cosmic Magnification via Galaxy Shear – Galaxy Number Density Correlation from HSC Survey Data, Phys. Rev. D 103 (2021), no. 12 123504, [arXiv:2104.13595].
* [18] H. Johnston et al., KiDS+GAMA: Intrinsic alignment model constraints for current and future weak lensing cosmology, Astron. Astrophys. 624 (2019) A30, [arXiv:1811.09598].
* [19] V. Perlick, Gravitational lensing from a spacetime perspective, Living Rev. Rel. 7 (2004) 9.
* [20] J. M. Stil, M. Krause, R. Beck, and A. R. Taylor, The Integrated Polarization of Spiral Galaxy Disks, Astrophys. J. 693 (2009) 1392–1403, [arXiv:0810.2303].
* [21] LSST Science Collaboration, Lsst science book, version 2.0, 2009.
* [22] L. Amendola et al., Cosmology and fundamental physics with the Euclid satellite, Living Rev. Rel. 21 (2018), no. 1 2, [arXiv:1606.00180].
* [23] J. Francfort, G. Cusin, and R. Durrer, Image rotation from lensing, Class. Quant. Grav. 38 (2021), no. 24 245008, [arXiv:2106.08631].
* [24] V. H. Mahatma, M. J. Hardcastle, J. Harwood, S. P. O’Sullivan, G. Heald, C. Horellou, and D. J. B. Smith, A low-frequency study of linear polarization in radio galaxies, Monthly Notices of the Royal Astronomical Society 502 (dec, 2020) 273–292.
* [25] F. F. Gardner and J. B. Whiteoak, The Polarization of Cosmic Radio Waves, Annual Review of Astronomy and Astrophysics 4 (Jan., 1966) 245.
* [26] P. P. Kronberg, C. C. Dyer, E. M. Burbidge, and V. T. Junkkarinen, A Technique for Using Radio Jets as Extended Gravitational Lensing Probes, The Astrophysical Journal Letters 367 (Jan., 1991) L1.
* [27] P. P. Kronberg, C. C. Dyer, and H.-J. Röser, Estimates of the global masses of two distant galaxies using a new type of astrophysical mass “laboratory”, The Astrophysical Journal 472 (1996), no. 1 115.
* [28] C. R. Burns, C. C. Dyer, P. P. Kronberg, and H.-J. Roser, Theoretical modeling of weakly lensed polarized radio sources, The Astrophysical Journal 613 (oct, 2004) 672–681.
* [29] M. L. Brown and R. A. Battye, Polarization as an indicator of intrinsic alignment in radio weak lensing, Monthly Notices of the Royal Astronomical Society (2010).
* [30] M. L. Brown and R. A. Battye, Mapping the dark matter with polarized radio surveys, The Astrophysical Journal 735 (jun, 2011) L23.
* [31] L. Whittaker, M. L. Brown, and R. A. Battye, Separating weak lensing and intrinsic alignments using radio observations, Mon. Not. Roy. Astron. Soc. 451 (2015), no. 1 383–399, [arXiv:1503.00061].
* [32] S. Camera, I. Harrison, A. Bonaldi, and M. L. Brown, SKA weak lensing – III. Added value of multiwavelength synergies for the mitigation of systematics, Mon. Not. Roy. Astron. Soc. 464 (2017), no. 4 4747–4760, [arXiv:1606.03451].
* [33] I. Harrison et al., SuperCLASS – III. Weak lensing from radio and optical observations in Data Release 1, Mon. Not. Roy. Astron. Soc. 495 (2020), no. 2 1737–1759, [arXiv:2003.01736].
* [34] L. Whittaker, R. A. Battye, and M. L. Brown, Measuring cosmic shear and birefringence using resolved radio sources, Mon. Not. Roy. Astron. Soc. 474 (2018), no. 1 460–477, [arXiv:1702.01700].
* [35] G. Fanizza, E. Di Dio, R. Durrer, and G. Marozzi, The gauge invariant cosmological Jacobi map from weak lensing at leading order, arXiv:2201.11552.
* [36] S. Parnovskii, Y. Kudrya, and A. Aleksandrov, Apparent anisotropy in the orientation of extragalactic objects due to the curvature of spacetime, JETP 79 (1994) 840.
* [37] Planck Collaboration, N. Aghanim et al., Planck 2018 results. VIII. Gravitational lensing, Astron. Astrophys. 641 (2020) A8, [arXiv:1807.06210].
* [38] P. Bull, Extending Cosmological tests of General Relativity with the square kilometre array, The Astrophysical Journal 817 (jan, 2016) 26.
* [39] J. Lesgourgues, The Cosmic Linear Anisotropy Solving System (CLASS) I: Overview, arXiv:1104.2932.
* [40] D. Blas, J. Lesgourgues, and T. Tram, The Cosmic Linear Anisotropy Solving System (CLASS) II: Approximation schemes, JCAP 07 (2011) 034, [arXiv:1104.2933].
* [41] Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [arXiv:1807.06209]. [Erratum: Astron.Astrophys. 652, C4 (2021)].
* [42] M. Kilbinger, Cosmology with cosmic shear observations: a review, Rept. Prog. Phys. 78 (2015) 086901, [arXiv:1411.0115].
* [43] M. Bartelmann, Gravitational Lensing, Class. Quant. Grav. 27 (2010) 233001, [arXiv:1010.3829].
* [44] R. Durrer, The Cosmic Microwave Background. Cambridge University Press, 12, 2020.
* [45] K. Seibert, Spin-Weighted Spherical Harmonics and Their Application for the Construction of Tensor Slepian Functions on the Spherical Cap, PhD Dissertation, Siegen University, Germany, 2018.
* [46] B. Ghosh, R. Durrer, and E. Sellentin, General Relativistic corrections in density-shear correlations, JCAP 1806 (2018), no. 06 008, [arXiv:1801.02518].
1 Centre for Medical Image Computing and Wellcome/EPSRC Centre for
Interventional & Surgical Sciences, University College London, London, UK
2 Urological Research Network, Miami Lakes, Florida, USA
3 Focalyx Technologies, Miami, FL, USA
4 City University of Hong Kong, Hong Kong, China
Email: <EMAIL_ADDRESS>
# Controlling False Positive/Negative Rates for Deep-Learning-Based Prostate
Cancer Detection on Multiparametric MR images
Zhe Min1 Fernando J. Bianco2 Qianye Yang1 Rachael Rodell1,3 Wen Yan1,4 Dean
Barratt1 Yipeng Hu1
###### Abstract
Prostate cancer (PCa) is one of the leading causes of death for men worldwide.
Multi-parametric magnetic resonance (mpMR) imaging has emerged as a non-
invasive diagnostic tool for detecting and localising prostate tumours by
specialised radiologists. These radiological examinations, for example, for
differentiating malignant lesions from benign prostatic hyperplasia in
transition zones and for defining the boundaries of clinically significant
cancer, remain challenging and highly skill-and-experience-dependent. We first
investigate experimental results in developing object detection neural
networks that are trained to predict the radiological assessment, using these
high-variance labels. We further argue that such a computer-assisted diagnosis
(CAD) system needs to have the ability to control the false-positive rate
(FPR) or false-negative rate (FNR), in order to be usefully deployed in a
clinical workflow, informing clinical decisions without further human
intervention. However, training detection networks typically requires a multi-
tasking loss, which is not trivial to adapt for a direct control of
FPR/FNR. This work in turn proposes a novel PCa detection network that
incorporates a lesion-level cost-sensitive loss and an additional slice-level
loss based on a lesion-to-slice mapping function, to manage the lesion- and
slice-level costs, respectively. Our experiments based on 290 clinical
patients conclude that 1) the lesion-level FNR was effectively reduced from
0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by
changing the lesion-level cost; 2) the slice-level FNR was reduced from 0.19
to 0.00 by taking into account the slice-level cost; and 3) both lesion-level
and slice-level FNRs were reduced with lower FP/FPR by changing the
lesion-level or slice-level costs, compared with post-training threshold
adjustment using networks without the proposed cost-aware training. For the
PCa application of interest, the proposed CAD system is capable of
substantially reducing the FNR with a relatively preserved FPR, and is
therefore considered suitable for PCa screening applications.
###### Keywords:
Prostate Cancer · Multi-Parametric Resonance Images · Object Detection ·
False Negative Reduction.
## 1 Introduction
Prostate Cancer (PCa) is a major public health problem for males globally
[12]. An estimated 191,930 new PCa cases and 33,330 associated deaths occurred
in the United States in 2020 [12]. Multi-parametric Magnetic Resonance (mpMR)
imaging has the potential to play a part in every stage of prostate cancer
patient management, including enabling targeted biopsy for early-to-medium
stage cancer diagnosis and screening programmes for avoiding unnecessary
biopsy [6, 14]. However, reading mpMR requires highly specialised radiologists
and, even for the experienced, it remains a challenging and arguably tedious
task.
Automated computer-aided PCa detection can not only help significantly reduce
the radiologist’s time in examining the volumetric, multi-modality mpMR
images, but also provide higher consistency than human interpreters while
rivalling human performance [9]. Computer-aided diagnosis (CAD) of PCa using
mpMR has therefore attracted growing attention and, in particular, modern
machine learning methods have been proposed recently for end-to-end,
fully-automated CAD tasks, such as classification, detection and localisation.
However, automating PCa detection has to overcome several challenges innate to
the imaging and pathology characteristics of this application: inherently high
inter-patient variance in shape and size among cancerous regions; spatial
misalignment between different MR sequences [16]; and similar imaging patterns
exhibited by benign prostatic hyperplasia (BPH) and high-grade PCa, which
subsequently lead to false positives (FPs) [15, 9] for both CAD models and
human observers, and thus in their labelling.
Scores based on Prostate Imaging and Reporting Data System (PI-RADS) [13] and
Gleason groups based on biopsy or prostatectomy specimens are examples of
radiological and histopathological labels. These two types of labels and their
combinations are useful to train a CAD system. Sanford et al. utilized a
ResNet-based network to assign specific PI-RADS scores to already delineated
lesions [10]. Schelb et al. compared the clinical performance between PI-RADS
and U-Net-based methods for classification and segmentation of suspicious
lesions on T2w and diffusion MRI sequences, where the ground-truth is acquired
by combined targeted and extended systematic MRI–transrectal US fusion biopsy
[11]. While the radiological labels are limited by the challenges discussed
above, histopathological labels are also subject to errors and bias in
sampling, due to, for example, shifts in patient cohort, localisation errors
in needle biopsy and variance in pathology reports. Searching for the best
gold standard between the two is still an open question and may be beyond the
scope of this study. In our work, we use the PI-RADS scores from experienced
radiologists as our prediction of interest, i.e. the training labels. See more
details of the data in Section 3.
PCa detection by a CAD system has often been formulated as a semantic
segmentation problem, and many recent PCa segmentation algorithms adopted
convolutional neural networks (CNNs) [1]. Cao et al. proposed a multiclass CNN
called FocalNet to jointly segment the cancerous region and predict the
Gleason scores on mpMR images [1]. Cao et al. adapted the focal loss (FL) to
the PCa detection task that predicts the pixel-level lesion probability map on
both the Apparent Diffusion Coefficient (ADC) and T2-Weighted (T2w) images,
where the training concentrates more on the cancerous or suspicious pixels
[2]. In addition, to account for the fact that the lesions may show different
sizes or shapes across imaging modalities, an imaging component selector
called selective dense conditional random field is designed to select the
imaging modality where the lesion is observable most clearly [2]. Finally, the
predicted probability maps are refined into the lesion segmentation on that
selected imaging component [2]. It should be noted that only slices with
annotated lesions are included in both the training and validation in [1, 2].
Yu et al. utilised a standalone false positive reduction network with inputs
being the detected true positives (TPs) and false positives (FPs) from another
U-net-based detection network [15].
Object detection algorithms have also been proposed for detecting and
segmenting PCa from mpMR images, explicitly discriminating between different
lesions through instance classification and segmentation. Multiple-staged
object detection networks have been shown to have fewer false positives in
challenging lesion detecting tasks, compared with segmentation methods such as
U-Net [16]. Li et al. adapted the MaskRCNN to detect the presence of
epithelial cells on histological images for predicting Gleason grading [5],
with an additional branch classifying epithelial cell presence and the
MaskRCNN branch classifying, detecting (bounding boxes) and segmenting (into
binary masks) the epithelial areas. Dai et al. investigated the performance of
the MaskRCNN in segmenting the prostate gland and the intra-prostatic lesions
and reported consistently superior performance over the U-net [3]. Yu et al.
also used the MaskRCNN in the PCa detection task, where an additional
segmentation branch has been added to the original detection network [16].
Two- or multi-stage object detectors have shown superior performance compared
with their one-stage counterparts [7]. However, existing two-stage object
detection networks in computer vision, such as Mask-RCNN, optimise for overall
accuracy, weighting false-positive and false-negative regions based on their
respective prevalence rather than on the associated clinical and societal
costs. In this work, we focus on real-world clinical scenarios, in which the
CAD system is developed for, for example, assisting population screening or
treatment referrals, by alleviating the need for a radiologist to further
examine individual lesions or slices. These clinical applications mandate the
developed CAD system to guarantee a low false negative rate and a low false
positive rate at the lesion or slice level, in the
two respective examples. Instead of thresholding the detection network post-
training to achieve the desired sensitivity/specificity at either lesion or
slice level, in this study, we aim to answer the research question: With a
two-stage object detector, can more desirable FPR or FNR be controlled by
changing their costs during training?
We explore the plausible answer to this question through formulating and
incorporating two cost-sensitive classification losses at the lesion and slice
levels respectively, which will give the flexibility of biasing towards
reducing FPR or FNR during training. This is not trivial for a detection
network training scheme that minimises a multi-tasking loss, as the following
technical questions need to be addressed in this work: a) whether a cost-
sensitive loss replacing the original instance-level classification loss is
effective; b) how slice-level cost can be quantified and subsequently
controlled; c) whether changing slice-level cost by the additional slice-level
loss is effective; and d) how these two level costs can be combined during
training to achieve desirable levels of control of FPR/FNR at lesion or slice
level, on test data set.
Our key contributions of this study are summarised as follows. (1) We modify
the classification loss in the original detection network with the aim of
controlling the lesion-level FP/FN for PCa detection. (2) We propose a novel
slice-level classification loss with the aim of controlling the slice-level
FP/FN for PCa detection. We investigate its utility in improving baseline
sensitivity with lower FPR by incorporating the classifier into the overall
detection network. (3) We study the effect of different weighting schemes in
the two classifier branches on lesion-level and slice-level FP/FN reduction.
## 2 Methods
### 2.1 Problem definition
In this work, PCa detection is formulated as an instance segmentation problem.
The slices within mpMR images without annotated cancerous regions are regarded
as background images. The multiple tasks in PCa detection include:
classifying whether a proposal region is a lesion or not; regressing the
coordinates of the bounding box (BB) surrounding the proposal region; and
segmenting the mask of the lesion.
The overall architecture of our proposed CAD system is depicted in Fig. 1. The
network utilizes a Feature Pyramid Network (FPN) backbone on top of the ResNet
architecture [4], to generate multi-scale features. The extracted features are
shared by the following two modules: (a) a region proposal network (RPN)
module that generates candidate object bounding boxes [8]; (b) a detection
module that performs the bounding box regression, classification and the
region of interest (RoI) mask prediction on the candidate boxes.
Figure 1: Illustration of the overall architecture based on the MaskRCNN. ROI:
region of interest, RPN: region proposal network. $L_{rpn\\_reg}$: RPN
regression loss, $L_{rpn\\_cls}$: RPN classification loss, $L_{box}$: the
regression loss at the proposal/RoI level, $L_{mask}$: the mask loss at the
RoI level, $L_{cost\\_cls}$: the lesion-level (i.e., RoI level) cost-sensitive
classification loss, $L_{slice\\_cls}$: the slice-level cost-sensitive
classification loss.
### 2.2 Overall training loss function
As shown in Fig. 1, our multi-task loss consists of the following six terms
$L_{total}=L_{rpn\\_reg}+L_{rpn\\_cls}+L_{box}+L_{mask}+L_{cost\\_cls}+L_{slice\\_cls}$
(1)
where $L_{rpn\\_reg}$ and $L_{rpn\\_cls}$ are the smoothed bounding box
regression loss based on ${L^{1}}$-norm and the cross entropy classification
loss, at the anchor level, respectively; $L_{box}$, $L_{mask}$ and
$L_{cost\\_cls}$ are the ${L^{1}}$-norm smoothed bounding box regression loss,
the binary cross entropy loss and the weighted cross entropy classification
loss, at the RoI level, respectively; and $L_{slice\\_cls}$ is the weighted
cross entropy classification loss at the slice level. Among all the loss terms
in Eq.(1), $L_{rpn\\_reg}$, $L_{rpn\\_cls}$, $L_{box}$, and $L_{mask}$ are the
same as those in the original Mask-RCNN framework. The rationale of
$L_{slice\\_cls}$ is to evaluate whether the model can classify a slice as
cancerous or not. The inputs to $L_{cost\\_cls}$ and $L_{slice\\_cls}$ are the
class probabilities of the proposals being cancerous.
### 2.3 Lesion-Level cost-sensitive classification loss
To control the cost of mis-classification of individual lesions, the lesion-
level (RoI level) cost sensitive classification loss
$L_{cost\\_cls}(p_{i},p_{i}^{\star})$ is defined as follows
$L_{cost\\_cls}(p_{i},p_{i}^{\star})=\underbrace{-\alpha_{lesion}p_{i}^{\star}\log{p_{i}}}_{L_{cost\\_cls}^{positive}}\underbrace{-\beta_{lesion}(1-p_{i}^{\star})\log{(1-p_{i})}}_{L_{cost\\_cls}^{negative}}$
(2)
where $p_{i}^{\star}=1$ if the $i^{th}$ region proposal is positive,
$p_{i}^{\star}=0$ if negative, $p_{i}\in[0,1]$ is the predicted class
probability (by the classification branch in MaskRCNN) of the region proposal
$i$ being a cancerous region, and $\alpha_{lesion}$ and $\beta_{lesion}$ are
the weights associated with the positive and negative regions. In this study,
three different combinations of $\alpha_{lesion}$ and $\beta_{lesion}$ are
tested: (i) $\alpha_{lesion}>1$ and $\beta_{lesion}=1$, where during training
the network emphasizes regions with positive labels; (ii) $\alpha_{lesion}=1$
and $\beta_{lesion}>1$, where the network emphasizes regions with negative
labels; (iii) $\alpha_{lesion}=1$ and $\beta_{lesion}=1$, where the network
weights them equally. In other words, in the first two cases the network
penalises more heavily (1) the false negatives (FNs) and (2) the false
positives (FPs), respectively, at the lesion level. In the third case,
$L_{cost\\_cls}(p_{i},p_{i}^{\star})$ degenerates to the binary cross entropy
loss.
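For concreteness, a minimal NumPy sketch of Eq. (2) is given below; the actual implementation in this work uses TensorFlow, so this is only an illustration of the weighting logic (the averaging over proposals is our choice for the demo).

```python
import numpy as np

def lesion_cost_cls(p, p_star, alpha_lesion=1.0, beta_lesion=1.0, eps=1e-7):
    """Lesion-level cost-sensitive classification loss of Eq. (2),
    averaged over the region proposals for illustration."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    p_star = np.asarray(p_star, dtype=float)
    pos = -alpha_lesion * p_star * np.log(p)               # penalises FNs
    neg = -beta_lesion * (1.0 - p_star) * np.log(1.0 - p)  # penalises FPs
    return float(np.mean(pos + neg))

# alpha_lesion = beta_lesion = 1 recovers the binary cross entropy;
# alpha_lesion = 3 penalises the under-confident positive proposal more.
print(lesion_cost_cls([0.9, 0.2], [1, 0]))
print(lesion_cost_cls([0.9, 0.2], [1, 0], alpha_lesion=3.0))
```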
Positive Slices. In the slices where there are GT lesions, the training loss
associated with that slice is defined as
$\begin{split}L_{total}=L_{rpn\\_reg}+L_{rpn\\_cls}+L_{box}+L_{mask}+L_{cost\\_cls}.\end{split}$
(3)
Negative Slices. In the slices where there is no GT lesion, the training loss
associated with that slice is defined as
$\begin{split}L_{total}&=L_{rpn\\_cls}+L_{cost\\_cls}^{negative}\\\
&=L_{rpn\\_cls}-\beta_{lesion}(1-p_{i}^{\star})\log{(1-p_{i})}.\end{split}$
(4)
### 2.4 Slice-Level cost-sensitive classification loss
Let us suppose there are $N$ proposal regions, or regions of interest (RoIs), in
one slice. The slice-level cost-sensitive classification loss is defined as
the weighted cross entropy as follows
$L_{slice\\_cls}=\underbrace{-\alpha_{slice}p_{slice}^{\star}\log
p_{slice}}_{L_{slice\\_cls}^{positive}}-\underbrace{\beta_{slice}(1-p_{slice}^{\star})\log(1-p_{slice})}_{L_{slice\\_cls}^{negative}},$
(5)
where $p_{slice}^{\star}\in\\{0,1\\}$ and $p_{slice}\in[0,1]$ is given by
$p_{slice}^{\star}=max(p^{\star}_{1},...,p^{\star}_{N}),$ (6)
$p_{slice}=max(p_{1},...,p_{N}),$ (7)
where $p_{i}^{\star}$ and $p_{i}$ are the GT and predicted probability that
$i^{th}$ region being cancerous. More specifically, $p_{slice}^{\star}=1$
indicates that there is at least one cancerous region in the slice of interest,
$p_{slice}$ is the largest predicted probability of one detected region being
cancerous. The rationale behind the lesion-to-slice mapping function, for
computing $p_{slice}^{\star}$ and $p_{slice}$, is that (1) for GT labels, one
slice is considered to be a ‘cancerous’ slice if there exists at least one
‘positive’ region (i.e., $p_{i}^{\star}=1$ for at least one $i$); (2) for
predictions, the probability of one slice being ‘cancerous’ is the largest
predicted probability of one detected region being ’positive’ in the
slice of interest. Like $\alpha_{lesion}$ and $\beta_{lesion}$ in
$L_{cost\\_cls}(p_{i},p_{i}^{\star})$, $\alpha_{slice}$ and $\beta_{slice}$
weight the loss $L_{slice\\_cls}$ in an adversarial manner: whilst
$\alpha_{slice}>1$ and $\beta_{slice}=1$, the network penalises FNs more
heavily at the slice level; whilst $\alpha_{slice}=1$ and $\beta_{slice}>1$,
the network penalises FPs more.
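The lesion-to-slice mapping and the loss of Eq. (5) can be sketched in the same style (again a NumPy illustration rather than the TensorFlow code used in this work).

```python
import numpy as np

def slice_cost_cls(p, p_star, alpha_slice=1.0, beta_slice=1.0, eps=1e-7):
    """Slice-level cost-sensitive loss of Eq. (5) for one slice, with the
    lesion-to-slice mapping of Eqs. (6)-(7): a max over the N proposals."""
    p_slice = np.clip(np.max(np.asarray(p, dtype=float)), eps, 1.0 - eps)
    p_slice_star = float(np.max(np.asarray(p_star, dtype=float)))
    return float(-alpha_slice * p_slice_star * np.log(p_slice)
                 - beta_slice * (1.0 - p_slice_star) * np.log(1.0 - p_slice))

# A slice with one GT lesion, confidently detected by the second proposal:
print(slice_cost_cls(p=[0.1, 0.85, 0.3], p_star=[0, 1, 0]))
```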
Positive Slices. In the slices where there are GT lesions, the overall training
loss remains $L_{total}$, defined in Eq.(1), and can be expanded as follows
$L_{total}=L_{rpn\\_reg}+L_{rpn\\_cls}+L_{box}+L_{mask}+L_{cost\\_cls}-\alpha_{slice}p_{slice}^{\star}\log
p_{slice}.$ (8)
Negative Slices. In the slices where there is no GT lesion, the overall
training loss is therefore given by
$\begin{split}L_{total}&=L_{rpn\\_cls}+L_{cost\\_cls}^{negative}+L_{slice\\_cls}^{negative}\\\
&=L_{rpn\\_cls}-\beta_{lesion}(1-p_{i}^{\star})\log{(1-p_{i})}-\beta_{slice}(1-p_{slice}^{\star})\log(1-p_{slice}).\end{split}$
(9)
Here, only the classification losses at the anchor, lesion/region, and slice
levels are included.
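Putting the two cost-sensitive terms together, the classification part of the per-slice training loss can be sketched as below, reusing the lesion_cost_cls and slice_cost_cls helpers from the previous sketches (the RPN, box and mask terms of Eq. (1) are omitted); for a negative slice, every $p_{i}^{\star}$ is 0, so only the negative parts survive, automatically recovering Eqs. (4) and (9).

```python
def total_classification_loss(p, p_star,
                              alpha_lesion=1.0, beta_lesion=1.0,
                              alpha_slice=1.0, beta_slice=1.0):
    """Sum of the lesion-level (Eq. 2) and slice-level (Eq. 5) losses."""
    return (lesion_cost_cls(p, p_star, alpha_lesion, beta_lesion)
            + slice_cost_cls(p, p_star, alpha_slice, beta_slice))

# Positive slice (one GT lesion) vs. negative slice (no GT lesion):
print(total_classification_loss([0.1, 0.8], [0, 1], alpha_lesion=3.0,
                                alpha_slice=3.0))
print(total_classification_loss([0.1, 0.2], [0, 0], beta_lesion=3.0,
                                beta_slice=3.0))
```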
## 3 Experiments and Evaluation
### 3.1 Data set and implementation details
Our data sets consist of 290 clinical prostate cancer patients under an
approved Institutional Review Board (IRB) protocol. The ground-truth labels
(including cancerous masks) have been acquired based on the Prostate Imaging
Reporting and Data System (PI-RADS) scores reported by radiologists with more
than 15 years of experience. Annotated lesions with PI-RADS $\geq 3$ are
regarded as clinically significant and are considered positive in this work.
The ratio of the numbers of patients in the training, validation and test sets
is 8:1:1. The inputs to our proposed detection network include the
T2-Weighted (T2w), the Apparent
Diffusion Coefficient (ADC), and the Diffusion-Weighted Images (DWI) b-2000
images. ADC and DWI b-2000 images were spatially aligned with corresponding
T2w images using the rigid transformation based on the coordinate information
stored in the imaging files. All slices were cropped from the center to be
160$\times$160 and the intensity values were normalized to [0,1]. Our networks
were constructed with 2D convolutional layers, with a so-called 2.5D input
bundle which concatenated two neighboring slices for each of the T2, ADC and
DWI b-2000 image slices at the slice of interest, i.e. resulting in a nine-
channel input as denoted in Fig. 1.
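A sketch of this input construction is given below (function name and array layout are ours, for illustration only).

```python
import numpy as np

def make_25d_input(t2, adc, dwi, k):
    """Build the 2.5D nine-channel input for the slice of interest k:
    slices k-1, k, k+1 of each of T2w, ADC and DWI b-2000 are stacked
    along the channel axis.

    t2, adc, dwi : arrays of shape (num_slices, 160, 160)."""
    channels = []
    for vol in (t2, adc, dwi):
        channels.extend([vol[k - 1], vol[k], vol[k + 1]])
    return np.stack(channels, axis=-1)   # shape (160, 160, 9)

t2 = adc = dwi = np.random.rand(24, 160, 160).astype(np.float32)
print(make_25d_input(t2, adc, dwi, k=12).shape)  # (160, 160, 9)
```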
The proposed method was implemented with the TensorFlow framework. Each
network was trained for 100 epochs with the stochastic gradient descent (SGD)
optimizer and the initial learning rate was set to be 0.001. Random affine
transformations were applied for data augmentation during training. If not
otherwise specified, the parameter threshold (we use threshold to denote the
parameter DETECTION$\\_$MIN$\\_$CONFIDENCE in the original MaskRCNN code, for
brevity) was set to 0.7, and the maximum number of lesions in one slice was
configured to be 6 at both the training and test stages.
### 3.2 Evaluation metrics
We evaluate the methods with descriptive statistics at both the lesion and
slice levels. The slice-level false positive rate (FPR) and false negative
rate (FNR) are defined as follows
$\textrm{FPR}=\frac{\textrm{FP}}{\textrm{FP}+\textrm{TN}}=1-\textrm{specificity}$,
$\textrm{FNR}=\frac{\textrm{FN}}{\textrm{FN}+\textrm{TP}}=1-\textrm{sensitivity}$,
$\textrm{ACC}=\frac{\textrm{TP}+\textrm{TN}}{\textrm{TP}+\textrm{TN}+\textrm{FP}+\textrm{FN}}$,
where FP, TN, FN and TP are the numbers of false positive, true negative,
false negative and true positive cases, respectively. It is noteworthy that
the above definitions are used at the slice level. At the lesion level, only
the definition of FNR remains valid; instead of the FPR, we compute the mean
FP per slice.
At the lesion level, a TP prediction requires that the GT lesion has an
Intersection over Union (IoU) greater than or equal to 0.2 between the GT
bounding box (BB) and any predicted BB. A FP prediction means IoUs are smaller
than 0.2 (including no overlap) between the predicted BB and all GT BBs. A GT
lesion that has no TP prediction is counted as a FN. TN is not defined at the
lesion level.
At the slice level, one slice with at least one GT annotated lesion mask is
considered as a TP if there is any detected region at that slice. If there is
no detection on the slices with GT lesion masks, the slice is counted as a FN.
A TN slice means no lesion found in both prediction and GT. Any positive
lesion predicted on a slice that has no GT lesion leads to a FP slice.
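These definitions can be summarised in code; the helpers below are our own illustration, not part of the released implementation.

```python
def rates(tp, tn, fp, fn):
    """Slice-level FPR, FNR and ACC from raw counts (Sec. 3.2)."""
    fpr = fp / (fp + tn)                  # 1 - specificity
    fnr = fn / (fn + tp)                  # 1 - sensitivity
    acc = (tp + tn) / (tp + tn + fp + fn)
    return fpr, fnr, acc

def iou(box_a, box_b):
    """IoU of two boxes (x1, y1, x2, y2); a lesion-level TP requires
    IoU >= 0.2 between a GT box and some predicted box."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(rates(tp=90, tn=50, fp=60, fn=2))
print(iou((0, 0, 10, 10), (3, 3, 13, 13)))  # ~0.32 >= 0.2 -> a TP match
```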
 | $\alpha_{lesion}=\beta_{lesion}=1$ | $\alpha_{lesion}=3$, $\beta_{lesion}=1$ | $\alpha_{lesion}=1$, $\beta_{lesion}=3$
---|---|---|---
Lesion-level FP | 1.0327 | 2.0218 | $\bm{0.6567}$
Lesion-level FNR | 0.1941 | $\bm{0.1013}$ | 0.4118
Slice-level FPR | 0.5878 | 0.8434 | $\bm{0.5049}$
Slice-level FNR | 0.0097 | $\bm{0.0028}$ | 0.0736
ACC | 0.5744 | 0.3924 | $\bm{0.6161}$

Table 1: The false positive rate (FPR) and false negative rate (FNR) on the test data sets, where $L_{cost\\_cls}$ was used in the training process. With $\alpha_{lesion}=3$, $\beta_{lesion}=1$ in Eq.(2), both the lesion-level and slice-level FNRs were considerably reduced, compared to the case where $\alpha_{lesion}=1,\beta_{lesion}=1$. With $\alpha_{lesion}=1$, $\beta_{lesion}=3$ in Eq.(2), the lesion-level FPs and slice-level FPRs were lower, compared to the case where $\alpha_{lesion}=1,\beta_{lesion}=1$.

 | $\alpha_{slice}=1$, $\beta_{slice}=1$ | $\alpha_{slice}=3$, $\beta_{slice}=1$ | $\alpha_{slice}=1$, $\beta_{slice}=3$
---|---|---|---
Lesion-level FP | $\bm{1.7202}$ | 1.9493 | 1.7965
Lesion-level FNR | 0.1190 | $\bm{0.0970}$ | 0.1232
Slice-level FPR | 0.8505 | $\bm{0.8234}$ | 0.8277
Slice-level FNR | $\bm{0.0000}$ | $\bm{0.0000}$ | 0.0014
ACC | 0.3882 | $\bm{0.4076}$ | 0.4041

Table 2: The false positive rate (FPR) and false negative rate (FNR) on the test data sets where $L_{cost\\_cls}$ ($\alpha_{lesion}=\beta_{lesion}=1$) and $L_{slice\\_cls}$ were incorporated into the training. With $\alpha_{slice}=3,\beta_{slice}=1$ in Eq.(5): (a) the lesion-level FNR was reduced; (b) the slice-level FNR remained close to zero with a reduced slice-level FPR, compared to the case where $\alpha_{slice}=1,\beta_{slice}=1$. With $\alpha_{slice}=1,\beta_{slice}=3$ in Eq.(5), the slice-level FPR was reduced, compared to the case where $\alpha_{slice}=1,\beta_{slice}=1$.

 | $\alpha=1$, $\beta=1$ | $\alpha=3$, $\beta=1$ | $\alpha=1$, $\beta=3$
---|---|---|---
Lesion-level FP | 1.7202 | 2.3827 | $\bm{1.0982}$
Lesion-level FNR | 0.1190 | $\bm{0.0734}$ | 0.2262
Slice-level FPR | 0.8505 | 0.9220 | $\bm{0.6576}$
Slice-level FNR | $\bm{0.0000}$ | $\bm{0.0000}$ | 0.0014
ACC | 0.3882 | 0.3367 | $\bm{0.5265}$

Table 3: The false positive rate (FPR) and false negative rate (FNR) on the test data sets where both $L_{cost\\_cls}$ and $L_{slice\\_cls}$ were incorporated. With $\alpha=3,\beta=1$ in Eq.(2) and Eq.(5), (a) the lesion-level FNR was reduced; (b) the slice-level FNR remained 0, compared to the case where $\alpha=1,\beta=1$. With $\alpha=1,\beta=3$, the lesion-level FP and slice-level FPR were reduced, compared to those where $\alpha=1,\beta=1$.

Figure 2: In all figures in this paper, (1) the red circles denote the ground-truth (GT) lesion regions, while the blue circles denote the predicted regions of interest; (2) a false positive (FP) predicted detection is denoted with a yellow arrow, while a false negative (FN) lesion is denoted with a green arrow. Here, only the lesion-level classification loss $L_{cost\\_cls}$ was used in the training process. All example sub-figures correspond to the performance on one and the same slice in the test data set. In the first row, threshold=0.7, while threshold=0.95 in the second row. The first three columns from the left show the detection results with only $L_{cost\\_cls}$ incorporated into the training loss. (a,e) $\alpha_{lesion}=1,\beta_{lesion}=1$; (b,f) $\alpha_{lesion}=1,\beta_{lesion}=3$; (c,g) $\alpha_{lesion}=3,\beta_{lesion}=1$. (d) Apparent Diffusion Coefficient (ADC) image; (h) Diffusion-Weighted Images (DWI) b-2000 image.
## 4 Results
### 4.1 Adjusting mis-classification cost at lesion-level
In this experiment, we study the impact on the lesion-level and slice-level
performances, due to different $L_{cost\\_cls}$ in the training, while the
slice-level loss $L_{slice\\_cls}$ in Eq.(5) is not included. More
specifically, we compare the original MaskRCNN (i.e.,
$\alpha_{lesion}=1,\beta_{lesion}=1$ in Eq.(2)), with our proposed two
variants where $\alpha_{lesion}=3,\beta_{lesion}=1$ and
$\alpha_{lesion}=1,\beta_{lesion}=3$.
Table 1 summarises the comparative results with the case where
$\alpha_{lesion}=1,\beta_{lesion}=1$. With $\alpha_{lesion}=3$,
$\beta_{lesion}=1$, the lesion-level and slice-levels FNRs were reduced from
0.1941 to 0.1013, from 0.0097 to 0.0028, respectively. With
$\alpha_{lesion}=1$, $\beta_{lesion}=3$, the lesion-level FP was reduced from
1.0327 to 0.6567 while the slice-level FPR was reduced from 0.5878 to 0.5049.
Figure 3: This figure demonstrates the reduction of the lesion-level FPs by
changing the lesion-level classification cost $L_{cost\\_cls}$. The same
setting in training was adopted as that in Fig. 2, and all example sub-figures
shown here correspond to the performance on one and the same slice in the
test data set (but a different slice from that in Fig. 2). In the first row,
threshold=0.7, while threshold=0.95 in the second row. The weighting schemes
are summarised as follows: (a,e) $\alpha_{lesion}=1,\beta_{lesion}=1$; (b,f)
$\alpha_{lesion}=3,\beta_{lesion}=1$; (c,g)
$\alpha_{lesion}=1,\beta_{lesion}=3$. (d) ADC image; (h) DWI b-2000 image.
Figure 4: This figure demonstrates that both the lesion-level and slice-level
FNs were reduced by incorporating $L_{slice\\_cls}$ into the training process.
In all the ablation examples presented in this figure,
$\textsf{threshold}=0.95$. Example sub-figures shown here in the same row
correspond to the same slice in the test data set. (a,e,i) depicts the
detection results with only the lesion-level classification loss
$L_{cost\\_cls}$ incorporated, where $\alpha_{lesion},\beta_{lesion}$ vary.
(b,f,j) depicts the detection results with both $L_{cost\\_cls}$ and
$L_{slice\\_cls}$ utilized in the training. The weighting schemes in this
ablation study are as follows. (a) $\alpha_{lesion}=1,\beta_{lesion}=1$; (b)
$\alpha=1,\beta=1$; (e) $\alpha_{lesion}=1,\beta_{lesion}=3$; (f)
$\alpha=1,\beta=3$; (i) $\alpha_{lesion}=3,\beta_{lesion}=1$; (j)
$\alpha=3,\beta=1$. (c,g,k) ADC images; (d,h,l) DWI b-2000 images.
Fig. 2 shows the examples where the FNs were reduced with $\alpha_{lesion}=3$,
$\beta_{lesion}=1$, by comparing Fig. 2 (c) with Fig. 2 (a,b), and comparing
Fig. 2 (g) with Fig. 2 (e,f). By comparing Fig. 2 (g) with Fig. 2 (c), the FP
was reduced with a higher threshold. In contrast, more FNs can be found with a
larger threshold and $\alpha_{lesion}=1,\beta_{lesion}=3$, by comparing Fig. 2
(f) with Fig. 2 (b).
Fig. 3 shows the example where the FPs were avoided/reduced with
$\alpha_{lesion}=1$, $\beta_{lesion}=3$, by comparing Fig. 3 (g) with Fig. 3
(a,b,c,e,f). In the first row, in Fig. 3 (c), with a relatively lower value of
the parameter threshold, the FP still exists with
$\alpha_{lesion}=1,\beta_{lesion}=3$. In contrast, comparing Fig. 3 (g) with
Fig. 3 (e,f) shows that, with a larger value of the parameter threshold, the
FP was avoided.
### 4.2 Adjusting mis-classification cost at slice-level
In this experiment, we study the effect of incorporating and changing
$L_{slice\\_cls}$ in the training loss whereas the weighting in
$L_{cost\\_cls}$ was fixed as $\alpha_{lesion}=1,\beta_{lesion}=1$. Table 2
includes the quantitative results with different settings of
$\alpha_{slice},\beta_{slice}$: (1) $\alpha_{slice}=1,\beta_{slice}=1$; (2)
$\alpha_{slice}=3,\beta_{slice}=1$; (3) $\alpha_{slice}=1,\beta_{slice}=3$.
With $\alpha_{slice}=3,\beta_{slice}=1$, (a) the lesion-level FNR was reduced
from 0.1190 to 0.0970; (b) the slice-level FNR remained at 0.0000 while the
slice-level FPR was also reduced from 0.8505 to 0.8234, compared to the case
where $\alpha=1,\beta=1$. With $\alpha_{slice}=1,\beta_{slice}=3$, (a) the FPR
was reduced from 0.8505 to 0.8277; (b) the lesion-level FP was increased from
1.7202 to 1.7965, compared to the case where $\alpha=1,\beta=1$.
Comparing the second columns of Tables 1 and 2, we find that the lesion-level
and slice-level FNRs were reduced from 0.1941 to 0.1190 and from 0.0097 to 0,
respectively. Comparing the third columns of Tables 1 and 2 shows that the
lesion-level and slice-level FNRs were reduced from 0.1013 to 0.0970 and from
0.0028 to 0.0000, respectively, while (1) the lesion-level FP was reduced from
2.0218 to 1.9493; (2) the slice-level FPR was reduced from 0.8434 to 0.8234.
The improvements in
both FPRs and FNRs, by incorporating and further changing the slice-level
cost, indicate the benefits and the significance of using the slice-level
cost-sensitive classification loss.
### 4.3 Adjusting mis-classification cost at both levels
In this experiment, we study the effect of changing both $L_{cost\\_cls}$ and
$L_{slice\\_cls}$ on the performance by varying $\alpha$ and $\beta$. Table 3
shows the corresponding results with three different settings of $\alpha$ and
$\beta$: (a) $\alpha_{lesion/slice}=1,\beta_{lesion/slice}=1$; (b)
$\alpha_{lesion/slice}=3,\beta_{lesion/slice}=1$; (c)
$\alpha_{lesion/slice}=1,\beta_{lesion/slice}=3$. With $\alpha=3,\beta=1$,
compared to the case where $\alpha=1,\beta=1$, (a) the lesion-level FNR was
reduced from 0.1190 to 0.0734; (b) the slice-level FNR remained at 0. With
$\alpha=1,\beta=3$, compared to the case where $\alpha=1,\beta=1$, (a) the
lesion-level FP was reduced from 1.7202 to 1.0982; (b) the slice-level FPR was
reduced from 0.8505 to 0.6576.
By comparing the corresponding results in the same columns in Table 3 with
those in Table 1 respectively, both the lesion-level and slice-level FNRs were
substantially reduced by incorporating the slice-level classification loss
$L_{slice\\_cls}$ into training. By comparing corresponding results in the
third column in Table 3 with those in Table 2, (1) the lesion-level FNR was
reduced from 0.0970 to 0.0734; (2) the slice-level FNR remained at 0. By
comparing corresponding results in the last column in Table 3 with those in
Table 2, it becomes clear that (1) the lesion-level FP was reduced from 1.7965
to 1.0982; (2) the slice-level FPR was reduced from 0.8277 to 0.6576.
Fig. 4 includes the three ablation examples where the slice-level FNs were
reduced by incorporating the slice-level classification loss $L_{slice\\_cls}$
into training. Three different slices are utilized to demonstrate the
improvements in the three different rows of Fig. 4. Comparing Fig. 4 (b) with
Fig. 4 (a) shows that the slice-level FN was reduced at the cost of one more
lesion-level FP. By comparing Fig. 4 (f) with Fig. 4 (e), we find that both
lesion-level and slice-level FNs were reduced with one more lesion-level FP.
Similarly, comparing Fig. 4 (j) with Fig. 4 (i) shows that both lesion-level
and slice-level FNs were reduced at the cost of one more lesion-level FP.
### 4.4 Results analysis
It should be noted that all the terms in the loss are weighted equally in this
work. The effects of different weighting factors associated with different
sub-tasks will be explored in the future. In addition, a wider range of
$\alpha$ and $\beta$ will be tested to find their optimal values. In this
section, we quantitatively analyse the impact of changing the training-time
cost-sensitive losses, compared with those where the threshold parameter was
adjusted post-training. For brevity, in what follows, we use (1)
$\alpha_{lesion},\beta_{lesion}$ to refer to the case where only the cost-
sensitive loss $L_{cost\\_cls}$ was used in training; (2)
$\alpha_{slice},\beta_{slice}$ to refer to the case where
$\alpha_{lesion}=1,\beta_{lesion}=1$ while the cost-sensitive slice-level loss
$L_{slice\\_cls}$ was also utilized in training, and the weights may vary; (3)
$\alpha,\beta$ to refer to the case where both $L_{cost\\_cls}$ and
$L_{slice\\_cls}$ were used in training, and the weights in the both losses
can change.
We further group the interesting conclusions into positive and negative
results, indicating the resulting impact difference to our specific PCa
application. These, however, may not generalise to other clinical applications
that adopt the same proposed cost-adjusting strategies.
#### 4.4.1 Positive Results
1.
With $\alpha_{lesion}=1$, $\beta_{lesion}=1$, by adjusting the post-training
threshold, the lesion-level FNR was reduced to 0.1131 with the lesion-level FP
being $\bm{5.9758}$. In contrast, (1) with
$\alpha_{lesion}=3,\beta_{lesion}=1$, the lesion-level FNR was 0.1013 while
the FP was $\bm{2.0218}$; (2) with $\alpha_{slice}=1,\beta_{slice}=1$, the
lesion-level FNR was 0.1190 while the FP was $\bm{1.7202}$; (3) with
$\alpha_{slice}=3,\beta_{slice}=1$, the lesion-level FNR was 0.0970 while the
FP was $\bm{1.9493}$. To summarize, by choosing the appropriate loss during
training, a considerably lower FP value can be achieved with comparable or
reduced lesion-level FNs, compared to those from changing the threshold.
2.
With $\alpha_{lesion}=1,\beta_{lesion}=1$, by adjusting the threshold, the
slice-level FNR was reduced to 0.0042 with the FPR being $\bm{0.6972}$. In
contrast, with $\alpha=1,\beta=3$, the slice-level FNR was 0.0014 while the
FPR was $\bm{0.6576}$.
3.
With $\alpha_{slice}=1,\beta_{slice}=1$, by adjusting the threshold, the
lesion-level FNR was reduced to 0.0987 while the FP was $\bm{2.0530}$. In
contrast, with $\alpha_{slice}=3,\beta_{slice}=1$, the lesion-level FNR and FP
were 0.0970 and $\bm{1.9493}$, respectively.
4.
With $\alpha_{slice}=3,\beta_{slice}=1$, compared to the case where
$\alpha_{slice}=1,\beta_{slice}=1$, the slice-level FPR was reduced to 0.8234
while the FNR remained at 0.
5.
With $\alpha=1,\beta=1$, by adjusting the threshold, the lesion-level FNR was
reduced to 0.0734 while the lesion-level FP was $\bm{5.4910}$. In contrast,
with $\alpha=3,\beta=1$, the lesion-level FP was $\bm{2.3827}$ while the FNR
was 0.0734.
6.
With $\alpha=1,\beta=1$, the slice-level FNR was reduced to 0.014 with the
slice-level FPR being $\bm{0.7161}$. In contrast, with $\alpha=1,\beta=3$, the
slice FNR and the slice-level FPR were 0.0014 and $\bm{0.6576}$, respectively.
Comparing results 1 and 5 at the lesion level shows that the added FPs
can be reduced, in order to achieve a lower FNR, by simply adding the
classification loss at the slice level. The above results demonstrate the
significant advantage of incorporating the cost-sensitive classification loss
in reducing the lesion-level and slice-level FNRs.
#### 4.4.2 Negative Results
1.
With $\alpha=1,\beta=1$, by adjusting the threshold, the slice-level FNR was
reduced to 0 with the FPR being 0.9166, which is smaller than 0.9220 where
$\alpha=3,\beta=1$.
2.
With $\alpha_{lesion}=1,\beta_{lesion}=1$, by adjusting the threshold, the
lesion-level FP was reduced to $\bm{0.4774}$ with the FNR being
$\bm{0.3662}$. These two values are smaller than those where
$\alpha_{lesion}=1,\beta_{lesion}=3$, respectively.
3.
At the slice level where $\alpha=1,\beta=1$, the FNR was reduced to 0.014
with the FPR being $\bm{0.7161}$. In contrast, in the case where
$\alpha=1,\beta=3$, FNR was 0.0014 with FPR being $\bm{0.8277}$.
In the training data set, a class imbalance problem was present, with many
more background objects/slices than positive ones. Interestingly, we believe
this is the reason the so-called negative results originated in this
application: a greater weighting towards the majority class(es) would further
reduce the already biased (usually lower) prediction performance on the
minority class(es), although the associated costs may have been correctly
minimised. This phenomenon, relating prediction performance with and without
considering costs, might warrant further investigation.
## 5 Conclusions
In this study, we explore the feasibility of controlling the false
positives/negatives at the lesion or slice level during training, together
with an in-depth analysis of the associated advantages and disadvantages. We
summarise the quantitative results obtained from the clinical patient data set
as follows: 1) Incorporating the proposed cost-sensitive classification losses
at either lesion or slice level (or both) demonstrates the expected
flexibility of controlling the false positive rate (FPR) and false negative
rate (FNR); and 2) Incorporating the proposed cost-aware losses was able to
reduce the FNRs while maintaining or further reducing the FPRs, which can be
particularly useful for real-world clinical applications such as population
screening for prostate cancer.
## Acknowledgements
This work is supported by the Wellcome/EPSRC Centre for Interventional and
Surgical Sciences (203145Z/16/Z). This work was supported by the International
Alliance for Cancer Early Detection, a partnership between Cancer Research UK
[C28070/A30912; C73666/A31378], Canary Center at Stanford University, the
University of Cambridge, OHSU Knight Cancer Institute, University College
London and the University of Manchester.
## References
* [1] Cao, R., Bajgiran, A.M., Mirak, S.A., Shakeri, S., Zhong, X., Enzmann, D., Raman, S., Sung, K.: Joint prostate cancer detection and gleason score prediction in mp-mri via focalnet. IEEE transactions on medical imaging 38(11), 2496–2506 (2019)
* [2] Cao, R., Zhong, X., Shakeri, S., Bajgiran, A.M., Mirak, S.A., Enzmann, D., Raman, S.S., Sung, K.: Prostate cancer detection and segmentation in multi-parametric mri via cnn and conditional random field. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). pp. 1900–1904. IEEE (2019)
* [3] Dai, Z., Carver, E., Liu, C., Lee, J., Feldman, A., Zong, W., Pantelic, M., Elshaikh, M., Wen, N.: Segmentation of the prostatic gland and the intraprostatic lesions on multiparametic magnetic resonance imaging using mask region-based convolutional neural networks. Advances in Radiation Oncology 5(3), 473–481 (2020)
* [4] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
* [5] Li, W., Li, J., Sarma, K.V., Ho, K.C., Shen, S., Knudsen, B.S., Gertych, A., Arnold, C.W.: Path r-cnn for prostate cancer diagnosis and gleason grading of histological images. IEEE transactions on medical imaging 38(4), 945–954 (2018)
* [6] Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., Huisman, H.: Computer-aided detection of prostate cancer in mri. IEEE transactions on medical imaging 33(5), 1083–1092 (2014)
* [7] Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M.: Deep learning for generic object detection: A survey. International journal of computer vision 128(2), 261–318 (2020)
* [8] Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497 (2015)
* [9] Saha, A., Hosseinzadeh, M., Huisman, H.: End-to-end prostate cancer detection in bpmri via 3d cnns: Effect of attention mechanisms, clinical priori and decoupled false positive reduction. arXiv preprint arXiv:2101.03244 (2021)
* [10] Sanford, T., Harmon, S.A., Turkbey, E.B., Kesani, D., Tuncer, S., Madariaga, M., Yang, C., Sackett, J., Mehralivand, S., Yan, P., et al.: Deep-learning-based artificial intelligence for pi-rads classification to assist multiparametric prostate mri interpretation: A development study. Journal of Magnetic Resonance Imaging 52(5), 1499–1507 (2020)
* [11] Schelb, P., Kohl, S., Radtke, J.P., Wiesenfarth, M., Kickingereder, P., Bickelhaupt, S., Kuder, T.A., Stenzinger, A., Hohenfellner, M., Schlemmer, H.P., et al.: Classification of cancer at prostate mri: deep learning versus clinical pi-rads assessment. Radiology 293(3), 607–617 (2019)
* [12] Siegel, R.L., Miller, K.D., Jemal, A.: Cancer statistics, 2020. CA: a cancer journal for clinicians 70(1), 7–30 (2020)
* [13] Turkbey, B., Choyke, P.L.: Pirads 2.0: what is new? Diagnostic and Interventional Radiology 21(5), 382 (2015)
* [14] Wildeboer, R.R., van Sloun, R.J., Wijkstra, H., Mischi, M.: Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Computer methods and programs in biomedicine 189, 105316 (2020)
* [15] Yu, X., Lou, B., Shi, B., Winkel, D., Arrahmane, N., Diallo, M., Meng, T., von Busch, H., Grimm, R., Kiefer, B., et al.: False positive reduction using multiscale contextual features for prostate cancer detection in multi-parametric mri scans. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). pp. 1355–1359. IEEE (2020)
* [16] Yu, X., Lou, B., Zhang, D., Winkel, D., Arrahmane, N., Diallo, M., Meng, T., von Busch, H., Grimm, R., Kiefer, B., et al.: Deep attentive panoptic model for prostate cancer detection using biparametric mri scans. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 594–604. Springer (2020)
# Stability of local tip pool sizes
Sebastian Müller (corresponding author), Aix Marseille Université, CNRS,
Centrale Marseille, I2M - UMR 7373, 13453 Marseille, France & IOTA Foundation,
10405 Berlin, Germany, <EMAIL_ADDRESS>
Isabel Amigo <EMAIL_ADDRESS>, Alexandre Reiffers-Masson
<EMAIL_ADDRESS>and Santiago Ruano-Rincón <EMAIL_ADDRESS>, IMT
Atlantique, LabSTICC, UMR CNRS 6285, 29238 Brest, France
###### Abstract.
In distributed ledger technologies (DLTs) with a directed acyclic graph (DAG)
data structure, a block-issuing node can decide where to append new blocks
and, consequently, how the DAG grows. This DAG data structure is typically
decomposed into two pools of blocks, dependent on whether another block
already references them. The unreferenced blocks are called the tips. Due to
network delay, nodes can perceive the set of tips differently, giving rise to
local tip pools.
We present a new mathematical model to analyse the stability of the different
local perceptions of the tip pools and allow heterogeneous and random network
delay in the underlying peer-to-peer communication layer. Under natural
assumptions, we prove that the number of tips is ergodic, converges to a
stationary distribution, and provide quantitative bounds on the tip pool
sizes. We conclude our study with agent-based simulations to illustrate the
convergence of the tip pool sizes and the pool sizes’ dependence on the
communication delay and degree of centralization.
###### Key words and phrases:
distributed queueing system, DAG-based distributed ledgers, stochastic
process, stationarity, ergodicity
## 1. Introduction
A major challenge in distributed systems is the _relativity of simultaneity_
and the fact that whether two spatially separated events occur simultaneously
or in a particular order is not absolute but depends on the local perceptions
of the participants. To fight this phenomenon, classical approaches in
distributed ledger technologies (DLTs) such as Bitcoin [27] typically use a
totally ordered data structure, a blockchain, to find consensus on the order
of the events. However, this design creates a bottleneck, e.g. a miner or
validator, through which each transaction must pass. And even in this
solution, due to network delay, block creation can happen concurrently in
different parts of the network, leading to bifurcations of the chain that must
be resolved. This resolution is typically made by the longest–chain rule [27],
or some variant of the heaviest sub-tree [39].
In blockchain-like DLTs, the system’s throughput is artificially limited to
guarantee the system’s security so that each block propagates to all the
participants before the next block is created. The blocks are created by
miners or validators, and the blockchain can be seen as a three-step process.
In the first step, a client sends a transaction to the block producers, then a
particular block producer, also called the “leader”, proposes a block
containing a batch of transactions, and in the last step, validators validate
the block.
A more novel approach that addresses the limited throughput problem and the
bifurcation of the chain problem of distributed ledgers uses a directed
acyclic graph (DAG) instead of a chain to encode the dependencies of the
blocks. For instance, protocols like SPECTRE [37], Byteball [5], Algorand
[13], PHANTOM [38], Prism [3], Aleph [14], Narwhal [8], and IOTA [34]) were
proposed to improve the performance of distributed ledgers. The consensus
mechanism and the writing access in a DAG-based system can be conceptually
different from the one in a linear blockchain system, and the transaction
throughput is potentially no longer limited. For instance, in DAG-based
protocols like Aleph [14] and Narwhal [8], only a predefined set of nodes can
add new blocks to the ledger, while in IOTA [26], every participant has
writing access.
We consider the more general model where every participant can add blocks to
the data structure, referring to at least two previous blocks. This property
reduces the update of the ledger to two steps: one node proposes a block to
the ledger and waits for the other nodes to validate it, i.e., by adding a new
block referencing them. This collaborative design in which all participants
play the same role promises to mitigate (or even solve) several problems of
the blockchain design, e.g., mining races [9], centralisation [25], miner
extractable value [7], and negative externalities [36]. However, the
parallelism in adding new blocks to the ledger implies that local perceptions
of the nodes may differ much more than in the traditional blockchain design.
In this paper, we give a mathematical model describing the evolution of the
local number of unreferenced blocks, or tips, in a distributed ledger and
prove their stability. More precisely, we prove the stationarity and
ergodicity of the number of tips. Except for [20], this model is new, as
previous research neglected the difference between local perceptions due to
heterogeneous network delays. In [20], a similar, but much more restrictive,
model has been considered with deterministic delay, deterministic arrival of
blocks and discrete time. This paper considers a continuous time model with
random block creation and random delays. In the next section, we give an
informal description of the model.
### 1.1. Informal description
We consider a network of nodes that manage a distributed database. In
cryptocurrency applications, this database is called a ledger, but the model
could potentially be applied to other use cases of collaborative databases.
The data consists of blocks that contain atomic data in the sense that either
the entire block is added to the database or all the information in the block
is discarded. The distributed ledger is assumed to be built using two
fundamental mechanisms:
Sharing mechanism: Each node aims to create new blocks and inform the other
nodes about these blocks. The information about the blocks is passed via a
gossip protocol on an underlying communication layer. Specifically, each node
is only directly connected to a subset of the other nodes. Once a node has
created a block and added it to its local database, it broadcasts it to a
random subset of its neighbours. As soon as a node receives a block that it
has not yet received, it adds this block to its database and forwards it to a
random subset of its neighbours.
Reference mechanism: The blocks in the database (which we also refer to as
vertices) are connected to each other by references. The rule is that each
newly created block must refer to up to $k\geq 2$ already existing blocks. The
meaning of these references can depend on the specific use case of the
protocol. For example, in cryptocurrency applications, a reference of a block
means that the node issuing the referencing transaction verifies the previous
blocks. Verification includes semantic and syntactical checks of the block
content. In addition, referencing a block can be used for validation and
consensus building; see IOTA, [33], and IOTA 2.0, [26]. In distributed-queuing
theory, blocks can correspond to different jobs. Referencing can then imply
that the issuing node handles or will handle the jobs in the referenced
blocks. The way nodes choose which previous blocks to reference has an impact
on the performance of the system. In particular, already referenced blocks
should not be referenced again. Instead, the focus should be on referencing
non-referenced blocks, which we call tips.
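To make the selection rule concrete, here is a minimal Python sketch of uniform tip selection; the function name and data structures are our own illustration, and the uniform-with-replacement rule is the one formalized in Section 2.3.

```python
import random

def select_references(local_tip_pool, k=2):
    """Sample k references uniformly, with replacement, from the
    issuing node's local tip pool (the rule formalized in Section 2.3)."""
    tips = list(local_tip_pool)
    return [random.choice(tips) for _ in range(k)]

# Example: a node whose local tip pool currently holds three tips.
print(select_references({"b1", "b2", "b3"}, k=2))
```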
The delay between nodes has a strong impact on the performance of the
reference mechanism. Indeed, it is
instructive to consider the extreme case where all nodes have the same
perception of the database. This can be the case when the creation of a block
is instantaneous, i.e., there is no delay between selecting the references of
the block and sending it to the neighbours, and all neighbours receive the new
blocks without delay. Suppose we start the database with one block (the
genesis) and assume that no blocks can be created simultaneously. In that
case, there will always be only one tip (non-referenced block), as each block
is referenced by precisely one other block. However, this situation changes
drastically if there is a delay between the selection of references and the
time when all nodes have received the new block. In this case, the blocks are
created concurrently, and the blocks can be referenced by more than one other
block. Thus, a priori, it is no longer clear whether the system is in a
stationary regime or the number of tips explodes. In this paper, we propose a
mathematical procedure to model the different local tip pools and prove the
stability of their sizes under standard synchrony assumptions.
### 1.2. Contributions
This paper has three major contributions:
1. (1)
We formalize the above description of the (distributed) protocol using an
appropriate stochastic process. This is the first continuous-time model for
local perceptions of a DAG-based distributed ledger together with the
communication on the underlying peer-to-peer network.
2. (2)
Our main result, Theorem 3.10, is a _formal proof_ of the stability of the
local tip pool sizes. The proof relies on an asymptotic drift analysis,
Theorem 3.1, which, together with a regeneration structure, allows us to obtain
qualitative results on the stationarity and ergodicity of the local tip pools.
3. (3)
Finally, through Monte-Carlo simulations, we provide more quantitative results
highlighting the influence of the protocol's environment on the differences in
the local perceptions.
### 1.3. Related work
To the best of our knowledge, S. Popov introduced the first mathematical model
on DAG-based ledgers [33]. Popov’s analysis is based on a global and perfect
observation of the existing blocks. The communication delay is assumed to be
homogeneous, and newly created blocks can be referenced only after a given
constant network delay. The author heuristically obtains a formula for the
expected number of tips assuming that the tip pool size is stationary.
Under the above assumption of a global observer, many works have extended the
work of Popov, studying non-Poisson arrival rates [23], fluid-limit
approximations of the evolution of the number of tips [11, 12],
discrete-time models [4], and simulation-based settings [21, 28]. One of the main
drawbacks of all these works is that they do not consider heterogeneous delays
between nodes.
Three recent works have introduced different types of heterogeneous delays and
studied the evolution of the number of tips under such conditions. First, a
simulator of DAG-based distributed ledgers with delays in the transmission of
information between nodes has been proposed in [41]. From a more theoretical
perspective, the authors in [31] have studied the impact of heterogeneous
delays coming from different processing times of the blocks and not due to
propagation of information delay. They also assume the existence of a central
node which maintains a stable version of the ledger and have not considered
the different views of each node in the network. Our work builds on the
model proposed in [20]. In that paper, the authors model the evolution of the
number of tips using coupled stochastic processes in discrete time. However,
[20] makes strong assumptions: the delay between two nodes is deterministic
and constant over time, and the number of blocks issued by each node at each
discrete time step is constant. Under these conditions, they prove the
stability of the stochastic process using supermartingale arguments and drift
analysis.
## 2\. Notations and setting
Variable | Description
---|---
${\mathcal{N}}:=\\{1,\ldots,N\\}$ | set of nodes
$\lambda_{i}\in\mathbb{R}_{+}$ | block issuance rate of node $i$
$\lambda:=\sum_{i=1}^{N}\lambda_{i}$ | total block issuance rate
${\delta}_{j}^{({b})}(i)$ | random variable describing
| latency from node $j$ to node $i$ for block ${b}$
$\Delta_{j}(i)$ | latency distribution from node $j$ to node $i$
$\Delta\in\mathbb{R}_{+}$ | maximal latency between two nodes
$k$ | number of blocks to be referenced by a new block
$\mathrm{pool}_{n}^{(i)}$ | tip pool of node $i$ at time $t_{n}$
$\mathrm{pool}_{n}^{(c)}$ | common tip pool at time $t_{n}$
$\mathrm{pool}_{n}^{(o)}$ | tips of the perfect observer at time $t_{n}$
$X_{n}^{(i)}:=\lvert\mathrm{pool}_{n}^{(i)}\rvert$ | size of the tip pool of node $i$
$X_{n}^{(c)}:=\lvert\mathrm{pool}_{n}^{(c)}\rvert$ | size of the common tip pool
$X_{n}^{(o)}:=\lvert\mathrm{pool}_{n}^{(o)}\rvert$ | size of the tip pool of the perfect observer
Table 1. Table of notations
### 2.1. Peer-to-peer network
There are several factors that should be considered when modelling a peer-to-
peer (P2P) network, including the number and distribution of participants, the
speed and capacity of the network connections, and the rules governing the
creation and exchange of data.
We consider a peer-to-peer network with $N$ nodes and denote the set of nodes
by ${\mathcal{N}}:=\\{1,\ldots,N\\}$. These nodes can create (or issue) and
exchange blocks of data without the need of a central authority.
Nodes communicate their blocks on the P2P network, leading to communication
delays and, thus, different local perceptions of the system’s state. The
network latency is the time it takes for a block to travel from the source
node to the destination node and can be affected by a number of factors,
including the distance between the two nodes, the speed of the network
connection, and the amount of traffic on the network at the time the message
is sent. Network latency is an important factor in the performance of a
communication network, as it can affect the speed at which information is
transmitted and the overall reliability of the network.
Thus, latency plays a crucial role in our model. We allow these delays to be
random, asymmetric, and different for different nodes. More precisely, the
delay from a node $j$ to a node $i$, for a given block ${b}$, is described
by a random variable ${\delta}_{j}^{({b})}(i)$ with values in $\mathbb{R}_{+}$.
These delays are supposed to be i.i.d. in the following sense: for every block
${b}$ issued by a node $i$ the delay ${\delta}_{i}^{({b})}(j)$ is
independently distributed as $\Delta_{i}(j)$.
The nature of the random distribution $\Delta_{i}(j)$ is important in the
context of distributed systems. In a fully synchronous system, the
distributions $\Delta_{i}(j)$ are almost surely bounded, and the bound is
known and used in the protocol. In a fully asynchronous system, there is no
fixed upper bound on the delays; the distributions $\Delta_{i}(j)$ have
infinite support. As a result, a fully asynchronous system relies less on
precise timing and can tolerate a higher degree of latency or delay. This can
make it more resilient and less prone to failure, but it can also make it less
efficient for applications that require low latency and high reliability.
The concept of partial synchrony in a distributed system refers to a system
that falls between a fully synchronous system and a fully asynchronous system;
we refer to [10] for more details.
###### Assumption 2.1 (Partial Synchronicity).
There exists some $\Delta<\infty$ such that
${\mathbb{P}}(\Delta_{i}(j)\leq\Delta)=1,\forall i,j\in{\mathcal{N}}.$
The exact value of $\Delta$ is unknown, and its value is not used in the
protocol design.
This assumption means that there is a finite (but unknown) time for a node to
receive information from another node. Usually, distributed ledgers use
P2P networks as a means to exchange information.
Nodes communicate directly with each other, rather than through a central
server, to exchange information; in our situation, this information consists
of blocks. One approach to exchanging information in a P2P network is through
a technique called “gossiping”. In gossiping, a node sends a piece of
information to a (random) subset of its neighbours, and each of those
neighbours then sends the information to a subset of their own neighbours, and
so on. This can allow for the rapid dissemination of information throughout
the network, even if some nodes are offline or unable to communicate directly
with each other, ensuring a finite time to transmit information between two
nodes.
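As a toy illustration of gossiping (our own sketch; the fanout parameter and the static neighbour map are simplifications of a real overlay):

```python
import random

def gossip(block_id, source, neighbours, fanout=3):
    """Propagate a block: every node that sees it for the first time
    forwards it to a random subset of its neighbours."""
    informed = {source}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            peers = neighbours[node]
            for peer in random.sample(peers, min(fanout, len(peers))):
                if peer not in informed:
                    informed.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return informed  # nodes that received the block in this run

# Tiny ring of five nodes; dissemination is probabilistic, so full
# coverage is likely but not guaranteed for small fanouts.
nbrs = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(gossip("b1", source=0, neighbours=nbrs))
```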
### 2.2. Block issuance
The blocks are created or issued by the participating nodes. We model this
issuance by a Poisson point process. More precisely, each node
$i\in{\mathcal{N}}$ issues blocks according to a given Poisson point process
of intensity $\lambda_{i}$. In other words, the intervals between issued
blocks are distributed as $\mathrm{Exp}(\lambda_{i})$, where the parameter
$\lambda_{i}$ corresponds to the issuance rate of node $i$. We define
$\lambda:=\sum_{i=1}^{N}\lambda_{i}$ to be the total block issuance rate.
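A minimal sketch of this issuance model (function and variable names are ours): the superposition of the per-node Poisson processes is itself a Poisson process of rate $\lambda$, and each event is attributed to node $i$ with probability $\lambda_{i}/\lambda$, matching Eq. (2) below.

```python
import random

def issuance_events(rates, horizon):
    """Block creation times on [0, horizon]: inter-arrival times are
    Exp(sum(rates)), and the n-th event belongs to node i with
    probability rates[i] / sum(rates)."""
    total = sum(rates)
    t, events = 0.0, []
    while True:
        t += random.expovariate(total)  # Exp(lambda) inter-arrival time
        if t > horizon:
            return events
        node = random.choices(range(len(rates)), weights=rates)[0]
        events.append((t, node))

# Example: three nodes with rates 2, 1, 1 blocks per second.
print(issuance_events([2.0, 1.0, 1.0], horizon=1.0))
```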
We define a marked point process $\xi=(t_{n},\kappa_{n})_{n\in{\mathbb{N}}}$
on $\mathbb{R}^{+}$ that will describe the time of the creation of the blocks
in the network. The times $t_{n}$ in the marked process $\xi$ are given by a
Poisson point process on the line and the marks $\kappa_{n}$ consist of the
following
$\kappa_{n}=(\mathrm{blockID}_{n},\mathrm{Ref}_{n},\mathrm{nodeID}_{n},\mathrm{delay}_{n}),$
(1)
where:
* •
$\mathrm{blockID}_{n}$ is the id of the $n$-th block;
* •
$\mathrm{Ref}_{n}$ is the list of references of the $n$th block;
* •
$\mathrm{nodeID}_{n}$ is the id of the node who created the $n$th block;
* •
$\mathrm{delay}_{n}$ is a (random) vector of network delays. It describes the
times it takes for the other nodes to receive the $n$th block.
In other words, at time $t_{n}$ the $n$th block with ID $\mathrm{blockID}_{n}$
is created by node $\mathrm{nodeID}_{n}$. This block refers to
$\mathrm{Ref}_{n}$ previous blocks and is delayed by the random vector
$\mathrm{delay}_{n}$.
We describe the construction of these marks in more detail. The variable
$\mathrm{blockID}_{n}$ identifies the issued block and is uniformly
distributed in $[0,1]$. This is a usual assumption that is justified by the
fact that in practice the block ids are deduced from cryptographic hash
functions. The $\mathrm{nodeID}_{n}$ describes the node ID of the issuing
node; it is independent (of the rest of the process) and identically
distributed on ${\mathcal{N}}$. More precisely, we have that
${\mathbb{P}}(\mathrm{nodeID}_{n}=i)=\frac{\lambda_{i}}{\lambda},\forall
i\in{\mathcal{N}}.$ (2)
Every new block references $k$ previous blocks; they are chosen uniformly
(with replacement) among all blocks that have not yet been referenced, a.k.a.
tips. More precisely, once a node $i$ issues a new block it references $k$
blocks (sampled uniformly with replacement) from its local tip pool. The
references of the $n$th block are written as
$\mathrm{Ref}_{n}=(\mathrm{ref}_{1},\ldots,\mathrm{ref}_{k})$, where each
$\mathrm{ref}_{i}$ is the $\mathrm{blockID}$ of a previous block. The references are not
independent of the previous history of the process. More precisely, we denote
by $(\Omega,{\mathcal{F}},{\mathbb{P}})$ the underlying probability space and let
${\mathcal{F}}_{n}=\sigma((t_{1},\kappa_{1}),\ldots,(t_{n},\kappa_{n}))$ be
the filtration corresponding to the marked Poisson process. Then, the
“$\mathrm{Ref}_{n}$-mark” is not independent (in contrast to the other marks)
of ${\mathcal{F}}_{n-1}$. In the next section, we give more details on the tip
selection and the different local perceptions of the nodes.
The variable $\mathrm{delay}_{n}$ is defined as
$\mathrm{delay}_{n}=({\delta}_{\mathrm{nodeID}_{n}}^{(\mathrm{blockID}_{n})}(j))_{j\in{\mathcal{N}}}$,
describes the delay between $t_{n}$ (the issuance time of the block) and the
arrival time of the block at each of the other nodes. It is therefore a random
vector and the delays are i.i.d. given $\mathrm{nodeID}_{n}$ and supposed to
satisfy Assumption 2.1.
### 2.3. Tip selection and dynamics
In this section, we describe the different (local) perceptions of the nodes;
namely of the issued blocks known by the node and whether these blocks are
already referenced. For our purposes, it is enough to observe the process only
at (block issuance) times $t_{1},t_{2},\ldots.$ The set of blocks created up
to time $t_{n}$ is defined by
$\mathrm{Blocks}_{n}:=\bigcup_{k=1}^{n}\mathrm{blockID}_{k}.$ (3)
The set of blocks created between $t_{\ell}$ and $t_{m}$ is denoted by
$\mathrm{Blocks}_{\ell,m}:=\bigcup_{k=\ell}^{m}\mathrm{blockID}_{k}.$ (4)
Due to the communication delay, these blocks are not immediately visible to
all nodes. For every node $i$, we define the set of all visible blocks at time
$t_{n}$ as
$\mathrm{visBlocks}_{n}(i):=\bigcup_{k:t_{k}+\mathrm{delay}_{k}(i)<t_{n}}\mathrm{blockID}_{k}$
(5)
and the set of all visible references as
$\mathrm{visRef}_{n}(i):=\bigcup_{k:t_{k}+\mathrm{delay}_{k}(i)<t_{n}}\mathrm{Ref}_{k},$
(6)
where we treat $\mathrm{Ref}_{k}$ not as a vector but as a set.
###### Definition 2.2 (Different tip pools).
The _local tip pool_ from node $i\in{\mathcal{N}}$ at time $t_{n}$ is defined
as
$\mathrm{pool}_{n}(i)=\mathrm{visBlocks}_{n}(i)\setminus\mathrm{visRef}_{n}(i).$
(7)
The _common tip pool_ at time $t_{n}$ is defined as
$\mathrm{pool}_{n}^{(c)}:=\bigcap_{i\in{\mathcal{N}}}\mathrm{pool}_{n}(i).$
(8)
The _(perfect) observer tip pool_ at time $t_{n}$ is defined as
$\mathrm{pool}_{n}^{(o)}:=\mathrm{Blocks}_{n}\setminus\bigcup_{k=1}^{n}\mathrm{Ref}_{k}.$
(9)
###### Definition 2.3 (Tip pool sizes).
We denote by $X_{n}^{(i)}:=\lvert\mathrm{pool}_{n}(i)\rvert$ the number of
tips at node $i$ at time $t_{n}$. We also define the common tip pool size
$X_{n}^{(c)}=\lvert\mathrm{pool}_{n}^{(c)}\rvert$. We denote by
$X_{n}^{(o)}=\lvert\mathrm{pool}_{n}^{(o)}\rvert$ the number of tips of the
perfect observer.
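To illustrate Definitions 2.2 and 2.3, here is a small sketch that computes local and observer tip pools from a list of issuance events; the event layout `(t_n, block_id, refs, delays)` is our own flattening of the marks in (1).

```python
def local_tip_pool(events, i, t):
    """pool_t(i): blocks visible to node i at time t minus the blocks
    that are visibly referenced, following Eqs. (5)-(7).  delays[i] is
    the latency of the block towards node i (zero for the issuer)."""
    visible, referenced = set(), set()
    for t_n, block_id, refs, delays in events:
        if t_n + delays[i] < t:
            visible.add(block_id)
            referenced.update(refs)
    return visible - referenced

def observer_tip_pool(events, t):
    """Perfect-observer pool, Eq. (9): all delays are zero."""
    created, referenced = set(), set()
    for t_n, block_id, refs, _ in events:
        if t_n <= t:
            created.add(block_id)
            referenced.update(refs)
    return created - referenced
```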
The process starts at time $n=0$ with one tip called the genesis. More
precisely, we set
$\mathrm{pool}_{0}^{(o)}=\mathrm{pool}_{0}^{(c)}=\mathrm{pool}_{0}^{(i)}~{}\forall
i\in{\mathcal{N}},X_{0}^{(o)}=1.$ (10)
The different tip pool sizes can be defined for all positive real times and
can be seen as continuous time stochastic processes. Due to the delay, the
local and common tip pool sizes may even change at times different from the ones
given by the point process. However, since nodes do issue blocks only at times
$t_{1},t_{2},\ldots$ we only observe the processes at these times.
Since we assume ${\delta}_{i}^{({b})}(i)=0$ for every block ${b}$, we have that $X_{n}^{(c)}\leq X_{n}^{(o)}$.
To see this, note that the observer has zero delays and perceives the blocks
right after their creation. Hence, once a node takes tips out of its local tip
pool, these are immediately deleted from the observer tip pool and the newly
issued block is added to the observer tip pool. The newly referenced blocks
are also removed, immediately, from the common tip pool, but the new block is
added to the common tip pool only after all nodes receive it.
A crucial observation is that we also have a lower estimate conditioned on the
number of recently issued blocks
$L_{n}:=\lvert\mathrm{Blocks}_{t_{n}-\Delta,t_{n}}\rvert$, where, with a slight
abuse of the notation in (4), $\mathrm{Blocks}_{t_{n}-\Delta,t_{n}}$ denotes the
set of blocks created during the time interval $(t_{n}-\Delta,t_{n}]$. $L_{n}$
can also be interpreted as an upper bound on the number of blocks that may still
be invisible to some node. This definition of $L_{n}$ implies that the tips
selected at time step $n$ by any node depend only on the tips known by the
observer at step $n-L_{n}$ and on the blocks issued between steps $n-L_{n}$
and $n$.
###### Lemma 2.4.
For all $L\in{\mathbb{N}}$ we have that
${\mathbb{P}}\left(X_{n}^{(c)}\geq
X_{n}^{(o)}-(k+1)L|L_{n}=L\right)=1,\quad\forall n\in{\mathbb{N}},$ (11)
and
${\mathbb{P}}\left(X_{n}^{(i)}\leq X_{n}^{(o)}+kL~{}\forall
i\in{\mathcal{N}}|L_{n}=L\right)=1,\quad\forall n\in{\mathbb{N}}.$ (12)
###### Proof.
We have $X_{n}^{(o)}\leq X_{n-L}^{(o)}+L$ as, in the worst case, none of the
$L$ recently added blocks removed a tip from the tip pool. Assumption 2.1
(Partial Synchronicity assumption) implies that all tips from time $n-L$ are
perceived/known by any other node at time $n$. During this time, at most $kL$
tips could have been removed from their local tip pool. Hence, in the worst
case $X_{n}^{(c)}$ equals $X_{n-L}^{(o)}-kL$. Therefore, almost surely,
given $L_{n}=L$, we obtain
$X_{n}^{(c)}\geq X_{n-L}^{(o)}-kL\geq X_{n}^{(o)}-(k+1)L.$ (13)
For the second claim, it suffices to observe that all blocks that have been
tips in the observer tip pool at time $n-L$ are visible to every node $i$ at
time $n$, and at most $L$ new tips could have been added to the local tip pool.
Hence,
$X_{n}^{(i)}\leq X_{n-L}^{(o)}+L.$ (14)
At every block creation, at most $(k-1)$ tips can be removed from the observer
tip pool since every new block becomes a tip. Hence,
$X_{n-L}^{(o)}-(k-1)L\leq X_{n}^{(o)},$ (15)
and the second claim follows. ∎
## 3\. Stability of tip pool sizes
We start with our central result on the asymptotic negative drift of the
observer tip pool size. This first result shows that when $X_{n}^{(o)}=x$
is large, our stochastic process becomes a supermartingale. Therefore, we can
use tools from martingale theory to obtain upper bounds on the
distribution tail of $X_{n}^{(o)}$.
###### Theorem 3.1 (Asymptotic negative drift).
There exist $K\in{\mathbb{N}}$ and $\varepsilon>0$ such that
${\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x\right]\leq-\varepsilon,\quad\forall
x\geq K.$ (16)
###### Proof.
Recall that $L_{n}=\lvert\mathrm{Blocks}_{t_{n}-\Delta,t_{n}}\rvert$ and write
$\displaystyle{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x\right]=$
$\displaystyle\sum_{L=0}^{\infty}{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]{\mathbb{P}}\left(L_{n}=L\right)$
(17) $\displaystyle=0\cdot{\mathbb{P}}\left(L_{n}=0\right)$ (18)
$\displaystyle+\sum_{L=1}^{\tilde{L}}{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]{\mathbb{P}}\left(L_{n}=L\right)$
(19)
$\displaystyle+\sum_{L=\tilde{L}+1}^{\infty}{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]{\mathbb{P}}\left(L_{n}=L\right)$
(20)
with $\tilde{L}$ such that ${\mathbb{P}}(0<L_{n}\leq\tilde{L})\geq
2{\mathbb{P}}(L_{n}>\tilde{L})$ for all $n\in{\mathbb{N}}$. Note that the existence of
such a constant $\tilde{L}$ follows from the stationarity of $L_{n}$.
The last summand is bounded above by
$\sum_{L=\tilde{L}+1}^{\infty}{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]{\mathbb{P}}\left(L_{n}=L\right)\leq
1\cdot{\mathbb{P}}(L_{n}>\tilde{L}),$ (21)
since, in the worst case, a new tip is added to the observer tip pool.
To control the second summand, we suppose that $K>2(k+1)\tilde{L}$. Lemma 2.4
implies that there are at least $X_{n}^{(o)}-(k+1)L$ common tips at time $n$
for all $L\leq\tilde{L}$ and $X_{n}^{(o)}\geq K$. The next block, issued at
time $t_{n+1}$, is created by some node $i$; the probability that this node
chooses at least two tips from the common tip pool is therefore at least
$\displaystyle\frac{X_{n}^{(c)}}{X_{n}^{(i)}}\cdot\frac{X_{n}^{(c)}-1}{X_{n}^{(i)}}$
$\displaystyle\geq\frac{X_{n}^{(o)}-(k+1)L}{X_{n}^{(i)}}\cdot\frac{X_{n}^{(o)}-(k+1)L-1}{X_{n}^{(i)}}$
(22)
$\displaystyle\geq\frac{X_{n}^{(o)}-(k+1)L}{X_{n}^{(o)}+L}\cdot\frac{X_{n}^{(o)}-(k+1)L-1}{X_{n}^{(o)}+L}$
(23) $\displaystyle\geq\frac{K-(k+1)L}{K}\cdot\frac{K-(k+1)L-1}{K}=:p(K,L),$
(24)
where we use the second statement of Lemma 2.4 in the second estimate and
$X_{n}^{(o)}\geq K$ for the last bound. We obtain
$\displaystyle\sum_{L=1}^{\tilde{L}}$
$\displaystyle{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}|X_{n}^{(o)}=x;L_{n}=L\right]{\mathbb{P}}\left(L_{n}=L\right)$
(25) $\displaystyle\leq\sum_{L=1}^{\tilde{L}}\left(-1\cdot
p(K,L)+1\cdot(1-p(K,L))\right){\mathbb{P}}(L_{n}=L)$ (26)
$\displaystyle=\sum_{L=1}^{\tilde{L}}\left(1-2p(K,L)\right){\mathbb{P}}(L_{n}=L)$
(27)
$\displaystyle\xrightarrow[K\to\infty]{}-{\mathbb{P}}(0<L_{n}\leq\tilde{L}).$
(28)
Finally, we obtain that
$\displaystyle{\mathbb{E}}\left[X_{n+1}^{(o)}-X_{n}^{(o)}\big{|}X_{n}^{(o)}=x\right]$
$\displaystyle\leq
0-{\mathbb{P}}(0<L_{n}\leq\tilde{L})+\tilde{\varepsilon}+{\mathbb{P}}(L_{n}>\tilde{L})$
(29)
$\displaystyle\leq-\frac{1}{2}{\mathbb{P}}(0<L_{n}\leq\tilde{L})+\tilde{\varepsilon},$
(30)
with $\tilde{\varepsilon}<\frac{1}{2}{\mathbb{P}}(0<L_{n}\leq\tilde{L})$ and
$K$ sufficiently large. This yields Inequality (16) with
$\varepsilon=\frac{1}{2}{\mathbb{P}}(0<L_{n}\leq\tilde{L})-\tilde{\varepsilon}>0$.
∎
### 3.1. Bounds on hitting-times and tails
The last theorem has several important and well-known consequences, such as
ergodicity and concentration-type results. Our first focus is on general
bounds on hitting times and tails. The drift condition (16) suggests that
$X_{n}^{(o)}$ should eventually cross below $K$ and not lie too far above $K$
most of the time. In the following, we give quantitative results of this
intuition. These results are essentially straightforward implications of (16)
together with the fact that the increments of $X_{n}^{(o)}$ are bounded. In
this work, we do not strive for optimal results but prefer to gather classical
results that follow from [15] and define the necessary terms to apply the
results.
Let us first observe that the increments of $X_{n}^{(o)}$ are bounded; the
number of tips is increased at most by $k$ and decreased at most by $k-1$ at
each time step. Let $Z$ be a random variable that stochastically dominates the
increments $|X_{n+1}^{(o)}-X_{n}^{(o)}|$ for all $n$. In our case, we use
$Z=k$, which is deterministic and not random.
For $\lambda>0$ (a free parameter, not to be confused with the total issuance rate) define
$c:=c(\lambda):=\sum_{j=2}^{\infty}\frac{\lambda^{j-2}}{j!}{\mathbb{E}}[Z^{j}]=\frac{e^{k\lambda}-(1+\lambda
k)}{\lambda^{2}},$ (31)
and
$D:={\mathbb{E}}[e^{\lambda Z}]=e^{\lambda k}.$ (32)
As suggested in [15] we choose
$0<\eta:=\eta(\lambda,\varepsilon)<\min\left\\{\lambda,\frac{\varepsilon}{2c}\right\\}\mbox{
and }\rho:=\rho(\lambda,\varepsilon):=1-\frac{1}{2}\eta\varepsilon\in(0,1),$
(33)
where $\varepsilon$ is the constant in Inequality (16). We define
$\tau_{K,m}:=\min\\{n\geq 0:X_{m+n}^{(o)}\leq K\\}$ (34)
the return time after $m$ to the set $\\{1,\ldots,K\\}.$ Note that here $K$ is
from Inequality (16). In our notation, we rewrite [15, Theorem 2.3].
###### Theorem 3.2 (Hitting-time and tail bounds).
Under Assumption 2.1 we have that
$\displaystyle{\mathbb{E}}[e^{\eta X_{m+n}^{(o)}}|{\mathcal{F}}_{m}]$
$\displaystyle\leq\rho^{n}e^{\eta
X_{m}^{(o)}}+\frac{1-\rho^{n}}{1-\rho}De^{\eta K},$ (35)
$\displaystyle{\mathbb{E}}[s^{\tau_{K,m}}|{\mathcal{F}}_{m}]$
$\displaystyle\leq e^{\eta(X_{m}^{(o)}-K)}\frac{s-1}{1-\rho s}+1,\quad
1<s<\rho^{-1},$ (36) $\displaystyle{\mathbb{P}}(X_{m+n}^{(o)}\geq
M|{\mathcal{F}}_{m})$
$\displaystyle\leq\rho^{n}e^{\eta(X_{m}^{(o)}-M)}+\frac{1-\rho^{n}}{1-\rho}De^{\eta(K-M)},$
(37) $\displaystyle{\mathbb{P}}(\tau_{K,m}>n|{\mathcal{F}}_{m})$
$\displaystyle\leq e^{\eta(X_{m}^{(o)}-K)}\rho^{n}.$ (38)
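For readers who want to evaluate these bounds numerically, here is a sketch of the constants (31)–(33) and of the tail bound (37) for the deterministic increment bound $Z=k$; the particular choice of $\eta$ below is one admissible value, not prescribed by the theorem, and the printed number is purely illustrative.

```python
import math

def hajek_constants(k, eps, lam=0.1):
    """Constants of Eqs. (31)-(33) for Z = k (deterministic)."""
    c = (math.exp(k * lam) - 1 - k * lam) / lam**2  # Eq. (31)
    D = math.exp(lam * k)                           # Eq. (32)
    eta = 0.5 * min(lam, eps / (2 * c))             # any value below the min works
    rho = 1 - 0.5 * eta * eps                       # Eq. (33)
    return c, D, eta, rho

def tail_bound(x_m, K, M, n, k, eps, lam=0.1):
    """Right-hand side of the tail bound (37), given X_m^(o) = x_m."""
    _, D, eta, rho = hajek_constants(k, eps, lam)
    return (rho**n * math.exp(eta * (x_m - M))
            + (1 - rho**n) / (1 - rho) * D * math.exp(eta * (K - M)))

print(tail_bound(x_m=10, K=50, M=200, n=1000, k=8, eps=0.1))
```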
###### Remark 3.3.
The case ${\mathcal{F}}_{m}={\mathcal{F}}_{0}$ gives the bounds for the
original model starting with a “genesis block” at time $n=0$. A crucial fact,
however, is that the bounds are uniform in $m$, indicating a memoryless
property of the process. This will be used to construct a regeneration
structure in Section 3.2.
###### Remark 3.4.
Similar bounds as in Theorem 3.2 are also valid for the local tip pool sizes
$X_{n}^{(i)}.$ This is due to Lemma 2.4 and the fact that the random variables
$L_{n}$ have exponential moments. This holds for all concentration and
stability results on the tip pool sizes in this section.
We also obtain bounds on the occupation-time from [15, Theorem 3.1]. We start
with the observation that
$\liminf_{n\to\infty}{\mathbb{P}}(X_{n}^{(o)}<M)\geq p_{0}$ (39)
with
$p_{0}:=p_{0}(M):=1-\frac{1}{1-\rho}De^{\eta(K-M)}.$ (40)
###### Theorem 3.5 (Occupation-time bounds).
Under Assumption 2.1, for every $\varepsilon^{\prime}>0$ there exist
constants $C$ and $\gamma<1$ such that
${\mathbb{P}}\left(\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\\{X_{j}^{(o)}<M\\}\leq
p_{0}(1-\varepsilon^{\prime})\right)\leq C\gamma^{n},\quad\forall n\geq 1,$
(41)
where $p_{0}$ is given in (40).
It follows from the above results or directly from [30, Theorem 1] that all
moments of $X_{n}^{(o)}$ are bounded.
###### Theorem 3.6.
Let $\varepsilon$ and $K$ be the constants from Theorem 3.1 and suppose
Assumption 2.1 holds. Then, for every $r>0$ there exists some constant
$c=c(r,\varepsilon,K)$ such that
${\mathbb{E}}\left[\left(X_{n}^{(o)}\right)^{r}\right]\leq c,~{}\forall
n\in{\mathbb{N}}.$ (42)
The same statement holds true for the local tip pool sizes $X_{n}^{(i)}$ and
constants $c=c(r,\varepsilon,K,i)$ depending additionally on $i$.
###### Remark 3.7.
We want to note that [30] also provides bounds on the constant $c$. However,
as these bounds are rather implicit and do not straightforwardly lead to an
explicit formula, we do not address the question of finding the optimal bounds
in the present work.
### 3.2. Regeneration structure, ergodicity, and stationarity
The asymptotic negative drift and the bounds of the previous section do not
immediately imply the ergodicity of the various tip pool sizes since the
processes are not Markov processes. In this section, we construct a
regeneration structure that allows proving (mean) ergodicity and stationarity.
The main idea behind this construction is quite natural and is first sketched
informally. The trajectory of the process $X_{n}^{(c)}$ will be decomposed
into independent and identically distributed pieces. Processes of this kind
are known as regenerative processes, see e.g., [2, Chapter VI]. Let us
consider the indicator function of the event that all nodes are synchronized
and there is only one active tip:
$\mathrm{sync}_{n}:={\mathbf{1}}\left\\{\mathrm{pool}_{n}^{(o)}=\mathrm{pool}_{n}^{(c)}=\mathrm{pool}_{n}^{(i)}~{}\forall
i\in{\mathcal{N}},X_{n}^{(o)}=1\right\\}.$ (43)
We construct the sequence of times $\tau_{n}$ at which the nodes are in the
synchronized state. More precisely, let $\tau_{0}:=0$ and inductively for
$k>0$
$\displaystyle\tau_{k}$ $\displaystyle:=$
$\displaystyle\inf\\{n>\tau_{k-1}:\mathrm{sync}_{n}=1\\}.$ (44)
We start with the observation that the process $X_{n}^{(o)}$ is
${\mathcal{F}}_{n}$-measurable for all $n$ but not necessarily a Markov chain.
However, we have the following “decoupling properties”.
###### Lemma 3.8.
We have that for every $x>1$ there exists some constant $c_{x}>0$ such that
${\mathbb{P}}\left(X_{n+1}^{(o)}=x-1,t_{n+1}-t_{n}>\Delta|X_{n}^{(o)}=x\right)\geq
c_{x}.$ (45)
Furthermore, for every $x>1$ there exist constants $d_{x}>0$ and $n_{x}$
such that
${\mathbb{P}}\left(\mathrm{sync}_{n+n_{x}}=1|X_{n}^{(o)}=x\right)\geq d_{x}.$
(46)
###### Proof.
Under the assumption that no new blocks are issued between $t_{n}$ and
$t_{n}+\Delta$ (which happens with a positive probability independent of
${\mathcal{F}}_{n}$), all nodes will share the same perception of the tip
pool. The node that issues the next block will choose exactly two distinct tips
with positive probability. As this probability only depends on $x$, the first
claim follows. The second claim follows by recursively applying the first. ∎
###### Lemma 3.9.
The regeneration times $\tau_{n}$ are almost surely finite, and for any
$k\in{\mathbb{N}}$ and any measurable set $A\subseteq{\mathbb{N}}^{\mathbb{N}}$ we have
${\mathbb{P}}\left(\left(X_{\tau_{k}+n}^{(o)}\right)_{n\in{\mathbb{N}}}\in
A\right)={\mathbb{P}}\left(\left(X_{n}^{(o)}\right)_{n\in{\mathbb{N}}}\in
A\right).$ (47)
In particular, $(\tau_{k+1}-\tau_{k}),k\in{\mathbb{N}},$ are i.i.d. random
variables under ${\mathbb{P}}$, and, in addition, have some exponential
moments. The random variables
$M_{k}:=\max\left\\{X_{n}^{(c)}:\tau_{k}\leq
n\leq\tau_{k+1}\right\\},k\in{\mathbb{N}},$ (48)
are i.i.d. and have some exponential moments.
###### Proof.
We start by verifying that the first return time $\tau_{1}$ is a.s. finite.
Let $K$ be from Inequality (16) and define $A:=\\{K-k,\ldots,K\\}$. Now, by
Lemma 3.8 we have that there exist $d_{A}:=\min_{x\in A}d_{x}>0$ and
$n_{A}:=\max_{x\in A}n_{x}$ such that
${\mathbb{P}}(\exists m\leq n_{A}:\mathrm{sync}_{n+m}=1|X_{n}^{(o)}\in A)\geq
d_{A}.$ (49)
Hence, whenever our process is in the “state” $A$, we have a positive
probability of regenerating. If we regenerate we have that $\tau_{1}$ is
finite; if we are not successful, then $X_{n+n_{A}}^{(o)}\leq K+kn_{A}$ and
Theorem 3.2, see also Remark 3.3, ensures that we return to the set $A$ in a
time with exponential moments. Therefore, it takes a geometrically distributed
number of such trials to regenerate.
The claim (47) for $k=1$ follows from the observation that if (at time $n$)
the event
$\left\\{\mathrm{pool}_{n}^{(o)}=\mathrm{pool}_{n}^{(c)}=\mathrm{pool}_{n}^{(i)}~{}\forall
i\in{\mathcal{N}},X_{n}^{(o)}=1\right\\}$
occurs, all nodes have the same information on the state of the system, and this
state equals the state at time $0$; the claim then follows from the “memorylessness
property” of the exponential random variables in the underlying Poisson point process.
Recursively, we obtain the a.s. finiteness of the $\tau_{k}$ and Equality (47)
for all $k.$ The exponential moments of $\tau_{k+1}-\tau_{k}$ follow from (47)
and Theorem 3.2. The claim about the variables in (48) follows from the fact that the increments of
$X_{n}^{(o)}$ are bounded and that $\tau_{k+1}-\tau_{k}$ has exponential
moments. ∎
The previous two lemmas allow us to show that it is possible to view the
stochastic process $\\{X_{n}^{(o)}\\}$ as a regenerative process, e.g., see
[2]. In the next theorem, using the regenerative structure of
$\\{X_{n}^{(o)}\\}$, we prove the convergence in $L^{2}$ of the ergodic
average of $X_{n}^{(o)}$ and $X_{n}^{(i)}$ for all $i$.
###### Theorem 3.10 (Mean ergodicity and stationarity).
Under Assumption 2.1 there exist some constants
$\mu^{(o)},\mu^{(i)},i\in{\mathcal{N}}$, such that
$\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(o)}~{}\underset{n\to\infty}{\longrightarrow}\mu^{(o)}$
(50)
and
$\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(i)}~{}\underset{n\to\infty}{\longrightarrow}\mu^{(i)},\forall
i\in{\mathcal{N}}$ (51)
almost surely and in $L^{2}$ (mean square sense). Moreover, $X_{n}^{(o)}$ and
$X_{n}^{(i)},i\in{\mathcal{N}},$ converge in distribution to some random
variables $X^{(o)}$ and $X^{(i)},i\in{\mathcal{N}}$.
###### Proof.
The law of large numbers for i.i.d. sequences, applied to
$(\tau_{n+1}-\tau_{n})_{n\in{\mathbb{N}}}$, yields
$\frac{\tau_{n}}{n}~{}\to~{}{\mathbb{E}}[\tau_{2}-\tau_{1}].$ (52)
Define $k(n)=\max\\{k\in{\mathbb{N}}_{0}:\,\tau_{k}\leq n\\}$. Clearly,
$k(n)\to\infty$ as $n\to\infty$. Further,
$\frac{n}{k(n)}~{}=~{}\frac{n}{\tau_{k(n)}}\frac{\tau_{k(n)}}{k(n)}.$
The second factor tends to ${\mathbb{E}}[\tau_{2}-\tau_{1}]$
${\mathbb{P}}$-a.s. as $n\to\infty$ by (52). Regarding the first factor,
observe that $\tau_{k(n)}\leq n\leq\tau_{k(n)+1}$ and, therefore,
$1~{}\leq~{}\frac{n}{\tau_{k(n)}}~{}\leq~{}\frac{\tau_{k(n)+1}}{\tau_{k(n)}}~{}\to~{}1\quad{\mathbb{P}}\text{-a.s.\
as }n\to\infty.$
Consequently, $\lim_{n\to\infty}n/k(n)={\mathbb{E}}[\tau_{2}-\tau_{1}]$
${\mathbb{P}}$-a.s. The convergence also holds in all $L^{p}$, $p\geq 1$,
which can be shown similarly by using the exponential moments of
$\tau_{k+1}-\tau_{k}$ and Hölder’s Inequality. We can now decompose the sum as
$\displaystyle\frac{1}{n}\sum_{k=1}^{n}X_{k}^{(c)}=\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)}+\frac{1}{n}\sum_{k=\tau_{k(n)}+1}^{n}X_{k}^{(c)}.$
(53)
The first summand becomes
$\displaystyle\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)}$
$\displaystyle=$
$\displaystyle\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{k(n)}\tilde{X}_{k}^{(c)},$
(54)
with
$\tilde{X}_{k}^{(c)}:=\sum_{j=\tau_{k-1}+1}^{\tau_{k}}X_{j}^{(c)}.$
Due to Lemma 3.9 the random variables $\tilde{X}_{k}^{(c)},k\in{\mathbb{N}},$
are i.i.d. with exponential moments, and hence,
$\frac{k(n)}{n}\frac{1}{k(n)}\sum_{k=1}^{\tau_{k(n)}}X_{k}^{(c)}\underset{n\to\infty}{\longrightarrow}\mu_{c},$
for some constant $\mu_{c}$ and convergence a.s. and in $L^{2}$. It remains to
treat the second term on the right-hand side of (53). We have
$\frac{1}{n}\sum_{k=\tau_{k(n)}+1}^{n}X_{k}^{(c)}\leq\frac{1}{n}(\tau_{k(n)+1}-\tau_{k(n)})M_{k(n)}$
(55)
and hence, using (48), we see that this term converges a.s. and in mean to
$0$. Note that the convergence in $L^{2}$ can be seen using the Cauchy
criterion, e.g., [16, Proposition 2.9], together with the Cauchy-Schwarz
Inequality. It remains to prove convergence in distribution. For this, let us
note that we constructed a so-called regeneration structure, and, hence, the
convergence follows directly from [2, Corollary 1.5]. The proofs for the local
tip pool sizes are analogous. ∎
## 4\. Experimental results
We provide quantitative simulation results to further demonstrate the tip
pools’ stability. While our theoretical results do provide stability of the
local tip pools, they do not allow us to compare the different perceptions and
how they depend on the model parameters. We thus evaluate the impact of delay
on the tip pools for different scenarios through simulations. The simulations
are performed in an open-source simulator [17] also used in [24]. This
simulator models both communication over the peer-to-peer layer and block
creation. The statistical analysis of the data is done with the software R
(4.1.2), and the package “ggstatsplot” [29].
We use a gossip protocol to model the network latency on top of a network
topology with a small diameter. More precisely, we use a Watts-Strogatz
network [40] with mean degree $10$ and re-wiring probability $1$. The gossip
algorithm forwards the new blocks to all its neighbours in the Watts-Strogatz
network. The delay for each of these connections on the P2P layer is
independent and uniformly distributed in the interval
$[\delta_{min},\delta_{max}].$
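The overlay and the latency model can be reproduced, e.g., with the networkx package (a sketch under the stated parameters; the experiments themselves use the open-source simulator [17]):

```python
import random
import networkx as nx

N = 100
G = nx.watts_strogatz_graph(n=N, k=10, p=1.0)  # mean degree 10, re-wiring prob. 1

def edge_delay(delta_min=0.020, delta_max=0.180):
    """Uniform per-connection latency, in seconds."""
    return random.uniform(delta_min, delta_max)

delays = {edge: edge_delay() for edge in G.edges}
```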
We model the different issuance rates of the nodes in the network using the
Zipf empirical law with parameter $s$, [35]. This is motivated by the fact
that in a real-world scenario with heterogeneous weights, the Zipf law is
frequently observed, e.g., see [1, 18, 22]. Note that, with Zipf’s law, a
homogeneous network, e.g., can be modelled for $s=0$, while the higher the
$s$, the more heterogeneous or centralized the weight distribution becomes.
### 4.1. Heterogeneous rates
The issuing rates of the $N=100$ nodes are Zipf-distributed with parameter
$s$, i.e.,
$\lambda_{i}=\frac{i^{-s}}{\sum_{j=1}^{N}j^{-s}}\lambda,$ (56)
where $\lambda$ is the total issuance rate.
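For instance, Eq. (56) with $N=100$, $s=1$, and $\lambda=500$ BPS reproduces the per-node rates quoted in the next paragraphs; a quick check:

```python
N, s, lam = 100, 1.0, 500.0                # nodes, Zipf exponent, total BPS

weights = [i ** (-s) for i in range(1, N + 1)]
norm = sum(weights)
rates = [lam * w / norm for w in weights]  # Eq. (56)

print(f"{rates[0]:.1f} {rates[1]:.1f} {rates[-1]:.1f}")
# approx. 96.4 BPS for Node 1, 48.2 BPS for Node 2, 1.0 BPS for Node 100
```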
We have set the other parameters of our numerical experiments as follows: the
number of references is $k=8$. This choice is made since $k=8$ lies in the
“middle”, on a logarithmic scale, between the extreme cases $2^{0}$ and $2^{7}$. If
$k=1$ we obtain a tree and if $k$ is close to the number of nodes, then the
number of tips is generally very small. Moreover, $k=8$ is the value
considered in [24].
The network latency between two peers in the P2P network is modelled by a
uniform random variable with $\delta_{min}=20$ms and $\delta_{max}=180$ms. It is
a common assumption to consider the mean latency to be close to $100$ms.
Moreover, most delays in wide area networks and the Internet fall into our
interval, e.g., see [19]. The total block issuance rate is set to
$\lambda=500$ blocks per second (BPS). The local tip pools are measured in the
simulation every $50ms$, and every simulation lasts for $60$ seconds.
Let us first consider the case of a heterogeneous node activity, $s=1$. In
this scenario, Node $1$ issues blocks at a rate of about $96$ BPS, Node $2$ at a
rate of about $48$ BPS, and the two slowest nodes, Nodes $99$ and $100$, issue at
rates of around $1$ BPS.
In Figures 1(a) and 2(a), we present the different perceptions of the tip pool
sizes for these nodes.
### 4.2. Homogeneous rates
We consider the homogeneous case, where every node issues blocks with the same
rate, i.e. $s=0$. The other parameters are set as before. The results in
Figures 1(b) and 2(b) show that the local tip pools have similar sizes.
Comparing these results with the results in the heterogeneous setting above,
Figure 2(a), we can also note that the size of the tip pools decreases with
the system’s centralisation, i.e. higher values of $s$.
### 4.3. Randomness of delay
In the previous subsections, we observed that different issuing rates can
considerably affect the local tip pools. A natural explanation is that the
average delay of high-frequency nodes is much smaller than those of lower
frequencies. In previous heuristic results, [20], it was observed that the
random distribution of the delay might already impact the tip pool sizes.
Consequently, optimal bounds on the tip pool sizes must contain more
information than only the mean delay. We illustrate this effect by performing
the same simulations as above for $s=0$ but keeping the message delay constant
at $100$ms, see Figures 1(c) and 2(c). In this case, we see larger tip pools
than in the case with more “randomness”. This effect is also present for
heterogeneous rates, but we omit the figures for brevity.
(a) Heterogeneous rates according to Zipf law with $s=1$, BPS of $500$, and
random network delay.
(b) Homogeneous rates according to Zipf law with $s=0$, BPS of $500$, and
random network delay.
(c) Homogeneous rates according to Zipf law with $s=0$, BPS of $500$, and
constant network delay of $100$ms.
Figure 1. Tip pool sizes of the top and bottom nodes with $N=100$ nodes for
different scenarios. Randomness in the delay results in smaller tip pool
sizes. Heterogeneity in the rates results in more disparate and smaller tip
pool sizes.
(a) Heterogeneous rates according to Zipf law with $s=1$, BPS of $500$, and
random network delay.
(b) Homogeneous rates according to Zipf law with $s=0$, BPS of $500$, and
random network delay.
(c) Homogeneous rates according to Zipf law with $s=0$, BPS of $500$, and
constant network delay of $100$ms.
Figure 2. Comparison of the local tip pool sizes; $N=100$ nodes, different
scenarios.
## 5\. Discussion and extensions
This paper presents a model of DAG-based distributed ledgers that considers
variable and heterogeneous network delays. It is a continuous-time model with
random arrivals of new blocks and random communication delays between the
nodes. First, we have proven an asymptotic negative drift of the tip pool sizes,
Theorem 3.1, which implies concentration results, Theorem 3.2. A regeneration structure
then led to the stationarity and ergodicity of the tip pool sizes, Theorem
3.10. Finally, using Monte-Carlo simulations, we showcase the impact of the
rate distribution and the randomness of delays on the evolution of the local
tip pool sizes. Let us discuss possible extensions of our work.
Different types of delays: As already mentioned in Subsection 1.3, a different
type of delay (time to validate a block) has been studied in [31]. One natural
way to incorporate such delays is to include an additional mark in the Poisson
point process that encodes the block type. The delays of a block then also
depend on its type. While our obtained results carry over to this more general
situation, understanding how these delays impact the tip pool sizes is more
challenging as it requires more quantitative results.
Quantitative results: We obtained qualitative results about the stability. For
real-world applications, quantitative bounds are essential. The most important
measure is the expected tip pool size. Previous results, [20, 31, 6], and our
simulations show that the tip pool size depends on the distribution of the
delays. Hence, explicit formulas for the expected tip pool size seem currently
out of reach. A more feasible approach is to obtain meaningful upper and lower
bounds on the tip pool sizes. Moreover, Figures 1 and 2 show the fast
convergence to the stationary regime, and it seems achievable to obtain
quantitative bounds on this speed of convergence as described in Remark 3.7.
Extreme values and large deviations: In Theorem 3.2, we derived an upper
bound on the probability that $X_{k}^{(o)}$ is greater than a given value $M$.
Such a result is important from an application perspective because we can
quantify the risk that the number of tips is too high at a given instant. The
probabilities of deviating from the mean are usually expressed by large
deviation results and the distribution of the maximal values by extreme value
results. The regeneration structure introduced in Section 3.2 offers an i.i.d.
decomposition of the underlying process and, with the exponential moment
bounds, strongly suggests the validity of a large deviation principle and an
extreme value theorem. We refer to [32] for details on how to obtain a large
deviation principle from a regeneration structure and to [2], Chapter IV
Section 4, for more details on extreme value theory for regenerative
processes.
General arrival point processes: In our model, the assumption of a pure
Poisson point process is not necessary, and the results seem to carry over to
stationary and ergodic point processes. A more realistic model, for instance,
is to consider stationary point processes with a minimal distance between the
points; so-called hard-core point processes.
## References
* [1] Lada A. Adamic and B. Huberman. Zipf’s law and the internet. Glottometrics, 3:143–150, 2002.
* [2] Søren Asmussen. Applied Probability and Queues. Springer Publishing Company, Incorporated, New York, 2nd edition, 2003.
* [3] Vivek Bagaria, Sreeram Kannan, David Tse, Giulia Fanti, and Pramod Viswanath. Prism: Deconstructing the blockchain to approach physical limits. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 585–602, 2019.
* [4] Quentin Bramas. The Stability and the Security of the Tangle. Working paper or preprint, April 2018.
* [5] Anton Churyumov. Byteball: A decentralized system for storage and transfer of value, 2016.
* [6] A. Cullen, P. Ferraro, C. King, and R. Shorten. Distributed ledger technology for smart mobility: Variable delay models. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 8447–8452, 2019.
* [7] Philip Daian, Steven Goldfeder, Tyler Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, and Ari Juels. Flash boys 2.0: Frontrunning in decentralized exchanges, miner extractable value, and consensus instability. In 2020 IEEE Symposium on Security and Privacy (SP), pages 910–927, 2020.
* [8] George Danezis, Lefteris Kokoris-Kogias, Alberto Sonnino, and Alexander Spiegelman. Narwhal and Tusk: A DAG-Based Mempool and Efficient BFT Consensus. In Proceedings of the Seventeenth European Conference on Computer Systems, EuroSys ’22, page 34–50, New York, NY, USA, 2022. Association for Computing Machinery.
* [9] Alex de Vries and Christian Stoll. Bitcoin’s growing e-waste problem. Resources, Conservation and Recycling, 175:105901, 2021.
* [10] Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. Consensus in the presence of partial synchrony. J. ACM, 35(2):288–323, April 1988.
* [11] Pietro Ferraro, Christopher King, and Robert Shorten. Distributed ledger technology for smart cities, the sharing economy, and social compliance. IEEE Access, 6:62728–62746, 2018.
* [12] Pietro Ferraro, Christopher King, and Robert Shorten. Iota-based directed acyclic graphs without orphans. arXiv preprint arXiv:1901.07302, 2018.
* [13] Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, and Nickolai Zeldovich. Algorand: Scaling byzantine agreements for cryptocurrencies. In Proceedings of the 26th symposium on operating systems principles, pages 51–68, 2017.
* [14] Adam Gągol, Damian Leśniak, Damian Straszak, and Michał Świętek. Aleph: Efficient atomic broadcast in asynchronous networks with byzantine nodes. In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, pages 214–228, 2019.
* [15] Bruce Hajek. Hitting-time and occupation-time bounds implied by drift analysis with applications. Advances in Applied Probability, 14(3):502–525, 1982.
* [16] Bruce Hajek. Random Processes for Engineers. Cambridge University Press, Cambridge, 2015.
* [17] IOTA Foundation. Multiverse Simulation. https://github.com/iotaledger/multiverse-simulation, 2022. [Online; accessed 15/11 2022].
* [18] Charles I. Jones. Pareto and Piketty: The Macroeconomics of Top Income and Wealth Inequality. Journal of Economic Perspectives, 29(1):29–46, February 2015.
* [19] E. Kamrani, H.R. Momeni, and A.R. Sharafat. Modeling internet delay dynamics for teleoperation. In Proceedings of 2005 IEEE Conference on Control Applications (CCA 2005), pages 1528–1533, 2005.
* [20] Navdeep Kumar, Alexandre Reiffers-Masson, Isabel Amigo, and Santiago Ruano Rincón. The effect of network delays on distributed ledgers based on direct acyclic graphs: A mathematical model. Available at SSRN 4253421, 2022.
* [21] Bartosz Kusmierz, William Sanders, Andreas Penzkofer, Angelo Capossele, and Alon Gal. Properties of the tangle for uniform random and random walk tip selection. In 2019 IEEE International Conference on Blockchain (Blockchain), pages 228–236. IEEE, 2019.
* [22] Wentian Li. Zipf’s law everywhere. Glottometrics, 5:14–21, 2002.
* [23] Yixin Li, Bin Cao, Mugen Peng, Long Zhang, Lei Zhang, Daquan Feng, and Jihong Yu. Direct acyclic graph-based ledger for internet of things: performance and security analysis. IEEE/ACM Transactions on Networking, 28(4):1643–1656, 2020.
* [24] Bing-Yang Lin, Daria Dziubałtowska, Piotr Macek, Andreas Penzkofer, and Sebastian Müller. Robustness of the Tangle 2.0 Consensus, 2022.
* [25] Igor Makarov and Antoinette Schoar. Blockchain analysis of the bitcoin market. Working Paper 29396, National Bureau of Economic Research, October 2021.
* [26] Sebastian Müller, Andreas Penzkofer, Nikita Polyanskii, Jonas Theis, William Sanders, and Hans Moog. Tangle 2.0 Leaderless Nakamoto Consensus on the Heaviest DAG. IEEE Access, 10:105807–105842, 2022.
* [27] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008.
* [28] Seongjoon Park, Seounghwan Oh, and Hwangnam Kim. Performance analysis of dag-based cryptocurrency. In 2019 IEEE International Conference on Communications workshops (ICC workshops), pages 1–6. IEEE, 2019.
* [29] Indrajeet Patil. Visualizations with statistical details: The ’ggstatsplot’ approach. Journal of Open Source Software, 6(61):3167, 2021.
* [30] Robin Pemantle and Jeffrey S. Rosenthal. Moment conditions for a sequence with negative drift to be uniformly bounded in Lr. Stochastic Processes and their Applications, 82(1):143–155, 1999\.
* [31] Andreas Penzkofer, Olivia Saa, and Daria Dziubałtowska. Impact of delay classes on the data structure in iota. In Data Privacy Management, Cryptocurrencies and Blockchain Technology, pages 289–300. Springer, 2021.
* [32] Jonathon Peterson and Ofer Zeitouni. On the annealed large deviation rate function for a multi-dimensional random walk in random environment. arXiv: Probability, 2008.
* [33] Serguei Popov. The Tangle, 2015.
* [34] Serguei Popov, Hans Moog, Darcy Camargo, Angelo Capossele, Vassil Dimitrov, Alon Gal, Andrew Greve, Bartosz Kusmierz, Sebastian Mueller, Andreas Penzkofer, Olivia Saa, William Sanders, Luigi Vigneri, Wolfgang Welz, and Vidal Attias. The Coordicide, 2020.
* [35] David M. W. Powers. Applications and explanations of Zipf’s law. In New Methods in Language Processing and Computational Natural Language Learning, 1998.
* [36] David Rosenthal. EE380 Talk, 2022.
* [37] Yonatan Sompolinsky, Yoad Lewenberg, and Aviv Zohar. Spectre: A fast and scalable cryptocurrency protocol. Cryptology ePrint Archive, Report 2016/1159, 2016.
* [38] Yonatan Sompolinsky, Shai Wyborski, and Aviv Zohar. Phantom and ghostdag: A scalable generalization of nakamoto consensus. Cryptology ePrint Archive, Report 2018/104, 2018.
* [39] Yonatan Sompolinsky and Aviv Zohar. Secure high-rate transaction processing in bitcoin. In Rainer Böhme and Tatsuaki Okamoto, editors, Financial Cryptography and Data Security, pages 507–527, Berlin, Heidelberg, 2015. Springer Berlin Heidelberg.
* [40] Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440–442, 1998.
* [41] Manuel Zander, Tom Waite, and Dominik Harz. Dagsim: Simulation of dag-based distributed ledger protocols. ACM SIGMETRICS Performance Evaluation Review, 46(3):118–121, 2019.
# Gauge invariants of linearized gravity with a general background metric
Deepen Garg Department of Astrophysical Sciences, Princeton University,
Princeton, New Jersey 08544, USA I. Y. Dodin Department of Astrophysical
Sciences, Princeton University, Princeton, New Jersey 08544, USA Princeton
Plasma Physics Laboratory, Princeton, NJ 08543, USA
###### Abstract
A general method is proposed for identifying the gauge-invariant part of the
metric perturbation within linearized gravity, and the six independent gauge
invariants per se, for an arbitrary background metric. For the Minkowski
background, the operator that projects the metric perturbation on the
invariant subspace is proportional to the well-known dispersion operator of
linear gravitational waves in vacuum.
## I Introduction
The perturbation approach has proven useful in studying various phenomena in
classical gravity, for example, gravitational waves (GWs). Within this
approach, the spacetime metric $\smash{\mathsf{g}_{\alpha\beta}}$ is split
into a background metric $\smash{g_{\alpha\beta}}=\mathcal{O}(1)$ and a
perturbation $\smash{h_{\alpha\beta}}=\mathcal{O}(a)$, where $a\ll 1$ is a
small parameter. A coordinate transformation $\smash{x^{\mu}\to
x^{\prime\mu}=x^{\mu}+\xi^{\mu}}$, with $\smash{\xi^{\mu}}=\mathcal{O}(a)$,
induces a metric transformation
$\smash{\mathsf{g}_{\alpha\beta}}\to\smash{\mathsf{g}^{\prime}_{\alpha\beta}}=\mathsf{g}_{\alpha\beta}-\smash{\mathrm{\text{\pounds}}_{\xi}\mathsf{g}_{\alpha\beta}}+\mathcal{O}(a^{2})$,
where $\smash{\mathrm{\text{\pounds}}_{\xi}}$ is the Lie derivative along the
vector field $\smash{\xi^{\mu}}$ book:carroll . Assuming linearized gravity,
where $\smash{\mathcal{O}(a^{2})}$ corrections are neglected and the
background is $a$-independent by definition, this implies
$\smash{g_{\alpha\beta}}\to\smash{g^{\prime}_{\alpha\beta}}=\smash{g_{\alpha\beta}}$
and
$\smash{h_{\alpha\beta}}\to\smash{h^{\prime}_{\alpha\beta}}=\smash{h_{\alpha\beta}}-\smash{\mathrm{\text{\pounds}}_{\xi}g_{\alpha\beta}}$.
If $\smash{h_{\alpha\beta}}$ is treated as a tensor field on the unperturbed
spacetime, so its indices are manipulated using $\smash{g_{\alpha\beta}}$ as
the metric, one also has
$\displaystyle h^{\alpha\beta}\to
h^{\prime\alpha\beta}=h^{\alpha\beta}+\mathrm{\text{\pounds}}_{\xi}g^{\alpha\beta}.$
(1)
The transformation (1) can be viewed as a gauge transformation and, by general
covariance, cannot have measurable effects. Thus, the physical, gauge-
invariant, part of $\smash{h^{\alpha\beta}}$ is defined only up to the Lie
derivative of $g^{\alpha\beta}$ along an arbitrary vector field, which is
encoded by four functions (in a four-dimensional spacetime). Because the
symmetric tensor $\smash{h^{\alpha\beta}}$ is encoded by ten functions, this
leaves room for six gauge-invariant degrees of freedom.
Identifying these degrees of freedom for a given metric perturbation
$\smash{h^{\alpha\beta}}$ is important for representing the linearized-gravity
equations in a gauge-invariant form. This problem has a well-known solution
for the Minkowski background ref:flanagan05 , and it has also been solved ad
hoc for the Friedmann–Lemaître–Robertson–Walker background ref:bardeen80 ;
ref:malik09 ; ref:malik13 ; ref:nakamura07c ; ref:bruni97 ; ref:luca20 .
However, less attention has been paid to general background metrics,
particularly those that emerge in problems involving GW–matter coupling
my:gwponder ; ref:baym17 ; ref:bamba18 ; ref:asenjo20 ; ref:barta18 ;
ref:chesters73 ; ref:asseo76 ; ref:macedo83 ; ref:flauger18 ; ref:servin01 ;
ref:moortgat03 ; ref:forsberg10a ; ref:isliker06 ; ref:duez05a ;
ref:mendonca02b . This leads to the question: how can one find the invariant
part of $\smash{h^{\alpha\beta}}$ for general $g_{\alpha\beta}$?
Here, we answer this question. We start by showing that any symmetric tensor
$\smash{h^{\alpha\beta}}$ can be uniquely decomposed as
$\displaystyle h^{\alpha\beta}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\gamma\delta}+\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}h^{\gamma\delta},$ (2)
where the operators $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}}$ and $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}}$ satisfy
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}+\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}=\delta^{\alpha}_{(\gamma}\delta^{\beta}_{\delta)},$ (3a)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\smash{\widehat{\Pi}}^{\gamma\delta}_{\rm
inv}{}_{\lambda\varepsilon}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\lambda\varepsilon},$ (3b)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}\smash{\widehat{\Pi}}^{\gamma\delta}_{\rm
g}{}_{\lambda\varepsilon}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\lambda\varepsilon},$ (3c)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\smash{\widehat{\Pi}}^{\gamma\delta}_{\rm
g}{}_{\lambda\varepsilon}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}\smash{\widehat{\Pi}}^{\gamma\delta}_{\rm
inv}{}_{\lambda\varepsilon}=0,$ (3d)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}=0,$ (3e)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}=\mathrm{\text{\pounds}}_{u}g^{\alpha\beta},$
(3f) $\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{{\rm inv},{\rm
g}}{}_{\gamma\delta}=\smash{\widehat{\Pi}}^{\alpha\beta}_{{\rm inv},{\rm
g}}{}_{\delta\gamma}=\smash{\widehat{\Pi}}^{\beta\alpha}_{{\rm inv},{\rm
g}}{}_{\gamma\delta}.$ (3g)
(Parentheses in indices denote symmetrization, as usual, and $u^{\mu}$ is any
vector field.) In Sec. II, we present a method for calculating the
operators $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}}$ and $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}}$ for general $g_{\alpha\beta}$. We also show that the
gauge-invariant part of a metric perturbation $\smash{h^{\alpha\beta}}$ can be
calculated as $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\gamma\delta}}$, while
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}h^{\gamma\delta}}$ is the gauge-dependent part
representable as $\smash{\mathrm{\text{\pounds}}_{\zeta}g^{\alpha\beta}}$,
where $\zeta^{\mu}$ is a vector field linear in $\smash{h^{\alpha\beta}}$.
These results lead to a gauge-invariant formulation of linearized gravity with
any background metric, as will be shown in a follow-up paper. In Sec. III, we
illustrate the application of our results to the Minkowski background. We
derive the six gauge-invariant components of $\smash{h^{\alpha\beta}}$ and
show the agreement with the commonly known results. In addition, we show that
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}}$ is
proportional to the dispersion operator of linear GWs in Minkowski vacuum. In
Sec. IV, we summarize our main results. Auxiliary calculations are presented
in appendices.
## II Basic theory
In this section, we present a method for calculating the operators
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}}$ and
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm g}{}_{\gamma\delta}}$ for
general $g_{\alpha\beta}$. We assume the sign conventions of Refs. [1, 24], so
$\displaystyle[\nabla_{\beta},\nabla^{\alpha}]\xi^{\beta}=R^{\alpha}{}_{\beta}\xi^{\beta}$
(4)
for any vector field $\xi^{\alpha}$, where $\smash{R^{\alpha}{}_{\beta}}$ is
the Ricci tensor. Also as a reminder,
$\displaystyle\mathrm{\text{\pounds}}_{\xi}g^{\alpha\beta}=-\nabla^{\alpha}\xi^{\beta}-\nabla^{\beta}\xi^{\alpha}\equiv-2\nabla^{(\alpha}\xi^{\beta)}.$
(5)
### II.1 Special case
Let us consider an auxiliary problem first, only to motivate introducing the
machinery that will be used in Sec. II.2. For a given symmetric tensor
$\smash{h^{\alpha\beta}}$, let us search for a divergence-free vector field
$\smash{u^{\alpha}}$ such that the tensor
$\displaystyle h^{\prime\alpha\beta}\doteq
h^{\alpha\beta}-\mathrm{\text{\pounds}}_{u}g^{\alpha\beta}$ (6)
(the symbol $\doteq$ denotes definitions) satisfies the Lorenz gauge; i.e.,
$\displaystyle\nabla_{\beta}h^{\prime\alpha\beta}=0.$ (7)
Because we assume
$\displaystyle\nabla_{\alpha}u^{\alpha}=0,$ (8)
Eqs. (5)–(7) lead to an equation for $u^{\alpha}$ that is similar to the
driven Maxwell’s equation for the Lorenz-gauge electromagnetic vector
potential in vacuum my:spinhall :
$\displaystyle\smash{\widehat{Q}}^{\alpha}{}_{\beta}u^{\beta}=\nabla_{\beta}h^{\alpha\beta},$
(9a)
$\displaystyle\smash{\widehat{Q}}^{\alpha}{}_{\beta}\doteq-\delta_{\beta}^{\alpha}\nabla_{\mu}\nabla^{\mu}-R^{\alpha}{}_{\beta},$
(9b)
where we also used Eq. (4). Equation (9a) has a solution
$\displaystyle
u^{\alpha}=\smash{\widehat{\Xi}}^{\alpha}{}_{(\gamma}\nabla_{\delta)}h^{\gamma\delta},$
(10)
where $\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$ is the Green’s
operator of Eq. (9a). (Symmetrization with respect to the lower indices is
added for convenience and does not affect the result.) From now on, we assume
the adiabatic limit my:nonloc , when
$\smash{\smash{\widehat{Q}}^{\alpha}{}_{\beta}}$ is approximately invertible
for fields of interest. Then, one can express
$\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$ as
$\displaystyle\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}=(\smash{\widehat{Q}}^{-1})^{\alpha}{}_{\beta}.$
(11)
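For the reader's convenience, here is the short calculation behind Eq. (9) (a sketch; it uses only Eqs. (4)–(8), with $\nabla^{2}\doteq\nabla_{\mu}\nabla^{\mu}$):

```latex
% Substituting Eqs. (5) and (6) into Eq. (7):
\nabla_\beta h'^{\alpha\beta}
  = \nabla_\beta h^{\alpha\beta}
  + \nabla_\beta \nabla^\alpha u^\beta
  + \nabla^2 u^\alpha = 0.
% By Eq. (4), \nabla_\beta \nabla^\alpha u^\beta
%   = \nabla^\alpha \nabla_\beta u^\beta + R^\alpha{}_\beta u^\beta,
% and \nabla_\beta u^\beta = 0 by the constraint (8). Hence,
-\nabla^2 u^\alpha - R^\alpha{}_\beta u^\beta = \nabla_\beta h^{\alpha\beta},
% which is Eq. (9a) with the operator defined in Eq. (9b).
```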
Let us assume, for now, that $\smash{h^{\gamma\delta}}$ is such that the
solution (10) does in fact satisfy the previously imposed constraint (8).
Then, Eq. (7) is satisfied by
$\smash{h^{\prime\alpha\beta}}=\smash{\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}h^{\gamma\delta}}$,
where
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\doteq\delta^{\alpha}_{(\gamma}\delta^{\beta}_{\delta)}+2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}.$
(12)
In combination with Eq. (6), these results yield that
$\displaystyle
h^{\alpha\beta}=\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}h^{\gamma\delta}+\mathrm{\text{\pounds}}_{u}g^{\alpha\beta},$
(13a)
$\displaystyle\mathrm{\text{\pounds}}_{u}g^{\alpha\beta}=-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}h^{\gamma\delta},$
(13b)
and a direct calculation shows that (Appendix B)
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}=0.$
(14)
Equation (14) is similar to Eq. (3e) and indeed makes the decomposition (13)
resemble Eq. (2), except that it is subject to the constraint (8). This can be
taken as a hint that $\smash{\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}}$ is
close to the sought $\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}}$. Hence, we approach the general case as follows.
### II.2 General case
Let us consider the application of
$\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}$ to
$\smash{\mathrm{\text{\pounds}}_{u}g^{\alpha\beta}}$ with a general
$\smash{u^{\alpha}}$. A direct calculation shows that the result can be
expressed as follows (Appendix B). (Here and further,
$g_{\alpha\beta}\equiv\smash{\widehat{g}}_{\alpha\beta}$ and
$R^{\alpha}{}_{\beta}\equiv\smash{\widehat{R}}^{\alpha}{}_{\beta}$ serve as
multiplication operators, and the assumed notation is
$\smash{\smash{\widehat{A}}\smash{\widehat{B}}f=\smash{\widehat{A}}(\smash{\widehat{B}}f)}$
for any operators $\smash{\smash{\widehat{A}}}$ and
$\smash{\smash{\widehat{B}}}$ and function $f$ that they act upon. For
example,
$\nabla^{\mu}g_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}\equiv\nabla^{\mu}[g_{\gamma\delta}(\mathrm{\text{\pounds}}_{u}g^{\gamma\delta})]$.)
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}=\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}.$
(15)
Hence, the operator
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\doteq\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}-\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}$
(16)
automatically satisfies Eq. (3e). Let us substitute Eq. (12) and rewrite this
operator as follows:
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}=\delta^{\alpha}_{(\gamma}\delta^{\beta}_{\delta)}-\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta},$ (17a)
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}\doteq-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}.$
(17b)
This satisfies Eqs. (3a), (3f), and (3g). (The latter ensures that
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{{\rm inv},{\rm
g}}{}_{\gamma\delta}f^{\gamma\delta}}=0$ for all anti-symmetric
$\smash{f^{\gamma\delta}}$, which is convenient.) The property (3c) is proven
by direct calculation (Appendix C). Equation (3d) can be derived from Eqs.
(3a) and (3c), and the remaining property (3b) can then be obtained from Eqs.
(3a) and (3d).
Let us discuss how this result helps identify the invariant part of a metric
perturbation. First, notice that
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}h^{\gamma\delta}$
$\displaystyle=-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}h^{\gamma\delta}+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}h^{\gamma\delta}$
$\displaystyle=-2\nabla^{(\alpha}\zeta^{\beta)}$
$\displaystyle=\mathrm{\text{\pounds}}_{\zeta}g^{\alpha\beta},$ (18)
where we introduced
$\displaystyle\zeta^{\beta}\doteq\smash{\widehat{\Xi}}^{\beta}{}_{(\gamma}\nabla_{\delta)}h^{\gamma\delta}-\frac{1}{2}\,\smash{\widehat{\Xi}}^{\beta}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}h^{\gamma\delta}.$
(19)
Hence, Eq. (2) can be rewritten as
$\displaystyle h^{\alpha\beta}=\psi^{\alpha\beta}+\phi^{\alpha\beta},$ (20a)
$\displaystyle\psi^{\alpha\beta}\doteq\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\gamma\delta},$ (20b)
$\displaystyle\phi^{\alpha\beta}\doteq\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}h^{\gamma\delta}=\mathrm{\text{\pounds}}_{\zeta}g^{\alpha\beta}.$
(20c)
Upon a gauge transformation (1), one obtains
$\displaystyle
h^{\prime\alpha\beta}=\psi^{\prime\alpha\beta}+\phi^{\prime\alpha\beta},$
(21a)
$\displaystyle\psi^{\prime\alpha\beta}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\prime\gamma\delta}=\psi^{\alpha\beta}+\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{\xi}g^{\gamma\delta}=\psi^{\alpha\beta},$
(21b)
$\displaystyle\phi^{\prime\alpha\beta}=\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
g}{}_{\gamma\delta}h^{\prime\gamma\delta}=\phi^{\alpha\beta}+\mathrm{\text{\pounds}}_{\xi}g^{\alpha\beta}=\mathrm{\text{\pounds}}_{\zeta+\xi}g^{\alpha\beta},$
(21c)
where we used Eqs. (3d)–(3f). The function $\smash{\phi^{\prime\alpha\beta}}$
can be zeroed by choosing the gauge $\xi^{\mu}=-\zeta^{\mu}$. This means that
$\smash{\phi^{\alpha\beta}}$, which is encoded by the four functions
$\zeta^{\mu}$, does not contain gauge-independent information. Hence, any
solution that has nonzero $\smash{\phi^{\alpha\beta}}$ and zero
$\smash{\psi^{\alpha\beta}}$ can be classified as a coordinate artifact. In
contrast, $\smash{\psi^{\alpha\beta}}$ is gauge-invariant by Eq. (21b). By the
argument presented in Sec. I, it is encoded by six independent functions, or
gauge-invariant degrees of freedom. Also note that
$\smash{\psi^{\alpha\beta}}$ does not necessarily satisfy the Lorenz-gauge
condition $\smash{\nabla_{\beta}\psi^{\alpha\beta}=0}$.
### II.3 Gauge invariants
Now let us discuss how to extract the six independent functions from the
sixteen gauge-invariant functions $\psi^{\alpha\beta}$. To do so, let us
consider $h^{\alpha\beta}$ as a 16-dimensional (16-D) field $h^{a}$, or
${\boldsymbol{h}}$ in the index-free notation, of the form
$\displaystyle{\boldsymbol{h}}=(h^{00},h^{01},h^{02},h^{03},h^{10},\ldots,h^{32},h^{33})^{\intercal},$
(22)
where ⊺ denotes transpose. In other words,
$\displaystyle h^{a}=h^{\alpha\beta},$ $\displaystyle\quad
h_{b}=h_{\gamma\delta},$ (23) $\displaystyle\{\alpha,\beta\}=\iota(a),$
$\displaystyle\quad\{\gamma,\delta\}=\iota(b),$ (24)
where the index function $\iota$ is defined via
$\displaystyle\iota(a)\doteq\big\{1+\lfloor(a-1)/4\rfloor,\;1+(a-1)\,\text{mod}\,4\big\}.$ (25)
(Here and further, Latin indices from the beginning of the alphabet range from
1 to 16.) Let us define $\mathscr{H}_{1}$ as a Hilbert space of one-component
functions on the background spacetime with the usual inner product
$\braket{\cdot\,,\cdot}_{1}$. Then, the 16-D fields (22) can be considered as
vectors in the Hilbert space $\mathscr{H}_{16}$ that is the tensor product of
16 copies of $\mathscr{H}_{1}$, with the inner product
$\displaystyle\braket{{\boldsymbol{\xi}},{\boldsymbol{\varphi}}}=\int\mathrm{d}^{4}x\sqrt{-g}\,\xi_{a}^{*}\,\varphi^{a}=\sum_{a=1}^{16}\braket{\xi_{a},\varphi^{a}}_{1},$
(26)
where $g\doteq\det g_{\alpha\beta}$. (Unlike in the rest of the paper,
summation is shown explicitly here in order to emphasize the difference
between $\braket{\cdot\,,\cdot}$ and $\braket{\cdot\,,\cdot}_{1}$.) Then,
$\smash{\smash{\widehat{\Pi}}_{\rm inv}^{\alpha\beta}{}_{\gamma\delta}}$
induces an operator $\smash{\smash{\widehat{\Pi}}^{a}{}_{b}}$ on
$\mathscr{H}_{16}$ defined via
$\displaystyle\smash{\widehat{\Pi}}^{a}{}_{b}h^{b}\doteq\smash{\widehat{\Pi}}_{\rm
inv}^{\alpha\beta}{}_{\gamma\delta}h^{\gamma\delta},$ (27)
where we again assumed the notation as in Eq. (24). From Eqs. (3), one finds
that
$\displaystyle\smash{\widehat{\Pi}}^{a}{}_{b}\smash{\widehat{\Pi}}^{b}{}_{c}=\smash{\widehat{\Pi}}^{a}{}_{c},$
(28a)
$\displaystyle\smash{\widehat{\Pi}}^{a}{}_{b}\mathrm{\text{\pounds}}_{u}g^{b}=0.$
(28b)
Equation (28a), which in the index-free notation can be written as
$\smash{\widehat{\boldsymbol{\Pi}}}^{2}=\smash{\widehat{\boldsymbol{\Pi}}}$,
means that $\smash{\widehat{\boldsymbol{\Pi}}}$ is a projector. (Note that
$\smash{\widehat{\boldsymbol{\Pi}}}^{\dagger}\neq\smash{\widehat{\boldsymbol{\Pi}}}$,
so the projector is not orthogonal but oblique.) Hence, each eigenvalue of
$\smash{\widehat{\boldsymbol{\Pi}}}$ is either zero or unity and
$\smash{\widehat{\boldsymbol{\Pi}}}$ is diagonalizable. This means that
$\smash{\widehat{\boldsymbol{\Pi}}}$ can be represented as
$\displaystyle\smash{\widehat{\boldsymbol{\Pi}}}=\smash{\widehat{\boldsymbol{V}}}\smash{\widehat{\boldsymbol{J}}}\smash{\widehat{\boldsymbol{V}}}^{-1},$
(29)
where $\smash{\smash{\widehat{\boldsymbol{V}}}}$ is a diagonalizing
transformation and the operator $\smash{\widehat{\boldsymbol{J}}}$ is such
that $\smash{\widehat{\boldsymbol{J}}}{\boldsymbol{h}}$ equals either zero or
${\boldsymbol{h}}$ for any ${\boldsymbol{h}}$. But each linear operator in
$\mathscr{H}_{16}$ is a $16\times 16$ matrix of operators on
$\mathscr{H}_{1}$. Then, $\smash{\widehat{\boldsymbol{J}}}$ must be
represented by a constant matrix ${{\boldsymbol{J}}}$ of the form
$\displaystyle{{\boldsymbol{J}}}=\text{diag}\,\big\{\underbrace{1,1,\ldots,1}_{n},\underbrace{0,0,\ldots,0,0}_{16-n}\big\},$
(30)
where, for clarity, we have ordered the basis such that the nonzero
eigenvalues are grouped together and have indices $1,\ldots,n$.
The gauge-invariant part of ${\boldsymbol{h}}$, which is given by Eq. (20b),
can now be expressed as
${\boldsymbol{\psi}}=\smash{\widehat{\boldsymbol{\Pi}}}{\boldsymbol{h}}$.
Using Eq. (29), one can also rewrite this as
$\displaystyle\smash{\widehat{\boldsymbol{P}}}{\boldsymbol{\psi}}={\boldsymbol{\Psi}},\quad{\boldsymbol{\Psi}}={{\boldsymbol{J}}}\smash{\widehat{\boldsymbol{P}}}{\boldsymbol{h}},\quad\smash{\widehat{\boldsymbol{P}}}\doteq\smash{\widehat{\boldsymbol{V}}}^{-1}.$
(31)
Because ${\boldsymbol{h}}$ is an arbitrary vector field parameterized by 16
functions and $\smash{\widehat{\boldsymbol{P}}}$ is invertible, the field
$\smash{\widehat{\boldsymbol{P}}}{\boldsymbol{h}}$ is also parameterized by 16
functions. Then, ${\boldsymbol{\Psi}}$ is parameterized by $n$ functions. But
we know that ${\boldsymbol{\psi}}$ is parameterized by 6 functions (Sec. I),
and thus so is ${\boldsymbol{\Psi}}$. Then, $n=6$, and the nonzero elements of
${\boldsymbol{\Psi}}$ are the sought invariants.
In summary, to find the gauge invariants, one needs to find the diagonalizing
transformation $\smash{\smash{\widehat{V}}^{a}{}_{b}}$ that brings
$\smash{\smash{\widehat{\Pi}}^{a}{}_{b}}$ to the form given by Eqs. (29) and
(30). Then, the invariants can be found as
$\displaystyle\Psi^{s}=J^{s}{}_{b}(\smash{\widehat{V}}^{-1})^{b}{}_{c}h^{c},\quad
s=1,2,\ldots,6.$ (32)
## III Example: Minkowski background
### III.1 Gauge invariants
In the flat-space limit, when $R^{\alpha}{}_{\beta}\to 0$, one has
$\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}\to-\delta^{\alpha}_{\beta}\nabla^{-2}}$.
Below we assume the Minkowski background metric, in which case,
$\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$ is further simplified to
$\displaystyle\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}\to-\delta^{\alpha}_{\beta}\partial^{-2}.$
(33)
Here, $\smash{\partial^{-2}}$ is the operator inverse to
$\smash{\partial^{2}\doteq\partial_{\mu}\partial^{\mu}}$; i.e.,
$\smash{\varphi^{\alpha}}=\smash{\partial^{-2}q^{\alpha}}$ is the solution of
$\smash{\partial^{2}\varphi^{\alpha}=q^{\alpha}}$ (Appendix A). Formally,
$\smash{\partial^{-2}}$ is singular on free vacuum GWs, but the vacuum case
can still be considered as a limit (Sec. III.2).
Using Eq. (33), one can rewrite Eqs. (17a) as
$\displaystyle\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}=\delta^{\alpha}_{(\gamma}\delta^{\beta}_{\delta)}-2\,\partial^{-2}\partial^{(\alpha}\delta^{\beta)}_{(\gamma}\partial^{\phantom{\beta)}}_{\delta)}+\partial^{-2}\partial^{\alpha}\partial^{\beta}g_{\gamma\delta}.$
(34)
Let us consider this operator in the Fourier representation, in which case it
becomes a local matrix function of the wavevector $k_{\mu}$; namely,
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}}=\Pi^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}$,
$\displaystyle\Pi^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}=\delta^{\alpha}_{(\gamma}\delta^{\beta}_{\delta)}-\frac{2k^{(\alpha}_{\phantom{\beta)}}\delta^{\beta)}_{(\gamma}k^{\phantom{\beta)}}_{\delta)}}{k^{2}}+g_{\gamma\delta}\,\frac{k^{\alpha}k^{\beta}}{k^{2}}.$
(35)
Using that $\nabla_{\mu}\to\partial_{\mu}\to\mathrm{i}k_{\mu}$ in the Fourier
representation [and in particular,
$\mathrm{\text{\pounds}}_{u}g^{\alpha\beta}=-2\mathrm{i}k^{(\alpha}u^{\beta)}$],
the properties (3) are easily verified. One also finds by direct calculation
[27] that, as expected from Eqs. (29) and (30),
$\displaystyle\text{rank}\,{{\boldsymbol{\Pi}}}=6.$ (36)
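These properties are also easy to verify numerically. Below is a minimal sketch (our own illustration; the variable names are assumptions) that builds the $16\times 16$ matrix ${\boldsymbol{\Pi}}$ from Eq. (35) at a random nonnull $k_{\mu}$ and checks Eqs. (3b), (3e), and (36):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski g_{alpha beta}, signature (-,+,+,+)
delta = np.eye(4)
rng = np.random.default_rng(0)

k_up = rng.normal(size=4)             # a generic k^alpha with k^2 != 0
k_dn = g @ k_up                       # k_alpha
k2 = k_dn @ k_up                      # k^2 = k_mu k^mu

# Pi^{alpha beta}_{inv gamma delta} from Eq. (35), stored as Pi[a, b, c, d].
Pi = np.empty((4, 4, 4, 4))
for a in range(4):
    for b in range(4):
        for c in range(4):
            for d in range(4):
                sym = 0.5 * (delta[a, c] * delta[b, d] + delta[a, d] * delta[b, c])
                mix = (k_up[a] * 0.5 * (delta[b, c] * k_dn[d] + delta[b, d] * k_dn[c])
                       + k_up[b] * 0.5 * (delta[a, c] * k_dn[d] + delta[a, d] * k_dn[c]))
                Pi[a, b, c, d] = sym - mix / k2 + g[c, d] * k_up[a] * k_up[b] / k2

P = Pi.reshape(16, 16)                # flatten index pairs per Eqs. (22)-(25)
assert np.allclose(P @ P, P)          # projector property, Eq. (3b)
assert np.linalg.matrix_rank(P) == 6  # Eq. (36)

# Pi annihilates pure-gauge perturbations, L_u g = -2 i k^{(alpha} u^{beta)}.
u = rng.normal(size=4) + 1j * rng.normal(size=4)
Lug = -1j * (np.outer(k_up, u) + np.outer(u, k_up))
assert np.allclose(P @ Lug.reshape(16), 0)  # Eq. (3e)
```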
The invariant part of the metric perturbation (20b) is now given by
$\smash{\psi^{\alpha\beta}}=\smash{\Pi^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\gamma\delta}}$, or explicitly,
$\displaystyle\psi^{\alpha\beta}=h^{\alpha\beta}-\frac{k^{\alpha}k_{\mu}}{k^{2}}h^{\mu\beta}-\frac{k^{\beta}k_{\mu}}{k^{2}}h^{\alpha\mu}+\frac{k^{\alpha}k^{\beta}}{k^{2}}h,$
(37)
where $h\doteq\smash{\text{tr}\,h^{\alpha\beta}}$. Without loss of generality,
let us assume coordinates such that
$\displaystyle k^{\alpha}=(\omega,0,0,\mathsf{k}),$ (38)
where $\mathsf{k}$ is the spatial wavenumber. Using this, the fact that
$k^{2}=\mathsf{k}^{2}-\omega^{2}$, and also Eq. (24), the 16-D vector
${\boldsymbol{\psi}}$ is found to be:
$\displaystyle{\boldsymbol{\psi}}=\frac{1}{k^{2}}\begin{pmatrix}h^{00}\mathsf{k}^{2}-2h^{03}\omega\mathsf{k}+\omega^{2}(h^{11}+h^{22}+h^{33})\\
h^{01}\mathsf{k}^{2}-h^{13}\omega\mathsf{k}\\
h^{02}\mathsf{k}^{2}-h^{23}\omega\mathsf{k}\\
(h^{11}+h^{22})\mathsf{k}\omega\\
h^{01}\mathsf{k}^{2}-h^{13}\omega\mathsf{k}\\
h^{11}(\mathsf{k}^{2}-\omega^{2})\\
h^{12}(\mathsf{k}^{2}-\omega^{2})\\
h^{01}\omega\mathsf{k}-h^{13}\omega^{2}\\
h^{02}\mathsf{k}^{2}-h^{23}\omega\mathsf{k}\\
h^{12}(\mathsf{k}^{2}-\omega^{2})\\
h^{22}(\mathsf{k}^{2}-\omega^{2})\\
h^{02}\omega\mathsf{k}-h^{23}\omega^{2}\\
(h^{11}+h^{22})\mathsf{k}\omega\\
h^{01}\omega\mathsf{k}-h^{13}\omega^{2}\\
h^{02}\omega\mathsf{k}-h^{23}\omega^{2}\\
\mathsf{k}^{2}(-h^{00}+h^{11}+h^{22})+2h^{03}\omega\mathsf{k}-h^{33}\omega^{2}\end{pmatrix}.$
In order to extract the six gauge invariants from this ${\boldsymbol{\psi}}$,
notice that the operator (27) is represented by a local function of $k_{\mu}$,
$\smash{\widehat{\boldsymbol{\Pi}}}={{\boldsymbol{\Pi}}}$, and thus so is the
diagonalizing transformation (29). Specifically,
$\smash{\smash{\widehat{\boldsymbol{V}}}}={{\boldsymbol{V}}}$, and the columns
of the matrix ${{\boldsymbol{V}}}$ are just the eigenvectors of
${{\boldsymbol{\Pi}}}$:
$\displaystyle{\boldsymbol{V}}=({\boldsymbol{v}}_{1}\kern
5.0pt{\boldsymbol{v}}_{2}\kern 5.0pt\ldots\kern
5.0pt{\boldsymbol{v}}_{16}),\quad{{\boldsymbol{\Pi}}}{\boldsymbol{v}}_{a}=\lambda_{a}{\boldsymbol{v}}_{a},$
(39)
where $\lambda_{a}\in\{0,1\}$. The calculation of these eigenvectors and of
the matrix $\smash{{{\boldsymbol{V}}}^{-1}}$ can be automated [27], and
the six gauge invariants (32) are readily found to be
$\displaystyle{\boldsymbol{\Psi}}=\begin{pmatrix}\displaystyle\frac{\mathsf{k}^{2}(-h^{00}+h^{11}+h^{22})+2\omega\mathsf{k}h^{03}-\omega^{2}h^{33}}{\mathsf{k}^{2}-\omega^{2}}\\[10.0pt]
\displaystyle\frac{\omega\mathsf{k}h^{01}-\omega^{2}h^{13}}{\mathsf{k}^{2}-\omega^{2}}\\[10.0pt]
\displaystyle\frac{\omega\mathsf{k}h^{02}-\omega^{2}h^{23}}{\mathsf{k}^{2}-\omega^{2}}\\[10.0pt]
\displaystyle\frac{\omega\mathsf{k}(h^{11}+h^{22})}{\mathsf{k}^{2}-\omega^{2}}\\[10.0pt]
h^{22}\\[5.0pt]
\displaystyle h^{12}\end{pmatrix}.$ (40)
The coordinate representation of these invariants is found by taking the
inverse Fourier transform of Eq. (40).
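Continuing the numerical sketch above (again, the names are ours), the recipe (29)–(32) can be carried out with a standard eigensolver, and the resulting invariants are indeed insensitive to gauge shifts:

```python
# Diagonalize P per Eqs. (29)-(30); exactly six unit eigenvalues appear.
lam, V = np.linalg.eig(P)
keep = np.where(np.isclose(lam.real, 1.0, atol=1e-8))[0]
assert len(keep) == 6                      # n = 6, as in Eq. (30)

W = np.linalg.inv(V)[keep]                 # the six relevant rows of J V^{-1}
h = rng.normal(size=(4, 4)); h = h + h.T   # a random symmetric perturbation
Psi = W @ h.reshape(16)                    # six gauge invariants, Eq. (32)

# Invariance check: Psi is unchanged under h -> h + L_xi g in Fourier form.
xi = rng.normal(size=4) + 1j * rng.normal(size=4)
dh = -1j * (np.outer(k_up, xi) + np.outer(xi, k_up))
assert np.allclose(W @ (h.reshape(16) + dh.reshape(16)), Psi)
```

(These numerical invariants coincide with Eq. (40) only up to an invertible linear transformation, in the sense of Eq. (41) below.)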
Our result is in agreement with Eqs. (2.45)–(2.47) in Ref. [2]
(which operates with $h_{\alpha\beta}$ instead of our $h^{\alpha\beta}$). This
is seen from the fact that any linear combinations of our $\Psi^{s}$ are gauge
invariants too. In other words, instead of $\smash{\Psi^{s}}$, one can
introduce the invariants as $\smash{\bar{\Psi}^{s}}$ given by
$\displaystyle\bar{\Psi}^{s}\doteq C^{s}{}_{r}\Psi^{r},\quad
r,s=1,2,\ldots,6,$ (41)
or $\bar{{\boldsymbol{\Psi}}}={\boldsymbol{C}}{\boldsymbol{\Psi}}$ in the
index-free representation, where ${\boldsymbol{C}}$ is an arbitrary matrix
that may depend on $k_{\mu}$. This is particularly convenient at
$\smash{k^{2}}\equiv\smash{\mathsf{k}^{2}-\omega^{2}}\to 0$, when
${\boldsymbol{\Psi}}$ becomes singular. Specifically, by choosing
$\displaystyle{\boldsymbol{C}}=\text{diag}\,\big\{k^{2},k^{2},k^{2},k^{2},1,1\big\},$
(42)
we obtain invariants that are well-behaved at all $k_{\mu}$:
$\displaystyle\bar{{\boldsymbol{\Psi}}}=\begin{pmatrix}\mathsf{k}^{2}(-h^{00}+h^{11}+h^{22})+2\omega\mathsf{k}h^{03}-\omega^{2}h^{33}\\
\omega\mathsf{k}h^{01}-\omega^{2}h^{13}\\
\omega\mathsf{k}h^{02}-\omega^{2}h^{23}\\
\omega\mathsf{k}(h^{11}+h^{22})\\
h^{22}\\
h^{12}\end{pmatrix}.$ (43)
Let us also address why the original vectors ${\boldsymbol{\psi}}$ and
${\boldsymbol{\Psi}}$ are singular at $\smash{k^{2}}\to 0$. In this limit, the
vectors ${\boldsymbol{v}}_{a}$ [Eq. (39)] are well-behaved, and thus so is the
matrix ${{\boldsymbol{V}}}$. However, they cease to be linearly independent at
$k^{2}=0$, so $\smash{{{\boldsymbol{V}}}^{-1}}$ becomes singular, and as a
result, ${\boldsymbol{\Pi}}$ becomes singular too. This means that no finite
invariant projection of a generic $\smash{h^{\alpha\beta}}$ can be defined in
the Fourier space at $k^{2}=0$. The corresponding gauge-dependent part
$\phi^{\alpha\beta}$ becomes singular as well in this limit, as seen from Eqs.
(19) and (20c), where $\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$
becomes singular (Appendix A). (This is the same effect as the unlimited
growth, at $x^{\mu}\to\infty$, of the gauge field that brings a generic
$\smash{h^{\alpha\beta}}$ to the Lorenz gauge; see Appendix A in conjunction
with Eq. (10), which is commonly known for the Minkowski background [28].)
Still, our general formulation correctly predicts the invariants
(43) at zero $k^{2}$, and these invariants can be related to vacuum GWs as
discussed in the next section.
### III.2 Free GWs in the Minkowski space
It is instructive to compare the key operator in our formulation,
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}}$, with
the operator that governs linear GWs in vacuum. By comparing Eq. (34) with,
for example, Eqs. (5.4) and (2.7) in Ref. ref:isaacson68a , one finds that the
equation for vacuum GWs in the Minkowski spacetime can be expressed as
$\displaystyle\smash{\widehat{D}}^{\alpha\beta}{}_{\gamma\delta}h^{\gamma\delta}=0,\quad\smash{\widehat{D}}^{\alpha\beta}{}_{\gamma\delta}=\partial^{2}\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}.$ (44)
In other words, in the special case of the Minkowski spacetime, the dispersion
operator $\smash{\smash{\widehat{D}}^{\alpha\beta}{}_{\gamma\delta}}$ of
vacuum GWs is exactly $\partial^{2}$ times the operator that projects a metric
perturbation on the invariant subspace.
For completeness, let us also briefly discuss monochromatic waves. (Cf. a
similar discussion in Ref. [30], except their Eq. (3.6) describes the
_trace-reversed_ metric perturbation.) In this case, Eq. (44) becomes
$\displaystyle k^{2}\,\Pi^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}\,h^{\gamma\delta}=0,$ (45)
where the matrix $\smash{k^{2}\,\Pi^{\alpha\beta}_{\rm inv}}$ is well-behaved
for all $k_{\mu}$. Equation (45) can be written as the following six
equations, which determine the six gauge invariants (43):
$\displaystyle{\mathsf{k}^{2}h^{00}+\omega(-2\mathsf{k}h^{03}+\omega
h^{33})}=0,$ (46a)
$\displaystyle{\mathsf{k}^{2}h^{01}-\omega\mathsf{k}h^{13}}=0,$ (46b)
$\displaystyle{\mathsf{k}^{2}h^{02}-\omega\mathsf{k}h^{23}}=0,$ (46c)
$\displaystyle\mathsf{k}\omega{(h^{11}+h^{22})}=0,$ (46d) $\displaystyle
k^{2}(h^{11}-h^{22})=0,$ (46e) $\displaystyle k^{2}h^{12}=0.$ (46f)
For $k^{2}\neq 0$, Eqs. (46) indicate that all six invariants (43) are
zero, so only coordinate waves are possible in this case. For $k^{2}=0$, Eqs.
(46a)–(46d) yield
$\displaystyle\bar{\Psi}^{1}=\bar{\Psi}^{2}=\bar{\Psi}^{3}=\bar{\Psi}^{4}=0,$
(47)
and in particular, $h^{11}+h^{22}=0$. However, Eqs. (46e) and (46f) are
satisfied identically at $k^{2}=0$, so the other two invariants,
$\displaystyle\bar{\Psi}^{5}=h^{22}=-h^{11},\quad\bar{\Psi}^{6}=h^{12}=h^{21},$
(48)
can be arbitrary, in agreement with known results ref:flanagan05 . In
particular, these are the two invariants that determine the commonly known
transverse-traceless polarization of a GW in the Lorenz gauge:
$\displaystyle h^{\alpha\beta}=\left(\begin{array}{cccc}0&0&0&0\\
0&h_{+}&h_{\times}&0\\
0&h_{\times}&-h_{+}&0\\
0&0&0&0\end{array}\right).$ (53)
Specifically,
$\displaystyle h_{+}\doteq(h^{11}-h^{22})/2=-\bar{\Psi}^{5},$ (54a)
$\displaystyle h_{\times}\doteq h^{12}=h^{21}=\bar{\Psi}^{6}.$ (54b)
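This limit can also be probed with the numerical sketch from Sec. III.1 (again, our own illustration): at a null wavevector, the well-behaved matrix $k^{2}{\boldsymbol{\Pi}}$ indeed annihilates the transverse-traceless perturbation (53).

```python
# A TT wave on a null wavevector, k^alpha = (omega, 0, 0, omega), so k^2 = 0.
w = 1.0
k_up = np.array([w, 0.0, 0.0, w])
k_dn = g @ k_up
k2 = k_dn @ k_up                           # = 0

hp, hx = 0.3, 0.1                          # h_plus and h_cross amplitudes
h_tt = np.zeros((4, 4))
h_tt[1, 1], h_tt[2, 2] = hp, -hp           # Eq. (53)
h_tt[1, 2] = h_tt[2, 1] = hx

# Build k^2 * Pi directly (it is finite at k^2 = 0), i.e., Eq. (35) times k^2.
k2Pi = np.empty((4, 4, 4, 4))
for a in range(4):
    for b in range(4):
        for c in range(4):
            for d in range(4):
                sym = 0.5 * (delta[a, c] * delta[b, d] + delta[a, d] * delta[b, c])
                mix = (k_up[a] * 0.5 * (delta[b, c] * k_dn[d] + delta[b, d] * k_dn[c])
                       + k_up[b] * 0.5 * (delta[a, c] * k_dn[d] + delta[a, d] * k_dn[c]))
                k2Pi[a, b, c, d] = k2 * sym - mix + g[c, d] * k_up[a] * k_up[b]

# Eq. (45): the TT perturbation solves the vacuum GW equation at k^2 = 0.
assert np.allclose(k2Pi.reshape(16, 16) @ h_tt.reshape(16), 0)
```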
## IV Conclusions
In summary, we propose a method for identifying the gauge-invariant part
$\smash{\psi^{\alpha\beta}}$ of the metric perturbation
$\smash{h^{\alpha\beta}}$ within linearized gravity for an arbitrary
background metric $\smash{g_{\alpha\beta}}$. Specifically, we show that
$\smash{\psi^{\alpha\beta}}=\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm
inv}{}_{\gamma\delta}h^{\gamma\delta}}$, where
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}}$ is a
linear operator given by Eq. (17a). This result leads to a gauge-invariant
formulation of linearized gravity with any background metric, as will be shown
in a follow-up paper. The six independent functions from the sixteen gauge-
invariant functions $\psi^{\alpha\beta}$ can be found using Eq. (32). We also
show that for the Minkowski background,
$\smash{\smash{\widehat{\Pi}}^{\alpha\beta}_{\rm inv}{}_{\gamma\delta}}$ is
proportional to the well-known dispersion operator of linear GWs in vacuum
[Eq. (44)]. Also, our general formulation systematically yields the known
gauge invariants for the Minkowski background.
This material is based upon work supported by the National Science Foundation
under Grant No. PHY 1903130.
## Appendix A Asymptotic representation of
$\boldsymbol{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$
The operator $\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$ defined in Eq.
(11) can be written in the index-free representation as
$\displaystyle\smash{\widehat{\boldsymbol{\Xi}}}=-(\nabla^{2}+\smash{\widehat{\boldsymbol{R}}})^{-1},$
(55)
where $\nabla^{2}\doteq\nabla_{\mu}\nabla^{\mu}$,
$\smash{\widehat{\boldsymbol{R}}}$ is the operator whose coordinate
representation is the Ricci tensor $\smash{R^{\alpha}{}_{\beta}}$, and the
superscript $-1$ denotes the operator inverse. In order for this inverse to exist
(approximately), we assume the adiabatic limit. Specifically, we assume that
the characteristic GW wavelength $\lambda$ is much smaller than the
characteristic radius $L$ of the spacetime curvature, i.e., when
$\epsilon\doteq\lambda/L\ll 1$. Assuming the ordering $\lambda=\mathcal{O}(1)$
and $L=\mathcal{O}(\epsilon^{-1})$, one has $\nabla^{2}=\mathcal{O}(1)$ and
$\smash{\widehat{\boldsymbol{R}}}=\mathcal{O}(\epsilon^{2})$. Then,
$\displaystyle\smash{\widehat{\boldsymbol{\Xi}}}=-\nabla^{-2}+\nabla^{-2}\smash{\widehat{\boldsymbol{R}}}\,\nabla^{-2}+\mathcal{O}(\epsilon^{4}),$
(56)
where $\nabla^{-2}$ is the inverse of $\smash{\nabla^{2}}$; i.e.,
$\smash{\varphi^{\alpha}}=\smash{\nabla^{-2}q^{\alpha}}$ is defined as the
solution of $\smash{\nabla^{2}\varphi^{\alpha}=q^{\alpha}}$.
Because the operators in Eqs. (55) and (56) are intended to act specifically
on vector fields, one can also write them explicitly. For example, in normal
coordinates, one has (Appendix D)
$\displaystyle\nabla^{2}=\partial^{2}-\frac{\smash{\widehat{\boldsymbol{R}}}}{3},$
(57)
and the corresponding inverse is
$\displaystyle\nabla^{-2}=\partial^{-2}+\frac{1}{3}\,\partial^{-2}\smash{\widehat{\boldsymbol{R}}}\,\partial^{-2}+\mathcal{O}(\epsilon^{4}),$
(58)
so Eq. (56) leads to
$\displaystyle\smash{\widehat{\boldsymbol{\Xi}}}=-\partial^{-2}+\frac{2}{3}\,\partial^{-2}\smash{\widehat{\boldsymbol{R}}}\,\partial^{-2}+\mathcal{O}(\epsilon^{4}).$
(59)
The operator $\partial^{-2}$ that enters here is understood as the Green’s
operator of the equation
$\displaystyle\partial^{2}\varphi^{\alpha}=q^{\alpha}.$ (60)
(This is the same equation that emerges in the well-known linear gravity in
the Minkowski background [28]; see also Eq. (10).) Suppose that the
right-hand side of Eq. (60) is quasimonochromatic, i.e.,
$\smash{q^{\alpha}=Q^{\alpha}\exp[\mathrm{i}\theta(x^{\mu})]}$ with
$\partial_{\beta}Q^{\alpha}=\mathcal{O}(\epsilon)$ and
$\partial_{\beta}k_{\alpha}=\mathcal{O}(\epsilon)$, where
$k_{\alpha}\doteq\partial_{\alpha}\theta$ is the local wavevector. Then,
$\displaystyle\partial^{-2}=(k_{\mu}k^{\mu})^{-1}+\smash{\widehat{\Delta}},$
(61)
where $\smash{\widehat{\Delta}}=\mathcal{O}(\epsilon)$ is a differential
operator that acts on the envelope $\smash{Q^{\alpha}}$. If $k^{2}\doteq
k_{\mu}k^{\mu}$ approaches zero, as would be the case for GWs in the Minkowski
vacuum, then $\varphi^{\alpha}$ grows indefinitely at $x^{\mu}\to\infty$. This
is due to the fact that at $k^{2}\to 0$, $q^{\alpha}$ acts as a resonant
driving force for $\varphi^{\alpha}$. No quasimonochromatic solution is
possible in this case, and $\varphi^{\alpha}$ necessarily diverges at
infinity. In particular, this means that even if the Fourier spectrum of
$q^{\alpha}$ is analytic but includes harmonics with $k^{2}=0$, the Fourier
spectrum of the corresponding $\varphi^{\alpha}$ is singular.
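A one-dimensional toy model (our own simplified analogue, not the full operator $\partial^{-2}$) makes this secular growth explicit:

```python
import sympy as sp

x, kappa = sp.symbols('x kappa', real=True, nonzero=True)
phi = sp.Function('phi')

# Off resonance (kappa != 0): a bounded particular solution exists.
sol = sp.dsolve(sp.Eq(phi(x).diff(x, 2), sp.exp(sp.I * kappa * x)), phi(x))
print(sol)   # phi(x) = C1 + C2*x - exp(I*kappa*x)/kappa**2

# At resonance (kappa = 0, the analogue of k^2 -> 0): secular growth ~ x^2.
sol0 = sp.dsolve(sp.Eq(phi(x).diff(x, 2), 1), phi(x))
print(sol0)  # phi(x) = C1 + C2*x + x**2/2
```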
This indicates that the case $k^{2}=0$ cannot be treated within the adiabatic
approximation that we assume in this paper. However, it still can be
considered as a limit, as discussed in Sec. III. Also, no such issues arise in
problems that involve GW–matter coupling, because then $k^{2}\neq 0$. In this
case, the term $\smash{\widehat{\Delta}}$ in Eq. (61) can also be found
explicitly using asymptotic methods of the Weyl calculus [31, 32]. Because an
explicit formula for $\smash{\widehat{\Delta}}$ is not needed for our
purposes, and because its derivation would involve machinery that is far
beyond the scope of our paper, such a derivation is not presented here.
## Appendix B Derivation of Eqs. (14) and (15)
Using Eq. (12) for
$\smash{\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}}$ and Eq. (5) for
$\smash{\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}}$, one obtains
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}$
$\displaystyle=-\big{(}\delta_{\gamma}^{\alpha}\delta_{\delta}^{\beta}+\nabla^{\alpha}{\smash{\widehat{\Xi}}^{\beta}}{}_{\gamma}\nabla_{\delta}$
$\displaystyle\qquad+\nabla^{\beta}{\smash{\widehat{\Xi}}^{\alpha}}{}_{\gamma}\nabla_{\delta}\big{)}\left(\nabla^{\gamma}u^{\delta}+\nabla^{\delta}u^{\gamma}\right).$
(62)
Then using Eq. (4) in the above equation yields
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}$
$\displaystyle=-2\nabla^{(\alpha}u^{\beta)}-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\left[\nabla_{\delta},\nabla^{\gamma}\right]u^{\delta}$
$\displaystyle\qquad-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}u^{\delta}-\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\nabla^{2}u^{\gamma}$
$\displaystyle=-2\nabla^{(\alpha}u^{\beta)}-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\left(\delta_{\delta}^{\gamma}\nabla^{2}+{R^{\gamma}}_{\delta}\right)u^{\delta}$
$\displaystyle\qquad-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}u^{\delta}.$
(63)
Using Eq. (9b) for $\smash{\smash{\widehat{Q}}^{\alpha}{}_{\beta}}$ in
combination with Eq. (11), one obtains
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}$
$\displaystyle=-2\nabla^{(\alpha}u^{\beta)}+2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\mu}{\smash{\widehat{Q}}^{\mu}}{}_{\delta}u^{\delta}$
$\displaystyle\qquad-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\mu}\nabla^{\mu}\nabla_{\delta}u^{\delta}$
$\displaystyle=-2\nabla^{(\alpha}u^{\beta)}+2\nabla^{(\alpha}_{\phantom{\delta}}\delta^{\beta)}_{\delta}u^{\delta}$
$\displaystyle\qquad-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\mu}\nabla^{\mu}\nabla_{\delta}u^{\delta}$
$\displaystyle=-2\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\mu}\nabla^{\mu}\nabla_{\delta}u^{\delta}.$
(64)
For $\nabla_{\delta}u^{\delta}=0$, this leads to
$\smash{\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}}=0$,
which is Eq. (14). Otherwise, notice that
$\displaystyle
2\nabla_{\delta}u^{\delta}=2g_{\delta\gamma}\nabla^{\gamma}u^{\delta}=2g_{\gamma\delta}\nabla^{(\gamma}u^{\delta)}=-g_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}.$
(65)
Then, one can rewrite Eq. (64) as
$\displaystyle\smash{\widehat{\pi}}^{\alpha\beta}{}_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta}=\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\mathrm{\text{\pounds}}_{u}g^{\gamma\delta},$
(66)
which is precisely Eq. (15).
## Appendix C Derivation of Eq. (3c)
Using Eq. (17b), we get
$\smash{\widehat{\Pi}}_{\rm g}^{\alpha\beta}{}_{\gamma\delta}\smash{\widehat{\Pi}}_{\rm g}^{\gamma\delta}{}_{\lambda\varepsilon}=4\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}\\
-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}\\
-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}\\
+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}.$
(67)
Let us simplify the individual terms on the right-hand side separately. We
start by expanding one pair of symmetrized indices to get
$\displaystyle
4\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle\qquad=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla_{\delta}\nabla^{\gamma}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}+2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{2}\smash{\widehat{\Xi}}^{\gamma}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle\qquad=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}+2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{2}\smash{\widehat{\Xi}}^{\gamma}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle\qquad\qquad+2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\left[\nabla_{\delta},\nabla^{\gamma}\right]\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}.$
(68)
Recognizing that the operator would act on a rank-2 tensor
$\smash{h^{\lambda\varepsilon}}$, we can use Eq. (4) for the commutator;
hence,
$4\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}\\
+2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\left({R^{\gamma}}_{\delta}+\delta_{\delta}^{\gamma}\nabla^{2}\right)\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}.$
(69)
The terms in parentheses on the right-hand side of the above equation can
be expressed through $\smash{\smash{\widehat{Q}}^{\alpha}{}_{\beta}}$ [Eq.
(9b)], which is also the inverse of
$\smash{\smash{\widehat{\Xi}}^{\alpha}{}_{\beta}}$ [Eq. (11)]; hence,
$\displaystyle
4\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle\quad=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}{\smash{\widehat{Q}}^{\gamma}}{}_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle\quad=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\lambda}\nabla_{\varepsilon)}.$
(70)
Using a similar process, the second term is found to be
$\displaystyle
2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\gamma}\nabla_{\delta)}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}$
$\displaystyle\quad=\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla_{\delta}\nabla^{\gamma}{\smash{\widehat{\Xi}}^{\delta}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{2}{\smash{\widehat{\Xi}}^{\gamma}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}$
$\displaystyle\quad=\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\left({R^{\gamma}}_{\delta}+\delta_{\delta}^{\gamma}\nabla^{2}\right){\smash{\widehat{\Xi}}^{\delta}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}$
$\displaystyle\qquad+\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}{\smash{\widehat{\Xi}}^{\delta}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}$
$\displaystyle\quad=-\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\lambda\varepsilon}+\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}{\smash{\widehat{\Xi}}^{\delta}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}.$
(71)
The third and the fourth terms are simply
$\displaystyle
2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{(\lambda}\nabla_{\varepsilon)}=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)},$
$\displaystyle\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\gamma\delta}\nabla^{(\gamma}\smash{\widehat{\Xi}}^{\delta)}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}=\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}.$
Combining all these expressions, we get
$\displaystyle\smash{\widehat{\Pi}}_{\rm
g}^{\alpha\beta}{}_{\gamma\delta}\smash{\widehat{\Pi}}_{\rm
g}^{\gamma\delta}{}_{\lambda\varepsilon}$
$\displaystyle=2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\gamma}\nabla^{\gamma}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\lambda}\nabla_{\varepsilon)}+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\lambda\varepsilon}$
$\displaystyle-\nabla^{(\alpha}{\smash{\widehat{\Xi}}^{\beta)}}_{\gamma}\nabla^{\gamma}\nabla_{\delta}{\smash{\widehat{\Xi}}^{\delta}}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}$
$\displaystyle-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{(\lambda}\nabla_{\varepsilon)}$
$\displaystyle+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}\nabla_{\delta}\smash{\widehat{\Xi}}^{\delta}{}_{\nu}\nabla^{\nu}g_{\lambda\varepsilon}.$
(72)
Canceling the first term on the right-hand side with the fifth term, and the
fourth term with the sixth term, we arrive at
$\displaystyle\smash{\widehat{\Pi}}_{\rm
g}^{\alpha\beta}{}_{\gamma\delta}\smash{\widehat{\Pi}}_{\rm
g}^{\gamma\delta}{}_{\lambda\varepsilon}=-2\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{(\lambda}\nabla_{\varepsilon)}+\nabla^{(\alpha}\smash{\widehat{\Xi}}^{\beta)}{}_{\mu}\nabla^{\mu}g_{\lambda\varepsilon}.$
(73)
Upon comparison with Eq. (17b), this leads to Eq. (3c).
## Appendix D Derivation of Eq. (57)
For any vector field $u^{\alpha}$, one has
$\displaystyle\nabla^{\beta}\nabla_{\beta}u^{\alpha}$
$\displaystyle=\nabla^{\beta}\left(\partial_{\beta}u^{\alpha}+\Gamma^{\alpha}_{\beta\lambda}u^{\lambda}\right)$
$\displaystyle=\partial^{\beta}\left(\partial_{\beta}u^{\alpha}+\Gamma^{\alpha}_{\beta\lambda}u^{\lambda}\right)$
$\displaystyle\quad+g^{\beta\gamma}\Gamma^{\alpha}_{\gamma\rho}\left(\partial_{\beta}u^{\rho}+\Gamma^{\rho}_{\beta\lambda}u^{\lambda}\right)$
$\displaystyle\quad-g^{\beta\gamma}\Gamma^{\rho}_{\beta\gamma}\left(\partial_{\rho}u^{\alpha}+\Gamma^{\alpha}_{\rho\lambda}u^{\lambda}\right),$
(74)
where $\Gamma^{\alpha}_{\beta\gamma}$ are the Christoffel symbols. In normal
coordinates, the Christoffel symbols vanish at the reference point, but their
derivatives do not. This leads to
$\displaystyle\nabla^{2}u^{\alpha}=\partial^{2}u^{\alpha}+u^{\lambda}\partial^{\beta}\Gamma^{\alpha}_{\beta\lambda}.$
(75)
The derivatives of the Christoffel symbols can be expressed through the
Riemann tensor ${R^{\rho}}_{\sigma\mu\nu}$ [33]:
$\displaystyle\partial_{\nu}\Gamma_{\mu\sigma}^{\rho}=-\frac{1}{3}\left({R^{\rho}}_{\sigma\mu\nu}+{R^{\rho}}_{\mu\sigma\nu}\right).$
(76)
Using the well-known symmetries of the Riemann tensor and of the Ricci tensor
$R_{\sigma\nu}\doteq{R^{\rho}}_{\sigma\rho\nu}$, one then finds that
$\partial^{\beta}\Gamma_{\beta\lambda}^{\alpha}=-\frac{1}{3}\left({R^{\alpha}}_{\lambda\beta}{}^{\beta}+{R^{\alpha}}_{\beta\lambda}{}^{\beta}\right)=-\frac{1}{3}\,{R^{\alpha}}_{\beta\lambda}{}^{\beta}\\
=-\frac{1}{3}\,{R_{\lambda}}^{\beta\alpha}{}_{\beta}=-\frac{1}{3}\,{R^{\beta}}{}_{\lambda\beta}{}^{\alpha}=-\frac{1}{3}\,{R}_{\lambda}{}^{\alpha}=-\frac{1}{3}\,{R^{\alpha}}_{\lambda}.$
(77)
Hence, one can rewrite Eq. (75) as
$\displaystyle\nabla^{2}u^{\alpha}=\partial^{2}u^{\alpha}-\frac{1}{3}\,{R^{\alpha}}_{\beta}u^{\beta},$
(78)
or equivalently, as
$\displaystyle(\nabla^{2})^{\alpha}{}_{\beta}=\delta^{\alpha}_{\beta}\partial^{2}-\frac{1}{3}\,R^{\alpha}{}_{\beta}.$
(79)
In the index-free representation, this leads to Eq. (57).
## References
* (1) S. Carroll, Spacetime and Geometry: An Introduction to General Relativity (Addison-Wesley, San Francisco, 2004).
* (2) E. E. Flanagan and S. A. Hughes, The basics of gravitational wave theory, New J. Phys. 7, 204 (2005).
* (3) J. M. Bardeen, Gauge-invariant cosmological perturbations, Phys. Rev. D 22, 1882 (1980).
* (4) K. A. Malik and D. Wands, Cosmological perturbations, Phys. Rep. 475, 1 (2009).
* (5) K. A. Malik and D. R. Matravers, Comments on gauge-invariance in cosmology, Gen. Relativ. Gravit. 45, 1989 (2013).
* (6) K. Nakamura, Second-order gauge invariant cosmological perturbation theory: — Einstein equations in terms of gauge invariant variables —, Prog. Theor. Phys. 117, 17 (2007).
* (7) M. Bruni, S. Matarrese, S. Mollerach, and S. Sonego, Perturbations of spacetime: gauge transformations and gauge invariance at second order and beyond, Class. Quantum Gravity 14, 2585 (1997).
* (8) V. De Luca, G. Franciolini, A. Kehagias, and A. Riotto, On the gauge invariance of cosmological gravitational waves, J. Cosmol. Astropart. Phys. 2020, 014 (2020).
* (9) D. Garg and I. Y. Dodin, Average nonlinear dynamics of particles in gravitational pulses: Effective Hamiltonian, secular acceleration, and gravitational susceptibility, Phys. Rev. D 102, 064012 (2020).
* (10) G. Baym, S. P. Patil, and C. J. Pethick, Damping of gravitational waves by matter, Phys. Rev. D 96, 084033 (2017).
* (11) K. Bamba, S. Nojiri, and S. D. Odintsov, Propagation of gravitational waves in strong magnetic fields, Phys. Rev. D 98, 024002 (2018).
* (12) F. A. Asenjo and S. M. Mahajan, Resonant interaction between dispersive gravitational waves and scalar massive particles, Phys. Rev. D 101, 063010 (2020).
* (13) D. Barta and M. Vasúth, Dispersion of gravitational waves in cold spherical interstellar medium, Int. J. Mod. Phys. D 27, 1850040 (2018).
* (14) D. Chesters, Dispersion of gravitational waves by a collisionless gas, Phys. Rev. D 7, 2863 (1973).
* (15) E. Asseo, D. Gerbal, J. Heyvaerts, and M. Signore, General-relativistic kinetic theory of waves in a massive particle medium, Phys. Rev. D 13, 2724 (1976).
* (16) P. G. Macedo and A. H. Nelson, Propagation of gravitational waves in a magnetized plasma, Phys. Rev. D 28, 2382 (1983).
* (17) R. Flauger and S. Weinberg, Gravitational waves in cold dark matter, Phys. Rev. D 97, 123506 (2018).
* (18) M. Servin, G. Brodin, and M. Marklund, Cyclotron damping and Faraday rotation of gravitational waves, Phys. Rev. D 64, 024013 (2001).
* (19) J. Moortgat and J. Kuijpers, Gravitational and magnetosonic waves in gamma-ray bursts, Astron. Astrophys. 402, 905 (2003).
* (20) M. Forsberg and G. Brodin, Linear theory of gravitational wave propagation in a magnetized, relativistic Vlasov plasma, Phys. Rev. D 82, 124029 (2010).
* (21) H. Isliker, I. Sandberg, and L. Vlahos, Interaction of gravitational waves with strongly magnetized plasmas, Phys. Rev. D 74, 104009 (2006).
* (22) M. D. Duez, Y. T. Liu, S. L. Shapiro, and B. C. Stephens, Relativistic magnetohydrodynamics in dynamical spacetimes: Numerical methods and tests, Phys. Rev. D 72, 024028 (2005).
* (23) J. T. Mendonça, Gravitational waves in plasmas, Plasma Phys. Control. Fusion 44, B225 (2002).
* (24) C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973).
* (25) M. A. Oancea, J. Joudioux, I. Y. Dodin, D. E. Ruiz, C. F. Paganini, and L. Andersson, Gravitational spin Hall effect of light, Phys. Rev. D 102, 024075 (2020).
* (26) I. Y. Dodin, A. I. Zhmoginov, and D. E. Ruiz, Variational principles for dissipative (sub)systems, with applications to the theory of linear dispersion and geometrical optics, Phys. Lett. A 381, 1411 (2017).
* (27) Our calculations were facilitated by Mathematica © 1988–2019 Wolfram Research, Inc., version number 12.0.0.0.
* (28) B. Schutz, A First Course in General Relativity (Cambridge University Press, New York, 2009), Eq. (8.36).
* (29) R. A. Isaacson, Gravitational radiation in the limit of high frequency. I. The linear approximation and geometrical optics, Phys. Rev. 166, 1263 (1968).
* (30) M. A. H. MacCallum and A. H. Taub, The averaged Lagrangian and high-frequency gravitational waves, Commun. Math. Phys. 30, 153 (1973).
* (31) I. Y. Dodin, D. E. Ruiz, K. Yanagihara, Y. Zhou, and S. Kubo, Quasioptical modeling of wave beams with and without mode conversion. I. Basic theory, Phys. Plasmas 26, 072110 (2019).
* (32) S. W. McDonald, Phase-space representations of wave equations with applications to the eikonal approximation for short-wavelength waves, Phys. Rep. 158, 337 (1988).
* (33) L. Brewin, Riemann normal coordinates, smooth lattices and numerical relativity, Class. Quantum Gravity 15, 3085 (1998).
# Guiding Attention using Partial-Order Relationships for Image Captioning
Murad Popattia1, Muhammad Rafi1, Rizwan Qureshi1,2, Shah Nawaz3†
1National University of Computer and Emerging Sciences, Karachi, Pakistan,
2Hamad Bin Khalifa University, Doha, Qatar
3Pattern Analysis & Computer Vision (PAVIS) - Istituto Italiano di Tecnologia
(IIT)
###### Abstract
The use of attention models for automated image captioning has enabled many
systems to produce accurate and meaningful descriptions for images. Over the
years, many novel approaches have been proposed to enhance the attention
process using different feature representations. In this paper, we extend this
approach by creating a guided attention network mechanism that exploits the
relationship between the visual scene and text descriptions using spatial
features from the image, high-level information from the topics, and temporal
context from caption generation, all embedded together in an ordered embedding
space. A pairwise ranking objective is used for training this embedding space,
which allows similar images, topics, and captions in the shared semantic space
to maintain a partial order in the visual-semantic hierarchy and hence helps
the model produce more visually accurate captions. Experimental results on the
MSCOCO dataset show that our approach is competitive with many
state-of-the-art models on various evaluation metrics.
## 1 Introduction
††$\dagger$ Current Affiliation: Deutsches Elektronen-Synchrotron (DESY)
Recent successes of deep neural networks in computer vision, speech, and
natural language processing have prompted researchers to think beyond these
fields as separate entities and instead solve challenges at their
intersections [9, 33, 6, 2, 17, 19]. Generating descriptive and meaningful
captions for images, capturing their semantic content, is one such multimodal
inference problem [10, 3]. Despite its complexity, it has various
applications, including assistance for the visually impaired, intelligent
chat-bots, medical report generation, self-driving cars, and many more [23].
In general, an image captioning model should be able to detect objects and
their positions, map the relationships between them, and express these
relationships in a human-understandable language.
A typical image caption system consists of a convolutional neural network
(CNN) and a recurrent neural network (RNN), with CNN as the image encoder and
RNN as the sentence decoder [26, 21]. However, in order to capture the spatial
context from the image in an efficient manner, other approaches such as [32,
8, 34, 31] incorporate high-level information from topics or detected objects
as semantic features to the decoder model. Another line of research was to
make use of cross-modal associations between image and text features in a
joint-embedding space. Earlier research work [15, 13] treated images and
caption as a symmetric relationship by using Euclidean or cosine distances to
gauge similarities between these two modalities. On the other hand, in [25]
treated these associations as asymmetric by enforcing a hierarchical order
within the embedding space, and has shown to perform better than symmetric
relationships.
Figure 1: Examples of captions generated by humans (GT), attention (ATT), and
guided attention (T-OE-ATT). The words highlighted in the respective colors
compare the semantic detail captured by each approach.
A further improvement to this framework is the introduction of the attention
mechanism [22], which allows the decoder to focus on a sub-region of the image
when predicting the next word in the caption [28]. Rather than focusing only
on spatial attention, Lu et al. [16] presented a novel adaptive mechanism that
helps the attention module learn when to shift between spatial and temporal
context during word prediction. In addition, Anderson et al. [1] improved the
attention process by first detecting a set of salient image regions
(bottom-up) and then attending to these fixated regions (top-down). [30]
builds upon this concept by exploiting the semantic relationships between the
detected spatial regions using graph convolutional networks (GCNs). [11] makes
use of a similar approach but instead modifies the attention module by adding
a self-attention module on top of the conventional attention mechanism, which
helps the decoder draw relations between the various attended vectors.
On the other hand, Jiang et al. [12] focused on increasing the semantic
information fed to the decoder by using a fusion of multiple encoders, each
focusing on a different viewpoint, to build better representations for the
decoder. Likewise, Wang et al. [27] worked in a similar direction, guiding
attention using a hierarchy of semantic features. However, the lack of
inter-feature correlations between these encoders makes it difficult for the
decoder to leverage the associations in the resulting joint representations.
Lastly, rather than relying only on spatial cues from encoded features, Ke et
al. [14] improved the temporal coherence of the generated descriptions by
applying attention in both the visual and textual domains.
Along the same line of work that incorporates semantic associations between
different spatial regions using GCNs [30], our idea is to use multimodal
representations, namely ordered embeddings [25], as the semantic feature
vectors that guide the attention module. Similar to the late fusion of
features in [30], we use a weighted summation as the mechanism for fusing
these embeddings.
Overall, the main contributions of our work are threefold:
* •
We make use of ordered embedding features for topics and images to guide the
attention module instead of feeding them as low-level features. This step is
shown to improve the evaluation metrics; see the ablation study (Section 3.3.1).
* •
We incorporate a global weighted sum for fusing the “visual” and “temporal”
states instead of feeding them at each time-step separately, which helps the
model learn the best estimate of the attention required for each image.
* •
Lastly, we present an ablation study of each contribution and how it affects
the overall performance of the model on the MSCOCO dataset.
Figure 2: The overall framework of the proposed model, where $\Sigma_{w}$
represents a weighted summation and $\oplus$ denotes matrix addition. The
model consists of a feature extractor and a topic classifier that extract the
spatial features and topics of a given image. These semantic attributes are
then fed into a retrieval model, which arranges each image, topic, and caption
triplet in a partial-order hierarchy. The resulting embeddings are late-fused
using weighted summation and fed into the guiding-lstm, and the core-lstm then
makes use of its hidden state for temporal attention. Consequently, two
separate attention blocks are used, each attending to a different aspect of
decoding, and the resulting attention vectors are late-fused again in a
weighted fashion to produce captions.
## 2 Methodology
### 2.1 Overall Framework
Our approach follows the traditional encoder-decoder framework, where the
encoder is responsible for passing on the features used by the decoder to
output the most likely word during captioning. Figure 2 illustrates the
overall framework.
Similar to recent approaches that feed objects or topics to the encoder [34,
29], we use topics instead of objects, so as to capture both the “actors” and
the “activities” binding them. The encoder consists of three components: 1)
the topic classifier, 2) the feature extractor, and 3) the retrieval model. We
use a pre-trained deep CNN model as the feature extractor to extract visual
features from the image, and we train a multi-label topic classifier to
predict topics for given images. After that, we train a retrieval model which
embeds captions, images, and topics into a shared semantic space, in which a
similarity score can be calculated between them. Using embeddings helps to
better learn the latent relationships between image and topic features that
are otherwise lost during feature extraction, which in turn helps the
attention module describe and discriminate spatial regions more effectively
(details in Section 2.3).
Inspired by the simple yet effective architecture defined in [1], we use two
LSTM branches in the decoder, i.e., the guiding-lstm and the core-lstm. We
feed a weighted sum of the semantic embeddings of the images and topics as
input to the guiding-lstm at the first time-step, which gives the model a
better understanding of the alignment between the visual features and the
topics. We then utilize its hidden state $h_{t-1}$ for guiding the language
LSTM and the context vector $z_{t}$ used for attention. Instead of a visual
sentinel [16] with a sentinel gate to shift between spatial and temporal
attention, we use a weighted summation for fusing the attention weights. This
allows for a simpler architecture in terms of the learning parameters
involved, whilst maintaining accuracy during word prediction.
### 2.2 Topic Classifier
For extracting topics $T$, the ground-truth captions are concatenated to form
documents $D$, where each document $d$ corresponds to the captions $C$ for a
given image and contains a set of words $W$. After that, we train a Latent
Dirichlet Allocation (LDA) model [4], a probabilistic model that learns topic
representations from documents. The trained topic model outputs a set of topic
probabilities $T=\\{T_{1},T_{2},T_{3},\cdots,T_{n}\\}$.
For training the classifier, the topics are sampled and converted to one-hot
representations using the following function:
$f_{t_{i}\in T_{i}}(x)=\left\\{\begin{array}[]{rcl}1&\mbox{if}&P(x)\geq 0.1\\\ 0&\mbox{else}\end{array}\right.$ (1)
where $t_{i}$ represents a single topic from the set of topics $T$ for image
$i$ from a set of images $I$, and $P(x)$ represents the topic confidence from
LDA. We formulate this as a multi-label classification problem, since an image
can have multiple topics. A pre-trained CNN model is used to extract image
features, which are then fed into a feed-forward neural network with a sigmoid
activation on the prediction layer. This layer outputs an $N_{i}\times N_{t}$
matrix, where $N_{i}$ is the number of images and $N_{t}$ the number of
topics. We report the evaluation of the topic classifier in Section 4.1 of
the paper.
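To make this pipeline concrete, here is a minimal sketch of the topic-extraction step, assuming scikit-learn as the LDA implementation (the paper does not specify a toolkit). The two caption "documents" are illustrative stand-ins, while the vocabulary size, topic count and the 0.1 threshold follow Section 3.1 and Equation (1).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is the concatenation of all captions of one image.
documents = [
    "a man riding a wave on a surfboard a surfer in the ocean",
    "a red double decker bus driving down a city street",
]

vectorizer = CountVectorizer(max_features=5000, stop_words="english")
X = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=80, max_iter=100, random_state=0)
topic_probs = lda.fit_transform(X)           # shape: (n_images, 80)

# Equation (1): threshold the topic confidences into multi-hot labels.
labels = (topic_probs >= 0.1).astype(np.int32)
```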
### 2.3 Retrieval Model
The architecture of the retrieval model is inspired by the approaches in [32,
25]. It follows the idea of [13] of aligning captions and images in the same
space, but with a partial-order relation rather than a symmetric relation.
This is a more intuitive approach, as each image has captions with different
levels of detail; because these captions are so dissimilar, it is impossible
to map all of their embeddings close to the same image embedding using a
symmetric distance measure like cosine similarity. Maintaining an order,
however, is robust to this effect, as dissimilar captions can have embeddings
placed very far away from the image while remaining above it in the partial
order. The partial order relation can be defined as:
$x\preceq y$ $\Longleftrightarrow$ $x_{k}\geq y_{k}$ for every coordinate
$k$. That is, every value of the vector $x$ must be greater than or equal to
the corresponding value of the vector $y$ in the embedding space for the order
to hold.
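As a minimal illustration of this relation, the componentwise test below checks $x\preceq y$ for two hypothetical numpy vectors; the embedding values are invented for the example.

```python
import numpy as np

def precedes(x: np.ndarray, y: np.ndarray) -> bool:
    # x precedes y when every component of x dominates the matching component of y
    return bool(np.all(x >= y))

x = np.array([0.9, 0.8, 0.7])   # e.g. an image embedding
y = np.array([0.5, 0.8, 0.2])   # e.g. a caption embedding
print(precedes(x, y))            # True: the order x <= y holds componentwise
```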
We start with three entities i.e. images I, topics T and captions C. As per
[25], we utilized domain-specific encoders to extract features for training
the embeddings. For images and topics, we utilized the fully-connected
features from the feature-extractor and the topic features from the topic
classifier respectively. For captions, we used a Gated Recurrent Unit (GRU) as
the RNN-based text encoder instead of an LSTM, because of its computational
efficiency. These feature vectors are then weighted with $W_{I}$, $W_{T}$ and
$W_{C}$ before being projected into the embedding space:
$O_{i}=\|W_{I}\cdot f_{FE}(I)\|^{2}$ (2)
$O_{t}=\|W_{T}\cdot f_{TC}(T)\|^{2}$ (3)
$O_{c}=\|W_{C}\cdot GRU(C)\|^{2}$ (4)
$O_{i}$, $O_{t}$ and $O_{c}$ represent the order embeddings of the image,
topics and captions respectively. $f_{FE}(I)$ represents the image features
from the feature extractor, while $f_{TC}(T)$ represents the features from the
topic classifier. We use the L2 norm during encoding instead of an absolute
value function to mitigate overfitting [25].
Similarity Function The general notion of similarity between two vectors $x$
and $y$ in the embedding space can hence be quantified as the degree to which
a pair of points violates the partial order $x\preceq y$ [25]:
$S(x,y)=-\|\max(0,O_{y}-O_{x})\|^{2}$ (5)
where $O_{x}$ and $O_{y}$ represent the encoded feature vectors in the
embedding space. The negative sign reflects the fact that a positive
difference between $O_{y}$ and $O_{x}$ denotes a violation of the order, which
is penalised.
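A minimal numpy sketch of Equation (5) follows; the embedding vectors are illustrative. Pairs that satisfy the order incur no penalty, so $S(x,y)=0$, while each violating coordinate is penalised quadratically.

```python
import numpy as np

def similarity(o_x: np.ndarray, o_y: np.ndarray) -> float:
    # S(x, y) = -||max(0, O_y - O_x)||^2, Equation (5)
    return -float(np.sum(np.maximum(0.0, o_y - o_x) ** 2))

o_image   = np.array([0.9, 0.8, 0.7])
o_caption = np.array([0.5, 0.9, 0.2])
print(similarity(o_image, o_caption))  # -0.01: only the second coordinate violates the order
```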
Loss Function As in previous works that learn embeddings for cross-modal
retrieval tasks [15, 13], we reuse the pair-wise ranking loss objective to
increase the similarity for matching pairs and decrease it for the contrastive
terms by a margin $\alpha$:
$L(x,y)=\sum_{(x,y)}\Big(\sum_{x^{\prime}}\max\\{0,\alpha-S(x,y)+S(x^{\prime},y)\\}+\sum_{y^{\prime}}\max\\{0,\alpha-S(x,y)+S(x,y^{\prime})\\}\Big)$ (6)
where $(x,y)$ is the ground-truth pair while $(x^{\prime},y)$ and
$(x,y^{\prime})$ are contrastive terms. Our hierarchy has the image at the top
of the partial order, followed by captions, which are in turn bounded by the
topics.
Hence, the total loss can be defined as the summation of losses over all three
partial orders:
$L=L(I,C)+L(I,T)+L(C,T)$ (7)
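The following PyTorch sketch shows one plausible implementation of the ranking loss of Equation (6), using in-batch negatives as the contrastive terms; the batch construction and tensor shapes are our assumptions, while the margin $\alpha=0.05$ follows Section 3.1.

```python
import torch

def order_similarity(o_x, o_y):
    # S(x, y) from Equation (5), computed for every (x, y) pair in the batch.
    diff = torch.clamp(o_y.unsqueeze(0) - o_x.unsqueeze(1), min=0.0)
    return -(diff ** 2).sum(-1)                  # shape: (n_x, n_y)

def ranking_loss(o_x, o_y, alpha=0.05):
    s = order_similarity(o_x, o_y)               # s[i, j] = S(x_i, y_j)
    pos = s.diag()                               # matching pairs (x_i, y_i)
    cost_x = torch.clamp(alpha - pos.unsqueeze(0) + s, min=0.0)  # x' negatives
    cost_y = torch.clamp(alpha - pos.unsqueeze(1) + s, min=0.0)  # y' negatives
    mask = 1.0 - torch.eye(s.size(0))            # exclude the ground-truth pair itself
    return ((cost_x + cost_y) * mask).sum()

o_img, o_cap = torch.rand(4, 1024), torch.rand(4, 1024)
loss = ranking_loss(o_img, o_cap)                # one term of L = L(I,C)+L(I,T)+L(C,T)
```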
### 2.4 Caption Generation
We now describe the decoding phase of the model. The trained encoding
functions $O_{i}$ and $O_{t}$ are used to produce the embeddings for the image
and topics during feature extraction. We then use a weighted summation
( $\Sigma_{w}$ ) of these embeddings:
$\Sigma_{w(OE)}=\lambda\cdot O_{i}+(1-\lambda)\cdot O_{t}$ (8)
where $\lambda$ is a learnable parameter. The reason for a weighted-sum is to
allow the model to learn the relative importance of each embedding during
training. Different from the approach of [32], we focus on guiding the decoder
in a three-way manner, i.e. using the embedding information, the visual
features, and past information from the hidden states.
Dual-LSTM branch We use an auxiliary guiding-lstm to process the information
from the learned embeddings and feed its hidden state to both the context
vector $z_{t}$ and the core-lstm at the initial timestep $t=-1$:
$h_{t-1}=LSTM_{g}(\Sigma_{w(OE)})$ (9) $z_{t}=W_{g}h_{t-1}+W_{c}h_{t}$ (10)
$h_{t}=LSTM_{c}(x_{t},h_{t-1})$ (11)
where $h_{t-1}$ and $h_{t}$ represent the hidden states at the relevant
timesteps, and $W_{g}$ and $W_{c}$ are learnable parameters in the context
vector $z_{t}$. $LSTM_{g}$ and $LSTM_{c}$ represent the guiding and core LSTMs
respectively. The initial hidden state for $LSTM_{g}$ is initialized to zeros
and hence not shown in the formulation.
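A minimal PyTorch sketch of Equations (8)-(11) is given below. The class and parameter names are illustrative, not the authors' code; the dimensions follow Section 3.1 (1024-d embeddings, 512-d LSTM states, 256-d word vectors).

```python
import torch
import torch.nn as nn

class DualLSTMDecoderStep(nn.Module):
    def __init__(self, emb_dim=1024, hid=512, word_dim=256):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(0.5))        # lambda in Eq. (8)
        self.lstm_g = nn.LSTMCell(emb_dim, hid)           # guiding-lstm
        self.lstm_c = nn.LSTMCell(word_dim, hid)          # core-lstm
        self.W_g = nn.Linear(hid, hid, bias=False)
        self.W_c = nn.Linear(hid, hid, bias=False)

    def forward(self, o_i, o_t, x_t):
        fused = self.lam * o_i + (1 - self.lam) * o_t     # Eq. (8)
        h_g, _ = self.lstm_g(fused)                       # Eq. (9), zero initial state
        h_t, c_t = self.lstm_c(x_t, (h_g, torch.zeros_like(h_g)))  # Eq. (11)
        z_t = self.W_g(h_g) + self.W_c(h_t)               # Eq. (10)
        return z_t, h_t, c_t
```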
Spatial Attention Block This block is responsible for generating the attention
distribution over the important visual regions of the image. Similar to the
idea of soft attention [16], we utilize the context vector $z_{t}$ from
equation 10 instead of only the hidden state information used in [16], in
order to guide attention according to the partial-order relation between the
image and topic:
$\alpha_{t}=softmax(W_{\alpha}[W_{f}F_{L}+W_{z}z_{t}])$ (12)
$\rho_{s}=\sum_{i=1}^{N}\alpha_{ti}f_{i}$ (13)
where $F_{L}=\\{f_{1},f_{2},....f_{N}\\}$ represents the local image features
from the convolution layer just before the FC layer of the feature extractor,
$\alpha_{t}$ denotes the attention weights over the features in $F_{L}$,
$\alpha_{ti}$ denotes the weight over the $i$-th part of $F_{L}$ and $\rho_{s}$
denotes the spatial-context vector.
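A PyTorch sketch of Equations (12)-(13) follows; the projection dimensions are assumptions, while the $N=49$ local features of dimension 512 follow Section 3.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    def __init__(self, feat_dim=512, ctx_dim=512, att_dim=512):
        super().__init__()
        self.W_f = nn.Linear(feat_dim, att_dim, bias=False)
        self.W_z = nn.Linear(ctx_dim, att_dim, bias=False)
        self.W_a = nn.Linear(att_dim, 1, bias=False)

    def forward(self, F_L, z_t):
        # F_L: (B, N, feat_dim) local features, z_t: (B, ctx_dim) context vector
        e = self.W_a(self.W_f(F_L) + self.W_z(z_t).unsqueeze(1))
        alpha = F.softmax(e.squeeze(-1), dim=1)          # Eq. (12)
        rho_s = (alpha.unsqueeze(-1) * F_L).sum(dim=1)   # Eq. (13)
        return rho_s, alpha
```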
Temporal Attention Block The temporal block decides whether visual information
is required at all, or whether the next word can be predicted using past
information stored within the decoder [16]. To this end, we utilize the
information from the LSTM’s memory cell along with the context vector $z_{t}$,
which carries the residual embedding information from the previous timestep
and helps the temporal block decide whether the current timestep requires
attending to visual features. This is illustrated below:
$\rho_{t}=\tanh(c_{t})\bigodot\sigma({W}_{x}x_{t}+W_{z^{\prime}}z_{t})$ (14)
where $c_{t}$ is the memory cell of the core-lstm, $x_{t}$ is the word vector
at timestep t, $z_{t}$ denotes the context vector, $\bigodot$ refers to an
element-wise product and $\rho_{t}$ denotes the temporal-context vector.
Word Prediction Instead of keeping track of the temporal information for each
word, we let the model generalize the ratio between these attentions using a
weighted summation ( $\Sigma_{w}$ ). This is a simpler approach that relies
less on an attention gate at each timestep and instead generalizes from the
embedding context obtained from $z_{t}$.
$\Sigma_{w(ATT)}=\mu\cdot\rho_{s}+(1-\mu)\cdot\rho_{t}$ (15)
We then calculate the word probability over a vocabulary of possible words at
time t:
$p_{t}=softmax(f_{MLP}(z_{t}+\Sigma_{w(ATT)}))$ (16)
where $f_{MLP}$ denotes a dense layer with ReLU activation.
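The sketch below combines the temporal block of Equation (14), the learned fusion of Equation (15) and the word distribution of Equation (16) in PyTorch; the vocabulary size and module names are illustrative, and $\mu$ is a learnable scalar initialised to 0.5 as stated in Section 3.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedPrediction(nn.Module):
    def __init__(self, hid=512, word_dim=256, mlp_dim=1024, vocab=10000):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.5))
        self.W_x = nn.Linear(word_dim, hid, bias=False)
        self.W_zp = nn.Linear(hid, hid, bias=False)
        self.mlp = nn.Sequential(nn.Linear(hid, mlp_dim), nn.ReLU(),
                                 nn.Linear(mlp_dim, vocab))

    def forward(self, c_t, x_t, z_t, rho_s):
        gate = torch.sigmoid(self.W_x(x_t) + self.W_zp(z_t))
        rho_t = torch.tanh(c_t) * gate                       # Eq. (14)
        fused = self.mu * rho_s + (1 - self.mu) * rho_t      # Eq. (15)
        return F.softmax(self.mlp(z_t + fused), dim=-1)      # Eq. (16)
```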
## 3 Experiments
### 3.1 Implementation Details
As our model is divided into sub-components, we train each part separately
instead of training them end-to-end.
Feature Extractor We use a ResNet-$152$ [7] model trained on the ImageNet dataset.
The FC features are taken from the last layer of the CNN which have a
dimension of $2048$$\times$$1$. We use
$F_{L}=\\{f_{1},f_{2},....f_{N}\\},f_{i}\in R^{512}$ to represent the spatial
CNN features at each of the $N$ grid locations, where $N=49$.
Topic Classifier For training the topic model, we limit the vocabulary to the
top $5000$ words and train the LDA on these features for $100$ iterations. We
empirically set the number of topics to $80$. Increasing the number of topics
made the topic vectors sparser and decreased the recall of the
topic classifier. For the topic classifier, we used the image features
$R^{2048\times 1}$ to be fed into a $5$-layer feed-forward NN, with the
prediction layer $R^{80}$ having a sigmoid activation. The classifier was
optimized using SGD with a learning-rate of $0.1$ and momentum $0.9$. The
learning-rate was changed in case of plateauing with a patience of $0.4$ and a
factor of $0.2$.
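A compact PyTorch sketch of this classifier setup is shown below; the hidden widths of the five layers are our assumptions (only the 2048-d input and 80-d sigmoid output are specified above), and the plateau scheduler mirrors the stated factor of 0.2.

```python
import torch.nn as nn
import torch.optim as optim

classifier = nn.Sequential(
    nn.Linear(2048, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 80), nn.Sigmoid(),     # multi-label topic predictions
)
optimizer = optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2)
# a training loop would call scheduler.step(val_loss) once per epoch
```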
Retrieval Model For the retrieval model, we reused the FC image features
$R^{2048\times 1}$ from the feature extractor and the $R^{80}$ topic features
from the topic classifier in Section 2.2. The dimensions of the embedding
space and the GRU hidden state in equation (4) were set to $1024$, and the
margin $\alpha$ is set to $0.05$ as per [25].
Caption Model For the decoder, our model used LSTMs. The guiding and core
LSTMs both have a dimension of $512$. For the captions, we use a word
embedding size of $256$. During training, we see that downsizing and
concatenating FC image features with this embedding improved results. The
initial values for $\lambda$ in equation (8) and $\mu$ in equation (15) are
set to $0.5$ and learned during training. Furthermore, the number of units for
$f_{MLP}$ was set to $1024$. Lastly, for sampling the captions, we use a beam
size of $1$. The whole model was optimized using the Adam optimizer with a
mini-batch size of 128 and a learning rate of 0.001. The model was trained for
$10$ epochs on a Tesla T4 GPU; training finished in $10$ hours.
### 3.2 Datasets
We conducted experiments on the popular Microsoft COCO benchmark
(https://cocodataset.org/), as it has been widely used for benchmarking in the
related literature. We adopt the ‘Karpathy’ split setting [14], which includes
118,287 training images and 5K testing images for evaluation. Some images had
more than 5 corresponding captions; the excess captions are discarded for
consistency. We directly use the publicly available evaluation code provided
by Microsoft (https://github.com/tylin/coco-caption), which includes BLEU,
METEOR, ROUGE-L and CIDEr.
Approaches | BLEU-I | BLEU-II | BLEU-III | BLEU-IV | METEOR | ROUGE-L | CIDEr
---|---|---|---|---|---|---|---
Adaptive ATT [16] | 74.2 | 58.0 | 43.9 | 33.2 | 26.6 | - | 108.5
LSTM-A [31] | 75.4 | - | - | 35.2 | 26.9 | 55.8 | 108.8
RF-Net [12] | 76.4 | 60.4 | 46.6 | 35.8 | 27.4 | 56.5 | 112.5
Up-Down ATT [1] | 77.2 | - | - | 36.2 | 27.0 | 56.4 | 113.5
HAN [27] | 77.2 | 61.2 | 47.7 | 36.2 | 27.5 | 56.6 | 114.8
RDN [14] | 77.5 | 61.8 | 47.9 | 36.8 | 27.2 | 56.8 | 115.3
GCN-LSTM [30] | 77.4 | - | - | 37.1 | 28.1 | 57.2 | 117.1
AoA-Net [11] | 77.4 | - | - | 37.2 | 28.4 | 57.5 | 119.8
Ours (T-OE-ATT) | 77.0 | 61.2 | 47.1 | 35.9 | 28.4 | 57.3 | 115.9
Table 1: Performance comparison on the MSCOCO ’Karpathy’ test split for single models trained with cross-entropy loss without CIDEr optimization. (-) indicates metrics not provided. All values are percentages (%), with the highest bold-faced.

Approach | B-IV | METEOR | ROUGE-L | CIDEr
---|---|---|---|---
Topic | 25.5 | 22.9 | 50.1 | 80.2
T-OE(VGG) | 34.4 | 27.8 | 56.5 | 112.7
T-OE(Resnet) | 35.4 | 28.2 | 57.0 | 114.4
T-OE-ATT | 35.9 | 28.4 | 57.3 | 115.9
Table 2: Ablation study on MSCOCO ’Karpathy’ test split.
### 3.3 Evaluation
#### 3.3.1 Ablation Study
To study the effects of guiding the attention module, we design an ablation
experiment to assess the effect of 1) using an embedding space, 2) using a
different feature extractor, and 3) using embeddings along with attention, as
shown in Table 2. We see that the initial approach of feeding topics as low-
level features performs poorly. A dramatic improvement is seen when an
embedding space is used in the process, confirming the hypothesis that
embeddings serve as a better auxiliary guidance for attention. We term this
variant T-OE. Moreover, we assess the model’s performance with a less accurate
feature extractor such as VGG-19 [20], which incurred only a small change in
the metrics, signifying that the trained embeddings are robust to changes in
the feature extractor. Lastly, we incorporate attention in the process
(T-OE-ATT) and guide it using the trained embeddings, which improves the score
on all metrics, signifying the importance of the embeddings for guiding
attention.
#### 3.3.2 Quantitative Evaluation
In Table 1, we compare our proposed architecture with recent state-of-the-art
models on the MSCOCO dataset that make use of LSTMs in their decoder
architecture. For fair comparison, we report single-model scores for each
approach that uses the same CNN backbone as ours (ResNet [7]), without
ensembling or CIDEr optimization.
Our approach is able to outperform RF-Net [12] and HAN [27], signifying that
using a partial order is more suitable for building joint multi-modal
representations than using domain-specific encoders alone. Moreover,
incorporating attention with T-OE, as shown in Table 2, also helps us
outperform RDN [14] on notable metrics such as METEOR, ROUGE-L and CIDEr,
which suggests that improving the encoder or decoder alone yields smaller
gains than jointly improving both the feature representations and the caption
generation process. It is worth noting that, compared to our architecture,
RDN [14] and RF-Net [12] have a greater number of learning parameters (1.15B
parameters for RDN [14]), whilst our decoder comprises only 29M parameters and
still produces competitive results. Both GCN-LSTM [30] and AoA-Net [11] use
Faster-RCNN as their feature encoder, which is able to feed in region-level
information, while our model uses only the fully connected features from the
ResNet backbone and is still competitive on METEOR and ROUGE-L scores. It
should also be noted that AoA-Net [11] leverages self-attention mechanisms,
which have been used alongside transformers to produce state-of-the-art
results. Our work could be extended to incorporate region-level information
alongside topics, or to use a different attention mechanism to improve
results; we have not explored this in this study.
As our model uses LSTMs for caption generation, this comparison does not take
transformer-based architectures into account [5, 24]. Transformers are a
different class of architecture from LSTMs: they do not rely on step-by-step
recurrence and instead process their inputs in a parallel fashion [24], so
incorporating partial-order embeddings into this class of architecture could
also be a favourable research direction.
#### 3.3.3 Qualitative Evaluation
We assess our model qualitatively as illustrated in Figure 1. The baseline
model generates output from topic and image features, while the guided-
attention model uses topic and image embeddings. Without embeddings, we see
that the attended captions lack descriptive context for the visual features,
such as double-decker, grassy and drinking.
Figure 3: Examples of inaccurate captions from the model.
We also see an influence when comparing against the ground-truth captions,
where the model was able to capture semantic context such as parked instead of
sitting and drinking water instead of in the water. This is because the model
can draw associations between objects and actions thanks to the partial-order
information from the underlying topics of the captions fed into the decoder
module, showing how the attention was guided.
However, as shown in Figure 3, the attention module can pick up on noise from
these embedded features, such as confusing a bus with a truck. This is evident
from T-OE, where the caption contains truck even though it is absent from the
image. A plausible explanation is that bus and truck lie close together in the
embedding space. Moreover, relying on spatial attention can also lead to
mis-classifying objects in the image, from spatula to knife. This can be seen
from the caption generated by the model without T-OE, where the object is
misidentified as a knife.
## 4 Discussion
### 4.1 Evaluation of topic classifier and retrieval model
As the topic classifier and the embedding sub-space act as intermediaries to
the final model, we evaluate their performance on relevant metrics as well.
The output of the topic classifier is a set of sigmoid probabilities which are
converted into one-hot encodings. Precision alone is not sufficient for
evaluating these encodings, since a higher precision does not imply that the
model has good recall. Hence, we use an F-score with a $\beta$ more inclined
towards recall. The highest F1-score was achieved on the MSCOCO dataset, which
may be due to the larger amount of data used to train the model. We summarize
these results in Table 3.
Dataset | Precision | Recall | F1-Score
---|---|---|---
Flickr30k | 60.08 | 42.33 | 43.56
MSCOCO | 77.54 | 60.48 | 61.52
Table 3: Performance results of the topic classifier on the validation sets of
the Flickr30k and MSCOCO datasets
For the order-embedding model, we assess its quality by treating it as a
caption retrieval task. The metric used in this experiment is Recall@K, the
percentage of queries for which the correct item is retrieved among the top-K
results. We summarize these results in Table 4.
Dataset | R@1 | R@5 | R@10
---|---|---|---
Flickr30k | 35.2 | 61.9 | 73.4
MSCOCO | 49.5 | 79.6 | 89.3
Table 4: Performance results of the retrieval model on the validation sets of
the Flickr30k and MSCOCO datasets
The scores for both the topic classifier and the retrieval model were not
state-of-the-art but were sufficient to extract suitable features for the
training images. Improvements to these models, in terms of fine-tuning or
using a different architecture, might positively impact the overall captioning
accuracy, but this is beyond the scope of this paper.
### 4.2 Visualizing the embedding space
In this section, we present a high-level visualization of the partial-order
structure between images, topics and captions in the embedding space, as shown
in Figure 4.
The embedding space consists of three modalities, with images at the highest
order, captions at the center and topics at the lowest order of the hierarchy,
posing a lower bound for the captions. This hierarchical arrangement also
conforms with the cognitive arrangement of these modalities. Images are
generally abstract points from which we derive meaning about their context,
while isolated words such as topics can be used to complement images but carry
little meaning on their own. Captions, on the other hand, describe a story
which the spatial cues of the image support.
We can then view these captions as collections of words, each of which can
contribute to a topic. Treating the problem as a caption retrieval task, where
given an image the model outputs the set of all possible captions, setting a
lower bound with topics helps constrain this search space and reduces noise
from overlapping caption regions [32].
### 4.3 Analysis of the weighted summation for attention
Figure 4: Representation of order in the embedding space
Contrasting with the approach followed in [16], where the model is trained to
shift attention at each word-prediction step, we constrain the model to
determine an overall ratio of the spatial and temporal attention needed for
word prediction and keep this value static for all succeeding predictions.
Even when the values are set randomly, we allow the decoder to generalize from
a set of captions the amount of attention needed for each caption. For
testing, we set $\mu$ to 0.3, i.e. a weight of 0.3 for spatial and,
consequently, 0.7 for temporal attention. The reason for a higher ratio for
the temporal context is that it complements the RNN’s capability to work with
sequences. For the model we use the ATT approach, where the image features are
fed directly as spatial cues to the decoder. Figure 5 shows the learned ratios
after several iterations of training.
It can be seen that the model gradually learns to increase the gradient flow
from the spatial block of the attention module, signifying the need for visual
attention. However, we do notice some peaks in the flow of temporal
information. A plausible reason is that, while visual information is
necessary, it may not always be in line with temporal coherence when
describing images. Hence, we sample captions with different values of $\mu$,
as shown in Figure 6.
Figure 5: Weight distributions of spatial and temporal attention for several
iterations on MSCOCO dataset.
For a lower value of $\mu$ in Equation (15) of Section 2.4, the model allows
the flow of temporal information in the decoder, and hence we see a
time-relative aspect in sentences, with phrases such as “about to hit” and “is
laying on”. On the contrary, shifting the value of $\mu$ higher boosts the
gradient flow from the spatial block, filling in visual details from the image
such as “white shirt”, “white shorts”, “laptop computer” and “top of desk”.
However, we see that despite being rich in scene-specific details, the model
misses the global context of the image, highlighting the need for a good
balance between the two attention blocks. This is the reason we allow the
model to learn these weights during training.
## 5 Limitations
In this section, we discuss the architectural limitations of our work and also
explore future extensions to this approach. Firstly, the performance of the
decoding phase is dependent on the output of the topic classifier, which
bottlenecks the overall improvement from training. Moreover, most recent works
such as GCN-LSTM [30] and AoA-Net [11] make use of Faster-RCNN to feed in
region-level information; incorporating these object-level associations
alongside topics in the multi-modal embedding space could increase the
efficacy of our approach. Another limitation of our work is the use of
traditional attention mechanisms. Our study makes use of a soft-attention
mechanism, which involves averaging over feature maps. Comparing our approach
with HAN [27], which also makes use of soft attention, we gain a relative
improvement, as discussed in Section 3.3.2. However, our approach struggles
against AoA-Net [11], which uses a more robust attention mechanism. Moreover,
self-attention has been shown to improve performance over traditional
attention mechanisms, as in [11] and more notably in transformers [5, 24], and
could hence be combined with these multi-modal embeddings to improve
performance. Lastly, recent reinforcement-learning-based techniques such as
CIDEr optimization [18] have yielded state-of-the-art results for image
captioning; incorporating them into our study may further boost the
performance on the metrics used.
Figure 6: Sampled captions for varying $\mu$ values on the COCO dataset. A
higher value of $\mu$ denotes more weight being given to the spatial flow of
information within the decoder, and vice versa.
## 6 Conclusion
In this work, we proposed a new approach to guide the attention model by
exploiting partial-order relationships between image, captions and topics.
Arranging the image and textual modalities in an asymmetric fashion results in
more effective learning of the latent space. Hence, we make use of a multi-
modal embedding space that is able to arrange the visual and textual
modalities in an asymmetrical hierarchy where the caption embeddings are
bounded between image and topic features. We then make use of these joint
representations to guide the attention module. An extensive ablation study was
also performed, indicating that with ordered embeddings the attention model
was able to draw accurate links between semantically important regions of the
image when attending to them, which helped improve the overall
interpretability, syntax and descriptiveness of the captions. The proposed
architecture was not only simpler in terms of complexity, but also competitive
with many recent LSTM-based architectures. For next steps, promising
directions include incorporating the highlighted approach into transformers or
adapting the model architecture to be trained in an end-to-end manner.
## References
* [1] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086, 2018.
* [2] Omer Arshad, Ignazio Gallo, Shah Nawaz, and Alessandro Calefati. Aiding intra-text representations with visual context for multimodal named entity recognition. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 337–342. IEEE, 2019.
* [3] Shuang Bai and Shan An. A survey on automatic image caption generation. Neurocomputing, 311:291–304, 2018.
* [4] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022, 2003.
* [5] Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning, 2020.
* [6] Ignazio Gallo, Alessandro Calefati, and Shah Nawaz. Multimodal classification fusion in real-world scenarios. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR), volume 5, pages 36–41. IEEE, 2017.
* [7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [8] Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. Image captioning: Transforming objects into words, 2020.
* [9] Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R Hershey, Tim K Marks, and Kazuhiko Sumi. Attention-based multimodal fusion for video description. In Proceedings of the IEEE international conference on computer vision, pages 4193–4202, 2017.
* [10] MD Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. A comprehensive survey of deep learning for image captioning. ACM Computing Surveys (CsUR), 51(6):1–36, 2019.
* [11] Lun Huang, Wenmin Wang, Jie Chen, and Xiaoyong Wei. Attention on attention for image captioning. CoRR, abs/1908.06954, 2019.
* [12] Wenhao Jiang, Lin Ma, Yu-Gang Jiang, Wei Liu, and Tong Zhang. Recurrent fusion network for image captioning, 2018.
* [13] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137, 2015.
* [14] Lei Ke, Wenjie Pei, Ruiyu Li, Xiaoyong Shen, and Yu-Wing Tai. Reflective decoding network for image captioning, 2019.
* [15] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
* [16] Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 375–383, 2017.
* [17] Shah Nawaz, Muhammad Kamran Janjua, Ignazio Gallo, Arif Mahmood, Alessandro Calefati, and Faisal Shafait. Do cross modal systems leverage semantic relationships? In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019.
* [18] Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. CoRR, abs/1612.00563, 2016.
* [19] Muhammad Saad Saeed, Muhammad Haris Khan, Shah Nawaz, Muhammad Haroon Yousaf, and Alessio Del Bue. Fusion and orthogonal projection for improved face-voice association. arXiv preprint arXiv:2112.10483, 2021.
* [20] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [21] Moses Soh. Learning cnn-lstm architectures for image caption generation. Dept. Comput. Sci., Stanford Univ., Stanford, CA, USA, Tech. Rep, 2016.
* [22] Mike W Spratling and Mark H Johnson. A feedback model of visual attention. Journal of cognitive neuroscience, 16(2):219–237, 2004.
* [23] Gargi Srivastava and Rajeev Srivastava. A survey on automatic image captioning. In International Conference on Mathematics and Computing, pages 74–83. Springer, 2018.
* [24] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
* [25] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language, 2016.
* [26] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164, 2015.
* [27] Weixuan Wang, Zhihong Chen, and Haifeng Hu. Hierarchical attention network for image captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):8957–8964, Jul. 2019.
* [28] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. PMLR, 2015.
* [29] Zhongliang Yang, Yu-Jin Zhang, Sadaqat ur Rehman, and Yongfeng Huang. Image captioning with object detection and localization, 2017.
* [30] Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. Exploring visual relationship for image captioning. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
* [31] Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei. Boosting image captioning with attributes, 2016.
* [32] Niange Yu, Xiaolin Hu, Binheng Song, Jian Yang, and Jianwei Zhang. Topic-oriented image captioning based on order-embedding. IEEE Transactions on Image Processing, 28(6):2743–2754, 2018.
* [33] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041–13049, 2020.
* [34] Zhihao Zhu, Zhan Xue, and Zejian Yuan. Topic-guided attention for image captioning. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 2615–2619, 2018.
# System Information Decomposition
Aobo Lyu Department of Electrical and Systems Engineering, Washington
University in St. Louis, St. Louis, Missouri, United States of America, 63130
Swarma Research, Beijing, China, 102308 Bing Yuan Swarma Research, Beijing,
China, 102308 Ou Deng Graduate School of Human Sciences, Waseda University,
Tokorozawa city, Saitama, Japan, 359-1192 Mingzhe Yang School of Systems
Science, Beijing Normal University, Beijing, China, 100875 Swarma Research,
Beijing, China, 102308 Andrew Clark Department of Electrical and Systems
Engineering, Washington University in St. Louis, St. Louis, Missouri, United
States of America, 63130 Jiang Zhang School of Systems Science, Beijing
Normal University, Beijing, China, 100875 Swarma Research, Beijing, China,
102308
###### Abstract
In order to characterize complex higher-order interactions among variables in
a system, we introduce a new framework for decomposing the information entropy
of variables in a system, termed System Information Decomposition (SID).
Diverging from Partial Information Decomposition (PID) methods,
which quantify the interaction between a single target variable and a
collection of source variables, SID extends those approaches by equally
examining the interactions among all system variables. Specifically, we
establish the robustness of the SID framework by proving all the information
atoms are symmetric, which detaches the unique, redundant, and synergistic
information from the specific target variable, empowering them to describe the
relationship among variables. Additionally, we analyze the relationship
between SID and existing information measures and propose several properties
that SID quantitative methods should follow. Furthermore, by employing an
illustrative example, we demonstrate that SID uncovers higher-order
interaction relationships among variables that cannot be captured by current
measures of probability and information, and we provide two approximate
calculation methods verified on this case. This advance in higher-order
measures enables SID to explain why Holism posits that some systems cannot be
decomposed without loss of characteristics under existing measures, and offers
a potential quantitative framework for higher-order relationships across a
broad spectrum of disciplines.
Keywords: Information decomposition $\cdot$ Information entropy $\cdot$
Complex systems $\cdot$ Multivariate system $\cdot$ System decomposition
## 1 Introduction
Systems Science is a multidisciplinary field investigating the relationships
and interactions among internal variables within a system, with applications
spanning neuroscience, biology, social sciences, engineering, and finance [1,
2]. Complex systems are defined by many interconnected variables that engage
in intricate interactions, the understanding of which is critical for
predicting emergent properties, devising novel treatments, and optimizing
system performance.
In the field of information theory, mutual information is a widely employed
method for quantifying interactions between two variables by encapsulating
shared information or the reduction in uncertainty facilitated by each
variable [3]. However, mutual information is restricted to describing pairwise
interactions, which often proves inadequate for analyzing complex systems that
necessitate multivariate interaction assessments.
As a solution, Beer et al. introduced the Partial Information Decomposition
(PID) method, which characterizes information interactions between a target
variable and multiple source variables by decomposing the mutual information
shared among them [4]. In the past ten years, PID and related theories, such
as Information Flow Modes [5] and integrated information theory [6], have been
applied in many fields, such as quantitative identification of Causal
Emergence [7], dynamical process analysis [8] and information disclosure [9,
10]. However, PID-related techniques only decompose the partial information of
a single target variable at a time. As a result, selecting or constructing a
suitable and plausible target variable can be challenging or even infeasible
when addressing complex-systems problems, and it also raises questions as to
why certain variables are prioritized as targets over others.
Moreover, this variable-specific perspective results in a unidirectional
relationship between the specified target variable and the source variables,
which binds the information atoms to a specific target variable and makes them
insufficient for a comprehensive description of the relationships among
variables. This further limits our exploration of system functions and
properties, as many of these originate from the relationships between system
variables rather than from specific variables or their asymmetric properties.
To overcome these limitations, we need a system analysis method based on a
system perspective, analogous to the synchronization model [11] or the Ising
model [12], rather than a variable perspective like PID. Furthermore, this
method should capture the nature and characteristics of the system without
specifying or introducing any special variable, and also take into account all
the interactive relationships among all variables in the system, including
pairwise and higher-order relationships. Therefore, we propose System
Information Decomposition (SID), an innovative method that treats all system
variables equally and effectively captures their intricate interactions. This
novel approach enhances our capacity to scrutinize and understand the
complexities of multivariate systems.
Specifically, we first expand PID’s conceptual framework to a system horizon
by taking each variable in the system as the target variable in turn. Then,
without relying on any quantitative PID method, we prove the symmetry
properties of information decomposition based on a set-theoretic perspective
of information theory. This means that the values of the information atoms,
the non-overlapping units obtained by decomposing the variables’ information
entropy according to their relationships, are not affected by the choice of
target variable. Therefore, we put forward a general SID framework, wherein
redundant, synergistic, and unique information atoms become a multivariate
system’s property, reflecting the complex (pairwise and higher-order)
relationships among variables. Furthermore, we explore the connections between
existing information entropy indicators and the information atoms within the
SID framework while proposing the necessary properties for information atom
quantification and several variable calculation approaches. Through a detailed
case analysis, we provide an intuitive demonstration that SID can unveil
higher-order relationships within the system that cannot be captured by
existing probability or information measures. Finally, we discuss the
potential application scenarios and implications of SID from the philosophical
perspective of system decomposition as well as from areas such as Higher-order
Networks and theory of Causality.
Our contributions to Information and System Science are twofold. Firstly, the
SID framework broadens the application of information decomposition methods in
complex systems by introducing a methodology to decompose all variables’
entropy within a system. This achievement also unifies information entropy and
information decomposition onto one Venn diagram, where three variables can be
well represented on a two-dimensional plane. Secondly, this framework reveals
previously unexplored higher-order relationships that cannot be represented by
existing probability or information measures, providing a potential data-
driven quantitative framework for research on Higher-order Networks.
The remainder of this paper is organized as follows. Section 2 reviews the
development of information theory, PID and related research. Section 3 extends
the PID method to multivariate system scenarios, defines SID, shows the
connections between existing information entropy indicators and the
information atom. Section 4 presents the characteristics of the SID framework
through a case analysis. Then, Section 5 gives the properties for information
atom calculation and there possible calculation approaches. The significance
and potential applications of SID are discussed in Section 6.
## 2 Information Decomposition
### 2.1 Information Theory Framework
Shannon’s classical information theory has provided a robust foundation for
understanding information entropy [3]. Mutual information and conditional
entropy further decompose information and joint entropy according to the
pairwise relationship between variables, which can be intuitively shown in a
Venn diagram (Figure 1), a precise tool for depicting the information composition
within systems. In this paper, we explore the potential of Venn diagrams to
provide valuable insights into the complex decomposition of multivariate
systems and extend the entropy decomposition methods of classical information
theory.
Figure 1: Information Theory Venn Diagram.
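As a concrete reference for the quantities in this Venn diagram, the short numpy sketch below computes joint entropy, marginal entropies, mutual information and conditional entropy from a discrete joint distribution; the distribution itself is an illustrative example of two independent fair bits.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits of a probability vector (zeros are skipped)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.25, 0.25],
                 [0.25, 0.25]])          # independent fair bits
H_xy = entropy(p_xy.ravel())             # joint entropy H(X, Y) = 2
H_x = entropy(p_xy.sum(axis=1))          # marginal H(X) = 1
H_y = entropy(p_xy.sum(axis=0))          # marginal H(Y) = 1
I_xy = H_x + H_y - H_xy                  # mutual information I(X:Y) = 0 here
H_x_given_y = H_xy - H_y                 # conditional entropy H(X|Y) = 1
```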
### 2.2 Partial Information Decomposition Framework
In classical information theory, the joint mutual information may occasionally
be larger or smaller than the sum of the mutual information between individual
variables. Consequently, traditional redundant information calculations may
yield negative values, contradicting our intuitive understanding. To address
this phenomenon, Beer et al. proposed the Partial Information Decomposition
(PID) framework [4].
The PID framework facilitates the decomposition of joint mutual information
between multiple source variables and a target variable. Specifically, for a
random target variable $Y$ and random source variables
$X=\\{X_{1},X_{2},\cdots,X_{n}\\}$, the PID framework allows for the decomposition
of the information that $X$ provides about $Y$ into information atoms, such as
redundant, synergistic and unique information. These atoms represent the
partial information contributed by various subsets of $X$, individually or
jointly, providing a more nuanced understanding of the relationships between
the target and source variables.
Considering the simplest case of a system with three variables, one can employ
a Venn diagram to elucidate their interactions [4]. The unique information
$Un(Y:X_{1})$ from $X_{1}$ signifies the information that $X_{1}$ provides to
$Y$, which is not provided by $X_{2}$ and vice versa. In other words, unique
information refers to the contribution made by a specific source variable to
the target variable that is exclusive to that variable and not shared by other
source variables. Redundant information $Red(Y:X_{1},X_{2})$ represents the
common or overlapping information that $X_{1}$ and $X_{2}$ provide to $Y$.
Synergistic information $Syn(Y:X_{1},X_{2})$ captures the combined
contribution of $X_{1}$ and $X_{2}$ to $Y$, which cannot be obtained from
either variable individually.
Figure 2: Venn Diagram of PID.
###### Definition 1 (Redundant Information).
For an arbitrary multivariate system, we can select any variable as the target
variable $Y$ and the remaining variables as the source variables
${X_{1},\cdots,X_{n}}$. The redundant information
$Red(Y:{X_{1},\cdots,X_{n}})$ denotes the common or overlapping information
provided by the source variables [4], which is contained in each source [13].
Redundant information has the following properties [4]:
###### Axiom 1 (Symmetry of source variables).
$Red(Y:X)$ is invariant under permutations of $X$. For source variables
$X_{i}$ and $X_{j}$ from $\\{X_{1},\cdots,X_{n}\\}$, $i,j\in\\{1\cdots n\\}$,
we have $Red(Y:X_{i},\cdots,X_{j})=Red(Y:X_{j},\cdots,X_{i})$.
###### Axiom 2 (Self-redundancy).
When there is only one source variable, the redundant information is
equivalent to the mutual information between the target variable $Y$ and the
source variable $X_{i}$, i.e. $Red(Y:X_{i})=I(Y:X_{i})$.
###### Axiom 3 (Monotonicity).
The redundancy should exhibit a monotonically decreasing behavior with the
inclusion of additional inputs, i.e. $Red(Y:X_{1},\cdots,X_{n})\leq
Red(Y:X_{1},\cdots,X_{n-1})$, where $n\in N$.
Despite numerous quantitative methods for information atoms in PID, a widely
accepted method has yet to emerge, primarily because existing proposals can
yield negative solutions. Such inconsistencies undermine the notion of
information entropy as
a non-negative measure of uncertainty. To circumvent reliance on a specific
quantitative method, we employ classical mutual information and conditional
entropy for calculating the sum of the information entropy of certain
information atoms. Although this approach does not permit the precise
calculation of individual information atoms [4, 14], it ensures that the
framework remains independent of any specific PID calculation methods.
Consequently, when a particular PID calculation method computes the value of
one information atom, the information entropy of the remaining information
atoms is determined by the following Axiom:
###### Axiom 4 (Quantitative Computation).
In a three-variable system with a target variable $Y$ and source variables
$X_{i}$ and $X_{j}$, the following relationships hold:
$Un(Y:X_{i})=I(X_{i}:Y)-Red(Y:X_{i},X_{j})$
$Syn(Y:X_{i},X_{j})=I(Y:X_{i}|X_{j})-Un(Y:X_{i})=H(Y|X_{j})-H(Y|X_{i},X_{j})-Un(Y:X_{i})$
$Syn(Y:X_{i},X_{j})+Red(Y:X_{i},X_{j})+Un(Y:X_{i})+Un(Y:X_{j})=I(Y:X_{i},X_{j})$ [4]
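A minimal numeric sketch of this bookkeeping is given below for the classic XOR system $Y=X_{1}\oplus X_{2}$: once some PID method supplies $Red(Y:X_{1},X_{2})$ (here a placeholder value of zero, not a specific measure), the remaining atoms follow from Shannon quantities via Axiom 4.

```python
import numpy as np
from itertools import product

# p[x1, x2, y] for the XOR system Y = X1 xor X2 with uniform inputs.
p = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    p[x1, x2, (x1 + x2) % 2] = 0.25

def H(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

I_x1_y = H(p.sum((1, 2))) + H(p.sum((0, 1))) - H(p.sum(1))    # I(X1:Y) = 0
I_x2_y = H(p.sum((0, 2))) + H(p.sum((0, 1))) - H(p.sum(0))    # I(X2:Y) = 0
I_y_x1x2 = H(p.sum((0, 1))) + H(p.sum(2)) - H(p)              # I(Y:X1,X2) = 1

red = 0.0                                # placeholder for Red(Y:X1,X2)
un1 = I_x1_y - red                       # Un(Y:X1) = 0
un2 = I_x2_y - red                       # Un(Y:X2) = 0
syn = I_y_x1x2 - red - un1 - un2         # Syn(Y:X1,X2) = 1 bit
```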
Although several enlightening perspectives on PID have been proposed [4, 15,
16, 17, 18, 19], there is still no perfect quantitative definition. To keep
our work independent of any specific computational method, we need to explore
information decomposition and the properties of information atoms from a more
conceptual perspective. Given the high similarity between information
decomposition, especially the concept of redundant information, and the
set-theoretic notions of inclusion and overlap, set theory may allow us to
explore the properties of PID more deeply.
### 2.3 A Set-theoretic Understanding of PID
Kolchinsky’s remarkable work [13] offers an understanding based on set theory.
Given that PID is inspired by an analogy between information theory and set
theory [14], the redundant information can be understood as information sets
that the sources provide to the target. More specifically, the definition of
set intersection $\cap\\{X_{i}\\}$ in set theory means the largest set that is
contained in all of the $X_{i}$, and these set-theoretic definitions can be
mapped into information-theoretic terms by treating “sets” as random
variables, “set size” as entropy, and “set inclusion” as an ordering relation
$\sqsubset$, which indicates when one random variable is more informative than
another.
Considering a set of source variables $X_{1},\cdots,X_{n}$ and a target $Y$,
PID aims to decompose $I(Y:X_{1},\cdots,X_{n})$ into a set of non-negative
terms and obtain $Red(Y:X_{1},\cdots,X_{n})$, the common information provided
by all sources about the target. Therefore, redundant information can be
viewed as the
"intersection" of the information contributed by different sources, leading to
the following definition:
###### Definition 2 (Set Intersection of Information [13] ).
For a multivariate system, the redundant information from the source variables
$X_{1},\cdots,X_{n}$ to the target variable $Y$ is the information that every
source variable can provide to the target variable: the largest mutual
information between the target variable and a non-unique variable $Q$ that has
the ordering relation $\sqsubset$ with all source variables. That is,
$Red(Y:X_{1},\cdots,X_{n})=I_{\cap}(X_{1},\cdots,X_{n}\to Y):=\sup_{Q}\\{I(Q:Y):Q\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$
The ordering relation $\sqsubset$ is an analogue of the containment relation
$\subseteq$ in set theory; it is not fully specified but obeys some
assumptions: i) monotonicity of mutual information, $A\sqsubset B\Rightarrow
I(A:Y)\leq I(B:Y)$; ii) reflexivity, $A\sqsubset A$ for every variable $A$;
iii) for all sources $X_{i}$, $O\sqsubset X_{i}\sqsubset(X_{1},\cdots,X_{n})$,
where $H(O)=0$ and $(X_{1},\cdots,X_{n})$ indicates all sources considered
jointly. One example of such a partial order is $Q\sqsubset X$ if and only if
$H(Q|X)=0$. More derived properties can be found in Kolchinsky’s work [13].
## 3 System Information Decomposition
In this section, we develop a mathematical framework of SID. The objective of
this framework is to decompose the information of all variables within a
system based on their interrelationships. By addressing the limitation of PID,
which focuses solely on a single target variable, we progress towards multi-
variable information decomposition for systems.
### 3.1 Extension of PID in a System Scenario
The PID method only decomposes joint mutual information between multiple
source variables and a specific target variable, as illustrated by the
outermost circle of the Venn diagram in Figure 2. We redesign the Venn diagram
to extend this method and encompass a system-wide perspective, as demonstrated
in Figure 3. The system comprises two source variables, $X_{1}$ and $X_{2}$,
and one target variable, $Y$, represented by the three intersecting circles.
The area size within the figure signifies the information entropy of the
variables or information atoms, and the central area denotes the joint mutual
information, encompassing redundant, unique from $X_{1}$, unique from $X_{2}$,
and synergistic information. This arrangement aligns with the Venn diagram
framework of PID.
Figure 3: Venn diagram from different perspectives of PID.
To enhance the comprehensiveness of the framework, it is necessary to
elucidate the unexplored section of the updated Venn diagram (Figure 3). In addition to
the four sections of joint mutual information, the information entropy of the
target variable $Y$ contains an unaccounted-for area. According to Shannon’s
formula, this area corresponds to the joint conditional entropy of the source
variables to the target variable $H(Y|X_{1},X_{2})$, which also characterizes
the interrelationships between the target variable and the source variables.
In the SID framework, numerous joint conditional entropy exist, including one
that stands out: the joint conditional entropy originating from all variables
except the target variable. To optimize the usefulness of the SID framework,
we define this specific joint conditional entropy as the target variable’s
external information ($Ext$). The definition is grounded in the philosophical
assumption that everything is interconnected. Since joint conditional entropy
implies the uncertainty that cannot be eliminated by the internal variables of
the system, the variables capable of providing this information must exist
outside the system. To some extent, external information can emphasize the
relationship between the target variable and the entire system rather than
just a simple relationship with other variables. Therefore, we also consider
it a kind of information atom within the SID framework.
###### Definition 3 (External Information).
For a system containing variables $Y$ and $\\{X_{1},\cdots,X_{n}\\}$, the
external information $Ext(Y)$ is defined as
$Ext(Y)=H(Y|X_{1},X_{2},\cdots,X_{n})$.
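A minimal sketch of Definition 3 for a three-variable system follows; the joint distribution (a noisy copy $Y$ of $X_{1}$ plus an independent bit $X_{2}$) is an illustrative assumption, and $Ext(Y)$ is computed via the chain rule as $H(X_{1},X_{2},Y)-H(X_{1},X_{2})$.

```python
import numpy as np

def H(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

# Y copies X1 but flips with probability 0.1; X2 is an independent fair bit.
p = np.zeros((2, 2, 2))                   # axes: (x1, x2, y)
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1] += 0.5 * 0.5 * 0.9
        p[x1, x2, 1 - x1] += 0.5 * 0.5 * 0.1

ext_y = H(p) - H(p.sum(axis=2))           # Ext(Y) = H(Y|X1,X2) ~= 0.469 bits
```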
Thus, we have been able to decompose the target variable’s entropy into a
finite number of non-repeated information atoms according to the relationship
between it and the other variables in the system. Furthermore, we can apply
this information decomposition method to each variable in the system to
decompose the entire information entropy of the system, which results in a
preliminary version of the SID. For the convenience of expression, we use
$Un_{i-j}$, $Syn_{ij-k}$, and $Red_{ij-k}$ to represent $Un(X_{j},X_{i})$,
$Syn(X_{k}:X_{i},X_{j})$, and $Red(X_{k}:X_{i},X_{j})$ respectively. A Venn
diagram for a three-variable system is shown in Figure 4:
Figure 4: Venn diagram of SID’s Preliminary version.
### 3.2 Properties of Information Atoms
Although the preliminary version of SID can decompose all variables in a
system, the decomposition of each variable is carried out separately, and the
description of information atoms is directional (from source variables to the
target variable). For instance, the unique information provided by $X_{1}$ to
$X_{3}$ in Fig. 4 is not directly related to the unique information provided
by $X_{3}$ to $X_{1}$. To make the information atoms better reflect the
relationships among variables, and to unify the Venn diagrams of Shannon’s
framework (Section 2.1) and the PID framework (Section 2.2), it is necessary
to further explore the properties of information atoms within the SID
framework. In this subsection, we prove the symmetry property of information
atoms by demonstrating that unique, redundant, and synergistic information
atoms remain stable when different variables are considered as target
variables.
###### Theorem 1 (Symmetry of Redundant Information).
Let $X_{1},\cdots,X_{n}$ be the variables in a system. In SID, there is only
one redundant information $Red(X_{1},\cdots,X_{n})$, which implies that the
redundant information is equal irrespective of the chosen target variable.
Formally, we write
$Red(X_{1},\cdots,X_{n})=Red(X_{i}:{X_{1},\cdots,X_{n}}\setminus
X_{i}),\forall i\in\\{1\cdots n\\}$.
###### Proof.
Suppose we have a multivariate system containing a target variable $Y$ and
source variables $X_{1},\cdots,X_{n}$. For the convenience of expression, we
use $\mathcal{X}$ to represent all the source variables $X_{1},\cdots,X_{n}$.
The proof is to show that $Red(Y:\mathcal{X},Y)=Red(Y:\mathcal{X})$ and
$Red(U:\mathcal{X},Y)=Red(Y:\mathcal{X},Y)$, where $U$ is the union variable
of $Y$ and $\mathcal{X}$, such that $U=(\mathcal{X},Y)$. Then, we can
demonstrate that redundant information is equal regardless of which variable
is chosen as the target variable.
Step One, to prove $Red(Y:\mathcal{X},Y)=Red(Y:\mathcal{X})$:
By Definition 2,
$Red(Y:\mathcal{X},Y)=\sup_{Q_{j}}\\{I(Q_{j}:Y):Q_{j}\sqsubset
Y,Q_{j}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$. According to the
Monotonicity property of redundant information (Axiom 3) that adding new
source variables will only impose stricter restrictions on top of existing
ones, and the Symmetry property of source variables (Axiom 1) that the order
in which restrictions are imposed will not affect the results, we can make
this optimization problem into two steps, such that:
$\sup_{Q_{j}}\\{I(Q_{j}:Y):Q_{j}\sqsubset Y,Q_{j}\sqsubset X_{i},\forall
i\in\\{1\cdots n\\}\\}$
$=\sup_{Q_{j},Q_{k}}\\{I(Q_{j}:Y):Q_{j}\sqsubset Y,Q_{j}\sqsubset
Q_{k},Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$
$=\sup_{Q_{k}}\\{\sup_{Q_{j}}\\{I(Q_{j}:Y):Q_{j}\sqsubset Y,Q_{j}\sqsubset
Q_{k}\\}:Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$
$=\sup_{Q_{k}}\\{\sup_{Q_{j}}\\{H(Q_{j}):Q_{j}\sqsubset Y,Q_{j}\sqsubset
Q_{k}\\}:Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$, since
$Q_{j}\sqsubset Y$
$=\sup_{Q_{k}}\\{I(Q_{k}:Y):Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots
n\\}\\}$, since $\sup_{Q_{j}}\\{H(Q_{j}):Q_{j}\sqsubset Y,Q_{j}\sqsubset
Q_{k}\\}=I(Q_{k}:Y)$.
Therefore, $Red(Y:\mathcal{X},Y)=Red(Y;\mathcal{X})$
Step Two, to prove $Red(U:\mathcal{X},Y)=Red(Y:\mathcal{X},Y)$: Building upon
the conclusion that $Red(Y:\mathcal{X},Y)=Red(Y:\mathcal{X})$, we can replace
the target variable with the union variable $U=(\mathcal{X},Y)$, which
combines the target variable $Y$ and the source variables $\mathcal{X}$. (The
entropy of the union variable $U$ can be expressed as
$H(U)=H(\mathcal{X},Y)$.)
Firstly, we employ the contradiction method by assuming that
$Red(U:\mathcal{X},Y)<Red(Y:\mathcal{X},Y)$. That means that
$\sup_{Q_{j}}\\{I(Q_{j}:U):Q_{j}\sqsubset Y,Q_{j}\sqsubset X_{i},\forall
i\in\\{1\cdots n\\}\\}<\sup_{Q_{k}}\\{I(Q_{k}:Y):Q_{k}\sqsubset
Y,Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$. Let $Q_{j}^{*}$ and
$Q_{k}^{*}$ be variables that attain or approach these suprema arbitrarily
closely ($I(Q_{j}^{*}:U)=\sup_{Q_{j}}\\{I(Q_{j}:U):Q_{j}\sqsubset
Y,Q_{j}\sqsubset X_{i},\forall i\in\\{1\cdots
n\\}\\}-\varepsilon,\forall\varepsilon>0$, and $Q_{k}^{*}$ is defined
similarly). Since $Y\sqsubset U$ from $U=(\mathcal{X},Y)$, we have
$I(Q_{k}^{*},Y)\leq I(Q_{k}^{*},U)$. Given that $Q_{k}^{*}\sqsubset Y$ and
$Q_{k}^{*}\sqsubset X_{i}$ (the same restrictions as on $Q_{j}^{*}$), the
mutual information $I(Q_{j}^{*},U)$ must be greater than or equal to
$I(Q_{k}^{*},Y)$, which leads to a contradiction. Consequently, we can
conclude that $Red(U:\mathcal{X},Y)\geq Red(Y:\mathcal{X},Y)$.
Secondly, we again use the contradiction method, assuming that
$Red(U:\mathcal{X},Y)>Red(Y:\mathcal{X},Y)$. In this case,
$\sup_{Q_{j}}\\{I(Q_{j}:U):Q_{j}\sqsubset Y,Q_{j}\sqsubset X_{i},\forall
i\in\\{1\cdots n\\}\\}>\sup_{Q_{k}}\\{I(Q_{k}:Y):Q_{k}\sqsubset
Y,Q_{k}\sqsubset X_{i},\forall i\in\\{1\cdots n\\}\\}$. Consider the
$Q_{j}^{*}$ and $Q_{k}^{*}$ that attain or approach these suprema arbitrarily
closely. Since $Q_{j}^{*}\sqsubset Y$ and $Y\sqsubset U$ from
$U=(\mathcal{X},Y)$ ($H(Y|U)=0$), we have $I(Q_{j}^{*}:U)=I(Q_{j}^{*}:Y)$,
which leads to a contradiction ($I(Q_{j}^{*}:Y)>I(Q_{k}^{*}:Y)$ under the same
restrictions on $Q_{j}^{*}$ and $Q_{k}^{*}$). Therefore, we obtain
$Red(U:\mathcal{X},Y)\leq Red(Y:\mathcal{X},Y)$.
Since we have both $Red(U:\mathcal{X},Y)\geq Red(Y:\mathcal{X},Y)$ and
$Red(U:\mathcal{X},Y)\leq Red(Y:\mathcal{X},Y)$,
$Red(U:\mathcal{X},Y)=Red(Y:\mathcal{X},Y)$ is proved.
In summary: since we have established that
$Red(Y:\mathcal{X},Y)=Red(Y:\mathcal{X})$ and
$Red(U:\mathcal{X},Y)=Red(Y:\mathcal{X},Y)$, we can conclude that for all
$X_{i}$ in $\mathcal{X}$, $Red(X_{i}:Y,\\{\mathcal{X}\\}\setminus
X_{i})=Red(Y:\\{\mathcal{X}\\})$. Therefore, Theorem 1 is proved, and we can
use $Red(X_{1},\cdots,X_{n})$ or $Red_{1\cdots n}$ to denote the redundant
information within the system $\\{X_{1},\cdots,X_{n}\\}$. ∎
###### Theorem 2 (Symmetry of Unique Information).
Let $X_{1},\cdots,X_{n}$ be the variables in a system. In SID, the unique
information of any two variables relative to each other is equal, regardless
of which is chosen as the target variable. Formally, we write
$Un(X_{i}:X_{j})=Un(X_{j}:X_{i})$, $\forall i\neq j$ where
$i,j\in\\{1,\cdots,n\\}$.
###### Proof.
According to Axiom 4, unique information is a part of the information provided
by the source variable to the target variable, that is, mutual information
minus redundant information. In a three-variable system
$\\{X_{1},X_{2},X_{3}\\}$, we have
$Un(X_{i}:X_{j})+Red(X_{i}:X_{j},X_{k})=I(X_{i};X_{j})$, for all $i\neq
j\in\\{1,2,3\\}$. Since $I(X_{i}:X_{j})=I(X_{j}:X_{i})$ according to the
symmetry of Shannon’s formula [3], and
$Red(X_{i}:X_{j},X_{k})=Red(X_{j}:X_{i},X_{k})=Red(X_{i},X_{j},X_{k})$
according to Theorem 1, we have $Un(X_{i}:X_{j})=Un(X_{j}:X_{i})$. Therefore,
we can represent this information atom as $Un(X_{i},X_{j})$, or $Un_{i,j}$. ∎
###### Theorem 3 (Symmetry of Synergistic Information).
Let $X_{1},\cdots,X_{n}$ be the variables in a system. In SID, the synergistic
information of any group of variables is equal, regardless of which is chosen
as the target variable. Formally, we write
$Syn(X_{1},\cdots,X_{n})=Syn(X_{i}:\\{X_{1},\cdots,X_{n}\\}\setminus
X_{i}),\forall i\in\\{1\cdots n\\}$.
###### Proof.
According to Axiom 4, Theorem 1, Theorem 2, and the chain rule of Shannon
entropy, for a three-variable system $\\{X_{i},X_{j},X_{k}\\}$:
$\displaystyle Syn(X_{k}:X_{i},X_{j})$
$\displaystyle=H(X_{k}|X_{j})-H(X_{k}|X_{i},X_{j})-Un(X_{i},X_{k})$
$\displaystyle=(H(X_{j},X_{k})-H(X_{j}))-(H(X_{i},X_{j},X_{k})-H(X_{i},X_{j}))-Un(X_{i},X_{k})$
$\displaystyle=H(X_{j},X_{k})+H(X_{i},X_{j})-H(X_{j})-H(X_{i},X_{j},X_{k})-Un(X_{i},X_{k})$
$\displaystyle=(H(X_{i},X_{j})-H(X_{j}))-(H(X_{i},X_{j},X_{k})-H(X_{j},X_{k}))-Un(X_{i},X_{k})$
$\displaystyle=H(X_{i}|X_{j})-H(X_{i}|X_{j},X_{k})-Un(X_{i},X_{k})$
$\displaystyle=Syn(X_{i}:X_{j},X_{k})$
Therefore, Theorem 3 is proved, and we can write synergistic information in
the form $Syn(X_{1},\cdots,X_{n})$ or $Syn_{1\cdots n}$. ∎
Based on Theorems 1, 2, and 3 (the symmetry of information atoms), the SID
framework can be consolidated into the formal version in Figure 5. In the
formal version of SID, the concept of a target variable is dropped, and all
variables are decomposed equally according to their relationships with the
other variables. Specifically, redundant and unique information are merged: a
redundant information atom of any group of variables and a unique information
atom between any two variables each appear only once in the Venn diagram,
while a synergistic information atom appears in each participating variable
with the same value, and each variable contains one external information
atom. With this, we can give the formal definition of SID:
Figure 5: Venn diagram of SID’s Formal Version.
###### Definition 4 (System Information Decomposition Framework).
SID is a system decomposition framework based on information entropy that
divides the total information entropy of a multivariate system into non-
overlapping information atoms according to the relationships among variables.
In this framework, redundant information represents the common or overlapping
information of all the variables; unique information represents information
that is owned by exactly two variables and no others; and synergistic
information represents the information that can be obtained from any variable
only when the other variables are observed simultaneously.
In the SID framework, the Venn diagram unifies Shannon's framework (Section
2.1) and the PID framework (Section 2.2). Since Venn diagrams cannot present
systems with more than three variables on a two-dimensional plane, we only
present the simple case of a three-variable system ($\\{X_{1},X_{2},X_{3}\\}$)
in this paper. The presentation of SID for systems with more than three
variables is analyzed in the discussion section.
### 3.3 SID and Information Measure
In addition to Axiom 4 and Definition 3, which relate SID to mutual
information, conditional entropy, and joint conditional entropy, several
other important information measures deserve attention.
###### Corollary 1 (Joint Entropy Decomposition).
For any subsystem with 3 variables,
$H(X_{1},X_{2},X_{3})=Ext(X_{1})+Ext(X_{2})+Ext(X_{3})+Un(X_{1},X_{2})+Un(X_{1},X_{3})+Un(X_{2},X_{3})+2\,Syn(X_{1},X_{2},X_{3})+Red(X_{1},X_{2},X_{3})$.
Based on Corollary 1, which can be proved directly from Axiom 4, we can reach
a deeper understanding of information atoms: any information atom can be
understood as information stored by $m$ variables, of which at least $n$ must
be known to obtain the information ($m\geq n$, $m,n\in\mathbb{Z}$).
Specifically, the external information is owned by a variable independently,
so $m=1$ and $n=1$; redundant information is owned by all variables, so $m$
equals the number of variables and $n=1$; unique information is owned by two
variables, so $m=2$ and $n=1$; synergistic information is shared by all
variables, so $m$ equals the number of variables and $n$ equals the number of
variables minus one. Comparing with Corollaries 1 and 2, the joint entropy
decomposition is the sum of each information atom multiplied by its $n$,
while the total correlation decomposition below is the sum of each atom
multiplied by its $m-n$ (see the summary displayed below). This perspective
deepens our understanding of the essence of information atoms and facilitates
the exploration of the joint entropy decomposition of systems with more than
three variables. Besides, this phenomenon also reflects the differences
between information measures and Venn diagrams. Since Venn diagrams cannot
fully reflect the nature of information decomposition, alternative
visualization solutions are discussed in the discussion section.
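As a compact summary of this multiplicity reading (our notation, checked here
only for the three-variable systems of this paper), the two decompositions
can be written as

$H(X_{1},\ldots,X_{N})=\sum_{\text{atoms }A}n_{A}\,A\,,\qquad TC(X_{1},\ldots,X_{N})=\sum_{\text{atoms }A}(m_{A}-n_{A})\,A\,,$

which are mutually consistent because $\sum_{A}m_{A}\,A=\sum_{i}H(X_{i})$
(each variable's entropy contains every atom it owns) and
$TC=\sum_{i}H(X_{i})-H(X_{1},\ldots,X_{N})$, where $TC$ is the total
correlation of Corollary 2 below.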
###### Corollary 2 (Total Correlation Decomposition).
For any subsystem with 3 variables,
$TC(X_{1},X_{2},X_{3})=Un(X_{1},X_{2})+Un(X_{1},X_{3})+Un(X_{2},X_{3})+Syn(X_{1},X_{2},X_{3})+2\,Red(X_{1},X_{2},X_{3})$.
###### Corollary 3 (Intersection Information Decomposition).
For any system with 3 variables, its Intersection Information
$CoI(X_{1},X_{2},X_{3})=Red(X_{1},X_{2},X_{3})-Syn(X_{1},X_{2},X_{3})$.
According to the calculation
$CoI(X_{1},X_{2},X_{3})=H(X_{1},X_{2},X_{3})+H(X_{1})+H(X_{2})+H(X_{3})-H(X_{1},X_{2})-H(X_{1},X_{3})-H(X_{2},X_{3})$,
$CoI$ is symmetric and uniquely determined for a system, which also
corroborates the symmetry of the information atoms ($Syn$ and $Red$) to some
extent.
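To make these relations concrete, the following minimal Python sketch (all
function names are ours) derives every atom of a three-variable system once a
single atom is known, using the Axiom-4 relations that appear in the proofs
above, $I(X_{i};X_{j})=Red+Un(X_{i},X_{j})$ and
$I(X_{i};X_{j}|X_{k})=Un(X_{i},X_{j})+Syn$, together with the assumption,
consistent with Corollary 1, that $Ext(X_{i})=H(X_{i}|X_{j},X_{k})$; the XOR
gate serves as a sanity check for Corollaries 1 and 3.

```python
import itertools
from collections import defaultdict
from math import log2

def marg_entropy(p, axes):
    # Shannon entropy (bits) of the marginal over `axes`,
    # where p maps outcome tuples to probabilities.
    marg = defaultdict(float)
    for outcome, prob in p.items():
        marg[tuple(outcome[a] for a in axes)] += prob
    return -sum(q * log2(q) for q in marg.values() if q > 0)

def atoms_from_red(p, red):
    # Derive all atoms of a 3-variable system from one known atom (Red):
    #   I(Xi;Xj)    = Red + Un(i,j)
    #   I(Xi;Xj|Xk) = Un(i,j) + Syn
    # plus the assumption Ext(Xi) = H(Xi | rest).
    H = lambda *ax: marg_entropy(p, ax)
    I = lambda i, j: H(i) + H(j) - H(i, j)
    Icond = lambda i, j, k: H(i, k) + H(j, k) - H(i, j, k) - H(k)
    un = {pr: I(*pr) - red for pr in [(0, 1), (0, 2), (1, 2)]}
    syn = Icond(0, 1, 2) - un[(0, 1)]
    rest = {0: (1, 2), 1: (0, 2), 2: (0, 1)}
    ext = {i: H(i, *rest[i]) - H(*rest[i]) for i in range(3)}
    return un, syn, ext

# Sanity check on the XOR gate: X3 = X1 ^ X2 with independent fair bits.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in itertools.product((0, 1), repeat=2)}
red = 0.0                      # I(X1;X2) = 0 forces Red = 0 (Proposition 1 below)
un, syn, ext = atoms_from_red(p, red)
joint = marg_entropy(p, (0, 1, 2))
# Corollary 1: H(X1,X2,X3) = sum Ext + sum Un + 2*Syn + Red
assert abs(joint - (sum(ext.values()) + sum(un.values()) + 2 * syn + red)) < 1e-9
# Corollary 3: CoI = Red - Syn (here -1 bit)
coI = joint + sum(marg_entropy(p, (i,)) for i in range(3)) \
      - sum(marg_entropy(p, pr) for pr in [(0, 1), (0, 2), (1, 2)])
assert abs(coI - (red - syn)) < 1e-9
```

This also illustrates the point made in Section 5 below: once one atom is
supplied, Shannon quantities determine the rest.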
## 4 Case Studies
In this section, through a series of case analyses, we elucidate the unique
properties of the SID framework and its capacity to uncover higher-order
relationships that surpass the capabilities of current information and
probability measures.
Without loss of generality, we construct a case that includes both macro and
micro perspectives, which allows us not only to analyze the properties of SID
at the macro level but also to obtain "ground truth" from the known
micro-level properties.
First, we construct six uniformly distributed Boolean variables $a,b,c,d,e,f$,
ensuring that these variables are independent. We then create new variables by
performing XOR operations on the existing variables: let $g=c\oplus e$,
$h=d\oplus f$, $i=c\oplus f$, and $j=d\oplus e$, where $\oplus$ represents
XOR.
Next, we construct new macro variables by combining these micro variables: let
$X_{1}=(a,b,c,d)$, $X_{2}=(a,b,e,f)$, $X_{3}=(c,d,e,f)$, $X_{4}=(a,c,e,h)$,
$X_{5}=(a,b,g,h)$, $X_{6}=(a,b,i,j)$. The combination is simple
concatenation; e.g., when $a=1$, $b=0$, $c=1$, $d=1$, $X_{1}$ is equal to $1011$.
Appendix A provides a concrete example that matches this design. As the micro-
level variables are independent of each other, this combination ensures that
the properties of the macro variables are a combination of the properties of
the micro variables.
Then, we keep $X_{1}$ and $X_{2}$ fixed as common members and form different
three-variable systems (Cases 1-4) by adding $X_{3}$, $X_{4}$, $X_{5}$, and
$X_{6}$, respectively, as shown in Table 1. Knowing the microscopic dynamics
of these cases, we can analyze their characteristics under the SID framework
more intuitively.
Case | Variables | Micro-component | Micro-relationship
---|---|---|---
1 | $X_{1}$, $X_{2}$, $X_{3}$ | $abcd$, $abef$, $cdef$ | $abcdef$ are independent
2 | $X_{1}$, $X_{2}$, $X_{4}$ | $abcd$, $abef$, $aceh$ | $abcdef$ are independent, $h=d\oplus f$
3 | $X_{1}$, $X_{2}$, $X_{5}$ | $abcd$, $abef$, $abgh$ | $abcdef$ are independent, $g=c\oplus e$, $h=d\oplus f$
4 | $X_{1}$, $X_{2}$, $X_{6}$ | $abcd$, $abef$, $abij$ | $abcdef$ are independent, $i=c\oplus f$, and $j=d\oplus e$
Table 1: Cases Construction.
It is worth noting that these four cases yield identical results under
existing probability-theoretic and information-theoretic measures. Each
system has 64 equally probable outcomes, each variable has 16 equally
probable outcomes, the total information in the system is 6 bits, the
pairwise mutual information between variables is 2 bits, and each pairwise
conditional entropy is 2 bits. Existing system-analysis methods therefore
cannot distinguish these four examples.
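These figures are easy to verify numerically; the short Python sketch below
(variable names are ours) rebuilds the construction of Table 1 and computes
the joint entropy, the pairwise mutual information, and the pairwise
conditional entropies for all four cases.

```python
import itertools
from math import log2

# Rebuild the construction of Table 1; probabilities are uniform over 64 rows.
rows = []
for a, b, c, d, e, f in itertools.product((0, 1), repeat=6):
    g, h, i, j = c ^ e, d ^ f, c ^ f, d ^ e
    rows.append({"X1": (a, b, c, d), "X2": (a, b, e, f), "X3": (c, d, e, f),
                 "X4": (a, c, e, h), "X5": (a, b, g, h), "X6": (a, b, i, j)})

def H(*names):
    # Empirical joint entropy (bits) of the named macro variables.
    counts = {}
    for r in rows:
        key = tuple(r[n] for n in names)
        counts[key] = counts.get(key, 0) + 1
    n = len(rows)
    return -sum(c / n * log2(c / n) for c in counts.values())

for case, third in [(1, "X3"), (2, "X4"), (3, "X5"), (4, "X6")]:
    names = ["X1", "X2", third]
    joint = H(*names)                                     # 6.0 bits in every case
    mi = [H(x) + H(y) - H(x, y)
          for x, y in itertools.combinations(names, 2)]   # all 2.0 bits
    cond = [H(x, y) - H(y)
            for x, y in itertools.permutations(names, 2)] # all 2.0 bits
    print(case, joint, mi, cond)
```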
However, the four systems exhibit three distinct internal characteristics
under the SID framework. Since these examples comprise mutually independent
micro variables, we can intuitively map the micro variables to the information
atoms in each case. In Case 1, the micro variables $a,b$ provide 2-bit unique
information between $X_{1}$ and $X_{2}$ ($c,d$ correspond to $X_{1}$ and
$X_{3}$, $e,f$ correspond to $X_{2}$ and $X_{3}$). In Case 2, micro variable
$a$ provides 1-bit redundant information, while $b$, $c$, and $e$ provide
1-bit unique information between $X_{1}$ and $X_{2}$, $X_{1}$ and $X_{4}$,
$X_{2}$ and $X_{4}$ respectively. The XOR relationship between $d-f-h$
provides 1-bit synergistic information between variables. In Cases 3 and 4,
micro variables $a$ and $b$ provide 2-bit redundant information, and XOR
relationships of $c-e-g$, $d-f-h$, and $c-f-i$, $d-e-j$ provide 2-bit
synergistic information for the two cases, respectively. Figure 6 displays the
SID Venn diagrams for Cases 1–4.
(a) Case 1.
(b) Case 2.
(c) Case 3.
(d) Case 4.
Figure 6: SID Venn Diagrams for Cases 1-4. In Case 1, $a,b$ provide 2-bit
unique information between $X_{1}$ and $X_{2}$ ($c,d$ correspond to $X_{1}$
and $X_{3}$, $e,f$ correspond to $X_{2}$ and $X_{3}$). In Case 2, $a$ provides
1-bit redundant information, $b$, $c$, and $e$ provide 1-bit unique
information between $X_{1}$ and $X_{2}$, $X_{1}$ and $X_{4}$, $X_{2}$ and
$X_{4}$ respectively. The XOR relationship between $d-f-h$ provides 1-bit
synergistic information. In Cases 3 and 4, $a,b$ provide 2-bit redundant
information, XOR relationships of $c-e-g$, $d-f-h$, and $c-f-i$, $d-e-j$
provide 2-bit synergistic information for the two cases, respectively.
## 5 Calculation of SID
Although we have proposed the framework of SID and proved the symmetry of
information atoms, the problem of exact computation has not been fully
resolved. Therefore, in this section, we instead propose the properties that
the calculation method of the SID framework should satisfy, and we accept any
method that meets these properties. Additionally, we propose a direct method
for some special cases and two novel methods for more general cases, and we
validate their accuracy and applicability on the cases of Section 4.
### 5.1 Properties of Calculation Methods for SID
###### Property 1 (Shannon’s formula).
The sum of certain information atoms should equal the corresponding mutual
information and conditional entropies. For a three-variable system, this is
Axiom 4.
The information atoms can be regarded as a finer-grained division of
Shannon's information entropy, so quantities such as entropy, mutual
information, and conditional entropy pin down the sums of certain information
atoms exactly; any calculation for SID must therefore conform to the Shannon
formulas. It is worth noting that once a specific calculation method
determines the value of one information atom, the values of the remaining
atoms follow from Axiom 4. This means that a calculation method for SID only
needs to focus on one information atom in the system.
###### Property 2 (Computational Symmetry).
The results of SID calculation should satisfy Theorems 1, 2, and 3.
For the same system, the order of variables in the calculation method will not
affect the results. This ensures that the SID framework provides a consistent
decomposition of information entropy, regardless of the order of variables.
Specifically, for redundant information and synergistic information, changing
the order of any variable in the calculation method will not change the
result; for unique information, exchanging the positions of the two focused
variables or changing the order of the remaining variables will not change the
result.
###### Property 3 (Non-negativity of information atoms).
After applying SID, the value of any information atom is greater than or
equal to zero. This non-negativity property holds because information
measures, which quantify degrees of uncertainty, are always non-negative
according to the principles of information theory.
Although the computational problem of information atoms has not been
completely solved, much like finding a Lyapunov function, for a specific case
we can often obtain the result with targeted methods, analysis, and some
intuition. For example, a direct and rigorous method is to use Properties 1
and 3.
###### Proposition 1 (Direct Method).
If certain mutual information or conditional entropy is zero, we can directly
draw the conclusion that: (1) the redundant information and the corresponding
unique information are zero if some mutual information is zero, or (2) the
synergistic information and the corresponding unique information are zero if
some conditional entropy is zero. Then, we can obtain the values of the
remaining information atoms.
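As a worked illustration (helper names are ours, and we again take
$Ext(X_{i})=H(X_{i}|X_{j},X_{k})$, an assumption consistent with Corollary
1), the Python sketch below applies case (1) of the Direct Method to the AND
gate $X_{3}=X_{1}\wedge X_{2}$ with independent fair bits $X_{1},X_{2}$:
since $I(X_{1};X_{2})=0$, the redundant information and $Un(X_{1},X_{2})$
vanish, and all remaining atoms follow from Shannon quantities.

```python
import itertools
from collections import defaultdict
from math import log2

# Direct Method, case (1), on the AND gate: X3 = X1 AND X2.
# I(X1;X2) = 0, so Red = 0 and Un(X1,X2) = 0; the rest follows from Axiom 4.
p = {(x1, x2, x1 & x2): 0.25 for x1, x2 in itertools.product((0, 1), repeat=2)}

def H(axes):
    marg = defaultdict(float)
    for o, pr in p.items():
        marg[tuple(o[a] for a in axes)] += pr
    return -sum(q * log2(q) for q in marg.values() if q > 0)

i12 = H((0,)) + H((1,)) - H((0, 1))                  # 0.0 -> Red = Un12 = 0
syn = H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,)) # I(X1;X2|X3) = Un12 + Syn
un13 = H((0,)) + H((2,)) - H((0, 2))                 # I(X1;X3) = Red + Un13
un23 = H((1,)) + H((2,)) - H((1, 2))
ext = [H((0, 1, 2)) - H((1, 2)),                     # Ext(Xi) = H(Xi|rest)
       H((0, 1, 2)) - H((0, 2)),
       H((0, 1, 2)) - H((0, 1))]
total = sum(ext) + un13 + un23 + 2 * syn             # Corollary 1 check
print(round(i12, 3), round(syn, 3), round(un13, 3), round(total, 3))
# -> 0.0 0.189 0.311 2.0
```

Unlike the uniform cases above, this example produces fractional atoms, yet
the joint entropy decomposition of Corollary 1 still closes exactly.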
For a more general scenario, we are going to give a calculation formula that
can be applied to most situations and a neural network method that can give
approximate values.
### 5.2 A Calculation Formula
Although we can handle some cases through the Direct Method (Proposition 1)
or from the construction of the cases, as in the analysis of Section 4, to
make the SID framework applicable in a wider range of scenarios we need a
general solution for the information atoms. After analyzing a large number of
cases with known results, and combining some intuition, we reveal a
correspondence between the values of information atoms and certain structures
in the data, which we call the Synergistic Block and the Unique Block. Based
on this correspondence, we propose an identification method for unique and
synergistic information and further construct a formula for calculating
synergistic information that is applicable in most cases.
###### Definition 5 (Synergistic Block and Unique Block).
For a full probability table containing the values of all variables, fix a
certain value of one variable (say $X_{1}=x_{1}$) and collect the possible
values $j$ and $k$ of the remaining variables under this condition
($j\in\\{X_{2}|X_{1}=x_{1}\\}$, $k\in\\{X_{3}|X_{1}=x_{1}\\}$). Then mark all
rows in which the remaining variables ($X_{2}$, $X_{3}$) take these values
while the fixed variable takes other values ($X_{2}=j$ with $X_{1}\neq
x_{1}$, $X_{3}=k$ with $X_{1}\neq x_{1}$). Rows in which both marked values
occur simultaneously, such that $X_{2}=j$ and $X_{3}=k$ when $X_{1}\neq
x_{1}$, form a Synergistic Block. Rows in which only one marked value occurs
form a Unique Block: $X_{2}=j$ and $X_{3}\neq k$ when $X_{1}\neq x_{1}$ for
$X_{2}$, or $X_{2}\neq j$ and $X_{3}=k$ when $X_{1}\neq x_{1}$ for $X_{3}$.
Taking Table A.1 as an example, we fix the value $X_{1}=0000$ and mark the
values of all variables in this scenario in yellow. Then we mark in pink the
values for which $X_{2}$ to $X_{6}$ still take the same values when
$X_{1}\neq 0000$. Taking $X_{1}$, $X_{2}$, and $X_{4}$ as examples, we mark
the synergistic blocks in bold and the unique blocks of $X_{2}$ and $X_{4}$
in italics. Besides, although not as obvious as the previous two, redundant
information also has corresponding redundant blocks.
###### Proposition 2 (Information Atom Identification).
The synergistic information is greater than zero if and only if a synergistic
block exists. For a three-variable system $\\{X_{1},X_{2},X_{3}\\}$,
$Syn(X_{1},X_{2},X_{3})>0$ iff $P(X_{2}=j$, $X_{3}=k$, $X_{1}\neq x_{1}$,
$j\in\\{X_{2}|X_{1}=x_{1}\\}$, $k\in\\{X_{3}|X_{1}=x_{1}\\})>0$. The unique
information between two variables is greater than zero if and only if, fixing
either of them, the remaining variable has a unique block. That is,
$Un(X_{1},X_{2})>0$ iff $P(X_{2}\neq j$, $X_{3}=k$, $X_{1}\neq x_{1}$,
$j\in\\{X_{2}|X_{1}=x_{1}\\}$, $k\in\\{X_{3}|X_{1}=x_{1}\\})>0$.
Based on Proposition 2, we construct a formula for calculating synergistic
information. The specific calculation for a three-variable system involving
$X_{1}$, $X_{2}$, and $X_{3}$ is as follows:
$\displaystyle Syn(X_{1},X_{2},X_{3})=\sum_{x_{1},x_{2},x_{3}}P(x_{1},x_{2},x_{3})\,\log\!\Bigg(\frac{P(X_{2}=x_{2},X_{3}=k,k\in\\{X_{3}|X_{1}=x_{1}\\})}{P(X_{2}=x_{2}|X_{1}=x_{1})}\cdot\frac{P(X_{3}=x_{3},X_{2}=j,j\in\\{X_{2}|X_{1}=x_{1}\\})}{P(X_{3}=x_{3}|X_{1}=x_{1})}\cdot\frac{P(X_{1}=x_{1})}{P(X_{2}=j,X_{3}=k,j\in\\{X_{2}|X_{1}=x_{1}\\},k\in\\{X_{3}|X_{1}=x_{1}\\})}\Bigg)-H(X_{1}|X_{2},X_{3})$ (1)
For the cases of Section 4, the data are uniform, so fixing any value of
$X_{1}$ gives the same result, and we can quickly compute the synergistic
information of the four cases by fixing $X_{1}=0000$. In these cases, the
$\log$ part of the formula can be intuitively understood as
$\log((\text{yellow}+\text{synergistic block})/\text{yellow})$, which is
$\log(4/4)=0$ in Case 1, $\log(8/4)=1$ in Case 2, and $\log(16/4)=2$ in Cases
3 and 4. Unique information can be computed by a similar counting,
$\log((\text{yellow}+\text{unique block})/\text{yellow})$.
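The following Python sketch implements one reading of this counting shortcut
(all names are ours): it fixes $X_{1}=0000$, collects the admissible
("yellow") values of the other two variables, and compares how many joint
values survive when $X_{1}$ varies; since $H(X_{1}|X_{2},X_{3})=0$ in all
four cases, Equation (1) reduces to the counting ratio here.

```python
import itertools
from math import log2

rows = []
for a, b, c, d, e, f in itertools.product((0, 1), repeat=6):
    g, h, i, j = c ^ e, d ^ f, c ^ f, d ^ e
    rows.append({"X1": (a, b, c, d), "X2": (a, b, e, f), "X3": (c, d, e, f),
                 "X4": (a, c, e, h), "X5": (a, b, g, h), "X6": (a, b, i, j)})

def syn_by_blocks(third):
    x1_0 = (0, 0, 0, 0)
    yellow2 = {r["X2"] for r in rows if r["X1"] == x1_0}
    yellow3 = {r[third] for r in rows if r["X1"] == x1_0}
    yellow_joint = {(r["X2"], r[third]) for r in rows if r["X1"] == x1_0}
    # Joint values whose components are both admissible; the ones beyond
    # yellow_joint are exactly the synergistic blocks of Definition 5.
    marked_joint = {(r["X2"], r[third]) for r in rows
                    if r["X2"] in yellow2 and r[third] in yellow3}
    return log2(len(marked_joint) / len(yellow_joint))

for case, third in [(1, "X3"), (2, "X4"), (3, "X5"), (4, "X6")]:
    print(case, syn_by_blocks(third))   # 0.0, 1.0, 2.0, 2.0
```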
### 5.3 An Approximate Method by Neural Information Squeezer
Another possible method is to use a generalized form of the Neural
Information Squeezer (NIS, a machine learning framework based on invertible
neural networks proposed in Ref. [20]) to numerically calculate the
redundancy of the system, and then to derive the other information atoms.
Figure 7: A generalized form of the Neural Information Squeezer network (NIS,
see [20]) to calculate the mutual information (a) and redundancy (b) of a trivariate
system ($X,Y,Z$). In (a), there are two invertible neural networks
($\psi,\phi$) which can play the roles of encoder and decoder, respectively.
The whole network accepts the input $X$ to predict $Y$, and the intermediate
variable $\hat{Y}_{X}$, which is the minimum low-dimensional representation of
$X$, can be used to calculate the mutual information between $X$ and $Y$. In
(b), two NIS networks are stacked together. The first one is just the network
in (a), and the intermediate variable $\hat{Y}_{X}$ is fed into the second NIS
network to predict $Z$. Then the intermediate variable,
$\hat{Z}_{\hat{Y}_{X}}$ which is the minimum low-dimensional representation of
$\hat{Y}_{X}$, can be used to calculate the redundancy of the system
$\\{X,Y,Z\\}$.
As shown in Figure 7(a), the NIS framework has two parts: an encoder and a
decoder. The encoder accepts any real vector variable with dimension $p$ and
contains two operators: a bijector $\psi$ modeled by an invertible neural
network (see details in [20]) with dimension $p$, and a projector $\chi$
which drops the last $p-q$ dimensions of $\psi(X)$ to form the variable $U$.
The remaining part, $\hat{Y}_{X}$, can be regarded as a low-dimensional
representation of $X$, which is used to construct the target $Y$ via another
invertible neural network $\phi$ mapping $[V,\hat{Y}_{X}]$ into $\hat{Y}$,
where $V\sim\mathcal{N}(0,I)$ is a $(p^{\prime}-q)$-dimensional Gaussian
random noise and $p^{\prime}$ is the dimension of $Y$. The whole framework is
then trained to ensure that (1) $\hat{Y}$ approximates the target variable
$Y$, and (2) $U$ follows a $(p-q)$-dimensional standard normal distribution.
It can be proven that the following proposition holds:
###### Proposition 3.
For any random variables $X$ of dimension $p$ and $Y$ of dimension
$p^{\prime}$, with $p$ and $p^{\prime}$ sufficiently large, we can use the
framework of Figure 7(a) to predict $Y$ by squeezing the information channel
of $\hat{Y}_{X}$ to the minimum dimension $q^{*}$ that still satisfies
$\hat{Y}\approx Y$ and $U\sim\mathcal{N}(0,I)$. Further, if we suppose
$H(X)>H(X|Y)>0$, then:
$H(\hat{Y}_{X})\approx I(X;Y),$ (2)
and
$H(U)\approx H(X|Y).$ (3)
We provide the proof in the appendix. The reason we require the dimensions of
$X$ and $Y$ to be large is that the maximal $q$ allowing accurate predictions
may not be an integer if $p$ and $p^{\prime}$ are small. We can therefore
enlarge the dimensions by duplicating the vectors.
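To fix ideas, here is a minimal conceptual sketch of the encoder/decoder
structure in PyTorch, assuming a single RealNVP-style affine coupling layer
per bijector and a crude moment-matching penalty in place of the
distribution-matching objective of Ref. [20]; all class and variable names
are ours, and the actual NIS uses deeper invertible architectures.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One RealNVP-style coupling block: invertible by construction, since the
    # first half of the vector passes through unchanged and parameterizes an
    # affine map of the second half.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):
        a, b = x[:, :self.half], x[:, self.half:]
        s, t = self.net(a).chunk(2, dim=1)
        return torch.cat([a, b * torch.exp(s) + t], dim=1)

class NIS(nn.Module):
    # Encoder psi (invertible), projector (keep the first q dimensions as
    # Y_hat_x, drop the rest as U), and invertible decoder phi fed [V, Y_hat_x].
    def __init__(self, p, p_out, q):
        super().__init__()
        self.q, self.p_out = q, p_out
        self.psi = AffineCoupling(p)
        self.phi = AffineCoupling(p_out)

    def forward(self, x):
        z = self.psi(x)
        y_hat_x, u = z[:, :self.q], z[:, self.q:]
        v = torch.randn(x.shape[0], self.p_out - self.q)  # Gaussian noise V
        y_hat = self.phi(torch.cat([v, y_hat_x], dim=1))
        return y_hat, y_hat_x, u

def training_loss(y_hat, y, u):
    # (1) reconstruct Y; (2) push U towards N(0, I) by crude moment matching.
    normality = (u.mean(0) ** 2).mean() + ((u.std(0) - 1.0) ** 2).mean()
    return nn.functional.mse_loss(y_hat, y) + normality
```

Stacking a second such network that takes $\hat{Y}_{X}$ as input and predicts
$Z$ yields the intermediate variable $\hat{Z}_{\hat{Y}_{X}}$, whose entropy
approximates the redundancy, as described next.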
To calculate the redundancy of a system with three variables $X,Y,Z$, we can
use the NIS network twice, as shown in Figure 7(b). The first NIS network
uses the intermediate variable $\hat{Y}_{X}$, the dense low-dimensional
representation of $X$ with minimum dimension $q$, to construct $Y$. The
second NIS network then uses $\hat{Z}_{\hat{Y}_{X}}$, the minimum-dimension
dense representation of $\hat{Y}_{X}$, to construct $Z$. After these two
steps, the Shannon entropy of the intermediate variable of $NIS_{2}$,
$\hat{Z}_{\hat{Y}_{X}}$, can approach the redundancy. Thus, the redundancy of
the system can be calculated approximately as:
$Red(X,Y,Z)\approx H(\hat{Z}_{\hat{Y}_{X}}).$ (4)
To verify that $Red(X,Y,Z)$ calculated in this way can be regarded as the
redundancy of the system, we need to prove that Equation 4 satisfies the
property of symmetry under all permutations of $X,Y,Z$, i.e., the following
proposition:
###### Proposition 4.
For a system with three random variables $X,Y,Z$, suppose without loss of
generality that the conditional entropies satisfy $H(X)>H(X|Y)>0$,
$H(X)>H(X|Z)>0$, and $H(Y)>H(Y|X)>0$; then the redundancy calculated by
Equation 4 is symmetric:
Note that $Red(X,Z,Y)\approx H(\hat{Y}_{\hat{Z}_{X}})$ differs from
$Red(X,Y,Z)$ in that the order of the predictions from $X$ is first $Z$ and
then $Y$.
The proof of Proposition 4 is also provided in the appendix. With the
calculation of redundancy, we can easily calculate the unique and synergistic
information atoms. Furthermore, we can extend the method to systems with more
variables by stacking more NIS networks in the same way as shown in Figure
7(b). However, this method has two disadvantages: first, the calculation is
inexact and requires a large number of training epochs; second, the
dimensions of all variables must be large enough that the independent
information among the variables can be discarded by dropping dimensions.
Further studies are needed.
To verify that the NIS framework can calculate redundant information, we
conducted numerical experiments using Case 3 as an example, as Figure 8
shows; there, the mutual information between each pair of variables and the
redundant information are both 2 bits.
In this experiment, variable $X_{1}$ is used as the input of NIS1, with
$X_{2}$ predicted as the target $Y$, and the intermediate variable
$\hat{Y}_{X}$ is fed into NIS2 to predict $X_{3}$. Both inputs and targets
were expanded to 64 dimensions by direct replication of the original
variables, and the two intermediate variables in the NIS were kept at the
same dimension, denoted by $q$. The minimum dimensions of $\hat{Y}_{X}$ and
$\hat{Z}_{\hat{Y}_{X}}$ are selected by monitoring the changes in the loss
curves.
From the above results, it can be seen that when $q$, the dimension of the
intermediate variable, is relatively large, the entropy of the intermediate
variable closely approximates the mutual information or the redundant
information. As $q$ drops below a threshold, the loss increases
significantly, indicating that the intermediate variable can no longer
capture all of the mutual information or the redundant information.
Figure 8: (a) The changes of $H(\hat{Y}_{X})$ in NIS1 under $q=4,2,1$
respectively; (b) The changes of $H(\hat{Z}_{\hat{Y}_{X}})$ in NIS2 under
$q=4,2,1$ respectively; (c) The changes of training loss in NIS1 under
$q=4,2,1$ respectively; (d) The changes of training loss in NIS2 under
$q=4,2,1$ respectively. The same experiments were conducted for the other
three cases, and the redundant information could be accurately calculated
under the NIS framework.
## 6 Discussion
### 6.1 SID and PID
As an information decomposition method compatible with PID's conceptual
framework, SID makes two main advances over it: i) the scope of the
decomposed information is expanded from the mutual information between the
source variables and the target variable to the information of all variables
in the system; ii) after decomposing all information in the system, SID shows
the symmetry of information atoms among different variables. Besides, it is
worth noting that SID is not based on any existing PID calculation method,
but instead proposes a set of computational properties that should be
satisfied.
Based on these two changes, the biggest difference between SID and PID is the
analysis perspective: PID focuses on the directed pairwise (second-order)
relationship between the set of source variables and the target variable,
while SID covers all relationships among the variables in the system, from
pairwise to undirected higher-order relationships. This exhaustive treatment
enables SID to capture the relationships among the source variables and the
higher-order symmetric relationships that PID ignores. Take Case 4 as an
example. From the perspective of PID, there are directional redundant,
synergistic, and unique information from $X_{1}$ and $X_{2}$ to the target
variable, but the information interaction between $X_{1}$ and $X_{2}$ is
unknown. PID also cannot reveal that the synergistic information provided by
$X_{1},X_{2}$ about the target variable is only a partial view of the
undirected synergistic effect among the three variables, an effect that also
appears when $X_{1}$ or $X_{2}$ is the target. In addition, while remaining
compatible with PID, SID adds more constraints, such as Theorems 1, 2, and 3,
and thus provides more leverage in the calculation of information
decomposition. For example, by Proposition 1, the redundant information can
be inferred to be zero from the presence of variable pairs with zero mutual
information in the variable set, a property not satisfied by some existing
PID calculation methods.
To sum up, SID extends the analysis scope and reveals several essential
properties of information atoms while remaining compatible with the PID
framework, greatly expanding the application scenarios of information
decomposition, as discussed in the next few paragraphs.
### 6.2 SID and Higher-order Measurement
The holism-versus-reductionism debate persists in modern literature [21].
Those who hold a reductionist view believe that any system can be divided into
many subsystems, and we can fully understand the entire system by studying the
properties of the subsystems and their connections, which is also the research
philosophy followed by most disciplines [22]. Holism, by contrast, holds that
the system should be treated as a whole, because splitting the system
inevitably loses the understanding of some of its properties [23]. This
contradiction seems irreconcilable as long as we do not discuss in detail how
to decompose the system.
However, SID offers a perspective that can explain this conflict by
accounting for higher-order relationships in the system that are not captured
by previous measures. To organize the different measures, we regard
information entropy as a first-order measure, reflecting an attribute of a
single variable. Mutual information and conditional entropy are second-order
measures, capturing aspects of pairwise relationships between variables [24].
Although, among the second-order measures, cross-entropy can measure
information shared among multiple variables, it still captures only linear
superpositions of second-order relationships, which provides limited insight
into multivariate interactions. Under the SID framework, however, redundant,
synergistic, and unique information can be regarded as third- or higher-order
measures, revealing a dimension of multivariate relationships entirely
distinct from the first and second orders and facilitating a deeper
comprehension of higher-order system relationships. In the case analysis, the
internal structure of Case 1 aligns well with the results of the second-order
measures and can be considered a reducible, decomposable system. Cases 2, 3,
and 4, however, have internal structures that cannot be captured by
second-order measures and are thus regarded by holism as systems that cannot
be decomposed and understood piecewise. To some extent, SID and the case
analysis offer an explanation that bridges the gap between holism and
reductionism: some of the system properties that holism insists cannot be
understood separately might be explained by higher-order measures or
decomposition methods.
### 6.3 Potential Application
In addition to philosophical discussions, higher-order measures can be
applied in many fields. A foreseeable application across many domains comes
from the way SID deepens our understanding of data, measures, and
information. In the case studies of Section 4, the data contain full
information about the construction of the four variable systems, yet the
inner relationships of the systems cannot be captured by probability measures
or existing information measures. This means the incompleteness of our
measures may limit our ability to analyze existing systems, even when
complete data are available. Therefore, applying higher-order information
measures in the analysis of complex systems may offer valuable insights,
especially in fields where traditional information measures fail to capture
the relationships within systems. A direction worth exploring is the
quantitative analysis of higher-order networks [25]. Since SID can provide a
data-driven framework for identifying and analyzing higher-order network
structures, it may impact the analysis and understanding of complex systems
across various domains [26]. For example, in studying neural networks and
brain connectivity [27], the SID framework can provide further insights into
the higher-order information flow between multiple neurons or brain regions,
allowing us to generate higher-order network models directly from the
temporal data of multiple neurons and to use such models to explain the
implementation of specific functions; in ecological [28], financial, or
social systems, the quantitative characterization of higher-order
relationships among multiple agents can assist in the development of more
accurate models and forecasts, as well as the design of effective control
methods. This combination is also mutually beneficial: since Venn diagrams
are limited in presenting systems of more than three variables on a
two-dimensional plane, hypergraphs from the field of higher-order networks
may be a better tool for visualizing SID frameworks.
Another field with which SID may interact is causal science, since it, like
the SID framework, studies the intrinsic relationships between multiple
variables. One of the goals of causal science is to search for invariance in
a system: we hope that the revealed properties of the system are independent
of the distribution of the data. However, the results obtained from SID can
vary with changes in the data distribution; therefore, adopting the methods
of causal science to reveal system invariance is one direction in which SID
can be improved. In addition, conditional independence plays an important
role in causal discovery and causal inference in multivariate systems [29],
while in the quantitative calculation of SID, conditional independence plays
a similar role in eliminating the uncertainty of higher-order relations (see
the Direct Method, Proposition 1). Therefore, studying the properties of
conditional independence within the framework of SID may provide a bridge
between causal science and SID. The benefits of this association are mutual:
from the perspective of the Pearl Causal Hierarchy [30], SID is a research
technique that utilizes observational data, which sits at the lowest rung of
the causal ladder. Whether lifting the approach to higher rungs of the causal
ladder can yield deeper insights into the system is worth exploring, for
instance by incorporating causal graphs (DAGs) into SID methods.
Apart from the above fields, SID may have further potential applications.
Since information atoms provide a more refined division of information
entropy, once the physical meaning of information atoms within the SID
framework is revealed, specific information atoms may become indicators for
optimization or learning problems. The symmetry property of synergistic
information in SID may also provide inspiration for information disclosure,
an important application of PID in the information protection field. In
summary, SID, as progress in the underlying measurement, may play a role in
many application scenarios, which is also the focus of our next stage of
work.
### 6.4 Limitations and Future Works
In addition to the above-mentioned promising progress and expectations,
several limitations deserve attention. The first is the absence of a fully
compatible quantitative method for the proposed framework, which restricts
the practical application of SID to real-world problems. As we continue to
develop and refine the SID framework, it is a priority to develop robust
methods for calculating SID components and to consider how higher-order
information measures can be integrated into existing analytical approaches.
Furthermore, the existing proofs of framework properties and computational
methods have only been established for three-variable systems. Although
extending the current work to general multivariate systems is not a
formidable challenge, it involves many aspects, such as how to present the
decomposition results of multivariate systems on a two-dimensional plane and
how to optimize the calculation algorithm to avoid exponential cost as the
number of variables increases; these will be considered in the next stage of
research. For these and any other open problems, we cordially invite scholars
who share an interest in this field to collaborate on addressing the existing
challenges of SID and to contribute to the model's refinement.
## 7 Conclusion
In this study, we introduced the System Information Decomposition (SID)
framework, which offers novel insights for decomposing complex systems and
analyzing higher-order relationships while addressing the limitations of
existing information decomposition methods. By proving the symmetries of
information atoms and connecting them to higher-order relationships, we show
that the SID framework can provide insights and advance beyond existing
measures in understanding the internal interactions and dynamics of complex
systems. Furthermore, we explored the far-reaching implications that SID's
unveiling of higher-order measures could have on the philosophical aspects of
systems research, higher-order networks, and causal science. Although current
research on SID still faces challenges in quantitative calculation and
multivariate analysis, we believe that continued collaboration and
exploration by the scientific community will help overcome these obstacles.
In conclusion, the SID framework signifies a promising new direction for
investigating complex systems and information decomposition. We anticipate
that the SID analysis framework will serve as a valuable tool across an
expanding array of fields in the future.
## Acknowledgments
We sincerely thank everyone who, while not an author, played a crucial role
in the successful completion of this work. Our heartfelt appreciation goes to
the Swarma Club, an open academic community for complex systems, where the
Causal Emergence reading club provided the foundation for the ideas presented
in this paper. We are also very grateful to Professor Duguid at UC Berkeley,
whose course steered an author's orientation towards understanding systems
from an information perspective, serving as the genesis of this paper.
Finally, we are grateful to the reviewers for their constructive comments,
which have improved the theoretical rigor and comprehensiveness of the paper.
## References
* [1] Brian Castellani and Frederic William Hafferty. Sociology and complexity science: a new field of inquiry. Springer Science & Business Media, 2009.
* [2] Cristoforo Sergio Bertuglia and Franco Vaio. Nonlinearity, chaos, and complexity: the dynamics of natural and social systems. Oxford University Press on Demand, 2005.
* [3] Claude Elwood Shannon. A mathematical theory of communication. ACM SIGMOBILE mobile computing and communications review, 5(1):3–55, 2001.
* [4] Paul L Williams and Randall D Beer. Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515, 2010.
* [5] Ryan G James, Blanca Daniella Mansante Ayala, Bahti Zakirov, and James P Crutchfield. Modes of information flow. arXiv preprint arXiv:1808.06723, 2018.
* [6] Pedro AM Mediano, Fernando Rosas, Robin L Carhart-Harris, Anil K Seth, and Adam B Barrett. Beyond integrated information: A taxonomy of information dynamics phenomena. arXiv preprint arXiv:1909.02297, 2019.
* [7] Fernando E Rosas, Pedro AM Mediano, Henrik J Jensen, Anil K Seth, Adam B Barrett, Robin L Carhart-Harris, and Daniel Bor. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS computational biology, 16(12):e1008289, 2020.
* [8] Ryan G James, Nix Barnett, and James P Crutchfield. Information flows? a critique of transfer entropies. Physical review letters, 116(23):238701, 2016.
* [9] Fernando E Rosas, Pedro AM Mediano, Borzoo Rassouli, and Adam B Barrett. An operational information decomposition via synergistic disclosure. Journal of Physics A: Mathematical and Theoretical, 53(48):485001, 2020.
* [10] Borzoo Rassouli, Fernando E Rosas, and Deniz Gündüz. Data disclosure under perfect sample privacy. IEEE Transactions on Information Forensics and Security, 15:2012–2025, 2019.
* [11] Juan A Acebrón, Luis L Bonilla, Conrad J Pérez Vicente, Félix Ritort, and Renato Spigler. The kuramoto model: A simple paradigm for synchronization phenomena. Reviews of modern physics, 77(1):137, 2005.
* [12] Barry A Cipra. An introduction to the ising model. The American Mathematical Monthly, 94(10):937–959, 1987.
* [13] Artemy Kolchinsky. A novel approach to the partial information decomposition. Entropy, 24(3):403, 2022.
* [14] Paul L Williams. Information dynamics: Its theory and application to embodied cognitive systems. PhD thesis, Indiana University, 2011.
* [15] Virgil Griffith, Edwin KP Chong, Ryan G James, Christopher J Ellison, and James P Crutchfield. Intersection information based on common randomness. Entropy, 16(4):1985–2000, 2014.
* [16] Robin AA Ince. Measuring multivariate redundant information with pointwise common change in surprisal. Entropy, 19(7):318, 2017.
* [17] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, and Jürgen Jost. Shared information—new insights and problems in decomposing information in complex systems. In Proceedings of the European conference on complex systems 2012, pages 251–269. Springer, 2013.
* [18] Malte Harder, Christoph Salge, and Daniel Polani. Bivariate measure of redundant information. Physical Review E, 87(1):012130, 2013.
* [19] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying unique information. Entropy, 16(4):2161–2183, 2014.
* [20] Jiang Zhang and Kaiwei Liu. Neural information squeezer for causal emergence. Entropy, 25(1):26, 2023.
* [21] Melanie Mitchell. Complexity: A guided tour. Oxford university press, 2009.
* [22] Richard Gallagher and Tim Appenzeller. Beyond reductionism, 1999.
* [23] Jan Christiaan Smuts. Holism and evolution. Macmillan, 1926.
* [24] Thomas M Cover. Elements of information theory. John Wiley & Sons, 1999.
* [25] Ginestra Bianconi. Higher-order networks. Cambridge University Press, 2021.
* [26] Federico Battiston, Enrico Amico, Alain Barrat, Ginestra Bianconi, Guilherme Ferraz de Arruda, Benedetta Franceschiello, Iacopo Iacopini, Sonia Kéfi, Vito Latora, Yamir Moreno, et al. The physics of higher-order interactions in complex systems. Nature Physics, 17(10):1093–1098, 2021.
* [27] Ed Bullmore and Olaf Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature reviews neuroscience, 10(3):186–198, 2009.
* [28] Simon A Levin. Self-organization and the emergence of complexity in ecological systems. Bioscience, 55(12):1075–1079, 2005.
* [29] Judea Pearl. Causality. Cambridge university press, 2009.
* [30] Judea Pearl and Dana Mackenzie. The book of why: the new science of cause and effect. Basic books, 2018.
## Appendix A Appendix
### A.1 Case Table
$X_{1}$ | $X_{2}$ | $X_{3}$ | $X_{4}$ | $X_{5}$ | $X_{6}$
---|---|---|---|---|---
$a$ | $b$ | $c$ | $d$ | $a$ | $b$ | $e$ | $f$ | $c$ | $d$ | $e$ | $f$ | $a$ | $c$ | $e$ | $h$ | $a$ | $b$ | $g$ | $h$ | $a$ | $b$ | $i$ | $j$
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0
0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1
0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1
0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1
0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1
0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0
0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0
0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0
0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1
0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1
0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1
0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1
0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0
0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0
0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0
0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1
0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1
0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1
0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1
0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0
0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0
0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0
0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0
0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1
0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1
0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1
0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1
0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0
0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0
1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0
1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0
1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1
1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1
1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1
1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1
1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0
1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0
1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1
1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1
1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1
1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1
1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0
1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0
1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1
1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1
1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1
1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0
1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0
1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0
1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0
1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1
1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1
1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1
1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0
### A.2 Proof of Propositions for Neural Information Squeezer Network
Here we provide mathematical proofs of the two propositions concerning the
neural network framework used to calculate mutual information and redundancy.
First, we restate Proposition 3 and then give its proof.
Proposition 3: For any random variables $X$ and $Y$, we can use the framework
of Figure 7(a) to predict $Y$ by squeezing the information channel of
$\hat{Y}_{X}$ to the minimum dimension while satisfying $\hat{Y}\approx Y$
and $U\sim\mathcal{N}(0,I)$. Supposing the conditional entropy $H(X|Y)>0$
holds, then:
$H(\hat{Y}_{X})\approx I(X;Y)$ (6)
###### Proof.
The whole structure of the alternative NIS network (Figure 7(a)) can be
regarded as having the same structure as in Ref. [20], but with the dynamics
learner absent; equivalently, the dynamics can be understood as a fixed
identity mapping. In this way, all the conclusions proved in [20] apply here.
Thus, we have:
$I(X;Y)\approx I(\hat{Y}_{X};\hat{Y}_{X})=H(\hat{Y}_{X})$ (7)
if all the neural networks are well trained. The first equality holds because
of Theorem 2 (information bottleneck) and Theorem 3 (the mutual information
of the model is close to that of the data for a well-trained framework) in
[20]; the second holds when $q$ is minimized, so that the information channel
of $\hat{Y}_{X}$ is squeezed as much as possible, by the properties of mutual
information.
Further, because $U$ is an independent Gaussian noise:
$H(U)=H(\psi(X))-H(\hat{Y}_{X})\approx H(X)-I(X;Y)=H(X|Y)$ (8)
The approximate equality holds because $\psi$ is a bijector, which keeps the
entropy unchanged, and because Equation 7 holds. This proves Proposition 3. ∎
To calculate the redundancy of a system with three variables, we can further
feed the variable $\hat{Y}_{X}$ into another NIS network to predict $Z$ and
narrow the information channel of the intermediate variable
$\hat{Z}_{\hat{Y}_{X}}$ down to its minimum dimension $q^{*^{\prime}}$; its
Shannon entropy can then approach the redundancy, and the redundancy
satisfies the property of permutation symmetry for all the variables. We can
prove the following proposition:
Proposition 4: For a system with three random variables $X,Y,Z$, suppose the
conditional entropies satisfy $H(X|Y)>0$ and $H(X|Z)>0$; then the redundancy
calculated by Equation 4 is symmetric, which means:
###### Proof.
If we accept the definition of Equation 4, then:
$Red(X,Y,Z)\approx
H(\hat{Z}_{\hat{Y}_{X}})=H(\hat{Y}_{X})-H(U_{\hat{Y}_{X}})=H(X)-H(X|Y)-H(\hat{Y}_{X}|Z),$
(10)
where $U_{\hat{Y}_{X}}$ here denotes the Gaussian noise discarded when
$\hat{Y}_{X}$ is used to predict $Z$, so that
$H(U_{\hat{Y}_{X}})=H(\hat{Y}_{X}|Z)$.
In another way, we can use $X$ to predict $Z$, and the intermediate variable
$\hat{Z}_{X}$ can be used to predict $Y$, and the intermediate variable
$\hat{Y}_{\hat{Z}_{X}}$ can be used to approximate the redundancy which is
denoted as $Red(X,Z,Y)$. Therefore,
$Red(X,Z,Y)\approx H(X)-H(X|Z)-H(\hat{Z}_{X}|Y).$ (11)
Because the discarded noise variable $U_{\hat{Y}_{X}}$ in the process of
predicting $Y$ from $X$ is independent of all the variables, we have:
$H(U_{\hat{Y}_{X}})=H(U_{\hat{Y}_{X}}|Z)=H(U_{\hat{Y}_{X}}|Y,Z)=H(X|Y,Z),$
(12)
Similarly, the discarded noise variable $U_{\hat{Z}_{\hat{Y}_{X}}}$ in the
process of predicting $Z$ from $\hat{Y}_{X}$ is also independent of all the
other variables, and $\psi(X)$ is the combination of $U_{\hat{Y}_{X}}$ and
$\hat{Y}_{X}$; thus:
$H(X|Y,Z)=H(U_{\hat{Y}_{X}}|Z)=H(X|Z)-H(\hat{Y}_{X}|Z).$ (13)
In the same way, we can obtain:
$H(X|Z,Y)=H(U_{\hat{Z}_{X}}|Y)=H(X|Y)-H(\hat{Z}_{X}|Y).$ (14)
Because $H(X|Y,Z)=H(X|Z)-H(\hat{Y}_{X}|Z)$ and
$H(X|Y,Z)=H(X|Y)-H(\hat{Z}_{X}|Y)$, therefore:
$H(X|Z)+H(\hat{Z}_{X}|Y)=H(X|Y)+H(\hat{Y}_{X}|Z)$ (15)
and the Equation 10 and 11 lead to:
$Red(X,Y,Z)=Red(X,Z,Y).$ (16)
This equation holds for all permutations of $X,Y$, and $Z$; thus, the
redundancy defined by the NIS network satisfies permutation symmetry.
∎
1: Instituut voor Sterrenkunde (IvS), KU Leuven, Celestijnenlaan 200D, B-3001
Leuven, Belgium; email: <EMAIL_ADDRESS>
2: Department of Astrophysics, IMAPP, Radboud University Nijmegen, PO Box
9010, 6500 GL, Nijmegen, The Netherlands
3: Max Planck Institute for Astronomy, Koenigstuhl 17, 69117, Heidelberg,
Germany
# Detection of period-spacing patterns due to the gravity modes of rotating
dwarfs in the TESS southern continuous viewing zone

S. Garcia 1, T. Van Reeth 1, J. De Ridder 1, A. Tkachenko 1, L. IJspeert 1,
C. Aerts 1,2,3

(Accepted November 22, 2021)
###### Abstract
Context. The theory of stellar evolution presents shortcomings when confronted
with asteroseismic probes of interior physical properties. The differences
between observations and theory are often great because stellar models have
mainly been calibrated from observables connected to the surface of stars.
Period-spacing patterns caused by gravity modes are a particularly powerful
asteroseismic tool for probing the near-core rotation and the mixing of
chemical elements in main-sequence stars with convective cores.
Aims. We aim to compose a catalog of intermediate-mass stars in the Transiting
Exoplanet Survey Satellite (TESS) southern continuous viewing zone (CVZ) to
reveal period-spacing patterns caused by gravity modes for use in future
asteroseismic modeling.
Methods. TESS full frame images (FFI) were inspected to select intermediate-
and high-mass stars using color-magnitude criteria. Light curves were
extracted from custom masks per star, adopting stringent constraints on the
aperture masks and contamination. The extracted light curves were subject to
iterative prewhitening to detect gravity modes. We developed a method relying
on the assumption that period spacings are an approximately linear function of
the mode periods to build a template pattern. This template was used to
extract the patterns and their uncertainties, relying on a bootstrap approach.
Results. Our TESS catalog of high-quality period-spacing patterns is the first
of its kind and contains 140 gravity-mode patterns in 106 $\gamma\,$Dor stars
and two slowly pulsating B-type (SPB) stars. Half of these patterns contain
seven or more measured mode periods and the longest pattern contains 20 modes.
We provide the community with a convenient software tool to search for period-
spacing patterns and to process the extracted light curves.
Conclusions. Our catalog offers a fruitful starting point for future gravity-
mode asteroseismology of rotating dwarfs with convective cores in the southern
hemisphere.
###### Key Words.:
Asteroseismology – Waves – Stars: Rotation – Stars: Interiors – Stars:
oscillations (including pulsations) – Stars: catalog
## 1 Introduction
The theory of stellar structure and evolution is well established and capable
of describing the different stages of a star throughout its life in general
terms (e.g., Kippenhahn et al., 2012). However, the theory is mostly
calibrated to the surface properties of stars, such as their surface
gravities, surface chemical compositions, surface rotations, and effective
temperatures. Today, advances in asteroseismology and the advent of high-
precision space photometry from telescopes such as CoRoT (Auvergne et al.,
2009), Kepler (Koch et al., 2010) and TESS (Ricker et al., 2015) allow us to
probe stellar interiors with a precision that cannot be reached from
extrapolating surface quantities (e.g., Hekker & Christensen-Dalsgaard, 2017;
García & Ballot, 2019; Bowman, 2020; Aerts, 2021, for recent reviews).
Asteroseismic modeling based on space photometry has revealed large
discrepancies between observations and the theory of stellar structure and
evolution, such as in the transport of angular momentum (e.g., Aerts et al.,
2019).
Gravity modes (hereafter g modes) are stellar oscillations that have buoyancy
as their dominant restoring force and have optimal probing power in the near-
core regions of main-sequence stars (e.g., Aerts et al., 2018). They are
detected in main-sequence stars with a convective core and a radiative
envelope known as $\gamma$ Doradus ($\gamma$ Dor) stars (Kaye et al., 1999)
and SPB stars (Waelkens, 1991), which have masses from 1.3 to 1.9 $M_{\odot}$
and 3 to 9 $M_{\odot}$, respectively (e.g., Aerts et al., 2010, Chapter 2, for
more properties). To detect and accurately measure the frequencies of their
individual g modes, which have periodicities of the order of days, high-
precision long-term uninterrupted observations are needed. These requirements
are met for time-series photometry from space missions such as Kepler and
TESS, which allows us to detect g-mode period spacing patterns. The first such
detection in a dwarf was made for the SPB star HD 50230 from CoRoT space
photometry by Degroote et al. (2010). The five-month CoRoT light curve was
sufficient to detect a period-spacing pattern of eight dipole g modes thanks
to this star’s very slow rotation, which also justified ignoring the Coriolis
acceleration in the pulsation computations used for its asteroseismic modeling
(Wu & Li, 2019).
Gravity-mode period spacing patterns, describing the difference in periods of
g modes with an identical spherical degree, $\ell,$ and azimuthal order, $m,$
while having a consecutive radial order, $n$, are a powerful asteroseismic
tool (Aerts et al., 2010, for a detailed theoretical derivation). This tool
allows us to probe the near-core regions of main-sequence stars with
a convective core and a radiative envelope. As shown by Shibahashi (1979) and
Tassoul (1980), g-mode periods $P_{n\ell}$ are equidistant for a spherical
chemically homogeneous non-rotating star when considering the asymptotic
regime, that is with $2\pi/P_{n\ell}\ll N$ with $N$ the buoyancy frequency:
$P_{n\ell}=\frac{\Pi_{0}}{\sqrt{\ell(\ell+1)}}\,(n+\epsilon_{g}),$ (1)
with
$\Pi_{0}=\frac{2\pi^{2}}{\int_{r_{1}}^{r_{2}}N(r)\,r^{-1}\,\mathrm{d}r},$ (2)
where $\epsilon_{g}$ is a phase term which depends on the boundaries, $r_{1}$
and $r_{2}$, of the g-mode propagation cavity. Gradients in the stellar
chemical composition profile cause mode trapping, which introduces wave-like
behavior and periodic dips also known as buoyancy glitches in the period-
spacing patterns. As shown by Miglio et al. (2008), the amplitude and location
of these modulations in the pattern depend on the steepness and the location
of the chemical gradients inside the star.
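To make Eqs. (1) and (2) concrete, the following minimal Python sketch evaluates $\Pi_{0}$ and the resulting equidistant dipole-mode periods; the buoyancy profile and the phase term $\epsilon_{g}=0.5$ are illustrative assumptions rather than the output of a stellar model.

```python
import numpy as np

# Toy buoyancy-frequency profile N(r) between the cavity boundaries
# (illustrative values only; a real N(r) comes from a stellar model).
r = np.linspace(0.05, 0.95, 1000)            # fractional radius
N = 1e-3 * np.exp(-((r - 0.15) / 0.2) ** 2)  # rad/s, peaked near the core

# Eq. (2): Pi_0 = 2 pi^2 / int(N(r)/r dr), via a trapezoidal sum.
integrand = N / r
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
Pi0 = 2.0 * np.pi ** 2 / integral            # seconds

# Eq. (1): equidistant asymptotic periods for dipole (ell = 1) modes.
ell, eps_g = 1, 0.5                          # eps_g is an illustrative choice
n = np.arange(20, 41)                        # radial orders
P = Pi0 / np.sqrt(ell * (ell + 1)) * (n + eps_g)

print(f"Pi_0 = {Pi0:.0f} s, constant spacing Delta P = {P[1] - P[0]:.0f} s")
```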
For a rotating star, the Coriolis acceleration and the difference between the
corotating reference frames and the inertial ones of the observer shift the
g-mode frequencies quite drastically. As a result, the observed period
spacings reveal a decreasing trend for prograde ($m>0$) and zonal ($m=0$)
modes when plotted as a function of increasing mode period as observed in an
inertial frame of reference. For retrograde modes ($m<0$), on the other hand,
an overall increase in the observed spacings of modes with increasing
pulsation period occurs (e.g., Bouabid et al., 2013; Van Reeth et al., 2015a,
b; Ouazzani et al., 2017). Schmid & Aerts (2016) investigated the limit of the
rotation frequency at which the Coriolis acceleration can still be treated
perturbatively and found that this approximation already breaks down for
rotation frequencies above roughly $0.1\,\mathrm{d}^{-1}$, which is the case
for almost all intermediate-mass dwarfs (cf. Aerts et al., 2017).
A strong magnetic field in the stellar interior requires the Lorentz force to
be included in the pulsation computations. This modifies the morphology of the
observed period spacing pattern by introducing a saw-tooth modulation of the
period spacing values as the consecutive pulsation periods increase (Prat et
al., 2019, 2020; Van Beeck et al., 2020). Moreover, coupling between inertial
modes in the rotating core and g modes in the envelope occurs and may cause
dips superposed to buoyancy glitches at particular mode periods in the spacing
diagram (Ouazzani et al., 2020; Saio et al., 2021; Lee, 2021). This implies
that interpretations of buoyancy glitches from mathematical analytical
descriptions ignoring the Coriolis acceleration (e.g., Cunha et al., 2019) can
only be applied to slowly-rotating non-magnetic stars. The majority of g-mode
pulsators require the modeling and interpretation of the observed period
spacing patterns to be done numerically, based on the inclusion of the
Coriolis (and perhaps Lorentz) force when solving the pulsation equations (cf.
Townsend & Teitler, 2013; Townsend et al., 2018).
After the initial discovery by Degroote et al. (2010), it took another five
years before period-spacing patterns were detected from four-year Kepler light
curves in several hundreds of $\gamma$ Dor stars (e.g., Van Reeth et al.,
2015b; Li et al., 2019, 2020) and several dozens of SPB stars (e.g., Pápics et
al., 2017; Pedersen et al., 2021; Szewczuk et al., 2021). These patterns have
been used to measure the near-core rotation rates of all these stars (e.g.,
Van Reeth et al., 2016; Christophe et al., 2018a; Van Reeth et al., 2018; Li
et al., 2019, 2020; Takata et al., 2020; Pedersen et al., 2021) and place
constraints on the chemical transport processes that take place in the deep
stellar interior (e.g., Mombarg et al., 2020, 2021; Pedersen et al., 2021).
From the point of view of improving stellar evolution theory, dwarfs are the
most interesting targets as they still have their evolved stages ahead of them
and uncertainties in the transport processes are cumulative over time.
Moreover, g modes are potentially excited along the main sequence in various
instability regions for stars born with a convective core covering a broad
mass range (Aerts et al., 2010, Chapter 3). This is why we focus our work on
dwarfs covering spectral types between O and F.
To date, the photometric observations obtained with the nominal Kepler mission
cover the longest time base and are more precise than the observations from
any other high-cadence space-photometry mission. Hence, most breakthroughs in
g-mode asteroseismology of dwarfs were achieved thanks to Kepler observations
so far. Here, we exploit the asteroseismic potential of the ongoing TESS space
mission and compare it to that of Kepler. The TESS extended mission is
gradually providing data of progressively higher frequency resolution and has
opened the door to analyzing numerous stars located in regions of the sky
other than the Kepler field of view and in different metallicity regimes. In
this work, we present results based on the first full year of uninterrupted
TESS monitoring, to evaluate its capacity for the g-mode asteroseismology of
rotating dwarfs. Future works will involve the addition of data from the
extended TESS mission to the stars in our current catalog to increase their
capacity for asteroseismic modeling.
Our study is aimed at identifying new $\gamma$ Dor or SPB stars that have been
observed by TESS in the southern continuous viewing zone (CVZ) to build the
first TESS catalog of high-quality g-mode period-spacing patterns for such
pulsators. The southern CVZ was observed uninterruptedly during the first year
of the nominal TESS mission, with a 24°-wide square field of view centered on
the southern ecliptic pole that rotated about every 27 d at each sector change.
This long observation period
has provided light curves with a nominal frequency resolution of about
$0.003\,\mathrm{d}^{-1}$.
The paper is organized as follows. In Section 2, we describe our criteria for
selecting O/B- and A/F-type stars in the TESS southern CVZ. In Section 3, we
describe our method used to extract light curves from the TESS full frame
images, including our data analysis treatments to detrend and optimize the
extracted light curves to search for g modes. In Section 4, we discuss the
frequency extraction from the light curves and the subsequent analysis. Section
5 describes our method for finding period-spacing patterns. Finally, we
discuss our final catalog of g-mode pulsators with period-spacing patterns in
Section 6.
## 2 Data set
To select our sample of stars, we started from the TESS Input Catalog (TIC)
version 8 (Stassun et al., 2019) and reduced it to the TESS southern CVZ by
imposing an ecliptic latitude $\beta\leq-72$°. To exclude extended objects and
keep only point-like sources, we used the TIC’s flag Objtype=star. Stars
likely to be white dwarfs or giants were identified with the TIC’s flag
wdflag=1 and lumclass=GIANT, respectively, and excluded from the sample. The
first flag represents a cut in absolute Gaia magnitude and Gaia color
($G_{BP}-G_{RP}$) while the second flag represents a cut in radius (calculated
from Gaia parallaxes) and $T_{\text{eff}}$. We refer to Stassun et al. (2019)
for a description of the TIC flags.
To narrow down our sample of stars to candidates of spectral type F and
hotter, that is the most likely g-mode pulsators, we used 139 TESS O/B-type
stars selected manually by Pedersen et al. (2019) and 616 Kepler A/F-type
$\gamma\,$Dor pulsators taken from Tkachenko et al. (2013); Li et al. (2020).
We placed those stars in a color-magnitude diagram and used them to define two
pairs of color-magnitude cuts that enclose 95% of these bona fide O/B- and
A/F-type stars. By applying these pairs of cuts to our TESS sample, we
extracted all the O/B- and A/F-type candidate pulsators of interest. We
calculated their absolute magnitudes as
$M=m-5\log_{10}(d)+5,$ (3)
where $m$ is the apparent magnitude in a given passband and $d$ is the distance
in parsec. To obtain $d$, we used the Bayesian distance estimate from Bailer-Jones et al.
(2018) reported in the TIC. To ensure reliable distances, we used stars with a
positive Gaia parallax of relative error of less than 25% that passed both
astrometric and photometric criteria given by Eqs. (1) and (2) in Arenou et
al. (2018), namely, with TIC flag gaiaqflag=1. To minimize the effect of
extinction, we used the 2MASS infrared bands $J$, $H,$ and $K$ and adopted the
cuts listed in Table 1. Figure 1 shows these cuts in $K$ and $J-K$ as straight
lines and our sample in the background. A/F-type candidates correspond to
stars in the top-left quadrant delineated by the red straight lines minus the
O/B-type candidates that correspond to stars in the top-left quadrant
delineated by the cyan straight lines. The final candidates are obtained after
an analogous additional selection in a color-magnitude diagram based on $H$
and $J-H$. Table 1 and Figure 1 do not consider corrections for extinction.
The potential contamination by cooler stars that are not expected to pulsate
in g modes will be treated in Section 4, based on the frequency analysis
results.
Figure 1: 2MASS color-magnitude diagram showing the pairs of cuts in absolute magnitude and color defining candidate g-mode pulsators of spectral type O/B (cyan straight lines) and A/F (red straight lines) in $K$ and $J-K$. The cyan and red circles are bona fide O/B- and A/F-type stars, respectively, while the retained dwarfs from the TIC in the southern CVZ are plotted in the background. The side histograms show the distribution of stars; the pairs of cuts enclose 95% of the respective bona fide stars.

Table 1: Pairs of cuts in absolute magnitude and color in the 2MASS system (uncorrected for extinction) used to define O/B- and A/F-type candidates. The pairs of $K$ and $J-K$ are displayed in Figure 1.

Spectral type candidate | $K$ | $J-K$ | $H$ | $J-H$
---|---|---|---|---
O/B | 1.429 | 0.06 | 1.55 | 0.045
A/F | 2.300 | 0.30 | 2.35 | 0.240
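As an illustration of Eq. (3) combined with the cuts of Table 1, the sketch below selects A/F-type candidates from a hypothetical mini-sample; the column names and values are invented stand-ins for the TIC, 2MASS, and Bailer-Jones et al. (2018) inputs.

```python
import numpy as np
import pandas as pd

# Hypothetical mini-sample; real values come from the TIC and 2MASS.
stars = pd.DataFrame({
    "J":    [9.8, 11.2, 10.5],      # apparent 2MASS magnitudes
    "K":    [9.5, 10.4, 10.3],
    "d_pc": [450.0, 900.0, 300.0],  # Bailer-Jones distances in parsec
})

# Eq. (3): absolute magnitude from apparent magnitude and distance.
stars["M_K"] = stars["K"] - 5.0 * np.log10(stars["d_pc"]) + 5.0
stars["J-K"] = stars["J"] - stars["K"]

# Table 1 cuts for the (K, J-K) pair: candidates occupy the top-left
# quadrant (brighter than the magnitude cut, bluer than the color cut);
# A/F candidates exclude the O/B ones.
ob = (stars["M_K"] <= 1.429) & (stars["J-K"] <= 0.06)
af = (stars["M_K"] <= 2.300) & (stars["J-K"] <= 0.30) & ~ob

print(stars[af])
```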
Finally, to favor a high signal-to-noise ratio (S/N) and non-contaminated flux
in the light curves, we limited our sample further to stars with apparent TESS
magnitude brighter than 15 (uncorrected for extinction) and situated at least
2 arcsec apart from other stars in the TIC. Our selected sample consists of
345 O/B-type candidates and 9369 A/F-type candidates in the TESS southern CVZ,
all from our Galaxy.
## 3 Our TESS data reduction pipeline
Figure 2: Square 20-by-20 pixels around TIC 38845463, TESS sector 1. The color
bar is common to the three panels and shows a logarithmic scale of the flux in
electrons per second. The red circle represents the target star while white
circles indicate TIC neighboring stars down to four TESS magnitudes fainter
with respect to the target star. The symbol sizes are inversely proportional
to the TESS magnitude. Declination grids are $2^{\prime}$ apart. Left: Median
image of the target pixel file. Middle: Panel with the final aperture mask
(red shade) and background mask (gray shade) overplotted. The aperture mask
results from the threshold parameter $n=20$ (see text for explanation). Right:
Best fit to the median image in the left panel, used to estimate the level of contamination
in the aperture mask due to flux from neighboring stars. The image was modeled
as six 2-D Gaussian functions plus a 2-D plane. See Section 3.1 for further
details.
Figure 3: Our star sample without constraints on the aperture mask size (7385
sources, gray histogram) and after imposing a minimum size of 9 pixels (2162
sources, blue histogram). Other constraints described in Section 3.1 apply to
both histograms. Left: Sample distribution of the median aperture mask sizes,
calculated per star over all TESS sectors. Middle: Sample distribution of the
TESS magnitudes. The dashed red line marks a magnitude value of 13.5. Right:
Median contamination level caused by neighboring stars, calculated per star
over all TESS sectors. The dashed orange line marks a contamination of 5%.
We searched sectors 1 to 13 of TESS for long-cadence (i.e., 30-minute)
full-frame images available in the Mikulski Archive for Space Telescopes and
used the TESScut API (Brasseur et al., 2019) to download, for every star in
our sample, a 20×20 pixel image with the target star at the center. These
images are known as target pixel files and contain the flux information for
all available time stamps. The 20×20 pixel size was chosen such that the
target pixel file contains both the flux of the target star and the flux of
the representative background around it; a typical example is shown in Figure
2 in which the middle panel shows a background mask 11 times larger than the
aperture mask. Light curves were extracted from the target pixel files using
aperture photometry, as explained in Section 3.1, while the background and
systematic effects were corrected using two standard statistical methods, that
is, a principal component analysis (PCA) and a linear regression, as further
detailed in Section 3.2. The Python package Lightkurve (Lightkurve
Collaboration et al., 2018) was used during the reduction.
### 3.1 Aperture and background masks
To define the aperture mask of a star, we used the median frame of all the
target pixel files (left panel in Figure 2) and selected all pixels with a
flux count larger than the median flux plus $n$-times the standard deviation.
Those pixels are our first estimate of the aperture mask. The standard
deviation was robustly estimated as 1.4826 times the median absolute deviation
(Ruppert, 2011). To reduce the contamination from nearby stars falling into
the aperture mask, we used the increasing values of $n=5.0,7.5,10,15,20,30,$
and $40$ for the threshold in standard deviation to shrink the aperture mask
until the target star was the only bright star contained within it by at least
4 TESS magnitudes. The target and neighboring stars with apparent TESS
magnitudes $m_{\textrm{TESS}}^{\textrm{target}}$ and $m_{\textrm{TESS}}$,
respectively, within the aperture mask thus follow the condition:
$m_{\textrm{TESS}}-m_{\textrm{TESS}}^{\textrm{target}}\geq 4,$ (4)
ensuring that the flux of an individual fainter star contributes no more than
approximately $2.5\%$ of the total flux within the aperture mask (a magnitude
difference of 4 corresponds to a flux ratio of $10^{-4/2.5}\approx 0.025$). For cases
where the resulting aperture mask consists of disjointed sets of pixels, only
the set containing the target star is kept. Finally, to help prevent flux from
neighboring stars leaking into the aperture mask, pixels showing an increase
in flux in any direction away from the target star are removed from the mask
and used as its edge. The background mask was defined in the same way as the
first estimate of the aperture mask but selecting the pixels below a threshold
with $n=3$, thus ensuring a minimum two-standard-deviation flux gap between
the aperture mask and the background mask. A typical example of both final
apertures is shown in the middle panel of Figure 2.
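A minimal sketch of this thresholding scheme follows; the synthetic image and the `only_target_is_bright` placeholder (standing in for the Eq. (4) check against the TIC) are assumptions made for illustration.

```python
import numpy as np

def threshold_mask(median_image, n):
    """Pixels brighter than median + n * (robust standard deviation).

    The standard deviation is robustly estimated as 1.4826 times the
    median absolute deviation (Ruppert, 2011).
    """
    med = np.median(median_image)
    std = 1.4826 * np.median(np.abs(median_image - med))
    return median_image > med + n * std

# Synthetic stand-in for the median frame of a target pixel file.
rng = np.random.default_rng(1)
image = rng.gamma(2.0, 50.0, size=(20, 20))
image[9:12, 9:12] += 5000.0   # bright target near the center

# Shrink the aperture with increasing thresholds until the target is the
# only bright star within it (Eq. (4)); `only_target_is_bright` is a
# hypothetical placeholder for that catalog lookup.
for n in (5.0, 7.5, 10, 15, 20, 30, 40):
    aperture = threshold_mask(image, n)
    # if only_target_is_bright(aperture): break

# Background mask: pixels below the n = 3 threshold, leaving a gap of at
# least two robust standard deviations to the first-estimate aperture.
background = ~threshold_mask(image, 3)
```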
To estimate the level of contamination in the aperture mask due to the flux of
neighboring stars, we calculate the ratio of this flux to that of the target
star. To obtain such fluxes, all stars complying with Eq. (4) were modeled as
2D Gaussian functions and fitted to the median image of the target pixel file
along with a 2D plane to account for the background flux. The Gaussian
functions were centered at the location of each star, all had the same
standard deviation, and their relative amplitudes preserved the flux ratios
of the stars. Fluxes were converted from TESS magnitudes using
the TASOC Photometry pipeline (Handberg et al., 2021). The right panel in
Figure 2 shows an example of this fit.
We rejected the aperture mask (together with the target pixel file) when the
final mask contained stars that do not comply with Eq. (4). To avoid both
corrupted and bleeding pixels, we also rejected masks that had pixels with
null counts or that were too elongated (i.e., with fewer than four rows or
columns while the other dimension is at least 14 pixels long). To average out the
stochastic noise of individual pixels, we only kept aperture masks with at
least nine pixels, as shown in the left panel of Figure 3. After careful
assessment, our sample consisted of 2162 stars with a flux contamination due
to neighboring stars smaller than 5%, as shown in the right panel in Figure 3.
The middle panel in Figure 3 shows that our light curve extraction is
consistent with previous data pipelines for which the extraction is considered
to be trustworthy only for stars brighter than TESS magnitude of 13.5 (e.g.,
Handberg et al., 2019; Huang, 2020; Caldwell et al., 2020).
Because of our stringent constraints on the aperture mask, not all TESS
sectors yielded a satisfactory mask for a given star. We therefore only kept
stars with aperture masks found in at least 11 of the 13 TESS sectors, as
shown by the dashed line in Figure 4. The 11 sectors are not necessarily
consecutive. After this cut, our sample consists of 1967 stars. Figure 4 also
shows a higher level of contamination when the aperture mask is found in fewer
TESS sectors, indicating that stars in a crowded field are more
prone to fail the requirements of the aperture mask selection.
Figure 4: Blue histogram in the bottom shows the number of TESS sectors per
star with a satisfactory aperture mask. The dark-gray histogram shows the mean
contamination in each bin of the blue histogram. The dashed red line shows the
cut for stars with at least 11 TESS sectors (a total of 1967 stars).
### 3.2 Light curve extraction and correction
To remove part of the systematic flux variability in the extracted light
curves, we used the data quality flags provided in the headers of the target
pixel file (Twicken et al., 2020) and removed the data from the time stamps
affected by loss of fine pointing (flagged as attitude tweak, coarse pointing,
and desaturation event), thermal transients (flagged as safe mode), Earth
pointing, and other specific effects (flagged as manual exclude, e.g., coronal
mass ejections). Descriptions of the TESS quality flags can be found in the
section “Data Product Overview” at https://outerspace.stsci.edu/display/TESS.
We then extracted the light curves using simple aperture photometry with the
aperture masks we constructed as described in Section 3.1. An example is shown
in Figure 5, where the gaps in the data correspond to the removed time stamps.
We noted that the use of the quality flags according to the TESS release notes
did not cover all systematics present in the light curves, and so we proceeded to
manually remove time intervals (common to all stars) which still were
significantly affected by systematic effects (e.g., telescope jittering,
thermal transients, and loss of fine pointing). Such time intervals are
present in sectors 1 to 8, as listed in Table 2 and shown in red in Figure 5.
Figure 5: Uncorrected light curves of TIC 30192406, showing in red the time
intervals that have been excluded for all stars according to Table 2. Sector 1
shows an example of jittering of the satellite. Sector 2 shows an example of
scattered sunlight reflected by the Earth or the Moon. Sector 3 shows an
example of systematic flux variability caused by the periodic re-pointing of
the camera. Figure 6: First seven normalized principal components (PC; columns
of the matrix $\mathbf{U}$) from the background mask of TIC 374944608, sector
2. The PCs are displayed in ascending order with the first normalized PC at
the bottom and manually set 0.25 units apart from each other for a better
visualization. Only the black normalized PCs have a level of scatter $<10^{-4}$
(see Section 3.2) and were used for the detrending of the light curve. Figure
7: Final light curve for TIC 374944608 as derived from our developed pipeline
discussed in Section 3.
The remaining systematic variability (e.g., the gradual increase of flux in
sector 2 due to scattered light or the rapid decrease of flux at the beginning
of sector 8 as illustrated in Figure 5) and background flux were removed using
a linear regression of the light curve against the flux variability present in
the background mask defined in Section 3.1. We started this process by
extracting a light curve using aperture photometry from each of the background
pixels (an average of 330 pixels per target pixel file in our sample).
Subsequently, we applied a PCA to these light curves to capture their dominant
flux variability (see Feinstein et al., 2019). We let $\mathbf{B}$ be the
matrix whose columns are the light curves extracted from the background
pixels; its singular value decomposition is
$\mathbf{B}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\;,$ (5)
where $\mathbf{U}$ and $\mathbf{V}$ are orthonormal matrices and $\mathbf{S}$
is a diagonal matrix with entries $s_{i}>s_{i+1}$. The principal components of
$\mathbf{B}$ are given by the columns of $\mathbf{US}$, which are ordered by
their contribution to the background flux variability. No universal number $k$
of principal components can be used to estimate background flux variability because
different TESS sectors are subjected to different systematic effects. Since
the first principal components capture most of the systematic variability
while subsequent ones are affected by an increasing amount of white noise (see
Figure 6 for an example), we used the level of scatter in the normalized
principal component (columns of $\mathbf{U}$) as a criterion to determine $k$.
The level of scatter was determined by the median of the moving variances with
a window length $W=16$ hours. Denoting the elements in a column of
$\mathbf{U}$ by $f(t)$, where the time $t$ represents a cadence, the moving
variance at a cadence $t$ was computed as
$\sigma^{2}_{L}(t)=\frac{1}{N-1}\sum_{|t_{n}-t|<W}\left(f(t_{n})-\overline{f}\right)^{2}\;,$ (6)
where $N$ is the number of cadences within the window $|t_{n}-t|<W$ and
$\overline{f}$ is the mean of $f(t_{n})$ within that window. The window length
$W=16$ hours was chosen to yield a large number of short light curve segments
while being shorter than the typical period of g modes.
We found that the level of scatter in the columns of $\mathbf{U}$, that is,
$\mathbf{u}_{i}$, increased with $i$ as expected, since increasing values of $i$
had less systematic signal and more white noise. The level of scatter
converged to a value of order $10^{-3}$ for $i\gtrsim 6$ regardless of the
TESS sector. We therefore used the principal components with a level of
scatter $<10^{-4}$ to represent the systematic variability present in the
background. The median number of principal components used by our method is
$k=4$. To prevent further injection of white noise into the reduced light
curves, we used a maximum of seven principal components.
Once the number $k$ of principal components was determined, we created the
regressor matrix $\mathbf{X}$, using the principal components as its columns
and added a constant column to account for a constant level of background.
Subsequently, we performed the following linear regression, assuming that the
model fits the data within Gaussian uncertainties:
$\mathbf{Y}=\mathbf{X}\mathbf{w}+\varepsilon,$ (7)
where $\mathbf{Y}$ represents the uncorrected light curve of the target star,
$\mathbf{w}$ contains the regression coefficients, and $\varepsilon$ represents
the noise. The corrected light curve was then computed as
$\mathbf{Y}-\mathbf{X}\mathbf{w}$. These corrected light curves were then
normalized by subtracting their mean flux and then dividing them by it. Values
further away than 5$\sigma$ were treated as outliers and removed (e.g., spikes
due to cosmic rays). Finally, we stitched together the normalized light curves
from each TESS sector as shown in Figure 7.
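The detrending chain of Eqs. (5)-(7) can be summarized by the sketch below; the Gaussian stand-in fluxes and the `scatter_level` helper (our reading of Eq. (6)) are assumptions for illustration, so no component passes the threshold here, whereas real background pixels with smooth systematics would.

```python
import numpy as np

def scatter_level(u, cadence_h=0.5, window_h=16.0):
    """Median of the moving variances of one normalized PC (Eq. (6));
    smooth (systematic) components score low, white noise scores high."""
    half = int(window_h / cadence_h / 2)
    movvar = [np.var(u[i - half:i + half + 1], ddof=1)
              for i in range(half, len(u) - half)]
    return np.median(movvar)

# B holds the background light curves as columns (cadences x pixels);
# Gaussian noise stands in for real background fluxes in this sketch.
rng = np.random.default_rng(0)
B = rng.normal(size=(1000, 330))

U, s, Vt = np.linalg.svd(B, full_matrices=False)   # Eq. (5)

# Keep the leading PCs with normalized scatter < 1e-4, at most seven.
k = 0
while k < 7 and scatter_level(U[:, k]) < 1e-4:
    k += 1

# Eq. (7): regress the target light curve on the k PCs plus a constant
# column, then subtract the fitted background and systematics.
Y = 1.0 + rng.normal(scale=1e-3, size=1000)        # stand-in target flux
X = np.column_stack([U[:, :k], np.ones_like(Y)])
w, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_corr = Y - X @ w
```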
## 4 Frequency analysis from iterative prewhitening
We analyzed the light curves resulting from our developed data analysis
pipeline following a procedure of iterative prewhitening. Van Beeck et al.
(2021) have offered an extensive description of five different prewhitening
methods applied to g-mode pulsators, relying on various regression techniques
and stopping criteria. Since Van Beeck et al. (2021) developed their
methodology for g modes in SPB stars and four-year Kepler light curves, their
paper is highly relevant for our TESS work as well. We refer to that paper for
a detailed description, as well as an elaborate comparative study of the
efficiency of these five methods. Here, we rely on a method using the same
frequency resolution restriction, a stopping criterion based on the amplitudes
of the modes, and a nonlinear optimization to achieve the final regression
result, as such methods were found to be the most powerful procedures based on
the assumption of a harmonic fit to the light curve by Van Beeck et al. (2021,
see methods 2 and 3). We provide a summary of the adopted procedure, where
frequencies were extracted in a 5-step process from the stitched light curves
such as the one shown in Figure 7.
Step 1: We computed a Lomb-Scargle periodogram of the light curve using a
10-fold oversampled frequency range (compared to $T^{-1}$ with $T$ the time
span of the light curve) from zero up to the Nyquist frequency. The frequency
with highest amplitude is selected and a harmonic fit with that frequency to
the light curve is determined, using linear regression. The best-fitting
harmonic function is subtracted from the light curve and the process is
iteratively repeated on the residual light curve until the selected frequency
has an amplitude with $\mathrm{S/N}<4$. The noise level is calculated as the
mean amplitude of the periodogram within a symmetric frequency window of size
$1\ \mathrm{d}^{-1}$ centered on the selected frequency. An example of the
periodogram for TIC 38845463 is visualized in panel A of Figure 11 discussed
further in the text. In order to find candidates with g-mode period-spacing
patterns, we only kept stars with at least ten significant potential g-mode
frequencies, that is, those with pulsations of periods longer than 0.25 days.
This restriction, illustrated in Figure 8, leaves us with a sample of 369
candidate stars.
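A minimal sketch of Step 1 applied to a synthetic two-mode signal is given below; the conversion from the 'psd'-normalized Lomb-Scargle power to an amplitude is an approximation valid for near-even sampling, and the function name and defaults are ours.

```python
import numpy as np
from astropy.timeseries import LombScargle

def prewhiten(t, y, snr_min=4.0, window=1.0, max_modes=100):
    """Iteratively extract frequencies until the highest peak has S/N < 4."""
    T = t.max() - t.min()
    f_nyq = 0.5 / np.median(np.diff(t))
    freq = np.arange(1.0 / T, f_nyq, 1.0 / (10.0 * T))  # 10-fold oversampling
    resid, found = y - y.mean(), []
    for _ in range(max_modes):
        power = LombScargle(t, resid, normalization='psd').power(freq)
        amp = 2.0 * np.sqrt(np.abs(power) / len(t))     # approx. amplitudes
        i = int(np.argmax(amp))
        noise = amp[np.abs(freq - freq[i]) < window / 2.0].mean()
        if amp[i] / noise < snr_min:
            break
        # Harmonic fit at the selected frequency via linear regression
        X = np.column_stack([np.sin(2 * np.pi * freq[i] * t),
                             np.cos(2 * np.pi * freq[i] * t)])
        w, *_ = np.linalg.lstsq(X, resid, rcond=None)
        found.append((freq[i], float(np.hypot(w[0], w[1]))))
        resid = resid - X @ w
    return found

# Synthetic 352 d light curve at 30-minute cadence with two g-mode-like signals.
t = np.arange(0.0, 352.0, 30.0 / 60.0 / 24.0)
y = (1e-3 * np.sin(2 * np.pi * 1.10 * t) + 5e-4 * np.sin(2 * np.pi * 1.20 * t)
     + np.random.default_rng(2).normal(0.0, 1e-3, t.size))
print(prewhiten(t, y))
```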
Figure 8: Number of potential g-mode pulsations with $\mathrm{S/N}\geq 4$ per
star. Frequencies were deduced from Step 1 described in Section 4. Our sample
of 1967 stars has 1108 stars that are not in compliance with our Step 1
criterion and those are not plotted. The dashed red line shows the cut used to
select rich enough period-spacing pattern candidates, namely, stars with at
least ten significant mode periods. This resulted in 369 candidate pulsators.
Step 2: Unresolved frequencies were removed by requiring a conservative
difference of at least $2.5\times T^{-1}$ between extracted frequencies
(Loumos & Deeming, 1978). In case of unresolved frequencies, we kept the one
with the largest amplitude.
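One possible greedy implementation of this resolution filter, assuming frequency and amplitude arrays from Step 1:

```python
import numpy as np

def resolve(freqs, amps, T):
    """Keep frequencies at least 2.5/T apart (Loumos & Deeming, 1978);
    within an unresolved group, the largest-amplitude frequency survives."""
    kept = []
    for i in np.argsort(amps)[::-1]:   # strongest first
        if all(abs(freqs[i] - freqs[j]) >= 2.5 / T for j in kept):
            kept.append(i)
    return sorted(kept)

# Two frequencies closer than 2.5/T: only the stronger one is kept.
T = 352.0                              # time base in days
f = np.array([1.100, 1.103, 2.500])
a = np.array([1.0, 0.6, 0.8])
print(resolve(f, a, T))                # -> [0, 2]
```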
Step 3: All accepted frequencies were optimized simultaneously using a
Levenberg-Marquardt algorithm to perform a non-linear regression, with the
output of the linear regression as initial input guesses. Uncertainties in the
parameters were calculated following Montgomery & O’Donoghue (1999) and
corrected for their possible correlated nature following Schwarzenberg-Czerny
(2003). Frequencies whose corresponding amplitudes were consistent with zero
within three standard deviations were considered as insignificant and
rejected.
Step 4: To minimize the influence of the spectral window convolved with
dominant frequencies in the periodogram during the iterative prewhitening
process, we used the amplitude criterion developed by Van Reeth et al.
(2015a):
$\alpha\leq\frac{A_{f}}{A_{loc}}\leq\frac{1}{\alpha}\,,$ (8)
where $A_{f}$ and $A_{loc}$ are the optimized amplitudes and the amplitudes
from the original Lomb-Scargle periodogram, respectively. This constraint is
independent of the S/N and helps to avoid spurious frequency detections that
can occur for space time-series data like Kepler and TESS with
$\mathrm{S/N}>4$ (e.g., Zong et al., 2016; Baran & Koen, 2021; Bowman &
Michielsen, 2021). Moreover, Van Beeck et al. (2021) have shown that criteria
based on mode amplitudes work better than using the S/N alone as a stopping
criterion. In practice, we used $\alpha=0.75$, which was found to work
optimally for TESS light curves by Antoci et al. (2019).
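Expressed as a sketch, the acceptance test of Eq. (8) reads:

```python
def passes_amplitude_criterion(A_f, A_loc, alpha=0.75):
    """Eq. (8): the optimized amplitude A_f must agree with the amplitude
    A_loc read from the original Lomb-Scargle periodogram to within a
    factor alpha (Van Reeth et al., 2015a)."""
    return alpha <= A_f / A_loc <= 1.0 / alpha
```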
Step 5: Combination frequencies were not considered as independent mode
frequencies to hunt for period-spacing patterns. Such combination frequencies
were identified through the following equation:
$\left|f_{k}-\left(n_{i}f_{i}+n_{j}f_{j}\right)\right|\leq\varepsilon\,,$ (9)
where we adopt the terminology of Degroote et al. (2009), meaning that $f_{i}$
and $f_{j}$ are the parent frequencies, $n_{i}$ and $n_{j}$ are integer
combination coefficients, $f_{k}$ is the combination frequency, and
$\varepsilon$ is the threshold tolerance. We selected the 20 highest-amplitude
frequencies per star as parent frequencies and searched for combinations of up
to two parents, which leads to $|n_{i}|+|n_{j}|\leq 2$.
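A sketch of this combination-frequency search is given below; the helper name and defaults are ours, and $\varepsilon$ is set to the value derived in the next paragraph.

```python
import numpy as np
from itertools import combinations

def flag_combinations(freqs, amps, eps=2e-4, n_parents=20):
    """Flag f_k matching n_i*f_i + n_j*f_j with |n_i| + |n_j| <= 2, using
    the n_parents highest-amplitude frequencies as parents (Eq. (9))."""
    freqs, amps = np.asarray(freqs), np.asarray(amps)
    parent_idx = np.argsort(amps)[::-1][:n_parents]
    parents = freqs[parent_idx]
    candidates = set(2.0 * parents)             # harmonics (n_i = 2)
    for fi, fj in combinations(parents, 2):     # |n_i| = |n_j| = 1
        candidates.update((fi + fj, abs(fi - fj)))
    candidates = np.array(sorted(candidates))
    flags = np.zeros(freqs.size, dtype=bool)
    for k, fk in enumerate(freqs):
        if k in parent_idx:                     # parents stay independent
            continue
        flags[k] = np.any(np.abs(candidates - fk) <= eps)
    return flags
```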
Given the large number of frequencies per star (we note that Figure 8 only
counts g modes and ignores p modes), a linear combination of frequencies is
likely to occur close to another independent frequency in the data without
having to be a combination frequency (Pápics, 2012; Kurtz et al., 2015). This
is illustrated in Figure 9 by the background count level marked with a gray
shade while genuine combination frequencies correspond to the excess above
that level. We therefore set the threshold at three times the background level
(gray dashed line in Figure 9), at which a matched frequency has a 67%
probability of being a genuine combination frequency (two-thirds of the counts
then lie above the background). This threshold corresponds to
$\varepsilon=0.0002$ (red vertical line in Figure 9) and is consistent with the
threshold tolerance reported by Li et al. (2019).
Figure 9: Histogram following Eq. (9) for the frequencies of stars in our
g-mode sample. The gray shade marks the background level that represents a
random match among combination frequencies and the horizontal dashed gray line
marks 3 times that level. The vertical red line marks the intersection of the
dashed gray line and the distribution; it corresponds to $\varepsilon=0.0002$ according to Eq.
(9). Frequencies occurring to the left of this line have a 67% probability of
corresponding to a genuine combination frequency. Figure 10: Number of
frequencies remaining in our sample after each step in Section 4 is applied.
The pie chart is based on the frequencies of the 369 stars after step 1 of our
frequency analysis. The total number of frequencies is 10927. Figure 11: Best-
fit period-spacing pattern for TIC 374944608. Plots generated with the
interactive code FLOSSY. A : Lomb-Scargle periodogram (solid black line),
observed periods (vertical dotted black lines), best-fit linear template
(vertical red lines). Observed periods used for the fit are indicated with
black triangles at the top. B : Deviations from the linear pattern (i.e.,
difference between the periods indicated by the black triangles and the red
vertical lines). C : Period spacing as a function of mode period. Both black
and white circles are the observations. The red markers are the best-fit
linear pattern with slope $\alpha$. Note that the fit is performed on $P$, not
on $\Delta P$, and missing mode periods in the observations create
artificially larger $\Delta P$ values (white circles). D : échelle diagram.
The black circles are the periods used for the fit. The size of the circles is
proportional to the amplitudes in the periodogram. The red
markers are the best-fit linear pattern. The supplementary material contains a
version of this figure for every pattern in Table LABEL:Tab:results.
Figure 10 shows the impact on the number of independent mode frequencies after
each step of the frequency analysis. As an example, the final accepted
frequencies of TIC 374944608 (light curve in Figure 7) are indicated with
vertical dotted lines in Figure 11 (to be discussed below). We checked the 369
light curves of our sample after the frequency analysis and removed eclipsing
binaries. These were studied further in the separate paper by IJspeert et al.
(2021). We also inspected the light curves for cases where systematic flux
variability persisted after our data reduction pipeline (see Section 3) and
removed these from the sample. This inspection was done by eye, narrowing down
our sample of period-spacing pattern candidates to 304 stars.
## 5 Period-spacing pattern search
Figure 12: Values of the cost function around the best-fit solution for TIC
374944608 shown in Figure 11. Plots generated with the interactive code
FLOSSY. Top: Correlation plots. Bottom: Minimum value of the cost function S
as a function of the template period-spacing parameters. The supplementary
material contains a version of this figure for every pattern in Table
LABEL:Tab:results.
To search for period-spacing patterns in our sample, we fitted the following
template to the list of periods of each star (Li et al., 2019):
$P_{i}=\sum_{j=0}^{i-1}\Delta P_{j}+P_{0}=\Delta
P_{0}\frac{(1+\alpha)^{i}-1}{\alpha}+P_{0}\;,$ (10)
where $\Delta P_{j}=P_{j+1}-P_{j}$ is the local period spacing and $\Delta
P_{0}$ is the period spacing at a reference period $P_{0}$. This period
pattern allows for a linear change $\alpha\equiv\mathrm{d}(\Delta
P)/\mathrm{d}P$ of the local period spacing caused by stellar rotation
(Ouazzani et al., 2017; Christophe et al., 2018b; Li et al., 2019). The
template depends on the three parameters $\{P_{0},\Delta P_{0},\alpha\}$. To
account for the amplitude of the individual periods and the local size of the
period spacing, we fitted Eq. (10) to the observations, by minimizing the
following custom cost function:
$S\left(P_{0},\Delta P_{0},\alpha\right)=\sum_{i=1}^{n}\frac{A_{i}}{A_{\rm
max}}\frac{\left(P_{\,i}^{\mathrm{\;obs}}-{P}_{i}\right)^{2}}{\sigma_{i}^{2}+\Delta{P}_{i}^{2}}\;,$
(11)
where $P_{i}$ is the estimated period closest to the observed pulsation period
$P^{\mathrm{\;obs}}_{i}$, $\Delta P_{i}$ is the estimated local period
spacing, $A_{i}$ is the observed amplitude corresponding to $P_{i}^{\rm obs}$,
$A_{\rm max}$ is the maximum observed amplitude, and $\sigma_{i}$ is the
observed period uncertainty. Rather than minimizing the square of the absolute
differences $(P_{i}^{\rm obs}-P_{i})^{2},$ we minimize the relative
differences $(P_{i}^{\rm obs}-P_{i})^{2}/\Delta P^{2}_{i}$. In this way,
period mismatches are more strongly penalized when they are large compared to
the local period spacing. The addition of $\sigma_{i}^{2}$ in the denominator
serves to limit the penalization when the local period spacing is comparable
to the observational period uncertainty, $\sigma_{i}$. The extra weight,
$A_{i}/A_{\rm max}$, serves to penalize a pattern more strongly when it
mismatches the higher-amplitude mode periods. The minimization of the cost
function $S$ was done with the quasi-Newton method L-BFGS-B (Byrd et al.,
1995) implemented in the Python module Scipy (Virtanen et al., 2020).
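The sketch below implements the template of Eq. (10) and the cost function of Eq. (11), and minimizes the latter with L-BFGS-B for a hypothetical five-mode pattern; the observed periods, amplitudes, bounds, and the nearest-period matching inside `cost` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def template(theta, n_modes=60):
    """Eq. (10): periods of a linear period-spacing pattern."""
    P0, dP0, alpha = theta
    i = np.arange(n_modes)
    if abs(alpha) < 1e-10:                   # constant-spacing limit
        return P0 + dP0 * i
    return P0 + dP0 * ((1.0 + alpha) ** i - 1.0) / alpha

def cost(theta, P_obs, A, sigma):
    """Eq. (11): amplitude-weighted, spacing-normalized period mismatch."""
    P = template(theta)
    dP = np.diff(P)
    j = np.array([np.argmin(np.abs(P - p)) for p in P_obs])  # nearest periods
    dPj = dP[np.minimum(j, dP.size - 1)]                     # local spacings
    return float(np.sum((A / A.max()) * (P_obs - P[j]) ** 2
                        / (sigma ** 2 + dPj ** 2)))

# Hypothetical observed pattern (days) with a downward slope (alpha < 0).
P_obs = np.array([0.620, 0.645, 0.669, 0.692, 0.714])
A = np.array([1.0, 0.8, 0.9, 0.5, 0.4])
sigma = np.full(P_obs.size, 1e-4)
theta0 = (0.620, 0.025, -0.04)               # FLOSSY-style initial guess
res = minimize(cost, theta0, args=(P_obs, A, sigma), method='L-BFGS-B',
               bounds=[(0.60, 0.64), (100 / 86400, 4000 / 86400), (-0.3, 0.3)])
print(res.x, res.fun)
```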
To find the location of the patterns in the periodogram as well as the initial
guesses $\mathbf{\theta}^{\textrm{\;init}}=\{P_{0}^{\textrm{\;init}},\Delta
P_{0}^{\textrm{\;init}},\alpha^{\textrm{\;init}}\}$ for the fit, we used two
diagnostic plots to cover both the cases of rapid and slow rotators. Slow
rotators show an approximately constant period spacing. Their period-spacing
pattern can therefore be identified in an échelle diagram, where the period is
plotted as a function of the period modulo $\Delta P$. In such a case, g modes
of a given angular degree roughly form vertical ridges (analogous to the
acoustic modes in the case of solar-like oscillations). On the other hand,
rapid rotators show a period spacing that depends approximately linearly on
the mode period (Van Reeth et al., 2016; Ouazzani et al., 2017). Therefore,
their period-spacing pattern can be more easily identified in a plot of
$\Delta P$ as a function of period. For each star, we complemented these two
plots with a periodogram in which the observed and template periods were overplotted.
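For a toy constant-spacing pattern (spacing and periods invented), the two diagnostic views can be sketched as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

dP = 0.024                                   # assumed spacing in days
P = 0.62 + dP * np.arange(12) + np.random.default_rng(5).normal(0, 5e-4, 12)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7, 3))
ax1.scatter(P % dP, P)                       # échelle: near-vertical ridge
ax1.set_xlabel(r'$P$ mod $\Delta P$ (d)')
ax1.set_ylabel(r'$P$ (d)')
ax2.scatter(P[:-1], np.diff(P) * 86400)      # spacing vs. period (fast rotators)
ax2.set_xlabel(r'$P$ (d)')
ax2.set_ylabel(r'$\Delta P$ (s)')
plt.tight_layout()
plt.show()
```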
To facilitate the exploration of the parameter space, we developed the
interactive tool FLOSSY (https://github.com/IvS-KULeuven/FLOSSY), a Python
utility that allows a user to efficiently browse the periodogram of a large
number of stars and visualize the period-spacing patterns by displaying the
period échelle diagram and period-spacing plot at each location in the
periodogram. FLOSSY also overplots Eq. (10) in the aforementioned plots with
customized parameters $\{P_{0},\Delta P_{0},\alpha\}$. The latter can be
modified on the fly, along with the number of mode periods to fit. Figure 11
shows part of FLOSSY’s output, as well as the best-fit pattern for TIC
374944608.
We used FLOSSY to manually select the $\mathbf{\theta}^{\textrm{\;init}}$ for
every candidate period-spacing pattern found from the list of mode periods per
star. In doing so we considered the parameter space
$|P_{0}-P_{0}^{\textrm{\;init}}|\leq\delta P/2$, $100\ \textrm{s}\leq\Delta
P_{0}\leq 4000\ \textrm{s}$, and $-0.3\leq\alpha\leq 0.3$, where
$P_{0}^{\textrm{\;init}}\in\{P^{\mathrm{\;obs}}_{i}\}$ and $\delta P$
corresponds to the resolution set in Step 2 of the procedure discussed in
Section 4. To ensure that we found a global minimum, we computed $S$ around
the best-fit solution in a radius of 400 s for $P_{0}$, 40 s for $\Delta
P_{0},$ and 0.05 units for $\alpha$. Those values for $S$ are shown in Figure
12, which is also an output of FLOSSY.
To estimate uncertainties for the detected period-spacing pattern, we computed
the 68% confidence interval of the parameters using a bootstrap method with
non-parametric residual resampling in the periodogram. We generated 10000
datasets of the same size as the original one. Subsequently, we minimized Eq.
(11) in each of these datasets using as initial guess the same
$\mathbf{\theta}^{\textrm{\;init}}$ as in the original best fit. The
confidence intervals were then determined as the 16% and 84% quantiles of the
bootstrap distribution for the parameters $\mathbf{\theta}$. As an example,
Figure 13 shows the bootstrap distribution of $\alpha$ for the pattern found
in TIC 374944608.
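One reading of this residual-resampling procedure, reusing `template`, `cost`, and the best fit `res` from the previous sketch (with 1000 rather than 10000 replicates for brevity):

```python
import numpy as np
from scipy.optimize import minimize

def matched_periods(theta):
    """Template periods nearest to each observed period (as in `cost`)."""
    P = template(theta)
    return np.array([P[np.argmin(np.abs(P - p))] for p in P_obs])

rng = np.random.default_rng(42)
residuals = P_obs - matched_periods(res.x)   # best-fit residuals
alphas = []
for _ in range(1000):
    P_boot = matched_periods(res.x) + rng.choice(residuals, size=residuals.size)
    fit = minimize(cost, res.x, args=(P_boot, A, sigma), method='L-BFGS-B',
                   bounds=[(0.60, 0.64), (100 / 86400, 4000 / 86400), (-0.3, 0.3)])
    alphas.append(fit.x[2])

lo, hi = np.percentile(alphas, [16, 84])     # 68% confidence interval on alpha
print(f"alpha in [{lo:.4f}, {hi:.4f}]")
```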
Figure 13: Confidence interval (CI) for $\alpha$ following a bootstrap
residual procedure for TIC 374944608. The meaning of the various vertical
dashed lines are indicated in the legend.
## 6 Catalog of g-mode pulsators in the TESS southern CVZ with identified
period-spacing patterns
Our final catalog of g-mode pulsators revealing period-spacing patterns
consists of 108 bright dwarfs in the TESS southern CVZ. These stars revealed a
total of 140 resolved period-spacing patterns listed in Table
LABEL:Tab:results. Each of these patterns are shown in the supplementary
material in the same format as Figures 11 and 12. Stars in our catalog have
apparent TESS magnitudes between 7.5 and 12, with a median of about 10; the
star TIC 350144657 is an exception with an apparent TESS magnitude of about
6.9. The contamination of light curves, due to the flux from neighboring
stars, is $<2\%$ thanks to our stringent requirements on the aperture mask
described in Section 3.1. Figure 14 shows the distributions of brightness and
contamination in the catalog. Only two stars, TIC 177162802 and TIC 375038081,
are candidates to be SPB stars, as determined by the color-magnitude selection
done in Section 2, while the other members of the catalog are $\gamma$ Dor
stars. Figure 15 shows a Gaia color-magnitude diagram that compares our
catalog to the 611 $\gamma$ Dor stars with detected period-spacing patterns
found by Li et al. (2020) in 4-year Kepler light curves. We find that our
catalog stars occur on the hotter end of the Kepler $\gamma$ Dor stars.
Figure 14: Final catalog. Top: Apparent TESS magnitude. Bottom: Median
contamination level in the aperture mask caused by flux of neighboring stars.
Figure 15: Gaia color-magnitude diagram showing our sample (red) and the 611
$\gamma$-Dor stars with period-spacing patterns found by Li et al. (2020) in
Kepler data (yellow). Stars marked with blue circles are SPB candidates.
Background stars correspond to the TESS southern CVZ. Magnitudes are not
corrected for extinction.
Out of the 140 period-spacing patterns, 93 have a downward slope ($\alpha<0$)
and 47 have an upward slope ($\alpha>0$). The former are prograde or zonal g
modes while the latter are retrograde g modes or Rossby modes (Van Reeth et
al., 2016). The averaged period-spacing value per pattern, $\langle\Delta
P\rangle$, is $\sim 110$ s. The shortest pattern contains four measured
pulsation periods, more than half of the patterns contain more than 12 and the
longest pattern contains 20. In 26% of the stars we detected two or three
patterns. When multiple patterns are detected in a star, this allows for a
better constraint of the stellar interior from asteroseismic modeling (Aerts
et al., 2018). Furthermore, 29% of our catalog stars are hybrid pulsators,
meaning that they also exhibit p-mode pulsations, thereby providing us with
a means to probe the outer stellar envelope and allowing for a differential
study of the star.
Figure 16 shows the distributions of the pattern parameters. Typical
uncertainties for the pattern parameters reported in Table LABEL:Tab:results
are 43 s for $P_{0}$, 13 s for $\Delta P_{0}$ and 0.006 for $\alpha$. The
pattern slopes fulfill $|\alpha|\leq 0.1$ for 88% of the catalog stars, while
the tails of the distribution reach $|\alpha|\sim 0.2$. Using the empirical
relations in Li et al. (2020), we can estimate the near-core rotation to be
$<1.68\,\textrm{d}^{-1}$ for 86% of the stars in our sample, with a few stars
reaching up to about 2.86 $\textrm{d}^{-1}$. We also made use of Figure 8
in Li et al. (2020) to define the empirical cut delineating regimes of dipole
and quadrupole modes in Figure 17 (dashed blue line). This suggests that 13 of
our prograde patterns have $\ell=2$, while the rest of them have $\ell=1$. We
noted that because pulsations in $\gamma$ Dor stars are sensitive to
metallicity, the empirical estimates drawn above from Li et al. (2020) remain
to be confirmed. Since the nominal Kepler field-of-view was in the northern
hemisphere, we cannot directly cross-validate our TESS southern CVZ results
with those from Kepler.
Figure 16: Characterization of the best-fit patterns in our sample (see also
Table LABEL:Tab:results). The stacked histograms show retrograde modes in red
and prograde modes in cyan. The vertical orange and blue lines are the median
of the retrograde and prograde distributions, respectively. A : Slope
$\alpha\equiv\mathrm{d}\Delta P/\mathrm{d}P$. B : Mean period $\langle
P\rangle$. C : Mean period spacing $\langle\Delta P\rangle$. D : Span of the
overtones. Figure 17: $P$-$\alpha$ relation for g-mode pulsators in our
catalog. The dashed blue line is the empirical cut from Figure 8 in Li et al.
(2020) that separates prograde $\ell=1$ g modes (below) from $\ell=2$ (above).
The median of the uncertainties reported in Table LABEL:Tab:results has been
plotted as a typical uncertainty for $\alpha$. Uncertainties in $\langle
P\rangle$ are smaller than the symbol size.
We found a positive correlation between the parameters $\langle P\rangle$ and
$\alpha$ for prograde modes with $\ell=1$, with a Pearson correlation
coefficient of 0.67 and a $p$-value of $10^{-9}$, while prograde modes with
$\ell=2$ show the same Pearson correlation coefficient but with a $p$-value of
0.016. Since $\langle P\rangle$ is a proxy for the evolutionary stage, these
correlations reveal that the near-core rotation rate of the stars slows down
as they evolve, implying that an efficient angular momentum transport
mechanism must be at work as already found in the literature (cf. Aerts et
al., 2019, for a review). We did not search for a correlation in retrograde
modes because their parameter $\alpha$ is less sensitive to the star’s local
rotation rate. For those, a more precise analysis involving the traditional
approximation of rotation (TAR; Eckart, 1960) is necessary and will be
addressed in a future paper. Furthermore, the range in overtones in the
patterns is a proxy for the radial order $n$ of the g modes. The exact value
of $n$, $\ell,$ and $m$ can only be identified from asteroseismic modeling,
for example based on the TAR as applied in Van Reeth et al. (2016). The mode
identification and asteroseismic modeling based on the TAR will be addressed
in a future paper dedicated to the ensemble of stars in our catalog of g-mode
pulsators, relying on the pattern properties deduced in this work.
Besides quasi-linear period-spacing patterns like the one shown in panel C of
Figure 11, where the zigzag feature is caused by missing periods in the
pattern (white circles), our catalog contains tens of patterns with zigzag
signatures that are not related to missing modes. These patterns are presented
in the supplementary material, where Figures 11 and 12 are reproduced for each
period-spacing pattern in our catalog. Such signatures have also been observed
in period-spacing patterns of SPB stars, where they are interpreted as the
result of strong envelope mixing deep inside the star. These signatures were
recently used by Pedersen et al. (2021) to constrain the internal mixing
profile in SPB stars observed by Kepler. The levels of envelope mixing found
thus far in $\gamma\,$Dor stars are far lower than those of SPB stars (cf.
Table 1 in Aerts, 2021). Our catalog presents an opportunity to further assess
and refine these recent conclusions in the literature from our TESS southern
CVZ catalog of g-mode dwarf pulsators. We also noted that many of the detected
period-spacing patterns in our catalog show residuals with a sinusoidal-like
modulation after the subtraction of the linear fit (red line in panel C of
Figure 11). This type of periodic residual is very similar to the one found
originally for the slowly rotating CoRoT SPB HD 50230 by Degroote et al.
(2010) and allows for stringent constraints on the core overshooting and
envelope mixing.
Finally, we compared our catalog to the spectroscopic parameters published in
the GALAH Data Release (DR) 3 paper by Buder et al. (2021). This DR3 has 38
stars in common with our sample. For these 38 stars, we found the effective
temperature, surface gravity, and surface velocity deduced from line-profile
broadening to be consistent with such properties of $\gamma$ Dor stars (e.g.,
Van Reeth et al., 2015b; Li et al., 2020), that is, early F- to late A-type
main-sequence stars. This agreement in stellar parameters supports the
selection methods used throughout the current paper. The corresponding
distributions can be found in Appendix A, where we also discuss the
correlation found between the surface velocity estimate from spectral line
broadening and the average pulsation period $\langle P\rangle$. As expected
for moderate- to fast-rotating pulsators, the larger surface velocity is
accompanied by a shorter average pulsation period.
## 7 Summary
In this work, we present a new data analysis pipeline to create light curves
from TESS full frame images (FFI), with an emphasis on the search for g-mode
frequencies in intermediate- to high-mass stars. We present guidelines for
extracting light curves from unprocessed TESS images, including the selection
of aperture and background masks, the identification of time stamps affected
by systematics in sectors 1-13 of the TESS southern CVZ, and a modified PCA to
detrend the light curves. A color-magnitude criterion was presented as a
method to identify main-sequence A/F- and O/B-type star candidates. We also
introduced FLOSSY, an open source utility for inspecting periodograms of
g-mode pulsators to facilitate searches for period-spacing patterns.
Based on the light curves extracted with our pipeline, we composed the first
catalog of g-mode period-spacing patterns detected in TESS space photometry of
dwarfs having colors representative of spectral types F to O. Our catalog
contains 140 g-mode period-spacing patterns observed in 106 $\gamma$ Dor stars
and 2 SPB stars. The patterns were manually reviewed and contain g-mode
frequencies with amplitudes at $\mathrm{S/N}>4$. In a future work, we will use the
detected patterns to derive the internal rotation frequency near the
convective core of the stars, as well as the buoyancy travel time across the
stars (known as $\Pi_{0}$). These two key quantities constitute important
observables that are useful for performing asteroseismic modeling of
intermediate-mass stars (e.g., Szewczuk & Daszyńska-Daszkiewicz, 2018; Mombarg
et al., 2021; Pedersen et al., 2021). The nominal frequency resolution of
modes in the detected patterns amounts to $0.003\,\textrm{d}^{-1}$, given the
352 d long TESS southern CVZ light curves. This frequency resolution can be
improved by a factor of 3 once the extended TESS Cycle 3 data are included in
the analysis. This will also lower the noise level in the Fourier domain and
offer the potential to detect more modes per star, as well
as more stars with g-mode patterns. The global properties of the detected
patterns occurring in our catalog are listed in Table LABEL:Tab:results, and
the patterns themselves are shown in the supplementary material. This catalog
constitutes a base for future ensemble asteroseismic modeling of TESS g-mode
pulsators following methodologies as in Mombarg et al. (2021) or Pedersen et
al. (2021). In this way, we will be able to constrain the internal physics of
more rotating dwarfs with a convective core using the new available TESS data,
in addition to the asteroseismic modeling achieved so far for a legacy sample
of g-mode pulsators from the Kepler mission (Gebruers et al., 2021). This will
increase the number of dwarfs with such modeling and will lead to a better
understanding of the transport processes and their relationship to the
internal rotation profile of these stars.
###### Acknowledgements.
The research leading to these results has received funding from the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (grant agreement no. 670519: MAMSIE) and from the KU
Leuven Research Council (grant C16/18/005: PARADISE). TVR gratefully
acknowledges support from the Research Foundation Flanders under grant
agreement nr. 12ZB620N. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France. This work made use of the Third Data
Release of the GALAH Survey (Buder et al. 2021). The GALAH Survey is based on
data acquired through the Australian Astronomical Observatory, under programs:
A/2013B/13 (The GALAH pilot survey); A/2014A/25, A/2015A/19, A2017A/18 (The
GALAH survey phase 1); A2018A/18 (Open clusters with HERMES); A2019A/1
(Hierarchical star formation in Ori OB1); A2019A/15 (The GALAH survey phase
2); A/2015B/19, A/2016A/22, A/2016B/10, A/2017B/16, A/2018B/15 (The HERMES-
TESS program); and A/2015A/3, A/2015B/1, A/2015B/19, A/2016A/22, A/2016B/12,
A/2017A/14 (The HERMES K2-follow-up program). We acknowledge the traditional
owners of the land on which the AAT stands, the Gamilaraay people, and pay our
respects to elders past and present. This paper includes data that has been
provided by AAO Data Central (datacentral.aao.gov.au). This work has made use
of data from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement. In addition to the software
cited in the main body of the paper we have also made use of Lightkurve, a
Python package for Kepler and TESS data analysis (Lightkurve Collaboration et
al., 2018), Astropy,666http://www.astropy.org a community-developed core
Python package for Astronomy (Astropy Collaboration et al., 2013, 2018),
Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), SciPy (Virtanen et
al., 2020), pandas (Wes McKinney, 2010). Finally, we would like to thank the
anonymous referee for useful comments and suggestions.
## References
* Aerts (2021) Aerts, C. 2021, Reviews of Modern Physics, 93, 015001
* Aerts et al. (2010) Aerts, C., Christensen-Dalsgaard, J., & Kurtz, D. W. 2010, Asteroseismology, Springer-Verlag, Heidelberg
* Aerts et al. (2019) Aerts, C., Mathis, S., & Rogers, T. M. 2019, ARA&A, 57, 35
* Aerts et al. (2018) Aerts, C., Molenberghs, G., Michielsen, M., et al. 2018, The Astrophysical Journal Supplement Series, 237, 15
* Aerts et al. (2014) Aerts, C., Simón-Díaz, S., Groot, P. J., & Degroote, P. 2014, A&A, 569, A118
* Aerts et al. (2017) Aerts, C., Van Reeth, T., & Tkachenko, A. 2017, ApJ, 847, L7
* Antoci et al. (2019) Antoci, V., Cunha, M. S., Bowman, D. M., et al. 2019, MNRAS, 490, 4040
* Arenou et al. (2018) Arenou, F., Luri, X., Babusiaux, C., et al. 2018, A&A, 616, A17
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Auvergne et al. (2009) Auvergne, M., Bodin, P., Boisnard, L., et al. 2009, A&A, 506, 411
* Bailer-Jones et al. (2018) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, AJ, 156, 58
* Baran & Koen (2021) Baran, A. S. & Koen, C. 2021, A Detection Threshold in the Amplitude Spectra Calculated from TESS Time-Series Data
* Bouabid et al. (2013) Bouabid, M. P., Dupret, M. A., Salmon, S., et al. 2013, MNRAS, 429, 2500
* Bowman (2020) Bowman, D. M. 2020, Frontiers in Astronomy and Space Sciences, 7, 70
* Bowman & Michielsen (2021) Bowman, D. M. & Michielsen, M. 2021, manuscript submitted for publication.
* Brasseur et al. (2019) Brasseur, C. E., Phillip, C., Fleming, S. W., Mullally, S. E., & White, R. L. 2019, Astrocut: Tools for creating cutouts of TESS images
* Buder et al. (2021) Buder, S., Sharma, S., Kos, J., et al. 2021, MNRAS, 506, 150
* Byrd et al. (1995) Byrd, R. H., Lu, P., Nocedal, J., & Zhu, C. 1995, SIAM Journal on Scientific Computing, 16, 1190
* Caldwell et al. (2020) Caldwell, D. A., Tenenbaum, P., Twicken, J. D., et al. 2020, Research Notes of the AAS, 4, 201
* Christophe et al. (2018a) Christophe, S., Ballot, J., Ouazzani, R. M., Antoci, V., & Salmon, S. J. A. J. 2018a, A&A, 618, A47
* Christophe et al. (2018b) Christophe, S., Ballot, J., Ouazzani, R.-M., Antoci, V., & Salmon, S. J. A. J. 2018b, A&A, 618, A47
* Cunha et al. (2019) Cunha, M. S., Avelino, P. P., Christensen-Dalsgaard, J., et al. 2019, MNRAS, 490, 909
* Degroote et al. (2010) Degroote, P., Aerts, C., Baglin, A., et al. 2010, Nature, 464, 259
* Degroote et al. (2009) Degroote, P., Briquet, M., Catala, C., et al. 2009, A&A, 506, 111
* Eckart (1960) Eckart, C. 1960, Physics of Fluids, 3, 421
* Feinstein et al. (2019) Feinstein, A. D., Montet, B. T., Foreman-Mackey, D., et al. 2019, PASP, 131, 094502
* García & Ballot (2019) García, R. A. & Ballot, J. 2019, Living Reviews in Solar Physics, 16, 4
* Gebruers et al. (2021) Gebruers, S., Straumit, I., Tkachenko, A., et al. 2021, A&A, 650, A151
* Handberg et al. (2019) Handberg, R., Lund, M., & Huber, D. 2019, TESS Data For Asteroseismology Lightcurves (”TASOC”)
* Handberg et al. (2021) Handberg, R., Lund, M. N., White, T. R., et al. 2021, arXiv e-prints, arXiv:2106.08341
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
* Hekker & Christensen-Dalsgaard (2017) Hekker, S. & Christensen-Dalsgaard, J. 2017, A&ARv, 25
* Huang (2020) Huang, C. X. 2020, TESS Lightcurves From The MIT Quick-Look Pipeline (”QLP”)
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
* IJspeert et al. (2021) IJspeert, L. W., Tkachenko, A., Johnston, C., et al. 2021, A&A, in press, arXiv:2107.10005
* Kaye et al. (1999) Kaye, A. B., Handler, G., Krisciunas, K., Poretti, E., & Zerbi, F. M. 1999, PASP, 111, 840
* Kippenhahn et al. (2012) Kippenhahn, R., Weigert, A., & Weiss, A. 2012, Stellar Structure and Evolution
* Koch et al. (2010) Koch, D. G., Borucki, W. J., Basri, G., et al. 2010, ApJ, 713, L79
* Kurtz et al. (2015) Kurtz, D. W., Shibahashi, H., Murphy, S. J., Bedding, T. R., & Bowman, D. M. 2015, MNRAS, 450, 3015
* Lee (2021) Lee, U. 2021, MNRAS, 505, 1495
* Li et al. (2019) Li, G., Bedding, T. R., Murphy, S. J., et al. 2019, MNRAS, 482, 1757
* Li et al. (2020) Li, G., Van Reeth, T., Bedding, T. R., et al. 2020, MNRAS, 491, 3586
* Lightkurve Collaboration et al. (2018) Lightkurve Collaboration, Cardoso, J. V. d. M., Hedges, C., et al. 2018, Lightkurve: Kepler and TESS time series analysis in Python, Astrophysics Source Code Library
* Loumos & Deeming (1978) Loumos, G. L. & Deeming, T. J. 1978, Ap&SS, 56, 285
* Miglio et al. (2008) Miglio, A., Montalbán, J., Noels, A., & Eggenberger, P. 2008, MNRAS, 386, 1487
* Mombarg et al. (2020) Mombarg, J. S. G., Dotter, A., Van Reeth, T., et al. 2020, ApJ, 895, 51
* Mombarg et al. (2021) Mombarg, J. S. G., Van Reeth, T., & Aerts, C. 2021, A&A, 650, A58
* Montgomery & O’Donoghue (1999) Montgomery, M. H. & O’Donoghue, D. 1999, Delta Scuti Star Newsletter, 13, 28
* Ouazzani et al. (2020) Ouazzani, R. M., Lignières, F., Dupret, M. A., et al. 2020, A&A, 640, A49
* Ouazzani et al. (2017) Ouazzani, R.-M., Salmon, S. J. A. J., Antoci, V., et al. 2017, MNRAS, 465, 2294
* Pápics (2012) Pápics, P. I. 2012, Astronomische Nachrichten, 333, 1053
* Pápics et al. (2017) Pápics, P. I., Tkachenko, A., Van Reeth, T., et al. 2017, A&A, 598, A74
* Pedersen et al. (2021) Pedersen, M. G., Aerts, C., Pápics, P. I., et al. 2021, Nature Astronomy
* Pedersen et al. (2019) Pedersen, M. G., Chowdhury, S., Johnston, C., et al. 2019, ApJ, 872, L9
* Prat et al. (2019) Prat, V., Mathis, S., Buysschaert, B., et al. 2019, A&A, 627, A64
* Prat et al. (2020) Prat, V., Mathis, S., Neiner, C., et al. 2020, A&A, 636, A100
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Ruppert (2011) Ruppert, D. 2011, Statistics and Data Analysis for Financial Engineering (Springer New York)
* Saio et al. (2021) Saio, H., Takata, M., Lee, U., Li, G., & Van Reeth, T. 2021, MNRAS, 502, 5856
* Schmid & Aerts (2016) Schmid, V. S. & Aerts, C. 2016, A&A, 592, A116
* Schwarzenberg-Czerny (2003) Schwarzenberg-Czerny, A. 2003, in Astronomical Society of the Pacific Conference Series, Vol. 292, Interplay of Periodic, Cyclic and Stochastic Variability in Selected Areas of the H-R Diagram, ed. C. Sterken, 383
* Shibahashi (1979) Shibahashi, H. 1979, PASJ, 31, 87
* Stassun et al. (2019) Stassun, K. G., Oelkers, R. J., Paegert, M., et al. 2019, AJ, 158, 138
* Szewczuk & Daszyńska-Daszkiewicz (2018) Szewczuk, W. & Daszyńska-Daszkiewicz, J. 2018, MNRAS, 478, 2243
* Szewczuk et al. (2021) Szewczuk, W., Walczak, P., & Daszyńska-Daszkiewicz, J. 2021, MNRAS, 503, 5894
* Takata et al. (2020) Takata, M., Ouazzani, R. M., Saio, H., et al. 2020, A&A, 635, A106
* Tassoul (1980) Tassoul, M. 1980, ApJS, 43, 469
* Tkachenko et al. (2013) Tkachenko, A., Aerts, C., Yakushechkin, A., et al. 2013, A&A, 556, A52
* Townsend et al. (2018) Townsend, R. H. D., Goldstein, J., & Zweibel, E. G. 2018, MNRAS, 475, 879
* Townsend & Teitler (2013) Townsend, R. H. D. & Teitler, S. A. 2013, MNRAS, 435, 3406
* Twicken et al. (2020) Twicken, J. D., Caldwell, D. A., Jenkins, J. M., et al. 2020, TESS Science Data Products Description Document: EXP-TESS-ARC-ICD-0014 Rev F, Tech. Rep. 20205008729, NASA
* Van Beeck et al. (2021) Van Beeck, J., Bowman, D. M., Pedersen, M. G., et al. 2021, A&A, under revision
* Van Beeck et al. (2020) Van Beeck, J., Prat, V., Van Reeth, T., et al. 2020, A&A, 638, A149
* Van Reeth et al. (2018) Van Reeth, T., Mombarg, J. S. G., Mathis, S., et al. 2018, A&A, 618, A24
* Van Reeth et al. (2016) Van Reeth, T., Tkachenko, A., & Aerts, C. 2016, A&A, 593, A120
* Van Reeth et al. (2015a) Van Reeth, T., Tkachenko, A., Aerts, C., et al. 2015a, A&A, 574, A17
* Van Reeth et al. (2015b) Van Reeth, T., Tkachenko, A., Aerts, C., et al. 2015b, ApJS, 218, 27
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Waelkens (1991) Waelkens, C. 1991, A&A, 246, 453
* Wes McKinney (2010) Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56 – 61
* Wu & Li (2019) Wu, T. & Li, Y. 2019, ApJ, 881, 86
* Zong et al. (2016) Zong, W., Charpinet, S., Vauclair, G., Giammichele, N., & Van Grootel, V. 2016, A&A, 585, A22
## Appendix A Extended tables and figures
Table 2 lists the time intervals that were removed before the detrending of the
light curves described in Section 3.2. Table 3 lists the best-fit values of the
parameters defined in Section 5 that characterize the 140 period-spacing
patterns in our catalog. The supplementary material available online contains a
version of Figures 11 and 12 showing the periodogram and the best-fit
period-spacing pattern for all stars in Table 3. Periods plotted with gray
circles in those figures were ignored in the fit.
Figure 18 shows a negative correlation with a Pearson correlation coefficient
of -0.34 and a p-value of 0.04 between the mean pulsation period of our
catalog stars and their surface velocity estimated from spectral line
broadening as reported in GALAH DR3 (Buder et al. 2021). In other words,
larger surface velocities are measured for stars with shorter $\langle
P\rangle$. While the measurement of the surface velocity from spectral line
broadening is complicated and can never be precise in the presence of time-
dependent gravity-mode velocity broadening (Aerts et al. 2014), this result is
consistent with the expected effect of faster rotation on the periods of the
pulsation modes (Bouabid et al. 2013). There is significant scatter in the
figure, as expected both for a sample of stars seen at random inclination
angles and for the small overlap of only 38 stars between our catalog and GALAH
DR3. Moreover, the uncertainties reported by GALAH DR3 are underestimates,
given that the spectral line broadening changes considerably throughout the
pulsation cycles, as shown by Aerts et al. (2014), while that time dependence
has been ignored in the reported velocity estimates based on snapshot spectra.
This precludes us from drawing in-depth conclusions based on the correlation in
Fig. 18. Figure 19 shows the distributions of mass, surface gravity, and
effective temperature for our catalog stars available in GALAH DR3. The ranges
of these three parameters are in agreement with those of $\gamma$ Dor stars
(Van Reeth et al. 2015b; Li et al. 2020).
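For reference, the correlation test reported above for Fig. 18 can be reproduced with SciPy as in the minimal sketch below; the two-column input file and the variable names are hypothetical placeholders for the 38 cross-matched stars:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical two-column file: <P> (d) and GALAH DR3 broadening velocity (km/s)
mean_period, v_broad = np.loadtxt("galah_crossmatch.txt", unpack=True)

# Pearson correlation coefficient and its two-sided p-value
r, p = pearsonr(mean_period, v_broad)
print(f"Pearson r = {r:.2f}, p-value = {p:.2f}")  # reported: r = -0.34, p = 0.04
```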
Figure 18: Correlation between the surface velocity deduced from spectral line broadening as reported by GALAH DR3 and our pattern parameter $\langle P\rangle$. If a star has multiple patterns, then the pattern with the highest-amplitude pulsations was used. The plot contains the 38 stars in common between GALAH DR3 and our catalog, where we only considered stars with spectra flagged as reliable in GALAH DR3. Uncertainties in $\langle P\rangle$ are smaller than the symbol size, while those of the velocities are underestimates for reasons explained in the text.

Table 2: Intervals excluded from the light curves to remove systematic flux variability due to, e.g., scattered light, telescope jittering, and loss of fine pointing.

TESS sector | Excluded interval (BJD$-$2457000)
---|---
1 | (1334.8, 1335.1)
(1347.0, 1349.5)
2 | (1356.2, 1356.5)
(1361.0, 1361.3)
(1363.5, 1363.8)
(1365.9, 1366.2)
(1373.8, 1374.1)
(1375.8, 1376.0)
(1377.9, 1378.7)
3 | (1380.0, 1385.0)
(1387.6, 1387.9)
(1390.1, 1390.4)
(1392.6, 1392.9)
(1395.1, 1395.4)
(1398.6, 1398.9)
(1400.6, 1400.9)
(1402.6, 1402.9)
(1404.6, 1404.9)
(1406.1, 1406.4)
4 | (1420.0, 1427.0)
5 | (1463.0, 1465.0)
6 | (1476.0, 1479.0)
7 | (1502.5, 1506.0)
8 | (1529.5, 1533.0)
Figure 19: Histograms for the spectroscopic stellar parameters of the 38 stars in common with our catalog having spectra flagged as reliable in the GALAH DR3 catalog by Buder et al. (2021).

Table 3: TIC stars in our catalog and their period-spacing pattern parameters. $\langle P\rangle$ and $\langle\Delta P\rangle$ are the mean period and the mean period spacing, respectively. $P_{0}$, $\Delta P_{0}$ and $\alpha$ are the parameters defined in Eq. (10). The column TESS gives the TESS magnitude, the column Periods gives the number of observed periods in the pattern, and the column $n$-span gives the range of overtones in the model pattern. Hybrid pulsators are marked with a “Y”.

TIC | TESS | $\langle P\rangle$ | $\langle\Delta P\rangle$ | $P_{0}$ | $\Delta P_{0}$ | $\alpha$ | Periods | $n$-span | Hybrid
---|---|---|---|---|---|---|---|---|---
 | (mag) | (d) | (ks) | (d) | (ks) | | | |
25152923 | 9.85 | 0.383 | 0.30 | ${0.376933}_{-0.000104}^{+0.000067}$ | ${0.2399}_{-0.0027}^{+0.0030}$ | ${0.1153}_{-0.0076}^{+0.0056}$ | 7 | 12 |
32150270 | 9.85 | 0.985 | 2.70 | ${1.00830}_{-0.00085}^{+0.00082}$ | ${2.406}_{-0.054}^{+0.056}$ | ${-0.146}_{-0.012}^{+0.012}$ | 5 | 6 |
33766642 | 10.35 | 0.472 | 0.77 | ${0.42510}_{-0.00011}^{+0.00012}$ | ${0.9255}_{-0.0024}^{+0.0026}$ | ${-0.03839}_{-0.00048}^{+0.00050}$ | 8 | 22 | Y
33766642 | 10.35 | 0.895 | 1.35 | ${0.9189}_{-0.0012}^{+0.0013}$ | ${1.254}_{-0.012}^{+0.021}$ | ${-0.0445}_{-0.0025}^{+0.0034}$ | 11 | 24 | Y
33879968 | 9.77 | 0.426 | 3.61 | ${0.3667}_{-0.0027}^{+0.0015}$ | ${4.000}_{-0.070}^{+0.000}$ | ${-0.0771}_{-0.0085}^{+0.0137}$ | 7 | 8 | Y
38515566 | 8.85 | 1.551 | 2.91 | ${1.49973}_{-0.00034}^{+0.00033}$ | ${2.9069}_{-0.0057}^{+0.0059}$ | ${0.00165}_{-0.00081}^{+0.00080}$ | 12 | 16 | Y
40335866 | 9.72 | 0.260 | 0.46 | ${0.25565}_{-0.00025}^{+0.00021}$ | ${0.4898}_{-0.0014}^{+0.0051}$ | ${-0.0982}_{-0.0022}^{+0.0046}$ | 10 | 14 | Y
40663416 | 10.07 | 0.706 | 1.98 | ${0.72518}_{-0.00071}^{+0.00042}$ | ${2.263}_{-0.089}^{+0.069}$ | ${0.175}_{-0.035}^{+0.025}$ | 5 | 5 |
41483281 | 10.25 | 0.617 | 2.76 | ${0.68534}_{-0.00044}^{+0.00034}$ | ${2.6441}_{-0.0107}^{+0.0085}$ | ${-0.01876}_{-0.00123}^{+0.00083}$ | 9 | 15 | Y
55451820 | 9.60 | 0.554 | 0.92 | ${0.53764}_{-0.00065}^{+0.00042}$ | ${1.185}_{-0.031}^{+0.033}$ | ${-0.195}_{-0.025}^{+0.022}$ | 5 | 6 |
55453219 | 8.84 | 0.396 | 0.53 | ${0.412394}_{-0.000075}^{+0.000088}$ | ${0.4289}_{-0.0033}^{+0.0045}$ | ${-0.0676}_{-0.0018}^{+0.0023}$ | 5 | 12 |
55849446 | 9.27 | 0.662 | 0.55 | ${0.670135}_{-0.000096}^{+0.000101}$ | ${0.6080}_{-0.0049}^{+0.0045}$ | ${0.0879}_{-0.0049}^{+0.0046}$ | 4 | 8 |
55849446 | 9.27 | 0.473 | 0.18 | ${0.467041}_{-0.000069}^{+0.000067}$ | ${0.19380}_{-0.00045}^{+0.00044}$ | ${-0.03616}_{-0.00056}^{+0.00045}$ | 10 | 31 |
140511383 | 11.92 | 0.436 | 0.45 | ${0.37710}_{-0.00027}^{+0.00036}$ | ${0.7114}_{-0.0100}^{+0.0087}$ | ${-0.0515}_{-0.0015}^{+0.0018}$ | 9 | 20 |
140756824 | 10.83 | 0.273 | 0.18 | ${0.27303276}_{-0.00000088}^{+0.00000097}$ | ${0.179016}_{-0.000023}^{+0.000024}$ | ${0.03393593}_{-0.00000060}^{+0.00000278}$ | 4 | 9 |
140756824 | 10.83 | 0.560 | 0.75 | ${0.56470}_{-0.00097}^{+0.00069}$ | ${0.707}_{-0.021}^{+0.029}$ | ${-0.132}_{-0.016}^{+0.022}$ | 9 | 9 |
141122677 | 10.14 | 0.426 | 0.76 | ${0.34900}_{-0.00054}^{+0.00058}$ | ${1.385}_{-0.031}^{+0.027}$ | ${-0.0944}_{-0.0032}^{+0.0036}$ | 7 | 15 |
141122677 | 10.14 | 0.746 | 1.02 | ${0.7295}_{-0.0010}^{+0.0013}$ | ${1.255}_{-0.019}^{+0.272}$ | ${-0.1666}_{-0.0031}^{+0.0868}$ | 7 | 11 |
141153472 | 9.82 | 0.890 | 2.00 | ${0.8178}_{-0.0016}^{+0.0023}$ | ${3.25}_{-0.17}^{+0.16}$ | ${-0.198}_{-0.022}^{+0.025}$ | 5 | 6 |
141154953 | 10.29 | 0.544 | 0.53 | ${0.49105}_{-0.00032}^{+0.00039}$ | ${1.141}_{-0.018}^{+0.017}$ | ${-0.1321}_{-0.0031}^{+0.0033}$ | 6 | 14 |
141479660 | 11.23 | 0.465 | 1.33 | ${0.46527}_{-0.00035}^{+0.00033}$ | ${1.3324}_{-0.0032}^{+0.0065}$ | ${-0.0390}_{-0.0011}^{+0.0016}$ | 12 | 18 | Y
141826495 | 9.73 | 0.456 | 1.01 | ${0.41665}_{-0.00096}^{+0.00061}$ | ${1.571}_{-0.035}^{+0.039}$ | ${-0.1651}_{-0.0056}^{+0.0121}$ | 10 | 12 |
141826495 | 9.73 | 0.663 | 0.55 | ${0.72258}_{-0.00014}^{+0.00013}$ | ${0.7077}_{-0.0051}^{+0.0051}$ | ${0.03109}_{-0.00084}^{+0.00083}$ | 9 | 23 |
141826495 | 9.73 | 0.723 | 0.72 | ${0.714501}_{-0.000012}^{+0.000013}$ | ${0.66689}_{-0.00072}^{+0.00069}$ | ${0.0717741}_{-0.0000034}^{+0.0000016}$ | 5 | 5 |
142083629 | 7.68 | 0.382 | 1.08 | ${0.397258}_{-0.000095}^{+0.000117}$ | ${0.843}_{-0.011}^{+0.013}$ | ${-0.1869}_{-0.0058}^{+0.0068}$ | 5 | 5 | Y
142083629 | 7.68 | 0.305 | 0.53 | ${0.260452}_{-0.000061}^{+0.000092}$ | ${0.8561}_{-0.0037}^{+0.0030}$ | ${-0.08684}_{-0.00070}^{+0.00089}$ | 8 | 13 | Y
149253072 | 10.91 | 1.034 | 1.82 | ${1.02011}_{-0.00086}^{+0.00079}$ | ${1.963}_{-0.021}^{+0.022}$ | ${-0.1185}_{-0.0096}^{+0.0081}$ | 7 | 9 |
149253072 | 10.91 | 0.554 | 1.32 | ${0.49571}_{-0.00052}^{+0.00041}$ | ${1.153}_{-0.027}^{+0.024}$ | ${0.0339}_{-0.0053}^{+0.0058}$ | 5 | 9 |
149444771 | 11.23 | 0.882 | 1.12 | ${0.83452}_{-0.00093}^{+0.00085}$ | ${1.749}_{-0.062}^{+0.067}$ | ${-0.152}_{-0.014}^{+0.013}$ | 6 | 7 |
149540525 | 8.39 | 0.675 | 0.75 | ${0.67404}_{-0.00025}^{+0.00025}$ | ${0.7590}_{-0.0040}^{+0.0043}$ | ${-0.0614}_{-0.0030}^{+0.0032}$ | 8 | 12 | Y
149540525 | 8.39 | 0.625 | 0.94 | ${0.66392}_{-0.00048}^{+0.00050}$ | ${0.7939}_{-0.0086}^{+0.0098}$ | ${-0.0444}_{-0.0019}^{+0.0021}$ | 13 | 21 | Y
149573437 | 9.48 | 0.282 | 0.50 | ${0.291346}_{-0.000025}^{+0.000025}$ | ${0.4367}_{-0.0014}^{+0.0013}$ | ${-0.0742}_{-0.0013}^{+0.0011}$ | 6 | 8 | Y
149630117 | 9.05 | 0.392 | 1.47 | ${0.41974}_{-0.00011}^{+0.00011}$ | ${1.3585}_{-0.0069}^{+0.0063}$ | ${-0.0477}_{-0.0021}^{+0.0018}$ | 5 | 8 | Y
149993830 | 8.91 | 0.551 | 0.48 | ${0.55334}_{-0.00022}^{+0.00021}$ | ${0.4892}_{-0.0064}^{+0.0073}$ | ${0.0361}_{-0.0096}^{+0.0125}$ | 6 | 8 |
150063580 | 9.91 | 0.279 | 0.32 | ${0.29271}_{-0.00016}^{+0.00015}$ | ${0.3087}_{-0.0021}^{+0.0022}$ | ${-0.00950}_{-0.00106}^{+0.00095}$ | 11 | 28 | Y
150106884 | 11.45 | 0.702 | 0.72 | ${0.74061}_{-0.00038}^{+0.00038}$ | ${0.5339}_{-0.0099}^{+0.0103}$ | ${-0.0551}_{-0.0026}^{+0.0028}$ | 8 | 18 |
150106884 | 11.45 | 0.407 | 0.71 | ${0.3886610}_{-0.0000086}^{+0.0000127}$ | ${0.85487}_{-0.00055}^{+0.00047}$ | ${-0.0915352}_{-0.0000021}^{+0.0000017}$ | 5 | 5 |
150165657 | 8.66 | 0.411 | 1.02 | ${0.44579}_{-0.00045}^{+0.00050}$ | ${1.053}_{-0.017}^{+0.020}$ | ${0.0123}_{-0.0046}^{+0.0051}$ | 11 | 13 | Y
150250236 | 9.67 | 0.799 | 1.74 | ${0.80779}_{-0.00061}^{+0.00031}$ | ${1.780}_{-0.034}^{+0.027}$ | ${0.050}_{-0.018}^{+0.013}$ | 6 | 6 |
150250236 | 9.67 | 0.535 | 0.54 | ${0.52858}_{-0.00026}^{+0.00029}$ | ${0.5881}_{-0.0045}^{+0.0039}$ | ${-0.0793}_{-0.0038}^{+0.0047}$ | 5 | 12 |
150318672 | 10.16 | 0.296 | 0.36 | ${0.298849}_{-0.000103}^{+0.000096}$ | ${0.3484}_{-0.0038}^{+0.0051}$ | ${-0.0420}_{-0.0056}^{+0.0095}$ | 6 | 8 |
150324086 | 10.13 | 0.516 | 0.66 | ${0.479533}_{-0.000063}^{+0.000125}$ | ${0.7805}_{-0.0044}^{+0.0039}$ | ${-0.0388}_{-0.0012}^{+0.0015}$ | 8 | 12 |
150392753 | 8.52 | 0.304 | 0.17 | ${0.296854}_{-0.000034}^{+0.000036}$ | ${0.21511}_{-0.00058}^{+0.00053}$ | ${-0.06696}_{-0.00070}^{+0.00086}$ | 11 | 22 |
150440102 | 11.72 | 0.933 | 1.38 | ${0.8630}_{-0.0014}^{+0.0016}$ | ${0.975}_{-0.060}^{+0.060}$ | ${0.068}_{-0.012}^{+0.013}$ | 6 | 11 |
150440362 | 8.02 | 0.870 | 1.31 | ${0.8724}_{-0.0014}^{+0.0016}$ | ${1.301}_{-0.029}^{+0.034}$ | ${-0.045}_{-0.017}^{+0.017}$ | 5 | 9 |
167124706 | 8.75 | 0.906 | 2.46 | ${0.80673}_{-0.00022}^{+0.00017}$ | ${2.5089}_{-0.0043}^{+0.0051}$ | ${-0.00567}_{-0.00056}^{+0.00047}$ | 11 | 16 |
167124706 | 8.75 | 1.197 | 3.12 | ${0.96407}_{-0.00025}^{+0.00026}$ | ${3.0517}_{-0.0067}^{+0.0068}$ | ${0.00343}_{-0.00037}^{+0.00038}$ | 12 | 16 |
167722437 | 8.42 | 0.385 | 0.41 | ${0.41119}_{-0.00013}^{+0.00012}$ | ${0.4293}_{-0.0050}^{+0.0048}$ | ${0.0085}_{-0.0020}^{+0.0019}$ | 8 | 16 |
176874440 | 11.10 | 1.159 | 2.50 | ${1.14639}_{-0.00034}^{+0.00024}$ | ${2.5576}_{-0.0089}^{+0.0094}$ | ${-0.0561}_{-0.0044}^{+0.0050}$ | 6 | 6 |
176935965 | 9.88 | 0.338 | 0.30 | ${0.33462}_{-0.00013}^{+0.00015}$ | ${0.3013}_{-0.0018}^{+0.0021}$ | ${-0.0150}_{-0.0038}^{+0.0041}$ | 8 | 13 |
176980185 | 10.86 | 0.628 | 1.00 | ${0.64161}_{-0.00039}^{+0.00024}$ | ${0.936}_{-0.016}^{+0.014}$ | ${-0.0531}_{-0.0087}^{+0.0058}$ | 7 | 9 |
177082055 | 8.28 | 1.149 | 1.78 | ${1.20087}_{-0.00025}^{+0.00030}$ | ${1.7853}_{-0.0048}^{+0.0055}$ | ${0.00010}_{-0.00074}^{+0.00086}$ | 7 | 16 | Y
177115672 | 11.70 | 0.806 | 0.50 | ${0.804662}_{-0.000047}^{+0.000045}$ | ${0.50390}_{-0.00053}^{+0.00055}$ | ${-0.02618}_{-0.00052}^{+0.00048}$ | 8 | 16 |
177162802 | 10.68 | 0.809 | 3.00 | ${0.6510}_{-0.0029}^{+0.0018}$ | ${1.876}_{-0.051}^{+0.062}$ | ${0.0822}_{-0.0054}^{+0.0048}$ | 12 | 17 |
177164485 | 10.41 | 0.707 | 0.94 | ${0.76156}_{-0.00027}^{+0.00019}$ | ${1.0111}_{-0.0109}^{+0.0099}$ | ${0.0153}_{-0.0019}^{+0.0017}$ | 10 | 15 |
177386428 | 9.19 | 0.269 | 0.32 | ${0.256794}_{-0.000012}^{+0.000010}$ | ${0.41500}_{-0.00042}^{+0.00052}$ | ${-0.087406}_{-0.000596}^{+0.000011}$ | 6 | 7 | Y
231084221 | 10.26 | 0.482 | 0.97 | ${0.458757}_{-0.000015}^{+0.000021}$ | ${1.04841}_{-0.00088}^{+0.00092}$ | ${-0.0382265}_{-0.0002841}^{+0.0000039}$ | 5 | 5 | Y
257721280 | 9.36 | 0.860 | 0.99 | ${0.73313}_{-0.00025}^{+0.00027}$ | ${1.0200}_{-0.0050}^{+0.0051}$ | ${-0.00252}_{-0.00047}^{+0.00046}$ | 10 | 27 |
260265631 | 10.21 | 0.666 | 1.56 | ${0.601269}_{-0.000032}^{+0.000047}$ | ${1.7389}_{-0.0021}^{+0.0018}$ | ${-0.03207}_{-0.00032}^{+0.00041}$ | 6 | 10 |
260353074 | 8.97 | 0.950 | 3.64 | ${0.8575}_{-0.0027}^{+0.0047}$ | ${2.78}_{-0.18}^{+0.12}$ | ${0.107}_{-0.020}^{+0.029}$ | 5 | 6 | Y
260373272 | 10.38 | 1.075 | 1.19 | ${0.98124}_{-0.00078}^{+0.00079}$ | ${0.934}_{-0.021}^{+0.022}$ | ${0.0312}_{-0.0032}^{+0.0031}$ | 8 | 16 | Y
260502142 | 10.58 | 0.328 | 0.32 | ${0.29556}_{-0.00018}^{+0.00023}$ | ${0.3975}_{-0.0065}^{+0.0056}$ | ${-0.0278}_{-0.0019}^{+0.0025}$ | 9 | 17 | Y
260540780 | 10.30 | 0.382 | 0.32 | ${0.399407}_{-0.000075}^{+0.000099}$ | ${0.2607}_{-0.0028}^{+0.0034}$ | ${-0.0402}_{-0.0016}^{+0.0021}$ | 8 | 11 |
260542342 | 11.80 | 0.624 | 0.71 | ${0.67111}_{-0.00033}^{+0.00067}$ | ${0.591}_{-0.016}^{+0.021}$ | ${-0.0300}_{-0.0039}^{+0.0050}$ | 6 | 13 |
262614966 | 10.83 | 0.594 | 0.66 | ${0.60134}_{-0.00078}^{+0.00044}$ | ${0.618}_{-0.011}^{+0.012}$ | ${-0.0703}_{-0.0082}^{+0.0074}$ | 8 | 12 |
270503717 | 10.79 | 0.500 | 0.53 | ${0.49714}_{-0.00012}^{+0.00016}$ | ${0.5411}_{-0.0011}^{+0.0011}$ | ${-0.0423}_{-0.0011}^{+0.0012}$ | 11 | 17 |
270503717 | 10.79 | 0.952 | 2.12 | ${0.92867}_{-0.00064}^{+0.00089}$ | ${2.181}_{-0.032}^{+0.023}$ | ${-0.0310}_{-0.0097}^{+0.0126}$ | 5 | 7 |
271639931 | 8.08 | 0.380 | 0.33 | ${0.37788}_{-0.00023}^{+0.00016}$ | ${0.3284}_{-0.0046}^{+0.0048}$ | ${-0.021}_{-0.019}^{+0.021}$ | 4 | 6 |
271721220 | 10.56 | 0.622 | 1.87 | ${0.58873}_{-0.00040}^{+0.00043}$ | ${1.803}_{-0.021}^{+0.019}$ | ${0.0249}_{-0.0060}^{+0.0069}$ | 7 | 8 |
271721220 | 10.56 | 0.627 | 1.95 | ${0.59271}_{-0.00040}^{+0.00028}$ | ${1.785}_{-0.015}^{+0.017}$ | ${0.0564}_{-0.0060}^{+0.0049}$ | 6 | 8 |
271721220 | 10.56 | 0.318 | 0.14 | ${0.307272}_{-0.000017}^{+0.000016}$ | ${0.16913}_{-0.00069}^{+0.00064}$ | ${-0.02888}_{-0.00070}^{+0.00073}$ | 9 | 13 |
271723952 | 8.40 | 0.343 | 0.17 | ${0.33569}_{-0.00014}^{+0.00010}$ | ${0.1421}_{-0.0047}^{+0.0049}$ | ${0.0418}_{-0.0098}^{+0.0093}$ | 6 | 9 |
271723952 | 8.40 | 0.703 | 0.59 | ${0.66200}_{-0.00039}^{+0.00027}$ | ${0.7548}_{-0.0082}^{+0.0085}$ | ${-0.0457}_{-0.0019}^{+0.0026}$ | 11 | 21 |
272127517 | 9.35 | 0.914 | 0.99 | ${0.87120}_{-0.00029}^{+0.00061}$ | ${1.1182}_{-0.0160}^{+0.0055}$ | ${-0.0350}_{-0.0014}^{+0.0029}$ | 11 | 19 |
279055960 | 11.02 | 0.329 | 0.24 | ${0.327252}_{-0.000043}^{+0.000036}$ | ${0.24511}_{-0.00050}^{+0.00051}$ | ${-0.05659}_{-0.00091}^{+0.00098}$ | 6 | 15 |
279360930 | 11.30 | 0.290 | 0.66 | ${0.282510}_{-0.000066}^{+0.000049}$ | ${0.7006}_{-0.0017}^{+0.0017}$ | ${-0.0651}_{-0.0025}^{+0.0020}$ | 5 | 7 | Y
279510278 | 11.44 | 0.745 | 0.72 | ${0.75334}_{-0.00021}^{+0.00023}$ | ${0.7088}_{-0.0085}^{+0.0111}$ | ${-0.0159}_{-0.0058}^{+0.0081}$ | 8 | 9 |
293221812 | 11.20 | 0.346 | 0.73 | ${0.29479}_{-0.00017}^{+0.00013}$ | ${0.7541}_{-0.0052}^{+0.0051}$ | ${-0.0054}_{-0.0013}^{+0.0012}$ | 5 | 13 |
293221812 | 11.20 | 0.709 | 2.92 | ${0.79363}_{-0.00086}^{+0.00104}$ | ${2.890}_{-0.071}^{+0.088}$ | ${-0.0038}_{-0.0079}^{+0.0096}$ | 5 | 8 |
293270956 | 11.30 | 0.626 | 0.26 | ${0.61271}_{-0.00016}^{+0.00016}$ | ${0.2731}_{-0.0020}^{+0.0019}$ | ${-0.0125}_{-0.0016}^{+0.0019}$ | 9 | 22 |
293271232 | 7.51 | 0.595 | 2.27 | ${0.56875}_{-0.00033}^{+0.00046}$ | ${2.2763}_{-0.0082}^{+0.0080}$ | ${-0.0035}_{-0.0021}^{+0.0027}$ | 10 | 11 |
293271232 | 7.51 | 0.265 | 0.23 | ${0.251493}_{-0.000093}^{+0.000147}$ | ${0.2179}_{-0.0052}^{+0.0044}$ | ${0.0121}_{-0.0038}^{+0.0048}$ | 6 | 13 |
293273274 | 11.12 | 1.009 | 1.06 | ${1.03286}_{-0.00033}^{+0.00029}$ | ${1.066}_{-0.010}^{+0.010}$ | ${0.0049}_{-0.0031}^{+0.0029}$ | 7 | 13 |
293273274 | 11.12 | 0.623 | 0.68 | ${0.61201}_{-0.00033}^{+0.00040}$ | ${0.6845}_{-0.0025}^{+0.0027}$ | ${-0.00275}_{-0.00089}^{+0.00080}$ | 8 | 28 |
293345700 | 9.95 | 0.461 | 0.26 | ${0.462564}_{-0.000028}^{+0.000034}$ | ${0.2602}_{-0.0012}^{+0.0012}$ | ${-0.0091}_{-0.0035}^{+0.0024}$ | 6 | 8 |
293974233 | 11.32 | 0.466 | 2.69 | ${0.52847}_{-0.00048}^{+0.00088}$ | ${2.679}_{-0.034}^{+0.042}$ | ${-0.0016}_{-0.0044}^{+0.0057}$ | 8 | 9 | Y
294092361 | 10.56 | 1.070 | 0.79 | ${1.06922}_{-0.00019}^{+0.00017}$ | ${0.7843}_{-0.0059}^{+0.0063}$ | ${0.0449}_{-0.0085}^{+0.0091}$ | 7 | 7 |
300033585 | 9.99 | 0.378 | 1.06 | ${0.3734484}_{-0.0000094}^{+0.0000057}$ | ${1.08201}_{-0.00023}^{+0.00020}$ | ${-0.0587525}_{-0.0001301}^{+0.0000060}$ | 6 | 8 | Y
300033857 | 9.09 | 0.537 | 0.65 | ${0.537869}_{-0.000053}^{+0.000051}$ | ${0.64448}_{-0.00068}^{+0.00071}$ | ${-0.04022}_{-0.00041}^{+0.00042}$ | 11 | 18 | Y
300033857 | 9.09 | 0.673 | 1.18 | ${0.61989}_{-0.00029}^{+0.00023}$ | ${1.482}_{-0.015}^{+0.016}$ | ${-0.0671}_{-0.0041}^{+0.0036}$ | 6 | 8 | Y
300033857 | 9.09 | 0.288 | 0.50 | ${0.276815}_{-0.000045}^{+0.000041}$ | ${0.4853}_{-0.0036}^{+0.0026}$ | ${0.0140}_{-0.0033}^{+0.0034}$ | 4 | 7 | Y
300138080 | 11.64 | 0.529 | 0.70 | ${0.44776}_{-0.00043}^{+0.00047}$ | ${0.923}_{-0.013}^{+0.011}$ | ${-0.0314}_{-0.0015}^{+0.0019}$ | 7 | 19 |
300140867 | 9.84 | 0.738 | 1.39 | ${0.7326}_{-0.0011}^{+0.0014}$ | ${1.442}_{-0.016}^{+0.026}$ | ${-0.1091}_{-0.0048}^{+0.0087}$ | 9 | 16 |
300157862 | 11.08 | 0.259 | 0.89 | ${0.22706}_{-0.00042}^{+0.00043}$ | ${0.8509}_{-0.0058}^{+0.0060}$ | ${0.0126}_{-0.0019}^{+0.0021}$ | 18 | 21 |
300653681 | 10.25 | 0.771 | 1.58 | ${0.79646}_{-0.00064}^{+0.00068}$ | ${1.656}_{-0.022}^{+0.025}$ | ${0.0335}_{-0.0057}^{+0.0064}$ | 6 | 10 | Y
306631280 | 10.60 | 0.412 | 0.33 | ${0.43612}_{-0.00025}^{+0.00026}$ | ${0.3018}_{-0.0077}^{+0.0084}$ | ${-0.0120}_{-0.0038}^{+0.0037}$ | 9 | 18 |
349092320 | 9.15 | 0.751 | 1.07 | ${0.81021}_{-0.00073}^{+0.00064}$ | ${0.967}_{-0.014}^{+0.015}$ | ${-0.0213}_{-0.0022}^{+0.0023}$ | 12 | 22 |
349096085 | 11.22 | 0.438 | 2.00 | ${0.32557}_{-0.00049}^{+0.00059}$ | ${2.391}_{-0.027}^{+0.025}$ | ${-0.0399}_{-0.0027}^{+0.0030}$ | 6 | 10 | Y
349158735 | 10.76 | 0.505 | 0.43 | ${0.51496}_{-0.00026}^{+0.00025}$ | ${0.449}_{-0.013}^{+0.015}$ | ${0.018}_{-0.011}^{+0.011}$ | 6 | 9 |
349158735 | 10.76 | 0.752 | 1.62 | ${0.70333}_{-0.00046}^{+0.00053}$ | ${1.190}_{-0.026}^{+0.024}$ | ${0.1020}_{-0.0077}^{+0.0083}$ | 6 | 7 |
349310718 | 10.52 | 0.408 | 0.22 | ${0.391297}_{-0.000025}^{+0.000020}$ | ${0.28630}_{-0.00091}^{+0.00086}$ | ${-0.04142}_{-0.00053}^{+0.00056}$ | 8 | 13 |
349410895 | 9.45 | 0.244 | 0.42 | ${0.24126}_{-0.00018}^{+0.00012}$ | ${0.4155}_{-0.0078}^{+0.0078}$ | ${0.033}_{-0.041}^{+0.039}$ | 4 | 4 |
349521873 | 8.96 | 0.804 | 1.56 | ${0.86785}_{-0.00062}^{+0.00060}$ | ${1.419}_{-0.027}^{+0.032}$ | ${-0.0247}_{-0.0042}^{+0.0048}$ | 7 | 12 |
349680479 | 7.89 | 0.276 | 0.22 | ${0.265102}_{-0.000125}^{+0.000084}$ | ${0.1336}_{-0.0033}^{+0.0043}$ | ${0.0908}_{-0.0063}^{+0.0050}$ | 7 | 13 | Y
349680479 | 7.89 | 0.350 | 0.32 | ${0.35580}_{-0.00027}^{+0.00021}$ | ${0.3792}_{-0.0085}^{+0.0082}$ | ${0.121}_{-0.013}^{+0.011}$ | 6 | 11 | Y
349680479 | 7.89 | 0.428 | 0.73 | ${0.41565}_{-0.00019}^{+0.00020}$ | ${0.8063}_{-0.0068}^{+0.0069}$ | ${-0.0741}_{-0.0066}^{+0.0063}$ | 8 | 8 | Y
349683884 | 10.07 | 0.394 | 0.83 | ${0.418680}_{-0.000055}^{+0.000062}$ | ${0.7111}_{-0.0044}^{+0.0047}$ | ${-0.0574}_{-0.0017}^{+0.0018}$ | 7 | 8 | Y
349785797 | 10.44 | 0.323 | 0.27 | ${0.29155}_{-0.00012}^{+0.00016}$ | ${0.2384}_{-0.0035}^{+0.0030}$ | ${0.0100}_{-0.0013}^{+0.0015}$ | 10 | 22 |
349832567 | 10.80 | 0.448 | 0.25 | ${0.461516}_{-0.000011}^{+0.000014}$ | ${0.20589}_{-0.00030}^{+0.00035}$ | ${-0.03877}_{-0.00023}^{+0.00027}$ | 12 | 20 |
349835272 | 9.76 | 0.535 | 3.28 | ${0.516852}_{-0.000010}^{+0.000014}$ | ${3.38344}_{-0.00090}^{+0.00095}$ | ${-0.0668691}_{-0.0000012}^{+0.0000015}$ | 4 | 4 |
349835272 | 9.76 | 0.572 | 3.23 | ${0.60230}_{-0.00059}^{+0.00054}$ | ${3.331}_{-0.017}^{+0.016}$ | ${0.0368}_{-0.0026}^{+0.0022}$ | 7 | 11 |
349902873 | 8.62 | 0.287 | 0.33 | ${0.30225}_{-0.00012}^{+0.00013}$ | ${0.2972}_{-0.0044}^{+0.0045}$ | ${-0.0227}_{-0.0028}^{+0.0029}$ | 7 | 15 | Y
350092538 | 10.12 | 0.596 | 0.26 | ${0.599127}_{-0.000033}^{+0.000058}$ | ${0.25064}_{-0.00050}^{+0.00064}$ | ${-0.01811}_{-0.00053}^{+0.00080}$ | 5 | 17 |
350144504 | 7.99 | 0.329 | 0.48 | ${0.331860}_{-0.000062}^{+0.000064}$ | ${0.4833}_{-0.0038}^{+0.0039}$ | ${0.0357}_{-0.0063}^{+0.0070}$ | 5 | 6 |
350144504 | 7.99 | 0.664 | 1.02 | ${0.66329}_{-0.00030}^{+0.00050}$ | ${1.014}_{-0.016}^{+0.022}$ | ${0.107}_{-0.021}^{+0.037}$ | 5 | 5 |
350144657 | 10.81 | 0.513 | 0.27 | ${0.509384}_{-0.000028}^{+0.000028}$ | ${0.28170}_{-0.00027}^{+0.00028}$ | ${-0.03737}_{-0.00029}^{+0.00031}$ | 15 | 25 |
350295588 | 12.02 | 0.649 | 0.63 | ${0.66124}_{-0.00035}^{+0.00044}$ | ${0.6585}_{-0.0056}^{+0.0064}$ | ${0.0267}_{-0.0029}^{+0.0032}$ | 11 | 17 |
350295588 | 12.02 | 0.889 | 0.64 | ${0.91804}_{-0.00013}^{+0.00013}$ | ${0.5348}_{-0.0064}^{+0.0063}$ | ${-0.0421}_{-0.0023}^{+0.0023}$ | 7 | 11 |
350343297 | 9.00 | 0.538 | 0.78 | ${0.50172}_{-0.00026}^{+0.00027}$ | ${0.7801}_{-0.0037}^{+0.0045}$ | ${0.0010}_{-0.0012}^{+0.0012}$ | 9 | 21 | Y
350344057 | 6.94 | 0.263 | 0.11 | ${0.264377}_{-0.000042}^{+0.000061}$ | ${0.11675}_{-0.00091}^{+0.00095}$ | ${0.0176}_{-0.0027}^{+0.0032}$ | 7 | 14 | Y
350444342 | 8.07 | 0.375 | 0.69 | ${0.34963}_{-0.00335}^{+0.00074}$ | ${0.832}_{-0.021}^{+3.168}$ | ${-0.0677}_{-0.0040}^{+0.3677}$ | 9 | 9 |
350444342 | 8.07 | 0.745 | 0.61 | ${0.80028}_{-0.00051}^{+0.00050}$ | ${0.4924}_{-0.0035}^{+0.0113}$ | ${-0.02551}_{-0.00072}^{+0.00186}$ | 20 | 32 |
350477538 | 11.34 | 0.356 | 1.01 | ${0.33391}_{-0.00017}^{+0.00035}$ | ${0.925}_{-0.024}^{+0.018}$ | ${0.046}_{-0.013}^{+0.017}$ | 4 | 5 |
350715741 | 10.87 | 0.545 | 0.59 | ${0.545268}_{-0.000052}^{+0.000116}$ | ${0.59135}_{-0.00075}^{+0.00077}$ | ${-0.04540}_{-0.00031}^{+0.00028}$ | 7 | 28 | Y
350840969 | 11.36 | 0.693 | 0.58 | ${0.70792}_{-0.00022}^{+0.00028}$ | ${0.5386}_{-0.0060}^{+0.0091}$ | ${-0.0314}_{-0.0032}^{+0.0041}$ | 6 | 13 |
358181695 | 8.82 | 0.479 | 0.53 | ${0.45686}_{-0.00039}^{+0.00030}$ | ${0.707}_{-0.013}^{+0.015}$ | ${-0.0961}_{-0.0070}^{+0.0066}$ | 8 | 12 |
364325752 | 9.98 | 0.597 | 0.28 | ${0.590531}_{-0.000034}^{+0.000050}$ | ${0.27860}_{-0.00075}^{+0.00074}$ | ${0.00057}_{-0.00085}^{+0.00149}$ | 7 | 15 |
374944608 | 9.90 | 0.439 | 0.39 | ${0.453893}_{-0.000071}^{+0.000068}$ | ${0.3450}_{-0.0012}^{+0.0013}$ | ${-0.03955}_{-0.00069}^{+0.00067}$ | 15 | 22 |
374944608 | 9.90 | 0.844 | 1.38 | ${0.85144}_{-0.00033}^{+0.00030}$ | ${1.394}_{-0.016}^{+0.017}$ | ${0.0295}_{-0.0096}^{+0.0093}$ | 5 | 6 |
375038081 | 9.31 | 1.175 | 2.31 | ${0.77596}_{-0.00121}^{+0.00076}$ | ${3.489}_{-0.018}^{+0.025}$ | ${-0.03428}_{-0.00074}^{+0.00045}$ | 11 | 34 |
381950897 | 9.56 | 0.764 | 0.54 | ${0.740466}_{-0.000038}^{+0.000044}$ | ${0.4077}_{-0.0015}^{+0.0015}$ | ${0.06492}_{-0.00090}^{+0.00092}$ | 12 | 13 |
381950897 | 9.56 | 0.479 | 0.22 | ${0.470503}_{-0.000072}^{+0.000103}$ | ${0.2440}_{-0.0025}^{+0.0019}$ | ${-0.0391}_{-0.0022}^{+0.0027}$ | 8 | 16 |
382519218 | 9.55 | 0.316 | 0.20 | ${0.3012781}_{-0.0000070}^{+0.0000086}$ | ${0.27533}_{-0.00041}^{+0.00027}$ | ${-0.05858}_{-0.00025}^{+0.00035}$ | 7 | 12 |
382519218 | 9.55 | 0.347 | 0.21 | ${0.33893}_{-0.00014}^{+0.00018}$ | ${0.1958}_{-0.0087}^{+0.0086}$ | ${0.017}_{-0.013}^{+0.014}$ | 5 | 8 |
388131027 | 8.03 | 0.947 | 1.10 | ${0.94919}_{-0.00032}^{+0.00032}$ | ${1.0951}_{-0.0045}^{+0.0051}$ | ${-0.0266}_{-0.0023}^{+0.0025}$ | 11 | 13 |
391744540 | 8.69 | 0.502 | 0.44 | ${0.46847}_{-0.00021}^{+0.00022}$ | ${0.572}_{-0.010}^{+0.010}$ | ${-0.0460}_{-0.0038}^{+0.0040}$ | 6 | 15 | Y
391744540 | 8.69 | 0.378 | 0.61 | ${0.35510}_{-0.00020}^{+0.00031}$ | ${0.694}_{-0.017}^{+0.017}$ | ${-0.0416}_{-0.0098}^{+0.0095}$ | 5 | 7 | Y
391892842 | 8.09 | 0.383 | 0.35 | ${0.40119}_{-0.00030}^{+0.00016}$ | ${0.4199}_{-0.0044}^{+0.0031}$ | ${0.0439}_{-0.0021}^{+0.0011}$ | 15 | 28 |
391894459 | 9.31 | 0.436 | 1.30 | ${0.48116}_{-0.00024}^{+0.00028}$ | ${1.291}_{-0.014}^{+0.017}$ | ${-0.0021}_{-0.0029}^{+0.0036}$ | 7 | 11 | Y
407661375 | 10.04 | 0.715 | 0.46 | ${0.73411}_{-0.00022}^{+0.00033}$ | ${0.526}_{-0.019}^{+0.019}$ | ${0.0381}_{-0.0098}^{+0.0105}$ | 5 | 10 |
Table 3: continued.
# GPT-too: A language-model-first approach for AMR-to-text generation
Manuel Mager[1] Ramón Fernandez Astudillo[2] Tahira Naseem[2] Md Arafat
Sultan[2] Young-Suk Lee[2] Radu Florian[2] Salim Roukos[2]
[1] Institute for Natural Language Processing,
University of Stuttgart, Germany
[2] IBM Research AI, Yorktown Heights, NY 10598, USA
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
{tnaseem<EMAIL_ADDRESS>
This research was done during an internship at IBM Research AI.
###### Abstract
Abstract Meaning Representations (AMRs) are broad-coverage sentence-level
semantic graphs. Existing approaches to generating text from AMR have focused
on training sequence-to-sequence or graph-to-sequence models on AMR annotated
data only. In this paper, we propose an alternative approach that combines a
strong pre-trained language model with cycle consistency-based re-scoring.
Despite the simplicity of the approach, our experimental results show these
models outperform all previous techniques on the English LDC2017T10 dataset,
including the recent use of transformer architectures. In addition to the
standard evaluation metrics, we provide human evaluation experiments that
further substantiate the strength of our approach.
## 1 Introduction
Abstract Meaning Representation (AMR) Banarescu et al. (2013) is a rooted,
directed, acyclic graph with labeled edges (relations) and nodes (concepts)
expressing “who is doing what to whom”. AMR-to-text generation produces
sentences expressing the semantics underlying an AMR graph.
Initial works in AMR-to-text used transducers Flanigan et al. (2016), phrase-
based machine translation Pourdamghani et al. (2016) and neural sequence-to-
sequence (seq2seq) models with linearized graphs Konstas et al. (2017). Cao
and Clark (2019) leverage constituency parsing for generation. Beck et al.
(2018) improve upon prior RNN graph encoding Song et al. (2018) with Levi
Graph Transformations. Damonte and Cohen (2019) compare multiple
representations and find graph encoders to be the best. Guo et al. (2019) use
RNN graph encoders with dense graph convolutional encoding. Ribeiro et al.
(2019) use RNN encoders with dual graph representations. Transformer-based
seq2seq Vaswani et al. (2017) was first applied to AMR-to-text in Sinh and Le
Minh (2019). Zhu et al. (2019) greatly improve over the prior state-of-the-art
by modifying self-attention to account for AMR graph structure. Using
transformers has also been recently explored by Wang et al. (2020), who propose
a multi-head graph attention mechanism, and by Cai and Lam (2020), who propose
a graph transformer architecture.
Pre-trained transformer representations Radford et al. (2018); Devlin et al.
(2019); Radford et al. (2019) use transfer learning to yield powerful language
models that considerably outperform the prior art. They have also shown great
success when fine-tuned to particular text generation tasks See et al. (2019);
Zhang et al. (2019); Keskar et al. (2019). Given their success, it would be
desirable to apply pre-trained transformer models to a graph-to-text task like
AMR-to-text, but the need for graph encoding in principle precludes that
option. Feeding the network some sequential representation of the graph, such
as a topological sorting, loses some of the graph's representational power.
Complex graph annotations, such as AMR, also contain many special symbols and
special constructs that depart from natural language and may not be
interpretable by a pre-trained language model.
In this paper we explore the possibility of directly fine-tuning a pre-trained
transformer language model on a sequential representation of AMR graphs,
despite the expected difficulties listed above. For this we re-purpose a GPT-2
language model Radford et al. (2019) to yield an AMR-to-text system. We show
that it is surprisingly easy to fine-tune GPT-2 to learn an AMR-graph-to-text
mapping that outperforms the previous state-of-the-art on automatic evaluation
metrics. Since a single AMR graph corresponds to multiple sentences with the
same meaning, we also provide human evaluation and semantic similarity metric
results Zhang et al. (2020), which are less dependent on the reference text.
Human evaluation and semantic similarity results highlight the positive impact
of a strong language model strategy. Finally, we also introduce a simple
re-scoring technique based on cycle consistency that further improves
performance.
## 2 Fine-tuning GPT-2 for conditional language generation
In order to fine-tune a generative model (GPT-2; Radford et al. (2019)) for
conditional text generation, prior works fine-tune the language model to
predict target text starting from the additional source text as context. In
our experiments, we found it beneficial to fine-tune on the joint distribution
of AMR and text instead, i.e., to also reconstruct the source. Given a
tokenized sentence $w_{1}\cdots w_{N}$ and the sequential AMR representation
$a_{1}\cdots a_{M}$, we maximized the joint probability

$p_{\mbox{GPT-2}}(\mathbf{w},\mathbf{a})=\prod_{j=1}^{N}p_{\mbox{GPT-2}}(w_{j}\mid w_{1:j-1},a_{1:M})\cdot\prod_{i=1}^{M}p_{\mbox{GPT-2}}(a_{i}\mid a_{1:i-1})\,.$
A special separator token is added to mark the end of the sequential AMR
representation. Special AMR symbols that should not be interpreted literally
are assigned tokens from the GPT-2 unused token list. In addition to this, we
also observed that freezing the input embeddings when fine-tuning had a
positive impact on performance.
At test time, we provide the AMR as context as in conventional conditional
text generation:
$\hat{w}_{j}=\arg\max_{w_{j}}\left\{p_{\mbox{GPT-2}}(w_{j}\mid w_{1:j-1},a_{1:M})\right\}\,.$
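To make this objective concrete, the following minimal sketch fine-tunes on a single example using the Hugging Face GPT-2 implementation adopted in §4; the separator string, the learning rate and the single-step loop are illustrative assumptions rather than the exact training setup:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze the input embeddings as described above; note that GPT-2 ties input
# and output embeddings, so the output projection is frozen along with them.
model.transformer.wte.weight.requires_grad = False

SEP = " <|endoftext|> "  # stand-in for the special separator token

def joint_nll(amr_linearized: str, sentence: str) -> torch.Tensor:
    # Language-model the concatenated "AMR <sep> text" sequence, i.e. maximize
    # log p(a_1..a_M) + log p(w_1..w_N | a_1..a_M) jointly.
    ids = tokenizer.encode(amr_linearized + SEP + sentence, return_tensors="pt")
    return model(ids, labels=ids).loss  # shifted cross-entropy over all tokens

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)

loss = joint_nll("(r / recommend-01 :ARG1 (a / advocate-01))",
                 "it is recommended to advocate.")
loss.backward()
optimizer.step()
```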
## 3 Re-scoring via Cycle Consistency
The general idea of cycle consistency is to assess the quality of a system’s
output based on how well an external ‘reverse’ system can reconstruct the
input from it. In previous works, cycle-consistency based losses have been
used as part of the training objective in machine translation He et al. (2016)
and speech recognition Hori et al. (2019). It has also been used for filtering
synthetic training data for question answering Alberti et al. (2019). Here we
propose the use of a cycle consistency measure to re-score the system outputs.
In particular, we take the top $k$ sentences generated by our system from each
gold AMR graph and parse them using an off-the-shelf parser to obtain a second
AMR graph. We then re-score each sentence using the standard AMR parsing
metric Smatch Cai and Knight (2013) by comparing the gold and parsed AMRs.
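A minimal sketch of this re-scoring loop is given below; `parse_to_amr` (the off-the-shelf text-to-AMR parser) and `smatch_f1` (the Smatch metric of Cai and Knight, 2013) are placeholders for external components not implemented here:

```python
from typing import Callable, List

def rescore(gold_amr: str,
            candidates: List[str],
            parse_to_amr: Callable[[str], str],
            smatch_f1: Callable[[str, str], float]) -> str:
    """Return the candidate whose re-parsed AMR best matches the gold AMR."""
    best_sentence, best_score = candidates[0], -1.0
    for sentence in candidates:          # e.g. the top-k beam search outputs
        parsed = parse_to_amr(sentence)  # the 'reverse' system: text -> AMR
        score = smatch_f1(gold_amr, parsed)
        if score > best_score:
            best_sentence, best_score = sentence, score
    return best_sentence
```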
## 4 Experimental setup
Following previous works on AMR-to-text, we use the standard LDC2017T10 AMR
corpus for evaluation of the proposed model. This corpus contains 36,521
training instances of AMR graphs in PENMAN notation and the corresponding
texts. It also includes 1,368 development and 1,371 test instances. We tokenize
each input text using the JAMR toolkit Flanigan et al. (2014). The
concatenation of an AMR graph and the corresponding text is split into words,
special symbols and sub-word units using the GPT-2 tokenizer. We add all arc
labels seen in the training set and the root node :root to the vocabulary of
the GPT-2 model, but we freeze the embedding layer for training. We use the
Hugging Face implementation of Wolf et al. (2019) for GPT-2 small (GPT-2S),
medium (GPT-2M) and large (GPT-2L). Fine-tuning converges after $6$ epochs,
which takes just a few hours on a V100 GPU (code for this paper is available at
https://github.com/IBM/GPT-too-AMR2text). For cycle-consistency re-scoring we
use an implementation of Naseem et al. (2019) in PyTorch. For re-scoring
experiments, we use a beam size of 15.
#### AMR input representation.
We test three variants of AMR representation. First, a depth-first search
(DFS) through the graph following Konstas et al. (2017), where the input
sequence is the path followed in the graph. Second, to see whether GPT-2 is in
fact learning from the graph structure, we remove all the edges from the DFS,
keeping only the concept nodes. This has the effect of removing the relation
information between concepts, such as subject/object relations. As a third
option, we use the PENMAN representation without any modification. The three
input representations are illustrated below:
Nodes | recommend advocate-01 it vigorous
---|---
DFS | recommend :ARG1 advocate-01 :ARG1 it :manner vigorous
Penman | (r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous)))
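The sketch below shows one naive way to derive the DFS and nodes-only variants from a PENMAN string; it uses a simple regex tokenizer rather than a proper PENMAN reader, so it is illustrative only:

```python
import re

def linearize(penman: str, keep_edges: bool = True) -> str:
    """Naively flatten a PENMAN string into a token sequence."""
    tokens = re.sub(r"[()]", " ", penman).split()
    out = []
    for i, tok in enumerate(tokens):
        if tok == "/":
            continue  # the slash separating a variable from its concept
        if i + 1 < len(tokens) and tokens[i + 1] == "/":
            continue  # variable names, e.g. "r" in "(r / recommend-01"
        if tok.startswith(":") and not keep_edges:
            continue  # drop relation labels for the nodes-only variant
        out.append(tok)
    return " ".join(out)

g = "(r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous)))"
print(linearize(g))                    # DFS with edges
print(linearize(g, keep_edges=False))  # concept nodes only
```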
#### Decoding.
For generation, we experiment with greedy decoding, beam search, and nucleus
sampling Holtzman et al. (2019). For beam search, we explore beam sizes of
$5$, $10$ and $15$. As the system, in some cases, produces repetitive output
at the end of the text, we additionally perform a post-processing step to
remove these occurrences.
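A rough sketch of these decoding strategies with the Hugging Face generate API is shown below, reusing `model`, `tokenizer` and `SEP` from the fine-tuning sketch of §2; the prompt and maximum length are illustrative:

```python
prompt = "(r / recommend-01 :ARG1 (a / advocate-01))" + SEP
input_ids = tokenizer.encode(prompt, return_tensors="pt")
pad = tokenizer.eos_token_id  # GPT-2 has no pad token; reuse end-of-text

greedy = model.generate(input_ids, max_length=64, pad_token_id=pad)
beams = model.generate(input_ids, max_length=64, num_beams=15,
                       num_return_sequences=15,  # top-k list for re-scoring
                       pad_token_id=pad)
nucleus = model.generate(input_ids, max_length=64, do_sample=True, top_p=0.9,
                         pad_token_id=pad)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
```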
#### Metrics.
We considered the three automatic evaluation metrics commonly used in previous
works. We compute BLEU Papineni et al. (2002) using SacreBLEU Ma et al.
(2019). We compute chrF++ Popović (2017) using both SacreBLEU and the scripts
used by the authors of the baseline systems. We compute METEOR Banerjee and
Lavie (2005) with the default values for English of the CMU implementation
(https://www.cs.cmu.edu/~alavie/METEOR).
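A minimal sketch of these computations with SacreBLEU follows; the hypothesis and reference strings are illustrative, and the `word_order=2` argument (which yields chrF++) assumes a recent SacreBLEU version:

```python
import sacrebleu

hyps = ["the doctor gave her the medication ."]
refs = [["the doctors gave her medication and it 's made her much better ."]]

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs, word_order=2)  # chrF++ adds word bigrams
print(f"BLEU {bleu.score:.2f}  chrF++ {chrf.score:.2f}")
```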
In addition to the standard automatic metrics, we also carry out human
evaluation experiments and use the semantic similarity metric BERTScore Zhang
et al. (2020). Both metrics arguably have less dependency on the surface
symbols of the reference text used for evaluation. This is particularly
relevant for the AMR-to-text task, since one single AMR graph corresponds to
multiple sentences with the same semantic meaning. Conventional metrics for
AMR-to-text are strongly influenced by surface symbols and thus do not capture
well the ability of the system to produce diverse sentences with the same
underlying semantics.
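As a minimal sketch, such a semantic similarity score can be computed with the bert-score package of Zhang et al. (2020); the sentences below are illustrative placeholders and the default English model is assumed:

```python
from bert_score import score

cands = ["the doctor gave her the medication and made her feel much better ."]
refs = ["the doctors gave her medication and it 's made her much better ."]

# Precision, recall and F1 tensors, one entry per candidate/reference pair
P, R, F1 = score(cands, refs, lang="en")
print(f"SemSim F1: {F1.mean().item():.4f}")
```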
Human evaluations are carried out by three professional annotators on $51$
randomly selected sentences from the $1371$ test sentences, on a 6 point
scale, ranging from 0 to 5.
* •
0=Exceptionally poor (No useful information is conveyed at all.)
* •
1=Poor (Fundamental errors in grammar and vocabulary make it difficult to
understand the meaning.)
* •
2=Not good enough (Errors in grammar, vocabulary and style make it difficult
to understand the meaning.)
* •
3=Good enough (There are errors in the text, but I am reasonably confident
that I understand the meaning.)
* •
4=Very good (There may be minor errors in the text, but I am very confident
that I understand the meaning.)
* •
5=Excellent (The information is presented clearly and with appropriate
grammar, vocabulary and style.)
For each system, scores from all annotators are averaged to compute a single
score. Inter-annotator agreement was $0.7$ when measured by the Pearson
correlation coefficient.
Our system produces de-tokenized cased output after BPE decoding, whereas
previous systems produce traditional tokenized lower-cased output. Therefore,
we lowercase and tokenize our system outputs to have fair comparisons with
previous systems.
Model | Input | BLEU | chrF++
---|---|---|---
GPT-2S Rec. | Only nodes AMR | 9.45 | 41.59
GPT-2S Rec. | Lin. AMR w/o edges. | 11.35 | 43.25
GPT-2S Rec. | Lin. AMR w/edges. | 20.14 | 53.12
GPT-2S Rec. | Penman AMR | 22.37 | 53.92
GPT-2M Rec. | Lin. AMR w/edges. | 22.86 | 55.04
GPT-2M Rec. | Penman AMR | 27.99 | 61.26
Table 1: Results on the LDC2017T10 development set using GPT-2 S(mall) and M(edium) with Rec(onstruction) loss (see §2) for different AMR representations (see §4).

Approach | Decoding | BLEU | chrF++
---|---|---|---
GPT-2M Conditional | Greedy | 25.73 | 57.2
GPT-2M Rec. | Greedy | 30.41 | 61.36
GPT-2M Rec. | BEAM | 31.8 | 62.56
GPT-2M Rec. | BEAM 10 | 32.32 | 62.79
GPT-2M Rec. | Sampling | 28.75 | 61.19
Table 2: Results on the LDC2017T10 development set. Rec(onstruction) uses the
AMR reconstruction term (see §2) whereas Conditional does not.
### 4.1 Results
System | Performance
---|---
| BLEU | Meteor | chrF++
Beck et al. (2018) | 23.30 | - | 50.40
Damonte and Cohen (2019) | 24.54 | 24.07 | -
Guo et al. (2019) | 27.60 | - | 57.30
Cao and Clark (2019) | 26.80 | - | -
Sinh and Le Minh (2019) | 18.36 | - | -
Ribeiro et al. (2019) | 27.87 | 33.21 | -
Cai and Lam (2020) | 29.80 | 35.10 | 59.4
Zhu et al. (2019) | 31.82 | 36.38 | 64.05
GPT-2M Rec. | $32.10^{\blacklozenge}$ | $35.86^{\Diamond}$ | $61.81^{\blacklozenge}$
GPT-2L Rec. | $32.47^{\blacklozenge}$ | $36.80^{\Diamond}$ | $62.88^{\blacklozenge}$
GPT-2M Rec. re-scoring | $32.98^{\blacklozenge}$ | $37.33^{\Diamond}$ | $63.09^{\blacklozenge}$
GPT-2L Rec. re-scoring | 33.02◆ | 37.68◇ | 63.89□
Table 3: Results on the LDC2017T10 test set for best performing models
compared to other results reported in the literature. ◆ indicates statistical
significance at $(P<.01)$, ◇ at $(P<0.05)$ and □, not significant. All
significance tests are with respect to (Zhu et al., 2019).
Regarding the type of AMR representation, as shown in Table 1, using the
PENMAN notation directly leads to the best results, outperforming DFS. Edge
information, indicating relations between concepts, also seems to play a
fundamental role, since its absence strongly decreases performance in both the
DFS and PENMAN representations. PENMAN notation was chosen for the rest of the
experiments.
The impact of the use of a reconstruction term explained in §2 is shown in
Table 2. The model trained using this additional term achieves $30.41$ BLEU
and $61.36$ chrF++, as opposed to $25.73$ BLEU and $57.2$ chrF++ without the
term. We therefore train with the reconstruction term in the rest of the
experiments.
Beam search improves system performance greatly over the greedy baseline with
$1.91$ BLEU points (see Table 2). With beam size $10$, we obtain $32.32$ BLEU
and $62.79$ chrF++. With nucleus sampling at a cumulative probability mass of
$0.9$, performance drops to $28.75$ BLEU and $61.19$ chrF++. Finally, cycle-
consistency re-ranking of the beam search outputs improves performance
($33.57$ BLEU, $64.86$ chrF++) over the one best output.
System | LDC2017T10
---|---
| Human Eval. | SemSim
| Avg. | P45 | F1
Guo et al. (2019) | $2.48$ | 15.69% | 92.68
Ribeiro et al. (2019) | $2.42$ | 16.37% | 92.63
Zhu et al. (2019) | $2.61$ | 20.26% | 93.31
GPT-2M Rec. | $3.03$ | 37.91% | 94.55
GPT-2L Rec. | 3.04 | 41.83% | 94.63
Table 4: Human evaluation and semantic similarity (SemSim) results on the LDC2017T10 test set. Human evaluations (Human Eval.) show the average (Avg.) of scores (0 to 5) and the ratio of sentences rated between 4 and 5 (P45). All results for human evaluation are on $51$ randomly selected sentences and statistically significant at $(P<0.05)$. SemSim results are significant at $(P<0.01)$. All significance tests refer to a comparison with Zhu et al. (2019).

 | System | Generated text
---|---|---
(1) | REF: | the doctors gave her medication and it ’s made her much better .
| G2S: | the doctor gives her medications and they make her much better .
| Transf: | doctors give her medications and make her much better .
| Our: | the doctor gave her the medication and made her feel much better.
| Our R.: | the doctor gave her the medication and made her ” much better ” .
(2) | REF: | at the state scientific center of applied microbiology there is every kind of deadly bacteria that was studied for use in the secret biological weapons program of the soviet union .
| G2S: | there are every kind of killing <unk> in the state scientific center of applied microbiology to use themselves for soviet union ’s secret biological weapons programs .
| Transf: | there is every kind of bacterium , which is studied in using bacterium for the soviet union secret biological weapons program .
| Our: | every kind of bacterium that was studied was found at the state scientific center of applied microbiology and was used in soviet secret weapons programs for biological weapons of biology .
| Our R.: | every kind of bacterium that has been studied and used in soviet secret programs for biological weapons has been in the state scientific center of applied microbiology .
(3) | REF: | among the nations that have not signed the treaty only india and israel would qualify for admission to the nsg under the israeli proposal .
| G2S: | only one of the nations who do not sign the treaty are qualified for their proposal to admit the nsg .
| Transf: | india and israel are only qualified for the nations that do not sign the treaty , but they admitted to the nsg .
| Our: | india and israel are the only countries eligible to admit to the nsg by proposing a treaty .
| Our R.: | only india and israel are eligible to admit to the nsg by proposing a treaty .
Table 5: Output examples from four systems of the LDC2017T10 dataset. REF
stands for reference, G2S for (Guo et al., 2019) and Transf. for (Zhu et al.,
2019). Our is the top beam output for GPT-2L and Our R. is with re-scoring.
Table 3 compares the best GPT-2M and GPT-2L results, fine-tuned using the
reconstruction term and PENMAN notation. For all scores we test statistical
significance with a standard two-tailed Student's t-test. Our model achieves a
large improvement of $1.2$ BLEU and $1.3$ METEOR scores over the previous
state-of-the-art model using GPT-2L and re-scoring. For chrF++, we get
different scores from SacreBLEU and the scripts provided by the authors of our
baseline systems, achieving comparable results with the former ($63.89$), and
improving over the best score with the latter ($65.01$) $(P<.01)$.
Table 4 shows human evaluation results and semantic similarity scores of
GPT-2L and GPT-2M compared to Zhu et al. (2019); Ribeiro et al. (2019); Guo et
al. (2019). Our approach produces a large proportion of high-quality
sentences, with $41.8\%$ rated 4 or 5, a significant gain over the previous
best system ($20.26\%$). Regarding semantic similarity, prior-art methods show
relatively close scores, within a $0.9$-point range, while GPT-2L Rec.
improves $1.6$ points over the best of these models. It should be noted that
the differences with Zhu et al. (2019) for GPT-2L Rec. are statistically
significant with $P<.05$, while the differences for GPT-2M Rec. are not
significant due to the small sample size.
In Table 5 we show three nontrivial examples, where we compare our system
outputs with those of previous work. In the first example, the reference
sentence contains a grammatical error. Our system not only generates the
correct output, but also corrects the error in the reference. The proposed
system can generate fluent long sentences, as shown in example 2. The third
example shows a sentence where all systems, including ours, fail to generate a
correct text.
### 4.2 Discussion
Due to the large amounts of data they are trained on, pre-trained transformer
language models can be expected to generate fluent and diverse text See et al.
(2019). It should however be highlighted that fine-tuned GPT-2 learns to
produce not only fluent but also adequate text, despite using a sequential
representation of an AMR graph as input. As shown in the experimental setup,
the encoding of relations also plays a fundamental role in AMR-to-text
performance, indicating that GPT-2 attains a fine-grained understanding of the
underlying semantics to reach state-of-the-art performance.
While a sequence of PENMAN notation tokens is far from an optimal encoding of
a graph, it is noteworthy how far performance-wise current strong language
models can go. Furthermore, it is likely that standard metrics (BLEU, METEOR,
chrF++) that rely on a reference text do not properly reflect AMR-to-text
quality. An AMR graph corresponds to multiple sentences with the same
semantics and these measures are likely biased towards the single available
reference. In metrics that are less influenced by the reference text such as
human evaluation and semantic similarity, the proposed system shows a larger
improvement over the previous systems with close to $50\%$ of the generated
sentences considered excellent or good.
Finally, it is worth considering that leveraging pre-trained transformers
greatly expands the vocabulary available to AMR-to-text systems. A single AMR
graph can correspond to multiple sentences with markedly different surface
realizations, but manual annotation of AMR is a time-consuming task.
Approaches like the one proposed may be a simple solution for generating
diverse text data for AMR parser training or other applications where
diversity plays a role.
## 5 Conclusions
In this work, we present a language model-based approach for the AMR-to-text
generation task. We show that a strong pre-trained transformer language model
(GPT-2) can be fine-tuned to generate text directly from the PENMAN notation
of an AMR graph. Comparison with state-of-the-art models in BLUE, chrF++,
METEOR as well as SemSim and human evaluation metrics show that while simple,
this approach can outperform existing methods including methods training
transformers from scratch. We also show that cycle consistency-based re-
scoring using a conventional AMR parser and the Smatch metric can notably
improve the results. Future work will focus on incorporating better encoding
of the AMR graph into the current system and exploring data augmentation
techniques leveraging the proposed approach.
## Acknowledgments
We thank the reviewers for their valuable suggestions. We would also like to
thank Chunchuan Lyu for his valuable feedback and help.
# Copilot for Xcode: Exploring AI-Assisted Programming by Prompting Cloud-
based Large Language Models
Chee Wei Tan1, Shangxin Guo2, Man Fai Wong2 and Ching Nam Hang2
1Nanyang Technological University
2City University of Hong Kong
###### Abstract
This paper presents an AI-assisted programming tool called Copilot for Xcode
for program composition and design to support human software developers. By
seamlessly integrating cloud-based Large Language Models (LLM) with Apple’s
local development environment, Xcode, this tool enhances productivity and
unleashes creativity for software development in the Apple software ecosystem
(e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP)
techniques, Copilot for Xcode effectively processes source code tokens and
patterns within code repositories, enabling features such as code generation,
autocompletion, documentation, and error detection. Software developers can
also query and make “small” decisions for program composition, some of which
can be made simultaneously, and this is facilitated through prompt engineering
in a chat interface of Copilot for Xcode. Finally, we present simple case
studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt
popular LLM services like OpenAI ChatGPT for program composition and design.
Figure 1: An overview of the AI-assisted programming application, Copilot for Xcode, modeled as an intermediary software entity to connect user requests
(e.g., prompt tokens) with cloud-based large language models.
## 1 Introduction
The field of natural language processing (NLP) has witnessed remarkable
achievements through the use of large language models (LLMs). These models
exhibit remarkable skills in understanding and generating natural languages.
Additionally, they employ feedback mechanisms, such as rewards or penalties,
to improve their comprehension and fine-tune their future performance
Christiano et al. (2017); Ouyang et al. (2022). The application of LLMs to AI-
assisted programming has recently attracted considerable attention Rajamani
(2022); Wong et al. (2023), as it offers the possibility to embed advanced
conversational agents in software development Li et al. (2022b); Chen et al.
(2021). In fact, the emergence of LLM-driven tools like ChatGPT (Chat
Generative Pre-Trained Transformer), GitHub Copilot, and DeepMind’s AlphaCode
resonates with the visionary ideas presented in Edsger W. Dijkstra’s seminal
paper in Dijkstra (transcribed 2007), illustrating the transformative
potential of computers in facilitating a seamless integration of code and
creativity. By surpassing the boundaries of debugging, AI-assisted programming
tools can embrace the harmonious combination of program composition and
elegant design Dijkstra (transcribed 2007).
Since the work in Dijkstra (transcribed 2007), one of the earliest AI-assisted
programming tools was the MIT programmer’s apprentice, which aimed to simulate a
knowledgeable junior programmer and utilized natural language processing to
acquire understanding of programming patterns, clichés, and interactions
Waters (1982); Rich and Waters (1988). The “MIT programmer’s apprentice”
played a pioneering role in introducing revolutionary concepts such as code
generation (e.g., see Handsaker (1982)) and an early form of “prompt
engineering” (e.g., see Rich et al. (1978)). These advancements were driven by
the recognition of computer programming as a systematic process of abstraction
and simplification Dijkstra (transcribed 2007); Rich and Waters (1982).
AI-assisted programming improves software productivity by automating tasks, detecting errors, enhancing code quality, promoting usability, improving reliability, and accelerating the overall software development cycle Wong et
al. (2023). Rather than replacing human programmers, these tools empower them
to unleash their creative potential. By automating repetitive and mundane
tasks, AI-assisted programming frees up valuable time and mental energy for
human programmers to focus on innovative problem-solving and designing elegant
solutions with the help of predictive analysis. Furthermore, by incorporating
natural language processing capabilities (i.e., via prompt engineering), these
tools enable human programmers to interact with software systems in a more
intuitive and human-like manner, thus streamlining the software development
process Dijkstra (1972).
Cloud-based tools that leverage LLMs such as Codeium Codeium (2023), GitHub
Copilot Friedman (2021), OpenAI ChatGPT OpenAI (2023a), and Amazon
CodeWhisperer Amazon (2022), enable users to access their cloud-based LLM
services and online resources through dedicated application programming interfaces (APIs) on an on-demand basis. The pricing models for these tools
vary depending on the complexity of the tool and the target audience. Some
pricing models include enterprise pricing, subscription-based pricing, usage-
based pricing, freemium (i.e., the tool is available for free, but additional
premium features require payment), pay-per-use pricing or entirely free of
charge. In fact, these pricing models and LLM-based services can be
incorporated into existing systems like a local integrated development
environment (IDE) or implemented via a Software-as-a-Service (SaaS) web
interface, acting as a virtual service entity to meet objectives and save costs for the human programmer Zheng et al. (2015, 2016). It is expected that the expanding reach and high-demand usage of these LLM-based tools reflect the
growing need for advanced NLP capabilities in software development. This trend
aligns with Dijkstra’s visionary ideas as discussed in Dijkstra (1972,
transcribed 2007).
This paper presents Copilot for Xcode, an AI-assisted programming tool that
was open-sourced on December 7, 2022, one week after OpenAI launched its
ChatGPT on November 30, 2022.111Apple’s recently-issued patent in Siracusa et
al. (2023) dated June 27, 2023 suggests that they are actively exploring the
integration of machine learning models into their software development system,
specifically within Xcode, rather than relying solely on existing solutions.
Acting as an intermediary entity, as shown in Figure 1, it seamlessly
integrates cloud-based large language model services with local IDEs like
Xcode. This integration benefits software developers in the Apple ecosystem by
streamlining AI-assisted programming service delivery and enhancing the
accessibility of a myriad of cloud-based LLM applications. Copilot for Xcode
enables real-time prompt engineering and efficient interaction between the
human programmer and the large language models, offering the potential to
integrate serverless computing capabilities with natural language processing
in the cloud. The source code of Copilot for Xcode can be publicly accessed at
https://github.com/intitni/CopilotForXcode.
Figure 2: The sequence diagram illustrates the functionality of Copilot for
Xcode, enabling real-time suggestions through integration with GitHub Copilot.
When a user initiates a code update, Copilot for Xcode receives a notification
and subsequently sends a request to the GitHub Copilot API. Upon receiving the
suggestions from GitHub Copilot, the user has the option to adopt the
recommendations and directly apply the changes within Xcode.
## 2 Related Works
### 2.1 Language Models for Big Code Analysis
LLMs have surfaced as a promising approach to tackle challenges in computer
programming, leveraging the software naturalness hypothesis Hindle et al.
(2012). This hypothesis posits that programming languages can be understood
and manipulated in a similar fashion to how natural language processing
techniques handle human languages. Since the introduction of the transformer
architecture in 2017 Vaswani et al. (2017), LLMs trained on large-scale
datasets of programs have shown significant benefits in code-related tasks by
effectively learning programming language patterns and structures, which are
collectively part of Big Code analysis Vechev et al. (2016). Recent LLMs such
as T5 Raffel et al. (2020), BERT Devlin et al. (2018), GPT-4 OpenAI (2023b)
and Palm 2 Anil et al. (2023) have demonstrated impressive capabilities in
understanding and generating human-like text, opening up new possibilities for
enhancing software engineers’ development experiences. These models undergo a
two-step process involving pre-training and fine-tuning. Following these
steps, prompt engineering can be applied to further optimize the model’s
performance. As an integral part of AI-assisted programming, AI-based
predictive analysis Ji et al. (2020) can anticipate potential issues in a
software development life cycle. For example, it can proactively identify and
flag critical incidents Surameery and Shakor (2023) before they manifest
Talamadupula (2021).
### 2.2 AI-assisted Programming
AI-assisted programming is the incorporation of machine learning techniques
and tools into the software development process Mozannar et al. (2022) to
improve computer programming tasks. This concept shares similarities with pair
programming Bird et al. (2022); Imai (2022), whereby two human programmers
collaborate to develop software by alternating between writing code (driver)
and reviewing (observer) in a continuous switch. AI-assisted programming
essentially replaces one of the two human programmers with an AI assistant,
akin to the aforementioned “MIT programmer’s apprentice” Waters (1982); Rich
and Waters (1988). The AI assistant automates tasks that can be broadly
classified into two categories: generation and understanding. Generation tasks
encompass activities such as code generation Waldinger and Lee (1969); Manna
and Waldinger (1971), code completion Robbes and Lanza (2008); Bruch et al.
(2009), code translation Acharya et al. (2007); Allamanis et al. (2014), code
refinement Saha et al. (2017), and code summarization Sridhara et al. (2010,
2011). On the other hand, understanding tasks encompass activities like defect
detection Charniak (1996) and clone detection Kontogiannis et al. (1996).
Improving the quality of large language models for these tasks focuses on
enhancing pre-training schemes Li et al. (2022b), expanding training corpora
Husain et al. (2019), and employing improved evaluation metrics Chen et al.
(2021). The GitHub Copilot Friedman (2021) is an example of an AI-powered
programming tool that utilizes OpenAI Codex, which is based on GPT-3 LLM that
has been trained on a vast amount of source code from the GitHub repository,
totaling over 159GB OpenAI (2023a). For further details on the latest
advancements in AI-assisted programming, please see Wong et al. (2023).
Figure 3: The user interface of Copilot for Xcode demonstrates the code
suggestion capability, specifically showcasing the integration of GitHub
Copilot for real-time suggestions related to the merge sort algorithm. In the
accompanying Figure, the right-hand side displays the open source code editor
within Xcode, focused on an interactive Swift playground. On the left-hand
side, the cloud-based services deliver code suggestion responses.
## 3 Copilot for Xcode
### 3.1 Xcode and its Offline Functionalities
Xcode Apple (2003) is an IDE created by Apple for developing software
applications for the Apple ecosystem such as macOS and iOS. It provides a
comprehensive set of tools, including editors, compilers, debugging tools, and
interface builders, to help software developers create and maintain their
applications. Xcode includes a source code editor with features like syntax
highlighting, code completion, and refactoring capabilities. It supports
multiple programming languages, including Swift, Objective-C, C, and C++,
allowing developers to write code for a variety of Apple platforms. In
addition to the code editor, Xcode offers a wide range of tools to assist in
app development, such as an Interface Builder for designing user interfaces
visually, a graphical debugger for finding and fixing issues in code, and
various performance analysis instruments. It also integrates with other
developer tools, such as the iOS Simulator, which allows software developers
to test their apps on virtual devices, and Instruments, a powerful profiling
tool for measuring and optimizing app performance. Despite the extensive
functionalities of Xcode, it has some limitations. For example, certain
features depend on an offline rule-based system and may necessitate batch
updates from Apple. Consequently, these services may not remain up-to-date
consistently.
### 3.2 The Copilot for Xcode Framework
The main limitation in Xcode is the sandboxing mechanism, which restricts
plugin access to specific resources and prevents the launching of other
programs. We address ways to overcome this limitation in order to enable the
functionality of GitHub Copilot in Xcode. In particular, GitHub Copilot
requires an additional program provided by GitHub to be executed alongside the
plugin. Let us take the real-time suggestion feature as an example: the
application first needs to bypass the sandbox in order to run the GitHub
Copilot language server. This is accomplished by establishing communication
between the Xcode source editor extension and a non-sandboxed XPC Service,
which acts as a cross-process call service that facilitates the communication
between the extension and the GitHub Copilot server. The server then presents
suggestions in a user interface (UI) that is not managed by Xcode. To assemble
a request for the language server, the application must gather sufficient
information, but Xcode only provides the source code and file type. To obtain
additional information without relying on Xcode, the application leverages the
Accessibility API from the software development kit. This particular API
exposes information about each object within the application. Furthermore, to
enable in-place code editing, the application executes extension commands
programmatically. This is accomplished by utilizing the Accessibility API to
interact with the menu bar items. These implementations thus allow Apple
software developers to leverage GitHub Copilot Friedman (2021) and Codeium
Codeium (2023) for code suggestions, while utilizing ChatGPT OpenAI (2023a)
for code explanations, generation and natural language-based code
modifications. The technical interactions involved in integrating Copilot with Xcode are depicted in Figure 2.
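As a rough illustration of this plumbing (a sketch, not the tool’s actual source), the extension side of such an XPC bridge could look as follows in Swift; the service name and the protocol are hypothetical placeholders:

```swift
import Foundation

// Hypothetical interface for the non-sandboxed helper service; the name
// and method are illustrative, not the actual Copilot for Xcode API.
@objc protocol SuggestionServiceProtocol {
    func fetchSuggestions(for source: String, reply: @escaping ([String]) -> Void)
}

// Connect from the (sandboxed) extension side to the helper, assuming the
// helper is registered under this Mach service name.
let connection = NSXPCConnection(machServiceName: "com.example.CopilotHelper", options: [])
connection.remoteObjectInterface = NSXPCInterface(with: SuggestionServiceProtocol.self)
connection.resume()

if let proxy = connection.remoteObjectProxy as? SuggestionServiceProtocol {
    proxy.fetchSuggestions(for: "func mergeSort(_ a: [Int]) -> [Int] { ... }") { suggestions in
        print(suggestions)
    }
}
```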
In addition, it facilitates the integration of an external chat panel that can
access and read the user’s code. This chat panel serves as a connection point
to leverage LLMs for functionalities such as code explanation and mutation
using natural language. The chat panel can also be extended with plugins to
offer additional features, including support for answering questions with
real-time information from search engines. Some of the latest cloud-based LLMs provide access through their official APIs for direct integration. In
particular, Copilot for Xcode leverages the LangChain Chase (2022) framework,
which facilitates the creation of customized LLMs tailored to specific use
cases. This framework significantly enhances the prompt engineering process Wu
et al. (2022); Poldrack et al. (2023), allowing for the design of more
effective prompts that can be utilized with the LLMs. This integration and
framework combination optimize the functionality and usability of the LLMs,
providing users with enhanced capabilities and improved prompt customization.
Figure 4: The user interface of Copilot for Xcode shows its Chat and prompt-
to-code features, which enable code generation for the merge sort algorithm.
These features are connected to ChatGPT, allowing for online prompt
engineering and code generation within Xcode. The Figure illustrates the
source code editor in Xcode on the right-hand side, while the chat
conversational panel is displayed on the left-hand side.
### 3.3 Code Suggestion
The code suggestion function offers a viable option for code completion and
generation under diverse usage scenarios. Code completion, commonly referred
to as auto-completion Wong et al. (2023), is an invaluable feature in software
development that assists in completing unfinished code segments. On the other
hand, code generation involves the automatic generation of source code from
natural language input Li et al. (2022a), guided by user-defined constraints.
This capability strengthens the efficiency of the development process by
automating the creation of code based on linguistic specifications provided by
the user.
In Copilot for Xcode, we offer real-time code suggestions that dynamically
update whenever users modify their code. This capability, depicted in Figure
3, is powered by the integration of GitHub Copilot and Codeium, ensuring that
the suggestions are specifically tailored to the files currently open in the
workspace, thus enhancing productivity and accuracy while leveraging the
capabilities of the code suggestion function. The feature offers two distinct presentation modes for displaying suggestions. In the “Nearby Text Cursor” mode, suggestions are presented based on
the current position of the text cursor. On the other hand, the “Floating
Widget” mode displays suggestions in close proximity to the circular widget.
When the user updates their code, the integrated application retrieves and
integrates relevant suggestions for display within Xcode.
The software development experience is further enhanced by a range of pre-
defined commands offered by Copilot for Xcode. The first useful command is Get
Suggestions, which retrieves customized suggestions based on the current
cursor position in the edited file on Xcode. In cases where multiple
suggestions are available, users can conveniently navigate through them using
the Next Suggestion and Previous Suggestion commands to choose the code
suggestions based on their preferences. When incorporating suggested code, the
Accept Suggestion command comes in handy to immediately select the code
suggestion, while the Reject Suggestion command allows users to remove
unnecessary suggestions along with their associated comments. Furthermore,
there are two commands designed specifically for internal use by Copilot for Xcode. The Real-time Suggestions command, which is invoked automatically by Copilot for Xcode, presents suggestions in Xcode after a successful retrieval, while the Prefetch Suggestions command, also invoked automatically, proactively fetches real-time suggestions in the background, thus improving the overall responsiveness.
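To give a flavor of the extension side, a command such as Accept Suggestion would be built on XcodeKit’s XCSourceEditorCommand protocol. The sketch below is ours: only the XcodeKit plumbing is real API, and the suggestion lookup is a placeholder.

```swift
import Foundation
import XcodeKit

// A hypothetical editor command in the style of "Accept Suggestion"; the
// suggestion lookup itself is a placeholder, only the XcodeKit plumbing is real.
class AcceptSuggestionCommand: NSObject, XCSourceEditorCommand {
    func perform(with invocation: XCSourceEditorCommandInvocation,
                 completionHandler: @escaping (Error?) -> Void) {
        // The extension only sees the buffer contents and its content type.
        let uti = invocation.buffer.contentUTI
        let characterCount = invocation.buffer.completeBuffer.count

        // In the real tool this request is forwarded over XPC to the helper
        // service; here we simply append a stand-in comment to the buffer.
        invocation.buffer.lines.add("// suggestion for \(uti) buffer (\(characterCount) chars) would go here")
        completionHandler(nil)
    }
}
```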
### 3.4 Chat and Prompt-to-Code for Code Generation
Copilot for Xcode provides its chat and prompt-to-code features for code generation, as depicted in Figure 4. This functionality focuses on generating code from text inputs, enabling text-to-code generation within the IDE. By incorporating these advanced code generation capabilities, Copilot for Xcode enhances coding workflows, making them more efficient and intuitive. The chat
function of our application, also powered by ChatGPT, complements these code
generation features and offers additional enhancements for an interactive
coding experience. Users can leverage specific features customized to their
programming needs, such as extracting selected code in the active editor for
reference and discussion of specific code snippets. Access to the relative
path of the file being worked on facilitates easy navigation within the
codebase. The chat or prompt-to-code functions also assist in capturing error
and warning labels in the active editor, enabling swift issue resolution.
Users can also obtain information about the text cursor location, facilitating
precise discussions and context-aware conversations. These combined features
empower users to engage in productive coding discussions and streamline their
coding process, harnessing the capabilities of our AI-powered application.
The prompt-to-code function offers a range of capabilities for code
modification and creation using natural language. It is particularly
beneficial when there is a need to update a specific section of code. This
feature provides various use cases, such as enhancing code readability,
rectifying code bugs, including documentation within the code, dividing
extensive functions into smaller, manageable ones, generating code based on
specific templates using custom commands, refining grammar and spelling errors
in documentation, and facilitating the translation of localizable strings
files. With “Prompt to Code”, users can refactor existing code or write new code by harnessing the power of natural language.
## 4 Evaluation
We describe three case studies that illustrate the power of Copilot for Xcode
in tackling real-world programming challenges through AI-assisted programming.
The case studies presented here are based on real-world programming
assignments given to undergraduate students. Furthermore, the case studies
also highlight the significance of prompt engineering for code suggestion
query and making “small” decisions for program composition and design in
Copilot for Xcode.
### 4.1 Case Study: HCF of Two Numbers
The first case study considers computing the Highest Common Factor (HCF) or
the Greatest Common Divisor (GCD) of two natural numbers Dijkstra (2007). The
HCF/GCD represents the largest positive integer that divides both numbers,
leaving no remainder. Many existing approaches can be used to solve this
problem, including subtraction, brute-force, and binary algorithm. One of the
oldest algorithms to compute the HCF for two given natural numbers is the
Euclidean algorithm. We tasked students to implement the Euclidean algorithm
using the Swift programming language based on the observation that if $r$ is
the remainder when $a$ is divided by $b$, then the HCF of $a$ and $b$ is
equivalent to the HCF of $b$ and $r$. Figure 5 depicts the brute-force
approach for the HCF of two natural numbers, while Figure 6 provides a correct
depiction of the HCF calculation using the Euclidean algorithm.
Figure 5: Code suggestion of HCF of two natural numbers using a brute-force algorithm without specific instructions in the prompt (prompt: “HCF of Two Numbers”).
Figure 6: Code suggestion of HCF of two natural numbers with specific instructions on using the Euclidean algorithm (prompt: “HCF of Two Numbers by Euclidean Algorithm”).
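For reference, a recursive Swift implementation of the Euclidean algorithm along the lines requested by the prompt in Figure 6 (our sketch, not the tool’s verbatim output) is:

```swift
// Euclidean algorithm: hcf(a, b) = hcf(b, a mod b), with hcf(a, 0) = a.
func hcf(_ a: Int, _ b: Int) -> Int {
    b == 0 ? a : hcf(b, a % b)
}
```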
### 4.2 Case Study: LCM of Two Numbers
The Least Common Multiple (LCM) of two natural numbers $a,b$ refers to the
smallest positive integer that is divisible by both numbers. Notably, this LCM
is dual to the HCF/GCD Dijkstra (2007).222The product of two natural numbers
is equal to the product of their respective least common multiple and greatest
common divisor. This principle can be demonstrated using the Fundamental
Theorem of Arithmetic in number theory or through an algorithmic method
outlined in Dijkstra (2007). From our observations, the ChatGPT Codex (GPT-3)
was able to understand this concept (even able to demonstrate a plausible
proof with prompt engineering), although it faced difficulties in extending
the duality to encompass more than two natural numbers. Typically, the LCM
algorithm makes use of the HCF algorithm. However, in this assignment, a
unique requirement is to develop an LCM algorithm that does not rely on the
HCF algorithm. By default, the code suggestion in Copilot for Xcode assumes
that developers have already implemented the HCF function without any
additional instructions in the prompt. The tool thus generates a result that
utilizes the HCF implementation as a helper function for the LCM as shown in
Figure 7. However, to comply with the specific prompt of not relying on the
HCF, Figure 8 presents a correct answer for the LCM calculation without
requiring computation of the HCF.
Figure 7: Code suggestion of LCM of two natural numbers using the HCF without specific instructions (prompt: “LCM of Two Numbers”).
Figure 8: Code suggestion of LCM of two natural numbers without using the HCF algorithm as a helper function (prompt: “LCM of Two Numbers without Using the HCF”).
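One way to meet the constraint illustrated in Figure 8 is to search multiples of the larger number directly; the Swift sketch below is ours and need not match the tool’s output:

```swift
// LCM without the HCF: step through multiples of the larger number until
// one is also divisible by the smaller number.
func lcm(_ a: Int, _ b: Int) -> Int {
    let (lo, hi) = (min(a, b), max(a, b))
    var m = hi
    while m % lo != 0 { m += hi }
    return m
}
```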
### 4.3 Case Study: Navigating App on iOS
In the example below, we delve into a code generation scenario that
effectively highlights the fundamental concepts of SwiftUI in iOS app
development, specifically focusing on view navigation. This scenario is
visually represented in Figure 9. The iOS app, built with SwiftUI, comprises
two distinct views: HomeView and DetailView, as depicted in Figure 10. A view refers to a crucial component that constructs the UI and plays a pivotal role
in displaying and handling the visual elements that users observe and engage
with on the screen. To effectively manage views, software developers are
required to arrange them in a hierarchical structure and personalize each view
by configuring different properties.
Figure 9: This SwiftUI-based app consists of two screens: a home screen and a detail screen. The ContentView sets up the navigation entry point of the app and provides a navigation bar title (prompt: “Create a navigating views app with SwiftUI”).
Figure 10: The home and detail screens for a navigating app. When a button on the home screen is tapped, the app navigates to the detail screen. Additionally, a back button on the detail screen allows the user to navigate back to the home screen (prompt: “Create the HomeView and DetailsView with SwiftUI”).
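A minimal Swift sketch of this two-screen pattern (view names follow Figures 9 and 10; the bodies are illustrative, not the generated code):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        NavigationView {
            HomeView()
                .navigationTitle("Home")  // navigation bar title for the entry view
        }
    }
}

struct HomeView: View {
    var body: some View {
        // Tapping the link pushes DetailView onto the navigation stack.
        NavigationLink(destination: DetailView()) {
            Text("Go to Detail")
        }
    }
}

struct DetailView: View {
    var body: some View {
        // The navigation bar provides the back button automatically.
        Text("Detail Screen")
            .navigationTitle("Detail")
    }
}
```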
## 5 Conclusion
This paper introduced Copilot for Xcode, a tool that integrates cloud-based large language model services (GitHub Copilot and OpenAI’s GPT) with Apple’s
integrated development environment, Xcode, for AI-assisted programming. We
also discussed the efficacy of prompt engineering and possible strategies for
AI-assisted programming using simple case studies to illustrate the practical
application of this tool to program composition and design. When designing a
program, making small decisions often involves breaking down complex tasks
into smaller components manageable by the large language model. By carefully
constructing prompts, programmers can influence the generation of code and
steer the language model’s understanding towards the desired outcome.
As a software prototype, Copilot for Xcode has some limitations to consider
during practical usage. For example, to bypass the sandboxing restrictions, it
employs unconventional methods to retrieve information like file and
project/workspace paths. As such, it is important to be aware that this might
not always function seamlessly in future versions of Xcode. Also, the current
code suggestions are presented as C-style comments in comment mode, which can
inadvertently disrupt a user’s code if they are working with a format, e.g., a JSON file, where such comments are not applicable.
By combining the capabilities of large language models and integrated tools
for prompt engineering, Copilot for Xcode enhances and streamlines the
software development process within Apple’s Xcode. The integration of Copilot
for Xcode with other cloud-based services like Xcode Cloud can also improve
the overall productivity and efficiency in software development, which is
especially important to continuous integration (CI) and continuous delivery
(CD) in the software development pipeline. As AI-assisted programming tools like Copilot are incorporated into more IDEs, they bring us closer to the realization of Dijkstra’s vision in Dijkstra (transcribed 2007), fostering a
symbiotic relationship between human programmers and AI-powered tools to
achieve more efficient and reliable software development.
## References
* Acharya et al. [2007] Mithun Acharya, Tao Xie, Jian Pei, and Jun Xu. Mining api patterns as partial orders from source code: From usage scenarios to specifications. In 6th Joint Meeting of The European Software Engineering Conference and The ACM SIGSOFT Symposium on The Foundations of Software Engineering, pages 25–34, 2007.
* Allamanis et al. [2014] Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 281–293. Association for Computing Machinery, 2014.
* Amazon [2022] CodeWhisperer Amazon. AI code generator - amazon codewhisperer. https://aws.amazon.com/codewhisperer, 2022. Accessed on June 1, 2023.
* Anil et al. [2023] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
* Apple [2003] Xcode Apple. Xcode 15 - apple developer. https://developer.apple.com/xcode/, 2003. Accessed on June 1, 2023.
* Bird et al. [2022] Christian Bird, Denae Ford, Thomas Zimmermann, Nicole Forsgren, Eirini Kalliamvakou, Travis Lowdermilk, and Idan Gazit. Taking flight with copilot: Early insights and opportunities of AI-powered pair-programming tools. Queue, 20(6):35–57, 2022.
* Bruch et al. [2009] Marcel Bruch, Martin Monperrus, and Mira Mezini. Learning from examples to improve code completion systems. In 7th Joint Meeting of The European Software Engineering Conference and The ACM SIGSOFT Symposium on The Foundations of Software Engineering, pages 213–222, 2009.
* Charniak [1996] Eugene Charniak. Statistical Language Learning. MIT press, 1996.
* Chase [2022] Harrison Chase. Langchain, 2022.
* Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
* Christiano et al. [2017] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 2017.
* Codeium [2023] Exafunction Codeium. Codeium - free AI code completion & chat. https://codeium.com/, 2023. Accessed on June 1, 2023.
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* Dijkstra [1972] Edsger W Dijkstra. The humble programmer. Communications of the ACM, 15(10):859–866, 1972.
* Dijkstra [2007] Edsger Wybe Dijkstra. Defining the greatest common divisor. E. W. Dijkstra Archive (EWD 1257), 2007.
* Dijkstra [transcribed 2007] Edsger Wybe Dijkstra. A preliminary investigation into computer assisted programming. E. W. Dijkstra Archive (EWD 237), (transcribed) 2007.
* Friedman [2021] Nat Friedman. Introducing github copilot: your AI pair programmer, 2021.
* Handsaker [1982] Robert E. Handsaker. Code generation in the programmer’s apprentice. Working Paper 233, MIT AI Lab, May 1982.
* Hindle et al. [2012] Abram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In 2012 34th International Conference on Software Engineering, pages 837–847. IEEE, 2012.
* Husain et al. [2019] Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019.
* Imai [2022] Saki Imai. Is github copilot a substitute for human pair-programming? an empirical study. In Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, pages 319–321, 2022.
* Ji et al. [2020] Yangfeng Ji, Antoine Bosselut, Thomas Wolf, and Asli Celikyilmaz. The amazing world of neural language generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 37–42, 2020.
* Kontogiannis et al. [1996] Kostas A Kontogiannis, Renator DeMori, Ettore Merlo, Michael Galler, and Morris Bernstein. Pattern matching for clone and concept detection. Automated Software Engineering, 3(1-2):77–108, 1996.
* Li et al. [2022a] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092–1097, 2022.
* Li et al. [2022b] Zhiyu Li, Shuai Lu, Daya Guo, Nan Duan, Shailesh Jannu, Grant Jenks, Deep Majumder, Jared Green, Alexey Svyatkovskiy, Shengyu Fu, et al. Automating code review activities by large-scale pre-training. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2022.
* Manna and Waldinger [1971] Zohar Manna and Richard J Waldinger. Toward automatic program synthesis. Communications of the ACM, 14(3):151–165, 1971.
* Mozannar et al. [2022] Hussein Mozannar, Gagan Bansal, Adam Fourney, and Eric Horvitz. Reading between the lines: Modeling user behavior and costs in AI-assisted programming. arXiv preprint arXiv:2210.14306, 2022.
* OpenAI [2023a] OpenAI. Chatgpt: Optimizing language models for dialogue, Jan 2023.
* OpenAI [2023b] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
* Ouyang et al. [2022] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
* Poldrack et al. [2023] Russell A Poldrack, Thomas Lu, and Gašper Beguš. AI-assisted coding: Experiments with gpt-4. arXiv preprint arXiv:2304.13187, 2023.
* Raffel et al. [2020] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 2020.
* Rajamani [2022] Sriram Rajamani. AI assisted programming. In 15th Annual ACM India Compute Conference, COMPUTE ’22, page 5, New York, NY, USA, 2022. Association for Computing Machinery.
* Rich and Waters [1982] Charles Rich and Richard C. Waters. The disciplined use of simplifying assumptions. ACM SIGSOFT Software Engineering Notes, 7(5):150–154, December 1982.
* Rich and Waters [1988] Charles Rich and Richard C. Waters. The programmer’s apprentice: a research overview. Computer, 21(11):10–25, November 1988.
* Rich et al. [1978] Charles Rich, Howard E. Shrobe, Robert C. Waters, Gerald J. Sussman, and Carl E. Hewitt. Programming viewed as an engineering activity. AI Memo 459, Massachusetts Institute of Technology, January 1978.
* Robbes and Lanza [2008] Romain Robbes and Michele Lanza. How program history can improve code completion. In 23rd IEEE/ACM International Conference on Automated Software Engineering, pages 317–326, 2008.
* Saha et al. [2017] Ripon K Saha, Yingjun Lyu, Hiroaki Yoshida, and Mukul R Prasad. Elixir: Effective object-oriented program repair. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering, pages 648–659. IEEE, 2017.
* Siracusa et al. [2023] M. R. Siracusa, A. K. Katti, et al. Integrating learning models into software development systems, June 27, 2023. US Patent and Trademark Office, US Patent 11,687,830.
* Sridhara et al. [2010] Giriprasad Sridhara, Emily Hill, Divya Muppaneni, Lori Pollock, and K Vijay-Shanker. Towards automatically generating summary comments for java methods. In IEEE/ACM International Conference on Automated Software Engineering, pages 43–52, 2010.
* Sridhara et al. [2011] Giriprasad Sridhara, Lori Pollock, and K Vijay-Shanker. Generating parameter comments and integrating with method summaries. In 2011 IEEE 19th International Conference on Program Comprehension, pages 71–80. IEEE, 2011.
* Surameery and Shakor [2023] Nigar M Shafiq Surameery and Mohammed Y Shakor. Use chatgpt to solve programming bugs. International Journal of Information Technology & Computer Engineering (IJITC) ISSN: 2455-5290, 3(01):17–22, 2023.
* Talamadupula [2021] Kartik Talamadupula. Applied AI matters: Ai4code: Applying artificial intelligence to source code. AI Matters, 7(1):18–20, 2021.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
* Vechev et al. [2016] Martin Vechev, Eran Yahav, et al. Programming with “big code”. Foundations and Trends® in Programming Languages, 3(4):231–284, 2016.
* Waldinger and Lee [1969] Richard J Waldinger and Richard CT Lee. Prow: A step toward automatic program writing. In 1st International Joint Conference on Artificial Intelligence, pages 241–252, 1969.
* Waters [1982] Richard C. Waters. The programmer’s apprentice: Knowledge based program editing. IEEE Transactions on Software Engineering, SE-8(1):1–12, January 1982.
* Wong et al. [2023] Man-Fai Wong, Shangxin Guo, Ching-Nam Hang, Siu-Wai Ho, and Chee-Wei Tan. Natural language generation and understanding of big code for AI-assisted programming: A review. Entropy, 25(6):888, 2023.
* Wu et al. [2022] Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and Carrie J Cai. Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pages 1–10, 2022.
* Zheng et al. [2015] Liang Zheng, Carlee Joe-Wong, Chee Wei Tan, Mung Chiang, and Xinyu Wang. How to bid the cloud. In Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication (SIGCOMM), pages 71–84, 2015.
* Zheng et al. [2016] Liang Zheng, Carlee Joe-Wong, Christopher G Brinton, Chee Wei Tan, Sangtae Ha, and Mung Chiang. On the viability of a cloud virtual service provider. In Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science, pages 235–248, 2016.
# Self-Learning Emulators and Eigenvector Continuation
Avik Sarkar, Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
Dean Lee, Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
###### Abstract
Emulators that can bypass computationally expensive scientific calculations
with high accuracy and speed can enable new studies of fundamental science as
well as a wider range of potential applications. In this work we discuss solving a system
of constraint equations efficiently using a self-learning emulator. A self-
learning emulator is an active learning protocol that can be used with any
emulator that faithfully reproduces the exact solution at selected training
points. The key ingredient is a fast estimate of the emulator error that
becomes progressively more accurate as the emulator is improved, and the
accuracy of the error estimate can be corrected using machine learning. We
illustrate with three examples. The first uses cubic spline interpolation to
find the solution of a transcendental equation with variable coefficients. The
second example compares a spline emulator and a reduced basis method emulator
to find solutions of a parameterized differential equation. The third example
uses eigenvector continuation to find the eigenvectors and eigenvalues of a
large Hamiltonian matrix that depends on several control parameters.
#### Introduction
The frontiers of scientific discovery often reside just beyond the limits of
computability. This explains the great interest across many scientific
disciplines in using machine learning tools to build efficient emulators that
predict scientific processes beyond what is possible with direct calculations
Carleo et al. (2019); Thiagarajan et al. (2020); Kasim et al. (2020); Bedaque
et al. (2021). However, a problem arises in that generating large amounts of training data for such an emulator is not possible since the required computations are
difficult and expensive. In this work, we provide a potential solution to this
problem when the objective is to solve a system of constraint equations over
some domain of control parameters. We introduce a method called self-learning
emulation, an active learning protocol Settles (2009); Cohn et al. (1996,
1994) that relies on a fast estimate of the emulator error and a greedy local
optimization algorithm that becomes progressively more accurate as the
emulator improves. Provided that the emulator faithfully reproduces the exact
solution at the training points, the error will decrease with the number of
training points as either a power law for piecewise continuous emulators or
exponentially fast for smooth function emulators. The resulting acceleration
is typically several orders of magnitude or more, and the gain in
computational speed is achieved by using the emulator itself to estimate the
error. As we will show, self-learning emulators are highly efficient
algorithms that offer both high speed and accuracy as well as a reliable
estimate of the error. We note that the self-learning emulators we discuss
here are qualitatively different from other machine learning algorithms that
model the solutions using gradient descent optimization of some chosen loss
function. While these gradient descent optimization methods are highly
parallelizable and very fast, they usually suffer from critical slowing down
with respect to error and cannot achieve arbitrarily high accuracy in
polynomial computing time. Sometimes scientific discovery requires seeing very
small but important new phenomena that might otherwise be absent in
approximate machine learning models.
We will demonstrate several contrasting examples of self-learning emulators.
The first uses a cubic spline emulator to find the solution of a
transcendental equation with variable coefficients. The second example uses
the spline emulator and a reduced basis method emulator to find solutions of a
parameterized differential equation. The third example is our primary example
for quantum many body calculations. It uses eigenvector continuation to find
the eigenvectors and eigenvalues of a large Hamiltonian matrix that depends on
several control parameters. See Ref. Frame et al. (2018); Frame (2019) for an
introduction to eigenvector continuation and Ref. König et al. (2019); Ekström
and Hagen (2019) for applications to the quantum many body problem.
#### Constraint equations and error estimates
We consider a general set of simultaneous constraint equations $G_{i}({\bf x},{\bf c})=0$ that we solve for variables ${\bf x}=\{x_{j}\}$ as a function of control parameters ${\bf c}=\{c_{k}\}$ over some domain ${\bf D}$. Let us denote the exact solutions as ${\bf x}({\bf c})$. We assume that we have an emulator which can take the exact solutions for some set of training points $\{{\bf c}^{(i)}\}$ and construct an approximate solution ${\bf\tilde{x}}({\bf c})$ for all ${\bf c}\in{\bf D}$. Let us define the error or loss function as the norm $\lVert\Delta{\bf x}({\bf c})\rVert$ of the residual $\Delta{\bf x}({\bf c})={\bf x}({\bf c})-{\bf\tilde{x}}({\bf c})$. The objective is to train the emulator to minimize the peak value of the error function over the domain ${\bf D}$ using as few additional training points as possible.
Since the error function will vary over many orders of magnitude, it is more
convenient to work with the natural logarithm of the error function,
$\log\lVert\Delta{\bf x}({\bf c})\rVert$. The emulator will reproduce the
exact solution at the training points $\{{\bf c}^{(i)}\}$. Therefore, the
logarithm of the error function will become a rapidly varying function of
${\bf c}$ as we include more training points.
Let us consider the case where $\Delta{\bf x}({\bf c})$ is small enough that
we can accurately expand the constraint equations as
$G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})+\Delta{\bf x}({\bf
c}){\bf\cdot}{\bf\nabla_{\bf x}}G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\approx
0.$ (1)
If the number of degrees of freedom is small, we can solve the linear
inversion problem for $\Delta{\bf x}({\bf c})$ and provide a fast estimate for
the logarithm of the error. This estimate is nothing more than the
multivariate form of the Newton-Raphson method.
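Written out explicitly, with $J_{ij}({\bf c})=\partial G_{i}/\partial x_{j}$ evaluated at ${\bf\tilde{x}}({\bf c})$, Eq. (1) gives
$\displaystyle\Delta{\bf x}({\bf c})\approx-J^{-1}({\bf c})\,{\bf G}({\bf\tilde{x}}({\bf c}),{\bf c}),$
so that one linear solve per sample point yields the error estimate $\lVert\Delta{\bf x}({\bf c})\rVert$.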
#### Fast error estimates
For most cases of interest, however, there will be many degrees of freedom and
the matrix inversion required to solve for $\Delta{\bf x}({\bf c})$ will be
too slow for our self-learning emulator training process. We therefore choose
another non-negative functional $F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]$ as a surrogate for $\lVert\Delta{\bf x}({\bf c})\rVert$. The only essential requirement we impose on $F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]$ is that it is linearly proportional to $\lVert\Delta{\bf x}({\bf c})\rVert$ in the limit $\lVert\Delta{\bf x}({\bf c})\rVert\rightarrow 0$. This allows us to write the logarithm of the error as
$\displaystyle\log\lVert\Delta{\bf x}({\bf c})\rVert=\log F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]+A+B({\bf c}),$ (2)
where $A$ is a constant and the average of $B({\bf c})$ over the domain ${\bf D}$ is zero. Since $F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]$ is linearly proportional to $\lVert\Delta{\bf x}({\bf c})\rVert$ in the limit $\lVert\Delta{\bf x}({\bf c})\rVert\rightarrow 0$, the function $\log F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]$ will have the same steep hills and valleys as the function $\log\lVert\Delta{\bf x}({\bf c})\rVert$ as we include more training points. In the limit of a large number of training points, we can neglect the much smaller variation of $B({\bf c})$ over the domain ${\bf D}$. We can therefore approximate the logarithm of the error as $\log F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]+A$. We note that the unknown constant $A$ is irrelevant for comparing the logarithm of the error for different points ${\bf c}$. Nevertheless, we can also quickly estimate $A$ simply by taking several random samples of ${\bf c}$ and computing the average value of the difference between $\log\lVert\Delta{\bf x}({\bf c})\rVert$ and $\log F[\{G_{i}({\bf\tilde{x}}({\bf c}),{\bf c})\}]$. We can refine this
estimate further using machine learning to approximate the function $B({\bf
c})$. In several of our examples we show the improvement resulting from these
additional steps.
The self-learning emulator training program is a greedy algorithm where we
search to find the point ${\bf c}$ where the logarithm of the error is
greatest. We then add this point to the training set and repeat the whole
process. In this manner we have constructed a fast emulator that becomes more
and more accurate as more training points are added and provides a reliable
estimate of the emulator error. It should be emphasized that the self-learning
emulation is just an algorithm to learn the best training points for the
emulator, and it does not change the process of emulation itself. Thus it can
be used with any emulator that faithfully reproduces the exact solution at the
training points. This could be a simple method such as polynomial
interpolation or a Gaussian process, or a more involved method such as neural
networks or eigenvector continuation. We retain all the beneficial properties
of the emulator such as its computational speed advantage, parallelizability,
ease of application, etc. It can be applied to any system of constraints such
as solutions of algebraic or transcendental equations, linear and nonlinear
differential equations, and linear and nonlinear eigenvalue problems.
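To make the protocol concrete, the following self-contained Swift sketch (our illustration, not code from this work) applies the greedy loop to a toy constraint $g(x,c)=x^{3}+cx-1=0$ using a piecewise-linear emulator and a Newton-Raphson error estimate; all function names and the candidate grid are ours.

```swift
import Foundation

// Toy constraint g(x, c) = x^3 + c x - 1 = 0, which has a unique real root
// for c >= 0; exactSolve stands in for an expensive exact calculation.
func g(_ x: Double, _ c: Double) -> Double { x * x * x + c * x - 1 }
func dgdx(_ x: Double, _ c: Double) -> Double { 3 * x * x + c }

func exactSolve(_ c: Double) -> Double {
    var x = 1.0
    for _ in 0..<50 { x -= g(x, c) / dgdx(x, c) }  // Newton iteration
    return x
}

// Piecewise-linear emulator through the training points (sorted in c).
func emulate(_ c: Double, points: [(c: Double, x: Double)]) -> Double {
    let pts = points.sorted { $0.c < $1.c }
    if c <= pts.first!.c { return pts.first!.x }
    if c >= pts.last!.c { return pts.last!.x }
    let i = pts.firstIndex { $0.c >= c }!
    let (lo, hi) = (pts[i - 1], pts[i])
    return lo.x + (c - lo.c) / (hi.c - lo.c) * (hi.x - lo.x)
}

// Fast Newton-Raphson error estimate, analogous to Eq. (4), with regulator 1.
func errorEstimate(_ c: Double, points: [(c: Double, x: Double)]) -> Double {
    let xTilde = emulate(c, points: points)
    return abs(g(xTilde, c)) / (dgdx(xTilde, c) * dgdx(xTilde, c) + 1).squareRoot()
}

// Greedy self-learning loop: add the candidate with the largest estimated
// error to the training set and refit.
var training = [0.0, 2.0, 4.0].map { (c: $0, x: exactSolve($0)) }
let candidates = stride(from: 0.0, through: 4.0, by: 0.01)

for iteration in 0..<10 {
    let worst = candidates.max {
        errorEstimate($0, points: training) < errorEstimate($1, points: training)
    }!
    training.append((c: worst, x: exactSolve(worst)))
    print("iteration \(iteration): added c = \(worst)")
}
```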
#### Model 1
For the first example, Model 1, we use a natural cubic spline emulator to find
the lowest real solution of a transcendental equation. We consider the
solution to the equation
$\displaystyle
c_{5}x^{5}+c_{4}x^{4}\sin(10x)+c_{3}x^{3}+c_{2}x^{2}+c_{1}x+c_{0}=0,$ (3)
where all the coefficients $c_{i}$ are real. We fix coefficients
$c_{5}=c_{3}=c_{2}=c_{1}=c_{0}=1$, and we vary the coefficient $c_{4}$. We are
interested in the lowest real $x$ that satisfies Eq. (3). We know that a real
solution for $x$ always exists for real $c_{4}$; however, the dependence of the solution on $c_{4}$ is not trivial and has discontinuities with respect to the parameter $c_{4}$.
We start with three training points for $c_{4}$, two on the boundary and one
in the interior, and use natural cubic splines to define the cubic spline
approximation ${\tilde{x}}(c_{4})$ for all values of $c_{4}$. The logarithm of
the error function is then $\log|\Delta x(c_{4})|$ where $\Delta
x(c_{4})=x(c_{4})-{\tilde{x}}(c_{4})$. We can estimate $|\Delta x(c_{4})|$
using the Newton-Raphson method,
$\displaystyle|\Delta x(c_{4})|\approx\frac{\lvert p({\tilde{x}}(c_{4}))\rvert}{\sqrt{\lvert p^{\prime}({\tilde{x}}(c_{4}))\rvert^{2}+\epsilon^{2}}},$ (4)
where $p(x)$ denotes the left-hand side of Eq. (3), $p^{\prime}$ is its derivative with respect to $x$, and we have included a small regulator $\epsilon$ to avoid divergences when the derivative $p^{\prime}$ vanishes. We use the right-hand side of Eq. (4) for our error estimate with $\epsilon=1$.
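As a minimal sketch, the estimate of Eq. (4) can be coded directly (Swift, with the fixed coefficients substituted; the function names are ours):

```swift
import Foundation

// Left-hand side of Eq. (3) with c5 = c3 = c2 = c1 = c0 = 1 and variable c4.
func p(_ x: Double, _ c4: Double) -> Double {
    pow(x, 5) + c4 * pow(x, 4) * sin(10 * x) + pow(x, 3) + x * x + x + 1
}

// Analytic derivative dp/dx.
func pPrime(_ x: Double, _ c4: Double) -> Double {
    5 * pow(x, 4)
        + c4 * (4 * pow(x, 3) * sin(10 * x) + 10 * pow(x, 4) * cos(10 * x))
        + 3 * x * x + 2 * x + 1
}

// Newton-Raphson error estimate of Eq. (4) with regulator epsilon = 1.
func errorEstimate(_ xTilde: Double, _ c4: Double, epsilon: Double = 1) -> Double {
    abs(p(xTilde, c4)) / (pPrime(xTilde, c4) * pPrime(xTilde, c4) + epsilon * epsilon).squareRoot()
}
```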
Figure 1: Logarithm of the actual error and error estimate for the cubic
spline self-learning emulator in Model 1 after $20$ iterations.
In Fig. 1 we show results for the logarithm of the error estimate and actual
error, spanning the interval from $c_{4}=-1$ to $c_{4}=2$ with $23$ training
points. The fact that more training points are needed near $c_{4}\approx 1.2$
shows that the training process is not simply adding more training points at
random, but is instead uniformly improving the emulator performance across the
entire domain. As shown in the Supplemental Material, there is a discontinuity
at $c_{4}\approx 1.232$, and we need a higher density of training points near
the discontinuity. Fig. 1 shows that our error estimates are matching well
with the actual error. Therefore both $A$ and $B({\bf c})$ as defined in Eq.
(2) are negligible for Model 1.
In the limit of a large number of training points, $N$, the error of spline interpolation for a smooth function scales as $O(N^{-4})$ Ahlberg et al.
(1967). For the case of Model 1, however, the exact solution has a jump
discontinuity, and so the power law scaling is slower. Numerically, we find
that the error is approximately $O(N^{-2.2})$. See the Supplemental Material
for details on the error scaling versus number of training points as well as
the dependence on the choice of initial training points. On a single Intel
i7-9750H processor, evaluating the exact solution using standard root finding
methods for one value of $c_{4}$ requires about $10^{-1}$ s of computational
time. In contrast, it takes about $10^{-6}$ s for spline interpolation for
$23$ training points. The raw emulator speedup factor is therefore $s_{\rm
raw}\sim 10^{5}$. Let $M$ be the number of evaluations needed and
$N_{\epsilon}$ be the number of emulator training points needed to achieve
error tolerance $\epsilon$. The overall computational speedup factor for the
self-learning emulator can then be estimated by the minimum of
$M/N_{\epsilon}$ and $s_{\rm raw}$. If the fast error estimate were not used,
then $N_{\epsilon}$ would be replaced by the number of evaluations needed to
train the emulator to the desired error tolerance $\epsilon$, which is
generally much larger than $N_{\epsilon}$.
#### Model 2
In our next example, Model 2, we will emulate the solution of an ordinary
differential equation with one variable $z$ and one control parameter $c$. We
consider a family of differential equations $Lx(z)=0$, where
$\displaystyle
L=\frac{1}{(1+2z)^{2}}\frac{d^{2}}{dz^{2}}-\frac{2}{(1+2z)^{3}}\frac{d}{dz}+c^{2}e^{2c},$
(5)
and $c$ is a real parameter. Our boundary conditions are $x(z=0,c)=0$ and
$\partial_{z}x(z=0,c)=1$ for all $c$. We consider the region $0\leq z\leq 1$,
and $0\leq c\leq 1$. The exact solution is
$x(z,c)=\frac{1}{ce^{c}}\sin[ce^{c}(z+z^{2})]$. We consider two different
emulators. The first is the natural spline emulator, which we use to perform
interpolations and extrapolations in $c$ for each value of $z$. The second
emulator is a reduced basis emulator, which uses high-fidelity solutions of
the differential equation for several training values of $c$ and solves the
constraint equations approximately using subspace projection. Reduced basis
(RB) emulators have proven useful for solving computationally-intensive
parameterized partial differential equations Bonilla et al. (2022); Melendez
et al. (2022); Quarteroni et al. (2016, 2011); Field et al. (2011).
For our fast error estimate $F[\tilde{x}(z,c),c]$, we need some function that
is linearly proportional to the actual error $\lVert\Delta{x}({z,c})\rVert$ in
the limit $\lVert\Delta{x}({z,c})\rVert\rightarrow 0$. There are many good
choices one can make, and here we choose
$\displaystyle
F[\tilde{x}(z,c),c]=\left\|\frac{L\tilde{x}(z,c)}{\sqrt{\big{(}\frac{d}{dz}L\tilde{x}(z,c)\big{)}^{2}+\epsilon^{2}}}\right\|_{1},$
(6)
where we have again included a small regulator $\epsilon$ to avoid
divergences. Here we are using the $L_{1}$ norm, which is the integral over
$z$ of the absolute value.
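A minimal sketch of this estimate is given below, assuming the trial solution is tabulated on a uniform $z$ grid so that the derivatives in Eqs. (5) and (6) can be taken by finite differences; the value of the regulator used here is an illustrative choice.

```python
import numpy as np

def F_estimate(x_tilde, z, c, eps=1e-3):
    """L1-norm residual error estimate of Eq. (6) for a trial solution x_tilde(z)."""
    dx = np.gradient(x_tilde, z)
    d2x = np.gradient(dx, z)
    # Residual L x_tilde for the operator of Eq. (5).
    Lx = (d2x / (1 + 2 * z)**2 - 2 * dx / (1 + 2 * z)**3
          + c**2 * np.exp(2 * c) * x_tilde)
    dLx = np.gradient(Lx, z)
    integrand = np.abs(Lx) / np.sqrt(dLx**2 + eps**2)
    # Trapezoidal rule for the integral over z (the L1 norm).
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

# Sanity check: the exact solution gives a residual near zero, limited only
# by the finite-difference error of the derivatives.
z = np.linspace(0.0, 1.0, 2001)
c = 0.7
a = c * np.exp(c)
print(F_estimate(np.sin(a * (z + z**2)) / a, z, c))
```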
We initialize with two training points at the boundaries and one in the
interior. For the spline emulator, the error scales approximately as
$O(N^{-1.88})$ for $N$ up to $300$ training points. Meanwhile, the error for
the RB emulators scales exponentially fast in $N$. We have therefore extended
the domain to the wider interval $0\leq c\leq 2$ in order to show more details
of the performance before reaching the limits of machine precision. Over this
interval, the RB emulator error scales approximately as $O(e^{-2.66N})$, for
$N$ above $10$ training points. Fig. 2 shows the actual error and estimated
error after 20 iterations of the self-learning spline emulator. Fig. 3 shows
the actual error and estimated error after 10 iterations of the self-learning
RB algorithm. In both cases the difference between the actual error and the estimated error is a slowly varying function of $c$, as predicted. We note that
the exact solution $x(z,c)=\frac{1}{ce^{c}}\sin[ce^{c}(z+z^{2})]$ oscillates
more rapidly with increasing $c$, and the emulators therefore need more
training points for larger $c$.
We can estimate the difference between the error estimate and the actual error
by constructing a Gaussian Process (GP) emulator for the difference function
$A+B(c)$. We train the GP by computing $A+B(c)$ at the midpoints in between
the emulator training points. We have performed this correction for both the
spline and RB emulators, and the results are shown in Figs. 2 and 3. We see
that the corrected error estimate is in excellent agreement with the actual
error.
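A minimal sketch of this correction is given below, assuming scikit-learn's GaussianProcessRegressor as one convenient GP implementation; the kernel is an illustrative choice.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_log_error_correction(c_mid, log_err_actual, log_err_estimate):
    """Fit A + B(c), the difference between the logarithms of the actual and
    estimated errors, from samples at the midpoints c_mid."""
    diff = np.asarray(log_err_actual) - np.asarray(log_err_estimate)
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(np.asarray(c_mid).reshape(-1, 1), diff)
    return gp

# Corrected estimate at query points c: log_estimate(c) + gp.predict(c[:, None]).
```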
Figure 2: Logarithm of the actual error, error estimate, and corrected error
estimate for the natural spline emulator with self-learning in Model 2 after
$20$ iterations.
Figure 3: Logarithm of the actual error, error estimate, and corrected error
estimate for the reduced basis emulator with self-learning in Model 2 after
$10$ iterations.
On a single Intel i7-9750H processor, numerically solving the differential
equation for one value of $c$ takes about $7\times 10^{-2}$ s. In contrast, the
spline emulator requires about $1.7\times 10^{-3}$ s for $23$ training points,
and the RB emulator takes about $5.5\times 10^{-4}$ s for $13$ training
points. Therefore the spline emulator has a raw speedup factor of $s_{\rm
raw}\sim 40$, while the RB emulator has a raw speedup factor of $s_{\rm
raw}\sim 130$. Given the somewhat comparable values for $s_{\rm raw}$ and the
exponential scaling of the error for the RB emulator, we conclude that the RB
emulator significantly outperforms the spline emulator for this example.
#### Model 3
For our final example, Model 3, we will use eigenvector continuation as the
emulator. Eigenvector continuation (EC) belongs to the family of RB methods
Bonilla et al. (2022); Melendez et al. (2022); however, the applications may involve extremely large vector spaces where general vector operations may not be feasible Frame et al. (2018). EC consists of projecting the Hamiltonian
onto a subspace spanned by a set of exact eigenvectors of the Hamiltonian for
selected training points and then solving the resulting generalized eigenvalue
problem. While it may not be possible to represent general vectors in
extremely large vector spaces, the inner products and matrix elements of
eigenvectors can be computed using Monte Carlo simulations Frame et al.
(2018); Frame (2019), coupled cluster calculations Ekström and Hagen (2019),
or some other many body method in order to solve the generalized eigenvalue
problem. EC has been used to deal with Monte Carlo sign oscillations Frame
(2019), as a resummation method for perturbation theory Demol et al. (2020, 2021), and as an accurate emulator for quantum systems König et al. (2019). More
recently there have been a number of new developments, applications, and
connections to other methods Ekström and Hagen (2019); Furnstahl et al.
(2020); Bai and Ren (2021); Wesolowski et al. (2021); Sarkar and Lee (2021);
Yoshida and Shimizu (2021); Melendez et al. (2021); Bonilla et al. (2022);
Melendez et al. (2022). The implementation of EC within an active learning
framework was first discussed in Ref. Eklind (2021). However, one faces a
computational bottleneck for large systems if the training process requires
many repeated calculations of eigenvectors. Here we instead use a fast
estimate of the error function based upon the variance of the Hamiltonian.
Let $H({\bf c})$ be a manifold of Hamiltonians where the dependence on the
control parameters ${\bf c}$ is smooth. Let $\ket{v({\bf c})}$ be the
corresponding eigenvector of interest and $E({\bf c})$ be the corresponding
energy eigenvalue. The EC approximation consists of projecting $H({\bf c})$
onto the subspace spanned by the training eigenvectors $\{\ket{v({\bf c}^{(i)})}\}$. By solving the resulting generalized eigenvalue problem, we obtain the EC approximation to the eigenvector $\ket{\tilde{v}({\bf c})}$. Throughout our
discussion, we assume that all eigenvectors are unit normalized. The
corresponding approximate energy $\tilde{E}({\bf c})$ is equal to the
expectation value $\braket{\tilde{v}({\bf c})}{H({\bf c})}{\tilde{v}({\bf
c})}$.
The logarithm of the error is $\log\lVert\ket{\Delta v({\bf c})}\rVert$, where
$\ket{\Delta v({\bf c})}=\ket{v({\bf c})}-\ket{\tilde{v}({\bf c})}$. Computing
the error directly will be computationally too expensive for large systems,
and so we will instead work with $\log F[\tilde{v}({\bf c}),H({\bf c})]$,
where $F[\tilde{v}({\bf c}),H({\bf c})]$ is proportional to the square root of
the variance of the Hamiltonian,
$\displaystyle F[\tilde{v}({\bf c}),H({\bf
c})]=\sqrt{\frac{\braket{{\tilde{v}}({\bf c})}{[H({\bf c})-{\tilde{E}}({\bf
c})]^{2}}{{\tilde{v}}({\bf c})}}{\braket{{\tilde{v}}({\bf c})}{[H({\bf
c})]^{2}}{{\tilde{v}}({\bf c})}}}.$ (7)
We note that $F[\tilde{v}({\bf c}),H({\bf c})]$ will be linearly proportional
to $\lVert\ket{\Delta v({\bf c})}\rVert$ in the limit $\lVert\ket{\Delta
v({\bf c})}\rVert\rightarrow 0$. Therefore $\log F[\tilde{v}({\bf c}),H({\bf
c})]$ can be used as a surrogate for the logarithm of the error.
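A minimal sketch of the EC emulator and this variance-based estimate is given below, assuming the training eigenvectors are available as the columns of a matrix V; in practice the norm matrix can become ill-conditioned and may require regularization.

```python
import numpy as np
from scipy.linalg import eigh

def ec_emulate(H, V):
    """Project H onto the span of the training eigenvectors (columns of V)
    and solve the generalized eigenvalue problem for the ground state."""
    h = V.conj().T @ H @ V            # projected Hamiltonian
    n = V.conj().T @ V                # norm (overlap) matrix
    evals, coeffs = eigh(h, n)        # generalized eigenvalue problem
    v_tilde = V @ coeffs[:, 0]
    v_tilde /= np.linalg.norm(v_tilde)
    return evals[0], v_tilde

def variance_estimate(H, v_tilde):
    """F of Eq. (7): square root of the normalized variance of H in the
    unit-normalized EC state v_tilde."""
    Hv = H @ v_tilde
    E_tilde = np.vdot(v_tilde, Hv).real
    num = np.linalg.norm(Hv - E_tilde * v_tilde)**2
    den = np.linalg.norm(Hv)**2
    return np.sqrt(num / den)
```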
For Model 3 we consider the ground state of a system of four distinguishable
particles with equal masses on a three-dimensional lattice with zero-range
interactions. We will work in lattice units, where physical quantities are multiplied by the corresponding power of the lattice spacing to make dimensionless combinations. Furthermore, we set the particle masses equal to $1$ in lattice units. We label the particles as $1,2,3,4$ and take the control
parameters to be the six possible pairwise interactions, $c_{ij}$, with $i<j$.
The lattice volume is a periodic cube of size $L^{3}=4^{3}$, and the corresponding Hamiltonian acts on a linear space with $262{,}144$ dimensions. The
details of the Hamiltonian can be found in the Supplemental Material. This
model can be viewed as a generalization of the four two-component fermions
with zero-range interactions considered in Refs. Sarkar and Lee (2021); Bour et
al. (2011) or the Bose-Hubbard model considered in Ref. Frame et al. (2018).
We would like to study the appearance of interesting structures such as
particle clustering Elhatisari et al. (2017); Freer et al. (2018) in the
ground state wave function as a function of the six coupling parameters
$c_{ij}$. Some simple indicators of particle clustering are discussed in the
Supplemental Material. Such detailed multi-parameter studies are very
difficult due to the number of repeated calculations necessary. However, we
now show that self-learning emulation with eigenvector continuation can make
such studies fairly straightforward.
Since it is difficult to visualize data for all six parameters, we present
results corresponding to one two-dimensional slice. We set
$c_{14}=c_{23}=c_{24}=c_{34}=-2.3475$ and use EC as an emulator for the ground
state as a function of $c_{12}$ and $c_{13}$ over a square domain where each
coefficient ranges from $-5$ to $5$. We initialize the self-learning emulator
with one random training point for $c_{12}$ and $c_{13}$. When searching for
new training points, we use the method of simulated annealing Pincus (1970)
with an energy functional given by $-\log F[\tilde{v}({\bf c}),H({\bf c})]$.
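A minimal sketch of such a search is given below, assuming a callable neg_log_F that evaluates $-\log F$ for a candidate point; the random-walk proposal and linear cooling schedule are illustrative choices rather than the settings used here.

```python
import numpy as np

def anneal_next_point(neg_log_F, bounds, n_steps=2000, T0=1.0, seed=0):
    """Minimize -log F (i.e., maximize the estimated error) with a simple
    Metropolis random walk and a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    c = rng.uniform(lo, hi)
    e = neg_log_F(c)
    for k in range(n_steps):
        T = T0 * (1.0 - k / n_steps) + 1e-6
        c_new = np.clip(c + rng.normal(0.0, 0.1 * (hi - lo)), lo, hi)
        e_new = neg_log_F(c_new)
        if e_new < e or rng.random() < np.exp((e - e_new) / T):
            c, e = c_new, e_new      # accept downhill moves or thermal hops
    return c

# Example: anneal_next_point(lambda c: -np.log(F(c)), [(-5, 5), (-5, 5)])
```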
Figure 4: Logarithm of the error in Model 3 after 40 iterations using self-
learning EC. In panel (a) we show the logarithm of the actual error (red), and
in panel (b) we show the logarithm of the estimated error (blue). Figure 5: Plot of the two-particle clustering and short-range correlations in Model 3. The correlation function $\rho_{12}$ (red) measures the probability that particles $1$ and $2$ occupy the same lattice site, and $\rho_{13}$ (blue) measures the probability that particles $1$ and $3$ occupy the same lattice site.
In Fig. 4 we show the logarithm of the error obtained after 40 iterations. In
panel (a) we show the logarithm of the actual error, and in panel (b) we show
the logarithm of the estimated error. As predicted in Eq. (2), we see that the
two plots are approximately the same up to a constant offset $A$, with
$A\approx-2.3$. The peak value of the actual error is $\lVert\ket{\Delta
v({\bf c})}\rVert=2\times 10^{-5}$. From the figure we see that the local
maxima of the error reside along an approximately flat horizontal surface. The
flatness of this surface indicates that our self-learning emulator is
performing as intended, with the training algorithm removing the peak error at
each iteration. We note that the distribution of training points is far from
uniform. The region near the line $c_{12}+c_{13}=-1$ has a higher density of
training points, indicating that the ground state wave function has a more
complicated dependence on $c_{12}$ and $c_{13}$ in that location.
The error scaling is exponential in the number of training points,
$O(e^{-0.27N})$. On a single Intel i7-9750H processor, direct calculation of
the eigenvector and eigenvalue requires about $1.95$ s, whereas EC emulation
with $41$ training points can be done in $0.013$ s. This corresponds to a raw
speedup factor of $s_{\rm raw}\sim 150$. Using the self-learning emulator, we can now measure particle clustering and short-range correlations between pairs of particles in the ground state wave function for all values of $c_{12}$ and $c_{13}$. In Fig. 5 we show the short-range correlations for pairs of
particles $1$ with $2$, and $1$ with $3$. The correlation function $\rho_{12}$
measures the probability that particles $1$ and $2$ occupy the same lattice
site, and the correlation function $\rho_{13}$ measures the probability that
particles $1$ and $3$ occupy the same lattice site. We see that $\rho_{12}$ is
close to zero when $c_{12}$ is positive and rises to a peak of $1$ when
$c_{12}$ is negative and increasing in magnitude. Similarly, $\rho_{13}$ is
close to zero when $c_{13}$ is positive and rises to a peak of $1$ when
$c_{13}$ is negative and increasing in magnitude. The change in structure is
most prominent near the line $c_{12}+c_{13}=-1$, consistent with our emulator
data on the selection of training points. We have also studied the performance
of the self-learning EC emulator when we vary all six control parameters
$c_{ij}$ over the range from $-5$ to $0$. After $80$ iterations, the peak
value of the error over the entire six-dimensional parameter space is
$\lVert\ket{\Delta v({\bf c})}\rVert=4\times 10^{-3}$.
#### Summary
Self-learning emulation is a general approach that can be implemented with any
emulator that faithfully reproduces the exact solution at selected training
points. Self-learning emulators use a fast estimate of the error during training and perform full calculations only for the chosen new training points. If needed,
the difference between the estimated error and exact error can be corrected
using machine learning. If many evaluations are required, the computational
advantage can grow as large as the raw speedup factor of the emulator, $s_{\rm
raw}$, which can be several orders of magnitude or more. Self-learning
emulators are a highly efficient class of algorithms that offer both high
speed and accuracy as well as a reliable estimate of the error.
#### Acknowledgement
We are grateful for discussions with E. Bonilla, J. Bonitati, R. Furnstahl, G.
Given, P. Giuliani, K. Godbey, C. Hicks, M. Hjorth-Jensen, Da. Lee, J.
Melendez, W. Nazarewicz, E. Ng, Z. Qian, J. Vary, J. Watkins, S. Wild, C.
Yang, and X. Zhang. We gratefully acknowledge funding by the U.S. Department
of Energy (DE-SC0013365 and DE-SC0021152) and the Nuclear Computational Low-
Energy Initiative (NUCLEI) SciDAC-4 project (DE-SC0018083) as well as
computational resources provided by the Oak Ridge Leadership Computing
Facility through the INCITE award “Ab-initio nuclear structure and nuclear
reactions”, the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for
computing time on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre
(JSC), and Michigan State University.
## References
* Carleo et al. (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019).
* Thiagarajan et al. (2020) J. J. Thiagarajan, B. Venkatesh, R. Anirudh, P.-T. Bremer, J. Gaffney, G. Anderson, and B. Spears, Nature Communications 11, 5622 (2020), eprint 2005.02328.
* Kasim et al. (2020) M. F. Kasim, D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, M. Jarvis, S. Khatiwala, J. Korenaga, et al., arXiv e-prints arXiv:2001.08055 (2020), eprint 2001.08055.
* Bedaque et al. (2021) P. Bedaque et al., Eur. Phys. J. A 57, 100 (2021), eprint 2006.05422.
* Settles (2009) B. Settles, Computer Sciences Technical Report 1648, University of Wisconsin–Madison (2009).
* Cohn et al. (1996) D. A. Cohn, Z. Ghahramani, and M. I. Jordan, Journal of artificial intelligence research 4, 129 (1996).
* Cohn et al. (1994) D. Cohn, L. Atlas, and R. Ladner, Machine learning 15, 201 (1994).
* Frame et al. (2018) D. Frame, R. He, I. Ipsen, D. Lee, D. Lee, and E. Rrapaj, Phys. Rev. Lett. 121, 032501 (2018), eprint 1711.07090.
* Frame (2019) D. K. Frame, Ph.D. thesis, Michigan State University (2019), eprint 1905.02782.
* König et al. (2019) S. König, A. Ekström, K. Hebeler, D. Lee, and A. Schwenk (2019), eprint 1909.08446.
* Ekström and Hagen (2019) A. Ekström and G. Hagen, Phys. Rev. Lett. 123, 252501 (2019), eprint 1910.02922.
* Ahlberg et al. (1967) J. H. Ahlberg, E. N. Nilson, and J. L. Walsh, _The Theory of Splines and Their Applications_ (Academic Press, 1967).
* Bonilla et al. (2022) E. Bonilla, P. Giuliani, K. Godbey, and D. Lee (2022), eprint 2203.05284.
* Melendez et al. (2022) J. A. Melendez, C. Drischler, R. J. Furnstahl, A. J. Garcia, and X. Zhang (2022), eprint 2203.05528.
* Quarteroni et al. (2016) A. Quarteroni, A. Manzoni, and F. Negri, _Reduced Basis Methods for Partial Differential Equations_ (Springer, 2016).
* Quarteroni et al. (2011) A. Quarteroni, G. Rozza, and A. Manzoni, J. Math. Industry 1 (2011).
* Field et al. (2011) S. E. Field, C. R. Galley, F. Herrmann, J. S. Hesthaven, E. Ochsner, and M. Tiglio, Phys. Rev. Lett. 106, 221102 (2011).
* Demol et al. (2020) P. Demol, T. Duguet, A. Ekström, M. Frosini, K. Hebeler, S. König, D. Lee, A. Schwenk, V. Somà, and A. Tichai, Phys. Rev. C101, 041302(R) (2020), eprint 1911.12578.
* Demol et al. (2021) P. Demol, M. Frosini, A. Tichai, V. Somà, and T. Duguet, Annals Phys. 424, 168358 (2021), eprint 2002.02724.
* Furnstahl et al. (2020) R. J. Furnstahl, A. J. Garcia, P. J. Millican, and X. Zhang, Phys. Lett. B 809, 135719 (2020), eprint 2007.03635.
* Bai and Ren (2021) D. Bai and Z. Ren, Phys. Rev. C 103, 014612 (2021), eprint 2101.06336.
* Wesolowski et al. (2021) S. Wesolowski, I. Svensson, A. Ekström, C. Forssén, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips (2021), eprint 2104.04441.
* Sarkar and Lee (2021) A. Sarkar and D. Lee, Phys. Rev. Lett. 126, 032501 (2021).
* Yoshida and Shimizu (2021) S. Yoshida and N. Shimizu (2021), eprint 2105.08256.
* Melendez et al. (2021) J. A. Melendez, C. Drischler, A. J. Garcia, R. J. Furnstahl, and X. Zhang (2021), eprint 2106.15608.
* Eklind (2021) C. Eklind, Ph.D. thesis, Chalmers University of Technology (2021), eprint https://hdl.handle.net/20.500.12380/302455.
* Bour et al. (2011) S. Bour, X. Li, D. Lee, U.-G. Meißner, and L. Mitas, Phys. Rev. A83, 063619 (2011), eprint arXiv:1104.2102 [cond-mat.quant-gas].
* Elhatisari et al. (2017) S. Elhatisari, E. Epelbaum, H. Krebs, T. A. Lähde, D. Lee, N. Li, B.-n. Lu, U.-G. Meißner, and G. Rupak, Phys. Rev. Lett. 119, 222505 (2017), eprint 1702.05177.
* Freer et al. (2018) M. Freer, H. Horiuchi, Y. Kanada-En’yo, D. Lee, and U.-G. Meißner, Rev. Mod. Phys. 90, 035004 (2018), eprint 1705.06192.
* Pincus (1970) M. Pincus, Operations Research 18, 1225 (1970).
## I Supplemental Material
### I.1 Model 1
In Model 1, we want the solution to the equation
$\displaystyle
c_{5}x^{5}+c_{4}x^{4}\sin(10x)+c_{3}x^{3}+c_{2}x^{2}+c_{1}x+c_{0}=0$ (S1)
where all the coefficients $c_{i}$ are real. We fix coefficients
$c_{5}=c_{3}=c_{2}=c_{1}=c_{0}=1$, and we vary the coefficient $c_{4}$. With
these choices, the lowest solution to the equation is shown in Fig. S1. We
notice that the dependence of the solution on variable $c_{4}$ is non-trivial,
and there is a discontinuity at $c_{4}\approx 1.232$. As a result, the self-
learning emulator needs to take significantly more training points near the
discontinuity.
Figure S1: Plot of the lowest real solution to Eq. (S1) versus $c_{4}$. The
self-learning emulator needs to take significantly more training points near
the discontinuity at $c_{4}\approx 1.232$.
### I.2 Dependence on initial training points
In this section we examine the performance of the self-learning emulator for
Model 1 when starting from a poor choice of initial training points. In Fig.
S2, we show the logarithm of the actual error and error estimate for the cubic
spline self-learning emulator in Model 1 after $20$ iterations when starting
from training points $c_{4}=-1.000,-0.997,-0.994$. In Fig. S3, we show the
logarithm of the actual error and error estimate for the cubic spline self-
learning emulator in Model 1 after $20$ iterations when starting from training
points $c_{4}=1.994,1.997,2.000$. We see that in both cases there is almost no
loss of performance in comparison with Fig. 1 of the main text despite the poor choice of initial training points.
Figure S2: Logarithm of the actual error and error estimate for the cubic
spline self-learning emulator in Model 1 after $20$ iterations when starting
from training points $c_{4}=-1.000,-0.997,-0.994$.
Figure S3: Logarithm of the actual error and error estimate for the cubic
spline self-learning emulator in Model 1 after $20$ iterations when starting
from training points $c_{4}=1.994,1.997,2.000$.
### I.3 Error scaling
If the solution is smoothly varying, we expect $O(N^{-4})$ error scaling for
our self-learning natural spline emulator. This is because the error of the
cubic interpolation scales as the fourth power of the interval between
training points. However, this holds true only when the function is smooth and
in the limit that $N$ is large. For Model 1, however, the exact solution has a
jump discontinuity, and so the power law scaling is slower. Numerically, we
find that the error is approximately $O(N^{-2.2})$. We see this in Fig. S4,
where the slope of the graph is $-2.2$.
Figure S4: Natural spline emulator error scaling for Model 1. We plot the
logarithm of the error versus the logarithm of the number of iterations.
In Model 2, the solution is a smoothly varying function. However, it seems that we have not yet reached the asymptotic large-$N$ scaling limit, and the error scaling is approximately $O(N^{-1.88})$. This can be seen from the $-1.88$
slope in Fig. S5. In contrast, the reduced basis emulator has exponentially
fast error scaling. This is because the reduced basis emulator is itself a
smooth function. We can view the addition of training points as matching more
derivatives of the smooth emulator to derivatives of the smooth exact
solution. The error scaling is therefore similar to the error scaling of a
convergent power series. In Fig. S6 we show the error scaling for the reduced
basis emulator for Model 2. We see that the error scaling is $O(e^{-2.66N})$.
Figure S5: Natural spline emulator error scaling for Model 2. We plot the
logarithm of the error versus the logarithm of the number of iterations.
Figure S6: Reduced basis method error scaling for Model 2. We plot the
logarithm of the error versus the number of iterations.
For the eigenvector continuation emulator in Model 3, we again expect
exponential error scaling because both the emulator and exact solution are
both smoothly varying functions. In Fig. S7 we show the error scaling for the
eigenvector continuation emulator in Model 3. We see that the error scaling is
$O(e^{-0.27N})$.
Figure S7: Eigenvector continuation emulator error scaling for Model 3. We
plot the logarithm of the error versus the number of iterations.
### I.4 Geometrical picture of eigenvector continuation error
We will present a geometrical picture of eigenvector continuation (EC) error
as well as some additional insight into the error estimate that appears in Eq. (7) of the main text. We consider a Hamiltonian manifold $H({\bf c})$ that
depends on the control parameters ${\bf c}$. We write $\ket{v({\bf c})}$ for
the eigenvector of interest and $E(\bf c)$ for the corresponding energy
eigenvalue. Suppose we know the eigenvectors at $M$ different training points,
$\{{\bf c}^{(1)},\cdots,{\bf c}^{(M)}\}$. We label the set of $M$ training eigenvectors as $S_{M}=\{\ket{v({\bf c}^{(1)})},\cdots,\ket{v({\bf c}^{(M)})}\}$. Let us define the norm matrix ${\cal N}(S_{M})$ as
$\displaystyle\begin{bmatrix}\braket{v({\bf c}^{(1)})}{v({\bf c}^{(1)})}&\cdots&\braket{v({\bf c}^{(1)})}{v({\bf c}^{(M)})}\\ \vdots&\ddots&\vdots\\ \braket{v({\bf c}^{(M)})}{v({\bf c}^{(1)})}&\cdots&\braket{v({\bf c}^{(M)})}{v({\bf c}^{(M)})}\end{bmatrix},$ (S2)
and let $\Omega^{2}(S_{M})$ be the determinant of ${\cal N}(S_{M})$. Then
$\Omega^{2}(S_{M})$ corresponds to the square of the volume of the
$M$-dimensional parallelepiped defined by the vectors in the set $S_{M}$. If
all the eigenvectors are normalized, then the maximum possible volume is 1,
which is attained when all the eigenvectors are orthogonal.
Let us now consider selecting the next training point, ${\bf c}^{(M+1)}$. Let
$P$ be the projection operator onto the linear span of $S_{M}$, and let $Q$ be
the orthogonal complement so that $Q=1-P$. Suppose we now expand our training
set $S_{M}$ by adding another training vector $\ket{v({\bf c})}$ to form
$S_{M+1}$. Let us define the perpendicular projection vector
$\ket{v_{\perp}({\bf c})}$ as
$\displaystyle\ket{v_{\perp}({\bf c})}=Q\ket{v({\bf c})}.$ (S3)
Since $\Omega^{2}(S_{M})$ is the squared volume of the parallelepiped defined by the vectors in $S_{M}$ and $\Omega^{2}(S_{M+1})$ is the squared volume of the parallelepiped defined by the vectors in $S_{M+1}$, it follows that the
ratio $\Omega^{2}(S_{M+1})$ to $\Omega^{2}(S_{M})$ is given by the squared
norm of $\ket{v_{\perp}({\bf c})}$,
$\displaystyle\frac{\Omega^{2}(S_{M+1})}{\Omega^{2}(S_{M})}=\braket{v_{\perp}({\bf
c})}{v_{\perp}({\bf c})}.$ (S4)
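This volume identity is straightforward to verify numerically. The following minimal sketch checks Eq. (S4) for random test vectors, with the projector onto the span of $S_{M}$ built from a pseudoinverse; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(10, 3))        # columns: the training vectors in S_M
w = rng.normal(size=10)             # candidate new vector |v(c)>

def omega2(M):
    """Squared parallelepiped volume: determinant of the norm (Gram) matrix."""
    return np.linalg.det(M.T @ M)

P = V @ np.linalg.pinv(V)           # projector onto the span of S_M
w_perp = w - P @ w                  # Q|v(c)> of Eq. (S3)
ratio = omega2(np.column_stack([V, w])) / omega2(V)
print(np.allclose(ratio, w_perp @ w_perp))   # True, verifying Eq. (S4)
```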
Let us define the projections of $H$ onto $P$ and $Q$ subspaces as
$\displaystyle H^{P}({\bf c})=PH({\bf c})P,\qquad H^{Q}({\bf c})=QH({\bf
c})Q.$ (S5)
The EC approximation is nothing more than the approximation of $\ket{v({\bf
c})}$ by some eigenvector of $H^{P}({\bf c})$, which we denote as
$\ket{v^{P}({\bf c})}$. Let the corresponding energy be labelled $E^{P}({\bf
c})$ so that
$\displaystyle H^{P}({\bf c})\ket{v^{P}({\bf c})}=E^{P}({\bf
c})\ket{v^{P}({\bf c})}.$ (S6)
We also label the eigenvectors of $H^{Q}({\bf c})$ contained in the orthogonal
complement $Q$ as,
$\displaystyle H^{Q}({\bf c})\ket{v^{Q}_{j}({\bf c})}=E^{Q}_{j}({\bf c})\ket{v^{Q}_{j}({\bf c})}.$ (S7)
When the difference between the exact eigenvector and the eigenvector
continuation approximation of the eigenvector is small, we can use first order
perturbation theory to write
$\displaystyle\ket{v({\bf c})}\approx\ket{v^{P}({\bf
c})}+\sum_{j}\frac{\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf
c})}}{E^{P}({\bf c})-E^{Q}_{j}({\bf c})}\ket{v^{Q}_{j}({\bf c})}.$ (S8)
To first order in perturbation theory, the residual vector is just
$\ket{v_{\perp}({\bf c})}\approx\ket{v({\bf c})}-\ket{v^{P}({\bf c})}$. We
therefore have
$\displaystyle\ket{v_{\perp}({\bf
c})}\approx\sum_{j}\frac{\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf
c})}}{E^{P}({\bf c})-E^{Q}_{j}({\bf c})}\ket{v^{Q}_{j}({\bf c})}$ (S9)
If we now combine with Eq. (S4), we get
$\displaystyle\frac{\Omega^{2}(S_{M+1})}{\Omega^{2}(S_{M})}=\lVert\ket{v_{\perp}({\bf c})}\rVert^{2}\approx\sum_{j}\frac{\braket{v^{P}({\bf c})}{H({\bf c})}{v^{Q}_{j}({\bf c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf c})}}{[E^{P}({\bf c})-E^{Q}_{j}({\bf c})]^{2}}.$ (S10)
We can now connect this result with the error or loss function in the main
text. The second part of the equation gives an expression for the error term
$\lVert\ket{v_{\perp}({\bf c})}\rVert$ using first-order perturbation theory,
and the first part of the equation is a geometrical interpretation of the
error term as the ratio of the squared volumes, $\Omega^{2}(S_{M+1})$ to
$\Omega^{2}(S_{M})$. Taking the logarithm of the square root, we get
$\displaystyle\log\lVert\ket{v_{\perp}({\bf c})}\rVert\approx\frac{1}{2}\log\sum_{j}\frac{\braket{v^{P}({\bf c})}{H({\bf c})}{v^{Q}_{j}({\bf c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf c})}}{[E^{P}({\bf c})-E^{Q}_{j}({\bf c})]^{2}}.$ (S11)
The term in the numerator,
$\displaystyle\braket{v^{P}({\bf c})}{H({\bf c})}{v^{Q}_{j}({\bf
c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf c})},$ (S12)
will go to zero at each of the training points, causing large variations in
the logarithm of the error as we add more and more training points. In
contrast, the term in the denominator, $[E^{P}({\bf c})-E^{Q}_{j}({\bf
c})]^{2}$, will be smooth as a function of ${\bf c}$. Similarly, $\braket{v^{P}({\bf
c})}{[H({\bf c})]^{2}}{v^{P}({\bf c})}$ will also be a smooth function of
${\bf c}$. We can write
$\displaystyle\frac{1}{2}\log\sum_{j}\frac{\braket{v^{P}({\bf c})}{H({\bf
c})}{v^{Q}_{j}({\bf c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf
c})}}{[E^{P}({\bf c})-E^{Q}_{j}({\bf
c})]^{2}}=\frac{1}{2}\log\sum_{j}\frac{\braket{v^{P}({\bf c})}{H({\bf
c})}{v^{Q}_{j}({\bf c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf
c})}}{\braket{v^{P}({\bf c})}{[H({\bf c})]^{2}}{v^{P}({\bf c})}}+A+B({\bf
c}),$ (S13)
where $A$ is a constant and $B({\bf c})$ averages to zero over the entire
domain of ${\bf c}$. While the function $B({\bf c})$ is unknown, it will be
dominated by the large variations in the logarithm of the error as more and
more training points are added. We note that
$\displaystyle\sum_{j}$ $\displaystyle\frac{\braket{v^{P}({\bf c})}{H({\bf
c})}{v^{Q}_{j}({\bf c})}\braket{v^{Q}_{j}({\bf c})}{H({\bf c})}{v^{P}({\bf
c})}}{\braket{v^{P}({\bf c})}{[H({\bf c})]^{2}}{v^{P}({\bf
c})}}=\frac{\braket{v^{P}({\bf c})}{H({\bf c})(1-P)(1-P)H({\bf c})}{v^{P}({\bf
c})}}{\braket{v^{P}({\bf c})}{[H({\bf c})]^{2}}{v^{P}({\bf c})}}$
$\displaystyle=\frac{\braket{v^{P}({\bf c})}{[H({\bf c})-H^{P}({\bf
c})]^{2}}{v^{P}({\bf c})}}{\braket{v^{P}({\bf c})}{[H({\bf
c})]^{2}}{v^{P}({\bf c})}}=\frac{\braket{v^{P}({\bf c})}{[H({\bf
c})-E^{P}({\bf c})]^{2}}{v^{P}({\bf c})}}{\braket{v^{P}({\bf c})}{[H({\bf
c})]^{2}}{v^{P}({\bf c})}}.$ (S14)
We therefore arrive at the variance error estimate used in the main text,
$\displaystyle\log\lVert\ket{v_{\perp}({\bf
c})}\rVert=\frac{1}{2}\log\frac{\braket{v^{P}({\bf c})}{[H({\bf c})-E^{P}({\bf
c})]^{2}}{v^{P}({\bf c})}}{\braket{v^{P}({\bf c})}{[H({\bf
c})]^{2}}{v^{P}({\bf c})}}+A+B({\bf c}).$ (S15)
### I.5 Model 3 Hamiltonian
Model 3 describes four distinguishable particles with equal masses $m$ on a
three-dimensional lattice with pairwise point interactions with coefficients
$c_{ij}$ for each pair $i<j$. We use lattice units where physical quantities
are multiplied by powers of the spatial lattice spacing to make the
combinations dimensionless. We take the common mass $m$ to equal $1$ in
lattice units. We let ${\bf n}$ denote the spatial lattice points on our three
dimensional $L^{3}$ periodic lattice. Let the lattice annihilation and
creation operators for particle $i$ be written as $a_{i}({\bf n})$ and
$a^{\dagger}_{i}({\bf n})$ respectively. The free non-relativistic lattice
Hamiltonian has the form
$\displaystyle H_{\text{free}}=\frac{3}{m}\sum_{i=1,2,3,4}\sum_{{\bf
n}}a_{i}^{\dagger}({\bf n})a_{i}({\bf n})-$
$\displaystyle\frac{1}{2m}\sum_{i=1,2,3,4}\sum_{{\bf\hat{l}}={\bf\hat{1}},{\bf\hat{2}},{\bf\hat{3}}}\sum_{{\bf
n}}a_{i}^{\dagger}({\bf n})\Big{[}a_{i}({\bf n}+{\bf\hat{l}})+a_{i}({\bf
n}-{\bf\hat{l}})\Big{]}.$ (S16)
We add to the free Hamiltonian single-site contact interactions, and the
resulting Hamiltonian then has the form
$H=H_{\text{free}}+\sum_{i<j}\sum_{{\bf n}}c_{ij}\rho_{i}({\bf
n})\rho_{j}({\bf n}),$ (S17)
where $\rho_{i}({\bf n})$ is the density operator for particle $i$,
$\displaystyle\rho_{i}({\bf n})$ $\displaystyle=a_{i}^{\dagger}({\bf
n})a_{i}({\bf n}).$ (S18)
For calculations discussed in this work, we use a basis of position
eigenstates on the lattice. As noted in Ref. Elhatisari et al. (2017), we can
determine the formation of particle clusters by measuring the expectation
values of products of local density operators. For example, $\rho_{ij}({\bf
n})=\rho_{i}({\bf n})\rho_{j}({\bf n})$ can serve as an indicator of two-
particle clusters, $\rho_{ijk}({\bf n})=\rho_{i}({\bf n})\rho_{j}({\bf
n})\rho_{k}({\bf n})$ for three-particle clusters, and $\rho_{ijkl}({\bf
n})=\rho_{i}({\bf n})\rho_{j}({\bf n})\rho_{k}({\bf n})\rho_{l}({\bf n})$ for
a four-particle cluster.
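A minimal sketch of this Hamiltonian as a sparse matrix is given below. For simplicity it keeps the full $(L^{3})^{4}$ product space, which is larger than the $262{,}144$-dimensional space quoted in the main text, and it uses a small lattice to keep the example light (the paper uses $L=4$); particles are 0-indexed in the code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def hop1d(L):
    """Periodic one-dimensional hopping matrix."""
    h = sp.lil_matrix((L, L))
    for n in range(L):
        h[n, (n + 1) % L] += 1.0
        h[n, (n - 1) % L] += 1.0
    return h.tocsr()

def single_particle(L, m=1.0):
    """Kinetic term of Eq. (S16) for one particle on an L^3 periodic lattice."""
    I = sp.identity(L, format='csr')
    h = hop1d(L)
    hops = (sp.kron(sp.kron(h, I), I) + sp.kron(sp.kron(I, h), I)
            + sp.kron(sp.kron(I, I), h))
    return (3.0 / m) * sp.identity(L**3) - (0.5 / m) * hops

def model3_hamiltonian(L, c):
    """H of Eqs. (S16)-(S17); c maps pairs (i, j) with i < j to couplings c_ij."""
    d = L**3
    h1 = single_particle(L)
    I = sp.identity(d, format='csr')
    ops = [[h1 if k == i else I for k in range(4)] for i in range(4)]
    H = sum(sp.kron(sp.kron(sp.kron(o[0], o[1]), o[2]), o[3]) for o in ops)
    # Contact interactions are diagonal in the position basis: particle i's
    # site index along the flattened four-particle index is (idx // d^(3-i)) % d.
    idx = np.arange(d**4)
    pos = [(idx // d**(3 - i)) % d for i in range(4)]
    diag = np.zeros(d**4)
    for (i, j), cij in c.items():
        diag += cij * (pos[i] == pos[j])
    return (H + sp.diags(diag)).tocsr()

# Light example with L = 2 and a single attractive coupling.
c = {(i, j): 0.0 for i in range(4) for j in range(i + 1, 4)}
c[(0, 1)] = -2.3475
E0, v0 = eigsh(model3_hamiltonian(2, c), k=1, which='SA')
```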
|
# Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede University of Warwick, Department of Physics, Coventry, CV4
7AL, UK<EMAIL_ADDRESS>
###### Abstract
Deep learning is transforming most areas of science and technology, including
electron microscopy. This review paper offers a practical perspective aimed at
developers with limited familiarity. For context, we review popular
applications of deep learning in electron microscopy. Afterwards, we discuss
hardware and software needed to get started with deep learning and interface
with electron microscopes. We then review neural network components, popular
architectures, and their optimization. Finally, we discuss future directions
of deep learning in electron microscopy.
Keywords: deep learning, electron microscopy, review.
## 1 Introduction
Following decades of exponential increases in computational capability[1] and
widespread data availability[2, 3], scientists can routinely develop
artificial neural networks[4, 5, 6, 7, 8, 9, 10, 11] (ANNs) to enable new
science and technology[12, 13, 14, 15, 16, 17]. The resulting deep learning
revolution[18, 19] has enabled superhuman performance in image
classification[20, 21, 22, 23], games[24, 25, 26, 27, 28, 29], medical
analysis[30, 31], relational reasoning[32], speech recognition[33, 34] and
many other applications[35, 36]. This introduction focuses on deep learning in
electron microscopy and is aimed at developers with limited familiarity. For
context, we therefore review popular applications of deep learning in electron
microscopy. We then review resources available to support researchers and
outline electron microscopy. Finally, we review popular ANN architectures and
their optimization, or “training”, and discuss future trends in artificial
intelligence (AI) for electron microscopy.
Deep learning is motivated by universal approximator theorems[37, 38, 39, 40,
41, 42, 43, 44, 45], which state that sufficiently deep and wide[37, 46, 40]
ANNs can approximate functions to arbitrary accuracy. It follows that ANNs can
always match or surpass the performance of methods crafted by humans. In
practice, deep neural networks (DNNs) reliably[47] learn to express[48, 49,
50, 51] generalizable[52, 53, 54, 55, 56, 57, 58, 59] models without a prior
understanding of physics. As a result, deep learning is freeing physicists
from a need to devise equations to model complicated phenomena[60, 61, 13, 14,
16]. Many modern ANNs have millions of parameters, so inference often takes
tens of milliseconds on graphical processing units (GPUs) or other hardware
accelerators[62]. It is therefore unusual to develop ANNs to approximate
computationally efficient methods with exact solutions, such as the fast
Fourier transform[63, 64, 65] (FFT). However, ANNs are able to leverage an
understanding of physics to accelerate time-consuming or iterative
calculations[66, 67, 68, 69], improve accuracy of methods[70, 30, 31], and
find solutions that are otherwise intractable[24, 71].
Figure 1: Example applications of a noise-removal DNN to instances of Poisson
noise applied to 512$\times$512 crops from TEM images. Enlarged 64$\times$64
regions from the top left of each crop are shown to ease comparison. This
figure is adapted from our earlier work[72] under a Creative Commons
Attribution 4.0[73] license.
### 1.1 Improving Signal-to-Noise
A popular application of deep learning is to improve signal-to-noise[74, 75].
For example, of medical electrical[76, 77], medical image[78, 79, 80], optical
microscopy[81, 82, 83, 84], and speech[85, 86, 87, 88] signals. There are many
traditional denoising algorithms that are not based on deep learning[89, 90,
91], including linear[92, 93] and non-linear[94, 95, 96, 97, 98, 99, 100, 101,
102] spatial domain filters, Wiener[103, 104, 105] filters, non-linear[106,
107, 108, 109, 110, 111] wavelet domain filters, curvelet transforms[112,
113], contourlet transforms[114, 115], hybrid algorithms[116, 117, 118, 119,
120, 121, 122] that operate in both spatial and transformed domains, and
dictionary-based learning[123, 124, 125, 126, 127]. However, traditional
denoising algorithms are limited by features (often laboriously) crafted by
humans and cannot exploit domain-specific context. In perspective, they
leverage an ever-increasingly accurate representation of physics to denoise
signals. However, traditional algorithms are limited by the difficulty of
programmatically describing a complicated reality. As a case in point, an ANN
was able to outperform decades of advances in traditional denoising algorithms
after training on two GPUs for a week[70].
Definitions of electron microscope noise can include statistical noise[128,
129, 130, 131, 132, 133, 134, 135], aberrations[136], scan distortions[137,
138, 139, 140], specimen drift[141], and electron beam damage[142].
Statistical noise is often minimized by either increasing electron dose or
applying traditional denoising algorithms[143, 144]. There are a variety of
denoising algorithms developed for electron microscopy, including algorithms
based on block matching[145], contourlet transforms[114, 115], energy
minimization[146], fast patch reorderings[147], Gaussian kernel density
estimation[148], Kronecker envelope principal component analysis[149] (PCA),
non-local means and Zernike moments[150], singular value thresholding[151],
wavelets[152], and other approaches[153, 154, 141, 155, 156]. Noise that is
not statistical is often minimized by hardware. For example, by using
aberration correctors[136, 157, 158, 159], choosing scanning transmission
electron microscopy (STEM) scan shapes and speeds that minimize
distortions[138], and using stable sample holders to reduce drift[160]. Beam
damage can also be reduced by using minimal electron voltage and electron
dose[161, 162, 163], or dose-fractionation across multiple frames in multi-
pass transmission electron microscopy[164, 165, 166] (TEM) or STEM[167].
Deep learning is being applied to improve signal-to-noise for a variety of
applications[168, 169, 170, 171, 172, 173, 174, 175, 176]. Most approaches in
electron microscopy involve training ANNs to either map low-quality
experimental[177], artificially deteriorated[70, 178] or synthetic[179, 180,
181, 182] inputs to paired high-quality experimental measurements. For
example, applications of a DNN trained with artificially deteriorated TEM
images are shown in figure 1. However, ANNs have also been trained with
unpaired datasets of low-quality and high-quality electron micrographs[183],
or pairs of low-quality electron micrographs[184, 185]. Another approach is Noise2Void[168], in which ANNs are trained from single noisy images. However,
Noise2Void removes information by masking noisy input pixels corresponding to
target output pixels. So far, most ANNs that improve electron microscope
signal-to-noise have been trained to decrease statistical noise[183, 177, 70,
186, 181, 182, 184, 179, 180, 181] as other approaches have been developed to
correct electron microscope scan distortions[187, 188] and specimen drift[189,
188, 141]. However, we anticipate that ANNs will be developed to correct a
variety of electron microscopy noise as ANNs have been developed for
aberration correction of optical microscopy[190, 191, 192, 193, 194, 195] and
photoacoustic[196] signals.
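To make the supervised setup above concrete, here is a minimal PyTorch sketch that trains a small CNN to map Poisson-deteriorated crops to their clean counterparts; the architecture, dose, and random stand-in data are illustrative and not those of the cited works.

```python
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    """A deliberately small convolutional denoiser (stand-in architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def poisson_deteriorate(clean, dose=64.0):
    """Simulate low-dose counting noise for images scaled to [0, 1]."""
    return torch.poisson(clean * dose) / dose

model = SmallDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(100):
    clean = torch.rand(8, 1, 64, 64)    # stand-in for clean micrograph crops
    noisy = poisson_deteriorate(clean)
    loss = loss_fn(model(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```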
Figure 2: Example applications of DNNs to restore 512$\times$512 STEM images
from sparse signals. Training as part of a generative adversarial network[197,
198, 199, 200] yields more realistic outputs than training a single DNN with
mean squared errors. Enlarged 64$\times$64 regions from the top left of each
crop are shown to ease comparison. a) Input is a Gaussian blurred 1/20
coverage spiral[201]. b) Input is a 1/25 coverage grid[202]. This figure is
adapted from our earlier works under Creative Commons Attribution 4.0[73]
licenses.
### 1.2 Compressed Sensing
Compressed sensing[203, 204, 205, 206, 207] is the efficient reconstruction of
a signal from a subset of measurements. Applications include faster medical
imaging[208, 209, 210], image compression[211, 212], increasing image
resolution[213, 214], lower medical radiation exposure[215, 216, 217], and
low-light vision[218, 219]. In STEM, compressed sensing has enabled electron
beam exposure and scan time to be decreased by 10-100$\times$ with minimal
information loss[201, 202]. Thus, compressed sensing can be essential to
investigations where the high current density of electron probes damages
specimens[161, 220, 221, 222, 223, 224, 225, 226]. Even if the effects of beam
damage can be corrected by postprocessing, the damage to specimens is often
permanent. Examples of beam-sensitive materials include organic crystals[227],
metal-organic frameworks[228], nanotubes[229], and nanoparticle
dispersions[230]. In electron microscopy, compressed sensing is especially
effective due to high signal redundancy[231]. For example, most electron
microscopy images are sampled at 5-10$\times$ their Nyquist rates[232] to ease
visual inspection, decrease sub-Nyquist aliasing[233], and avoid
undersampling.
Perhaps the most popular approach to compressed sensing is upsampling or
infilling a uniformly spaced grid of signals[234, 235, 236]. Interpolation
methods include Lanczos[234], nearest neighbour[237], polynomial
interpolation[238], Wiener[239] and other resampling methods[240, 241, 242].
However, a variety of other strategies to minimize STEM beam damage have also
been proposed, including dose fractionation[243] and a variety of sparse data
collection methods[244]. Perhaps the most intensively investigated approach to
the latter is sampling a random subset of pixels, followed by reconstruction
using an inpainting algorithm[245, 244, 246, 247, 248, 249]. Random sampling
of pixels is nearly optimal for reconstruction by compressed sensing
algorithms[250]. However, random sampling exceeds the design parameters of
standard electron beam deflection systems, and can only be performed by
collecting data slowly[251, 138], or with the addition of a fast deflection or
blanking system[247, 252].
Sparse data collection methods that are more compatible with conventional STEM
electron beam deflection systems have also been investigated. For example,
maintaining a linear fast scan deflection whilst using a widely-spaced slow
scan axis with some small random ‘jitter’[251, 245]. However, even small jumps
in electron beam position can lead to a significant difference between nominal
and actual beam positions in a fast scan. Such jumps can be avoided by driving
functions with continuous derivatives, such as those for spiral and Lissajous
scan paths[201, 253, 138, 254, 247]. Sang[138, 254] considered a variety of
scans including Archimedes and Fermat spirals, and scans with constant angular
or linear displacements, by driving electron beam deflectors with a field-
programmable gate array[255] (FPGA) based system[138]. Spirals with constant
angular velocity place the least demand on electron beam deflectors. However,
dwell times, and therefore electron dose, decrease with radius. Conversely,
spirals created with constant spatial speeds are prone to systematic image
distortions due to lags in deflector responses. In practice, fixed doses are
preferable as they simplify visual inspection and limit the dose dependence of
STEM noise[129].
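The two spiral parametrizations discussed above are easy to contrast numerically. The following NumPy sketch generates both; the sampling parameters are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)   # normalized scan time
turns = 50                          # illustrative number of turns

# Constant angular velocity: radius grows linearly with angle, so dwell time
# (and hence dose) per unit area falls with radius.
theta = 2 * np.pi * turns * t
x_a, y_a = t * np.cos(theta), t * np.sin(theta)

# Approximately constant linear speed: arc length grows linearly with time,
# giving r ~ sqrt(t) for an Archimedes spiral (inner turns neglected).
r = np.sqrt(t)
theta_l = 2 * np.pi * turns * r
x_l, y_l = r * np.cos(theta_l), r * np.sin(theta_l)
```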
Deep learning can leverage an understanding of physics to infill images[256,
257, 258]. Example applications include increasing scanning electron
microscopy[259, 178, 260] (SEM), STEM[202, 261] and TEM[262] resolution, and
infilling continuous sparse scans[201]. Example applications of DNNs to
complete sparse spiral and grid scans are shown in figure 2. However, caution
should be used when infilling large regions as ANNs may generate artefacts if
a signal is unpredictable[201]. A popular alternative to deep learning for
infilling large regions is exemplar-based infilling[263, 264, 265, 266].
However, exemplar-based infilling often leaves artefacts[267] and is usually
limited to leveraging information from single images. Smaller regions are
often infilled by fast marching[268], Navier-Stokes infilling[269], or
interpolation[238].
### 1.3 Labelling
Deep learning has been the basis of state-of-the-art classification[270, 271,
272, 273] since convolutional neural networks (CNNs) enabled a breakthrough in
classification accuracy on ImageNet[71]. Most classifiers are single
feedforward neural networks (FNNs) that learn to predict discrete labels. In
electron microscopy, applications include classifying image region
quality[274, 275], material structures[276, 277], and image resolution[278].
However, siamese[279, 280, 281] and dynamically parameterized[282] networks
can more quickly learn to recognise images. Finally, labelling ANNs can learn
to predict continuous features, such as mechanical properties[283]. Labelling
ANNs are often combined with other methods. For example, ANNs can be used to
automatically identify particle locations[284, 285, 186, 286] to ease
subsequent processing.
Figure 3: Example applications of a semantic segmentation DNN to STEM images
of steel to classify dislocation locations. Yellow arrows mark uncommon
dislocation lines with weak contrast, and red arrows indicate that fixed
widths used for dislocation lines are sometimes too narrow to cover defects.
This figure is adapted with permission[287] under a Creative Commons
Attribution 4.0[73] license.
### 1.4 Semantic Segmentation
Semantic segmentation is the classification of pixels into discrete
categories. In electron microscopy, applications include the automatic
identification of local features[288, 289], such as defects[290, 291],
dopants[292], material phases[293], material structures[294, 295], dynamic
surface phenomena[296], and chemical phases in nanoparticles[297]. Early
approaches to semantic segmentation used simple rules. However, such methods
were not robust to a high variety of data[298]. Subsequently, more adaptive
algorithms based on soft-computing[299] and fuzzy algorithms[300] were
developed to use geometric shapes as priors. However, these methods were
limited by programmed features and struggled to handle the high variety of
data.
To improve performance, DNNs have been trained to semantically segment
images[301, 302, 303, 304, 305, 306, 307, 308]. Semantic segmentation DNNs
have been developed for focused ion beam scanning electron microscopy[309,
310, 311] (FIB-SEM), SEM[312, 313, 314, 311], STEM[315, 287], and TEM[286,
316, 317, 310, 318, 311, 319]. For example, applications of a DNN to semantic
segmentation of STEM images of steel are shown in figure 3. Deep learning
based semantic segmentation also has a high variety of applications outside of
electron microscopy, including autonomous driving[320, 321, 322, 323, 324],
dietary monitoring[325, 326], magnetic resonance images[327, 328, 329, 330,
331], medical images[332, 333, 334] such as prenatal ultrasound[335, 336, 337,
338], and satellite image translation[339, 340, 341, 342, 343]. Most DNNs for
semantic segmentation are trained with images segmented by humans. However,
human labelling may be too expensive, time-consuming, or inappropriate for
sensitive data. Unsupervised semantic segmentation can avoid these
difficulties by learning to segment images from an additional dataset of
segmented images[344] or image-level labels[345, 346, 347, 348]. However,
unsupervised semantic segmentation networks are often less accurate than
supervised networks.
Figure 4: Example applications of a DNN to reconstruct phases of exit
wavefunction from intensities of single TEM images. Phases in $[-\pi,\pi)$ rad
are depicted on a linear greyscale from black to white, and Miller indices
label projection directions. This figure is adapted from our earlier work[349]
under a Creative Commons Attribution 4.0[73] license.
### 1.5 Exit Wavefunction Reconstruction
Electrons exhibit wave-particle duality[350, 351], so electron propagation is
often described by wave optics[352]. Applications of electron wavefunctions
exiting materials[353] include determining projected potentials and
corresponding crystal structure information[354, 355], information storage,
point spread function deconvolution, improving contrast, aberration
correction[356], thickness measurement[357], and electric and magnetic
structure determination[358, 359]. Usually, exit wavefunctions are either
iteratively reconstructed from focal series[360, 361, 362, 363, 364] or
recorded by electron holography[352, 363, 365]. However, iterative
reconstruction is often too slow for live applications, and holography is
sensitive to distortions and may require expensive microscope modification.
Non-iterative methods based on DNNs have been developed to reconstruct optical
exit wavefunctions from focal series[69] or single images[366, 367, 368].
Subsequently, DNNs have been developed to reconstruct exit wavefunctions from
single TEM images[349], as shown in figure 4. Indeed, deep learning is
increasingly being applied to accelerate quantum mechanics[369, 370, 371,
372, 373, 374]. Other examples of DNNs adding new dimensions to data include
semantic segmentation described in section 1.4, and reconstructing 3D atomic
distortions from 2D images[375]. Non-iterative methods that do not use ANNs to
recover phase information from single images have also been developed[376,
377]. However, they are limited to defocused images in the Fresnel
regime[376], or to non-planar incident wavefunctions in the Fraunhofer
regime[377].
## 2 Resources
Access to scientific resources is essential to scientific enterprise[378].
Fortunately, most resources needed to get started with machine learning are
freely available. This section provides directions to various machine learning
resources, including how to access deep learning frameworks, a free GPU or
tensor processing unit (TPU) to accelerate tensor computations, platforms that
host datasets and source code, and pretrained models. To support the ideals of
open science embodied by Plan S[379, 378, 380], we focus on resources that
enhance collaboration and enable open access[381]. We also discuss how
electron microscopes can interface with ANNs and the importance of machine
learning resources in the context of electron microscopy. However, we expect
that our insights into electron microscopy can be generalized to other
scientific fields.
### 2.1 Hardware Acceleration
A DNN is an ANN with multiple layers that perform a sequence of tensor
operations. Tensors can either be computed on central processing units (CPUs)
or hardware accelerators[62], such as FPGAs[382, 383, 384, 385], GPUs[386,
387, 388], and TPUs[389, 390, 391]. Most benchmarks indicate that GPUs and
TPUs outperform CPUs for typical DNNs that could be used for image
processing[392, 393, 394, 395, 396] in electron microscopy. However, GPU and
CPU performance can be comparable when CPU computation is optimized[397]. TPUs
often outperform GPUs[394], and FPGAs can outperform GPUs[398, 399] if FPGAs
have sufficient arithmetic units[400, 401]. Typical power consumption per
TFLOPS[402] decreases in order CPU, GPU, FPGA, then TPU, so hardware
acceleration can help to minimize long-term costs and environmental
damage[403].
For beginners, Google Colab[404, 405, 406, 407] and Kaggle[408] provide
hardware accelerators in ready-to-go deep learning environments. Free compute
time on these platforms is limited as they are not intended for industrial
applications. Nevertheless, the free compute time is sufficient for some
research[409]. For more intensive applications, it may be necessary to get
permanent access to hardware accelerators. If so, many online guides detail
how to install[410, 411] and set up an Nvidia[412] or AMD[413] GPU in a
desktop computer for deep learning. However, most hardware comparisons for
deep learning[414] focus on Nvidia GPUs as most deep learning frameworks use
Nvidia’s proprietary Compute Unified Device Architecture (CUDA) Deep Neural
Network (cuDNN) primitives for deep learning[415], which are optimized for
Nvidia GPUs. Alternatively, hardware accelerators may be accessible from a
university or other institutional high performance computing (HPC) centre, or
via a public cloud service provider[416, 417, 418, 419].
Framework | License | Programming Interfaces
---|---|---
Apache SINGA[420] | Apache 2.0[421] | C++, Java, Python
BigDL[422] | Apache 2.0[423] | Python, Scala
Caffe[424, 425] | BSD[426] | C++, MATLAB, Python
Chainer[427] | MIT[428] | Python
Deeplearning4j[429] | Apache 2.0[430] | Clojure, Java, Kotlin, Python, Scala
Dlib[431, 432] | BSL[433] | C++
Flux[434] | MIT[435] | Julia
MATLAB Deep Learning Toolbox[436] | Proprietary[437] | MATLAB
Microsoft Cognitive Toolkit[438] | MIT[439] | BrainScript, C++, Python
Apache MXNet[440] | Apache 2.0[441] | C++, Clojure, Go, JavaScript, Julia, MATLAB, Perl, Python, R, Scala
OpenNN[442] | GNU LGPL[443] | C++
PaddlePaddle[444] | Apache 2.0[445] | C++
PyTorch[446] | BSD[447] | C++, Python
TensorFlow[448, 449] | Apache 2.0[450] | C++, C#, Go, Haskell, Julia, MATLAB, Python, Java, JavaScript, R, Ruby, Rust, Scala, Swift
Theano[451, 452] | BSD[453] | Python
Torch[454] | BSD[455] | C, Lua
Wolfram Mathematica[456] | Proprietary[457] | Wolfram Language
Table 1: Deep learning frameworks with programming interfaces. Most
frameworks have open source code and many support multiple programming
languages.
### 2.2 Deep Learning Frameworks
A deep learning framework[458, 459, 460, 461, 462, 9, 463, 464] (DLF) is an
interface, library or tool for DNN development. Features often include
automatic differentiation[465], heterogeneous computing, pretrained models,
and efficient computing[466] with CUDA[467, 468, 469], cuDNN[415, 470],
OpenMP[471, 472], or similar libraries. Popular DLFs tabulated in table 1
often have open source code and support multiple programming interfaces.
Overall, TensorFlow[448, 449] is the most popular DLF[473]. However,
PyTorch[446] is the most popular DLF at top machine learning conferences[473,
474]. Some DLFs also have extensions that ease development or extend
functionality. For example, TensorFlow extensions[475] that ease development
include Keras[476], Sonnet[477], Tensor2Tensor[478] and TFLearn[479, 480], and
extensions that add functionality include Addons[481], Agents[482],
Dopamine[483], Federated[484, 485, 486], Probability[487], and TRFL[488]. In
addition, DLFs are supplemented by libraries for predictive data analysis,
such as scikit-learn[489].
A limitation of the DLFs in table 1 is that users must use programming
interfaces. This is problematic as many electron microscopists have limited,
if any, programming experience. To increase accessibility, a range of
graphical user interfaces (GUIs) have been created for ANN development. For
example, ANNdotNET[490], Create ML[491], Deep Cognition[492], Deep Network
Designer[493], DIGITS[494], ENNUI[495], Expresso[496], Neural Designer[497],
Waikato Environment for Knowledge Analysis[498, 499, 500] (WEKA) and
ZeroCostDL4Mic[501]. The GUIs offer less functionality and scope for
customization than programming interfaces. However, GUI-based DLFs are rapidly
improving. Moreover, existing GUI functionality is more than sufficient to
implement popular FNNs, such as image classifiers[272] and encoder-
decoders[502, 503, 305, 306, 307, 308, 504].
### 2.3 Pretrained Models
Training ANNs is often time-consuming and computationally expensive[403].
Fortunately, pretrained models are available from a range of open access
collections[505], such as Model Zoo[506], Open Neural Network Exchange[507,
508, 509, 510] (ONNX) Model Zoo[511], TensorFlow Hub[512, 513], and TensorFlow
Model Garden[514]. Some researchers also provide pretrained models via project
repositories[70, 349, 201, 231, 202]. Pretrained models can be used
immediately or to transfer learning[515, 516, 517, 518, 519, 520, 521] to new
applications. For example, by fine-tuning and augmenting the final layer of a
pretrained model[522]. Benefits of transfer learning can include decreasing
training time by orders of magnitude, reducing training data requirements, and
improving generalization[520, 523].
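As a minimal illustration of this recipe, the following Keras sketch loads a pretrained Xception backbone, freezes it, and trains only a new final layer; the input shape and 10-class head are illustrative.

```python
import tensorflow as tf

# Pretrained Xception backbone with its classification top removed.
base = tf.keras.applications.Xception(
    include_top=False, weights='imagenet',
    input_shape=(128, 128, 3), pooling='avg')
base.trainable = False                    # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax'),  # new task-specific layer
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(train_images, train_labels, epochs=5)  # fine-tunes the head only
```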
Using pretrained models is complicated by ANNs being developed with a variety
of DLFs in a range of programming languages. However, most DLFs support
interoperability. For example, by supporting the saving of models to a common
format or to formats that are interoperable with the Neural Network Exchange
Format[524] (NNEF) or ONNX formats. Many DLFs also support saving models to
HDF5[525, 526], which is popular in the pycroscopy[527, 528] and HyperSpy[529,
530] libraries used by electron microscopists. The main limitation of
interoperability is that different DLFs may not support the same
functionality. For example, Dlib[431, 432] does not support recurrent neural
networks[531, 532, 533, 534, 535, 536] (RNNs).
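For example, a PyTorch model might be saved to the interoperable ONNX format along the lines of the following sketch, where the choice of model and input shape are illustrative.

```python
# Sketch of saving a PyTorch model to the interoperable ONNX format.
import torch
import torchvision

# Any trained PyTorch model could be exported; an untrained ResNet-18 is
# used here so that the sketch runs without downloading weights.
model = torchvision.models.resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # Example input traces the graph.

# Trace the model and save it as ONNX for use with other frameworks.
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```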
### 2.4 Datasets
Randomly initialized ANNs[537] must be trained, validated, and tested with
large, carefully partitioned datasets to ensure that they are robust to
general use[538]. Most ANN training starts from random initialization, rather
than transfer learning[515, 516, 517, 518, 519, 520, 521], as:
1. Researchers may be investigating modifications to ANN architecture or ability to learn.
2. Pretrained models may be unavailable or too difficult to find.
3. Models may quickly achieve sufficient performance from random initialization. For example, training an encoder-decoder based on Xception[539] to improve electron micrograph signal-to-noise[70] can require less training than for PASCAL VOC 2012[540] semantic segmentation[305].
4. There may be a high computing budget, so transfer learning is unnecessary[541, 542].
There are millions of open access datasets[543, 544] and a range of platforms
that host[545, 546, 547, 548, 549] or aggregate[550, 551, 552, 553] machine
learning datasets. Openly archiving datasets drives scientific enterprise by
reducing need to repeat experiments[554, 555, 556, 557, 558], enabling new
applications through data mining[559, 560], and standardizing performance
benchmarks[561]. For example, popular datasets used to standardize image
classification performance benchmarks include CIFAR-10[562, 563], MNIST[564]
and ImageNet[565]. A wide range of domain-specific and general platforms
that host scientific data for free are listed by the Open Access
Directory[566] and Nature Scientific Data[567]. For beginners, we recommend
Zenodo[568] as it is free, open access, has an easy-to-use interface, and will
host an unlimited number of datasets smaller than 50 GB for at least 20
years[569].
There are a range of platforms dedicated to hosting electron microscopy
datasets, including the Caltech Electron Tomography Database[570] (ETDB-
Caltech), Electron Microscopy Data Bank[571, 572, 573, 574, 575, 576]
(EMDataBank), and the Electron Microscopy Public Image Archive[577] (EMPIAR).
However, most electron microscopy datasets are small, esoteric, or not partitioned for machine learning[231]. Nevertheless, a variety of large
machine learning datasets for electron microscopy are being published in
independent repositories[231, 578, 579], including Warwick Electron Microscopy
Datasets[231] (WEMD) that we curated. In addition, a variety of databases host
information that supports electron microscopy. For example, crystal structure
databases provide data in standard formats[580, 581], such as Crystallography
Information Files[582, 583, 584, 585] (CIFs). Large crystal structure
databases[586, 587, 588] containing over $10^{5}$ crystal structures include
the Crystallography Open Database[589, 590, 591, 592, 593, 594] (COD),
Inorganic Crystal Structure Database[595, 596, 597, 598, 599] (ICSD), and
National Institute of Standards and Technology (NIST) Crystal Data[600, 601].
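As an illustration of working with these standard formats, the sketch below parses a CIF with the Atomic Simulation Environment (ase); the file name is hypothetical.

```python
# Sketch of reading a Crystallography Information File (CIF) with the
# Atomic Simulation Environment (ase). The file name is hypothetical.
from ase import io

structure = io.read("crystal.cif")        # Parse atoms from a CIF.
print(structure.get_chemical_formula())   # Chemical composition.
print(structure.cell)                     # Unit cell vectors.
```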
Platform | Website | For Machine Learning
---|---|---
Amazon Mechanical Turk | https://www.mturk.com | General tasks
Appen | https://appen.com | Machine learning data preparation
Clickworker | https://www.clickworker.com | Machine learning data preparation
Fiverr | https://www.fiverr.com | General tasks
Hive | https://thehive.ai | Machine learning data preparation
iMerit | https://imerit.net | Machine learning data preparation
JobBoy | https://www.jobboy.com | General tasks
Minijobz | https://minijobz.com | General tasks
Microworkers | https://www.microworkers.com | General tasks
OneSpace | https://freelance.onespace.com | General tasks
Playment | https://playment.io | Machine learning data preparation
RapidWorkers | https://rapidworkers.com | General tasks
Scale | https://scale.com | Machine learning data preparation
Smart Crowd | https://thesmartcrowd.lionbridge.com | General tasks
Trainingset.ai | https://www.trainingset.ai | Machine learning data preparation
ySense | https://www.ysense.com | General tasks
Table 2: Microjob service platforms. The size of typical tasks varies for
different platforms and some platforms specialize in preparing machine
learning datasets.
To achieve high performance, it may be necessary to curate a large dataset for
ANN training[2]. However, large datasets like DeepMind Kinetics[602],
ImageNet[565], and YouTube 8M[603] may take a team months to prepare. As a
result, it may not be practical to divert sufficient staff and resources to
curate a high-quality dataset, even if curation is partially automated[604,
605, 606, 607, 608, 609, 603, 610]. To curate data, human capital can be temporarily and cheaply increased by using microjob services[611], for example, through the microjob platforms tabulated in table 2. Increasingly,
platforms are emerging that specialize in data preparation for machine
learning. Nevertheless, microjob services may be inappropriate for sensitive
data or tasks that require substantial domain-specific knowledge.
### 2.5 Source Code
Software is part of our cultural, industrial, and scientific heritage[612].
Source code should therefore be archived where possible. For example, on an
open source code platform such as Apache Allura[613], AWS CodeCommit[614],
Beanstalk[615], BitBucket[616], GitHub[617], GitLab[618], Gogs[619], Google
Cloud Source Repositories[620], Launchpad[621], Phabricator[622],
Savannah[623] or SourceForge[624]. These platforms enhance collaboration with
functionality that helps users to watch[625] and contribute improvements[626,
627, 628, 629, 630, 631, 632] to source code. The choice of platform is often
not immediately important for small electron microscopy projects as most
platforms offer similar functionality. Nevertheless, functionality comparisons
of open source platforms are available[633, 634, 635]. For beginners, we
recommend GitHub as it is actively developed, scalable to large projects and
has an easy-to-use interface.
### 2.6 Finding Information
Most web traffic[636, 637] goes to large-scale web search engines[638, 639,
640, 641, 642] such as Bing, DuckDuckGo, Google, and Yahoo. This includes
searches for scholarly content[643, 644, 645]. We recommend Google for
electron microscopy queries as it appears to yield the best results for
general[646, 647, 648], scholarly[645, 644] and other[649] queries. However,
general search engines can be outperformed by dedicated search engines for
specialized applications. For example, for finding academic literature[650,
651, 652], data[653], jobs[654, 655], publication venues[656], patents[657,
658, 659, 660], people[661, 662, 663], and many other resources. The use of
search engines is increasingly political[664, 665, 666] as they influence
which information people see. However, most users appear to be satisfied with
their performance[667].
Introductory textbooks are outdated[668, 669] insofar as most information is readily available online. We find that some websites are frequent references for up-to-date and practical information:
1. Stack Overflow[670, 671, 672, 673, 674, 675] is a source of working code snippets and a useful reference when debugging code.
2. Papers With Code State-of-the-Art[561] leaderboards rank the highest performing ANNs with open source code for various benchmarks.
3. Medium[676] and its subsidiaries publish blogs with up-to-date and practical advice about machine learning.
4. The Machine Learning subreddit[677] hosts discussions about machine learning. In addition, there is a Learn Machine Learning subreddit[678] aimed at beginners.
5. Dave Mitchell’s DigitalMicrograph Scripting Website[679, 680] hosts a collection of scripts and documentation for programming electron microscopes.
6. The Internet Archive[681, 682] maintains copies of software and media, including webpages via its Wayback Machine[683, 684, 685].
7. Distill[686] is a journal dedicated to providing clear explanations about machine learning. Monetary prizes are awarded for excellent communication and refinement of ideas.
This list enumerates popular resources that we find useful, so it may
introduce personal bias. However, alternative guides to useful resources are
available[687, 688, 689]. We find that the most common issues finding
information are part of an ongoing reproducibility crisis[690, 691] where
machine learning researchers do not publish their source code or data.
Nevertheless, third party source code is sometimes available. Alternatively,
ANNs can reconstruct source code from some research papers[692].
### 2.7 Scientific Publishing
The number of articles published per year in reputable peer-reviewed[693, 694,
695, 696, 697] scientific journals[698, 699] has roughly doubled every nine
years since the beginning of modern science[700]. There are now over 25000
peer-reviewed journals[699] with varying impact factors[701, 702, 703], scopes
and editorial policies. Strategies to find the best journal to publish in
include using online journal finders[704], seeking the advice of learned
colleagues, and considering where similar research has been published.
Increasingly, working papers are also being published in open access preprint
archives[705, 706, 707]. For example, the arXiv[708, 709] is a popular
preprint archive for computer science, mathematics, and physics. Advantages of
preprints include ensuring that research is openly available, increasing
discovery and citations[710, 711, 712, 713, 714], inviting timely scientific
discussion, and raising awareness to reduce unnecessary duplication of
research. Many publishers have adapted to the popularity of preprints[705] by
offering open access publication options[715, 716, 717, 718] and allowing, and
in some cases encouraging[719], the prior publication of preprints. Indeed,
some journals are now using the arXiv to host their publications[720].
A variety of software can help authors prepare scientific manuscripts[721].
However, we think the most essential software is a document preparation
system. Most manuscripts are prepared with Microsoft Word[722] or similar
software[723]. However, Latex[724, 725, 726] is a popular alternative among
computer scientists, mathematicians and physicists[727]. Most electron
microscopists at the University of Warwick appear to prefer Word. A 2014
comparison of Latex and Word found that Word is better at all tasks other than
typesetting equations[728]. However, in 2017 it became possible to use Latex to typeset equations within Word[727]. As a result, Word appears to be more
efficient than Latex for most manuscript preparation. Nevertheless, Latex may
still be preferable to authors who want fine control over typesetting[729,
730]. As a compromise, we use Overleaf[731] to edit Latex source code, then
copy our code to Word as part of proofreading to identify issues with grammar
and wording.
Figure 5: Reciprocity of TEM and STEM electron optics.
## 3 Electron Microscopy
An electron microscope is an instrument that uses electrons as a source of
illumination to enable the study of small objects. Electron microscopy
competes with a large range of alternative techniques for material
analysis[732, 733, 734], including atomic force microscopy[735, 736, 737]
(AFM); Fourier transformed infrared (FTIR) spectroscopy[738, 739]; nuclear
magnetic resonance[740, 741, 742, 743] (NMR); Raman spectroscopy[744, 745,
746, 747, 748, 749, 750]; and x-ray diffraction[751, 752] (XRD),
dispersion[753], fluorescence[754, 755] (XRF), and photoelectron
spectroscopy[756, 757] (XPS). Quantitative advantages of electron microscopes
can include higher resolution and depth of field, and lower radiation damage
than light microscopes[758]. In addition, electron microscopes can record
images, enabling visual interpretation of complex structures that may
otherwise be intractable. This section will briefly introduce varieties of
electron microscopes, simulation software, and how electron microscopes can
interface with ANNs.
### 3.1 Microscopes
Figure 6: Numbers of results per year returned by Dimensions.ai abstract searches for SEM, TEM, STEM, STM and REM indicate their relative popularities. The number of results for 2020 is extrapolated using the mean rate before 14th July 2020.
There are a variety of electron microscopes that use different illumination
mechanisms. For example, reflection electron microscopy[759, 760] (REM),
scanning electron microscopy[761, 762] (SEM), scanning transmission electron
microscopy[763, 764] (STEM), scanning tunnelling microscopy[765, 766] (STM),
and transmission electron microscopy[767, 768, 769] (TEM). To roughly gauge
popularities of electron microscope varieties, we performed abstract searches with Dimensions.ai[770, 771, 651, 772] for their abbreviations followed by “electron microscopy”, e.g. “REM electron microscopy”. Numbers of results per year in figure 6 indicate that popularity increases in the order REM, STM, STEM, TEM, then SEM. It may be tempting to attribute the popularity of SEM over TEM
to the lower cost of SEM[773], which increases accessibility. However, a range
of considerations influence the procurement of electron microscopes[774] and
hourly pricing at universities[775, 776, 777, 778, 779] is similar for SEM and
TEM.
In SEM, material surfaces are scanned by sequential probing with a beam of
electrons, which are typically accelerated to 0.2-40 keV. The SEM detects
quanta emitted from where the beam interacts with the sample. Most SEM imaging
uses low-energy secondary electrons. However, reflection electron
microscopy[759, 760] (REM) uses elastically backscattered electrons and is often complemented by a combination of reflection high-energy electron diffraction[780, 781, 782] (RHEED), reflection high-energy electron loss spectroscopy[783, 784] (RHEELS) and spin-polarized low-energy electron microscopy[785, 786, 787] (SPLEEM). Some SEMs also detect Auger electrons[788, 789]. To enhance materials characterization, most SEMs also detect light. The most common light detectors are for cathodoluminescence and energy dispersive x-ray[790, 791] (EDX) spectroscopy. In addition, some SEMs also detect Bremsstrahlung radiation[792].
Alternatively, TEM and STEM detect electrons transmitted through specimens. In
conventional TEM, a single region is exposed to a broad electron beam. In
contrast, STEM uses a fine electron beam to probe a series of discrete probing
locations. Typically, electrons are accelerated across a potential difference
to kinetic energies, $E_{k}$, of 80-300 keV. Electrons also have rest energy
$E_{\text{e}}=m_{\text{e}}c^{2}$, where $m_{\text{e}}$ is electron rest mass
and $c$ is the speed of light. The total energy, $E_{t}=E_{\text{e}}+E_{k}$,
of free electrons is related to their rest mass energy by a Lorentz factor,
$\gamma$,
$E_{t}=\gamma m_{\text{e}}c^{2}\,,$ (1)
$\gamma=(1-v^{2}/c^{2})^{-1/2}\,,$ (2)
where $v$ is the speed of electron propagation in the rest frame of an
electron microscope. Electron kinetic energies in TEM and STEM are comparable
to their rest energy, $E_{\text{e}}=511$ keV[793], so relativistic
phenomena[794, 795] must be considered to accurately describe their dynamics.
Electrons exhibit wave-particle duality[350, 351]. Thus, in an ideal electron
microscope, the maximum possible detection angle, $\theta$, between two point
sources separated by a distance, $d$, perpendicular to the electron
propagation direction is diffraction-limited. The resolution limit for imaging
can be quantified by Rayleigh’s criterion[796, 797, 798]
$\theta\simeq 1.22\frac{\lambda}{d},$ (3)
where resolution increases with decreasing wavelength, $\lambda$. Electron
wavelength decreases with increasing accelerating voltage, as described by the
relativistic de Broglie relation[799, 800, 801],
$\lambda=hc\left(E_{k}^{2}+2E_{\text{e}}E_{k}\right)^{-1/2}\,,$ (4)
where $h$ is Planck’s constant[793]. Electron wavelengths for typical
acceleration voltages tabulated by JEOL are in picometres[802]. In comparison,
Cu K-$\alpha$ x-rays, which are often used for XRD, have wavelengths near 0.15
nm[803]. In theory, electrons can therefore achieve over 100$\times$ higher
resolution than x-rays. Electrons and x-rays are both ionizing; however,
electrons often do less radiation damage to thin specimens than x-rays[758].
Tangentially, TEM and STEM often achieve over 10 times higher resolution than
SEM[804] as transmitted electrons in TEM and STEM are easier to resolve than
electrons returned from material surfaces in SEM.
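To make equation 4 concrete, the sketch below computes electron wavelengths for typical TEM and STEM accelerating voltages; the constants are rounded CODATA values.

```python
# Relativistic de Broglie wavelength of beam electrons (equation 4).
import math

h = 6.62607015e-34   # Planck's constant (J s).
c = 2.99792458e8     # Speed of light (m/s).
e = 1.602176634e-19  # Elementary charge (C).
E_e = 511e3 * e      # Electron rest energy (J).

def electron_wavelength(voltage):
    """Wavelength (m) for an accelerating voltage (V)."""
    E_k = e * voltage  # Kinetic energy (J).
    return h * c / math.sqrt(E_k**2 + 2 * E_e * E_k)

for kV in (80, 200, 300):
    print(f"{kV} kV: {1e12 * electron_wavelength(1e3 * kV):.2f} pm")
# Expected output is roughly 4.18, 2.51 and 1.97 pm, respectively.
```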
In practice, TEM and STEM are also limited by incoherence[805, 806, 807]
introduced by inelastic scattering, electron energy spread, and other
mechanisms. TEM and STEM are related by an extension of Helmholtz
reciprocity[808, 809] where the source plane in a TEM corresponds to the
detector plane in a STEM[810], as shown in figure 5. Consequently, TEM
coherence is limited by electron optics between the specimen and image,
whereas STEM coherence is limited by the illumination system. For conventional
TEM and STEM imaging, electrons are normally incident on a specimen[811].
Advantages of STEM imaging can include higher contrast and resolution than TEM
imaging, and lower radiation damage[812]. As a result, STEM is increasingly being favoured over TEM for high-resolution studies. However, we caution that
definitions of TEM and STEM resolution can be disparate[813].
In addition to conventional imaging, TEM and STEM include a variety of
operating modes for different applications. For example, TEM operating
configurations include electron diffraction[814]; convergent beam electron
diffraction[815, 816, 817] (CBED); tomography[818, 819, 820, 821, 822, 823,
824, 825, 826]; and bright field[827, 828, 768, 829], dark field[768, 829] and
annular dark field[830] imaging. Similarly, STEM operating configurations
include differential phase contrast[831, 832, 833, 834]; tomography[818, 820,
822, 823]; and bright field[835, 836] or dark field[837] imaging. Further,
electron cameras[838, 839] are often supplemented by secondary signal
detectors. For example, elemental composition is often mapped by EDX
spectroscopy, electron energy loss spectroscopy[840, 841] (EELS) or wavelength
dispersive spectroscopy[842, 843] (WDS). Similarly, electron backscatter
diffraction[844, 845, 846] (EBSD) can detect strain[847, 848, 849] and
crystallization[850, 851, 852].
### 3.2 Contrast Simulation
The propagation of electron wavefunctions through electron microscopes can be
described by wave optics[136]. Further, the most popular approach to modelling
measurement contrast is multislice simulation[853, 854], where an electron
wavefunction is iteratively perturbed as it travels through a model of a
specimen. Multislice software for electron microscopy includes ACEM[854, 855,
856], clTEM[857, 858], cudaEM[859], Dr. Probe[860, 861], EMSoft[862, 863],
JEMS[864], JMULTIS[865], MULTEM[866, 867, 868], NCEMSS[869, 870], NUMIS[871],
Prismatic[872, 873, 874], QSTEM[875], SimulaTEM[876], STEM-CELL[877],
Tempas[878], and xHREM[879, 880, 881, 882, 883, 884]. We find that most
multislice software is a recreation and slight modification of common
functionality, possibly due to a publish-or-perish culture in academia[885,
886, 887]. Bloch-wave simulation[888, 889, 890, 891, 854, 892] is an
alternative to multislice simulation that can reduce computation time and
memory requirements for crystalline materials[893].
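To illustrate the idea, the sketch below performs a single multislice iteration in NumPy, where an electron wavefunction is transmitted through a slice of projected potential and then Fresnel-propagated to the next slice; the grid, potential, and interaction parameter are illustrative placeholders rather than a calibrated simulation.

```python
# Minimal sketch of one multislice iteration: transmit an electron
# wavefunction through a thin slice, then propagate it to the next slice.
import numpy as np

n, px = 256, 0.1e-10           # Grid size and pixel size (m); illustrative.
wavelength = 1.97e-12          # 300 kV electrons (m).
dz = 2e-10                     # Slice thickness (m).
sigma = 1.0                    # Interaction parameter; illustrative units.

psi = np.ones((n, n), dtype=complex)      # Incident plane wave.
potential = np.random.rand(n, n)          # Projected slice potential (placeholder).

# Transmission: phase shift proportional to the projected potential.
psi *= np.exp(1j * sigma * potential)

# Propagation: multiply by the Fresnel propagator in the Fourier domain.
k = np.fft.fftfreq(n, d=px)               # Spatial frequencies (1/m).
k2 = k[:, None]**2 + k[None, :]**2
propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
psi = np.fft.ifft2(propagator * np.fft.fft2(psi))
```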
### 3.3 Automation
Most modern electron microscopes support Gatan Microscopy Suite (GMS)
Software[894]. GMS enables electron microscopes to be programmed by
DigitalMicrograph Scripting, a proprietary Gatan programming language akin to a
simplified version of C++. A variety of DigitalMicrograph scripts, tutorials
and related resources are available from Dave Mitchell’s DigitalMicrograph
Scripting Website[679, 680], FELMI/ZFE’s Script Database[895] and Gatan’s
Script library[896]. Some electron microscopists also provide
DigitalMicrograph scripting resources on their webpages[897, 898, 899].
However, DigitalMicrograph scripts are slow insofar as they are interpreted at runtime, and there is limited native functionality for parallel and distributed computing. As a result, extensions to DigitalMicrograph scripting
are often developed in other programming languages that offer more
functionality.
Historically, most extensions were developed in C++[900]. This was problematic
as there is limited documentation, the standard approach used outdated C++
software development kits such as Visual Studio 2008, and the programming expertise required to create functions that interface with DigitalMicrograph scripts limited accessibility. To increase accessibility, recent versions of
GMS now support python[901]. This is convenient as it enables ANNs developed
with python to readily interface with electron microscopes. For ANNs developed
with C++, users have the option to either create C++ bindings for
DigitalMicrograph script or for python. Integrating ANNs developed in other
programming languages is more complicated as DigitalMicrograph provides almost
no support. However, that complexity can be avoided by exchanging files from
DigitalMicrograph script to external libraries via a random access memory
(RAM) disk[902] or secondary storage[903].
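As a sketch of such file-based exchange, the external Python process below polls a shared directory (a hypothetical path) for arrays saved by a DigitalMicrograph script and writes processed outputs alongside them; the processing function is a placeholder for ANN inference.

```python
# Sketch of exchanging data between DigitalMicrograph script and an external
# ANN via a RAM disk or secondary storage. Paths and formats are hypothetical.
import pathlib
import time

import numpy as np

watch_dir = pathlib.Path("exchange")  # Hypothetical shared directory.

def process(micrograph):
    """Placeholder for ANN inference, e.g. denoising."""
    return micrograph

while True:
    for f in sorted(watch_dir.glob("*.npy")):
        if f.stem.endswith(".out"):
            continue  # Skip outputs this process has already written.
        out = f.with_name(f.stem + ".out.npy")
        if not out.exists():
            np.save(out, process(np.load(f)))  # Read input, write result.
    time.sleep(0.1)  # Poll for new files saved by DigitalMicrograph script.
```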
To further increase accessibility, there are collections of GMS plugins with GUIs for automation and analysis[904, 897, 898, 899]. In addition, various individual
plugins are available[905, 906, 907, 908, 909]. Some plugins are open source,
so they can be adapted to interface with ANNs. However, many high-quality
plugins are proprietary and closed source, limiting their use to automation of
data collection and processing. Plugins can also be supplemented by a variety
of libraries and interfaces for electron microscopy signal processing. For
example, popular general-purpose software includes ImageJ[910],
pycroscopy[527, 528] and HyperSpy[529, 530]. In addition, there are
directories for tens of general-purpose and specific electron microscopy
programs[911, 912, 913].
## 4 Components
Most modern ANNs are configured from a variety of DLF components. To take
advantage of hardware accelerators[62], most ANNs are implemented as sequences
of parallelizable layers of tensor operations[914]. Layers are often
parallelized across data and may be parallelized across other dimensions[915].
This section introduces popular nonlinear activation functions, normalization
layers, convolutional layers, and skip connections. To add insight, we provide
comparative discussion and address some common causes of confusion.
### 4.1 Nonlinear Activation
In general, DNNs need multiple layers to be universal approximators[37, 38,
39, 40, 41, 42, 43, 44, 45]. Nonlinear activation functions[916, 917] are
therefore essential to DNNs as successive linear layers can be contracted to a
single layer. Activation functions separate artificial neurons, similar to
biological neurons[918]. To learn efficiently, most DNNs are tens or hundreds
of layers deep[47, 919, 920, 921]. High depth increases representational
capacity[47], which can help training by gradient descent as DNNs evolve as
linear models[922] and nonlinearities can create suboptimal local minima where
data cannot be fit by linear models[923]. There are infinitely many possible
activation functions. However, most activation functions have low polynomial
order, similar to physical Hamiltonians[47].
Most ANNs developed for electron microscopy are for image processing, where
the most popular nonlinearities are rectifier linear units[924, 925] (ReLUs).
The ReLU activation, $f(x)$, of an input, $x$, and its gradient,
$\partial_{x}f(x)$, are
$f(x)=\max(0,x)$ (5a)
$\frac{\partial f(x)}{\partial x}=\begin{cases}0,&\text{if }x\leq 0\\ 1,&\text{if }x>0\end{cases}$ (5b)
Popular variants of ReLUs include Leaky ReLU[926],
$f(x)=\max(\alpha x,x)$ (6a)
$\frac{\partial f(x)}{\partial x}=\begin{cases}\alpha,&\text{if }x\leq 0\\ 1,&\text{if }x>0\end{cases}$ (6b)
where $\alpha$ is a hyperparameter, parametric ReLU[22] (PreLU) where $\alpha$
is a learned parameter, dynamic ReLU where $\alpha$ is a learned function of
inputs[927], and randomized leaky ReLU[928] (RReLU) where $\alpha$ is chosen
randomly. Typically, learned PreLU $\alpha$ are higher the nearer a layer is
to ANN inputs[22]. Motivated by limited comparisons that do not show a clear
performance difference between ReLU and leaky ReLU[929], some blogs[930] argue
against using leaky ReLU due to its higher computational requirements and
complexity. However, an in-depth comparison found that leaky ReLU variants
consistently slightly outperform ReLU[928]. In addition, the non-zero gradient
of leaky ReLU for $x\leq 0$ prevents saturating, or “dying”, ReLU[931, 932,
933], where the zero gradient of ReLUs stops learning.
There are a variety of other piecewise linear ReLU variants that can improve
performance. For example, ReLU$h$ activations are limited to a threshold[934],
$h$, so that
$f(x)=\min(\max(0,x),h)$ (7a)
$\frac{\partial f(x)}{\partial x}=\begin{cases}0,&\text{if }x\leq 0\\ 1,&\text{if }0<x\leq h\\ 0,&\text{if }x>h\end{cases}$ (7b)
Thresholds near $h=6$ are often effective, so a popular choice is ReLU6. Another
popular activation is concatenated ReLU[935] (CReLU), which is the
concatenation of $\text{ReLU}(x)$ and $\text{ReLU}(-x)$. Other ReLU variants
include adaptive convolutional[936], bipolar[937], elastic[938], and
Lipschitz[939] ReLUs. However, most ReLU variants are uncommon as they are
more complicated than ReLU and offer small, inconsistent, or unclear
performance gains. Moreover, it follows from the universal approximator
theorems[37, 38, 39, 40, 41, 42, 43, 44, 45] that disparity between ReLU and
its variants approaches zero as network depth increases.
In shallow networks, curved activation functions with non-zero Hessians often
accelerate convergence and improve performance. A popular activation is the
exponential linear unit[940] (ELU),
$f(x)=\begin{cases}\alpha(\exp(x)-1),&\text{if }x\leq 0\\ x,&\text{if }x\geq 0\end{cases}$ (8a)
$\frac{\partial f(x)}{\partial x}=\begin{cases}\alpha\exp(x),&\text{if }x\leq 0\\ 1,&\text{if }x\geq 0\end{cases}$ (8b)
where $\alpha$ is a learned parameter. Further, a scaled ELU[941] (SELU),
$f(x)=\begin{cases}\lambda\alpha(\exp(x)-1),&\text{if }x\leq 0\\ \lambda x,&\text{if }x\geq 0\end{cases}$ (9a)
$\frac{\partial f(x)}{\partial x}=\begin{cases}\lambda\alpha\exp(x),&\text{if }x\leq 0\\ \lambda,&\text{if }x\geq 0\end{cases}$ (9b)
with fixed $\alpha=1.67326$ and scale factor $\lambda=1.0507$ can be used to
create self-normalizing neural networks (SNNs). A SNN cannot be derived from
ReLUs or most other activation functions. Activation functions with curvature
are especially common in ANNs with only a couple of layers. For example,
activation functions in radial basis function (RBF) networks[942, 943, 944,
945], which are efficient universal approximators, are often Gaussians,
multiquadratics, inverse multiquadratics, or square-based RBFs[946].
Similarly, support vector machines[947, 948, 949] (SVMs) often use RBFs, or
sigmoids,
$f(x)=\frac{1}{1+\exp(-x)}$ (10a)
$\frac{\partial f(x)}{\partial x}=f(x)\left(1-f(x)\right)$ (10b)
Sigmoids can also be applied to limit the support of outputs. Unscaled, or
“logistic”, sigmoids are often denoted $\sigma(x)$ and are related to $\tanh$
by $\tanh(x)=2\sigma(2x)-1$. To avoid expensive $\exp(-x)$ in the computation
of tanh, we recommend K-tanH[950], LeCun tanh[951], or piecewise linear
approximation[952, 953].
The activation functions introduced so far are scalar functions that can be
efficiently computed in parallel for each input element. However, functions of
vectors, $\textbf{x}=\\{x_{1},x_{2},...\\}$, are also popular. For example,
softmax activation[954],
$f(\textbf{x})=\frac{\exp(\textbf{x})}{\text{sum}(\exp(\textbf{x}))}$ (11a)
$\frac{\partial f(\textbf{x})_{i}}{\partial x_{j}}=f(\textbf{x})_{i}\left(\delta_{ij}-f(\textbf{x})_{j}\right)$ (11b)
is often applied before computing cross-entropy losses for classification
networks. Similarly, L$n$ vector normalization,
$f(\textbf{x})=\frac{\textbf{x}}{||\textbf{x}||_{n}}$ (12a)
$\frac{\partial f(\textbf{x})_{j}}{\partial x_{j}}=\frac{1}{||\textbf{x}||_{n}}\left(1-\frac{x_{j}^{n}}{||\textbf{x}||_{n}^{n}}\right)$ (12b)
with $n=2$ is often applied to vectors to ensure that they lie on a unit
sphere[349]. Finally, max pooling[955, 956],
$f(\textbf{x})=\max(\textbf{x})$ (13a)
$\frac{\partial f(\textbf{x})}{\partial x_{j}}=\begin{cases}1,&\text{if }j=\text{argmax}(\textbf{x})\\ 0,&\text{if }j\neq\text{argmax}(\textbf{x})\end{cases}$ (13b)
is another popular multivariate activation function that is often used for
downsampling. However, max pooling has fallen out of favour as it is often
outperformed by strided convolutional layers[957]. Other vector activation
functions include squashing nonlinearities for dynamic routing by agreement in
capsule networks[958] and cosine similarity[959].
There are many other activation functions that are not detailed here for
brevity. Further, finding new activation functions is an active area of
research[960, 961]. Notable variants include choosing activation functions
from a set before training[962, 963] and learning activation functions[962,
964, 965, 966, 967]. Activation functions can also encode probability
distributions[968, 969, 970] or include noise[953]. Finally, there are a
variety of other deterministic activation functions[971, 961]. In electron
microscopy, most ANNs enable new or enhance existing applications. Consequently, we recommend using computationally efficient and established activation functions unless there is a compelling reason to use a specialized activation function.
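For reference, the sketch below gives NumPy implementations of the activations in equations 5-10; parameter values are common defaults, not prescriptions.

```python
# Reference NumPy implementations of activation functions (equations 5-10).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                    # Equation 5a.

def leaky_relu(x, alpha=0.01):
    return np.maximum(alpha * x, x)              # Equation 6a.

def relu6(x, h=6.0):
    return np.minimum(np.maximum(0.0, x), h)     # Equation 7a.

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))  # Equation 8a.

def selu(x, alpha=1.67326, lam=1.0507):
    return lam * elu(x, alpha)                   # Equation 9a.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # Equation 10a.
```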
### 4.2 Normalization
Normalization[972, 973, 974] standardizes signals, which can accelerate
convergence by gradient descent and improve performance. Batch
normalization[975, 976, 977, 978, 979, 980] is the most popular normalization
layer in image processing DNNs trained with minibatches of $N$ examples.
Technically, a “batch” is an entire training dataset and a “minibatch” is a
subset; however, the “mini” is often omitted where meaning is clear from
context. During training, batch normalization applies a transform,
$\mu_{B}=\frac{1}{N}\sum\limits_{i=1}^{N}x_{i}\,,$ (14)
$\sigma_{B}^{2}=\frac{1}{N}\sum\limits_{i=1}^{N}(x_{i}-\mu_{B})^{2}\,,$ (15)
$\hat{\textbf{x}}=\frac{\textbf{x}-\mu_{B}}{(\sigma_{B}^{2}+\epsilon)^{1/2}}\,,$ (16)
$\text{BatchNorm}(\textbf{x})=\gamma\hat{\textbf{x}}+\beta\,,$ (17)
where $\textbf{x}=\\{x_{1},...,x_{N}\\}$ is a batch of layer inputs, $\gamma$
and $\beta$ are a learnable scale and shift, and $\epsilon$ is a small
constant added for numerical stability. During inference, batch normalization
applies a transform,
$\text{BatchNorm}(\textbf{x})=\frac{\gamma}{(\text{Var}[x]+\epsilon)^{1/2}}\textbf{x}+\left(\beta-\frac{\gamma\text{E}[x]}{(\text{Var}[x]+\epsilon)^{1/2}}\right)\,,$ (18)
where E[x] and Var[x] are expected batch means and variances. For convenience,
E[x] and Var[x] are often estimated with exponential moving averages that are
tracked during training. However, E[x] and Var[x] can also be estimated by
propagating examples through an ANN after training.
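A minimal NumPy sketch of the training and inference batch normalization transforms (equations 14-18) follows; the momentum and $\epsilon$ values are illustrative.

```python
# Sketch of batch normalization (equations 14-18) for a batch of features.
import numpy as np

def batch_norm_train(x, gamma, beta, moments, momentum=0.99, eps=1e-3):
    mu = x.mean(axis=0)                       # Equation 14.
    var = x.var(axis=0)                       # Equation 15.
    x_hat = (x - mu) / np.sqrt(var + eps)     # Equation 16.
    # Track exponential moving averages for inference (equation 18).
    moments["mean"] = momentum * moments["mean"] + (1 - momentum) * mu
    moments["var"] = momentum * moments["var"] + (1 - momentum) * var
    return gamma * x_hat + beta               # Equation 17.

def batch_norm_infer(x, gamma, beta, moments, eps=1e-3):
    scale = gamma / np.sqrt(moments["var"] + eps)
    return scale * x + (beta - scale * moments["mean"])  # Equation 18.

x = np.random.randn(32, 16)                   # Batch of 32 examples, 16 features.
gamma, beta = np.ones(16), np.zeros(16)
moments = {"mean": np.zeros(16), "var": np.ones(16)}
y = batch_norm_train(x, gamma, beta, moments)
```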
Increasing batch size stabilizes learning by averaging destabilizing loss
spikes over batches[261]. Batched learning also enables more efficient
utilization of modern hardware accelerators. For example, larger batch sizes
improve utilization of GPU memory bandwidth and throughput[981, 391, 982].
Using large batches can also be more efficient than many small batches when
distributing training across multiple CPU clusters or GPUs due to
communication overheads. However, the performance benefits of large batch
sizes can come at the cost of lower test accuracy as training with large
batches tends to converge to sharper minima[983, 984]. As a result, it is
often best not to use batch sizes higher than $N\approx 32$ for image
classification[985]. However, learning rate scaling[541] and layer-wise
adaptive learning rates[986] can increase accuracy of training with fixed
larger batch sizes. Batch size can also be increased throughout training
without compromising accuracy[987] to exploit effective learning rates being
inversely proportional to batch size[987, 541]. Alternatively, accuracy can be
improved by creating larger batches from replicated instances of training
inputs with different data augmentations[988].
There are a few caveats to batch normalization. Originally, batch
normalization was applied before activation[976]. However, applying batch
normalization after activation often slightly improves performance[989, 990].
In addition, training can be sensitive to the often-forgotten $\epsilon$
hyperparameter[991] in equation 16. Typically, performance decreases as
$\epsilon$ is increased above $\epsilon\approx 0.001$; however, there is a
sharp increase in performance around $\epsilon=0.01$ on ImageNet. Finally, it
is often assumed that batches are representative of the training dataset. This
is often approximated by shuffling training data to sample independent and
identically distributed (i.i.d.) samples. However, performance can often be
improved by prioritizing sampling[992, 993]. We observe that batch
normalization is usually effective if batch moments, $\mu_{B}$ and
$\sigma_{B}$, have similar values for every batch.
Batch normalization is less effective when training batch sizes are small, or
do not consist of independent samples. To improve performance, standard
moments in equation 16 can be renormalized[994] to expected means, $\mu$, and
standard deviations, $\sigma$,
$\hat{\textbf{x}}\leftarrow r\hat{\textbf{x}}+d\,,$ (19)
$r=\text{clip}_{[1/r_{\text{max}},r_{\text{max}}]}\left(\frac{\sigma_{B}}{\sigma}\right)\,,$ (20)
$d=\text{clip}_{[-d_{\text{max}},d_{\text{max}}]}\left(\frac{\mu_{B}-\mu}{\sigma}\right)\,,$ (21)
where gradients are not backpropagated with respect to (w.r.t.) the
renormalization parameters, $r$ and $d$. Moments $\mu$ and $\sigma$ are
tracked by exponential moving averages and clipping to $r_{\text{max}}$ and
$d_{\text{max}}$ improves learning stability. Usually, clipping values are
increased from starting values of $r_{\text{max}}=1$ and $d_{\text{max}}=0$,
which correspond to batch normalization, as training progresses. Another
approach is virtual batch normalization[995] (VBN), which estimates $\mu$ and
$\sigma$ from a reference batch of samples and does not require clipping.
However, VBN is computationally expensive as it requires computing a second
batch of statistics at every training iteration. Finally, online[996] and streaming[974] normalization enable training with small batch sizes by replacing $\mu_{B}$ and $\sigma_{B}$ in equation 16 with their exponential moving averages.
There are alternatives to the $L_{2}$ batch normalization of equations 14-18
that standardize to different Euclidean norms. For example, $L_{1}$ batch
normalization[997] computes
$s_{1}=\frac{1}{N}\sum\limits_{i=1}^{N}|x_{i}-\mu_{B}|\,,$ (22)
$\hat{\textbf{x}}=\frac{\textbf{x}-\mu_{B}}{C_{L_{1}}s_{1}}\,,$ (23)
where $C_{L_{1}}=(\pi/2)^{1/2}$. Although the $C_{L_{1}}$ factor could be
learned by ANN parameters, its inclusion accelerates convergence of the
original implementation of $L_{1}$ batch normalization[997]. Another
alternative is $L_{\infty}$ batch normalization[997], which computes
$s_{\infty}=\text{mean}(\text{top}_{k}(|\textbf{x}-\mu_{B}|))\,,$ (24)
$\hat{\textbf{x}}=\frac{\textbf{x}-\mu_{B}}{C_{L_{\infty}}s_{\infty}}\,,$ (25)
where $C_{L_{\infty}}$ is a scale factor, and $\text{top}_{k}(\textbf{x})$
returns the $k$ highest elements of x. Hoffer et al suggest $k=10$[997]. Some
$L_{1}$ batch normalization proponents claim that $L_{1}$ batch normalization
outperforms[975] or achieves similar performance[997] to $L_{2}$ batch
normalization. However, we found that $L_{1}$ batch normalization often lowers
performance in our experiments. Similarly, $L_{\infty}$ batch normalization
often lowers performance[997]. Overall, $L_{1}$ and $L_{\infty}$ batch
normalization do not appear to offer a substantial advantage over $L_{2}$
batch normalization.
Figure 7: Visual comparison of various normalization methods highlighting
regions that they normalize. Regions can be normalized across batch, feature
and other dimensions, such as height and width.
A variety of layers normalize samples independently, including layer,
instance, and group normalization. They are compared with batch normalization
in figure 7. Layer normalization[998, 999] is a transposition of batch
normalization that is computed across feature channels for each training
example, instead of across batches. Batch normalization is ineffective in
RNNs; however, layer normalization of input activations often improves
accuracy[998]. Instance normalization[1000] is an extreme version of layer
normalization that standardizes each feature channel for each training
example. Instance normalization was developed for style transfer[1001, 1002,
1003, 1004, 1005] and makes ANNs insensitive to input image contrast. Group
normalization[1006] is intermediate to instance and layer normalization
insofar that it standardizes groups of channels for each training example.
The advantages of a set of multiple different normalization layers, $\Omega$,
can be combined by switchable normalization[1007, 1008], which standardizes to
$\hat{\textbf{x}}=\frac{\textbf{x}-\sum\limits_{z\in\Omega}\lambda_{z}^{\mu}\mu_{z}}{\sum\limits_{z\in\Omega}\lambda_{z}^{\sigma}\sigma_{z}}\,,$ (26)
where $\mu_{z}$ and $\sigma_{z}$ are means and standard deviations computed by
normalization layer $z$, and their respective importance ratios,
$\lambda_{z}^{\mu}$ and $\lambda_{z}^{\sigma}$, are trainable parameters that
are softmax activated to sum to unity. Combining batch and instance normalization statistics outperforms batch normalization for a range of computer vision tasks[1009]. However, most layers strongly weight either batch or instance normalization, with most preferring batch normalization. Interestingly, combining batch, instance and layer normalization statistics[1007, 1008] results in instance normalization being preferred in earlier layers, layer normalization in later layers, and batch normalization in middle layers. Smaller batch sizes lead to a preference towards layer and instance normalization. However, using multiple normalization layers increases computation. To limit expense, we therefore recommend either defaulting to batch normalization, or progressively trying single instance, batch, or layer normalization layers.
A significant limitation of batch normalization is that it is not effective in
RNNs. This is a limited issue as most electron microscopists are developing
CNNs for image processing. However, we anticipate that RNNs may become more
popular in electron microscopy following the increasing popularity of
reinforcement learning[1010]. In addition to general-purpose alternatives to
batch normalization that are effective in RNNs, such as layer normalization,
there are a variety of dedicated normalization schemes. For example, recurrent
batch normalization[1011, 1012] uses distinct normalization layers for each
time step. Alternatively, batch normalized RNNs[1013] only have normalization
layers between their input and hidden states. Finally, online[996] and
streaming[974] normalization are general-purpose solutions that improve the
performance of batch normalization in RNNs by applying batch normalization
based on a stream of past batch statistics.
Normalization can also standardize trainable weights, w. For example, weight
normalization[1014],
$\text{WeightNorm}(\textbf{w})=\frac{g}{||\textbf{w}||_{2}}\textbf{w}\,,$ (27)
decouples the L2 norm, $g$, of a variable from its direction. Similarly,
weight standardization[1015] subtracts means from variables and divides them
by their standard deviations,
$\text{WeightStd}(\textbf{w})=\frac{\textbf{w}-\text{mean}(\textbf{w})}{\text{std}(\textbf{w})}\,,$
(28)
similar to batch normalization. Weight normalization often outperforms batch
normalization at small batch sizes. However, batch normalization consistently
outperforms weight normalization at larger batch sizes used in practice[1016].
Combining weight normalization with running mean-only batch normalization can
accelerate convergence[1014]. However, similar final accuracy can be achieved
without mean-only batch normalization at the cost of slower convergence, or
with the use of zero-mean preserving activation functions[937, 997]. To
achieve similar performance to batch normalization, norm-bounded weight
normalization[997] can be applied to DNNs with scale-invariant activation
functions, such as ReLU. Norm-bounded weight normalization fixes $g$ at
initialization to avoid learning instability[1016, 997], and scales outputs
with the final DNN layer.
A limitation of weight normalization is that it encourages the use of a small number of features to inform activations[1017]. To encourage higher feature utilization,
spectral normalization[1017],
$\text{SpectralNorm}(\textbf{w})=\frac{\textbf{w}}{\sigma(\textbf{w})}\,,$
(29)
divides tensors by their spectral norms, $\sigma(\textbf{w})$. Further,
spectral normalization limits Lipschitz constants[1018], which often improves
generative adversarial network[197, 198, 199, 200] (GAN) training by bounding
backpropagated discriminator gradients[1017]. The spectral norm of a matrix, v, is the maximum value of the diagonal matrix, $\boldsymbol{\Sigma}$, in its singular value decomposition[1019, 1020, 1021, 1022] (SVD),
$\textbf{v}=\textbf{U}\boldsymbol{\Sigma}\textbf{V}^{*}\,,$ (30)
where U and V are orthogonal matrices of orthonormal eigenvectors for $\textbf{v}\textbf{v}^{T}$ and $\textbf{v}^{T}\textbf{v}$, respectively. To minimize computation, $\sigma(\textbf{w})$ is often approximated by the power iteration method[1023, 1024],
$\hat{\textbf{v}}\leftarrow\frac{\textbf{w}^{\text{T}}\hat{\textbf{u}}}{||\textbf{w}^{\text{T}}\hat{\textbf{u}}||_{2}}\,,$ (31)
$\hat{\textbf{u}}\leftarrow\frac{\textbf{w}\hat{\textbf{v}}}{||\textbf{w}\hat{\textbf{v}}||_{2}}\,,$ (32)
$\sigma(\textbf{w})\simeq\hat{\textbf{u}}^{T}\textbf{w}\hat{\textbf{v}}\,,$ (33)
where one iteration of equations 31-32 per training iteration is usually
sufficient.
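A minimal NumPy sketch of spectral normalization by power iteration (equations 31-33) follows; the matrix sizes and iteration count are illustrative.

```python
# Sketch of spectral normalization by power iteration (equations 31-33).
import numpy as np

def spectral_norm(w, u, n_iterations=1):
    for _ in range(n_iterations):
        v = w.T @ u
        v /= np.linalg.norm(v)        # Equation 31.
        u = w @ v
        u /= np.linalg.norm(u)        # Equation 32.
    sigma = u @ w @ v                 # Equation 33.
    return w / sigma, u               # Persist u between training steps.

w = np.random.randn(64, 32)
u = np.random.randn(64)
w_sn, u = spectral_norm(w, u, n_iterations=20)
print(np.linalg.svd(w_sn, compute_uv=False)[0])  # Approximately 1.
```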
Parameter normalization can complement or be combined with signal
normalization. For example, scale normalization[1025],
$\text{ScaleNorm}(\textbf{x})=\frac{g}{||\textbf{x}||_{2}}\textbf{x}\,,$ (34)
learns scales, $g$, for activations, and is often combined with weight
normalization[1026, 1014] in transformer networks. Similarly, cosine
normalization[959],
$\text{CosineNorm}(\textbf{x})=\frac{\textbf{w}}{||\textbf{w}||_{2}}\cdot\frac{\textbf{x}}{||\textbf{x}||_{2}}\,,$
(35)
computes products of L2 normalized parameters and signals. Both scale and
cosine normalization can outperform batch normalization.
Figure 8: Visualization of convolutional layers. a) Traditional convolutional
layer where output channels are sums of biases and convolutions of weights
with input channels. b) Depthwise separable convolutional layer where
depthwise convolutions compute one convolution with weights for each input
channel. Output channels are sums of biases and pointwise convolutions of weights with depthwise channels.
### 4.3 Convolutional Layers
A convolutional neural network[1027, 1028, 1029, 1030] (CNN) is trained to
weight convolutional kernels to exploit local correlations, such as spatial
correlations in electron micrographs[231]. Historically, the development of
CNNs was inspired by primate visual cortices[1031], where partially
overlapping neurons are only stimulated by visual stimuli within their
receptive fields. Based on this idea, Fukushima published his
Neocognitron[1032, 1033, 1034, 1035] in 1980. Convolutional formulations were
then published by Atlas et al in 1988 for a single-layer CNN[1036], and LeCun
et al in 1998 for a multi-layer CNN[1037, 1038]. Subsequently, GPUs were
applied to accelerate convolutions in 2010[1039], leading to a breakthrough in
classification performance on ImageNet with AlexNet in 2012[71]. Indeed, the
deep learning era is often partitioned into before and after AlexNet[19]. Deep
CNNs are now ubiquitous. For example, there are review papers on applications
of CNNs to action recognition in videos[1040], cytometry[1041], image and
video compression[1042, 1043], image background subtraction[1044], image
classification[272], image style transfer[1001], medical image analysis[1045,
1046, 1047, 1048, 334, 1049, 1050, 332, 333, 1051, 1052], object
detection[1053, 1054], semantic image segmentation[304, 334, 333, 332], and
text classification[1055].
In general, the convolution of two functions, $f$ and $g$, is
$(f*g)(x)\coloneqq\int\limits_{s\in\Omega}f(s)g(x-s)\mathop{}\\!\mathrm{d}s\,,$
(36)
and their cross-correlation is
$(f\circ g)(x)\coloneqq\int\limits_{s\in\Omega}f(s)g(x+s)\mathop{}\\!\mathrm{d}s\,,$ (37)
where integrals have unlimited support, $\Omega$. In a CNN, convolutional
layers sum convolutions of feature channels with trainable kernels, as shown
in figure 8. Thus, $f$ and $g$ are discrete functions and the integrals in
equations 36-37 can be replaced with limited summations. Since cross-correlation is equivalent to convolution with a kernel that is flipped in every dimension, and CNN kernels are usually trainable, convolution and cross-correlation are often interchangeable in deep learning. For example, the TensorFlow function named “tf.nn.convolution” computes cross-correlations[1056]. Nevertheless, the difference between convolution and cross-correlation can be a source of subtle errors if convolutional layers from a DLF are used in an image processing pipeline with static asymmetric kernels.
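The sketch below uses SciPy to verify that convolution equals cross-correlation with a kernel flipped in every dimension; the image is random and the kernel is an asymmetric Sobel kernel.

```python
# Demonstration that convolution equals cross-correlation with a kernel
# flipped in every dimension, using SciPy's 2D signal routines.
import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.rand(8, 8)
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])  # Asymmetric (Sobel) kernel.

conv = convolve2d(image, kernel, mode="same")
corr = correlate2d(image, kernel[::-1, ::-1], mode="same")
print(np.allclose(conv, corr))  # True: flipping the kernel interchanges them.
```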
Figure 9: Two 96$\times$96 electron micrographs a) unchanged, and filtered by
b) a 5$\times$5 symmetric Gaussian kernel with a 2.5 px standard deviation, c)
a 3$\times$3 horizontal Sobel kernel, and d) a 3$\times$3 vertical Sobel
kernel. Intensities in a) and b) are in [0, 1], whereas intensities in c) and
d) are in [-1, 1].
Kernels designed by humans[1057] are often convolved in image processing
pipelines. For example, convolutions of electron micrographs with Gaussian and
Sobel kernels are shown in figure 9. Gaussian kernels compute local averages,
blurring images and suppressing high-frequency noise. For example, a
5$\times$5 symmetric Gaussian kernel with a 2.5 px standard deviation is
$\begin{bmatrix}0.1689\\ 0.2148\\ 0.2326\\ 0.2148\\ 0.1689\end{bmatrix}\begin{bmatrix}0.1689&0.2148&0.2326&0.2148&0.1689\end{bmatrix}=\begin{bmatrix}0.0285&0.0363&0.0393&0.0363&0.0285\\ 0.0363&0.0461&0.0500&0.0461&0.0363\\ 0.0393&0.0500&0.0541&0.0500&0.0393\\ 0.0363&0.0461&0.0500&0.0461&0.0363\\ 0.0285&0.0363&0.0393&0.0363&0.0285\end{bmatrix}\,.$ (38)
Alternatives to Gaussian kernels for image smoothing[1058] include mean,
median and bilateral filters. Sobel kernels compute horizontal and vertical
spatial gradients that can be used for edge detection[1059]. For example,
3$\times$3 Sobel kernels are
$\begin{bmatrix}1\\ 2\\ 1\end{bmatrix}\begin{bmatrix}1&0&-1\end{bmatrix}=\begin{bmatrix}1&0&-1\\ 2&0&-2\\ 1&0&-1\end{bmatrix}$ (39a)
$\begin{bmatrix}1\\ 0\\ -1\end{bmatrix}\begin{bmatrix}1&2&1\end{bmatrix}=\begin{bmatrix}1&2&1\\ 0&0&0\\ -1&-2&-1\end{bmatrix}$ (39b)
Alternatives to Sobel kernels offer similar utility, and include extended
Sobel[1060], Scharr[1061, 1062], Kayyali[1063], Roberts cross[1064] and
Prewitt[1065] kernels. Two-dimensional Gaussian and Sobel kernels are examples
of linearly separable, or “flattenable”, kernels, which can be split into two
one-dimensional kernels, as shown in equations 38-39b. Kernel separation can
decrease computation in convolutional layers by convolving separated kernels
in series, and CNNs that only use separable convolutions are effective[1066,
1067, 1068]. However, serial convolutions decrease parallelization and
separable kernels have fewer degrees of freedom, decreasing representational
capacity. Thus, separated kernels are usually at least 5$\times$5, and
separated 3$\times$3 kernels are unusual. Even-sized kernels, such as
2$\times$2 and 4$\times$4, are rare as symmetric padding is needed to avoid
information erosion caused by spatial shifts of feature maps[1069].
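The sketch below verifies that convolution with the separable Gaussian kernel in equation 38 equals two serial convolutions with its one-dimensional factors.

```python
# Demonstration that convolution with a linearly separable 2D kernel equals
# two serial convolutions with its 1D factors (equation 38).
import numpy as np
from scipy.signal import convolve2d

g = np.array([0.1689, 0.2148, 0.2326, 0.2148, 0.1689])
kernel_2d = np.outer(g, g)  # 5x5 Gaussian kernel (equation 38).

image = np.random.rand(96, 96)
direct = convolve2d(image, kernel_2d, mode="same")
separated = convolve2d(convolve2d(image, g[:, None], mode="same"),
                       g[None, :], mode="same")
print(np.allclose(direct, separated))  # True, with fewer multiplications.
```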
A traditional 2D convolutional layer maps inputs, $x^{\text{input}}$, with
height $H$, width, $W$, and depth, $D$, to
$x_{kij}^{\text{output}}=b_{k}+\sum\limits_{d=1}^{D}\sum\limits_{m=1}^{M}\sum\limits_{n=1}^{N}w_{dkmn}x_{d(i+m-1)(j+n-1)}^{\text{input}}\,,i\in[1,H-M+1]\,,j\in[1,W-N+1]\,,$
(40)
where $K$ output channels are indexed by $k\in[1,K]$. Each output channel is the sum of a bias, $b$, and convolutions of each input channel with $M\times N$ kernels with weights, $w$. For clarity, a traditional convolutional layer is visualized in
figure 8a. Convolutional layers for 1D, 3D and higher-dimensional
kernels[1070] have a similar form to 2D kernels, where kernels are convolved
across each dimension. Most inputs to convolutional layers are padded[1071,
1072] to avoid reducing spatial resolutions by kernel sizes, which could
remove all resolution in deep networks. Padding is computationally inexpensive
and eases implementations of ANNs that would otherwise combine layers with
different sizes, such as FractalNet[1073], Inception[1074, 1075, 1076],
NASNet[1077], recursive CNNs[1078, 1079], and ResNet[1080]. Pre-padding inputs
results in higher performance than post-padding outputs[1081]. Following
AlexNet[71], most convolutional layers are padded with zeros for simplicity.
Reflection and replication padding achieve similar results to zero
padding[1072]. However, padding based on partial convolutions[1082]
consistently outperforms other methods[1072].
Convolutional layers are similar to fully connected layers used in multilayer
perceptrons[1083, 1084] (MLPs). For comparison with equation 40, a fully
connected, or “dense”, layer in a MLP computes
$x_{k}^{\text{output}}=b_{k}+\sum\limits_{d=1}^{D}w_{dk}x_{d}^{\text{input}}\,,$
(41)
where every input element is connected to every output element. Convolutional
layers reduce computation by making local connections within receptive fields
of convolutional kernels, and by convolving kernels rather than using
different weights at each input position. Intermediately, fully connected
layers can be regularized to learn local connections[1085]. Fully connected
layers are sometimes used at the middle of encoder-decoders[1086]. However,
such fully connected layers can often be replaced by multiscale atrous, or
“holey”, convolutions[955] in an atrous spatial pyramid pooling[306, 305]
(ASPP) module to decrease computation without a significant decrease in
performance. Alternatively, weights in fully connected layers can be
decomposed into multiple smaller tensors to decrease computation without
significantly decreasing performance[1087, 1088].
Convolutional layers can perform a variety of convolutional arithmetic[955].
For example, strided convolutions[1089] usually skip computation of outputs
that are not at multiples of an integer spatial stride. Most strided
convolutional layers are applied throughout CNNs to sequentially decrease
spatial extent, and thereby decrease computational requirements. In addition,
strided convolutions are often applied at the start of CNNs[1074, 539, 1075,
1076] where most input features can be resolved at a lower resolution than the
input. For simplicity and computational efficiency, stride is typically
constant within a convolutional layer; however, increasing stride away from
the centre of layers can improve performance[1090]. To increase spatial
resolution, convolutional layers often use reciprocals of integer
strides[1091]. Alternatively, spatial resolution can be increased by combining
interpolative upsampling with an unstrided convolutional layer[1092, 1093],
which can help to minimize output artefacts.
Convolutional layers couple the computation of spatial and cross-channel
convolutions. However, partial decoupling of spatial and cross-channel
convolutions by distributing inputs across multiple convolutional layers and
combining outputs can improve performance. Partial decoupling of convolutions
is prevalent in many seminal DNN architectures, including FractalNet[1073], Inception[1074, 1075, 1076], and NASNet[1077]. Taking decoupling to an extreme,
depthwise separable convolutions[539, 1094, 1095] shown in figure 8b compute
depthwise convolutions,
$x_{dij}^{\text{depth}}=\sum\limits_{m=1}^{M}\sum\limits_{n=1}^{N}u_{dmn}x_{d(i+m-1)(j+n-1)}^{\text{input}}\,,i\in[1,H-M+1]\,,j\in[1,W-N+1]\,,$ (42)
then compute pointwise 1$\times$1 convolutions for $D$ intermediate channels,
$x_{kij}^{\text{output}}=b_{k}+\sum\limits_{d=1}^{D}v_{dk}^{\text{point}}x_{dij}^{\text{depth}}\,,$ (43)
where $K$ output channels are indexed by $k\in[1,K]$. Depthwise convolution
kernels have weights, $u$, and the depthwise layer is often followed by extra
batch normalization before pointwise convolution to improve performance and
accelerate convergence[1094]. Increasing numbers of channels with pointwise
convolutions can increase accuracy[1094], at the cost of increased
computation. Pointwise convolutions are a special case of traditional
convolutional layers in equation 40 and have convolution kernel weights, $v$,
and add biases, $b$. Naively, depthwise separable convolutions require fewer
weight multiplications than traditional convolutions[1096, 1097]. However,
extra batch normalization and serialization of one convolutional layer into
depthwise and pointwise convolutional layers mean that depthwise separable
convolutions and traditional convolutions have similar computing times[539,
1097].
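A minimal NumPy sketch of a depthwise separable convolution (equations 42-43) follows; following the indexing in equation 42, kernels are applied by cross-correlation, and the sizes are illustrative.

```python
# Sketch of a depthwise separable convolution (equations 42-43) with
# "valid" spatial extents matching the summation limits in equation 42.
import numpy as np
from scipy.signal import correlate2d

def depthwise_separable(x, u, v, b):
    """x: (D, H, W) input; u: (D, M, N) depthwise kernels;
    v: (D, K) pointwise weights; b: (K,) biases."""
    depth = np.stack([correlate2d(x[d], u[d], mode="valid")
                      for d in range(x.shape[0])])               # Equation 42.
    return np.tensordot(v, depth, axes=(0, 0)) + b[:, None, None]  # Equation 43.

x = np.random.rand(3, 16, 16)
u = np.random.rand(3, 3, 3)
v = np.random.rand(3, 8)
b = np.zeros(8)
print(depthwise_separable(x, u, v, b).shape)  # (8, 14, 14)
```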
Most DNNs developed for computer vision use fixed-size inputs. Although fixed
input sizes are often regarded as an artificial constraint, it is similar to
animalian vision where there is an effectively constant number of retinal rods
and cones[1098, 1099, 1100]. Typically, the most practical approach to handle
arbitrary image shapes is to train a DNN with crops so that it can be tiled
across images. In some cases, a combination of cropping, padding and
interpolative resizing can also be used. To fully utilize unmodified variable size inputs, a simple approach is to train convolutional layers on variable size inputs. A pooling layer, such as global average pooling, can then be
applied to fix output size before fully connected or other layers that might
require fixed-size inputs. More involved approaches include spatial pyramid
pooling[1101] or scale RNNs[1102]. However, typical electron micrographs are
much larger than 299$\times$299, which often makes it infeasible for electron microscopists with a few GPUs to train high-performance DNNs on full-size
images. For comparison, Xception was trained on 299$\times$299 images with 60
K80 GPUs for over one month.
The Fourier transform[1103], $\hat{f}(k_{1},...,k_{N})$, at an $N$-dimensional
Fourier space vector, $\\{k_{1},...,k_{N}\\}$, is related to a function,
$f(x_{1},...,x_{N})$, of an $N$-dimensional signal domain vector,
$\\{x_{1},...,x_{N}\\}$, by
$\hat{f}(k_{1},...,k_{N})=\left(\frac{|b|}{(2\pi)^{1-a}}\right)^{N/2}\int\limits_{-\infty}^{\infty}...\int\limits_{-\infty}^{\infty}f(x_{1},...,x_{N})\exp(+ibk_{1}x_{1}+...+ibk_{N}x_{N})\mathop{}\\!\mathrm{d}x_{1}...\mathop{}\\!\mathrm{d}x_{N}\,,$ (44)
$f(x_{1},...,x_{N})=\left(\frac{|b|}{(2\pi)^{1+a}}\right)^{N/2}\int\limits_{-\infty}^{\infty}...\int\limits_{-\infty}^{\infty}\hat{f}(k_{1},...,k_{N})\exp(-ibk_{1}x_{1}-...-ibk_{N}x_{N})\mathop{}\\!\mathrm{d}k_{1}...\mathop{}\\!\mathrm{d}k_{N}\,,$ (45)
where $\pi=3.141...$, and $i=(-1)^{1/2}$ is the imaginary number. Two
parameters, $a$ and $b$, can parameterize popular conventions that relate the
Fourier and inverse Fourier transforms. Mathematica documentation nominates
conventions[1104] for general applications $(a,b)$, pure mathematics $(1,-1)$,
classical physics $(-1,1)$, modern physics $(0,1)$, systems engineering
$(1,-1)$, and signal processing $(0,2\pi)$. We observe that most electron
microscopists follow the modern physics convention of $a=0$ and $b=1$;
however, the choice of convention is arbitrary and does not matter if it is
consistent within a project. For discrete functions, Fourier integrals are
replaced with summations that are limited to the support of a function.
Discrete Fourier transforms of uniformly spaced inputs are often computed with
a fast Fourier transform (FFT) algorithm, which can be parallelized for
CPUs[1105] or GPUs[65, 1106, 1107, 1108]. Typically, the speedup of FFTs on
GPUs over CPUs is higher for larger signals[1109, 1110]. Most popular FFTs are
based on the Cooley-Tukey algorithm[1111, 1112], which recursively divides
FFTs into smaller FFTs. We observe that some electron microscopists consider
FFTs to be limited to radix-2 signals that can be recursively halved; however,
FFTs can use any combination of factors for the sizes of recursively smaller
FFTs. For example, clFFT[1113] FFT algorithms support signal sizes that are
any product of powers of 2, 3, 5, 7, 11 and 13.
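As a small NumPy demonstration that FFTs are not limited to radix-2 sizes (note that NumPy's forward transform uses a negative exponent, i.e. a different sign convention from equation 44):

```python
import numpy as np

# 1080 = 2**3 * 3**3 * 5: a mixed-radix, non-power-of-two signal size.
x = np.random.rand(1080)
X = np.fft.fft(x)               # forward discrete Fourier transform
x_rec = np.fft.ifft(X).real    # inverse transform recovers the signal
print(np.allclose(x, x_rec))   # True
```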
Convolution theorems can decrease computation by enabling convolution in the
Fourier domain[1114]. To ease notation, we denote the Fourier transform of a
signal, I, by $\text{FT}(\textbf{I})$, and the inverse Fourier transform by
$\text{FT}^{-1}(\textbf{I})$. Thus, the convolution theorems for two signals,
$\textbf{I}_{1}$ and $\textbf{I}_{2}$, are[1115]
$\displaystyle\text{FT}(\textbf{I}_{1}*\textbf{I}_{2})$
$\displaystyle=\text{FT}(\textbf{I}_{1})\cdot\text{FT}(\textbf{I}_{2})\,,$
(46) $\displaystyle\text{FT}(\textbf{I}_{1}\cdot\textbf{I}_{2})$
$\displaystyle=\text{FT}(\textbf{I}_{1})*\text{FT}(\textbf{I}_{2})\,,$ (47)
where the signals can be feature channels and convolutional kernels. Fourier
domain convolutions,
$\textbf{I}_{1}*\textbf{I}_{2}=\text{FT}^{-1}\left(\text{FT}(\textbf{I}_{1})\cdot\text{FT}(\textbf{I}_{2})\right)$,
are increasingly efficient, relative to signal domain convolutions, as kernel
and image sizes increase[1114]. Indeed, Fourier domain convolutions are
exploited to enable faster training with large kernels in Fourier CNNs[1116,
1114]. However, Fourier CNNs are rare as most researchers use small 3$\times$3
kernels, following University of Oxford Visual Geometry Group (VGG)
CNNs[1117].
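A minimal NumPy sketch of Fourier domain convolution via equation 46; zero-padding both signals to the full output length turns the FFT's circular convolution into linear convolution:

```python
import numpy as np

a = np.random.rand(100)                       # e.g. a feature channel
k = np.random.rand(15)                        # e.g. a convolution kernel
n = len(a) + len(k) - 1                       # full linear convolution length
spectrum = np.fft.fft(a, n) * np.fft.fft(k, n)          # eq. 46
conv_fft = np.fft.ifft(spectrum).real
print(np.allclose(conv_fft, np.convolve(a, k)))         # True
```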
Figure 10: Residual blocks where a) one, b) two, and c) three convolutional
layers are skipped. Typically, convolutional layers are followed by batch
normalization then activation.
### 4.4 Skip Connections
Residual connections[1080] add a signal after skipping ANN layers, similar to
cortical skip connections[1118, 1119]. Residuals improve DNN performance by
preserving gradient norms during backpropagation[1120, 537] and avoiding bad
local minima[1121] by smoothing DNN loss landscapes[1122]. In practice,
residuals enable DNNs to behave like an ensemble of shallow networks[1123]
that learn to iteratively estimate outputs[1124]. Mathematically, a residual
layer learns parameters, $\textbf{w}_{l}$, of a perturbative function,
$f_{l}(\textbf{x}_{l},\textbf{w}_{l})$, that maps a signal, $\textbf{x}_{l}$,
at depth $l$ to depth $l+1$,
$\textbf{x}_{l+1}=\textbf{x}_{l}+f_{l}(\textbf{x}_{l},\textbf{w}_{l})\,.$ (48)
Residuals were developed for CNNs[1080], and examples of residual connections
that skip one, two and three convolutional layers are shown in figure 10.
Nonetheless, residuals are also used in MLPs[1125] and RNNs[1126, 1127, 1128].
Representational capacity of perturbative functions increases as the number of
skipped layers increases. As a result, most residuals skip two or three layers.
Skipping one layer rarely improves performance due to its low representational
capacity[1080].
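A minimal PyTorch sketch of a residual block implementing equation 48, skipping two convolutional layers as in figure 10b; channel counts and layer choices are illustrative:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """x_{l+1} = x_l + f(x_l, w_l), where f skips two convolutional layers,
    each followed by batch normalization then activation (figure 10b)."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
        )

    def forward(self, x):
        return x + self.f(x)   # additive skip connection, eq. 48
```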
There are a range of residual connection variants that can improve
performance. For example, highway networks[1129, 1130] apply a gating function
to skip connections, and dense networks[1131, 1132, 1133] use a high number of
residual connections from multiple layers. Another example is applying a
1$\times$1 convolutional layer to $\textbf{x}_{l}$ before addition[1080, 539] where
$f_{l}(\textbf{x}_{l},\textbf{w}_{l})$ spatially resizes or changes the number of feature channels.
However, resizing with norm-preserving convolutional layers[1120] before
residual blocks can often improve performance. Finally, long additive[1134]
residuals that connect DNN inputs to outputs are often applied to DNNs that
learn perturbative functions.
A limitation of preserving signal information with residuals[1135, 1136] is
that they constrain DNNs to learn perturbative functions, which can limit the
accuracy of DNNs that must learn non-perturbative functions if they do not
have many layers. Feature channel concatenation is an alternative approach that is
not perturbative, and that supports combination of layers with different
numbers of feature channels. In encoder-decoders, a typical example is
concatenating features computed near the start with layers near the end to
help resolve output features[316, 305, 306, 308]. Concatenation can also
combine embeddings of different[1137, 1138] or variants of[366] input features
from multiple DNNs. Finally, peephole connections in RNNs can improve
performance by using concatenation to combine cell state information with
other cell inputs[1139, 1140].
## 5 Architecture
There is a wide variety of ANN architectures[4, 5, 6, 7] that are trained to
minimize losses for a range of applications. Many of the most popular ANNs are
also the simplest, and information about them is readily available. For
example, encoder-decoder[502, 503, 305, 306, 307, 308, 504] or classifier[272]
ANNs usually consist of single feedforward sequences of layers that map inputs
to outputs. This section introduces more advanced ANNs used in electron
microscopy, including actor-critics, GANs, RNNs, and variational autoencoders
(VAEs). These ANNs share weights between layers or consist of multiple
subnetworks. Other notable architectures include recursive CNNs[1078, 1079],
Network-in-Networks[1141] (NiNs), and transformers[1142, 1143]. Although they
will not be detailed in this review, their references may be good starting
points for research.
Figure 11: Actor-critic architecture. An actor outputs actions based on input
states. A critic then evaluates action-state pairs to predict losses.
### 5.1 Actor-Critic
Most ANNs are trained by gradient descent using backpropagated gradients of a
differentiable loss function cf. section 6.1. However, some losses are not
differentiable. Examples include losses of actors directing their vision[1144,
1145], and playing competitive[24] or score-based[1146, 1147] computer games.
To overcome this limitation, a critic[1148] can be trained to predict
differentiable losses from action and state information, as shown in figure
11. If the critic does not depend on states, it is a surrogate loss
function[1149, 1150]. Surrogates are often fully trained before actor
optimization, whereas critics that depend on actor-state pairs are often
trained alongside actors to minimize the impact of catastrophic
forgetting[1151] by adapting to changing actor policies and experiences.
Alternatively, critics can be trained with features output by intermediate
layers of actors to generate synthetic gradients for backpropagation[1152].
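A minimal sketch of a critic, assuming PyTorch and flat state and action vectors; the architecture is illustrative. The critic is regressed onto observed (possibly non-differentiable) losses, and the actor is then trained to minimize the critic's differentiable predictions.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Predicts a differentiable loss from an action-state pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))
```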
Figure 12: Generative adversarial network architecture. A generator learns to
produce outputs that look realistic to a discriminator, which learns to
predict whether examples are real or generated.
### 5.2 Generative Adversarial Network
Generative adversarial networks[197, 198, 199, 200] (GANs) consist of
generator and discriminator subnetworks that play an adversarial game, as
shown in figure 12. Generators learn to produce outputs that look realistic
to discriminators, whereas discriminators learn to predict whether examples
are real or generated. Most GANs are developed to generate visual media with
realistic characteristics. For example, partial STEM images infilled with a
GAN are less blurry than images infilled with a non-adversarial generator
trained to minimize MSEs[201] cf. figure 2. Alternatively, computationally
inexpensive loss functions designed by humans, such as structural similarity
index measures[1153] (SSIMs) and Sobel losses[231], can improve generated
output realism. However, it follows from the universal approximator
theorems[37, 38, 39, 40, 41, 42, 43, 44, 45] that training with ANN
discriminators can often yield more realistic outputs.
There are many popular GAN loss functions and regularization mechanisms[1154,
1155, 1156, 1157, 1158]. Traditionally, GANs were trained to minimize
logarithmic discriminator, $D$, and generator, $G$, losses[1159],
$\displaystyle L_{D}$ $\displaystyle=-\log
D(\textbf{x})-\log(1-D(G(\textbf{z})))\,,$ (49) $\displaystyle L_{G}$
$\displaystyle=\log(1-D(G(\textbf{z})))\,,$ (50)
where $\textbf{z}$ are generator inputs, $G(\textbf{z})$ are generated outputs, and $\textbf{x}$ are
example outputs. Discriminators predict labels, $D(\textbf{x})$ and
$D(G(\textbf{z}))$, where target labels are 0 and 1 for generated and real
examples, respectively. However, logarithmic losses are numerically unstable
for $D(\textbf{x})\rightarrow 0$ or $D(G(\textbf{z}))\rightarrow 1$, as the
denominator, $f(x)$, in $\partial_{x}\log f(x)=\partial_{x}f(x)/f(x)$
vanishes. In addition, discriminators must be limited to $D(\textbf{x})>0$ and
$D(G(\textbf{z}))<1$, so that logarithms are not complex. To avoid these
issues, we recommend training discriminators with squared difference
losses[1160, 1161],
$\displaystyle L_{D}$
$\displaystyle=(D(\textbf{x})-1)^{2}+D(G(\textbf{z}))^{2}\,,$ (51)
$\displaystyle L_{G}$ $\displaystyle=(D(G(\textbf{z}))-1)^{2}\,.$ (52)
However, there are a variety of other alternatives to logarithmic loss
functions that are also effective[1154, 1155].
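A minimal PyTorch sketch of the squared difference losses of equations 51 and 52; detaching the generated batch stops discriminator updates from backpropagating into the generator:

```python
import torch

def lsgan_losses(D, G, x, z):
    """Squared difference GAN losses (eqs 51-52), averaged over a batch."""
    fake = G(z)
    loss_d = (D(x) - 1).pow(2).mean() + D(fake.detach()).pow(2).mean()  # eq. 51
    loss_g = (D(fake) - 1).pow(2).mean()                                # eq. 52
    return loss_d, loss_g
```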
A variety of methods have been developed to improve GAN training[1162, 995].
The most common issues are catastrophic forgetting[1151] of previous learning,
and mode collapse[1163] where generators only output examples for a subset of
a target domain. Mode collapse often follows discriminators becoming Lipschitz
discontinuous. Wasserstein GANs[1164] avoid mode collapse by clipping
trainable variables, albeit often at the cost of 5-10 discriminator training
iterations per generator training iteration. Alternatively, Lipschitz
continuity can be imposed by adding a gradient penalty[1165] to GAN losses,
such as differences of L2 norms of discriminator gradients from unity,
$\displaystyle\tilde{\textbf{x}}$ $\displaystyle=G(\textbf{z})\,,$ (53)
$\displaystyle\hat{\textbf{x}}$
$\displaystyle=\epsilon\textbf{x}+(1-\epsilon)\tilde{\textbf{x}}\,,$ (54)
$\displaystyle L_{D}$
$\displaystyle=D(\tilde{\textbf{x}})-D(\textbf{x})+\lambda(||\partial_{\hat{\textbf{x}}}D(\hat{\textbf{x}})||_{2}-1)^{2}\,,$
(55) $\displaystyle L_{G}$ $\displaystyle=-D(G(\textbf{z}))\,,$ (56)
where $\epsilon\in[0,1]$ is a uniform random variate, $\lambda$ weights the
gradient penalty, and $\tilde{\textbf{x}}$ is an attempt to generate $\textbf{x}$.
However, using a gradient penalty introduces additional gradient
backpropagation that increases discriminator training time. There are also a
variety of computationally inexpensive tricks that can improve training, such
as adding noise to labels[995, 1075, 1166] or balancing discriminator and
generator learning rates[349]. These tricks can help to avoid discontinuities
in discriminator output distributions that can lead to mode collapse; however,
we observe that these tricks do not reliably stabilize GAN training.
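For reference, a minimal PyTorch sketch of the gradient penalty term of equation 55, with the interpolation of equation 54; the weight λ = 10 is a common choice[1165], and image-shaped inputs are assumed:

```python
import torch

def gradient_penalty(D, x_real, x_fake, lam=10.0):
    """Penalize deviations of discriminator gradient norms from unity (eq. 55)
    at examples interpolated between real and generated batches (eq. 54)."""
    eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * (norms - 1).pow(2).mean()
```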
Instead, we observe that spectral normalization[1017] reliably stabilizes GAN
discriminator training in our electron microscopy research[201, 349, 202].
Spectral normalization controls Lipschitz constants of discriminators by
fixing the spectral norms of their weights, as introduced in section 4.2.
Advantages of spectral normalization include implementations based on the
power iteration method[1023, 1024] being computationally inexpensive, not
adding a regularizing loss function that could detrimentally compete[1167,
1168] with discrimination losses, and being effective with one discriminator
training iteration per generator training iteration[1017, 1169]. Spectral
normalization is popular in GANs for high-resolution image synthesis, where it
is also applied in generators to stabilize training[1170].
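In PyTorch, spectral normalization is available as a wrapper around existing layers; a minimal sketch for a small discriminator follows, where the layer shapes (for 28$\times$28 single-channel inputs) are illustrative:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization constrains each wrapped layer's spectral norm via
# power iteration, controlling the discriminator's Lipschitz constant.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(1, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 7 * 7, 1)),   # 28x28 inputs -> 7x7 features
)
```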
There are a variety of GAN architectures[1171]. For high-resolution image
synthesis, computation can be decreased by training multiple discriminators to
examine image patches at different scales[1172, 201]. For domain translation
characterized by textural differences, a cyclic GAN[1004, 1173] consisting of
two GANs can map from one domain to the other and vice versa. Alternatively,
two GANs can share intermediate layers to translate inputs via a shared
embedding domain[1174]. Cyclic GANs can also be combined with a siamese
network[279, 280, 281] for domain translation beyond textural
differences[1175]. Finally, discriminators can introduce auxiliary losses to
train DNNs to generalize to examples from unseen domains[1176, 1177, 1178].
Figure 13: Architectures of recurrent neural networks with a) long short-term
memory (LSTM) cells, and b) gated recurrent units (GRUs).
### 5.3 Recurrent Neural Network
Recurrent neural networks[531, 532, 533, 534, 535, 536] reuse an ANN cell to
process each step of a sequence. Most RNNs learn to model long-term
dependencies by gradient backpropagation through time[1179] (BPTT). The
ability of RNNs to utilize past experiences enables them to model partially
observed and variable length Markov decision processes[1180, 1181] (MDPs).
Applications of RNNs include directing vision[1144, 1145], image
captioning[1182, 1183], language translation[1184], medicine[77], natural
language processing[1185, 1186], playing computer games[24], text
classification[1055], and traffic forecasting[1187]. Many RNNs are combined
with CNNs to embed visual media[1145] or words[1188, 1189], or to process RNN
outputs[1190, 1191]. RNNs can also be combined with MLPs[1144], or text
embeddings[1192] such as BERT[1193, 1192], continuous bag-of-words[1194, 1195,
1196] (CBOW), doc2vec[1197, 1198], GloVe[1199], and word2vec[1200, 1194].
The most popular RNNs consist of long short-term memory[1201, 1202, 1203,
1204] (LSTM) cells or gated recurrent units[1202, 1205, 1206, 1207] (GRUs).
LSTMs and GRUs are popular as they solve the vanishing gradient problem[1208,
1209, 537] and have consistently high performance[1210, 1211, 1212, 1213,
1214, 1215]. Their architectures are shown in figure 13. At step $t$, an LSTM
outputs a hidden state, $h_{t}$, and cell state, $C_{t}$, given by
$\displaystyle\textbf{f}_{t}$
$\displaystyle=\sigma(\textbf{w}_{f}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{f})\,,$
(57) $\displaystyle\textbf{i}_{t}$
$\displaystyle=\sigma(\textbf{w}_{i}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{i})\,,$
(59) $\displaystyle\tilde{\textbf{C}}_{t}$
$\displaystyle=\tanh(\textbf{w}_{C}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{C})\,,$
(59) $\displaystyle\textbf{C}_{t}$
$\displaystyle=\textbf{f}_{t}\textbf{C}_{t-1}+\textbf{i}_{t}\tilde{\textbf{C}}_{t}\,,$
(60) $\displaystyle\textbf{o}_{t}$
$\displaystyle=\sigma(\textbf{w}_{o}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{o})\,,$
(61) $\displaystyle\textbf{h}_{t}$
$\displaystyle=\textbf{o}_{t}\tanh(\textbf{C}_{t})\,,$ (62)
where $\textbf{C}_{t-1}$ is the previous cell state, $\textbf{h}_{t-1}$ is the
previous hidden state, $\textbf{x}_{t}$ is the step input, $\sigma$ is a
logistic sigmoid function of equation 10a, $[\textbf{x},\textbf{y}]$ is the
concatenation of x and y channels, and $(\textbf{w}_{f},\textbf{b}_{f})$,
$(\textbf{w}_{i},\textbf{b}_{i})$, $(\textbf{w}_{C},\textbf{b}_{C})$ and
$(\textbf{w}_{o},\textbf{b}_{o})$ are pairs of weights and biases. A GRU
performs fewer computations than an LSTM and does not have separate cell and
hidden states,
$\displaystyle\textbf{z}_{t}$
$\displaystyle=\sigma(\textbf{w}_{z}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{z})\,,$
(63) $\displaystyle\textbf{r}_{t}$
$\displaystyle=\sigma(\textbf{w}_{r}\cdot[\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{r})\,,$
(64) $\displaystyle\tilde{\textbf{h}}_{t}$
$\displaystyle=\tanh(\textbf{w}_{h}\cdot[\textbf{r}_{t}\textbf{h}_{t-1},\textbf{x}_{t}]+\textbf{b}_{h})\,,$
(65) $\displaystyle\textbf{h}_{t}$
$\displaystyle=(1-\textbf{z}_{t})\textbf{h}_{t-1}+\textbf{z}_{t}\tilde{\textbf{h}}_{t}\,,$
(66)
where $(\textbf{w}_{z},\textbf{b}_{z})$, $(\textbf{w}_{r},\textbf{b}_{r})$,
and $(\textbf{w}_{h},\textbf{b}_{h})$ are pairs of weights and biases. Minimal
gated units (MGUs) can further reduce computation[1216]. A large-scale
analysis of RNN architectures for language translation found that LSTMs
consistently outperform GRUs[1210]. GRUs struggle with simple languages that
are learnable by LSTMs as the combined hidden and cell states of GRUs make it
more difficult for GRUs to perform unbounded counting[1214]. However, further
investigations found that GRUs can outperform LSTMs on tasks other than
language translation[1211], and that GRUs can outperform LSTMs on some
datasets[1217, 1212, 1213]. Overall, LSTM performance is usually comparable to
that of GRUs.
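In practice, optimized implementations such as torch.nn.LSTM and torch.nn.GRU are used; for clarity, a minimal sketch of one GRU step implementing equations 63-66 is:

```python
import torch

def gru_step(x_t, h_prev, w_z, b_z, w_r, b_r, w_h, b_h):
    """One GRU step (eqs 63-66); weight matrices act on the concatenation of
    the previous hidden state and the step input."""
    hx = torch.cat([h_prev, x_t], dim=-1)
    z_t = torch.sigmoid(hx @ w_z + b_z)            # update gate, eq. 63
    r_t = torch.sigmoid(hx @ w_r + b_r)            # reset gate, eq. 64
    rx = torch.cat([r_t * h_prev, x_t], dim=-1)
    h_tilde = torch.tanh(rx @ w_h + b_h)           # candidate state, eq. 65
    return (1 - z_t) * h_prev + z_t * h_tilde      # new hidden state, eq. 66
```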
There are a variety of alternatives to LSTMs and GRUs. Examples include
continuous time RNNs[1218, 1219, 1220, 1221, 1222] (CTRNNs), Elman[1223] and
Jordan[1224] networks, independently recurrent neural networks[1225] (IndRNNs),
Hopfield networks[1226], and recurrent MLPs[1227] (RMLPs). However, none of the variants
offer consistent performance benefits over LSTMs for general sequence
modelling. Similarly, augmenting LSTMs with additional connections, such as
peepholes[1139, 1140] and projection layers[1228], does not consistently
improve performance. For electron microscopy, we recommend defaulting to LSTMs
as we observe that their performance is more consistently high than
performance of other RNNs. However, LSTM and GRU performance is often
comparable, so GRUs are also a good choice to reduce computation.
There are a variety of architectures based on RNNs. Popular examples include
deep RNNs[1229] that stack RNN cells to increase representational ability,
bidirectional RNNs[1230, 1231, 1232, 1233] that process sequences both
forwards and in reverse to improve input utilization, and using separate
encoder and decoder subnetworks[1205, 1234] to embed inputs and generate
outputs. Hierarchical RNNs[1235, 1236, 1237, 1238, 1239] are more complex
models that stack RNNs to efficiently exploit hierarchical sequence
information, and include multiple timescale RNNs[1240, 1241] (MTRNNs) that
operate at multiple sequence length scales. Finally, RNNs can be augmented
with additional functionality to enable new capabilities. For example,
attention[1242, 1243, 1244, 1182] mechanisms can enable more efficient input
utilization. Further, creating a neural Turing machine (NTM) by augmenting an
RNN with dynamic external memory[1245, 1246] can make it easier for an agent
to solve dynamic graphs.
Figure 14: Architectures of autoencoders where an encoder maps an input to a
latent space and a decoder learns to reconstruct the input from the latent
space. a) An autoencoder encodes an input in a deterministic latent space,
whereas a b) traditional variational autoencoder encodes an input as means,
$\mu$, and standard deviations, $\sigma$, of Gaussian multivariates,
$\mu+\sigma\cdot\epsilon$, where $\epsilon$ is a standard normal multivariate.
### 5.4 Autoencoders
Autoencoders[1247, 1248, 1249] (AEs) learn to efficiently encode inputs, I,
without supervision. An AE consists of an encoder, $E$, and a decoder, $D$, as
shown in figure 14a. Most encoders and decoders are jointly trained[1250] to
restore inputs from encodings, $E(\textbf{I})$, to minimize a MSE loss,
$L_{\text{AE}}=\text{MSE}(D(E(\textbf{I})),\textbf{I})\,,$ (67)
by gradient descent. In practice, DNN encoders and decoders yield better
compression[1248] than linear techniques, such as principal component
analysis[1251] (PCA), or shallow ANNs. Indeed, deep AEs can outperform JPEG
image compression[1252]. Denoising autoencoders[1253, 1254, 1255, 1256, 1257]
(DAEs) are a popular AE variant that can learn to remove artefacts by training
to restore artificially corrupted inputs. Alternatively, contractive
autoencoders[1258, 1259] (CAEs) can decrease sensitivity to input values by
adding a loss to minimize gradients w.r.t. inputs. Most DNNs that improve
electron micrograph signal-to-noise are DAEs.
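A minimal PyTorch sketch of an AE trained with the MSE loss of equation 67; layer sizes are illustrative, and a DAE would encode a corrupted copy of I while keeping the clean I as the target:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

def train_step(I):
    """One gradient descent step on L_AE = MSE(D(E(I)), I), eq. 67."""
    loss = nn.functional.mse_loss(decoder(encoder(I)), I)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```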
In general, semantics of AE outputs are pathological functions of encodings.
To generate outputs with well-behaved semantics, traditional VAEs[969, 1260,
1261] learn to encode means, $\boldsymbol{\mu}$, and standard deviations,
$\boldsymbol{\sigma}$, of Gaussian multivariates. Meanwhile, decoders learn to
reconstruct inputs from sampled multivariates,
$\boldsymbol{\mu}+\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}$, where
$\boldsymbol{\epsilon}$ is a standard normal multivariate. Traditional VAE
architecture is shown in figure 14b. Usually, VAE encodings are regularized by
adding Kullback-Leibler (KL) divergence of encodings from standard
multinormals to an AE loss function,
$L_{\text{VAE}}=\text{MSE}(D(\boldsymbol{\mu}+\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}),\textbf{I})+\frac{\lambda_{\text{KL}}}{2Bu}\sum\limits_{i=1}^{B}\sum\limits_{j=1}^{u}\left(\mu_{ij}^{2}+\sigma_{ij}^{2}-\log(\sigma_{ij}^{2})-1\right)\,,$
(68)
where $\lambda_{\text{KL}}$ weights the contribution of the KL divergence loss
for a batch size of $B$, and a latent space with $u$ degrees of freedom.
However, variants of Gaussian regularization can improve clustering[231], and
sparse autoencoders[1262, 1263, 1264, 1265] (SAEs) that regularize encoding
sparsity can encode more meaningful features. To generate realistic outputs, a
VAE can be combined with a GAN to create a VAE-GAN[1266, 1267, 1268]. Adding a
loss to minimize differences between gradients of generated and target outputs
is a computationally inexpensive alternative that can generate realistic outputs
for some applications[231].
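A minimal PyTorch sketch of the VAE loss of equation 68, using the reparameterization trick to sample the multivariates; averaging over the batch and latent dimensions replaces the explicit $1/(Bu)$ normalization:

```python
import torch

def vae_loss(decoder, mu, sigma, I, lambda_kl=1.0):
    """Reconstruction MSE plus KL divergence of encodings from standard
    multinormals (eq. 68)."""
    eps = torch.randn_like(sigma)              # standard normal multivariate
    recon = decoder(mu + sigma * eps)          # reparameterization trick
    mse = torch.nn.functional.mse_loss(recon, I)
    kl = 0.5 * (mu.pow(2) + sigma.pow(2) - torch.log(sigma.pow(2)) - 1).mean()
    return mse + lambda_kl * kl
```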
A popular application of VAEs is data clustering. For example, VAEs can encode
hash tables[1269, 1270, 1271, 1272, 1273] for search engines, and we use VAEs
as the basis of our electron micrograph search engines[231]. Encoding clusters
visualized by tSNE can be labelled to classify data[231], and encoding
deviations from clusters can be used for anomaly detection[1274, 1275, 1276,
1277, 1278]. In addition, learning encodings with well-behaved semantics
enables encodings to be used for semantic manipulation[1279, 1278]. Finally,
VAEs can be used as generative models to create synthetic populations[1280,
1281], develop new chemicals[1282, 1283, 1284, 1285], and synthesize
underrepresented data to reduce imbalanced learning[1286].
## 6 Optimization
Training, testing, deployment and maintenance of machine learning systems are
often time-consuming and expensive[1287, 1288, 1289, 1290]. The first step is
usually preparing training data and setting up data pipelines for ANN training
and evaluation. Typically, ANN parameters are randomly initialized for
optimization by gradient descent, possibly as part of an automatic machine
learning algorithm. Reinforcement learning is a special optimization case
where the loss is a discounted future reward. During training, ANN components
are often regularized to stabilize training, accelerate convergence, or
improve performance. Finally, trained models can be streamlined for efficient
deployment. This section introduces each step. We find that electron
microscopists can be apprehensive about robustness and interpretability of
ANNs, so we also provide subsections on model evaluation and interpretation.
Figure 15: Gradient descent. a) Arrows depict steps across one dimension of a
loss landscape as a model is optimized by gradient descent. In this example,
the optimizer traverses a small local minimum; however, it then gets trapped
in a larger sub-optimal local minimum, rather than reaching the global
minimum. b) Experimental DNN loss surface for two random directions in
parameter space showing many local minima[1122]. The image in part b) is
reproduced with permission under an MIT license[1291]. Algorithm 1
Optimization by gradient descent.
Initialize a model, $f(\textbf{x})$, with trainable parameters,
$\boldsymbol{\theta}_{1}$.
for training step $t=1,T$ do
Forwards propagate a randomly sampled batch of inputs, x, through the model to
compute outputs, $\textbf{y}=f(\textbf{x})$.
Compute loss, $L_{t}$, for outputs.
Use the differentiation chain rule[1292] to backpropagate gradients of the
loss to trainable parameters, $\boldsymbol{\theta}_{t-1}$.
Apply an optimizer to the gradients to update $\boldsymbol{\theta}_{t-1}$ to
$\boldsymbol{\theta}_{t}$.
end for
Vanilla SGD[1293, 1294] $[\eta]$
$\displaystyle\theta_{t}=\theta_{t-1}-\eta\partial_{\theta}L_{t}$ (69)
Momentum[1295] $[\eta,\gamma]$
$\displaystyle v_{t}$ $\displaystyle=\gamma
v_{t-1}+\eta\partial_{\theta}L_{t}$ (70) $\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}-v_{t}$ (71)
Nesterov momentum[1296, 1297, 1298] $[\eta,\gamma]$
$\displaystyle\phi$ $\displaystyle=\theta_{t-1}+\eta\gamma v_{t-1}$ (72)
$\displaystyle v_{t}$ $\displaystyle=\gamma v_{t-1}+\partial_{\theta}L_{t}$
(73) $\displaystyle\theta_{t}$ $\displaystyle=\phi-\eta v_{t}(1+\gamma)$ (74)
Quasi-hyperbolic momentum[1299] $[\eta,\beta,\nu]$
$\displaystyle g_{t}$ $\displaystyle=\beta
g_{t-1}+(1-\beta)\partial_{\theta}L_{t}$ (75) $\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}-\eta(vg_{t}+(1-v)\partial_{\theta}L_{t})$ (76)
AggMo[1300] $[\eta,\beta^{(1)},...,\beta^{(K)}]$
$\displaystyle v_{t}^{(i)}$
$\displaystyle=\beta^{(i)}v_{t-1}^{(i)}-(\partial_{\theta}L_{t})$ (77)
$\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}+\frac{\eta}{K}\sum\limits_{i=1}^{K}v_{t}^{(i)}$
(78)
RMSProp[1301] $[\eta,\beta,\epsilon]$
$\displaystyle v_{t}$ $\displaystyle=\beta
v_{t-1}+(1-\beta)(\partial_{\theta}L_{t})^{2}$ (79) $\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}-\frac{\eta}{(v_{t}+\epsilon)^{1/2}}\partial_{\theta}L_{t}$
(80)
ADAM[1302] $[\eta,\beta_{1},\beta_{2},\epsilon]$
$\displaystyle m_{t}$
$\displaystyle=\beta_{1}m_{t-1}+(1-\beta_{1})\partial_{\theta}L_{t}$ (81)
$\displaystyle v_{t}$
$\displaystyle=\beta_{2}v_{t-1}+(1-\beta_{2})(\partial_{\theta}L_{t})^{2}$
(82) $\displaystyle\hat{m}_{t}$ $\displaystyle=\frac{m_{t}}{1-\beta_{1}^{t}}$
(83) $\displaystyle\hat{v}_{t}$ $\displaystyle=\frac{v_{t}}{1-\beta_{2}^{t}}$
(84) $\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}-\frac{\eta}{\hat{v}_{t}^{1/2}+\epsilon}\hat{m}_{t}$
(85)
AdaMax[1302] $[\eta,\beta_{1},\beta_{2}]$
$\displaystyle m_{t}$
$\displaystyle=\beta_{1}m_{t-1}+(1-\beta_{1})\partial_{\theta}L_{t}$ (86)
$\displaystyle u_{t}$
$\displaystyle=\max(\beta_{2}u_{t-1},|\partial_{\theta}L_{t}|)$ (87)
$\displaystyle\hat{m}_{t}$ $\displaystyle=\frac{m_{t}}{1-\beta_{1}^{t}}$ (88)
$\displaystyle\theta_{t}$
$\displaystyle=\theta_{t-1}-\frac{\eta}{u_{t}}\hat{m}_{t}$ (89)
Algorithms 1: Update rules of various gradient descent optimizers for a
trainable parameter, $\theta_{t}$, at iteration $t$, gradients of losses
w.r.t. the parameter, $\partial_{\theta}L_{t}$, and learning rate, $\eta$.
Hyperparameters are listed in square brackets.
### 6.1 Gradient Descent
Most ANNs are iteratively trained by gradient descent[1303, 1304, 1305, 465,
1306, 1307], as described by algorithm 1 and shown in figure 15. To minimize
computation, results at intermediate stages of forward propagation, where
inputs are mapped to outputs, are often stored in memory. Storing the forwards
pass in memory enables backpropagation memoization by sequentially computing
gradients w.r.t. trainable parameters. To reduce memory costs for large ANNs,
a subset of intermediate forwards pass results can be saved as starting points
to recompute other stages during backpropagation[1308, 1309]. Alternatively,
forward pass computations can be split across multiple devices[1310].
Optimization by gradient descent plausibly models learning in some biological
systems[1311]. However, gradient descent is not generally an accurate model of
biological learning[1312, 1313, 1314].
There are many popular gradient descent optimizers for deep learning[1303,
1304, 1305]. Update rules for eight popular optimizers are summarized in
algorithms 1. Other optimizers include AdaBound[1315], AMSBound[1315],
AMSGrad[1316], Lookahead[1317], NADAM[1318], Nostalgic Adam[1319], Power
Gradient Descent[1320], Rectified ADAM[1321] (RADAM), and trainable
optimizers[1322, 1323, 1324, 1325, 1326]. Gradient descent is effective in the
high-dimensional optimization spaces of overparameterized ANNs[1327] as the
probability of getting trapped in a sub-optimal local minimum decreases as the
number of dimensions increases. The simplest optimizer is “vanilla” stochastic
gradient descent (SGD), where a trainable parameter perturbation,
$\Delta\theta_{t}=\theta_{t}-\theta_{t-1}$, is the negative of the product of a learning rate,
$\eta$, and the derivative of a loss, $L_{t}$, w.r.t. the trainable parameter,
$\partial_{\theta}L_{t}$. However, vanilla SGD convergence is often limited by
unstable parameter oscillations as it is a low-order local optimization
method[1328]. Further, vanilla SGD has no mechanism to adapt to varying
gradient sizes, which vary effective learning rates as
$\Delta\theta\propto\partial_{\theta}L_{t}$.
To accelerate convergence, many optimizers introduce a momentum term that
weights an average of gradients with past gradients[1329, 1296, 1330].
Momentum-based optimizers in algorithms 1 are momentum, Nesterov momentum[1296,
1297], quasi-hyperbolic momentum[1299], AggMo[1300], ADAM[1302], and
AdaMax[1302]. To standardize effective learning rates for every layer,
adaptive optimizers normalize updates based on an average of past gradient
sizes. Adaptive optimizers in algorithms 1 are RMSProp[1301], ADAM[1302], and
AdaMax[1302], which usually result in faster convergence and higher accuracy
than other optimizers[1331, 1332]. However, adaptive optimizers can be
outperformed by vanilla SGD due to overfitting[1333], so some researchers
adapt adaptive learning rates to their variance[1321] or transition from
adaptive optimization to vanilla SGD as training progresses[1315]. For
electron microscopy we recommend adaptive optimization with Nadam[1318], which
combines ADAM with Nesterov momentum, as it is well-established and a
comparative analysis of select gradient descent optimizers found that it often
achieves higher performance than other popular optimizers[1334]. However,
most adaptive optimizers adapt slowly to changing gradient sizes, e.g. a
default value for ADAM $\beta_{2}$ is 0.999[1302]. To prevent learning being
destabilized by spikes in gradient sizes, adaptive optimizers can be combined
with adaptive learning rate[261, 1315] or gradient[1335, 1208, 1336] clipping.
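A minimal NumPy sketch of the ADAM update of equations 81-85, with the usual default hyperparameters[1302]:

```python
import numpy as np

def adam_update(theta, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999,
                eps=1e-8):
    """One ADAM step for parameters theta at iteration t (starting at 1)."""
    m = beta1 * m + (1 - beta1) * grad              # first moment, eq. 81
    v = beta2 * v + (1 - beta2) * grad**2           # second moment, eq. 82
    m_hat = m / (1 - beta1**t)                      # bias correction, eq. 83
    v_hat = v / (1 - beta2**t)                      # bias correction, eq. 84
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # update, eq. 85
    return theta, m, v
```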
For non-adaptive optimizers, effective learning rates are likely to vary due
to varying magnitudes of gradients w.r.t. trainable parameters. Similarly,
learning by biological neurons varies as stimuli usually activate a subset of
neurons[1337]. However, all neuron outputs are usually computed for ANNs.
Thus, not effectively using all weights to inform decisions is computationally
inefficient. Further, inefficient weight updates can limit representation
capacity, slow convergence, and decrease training stability. A typical example
is effective learning rates varying between layers. Following the chain rule,
gradients backpropagated to the $i$th layer of a DNN from its start are
$\frac{\partial
L_{t}}{\partial\textbf{x}_{i}}=\left(\prod\limits_{l=i}^{L-1}\frac{\partial\textbf{x}_{l+1}}{\partial\textbf{x}_{l}}\right)\frac{\partial
L_{t}}{\partial\textbf{x}_{L}}\,,$ (90)
for a DNN with $L$ layers. Vanishing gradients[1209, 537, 1208] occur when
many layers have $\partial x_{l+1}/\partial x_{l}\ll 1$. For example, DNNs
with logistic sigmoid activations often exhibit vanishing gradients as their
maximum gradient is $1/4$ cf. equation 10b. Similarly, exploding
gradients[1209, 537, 1208] occur when many layers have $\partial
x_{l+1}/\partial x_{l}\gg 1$. Adaptive optimizers alleviate vanishing and
exploding gradients by dividing gradients by their expected sizes.
Nevertheless, it is essential to combine adaptive optimizers with appropriate
initialization and architecture to avoid numerical instability.
Optimizers have a myriad of hyperparameters to be initialized and varied
throughout training to optimize performance[1338] cf. algorithms 1. For example,
stepwise exponentially decayed learning rates are often theoretically
optimal[1339]. There are also various heuristics that are often effective,
such as using a DEMON decay schedule for the ADAM first-moment decay rate,
$\beta_{1}$[1340],
$\beta_{1}=\frac{1-t/T}{(1-\beta_{\text{init}})+\beta_{\text{init}}(1-t/T)}\beta_{\text{init}}\,,$
(91)
where $\beta_{\text{init}}$ is the initial value of $\beta_{1}$, $t$ is the
iteration number, and $T$ is the final iteration number. Developers often
optimize ANN hyperparameters by experimenting with a range of heuristic
values. Hyperparameter optimization algorithms[1341, 1342, 1343, 1344, 1345,
1346] can automate optimizer hyperparameter selection. However, automatic
hyperparameter optimizers may not yield sufficient performance improvements
relative to well-established heuristics to justify their use, especially in
initial stages of development.
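For reference, equation 91 translates directly into a small schedule function; a sketch:

```python
def demon_beta1(t, T, beta_init=0.9):
    """DEMON decay schedule for the ADAM first-moment decay rate (eq. 91)."""
    decay = 1.0 - t / T
    return beta_init * decay / ((1.0 - beta_init) + beta_init * decay)

print(demon_beta1(0, 1000), demon_beta1(500, 1000), demon_beta1(1000, 1000))
# 0.9 at the start, decaying towards 0.0 at the final iteration
```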
Alternatives to gradient descent[1347] are rarely used for parameter
optimization as they are not known to consistently improve upon gradient
descent. For example, simulated annealing[1348, 1349] has been applied to CNN
training[1350, 1351], and can be augmented with momentum to accelerate
convergence in deep learning[1352]. Simulated annealing can also augment
gradient descent to improve performance[1353]. Other approaches include
evolutionary[1354, 1355] and genetic[1356, 1357] algorithms, which can be a
competitive alternative to deep reinforcement learning where convergence is
slow[1358]. Indeed, recent genetic algorithms have outperformed a popular deep
reinforcement learning algorithm[1359]. Another direction is to augment
genetic algorithms with ANNs to accelerate convergence[1360, 1361, 1362,
1363]. Other alternatives to backpropagation include direct search[1364], the
Moore-Penrose Pseudo Inverse[1365]; particle swarm optimization[1366, 1367,
1368, 1369] (PSO); and echo-state networks[1370, 1371, 1372] (ESNs) and
extreme learning machines[1373, 1374, 1375, 1376, 1377, 1378, 1379] (ELMs),
where some randomly initialized weights are never updated.
### 6.2 Reinforcement Learning
Reinforcement learning[1380, 1381, 1382, 1383, 1384, 1385, 1386] (RL) trains a
machine learning system, or “actor”, to perform a sequence of
actions. Applications include autonomous driving[1387, 1388, 1389],
communications network control[1390, 1391], energy and environmental
management[1392, 1393], playing games[24, 25, 26, 27, 1394, 1146, 28, 29], and
robotic manipulation[1395, 1396]. To optimize an MDP[1180, 1181], a discounted
future reward, $Q_{t}$, at step $t$ in an MDP with $T$ steps is usually
calculated from step rewards, $r_{t}$, with Bellman’s equation,
$Q_{t}=\sum\limits_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}},$
(92)
where $\gamma\in[0,1)$ discounts future step rewards. To be clear, multiplying
$Q_{t}$ by $-1$ yields a loss that can be minimized using the methods in
section 6.1.
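A sketch of equation 92 computed with a single backwards pass over step rewards:

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted future rewards Q_t of eq. 92, for every step t."""
    q, returns = 0.0, []
    for r in reversed(rewards):
        q = r + gamma * q
        returns.append(q)
    return returns[::-1]

print(discounted_returns([0, 0, 1], gamma=0.9))   # [0.81, 0.9, 1.0]
```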
In practice, many MDPs are partially observed or have non-differentiable
losses that may make it difficult to learn a good policy from individual
observations. However, RNNs can often learn a model of their environments from
sequences of observations[1147]. Alternatively, FNNs can be trained with
groups of observations that contain more information than individual
observations[1394, 1146]. If losses are not differentiable, a critic can learn
to predict differentiable losses for actor training cf. section 5.1.
Alternatively, actions can be sampled from a differentiable probability
distribution[1397, 1144] as training losses given by products of losses and
sampling probabilities are differentiable. There are also a variety of
alternatives to gradient descent introduced at the end of section 6.1 that do
not require differentiable loss functions.
There are a variety of exploration strategies for RL[1398, 1399]. Adding
Ornstein-Uhlenbeck[1400] (OU) noise to actions is effective for continuous
control tasks optimized by deep deterministic policy gradients[1146] (DDPG) or
recurrent deterministic policy gradients[1147] (RDPG) RL algorithms. Adding
Gaussian noise achieves similar performance for optimization by TD3[1401] or
D4PG[1402] RL algorithms. However, a comparison of OU and Gaussian noise
across a variety of tasks[1403] found that OU noise usually achieves similar
performance to or outperforms Gaussian noise. Similarly, exploration can be
induced by adding noise to ANN parameters[1404, 1405]. Other approaches to
exploration include rewarding actors for increasing action entropy[1406, 1407,
1405] and intrinsic motivation[1408, 1409, 1410], where ANNs are incentivized
to explore actions that they are unsure about.
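A minimal NumPy sketch of an OU noise process; the parameter values shown are common illustrative defaults rather than values from this review:

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1.0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = np.full(dim, mu)

    def sample(self):
        dx = self.theta * (self.mu - self.x) * self.dt
        dx += self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape)
        self.x = self.x + dx
        return self.x   # added to actions during exploration
```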
RL algorithms are often partitioned into online learning[1411, 1412], where
training data is used as it is acquired; and offline learning[1413, 1414],
where a static training dataset has already been acquired. However, many
algorithms operate in an intermediate regime, where data collected with an
online policy is stored in an experience replay[1415, 1416, 1417] buffer for
offline learning. Training data is often sampled at random from a replay.
However, prioritizing the replay of data with high losses[993] or data that
results in high policy improvements[992] often improves actor performance. A
default replay buffer size of around $10^{6}$ examples is often used; however,
training is sensitive to replay buffer size[1418]. If the replay is too small,
changes in actor policy may destabilize training; whereas if the replay is too
large, convergence may be slowed by delays before the actor learns from policy
changes.
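A minimal sketch of a uniform experience replay buffer; prioritized variants[993, 992] replace the uniform sampling:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay with uniform random sampling."""
    def __init__(self, capacity=10**6):        # default size discussed above
        self.buffer = deque(maxlen=capacity)   # old experiences are evicted

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
```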
### 6.3 Automatic Machine Learning
There are a variety of automatic machine learning[1419, 1420, 1421, 1422,
1423] (AutoML) algorithms that can create and optimize ANN architectures and
learning policies for a dataset of input and target output pairs. Most AutoML
algorithms are based on RL or evolutionary algorithms. Examples of AutoML
algorithms include AdaNet[1424, 1425], Auto-DeepLab[1426], AutoGAN[1427],
Auto-Keras[1428], auto-sklearn[1429], DARTS+[1430], EvoCNN[271], H2O[1431],
Ludwig[1432], MENNDL[1433, 1434], NASBOT[1435], XNAS[1436], and others[1437,
1438, 1439, 1440, 1441]. AutoML is becoming increasingly popular as it can
achieve higher performance than human developers[1442, 1077] and enables human
developer time to be traded for potentially cheaper computer time.
Nevertheless, AutoML is currently limited to established ANN architectures and
learning policies. Consequently, we recommend that researchers either focus on
novel ANN architectures and learning policies or develop ANNs for novel
applications.
### 6.4 Initialization
How ANN trainable parameters are initialized[537, 1443] is related to model
capacity[1444]. Further, initializing parameters with values that are too
small or large can cause slow learning or divergence[537]. Careful
initialization can also prevent training by gradient descent being
destabilized by vanishing or exploding gradients[1209, 537, 1208], or high
variance of length scales across layers[537]. Finally, careful initialization
can enable momentum to accelerate convergence and improve performance[1296].
Most trainable parameters are multiplicative weights or additive biases.
Initializing parameters with constant values can result in every parameter in
a layer receiving the same updates by gradient descent, reducing model
capacity. Thus, weights are often randomly initialized. Consequently, biases
are often initialized with constant values due to symmetry breaking by the
weights.
Consider the projection of $n_{\text{in}}$ inputs,
$\textbf{x}^{\text{input}}=\\{x_{1}^{\text{input}},...,x_{n_{\text{in}}}^{\text{input}}\\}$,
to $n_{\text{out}}$ outputs,
$\textbf{x}^{\text{output}}=\\{x_{1}^{\text{output}},...,x_{n_{\text{out}}}^{\text{output}}\\}$,
by an $n_{\text{in}}\times n_{\text{out}}$ weight matrix, w. The expected
variance of an output element is[1443]
$\displaystyle\text{Var}(\textbf{x}^{\text{output}})=n_{\text{in}}\text{E}(\textbf{x}^{\text{input}})^{2}\text{Var}(\textbf{w})+n_{\text{in}}\text{E}(\textbf{w})^{2}\text{Var}(\textbf{x}^{\text{input}})+n_{\text{in}}\text{Var}(\textbf{w})\text{Var}(\textbf{x}^{\text{input}})\,,$
(93)
where $\text{E}(\textbf{x})$ and $\text{Var}(\textbf{x})$ denote the expected
mean and variance of elements of x, respectively. For similar length scales
across layers, $\text{Var}(\textbf{x}^{\text{output}})$ should be constant.
Initially, similar variances can be achieved by normalizing ANN inputs to have
zero mean, so that $\text{E}(\textbf{x}^{\text{input}})=0$, and initializing
weights so that $\text{E}(\textbf{w})=0$ and
$\text{Var}(\textbf{w})=1/n_{\text{in}}$. However, parameters can shift during
training, destabilizing learning. To compensate for parameter shift, popular
normalization layers like batch normalization often impose
$\text{E}(\textbf{x}^{\text{input}})=0$ and
$\text{Var}(\textbf{x}^{\text{input}})=1$, relaxing the need for
$\text{E}(\textbf{x}^{\text{input}})=0$ or $\text{E}(\textbf{w})=0$.
Nevertheless, training will still be sensitive to the length scale of
trainable parameters.
There are a variety of popular weight initializers that adapt weights to ANN
architecture. One of the oldest methods is LeCun initialization[951, 941],
where weights are initialized with variance,
$\text{Var}(\textbf{w})=\frac{1}{n_{\text{in}}}\,,$ (94)
which, as argued in the previous paragraph, produces outputs with similar
length scales. However, a similar argument can be made for initializing with
$\text{Var}(\textbf{w})=1/n_{\text{out}}$ to produce similar gradients at each
layer during the backwards pass[1443]. As a compromise, Xavier
initialization[1445] computes an average,
$\text{Var}(\textbf{w})=\frac{2}{n_{\text{in}}+n_{\text{out}}}\,.$ (95)
However, adjusting weights for $n_{\text{out}}$ is not necessary for adaptive
optimizers like ADAM, which divide gradients by their length scales, unless
gradients will vanish or explode. Finally, He initialization[22] doubles the
variance of weights to
$\text{Var}(\textbf{{w}})=\frac{2}{n_{\text{in}}}\,,$ (96)
and is often used in ReLU networks to compensate for activation functions
halving variances of their outputs[22, 1446, 1443]. Most trainable parameters
are initialized from either a zero-centred Gaussian or uniform distribution.
For convenience, the limits of such a uniform distribution are
$\pm(3\text{Var}(\textbf{w}))^{1/2}$. Uniform initialization can outperform
Gaussian initialization in DNNs due to Gaussian outliers harming
learning[1443]. However, issues can be avoided by truncating Gaussian
initialization, often to two standard deviations, and rescaling to its
original variance.
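A minimal NumPy sketch of the Gaussian initializers of equations 94-96; truncation and rescaling are omitted for brevity:

```python
import numpy as np

def initialize_weights(n_in, n_out, mode="he"):
    """Zero-centred Gaussian weight initialization for an n_in x n_out layer."""
    variance = {"lecun": 1.0 / n_in,                # eq. 94
                "xavier": 2.0 / (n_in + n_out),     # eq. 95
                "he": 2.0 / n_in}[mode]             # eq. 96
    return np.random.randn(n_in, n_out) * np.sqrt(variance)

# An equivalent uniform initializer has limits +-(3 * variance)**0.5.
```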
Some initializers are mainly used for RNNs. For example, orthogonal
initialization[1447] often improves RNN training[1448] by reducing
susceptibility to vanishing and exploding gradients. Similarly, identity
initialization[1449, 1450] can help RNNs to learn long-term dependencies. In
most ANNs, biases are initialized with zeros. However, the forget gates of
LSTMs are often initialized with ones to decrease forgetting at the start of
training[1211]. Finally, the start states of most RNNs are initialized with
zeros or other constants. However, random multivariate or trainable variable
start states can improve performance[1451].
There are a variety of alternatives to initialization from random
multivariates. Weight normalized[1014] ANNs are a popular example of data-
dependent initialization, where randomly initialized weight magnitudes and
biases are chosen to counteract variances and means of an initial batch of
data. Similarly, layer-sequential unit-variance (LSUV) initialization[1452]
consists of orthogonal initialization followed by adjusting the magnitudes of
weights to counteract variances of an initial batch of data. Other approaches
standardize the norms of backpropagated gradients. For example, random walk
initialization[1453] (RWI) finds scales for weights to prevent vanishing or
exploding gradients in deep FNNs, albeit with varied success[1452].
Alternatively, MetaInit[1454] scales the magnitudes of randomly initialized
weights to minimize changes in backpropagated gradients per iteration of
gradient descent.
### 6.5 Regularization
There are a variety of regularization mechanisms[1455, 1456, 1457, 1458] that
modify learning algorithms to improve ANN performance. One of the most popular
is L$X$ regularization, which decays weights by adding a loss,
$L_{X}=\lambda_{X}\sum\limits_{i}\frac{|\theta_{i}|^{X}}{X}\,,$ (97)
weighted by $\lambda_{X}$ to each trainable variable, $\theta_{i}$. L2
regularization[1459, 1460, 1461] is preferred[1462] for most DNN optimization
as subtraction of its gradient,
$\partial_{\theta_{i}}L_{2}=\lambda_{2}\theta_{i}$, is equivalent to
computationally-efficient multiplicative weight decay. Nevertheless, L1
regularization is better at inducing model sparsity[1463] than L2
regularization, and L1 regularization achieves higher performance in some applications.
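A minimal PyTorch sketch of L2 regularization (equation 97 with $X=2$), both as an explicit loss and as the equivalent multiplicative weight decay; the model and $\lambda_{2}$ are illustrative:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)     # a placeholder model
lambda2 = 1e-4

# Explicit L2 loss of eq. 97 with X = 2 ...
l2_loss = lambda2 * sum(p.pow(2).sum() / 2 for p in model.parameters())

# ... or, equivalently, weight decay applied inside the optimizer, since
# the gradient of the L2 loss is lambda2 * theta.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=lambda2)
```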
# Supervised and Reinforcement Learning from Observations in Reconnaissance
Blind Chess
Timo Bertram JKU Linz, Austria Johannes Fürnkranz JKU Linz, Austria Martin
Müller University of Alberta, Canada
###### Abstract
In this work, we adapt a training approach inspired by the original AlphaGo
system to play the imperfect information game of Reconnaissance Blind Chess.
Using only the observations instead of a full description of the game state,
we first train a supervised agent on publicly available game records. Next, we
increase the performance of the agent through self-play with the on-policy
reinforcement learning algorithm Proximal Policy Optimization. We do not use
any search to avoid problems caused by the partial observability of game
states and only use the policy network to generate moves when playing. With
this approach, we achieve an Elo rating of 1330 on the RBC leaderboard, which places
our agent at position 27 at the time of this writing. We see that self-play
significantly improves performance and that the agent plays acceptably well
without search and without making assumptions about the true game state.
## I Introduction
Games have served as immensely popular test domains for artificial
intelligence, but the ever-increasing performance in classical board games
such as chess and Go has long surpassed human capabilities [1]. However,
imperfect information games, where the game state is not perfectly observable,
still provide many research challenges for developing competent AI agents. In
many of these games, human and AI performance is much closer together [2, 3,
4, 5, 6, 7], and humans still hold their own in some of these domains. We
focus on the game of _Reconnaissance Blind Chess (RBC)_ , an imperfect
information variant of classical chess, where players only receive limited
information about the placement of the opponent’s pieces. We aim to apply the
training approach used by AlphaGo [1], which works in perfect information
games, to imperfect information games by making some practical adjustments.
Specifically, we avoid problems caused by trying to use forward search without
having perfect information by only using the trained policy network, which is
fully capable of playing the game on its own.
### I-A Reconnaissance Blind Chess
In RBC, the game starts with the regular setup of chess pieces. However, a
player can only learn about the opponent’s moves by a limited form of sensing,
which strongly reduces their knowledge of the current state. At the start of
each turn, a player senses a $3\times 3$ area of the $8\times 8$ board, and
the true state of these squares is revealed. A player is also informed if
their selected move was legal and if they capture a piece. If a move attempts
to pass through an opponent’s piece, the move is truncated and that piece is captured.
Players are also notified whenever one of their own pieces is captured, so
they retain perfect information about their own pieces. A game is won by
capturing the opponent’s king. Finally, check-related chess rules do not
apply, so it is legal to castle through a check or even move a king into
check. As a consequence, draws are much less common, as stalemate does not
exist and even a bare king can still win.
### I-B Contributions
Our main contribution is to adapt an AlphaGo-inspired approach [1] to an
imperfect information game setting. In AlphaGo, the state is fully observable,
so the legal actions of both players are known, which allows deep forward
search. In RBC, we generally do not know the true state of the board, which
implies that a player cannot know the opponent’s options precisely without
making significant assumptions beyond the known observations. This greatly
restricts the ability to simulate games or conduct a search through a tree of
variations. In our work, we adapt the early AlphaGo framework, which first
primes a neural network by supervised training and then improves it via self-
play [1]. Concretely, we make two main adaptations to account for imperfect
information:
1. 1.
We use the history of observations as our input and avoid any attempt to guess
or directly reconstruct the unknown full game state.
2. 2.
We ignore search and solely use the trained policy network to play.
Thus, our network learns to directly map observations to a distribution of
actions, which is used to play the game. The aim of this work is to
demonstrate that working directly with the given observations of a complex
game like RBC, without assumptions about the full hidden game state, is
possible and leads to acceptable performance. This opens another angle to work
on RBC, which previously strongly focused on trying to explicitly reconstruct
the true game state.
## II Related Work
Most previous work on RBC is focused on trying to eliminate the uncertainty of
RBC, thereby reducing it to normal chess, which then allows the usage of
strong search-based chess engines. For example, the runner-up of the 2019 RBC
competition, PiecewiseGrid agent [8], maintains a probability distribution
over the possible squares for each piece and uses this to compute the
likelihood of game states. The program then uses full game states to choose
moves with Stockfish [9], a state-of-the-art chess engine. The 2020 winner
Penumbra [10] does not use a regular chess engine but tries to reduce
uncertainty by identifying the opponent, which limits the possible game
states, again allowing forward search. Their approach trains a separate
network for a list of hand-selected opponents through supervised learning, as
well as one catch-all network which is used if the recognition fails. They
then generate an approximation of the current state, which is used as the
input to the network. However, using opponent-specific training severely
limits the flexibility of the approach. The work most similar to ours [11]
uses an approach similar to AlphaZero. However, their method did not achieve
strong performance and barely outperformed a random player. Like many other
prior works, they also aimed to reduce uncertainty by trying to identify the
most likely game states, which were then used for forward search.
To the best of our knowledge, there is no previous work on RBC which directly
works on the given observations, and we consider this to be the main
contribution of our approach. In poker, some previous work exists on directly
learning from observations. [12] learned to play simple poker versions from
observations of hands, which resulted in a good, but not very strong
performance. [13] proposed a self-play algorithm that guarantees to converge
to a Nash equilibrium. While resulting in similar outcomes, they train a
neural network to approximate the average of the past best-responses, whereas we
use multiple past agents, leading to a slight difference between their work
and the reinforcement learning part of our work. Other strong results in
imperfect-information domains are often based on counterfactual regret
minimization [14, 15, 16], which may also work well in RBC, but has so far not
been explored.
## III Method
TABLE I: Input representation for the agent (one observation)
Size of layer | Information represented
---|---
1 | Square where opponent captured one of our pieces
73 | Last move taken (see [17] for how a move is encoded)
1 | Square where agent captured opponent’s piece
1 | 1 if last move was illegal
6 | Position of own pieces (One layer per piece type)
1 | Last sense taken
6 | Result of last sense (One layer per piece type)
1 | Color
In this work, we explicitly aim to not make assumptions about the true game
state, but rather learn a policy that directly maps imperfect observations to
moves. For this, we represent all information received at each turn to build
up a history of observations, which forms the input to our network. The most
recent information for a player is represented by a stack of 90 $8\times 8$ bitboards
(see Table I). Of this, a single 1 in a $73\times 8\times 8$ stack encodes the
last move, which is an idea put forward by [17]. Whether the last move was
illegal, and the color of the player, could be represented by a single 0 or 1,
but a whole plane is used to facilitate the convolution-based structure of the
network. To represent the past, a fixed-size history of the last 20
observations forms the input of size $1800\times 8\times 8$ to our network.
The network consists of a shared convolutional block, followed by two separate
heads for the sense and the move policy (see Figure 1). As a third output,
we also tasked the network with predicting a scalar outcome of the game, $1$ for win
and $-1$ for loss, which was used as the starting network of the critic in
reinforcement learning. Although a sequence-based architecture, e.g. a Long
Short-Term Memory (LSTM) network or Gated Recurrent Unit (GRU), may
intuitively make more sense, we found that in our setup, those networks
required more training time without increasing the accuracy of the
predictions. Therefore, we speculate that a history of 20 turns is
sufficiently long to capture all important information, but even shorter
histories could yield benefits by reducing the amount of unnecessary
information. We again use $73\times 8\times 8$ outputs to describe the
proposed move of the network, but we add one more output to represent the
option of passing, i.e., to complete the turn without making a move. The
sensing policy is modeled as an $8\times 8$ output, denoting all possible
squares on the board. All of our experiments used only a few days of training
on a single Tesla M40 and can be reproduced using commonly available hardware.
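As an illustration of this input construction, a hedged NumPy sketch follows; plane contents and the zero-padding of short histories at the start of a game are assumptions for illustration, not details taken from the authors' implementation:

```python
import numpy as np

N_PLANES, HISTORY = 90, 20   # planes per observation, observations per input

def stack_history(observations):
    """observations: list of (90, 8, 8) arrays, most recent last.
    Returns the (1800, 8, 8) network input."""
    recent = observations[-HISTORY:]
    # Hypothetical choice: zero-pad when fewer than 20 turns have been played.
    pad = [np.zeros((N_PLANES, 8, 8))] * (HISTORY - len(recent))
    return np.concatenate(pad + recent, axis=0)
```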
Figure 1: Network architecture
### III-A Supervised Learning
Similar to the setup in [1], we first train our network on human data before
using self-play to tune the policy. To this end, we used a dataset previously
used for training the opponent-specific networks of Penumbra [10]. From these
games, we construct training examples by using the taken actions as the target
output of the network, with the history of observations as the network’s
input. In contrast to games like chess and Go, a turn in RBC consists of two
actions, sensing and moving. To optimize these two separate but related
policies, we use the cross-entropy loss of both heads of the network and
optimize their sum. We use all games in the dataset, including losses and
games of lower-skilled opponents. Importantly, we do not mask illegal moves,
which adds significant difficulty for the network, as only a fraction of all
4673 possible moves is legal in a given position. We also do not mask the 28
outer squares of the board in the sensing policy, which are inferior sensing
actions that are dominated by choosing a square in the inner $6\times 6$ area of the
board. This decision was made to account for the significant number of sub-
optimal senses in the dataset. Future experiments may explore the differences
in results when masking those.
After training for 5 epochs, which is equal to about 8 hours of wall time on
our machine, the network achieved a 49.71% sense and 48.34% move accuracy on a
held-out test set of 10% of the data, and 50.78% sensing and 53.91% moving
accuracy on the training data. We stopped training at this point as the
network started to overfit. To compute the accuracy, we only counted the
outputs of the network which exactly matched the chosen action in the data. A
significant number of decisions in the data involve a large degree of randomness, so there is no single “correct” answer for many of them; that the network still achieves these accuracy levels suggests the data is rather homogeneous. We
also counted sensing actions that were strictly better than the sense in the
target data as mistakes, such as sensing at g7, which reveals the content of 9
squares, instead of sensing on h8, which only shows a subset of 4 squares. A
different measure would have been to count the overlap between the predicted
and the target sensing action, but the main problem with this approach is that
the purpose of the sensing action is unknown. For example, its true intent
might have been collecting information about one particular square only.
### III-B Reinforcement Learning
Although our agent achieved good prediction accuracy through supervised training, it did not learn to actually win the game.
Supervised learning attributes the same importance to correctly predicting to
capture the opponent’s king (which wins the game) as to playing the opening
move 1. e4. Moreover, it may also learn low-quality in-game actions such as
sensing at the edge of the board, which is never an optimal decision. Since
the goal of the agent should be to win the game, the second stage of training
by reinforcement learning rewards the agent only for achieving this objective.
We frame the problem as a Markov decision process (MDP), where the opponent is part of the stochastic environment, and use an on-policy reinforcement learning algorithm. Although RBC is a partially observable Markov decision process (POMDP), we disregard the partial observability of the domain and use
the history of observations as if they were a complete description of the
state, thus approximating the POMDP as an MDP.
We train the agent by self-play and collect the actions taken and the result
of each game. With this experience, we optimize our agent using our own
implementation of Proximal Policy Optimization (PPO) [18]. To simultaneously
optimize both policies (sensing and moving) we compute separate losses for
both of them and perform gradient descent on their sum. While sensing is a
passive action that never directly results in winning the game, we reward the
final winning move and the sense just before it with a reward of 1. To reduce
overfitting to the current version of the agent, we let it play against
randomly selected past versions of itself, which are saved whenever they reach
a 65% win rate. This is similar to playing against the average strategy of
past best-responses, and is a well-known technique [7, 14, 15].
In training, the probability $p_{i}$ of playing against version $i$ of the network depends on the win rate $w_{i}$ of the training agent against it:
$p_{i}=1-\Big(w_{i}\Big/\sum_{j}w_{j}\Big)$ (1)
The win-rate $w_{i}$ is approximated as the average of the $K=500$ most recent
results of the training agent against it.
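A sketch of this opponent-sampling scheme; the `OpponentPool` class and the renormalisation of Eq. 1 (the raw $p_{i}$ do not sum to one) are our own illustrative choices, not the exact training harness:

```python
import random
from collections import deque

K = 500  # window of recent results per stored opponent

class OpponentPool:
    """Sample past checkpoints with probability decreasing in our
    win rate against them (Eq. 1)."""
    def __init__(self):
        self.agents = []    # saved network checkpoints
        self.results = []   # recent results (1 = win) per agent

    def add(self, agent):
        self.agents.append(agent)
        self.results.append(deque([0.5], maxlen=K))  # neutral prior

    def record(self, i, won):
        self.results[i].append(1.0 if won else 0.0)

    def sample(self):
        if len(self.agents) == 1:
            return 0
        w = [sum(r) / len(r) for r in self.results]
        total = sum(w)
        p = [1.0 - wi / total for wi in w]
        # Renormalise so the weights form a proper distribution.
        s = sum(p)
        return random.choices(range(len(self.agents)),
                              weights=[pi / s for pi in p])[0]
```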
One important consideration in RBC, and an important difference from normal
chess, is that due to the partial observability, the optimal strategy should
be stochastic, as a deterministic strategy can easily be exploited. However,
when testing our agent, we found that choosing the action with highest
probability led to slightly better performance.
## IV Results
We tested the performance of our agent at two time points; after supervised
training and after reinforcement learning. The hypothesis is that adding
reinforcement learning, which directly aims to optimize the real objective of
winning games, should increase the win-rate compared to only supervised
training on public games.
### IV-A ELO Performance
In order to evaluate the performance, we uploaded both versions to the public
leaderboard (https://rbc.jhuapl.edu/), which results in an approximate ELO
rating for each agent. After the supervised training, our agent’s performance
is similar to that of the publicly available baseline agent Trout, which is a
naive Stockfish-based agent. However, as seen in Table II, reinforcement
learning leads to a tangible performance benefit on the leaderboard. In training, we observed that the agent consistently learned to win against its previous versions. For example, the reinforced agent exhibits a win rate of more than
80% against the supervised agent throughout training. At the time of this
writing, these results put our agent at rank 27 out of 84 on the leaderboard,
although we did not see an indication of convergence.
TABLE II: Performance comparison of the proposed agent

| Name | Performance |
|---|---|
| Supervised agent | 1118 |
| Reinforced agent | 1330 |
| Trout (Public Stockfish baseline) | 1111 |
### IV-B Analysis of Example Games
We observed that our agent has a highly aggressive, at times even reckless,
playing style. It often aims for very quick attacks on the enemy king,
sacrificing one or more pieces (including very early queen sacrifices) in
order to get to a position where the opponent’s king is no longer surrounded
by defenders and is not able to defend reliably against multiple possible and
unobserved threats of the agent. This kind of strategy works well against many
lower and middle-skilled opponents, and even scores the occasional win against
top contenders.
One game we want to highlight can be replayed at
https://rbc.jhuapl.edu/games/462287, where the agent played against one of the
higher-rated opponents on the leaderboard. In the game, our agent created two
situations where the opponent could not determine with certainty from which square its king was attacked, but was able to sense the correct positions in order to
defend against the threats. In contrast, playing against a lower-rated
opponent (https://rbc.jhuapl.edu/games/462288) the same strategy worked well,
as the opponent did not have information about the bishop on c4, which led to
a quick win. Similarly, in https://rbc.jhuapl.edu/games/462249, our agent
continuously made threats, which in the end led to an undetected knight
capturing the king. Such a strategy would not work well in classical chess,
which provides some evidence that policies in chess and RBC are not
necessarily similar, and that trying to reduce RBC to chess may be problematic
at times.
## V Conclusion
In this work, we show our first results on applying an AlphaGo-inspired
training regime to the imperfect information game of Reconnaissance Blind
Chess. Our agent learns to use a history of observations to create
distributions of actions for both sensing and playing. First, we use
supervised training on publicly available expert games, where the task is to
predict the actions of the experts. Next, we use on-policy reinforcement
learning with self-play to strengthen the playing performance of the agent.
With this approach, we reached rank 27 of 84 on the leaderboard and an
estimated ELO of 1330, using no further game-specific optimizations.
To continue this work, we aim to refine our self-playing process. It is
currently unclear whether this process alone can lead to top performance.
Incorporating experience gained from playing on the leaderboard yields games much more slowly than offline self-play, but games against varied, strong opponents may carry more valuable information, thus facilitating quicker improvement. An additional angle that we aim to tackle is a combination of the
trained agent with a classical engine like Stockfish. Combining action
suggestions from both, or adapting Stockfish’s moves by using the probability
distribution of the agent, can lead to a more normal and classical playing
style, while also using learned experience from RBC self-play.
## Acknowledgements
We thank the reviewers of this paper for providing excellent feedback on
improving the presentation, pointers to related works that we had missed, and
suggestions for continuing this line of work.
## References
* [1] David Silver et al. “Mastering the game of Go with deep neural networks and tree search” In _Nature_ 529.7587 Nature Publishing Group, 2016, pp. 484–489
* [2] Nolan Bard et al. “The Hanabi challenge: A new frontier for AI research” In _Artificial Intelligence_ 280 Elsevier, 2020
* [3] Christopher Berner et al. “Dota 2 with Large Scale Deep Reinforcement Learning” In _CoRR_ , 2019 arXiv:1912.06680
* [4] Noam Brown and Tuomas Sandholm “Superhuman AI for multiplayer poker” In _Science_ 365.6456, 2019, pp. 885–890
* [5] Noam Brown, Anton Bakhtin, Adam Lerer and Qucheng Gong “Combining deep reinforcement learning and search for imperfect-information games” In _Advances in Neural Information Processing Systems_ 33, 2020, pp. 17057–17069
* [6] Matej Moravčík et al. “DeepStack: Expert-level artificial intelligence in heads-up no-limit poker” In _Science_ 356.6337, 2017, pp. 508–513
* [7] Oriol Vinyals et al. “Grandmaster level in StarCraft II using multi-agent reinforcement learning” In _Nature_ 575.7782 Nature Publishing Group, 2019, pp. 350–354
* [8] Timothy Highley, Brendan Funk and Laureen Okin “Dealing with uncertainty: A piecewise grid agent for reconnaissance blind chess” In _Journal of Computing Sciences in Colleges_ 35.8 Consortium for Computing Sciences in Colleges, 2020, pp. 156–165
* [9] Tord Romstad, Marco Costalba and Joona Kiiski “Stockfish” [accessed 26-April-2022], https://stockfishchess.org/, 2022
* [10] Gregory Clark “Deep Synoptic Monte-Carlo Planning in Reconnaissance Blind Chess” In _Advances in Neural Information Processing Systems_ 34, 2021
* [11] Sergey Savelyev “Mastering Reconnaissance Blind Chess with Reinforcement Learning”, 2020
* [12] Nikolai Yakovenko, Liangliang Cao, Colin Raffel and James Fan “Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games Using Convolutional Networks” In _Proceedings of the 30th AAAI Conference on Artificial Intelligence_ AAAI Press, 2016, pp. 360–368
* [13] Johannes Heinrich and David Silver “Deep Reinforcement Learning from Self-Play in Imperfect-Information Games” In _CoRR_ , 2016 arXiv:1603.01121
* [14] Eric Steinberger, Adam Lerer and Noam Brown “DREAM: Deep Regret minimization with Advantage baselines and Model-free learning” In _CoRR_ abs/2006.10410, 2020
* [15] Eric Steinberger “Single Deep Counterfactual Regret Minimization” In _CoRR_ abs/1901.07621, 2019
* [16] Martin Schmid et al. “Player of Games” In _CoRR_ abs/2112.03178, 2021
* [17] David Silver et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm” In _CoRR_ , 2017 arXiv:1712.01815
* [18] John Schulman et al. “Proximal Policy Optimization Algorithms” In _CoRR_ , 2017 arXiv:1707.06347
# The Privacy-preserving Padding Problem: Non-negative Mechanisms for
Conservative Answers with Differential Privacy
Benjamin M. Case, James Honaker, and Mahnush Movahedi (Facebook)
###### Abstract
Differentially private noise mechanisms commonly use symmetric noise
distributions. This is attractive both for achieving the differential privacy
definition, and for unbiased expectations in the noised answers. However,
there are contexts in which a noisy answer only has utility if it is
conservative, that is, has known-signed error, which we call a padded answer.
Seemingly, it is paradoxical to satisfy the DP definition with one-sided
error, but we show how it is possible to bury the paradox into approximate
DP’s $\delta$ parameter. We develop several one-sided padding mechanisms that always give conservative answers, but still achieve approximate differential privacy. We show how these mechanisms can be applied in a few select areas, including making the cardinalities of set intersections and unions revealed in Private Set Intersection protocols differentially private, and enabling multiparty computation protocols to compute on sparse data whose exact sizes are made differentially private, rather than performing a fully oblivious, more expensive computation.
## 1 Motivation
> Scenario: Jane wants to kill her husband Lord Edgware. She has an elaborate
> false alibi but is uncertain if Lord Edgware will be attending a particular
> dinner party. The butler is to be informed how many people to provision for;
> Jane may have already learned which other people are attending. Protective
> of his guests, the host sends the butler a differentially private count, so
> that if Jane intercepts the message, Jane can not learn if Lord Edgware will
> be attending so as to murder him. However, the butler does as he is told, and
> it would be an impropriety for there not to be enough provisions for
> everyone in attendance. It is therefore essential that the number given to
> the butler is both differentially private (so no one is murdered), _and
> never be less than the true number of attendees_ (so every attendee can be
> accommodated).
The solution to this problem has direct applications to a number of scenarios
where a differentially private answer is needed to preserve privacy, but the
answer must be _conservative_ and thus any noise added must be non-negative
and never give an answer less than the true value. Conservative answers are
often desirable in contexts where the utility loss from error is more heavily
weighted on one side. One might carry a conservative amount of cash for a
purchase, because having leftover cash has no cost, but having insufficient
cash means the purchase is not completed.
There are a number of applied contexts where a non-negative DP noise mechanism
would provide an important solution and basic building block:
* •
As a computational example, Kellaris et al. [1] consider the problem where an
adversary is monitoring the size of traffic between a secure database and an
owner subsetting observations by range queries. Even if the queries
transmitted and the rows returned are encrypted, if the adversary observes the
true distribution of the size of returns, they are eventually able to
reconstruct the distribution of the dataset. Sending query responses with
differentially private numbers of rows defeats the adversary. However, while
sending a larger than correct subset (that is, padding with removable fake
data) has low cost, sending a smaller than correct subset (by omitting
observations that satisfy the query) has a high utility cost.
* •
Similarly, private set intersection (PSI) protocols, which enable computation on the intersection of two private sets, are an important building block of secure solutions [2, 3]; they allow two parties to locate the intersection of their data without directly revealing to either party which observations the other has in common. However, many PSI protocols either
reveal the cardinalities of the set intersections and unions as an intended
output or as an additional leakage to the intended output; this opens up
attacks such as differencing for membership inference. Padding the set
intersection and union with dummy records sampled according to non-negative DP
mechanisms can solve this problem by making the cardinalities differentially
private from the view of both parties.
* •
In MPC games for conversion measurement, such as in [4], a mechanism for
positively padded DP histograms would provide a means to prevent the leakage
of the number of conversions without computationally inefficient padding of
all conversion event data to the upper bound of conversion count.
* •
More generally, side-channel attacks through timing or data access patterns have been a long-recognized problem in both MPC and differentially private systems [5] when such side-channels are considered part of the output that needs to meet the differential privacy definition. Methods suggested to avoid such attacks have involved making all computations _constant-time_, but we show that using our mechanisms to pad compute or storage with a stochastically drawn, strictly positive DP waiting time can be a much more efficient solution.
* •
In the foundational question of releasing a differentially private mean on a
set of unknown size, Covington et al. [6] show a setting where lowest mean
squared error is achieved when you can privately undercount the set (which is
an example of needing a conservative answer with non-positive error).
In this work, we present a general approach to creating noise mechanisms with
error with guaranteed sign, and solidify three such mechanisms. We show how
this provides a padding solution to PSI intersection and union leakages, as
well as one example of overcoming distribution and differencing attacks on MPC
implementations in the context of conversion measurement.
## 2 One-sided Noise
We consider noise-addition mechanisms where the noise $z$ is drawn from some distribution $p$ with no support on the negative numbers, that is:
$\displaystyle M(X)$ $\displaystyle=f(X)\pm z;\qquad z\sim
p:p(s)=0\quad\forall\ s<0$ (1)
Statistical estimators whose error is of a guaranteed sign are called
_conservative_ , and can be desirable in certain applied contexts. We use
$\pm$ to highlight that we can guarantee the sign of the error in either
direction; we can add $z$ or subtract $z$. Throughout this paper, we write as
if we are adding $z$ so as to guarantee non-negative errors and _overvalued_
answers, but everything holds if instead we need to subtract $z$ to guarantee
non-positive errors in a differentially private _undervalued_ answer.
Let us first highlight the key dilemma of any such mechanism. Imagine two
neighbouring datasets $X$ and $X^{\prime}$ which happen to have
$f(X)<f(X^{\prime})$. We release a differentially private answer $M$. Then
whenever we observe a release $M<f(X^{\prime})$ we know we must be in dataset
$X$ and never in $X^{\prime}$; the mechanism $M$ can never give an answer
below $f(X^{\prime})$ if $X^{\prime}$ is the private data. This means we can
leak with certainty which state of the world we are in, which in turn violates
differential privacy. For example, if we are doing a counting query, and know
we have either dataset $X$ which has count 100, or neighboring dataset
$X^{\prime}$ which has count 101, and we use a non-negative noise
distribution, then whenever we see an answer of 100, we know we were in
dataset $X$. To handle this failure we must move to approximate differential privacy and bury the probability of observing $M<f(X^{\prime})$ in $\delta$.
We now consider distributions $p$ that satisfy
$(\epsilon,\delta)$-differential privacy in this way.
## 3 Mechanisms
### 3.1 Truncated Laplace
In the continuous case, consider a differentially private mechanism, $M$, that
releases answers to a query, $f$, on a dataset, $X$, with truncated Laplace
noise as:
$\displaystyle M(X)=f(X)+z;\quad z\sim p_{TruncatedLaplace}(b,\mu)$ (2)

$\displaystyle Pr_{TruncatedLaplace}(x|b,\mu)=\left\{\begin{array}{ll}0 & x<0\\ \frac{A}{2b}\,\textrm{exp}\Big(-\frac{|x-\mu|}{b}\Big) & x\geq 0\end{array}\right.\quad\textrm{where }b=\frac{\Delta}{\epsilon}$ (5)
with mode $\mu$, and shape parameter $b$ (variance $2b^{2}$ when untruncated),
and where all the mass below 0 has been truncated, so we need a normalizing
constant $A$, to bring the total probability mass back to one. We can solve for $A$ (assuming $\mu>0$, so $|x-\mu|=\mu-x$ over the truncated region) as:
$\displaystyle A$ $\displaystyle=\Big{(}1-\int_{-\infty}^{0}\frac{1}{2b}\
\textrm{exp}\Big{(}-\frac{|x-\mu|}{b}\
\Big{)}dx\Big{)}^{-1}=\Big{(}1-\frac{1}{2}\
\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}\Big{)}^{-1}\quad\ $ (6)
Note that because $A$ inflates the entire distribution by a constant factor, these factors cancel in any ratio. Thus the truncated Laplace
continues to obey the differential privacy definition which is itself defined
in terms of ratios. (This is only true when we truncate the distribution at
the same distance from the mean for all datasets.)
Whereas the Laplace satisfies pure $\epsilon$-differential privacy, the
truncated version is ($\epsilon$,$\delta$)-differentially private, so long as:
$\displaystyle\delta$ $\displaystyle\geq\int_{0}^{\Delta}\frac{A}{2b}\
\textrm{exp}\Big{(}-\frac{|x-\mu|}{b}\
\Big{)}dx=\frac{A}{2b}\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}\int_{0}^{\Delta}\textrm{exp}\Big{(}\frac{x}{b}\Big{)}dx=\frac{A}{2}\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}\Big{[}\textrm{exp}\Big{(}\frac{\Delta}{b}\Big{)}-1\Big{]}$
(7)
Figure 1: Depicted are two truncated Laplace noise distributions with the same
parameters, offset by some worst-case sensitivity $\Delta$. The shaded region
(exaggerated for visibility) shows the region with support in only one
distribution. Releases in this part of the noise distribution, can potentially
reveal which of two neighbouring datasets are true. That probability mass
needs to be covered by $\delta$.
A further note is that we can also symmetrically truncate the right tail of
the distribution for no additional cost, as represented in Figure 2. While
this means the regions where the distributions do not overlap apparently sum
now to $2\delta$, we are either in dataset $X$, which fails with $\delta$
probability in the left tail, or in dataset $X^{\prime}$ which fails with
$\delta$ probability in the right tail. In any state of the world, there
remains only a $\delta$ chance of failure. This slightly reduces the variance
of the noise, and returns the distribution to symmetry which conveniently
means the mode and mean are once again both at $\mu$. This makes the revised
inflationary constant:
$\displaystyle
A=\big{(}1-\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}\big{)}^{-1}$ (8)
Figure 2: When the Laplace is doubly and symmetrically truncated, two
distributions that are offset by the worst-case sensitivity, $\Delta$, have
two regions that have no overlap. However, if there is a $p$ probability of
being in the Red distribution, and a corresponding $1-p$ probability of being
in the blue distribution, the total worst-case probability of a realized
outcome in a non-overlapping region remains $\delta$. This allows us to
truncate both tails under approximate differential privacy, and retain a
symmetric distribution.
We now need to determine the mode as a function of $\epsilon$. In conventional
uses of the Laplace, $\mu=0$. In this use, we need $\mu$ to be sufficiently
distant from zero that the integral for $\delta$ is satisfied. At equality,
Equation 7 together with 8 gives us:
$\displaystyle\delta=\frac{\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}}{2\big{[}1-\textrm{exp}\Big{(}\frac{-\mu}{b}\Big{)}\big{]}}\Big{[}\textrm{exp}\Big{(}\frac{\Delta}{b}\Big{)}-1\Big{]}$
$\displaystyle=\frac{\textrm{exp}\Big{(}\frac{-\mu\epsilon}{\Delta}\Big{)}}{2-2\
\textrm{exp}\Big{(}\frac{-\mu\epsilon}{\Delta}\Big{)}}\Big{(}e^{\epsilon}-1\Big{)}\Rightarrow\;\textrm{exp}\Big{(}\frac{-\mu\epsilon}{\Delta}\Big{)}=\frac{2\delta}{2\delta+e^{\epsilon}-1}$
$\displaystyle\Rightarrow\quad\mu$ $\displaystyle=-\frac{\Delta}{\epsilon}\
\textrm{ln}\Big{(}\frac{2\delta}{2\delta+e^{\epsilon}-1}\Big{)}$ (9)
This expression is quite succinct. Note that $e^{\epsilon}-1>0$, so the logarithmic term is always negative, giving us a strictly positive $\mu$ as expected.
#### 3.1.1 Example
If we have $\epsilon=0.5$ and $\delta=10^{-6}$ for a sum query with
sensitivity $\Delta=1$ then this gives us $\mu=25.4$. Thus we would expect to
add 25.4 to the true sum. Given the symmetric truncation of the noise distribution, we could add as little as 0, and we would never add more than twice the mean,
or 50.8. At $\epsilon=1$ the expectation shrinks almost by half to 13.7.
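A short sketch of the doubly truncated Laplace mechanism; the rejection sampler is an illustrative implementation choice, not an optimized one:

```python
import numpy as np

def trunc_laplace_mode(epsilon, delta, sensitivity):
    """Mode mu of the doubly truncated Laplace, Eq. (9)."""
    b = sensitivity / epsilon
    return -b * np.log(2 * delta / (2 * delta + np.exp(epsilon) - 1))

def trunc_laplace_noise(epsilon, delta, sensitivity,
                        rng=np.random.default_rng()):
    """Draw non-negative noise from the Laplace truncated to [0, 2*mu]
    by rejection sampling."""
    b = sensitivity / epsilon
    mu = trunc_laplace_mode(epsilon, delta, sensitivity)
    while True:
        z = rng.laplace(loc=mu, scale=b)
        if 0.0 <= z <= 2.0 * mu:
            return z

# Example from the text: eps=0.5, delta=1e-6, Delta=1 gives mu ~ 25.4.
print(trunc_laplace_mode(0.5, 1e-6, 1))
```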
### 3.2 Truncated Geometric/Discrete Laplace
Many of the use cases of padding involve adding integer counts of dummy users
or events to mask true totals. It is natural then to turn to distributions
over the whole numbers. The geometric mechanism gives a discretized analog to
the Laplace mechanism over the integers, and we can truncate this distribution
similar to how we truncated the Laplace. Symmetric about zero, Kotz et al.
refer to this straightforwardly as the _double geometric distribution_ [7,
p.130]. Consider a mechanism using this distribution:
$\displaystyle M(X)=f(X)+z;\quad z\sim p_{DoubleGeometric}(n)$ (10)

$\displaystyle\textrm{Pr}_{DoubleGeometric}(x|n)=Ae^{-\epsilon|n-x|};\qquad x\in\{0,\ldots,2n\}$ (11)
For some normalizing constant $0<A<1$ and some $n\in\mathbb{N}$.
(a) Example of the double geometric for $n=10$. (b) Two geometric
distributions offset by worst case sensitivity, $\Delta$, set here in this
example to 1. The parameter $\delta$ has to cover either of the tail extremes where the two distributions do not overlap.
Figure 3: Normalization and truncation of the geometric distribution.
As a probability this must sum to 1, which lets us solve for $A$. Let
$r=e^{-\epsilon}$. Then we can rewrite as a classic geometric sequence as:
$\displaystyle 1=\Big{(}2A\sum_{k=0}^{n}e^{-k\epsilon}\Big{)}-A$
$\displaystyle=A\Big{(}-1+2\sum_{k=0}^{n}r^{k}\Big{)}=A\Big{(}-1+2\frac{1-r^{n+1}}{1-r}\Big{)}=A\Big{(}\frac{1+r-2r^{n+1}}{1-r}\Big{)}$
$\displaystyle\Rightarrow\quad A$
$\displaystyle=\frac{1-r}{1+r-2r^{n+1}}=\frac{1-e^{-\epsilon}}{1+e^{-\epsilon}-2e^{-\epsilon(n+1)}}$
(12)
If we have sensitivity $\Delta\in\mathbb{Z}^{+}$, then we need $\delta$ to
cover the tail as:
$\displaystyle\delta$ $\displaystyle\geq
A\sum_{k=n-\Delta+1}^{n}e^{-k\epsilon}$ (13)
For the common case of $\Delta=1$, such as in counting queries of users, at
equality this simplifies (see appendix A) to:
$\displaystyle\delta=Ae^{-n\epsilon}=Ar^{n}$
$\displaystyle=\frac{r^{n}(1-r)}{1+r-2r^{n+1}}$ (14)
$\displaystyle\Rightarrow\quad n$
$\displaystyle=\Big{\lceil}-\frac{1}{\epsilon}\
ln\Big{(}\frac{\delta(1+r)}{1-r+2r\delta}\Big{)}\Big{\rceil}$ (15)
The geometric distribution is unwieldy analytically beyond $\Delta=1$ so it is
natural to consider alternative distributions that do not have to be truncated
to the non-negative integers, but naturally only have support there.
### 3.3 Example
Consider we again target privacy-loss parameters of $\epsilon=0.5$ and
$\delta=10^{-6}$ for a sum or counting query with sensitivity $\Delta=1$. Then
this would solve to $n=25$ as the expected value of the added noise, with the
upper bound being 50.
### 3.4 Poisson
The Poisson distribution is the most common introductory statistical model of
counting processes, and a useful starting point when we consider
conservatively privatizing functions such as counting queries that have
support over non-negative integers.
Consider a mechanism with Poisson noise:
$\displaystyle M(X)$ $\displaystyle=f(X)+z;\quad z\sim
p_{\textrm{Poisson}}(\lambda)$ (16)
$\displaystyle\textrm{Pr}_{Poisson}(y|\lambda)$
$\displaystyle=\frac{\lambda^{y}e^{-\lambda}}{y!};\quad\lambda>0.$ (17)
for some constant rate parameter $\lambda$. We assume $f(.)\in\mathbb{Z}$ has
sensitivity $\Delta\in\mathbb{N}$. Assume two neighbouring datasets are
ordered for convenience such that $f(X)\leq f(X^{\prime})$.
In the region $k\geq f(X)+\Delta$ we know any two neighbouring datasets must
have overlapping release distributions, whose ratio we can define as:
$\displaystyle e^{\epsilon}\geq\max_{X,X^{\prime}}\frac{Pr[M(X^{\prime})=k]}{Pr[M(X)=k]}=\max_{y}\frac{\frac{\lambda^{y}e^{-\lambda}}{y!}}{\frac{\lambda^{y+\Delta}e^{-\lambda}}{(y+\Delta)!}}\quad\Rightarrow\quad\epsilon=-\Delta\,\textrm{log}(\lambda)+\max_{y}\,\textrm{log}\prod_{i=1}^{\Delta}(y+i)$ (18)
However, this rightmost term has no limit as $y\rightarrow\infty$, so the
right tail behavior of Poisson ratios does not collapse in a fashion that can
converge to a limit. One solution is that we can truncate the right tail of
the Poisson so this limit isn’t reached. Another solution would be to
determine the additional $\delta$ that is required to cover this tail
behaviour, as is done for the Gaussian mechanism. However, we instead shift
now to distributions with improved tail behaviour.
### 3.5 Negative Binomial
While the Poisson has an intuitive generative form, the constraint that both
the mean and the variance are directly determined by the same underlying
parameter $\lambda$ can lead to inflexibility in applied settings. There are,
therefore, many practical generalizations of the Poisson that allow the mean
and variance to be decoupled to independent parameters. Moreover, the Poisson
could not meet the differential privacy definition because it does not have a
sub-exponential tail. We turn now to a heavier tailed distribution on counts,
the Negative Binomial, and show it can meet the approximate differential
privacy definition as a noise mechanism.
Consider a mechanism with Negative Binomial noise as:
$\displaystyle M(X)=f(X)+z;\quad z\sim p_{\textrm{NegBin}}(r,p)$ (19)

$\displaystyle\textrm{Pr}_{NegBin}(k|r,p)={k+r-1\choose r-1}(1-p)^{k}p^{r};\quad r\in\mathbb{N},\ 0<p<1.$ (20)
We first show the ratio of the tails converges:
$\displaystyle\max_{X,X^{\prime}}\frac{Pr[M(X^{\prime})=k]}{Pr[M(X)=k]}$
$\displaystyle=\max_{k}\frac{{k+r-1\choose
r-1}(1-p)^{k}p^{r}}{{k+r-1+\Delta\choose
r-1}(1-p)^{k+\Delta}p^{r}}=\max_{k}\frac{(k+r-1)!(k+\Delta)!}{(k+r-1+\Delta)!(k)!}(1-p)^{-\Delta}$
$\displaystyle=(1-p)^{-\Delta}\lim_{k\rightarrow\infty}\frac{\prod_{j=1}^{\Delta}k+j}{\prod_{j=1}^{\Delta}k+r-1+j}=(1-p)^{-\Delta}$
(21)
Which allows us to solve for $\epsilon$ as:
$\displaystyle
e^{\epsilon}\geq(1-p)^{-\Delta}\quad\Rightarrow\quad\epsilon=-\Delta\;\textrm{ln}(1-p)\quad\Leftrightarrow\quad
p=1-\textrm{exp}(-\frac{\epsilon}{\Delta})$ (22)
Note this is roughly $p\approx\epsilon/\Delta$ for $\epsilon/\Delta<1$. Since
$\epsilon$ and $p$ are related, we attempt to fix $\delta$ simply as a
function of the remaining free parameter, $r$. As before:
$\displaystyle\delta\geq\sum_{j=0}^{\Delta-1}Pr_{NegBin}(j|p,r)$ (23)
For the common case where $\Delta=1$, as for example depicted in Figure 4,
this means:
$\displaystyle\delta\geq{r-1\choose
r-1}(1-p)^{0}p^{r}=p^{r}\quad\Rightarrow\quad\delta=p^{r}\quad\Leftrightarrow\quad
r=\Big{\lceil}\frac{\textrm{ln}(\delta)}{\textrm{ln}(p)}\Big{\rceil}$ (24)
Figure 4: Depicted are two negative binomial distributions with the same
parameters, offset by some worst-case sensitivity $\Delta$, where here
$\Delta=1$. The shaded region (exaggerated for visibility) shows the region
with support in only one distribution which has to be covered by $\delta$.
### 3.6 Example
Consider again we have $\epsilon=0.5$, and $\delta=10^{-6}$ for a sum or
counting query with sensitivity $\Delta=1$. This gives us $p=0.39$ from which
we can compute $r=15$. The negative binomial has expectation $r(1-p)/p$, which gives an expected value of 23.2 for the negative binomial mechanism. This is a close but slightly lower number of padded values than using the
truncated Laplace or geometric mechanisms.
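A sketch of the negative binomial mechanism for $\Delta=1$; note that numpy's `negative_binomial` parameterisation (failures before $r$ successes) matches Eq. (20):

```python
import numpy as np

def negbin_params(epsilon, delta, sensitivity=1):
    """Solve Eqs. (22) and (24) for the negative binomial mechanism
    (valid for Delta = 1)."""
    p = 1.0 - np.exp(-epsilon / sensitivity)
    r = int(np.ceil(np.log(delta) / np.log(p)))
    return p, r

def negbin_noise(epsilon, delta, rng=np.random.default_rng()):
    p, r = negbin_params(epsilon, delta)
    return rng.negative_binomial(r, p)

# Example from the text: eps=0.5, delta=1e-6 gives p ~ 0.39, r = 15,
# and an expected noise of r*(1-p)/p ~ 23.2.
p, r = negbin_params(0.5, 1e-6)
print(p, r, r * (1 - p) / p)
```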
### 3.7 Discretized Uniform and Binomial Distributions
Two heuristics for padding of records that we have seen are to add records
drawn from a uniform distribution, or flip a (typically small) number of coins
and add records for every coin that comes up heads. Since we now have a
framework for formalizing the privacy of padding distributions, we briefly
point out the associated DP privacy parameters these heuristics imply.
$\displaystyle Pr_{DisUniform}(k|N)=1/(N+1);\quad k\in\{0,1,\cdots,N\}\quad\Rightarrow\quad\epsilon=0,\quad\delta=\Delta/(N+1)$ (25)

$\displaystyle Pr_{Binomial}(k|N)={N\choose k}0.5^{N};\quad k\in\{0,1,\cdots,N\}\quad\Rightarrow\quad\epsilon=\textrm{ln}{N\choose\Delta},\quad\delta=0.5^{N}\sum_{j=0}^{\Delta-1}{N\choose j}$ (26)
From this perspective we see: (1) the discretized uniform mechanism has a very
large $\delta$ term, unless $N$ is of the same order as the size of the
dataset, (2) the binomial distribution has a very large $\epsilon$ for any $N$
that gives a conventionally sized $\delta$.
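A few lines suffice to compute the privacy parameters these heuristics imply; for example, even $N=1000$ coin flips give $\epsilon=\textrm{ln}\,N\approx 6.9$ at $\Delta=1$:

```python
from math import comb, log

def uniform_heuristic(N, Delta=1):
    """(eps, delta) implied by uniform padding on {0, ..., N}, Eq. (25)."""
    return 0.0, Delta / (N + 1)

def binomial_heuristic(N, Delta=1):
    """(eps, delta) implied by binomial(N, 0.5) padding, Eq. (26)."""
    eps = log(comb(N, Delta))
    delta = 0.5 ** N * sum(comb(N, j) for j in range(Delta))
    return eps, delta

print(uniform_heuristic(1000), binomial_heuristic(1000))
```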
## 4 Application to Private Set Intersection
Private Set Intersection considers the problem where two parties $X$ and $Y$
hold private sets $D_{X},D_{Y}\subset D$ and wish to compute some function on
the intersection of their two sets; this function could i) reveal the
intersection to one or both parties $f(D_{X},D_{Y})=D_{X}\cap D_{Y}$
[meadows1986more, dhoriginal, freedman2004efficient, kissner2005privacy] or
ii) compute the cardinality of the intersection or union
$f(D_{X},D_{Y})=|D_{X}\cap D_{Y}|$ or $f(D_{X},D_{Y})=|D_{X}\cup D_{Y}|$ [8]
or iii) as more recent PSI research has focused on, enable a more general computation on the intersection and the values associated with each record, with the parties learning only this intended output. For instance, [9] appends one of the
sets $D_{X}$ with values $(D_{X},V_{X})$ and computes the sum of values for
intersected records $\sum_{x_{i}=y_{j}}v_{i}$ and [4] computes a sum also
conditioned on a comparison, for $(D_{X},T_{X},V_{X})$ and $(D_{Y},T_{Y})$
compute $\sum_{x_{i}=y_{j},t_{i}\geq t_{j}}v_{i}$.
The intended outputs of $f$ can be secured by various DP methods depending on
the nature of $f$, e.g. making a sum of associated values differentially
private. However, a common issue that many PSI protocols have is an additional
leakage of the intersection size $|D_{X}\cap D_{Y}|$ and union size
$|D_{X}\cup D_{Y}|$ besides the intended outputs. This is the case with the
Private-ID, PS3I, and Private Join and Compute protocols [3, 9, 10].
We assume a PSI protocol will leak intersection and union sizes, and that any
fix should treat the protocol as a black box and operate entirely through the datasets submitted by the parties. This requires both parties to be semi-
honest, which is the assumption already being made in most PSI protocols. To
do this we consider how two parties can each independently draw observations
from a shared pool to pad the private data they submit to PSI. Whenever a
collision occurs, that is, both parties draw the same fictitious observation
from the pool, then the intersection count is padded by one more unit. We want
to ensure (1) that neither party can work out the other’s padded observations
(or the collisions) so they can not reverse engineer the noise addition, and
(2) the number of collisions—which increase the intersection size leaked by
PSI—is guaranteed to form a differentially private distribution and thus
sufficiently mask the true intersection size. To additionally protect the size
of the union we will have both parties draw fictitious records from non-
intersecting pools and append them to their input sets.
Our techniques naturally apply to give output privacy to protocols such as [8]
that compute the cardinality of the intersection and the union. Other PSI
protocols that our techniques can be applied to include the semi-honest
constructions from [2, 3, 11] that compute functions on the intersection while
leaking the size of the intersection in the process. Our techniques are
straightforward to apply as long as the values associated with dummy rows can be assigned appropriately so as not to change the intended output. Since our
construction is semi-honest, it is not compatible with the maliciously secure
private intersection-sum and cardinality protocols of [12].
### 4.1 DP intersection size for both parties
Let $X$ and $Y$ be two parties who have private finite sets
$D_{X},D_{Y}\subset D$ whose intersection has unknown size $I=|D_{X}\cap D_{Y}|$. Let $A_{X}$ and $A_{Y}$ be public finite disjoint sets with no
intersection to each other or $D$. Let $p(\epsilon,\delta)$ be a non-negative
differentially private probability distribution on $\mathbb{N}_{0}$, such as
from those developed in Section 3. We have party $X$ draw an integer
$z_{x}\sim p(\epsilon_{x},\delta_{x})$ and then sample a random subset
$a_{X}\subset A_{X}$ of size $z_{x}$. They then submit a padded dataset
$D_{X}^{+}=\\{D_{X}\cup a_{X}\cup A_{Y}\\}$ to some PSI protocol, that is,
they combine their private data, with their recent sample and the entirety of
the set they did not sample from. In parallel, $Y$ draws $z_{y}\sim
p(\epsilon_{y},\delta_{y})$, samples $a_{Y}\subset A_{Y}$ of size $z_{y}$, and
submits padded data $D_{Y}^{+}=\{D_{Y}\cup A_{X}\cup a_{Y}\}$. The key observation is that the random subset $a_{X}$ that $X$ generates will collide entirely in the PSI protocol, because $Y$ submits the superset $A_{X}$, and vice versa.
To ensure that there are $z_{x}$ or $z_{y}$ elements available to be sampled, we can use the doubly truncated geometric distribution, whose known maximum we use to populate $A_{X}$ and $A_{Y}$ accordingly (otherwise we
need $A_{X}$, $A_{Y}$ to be unbounded sets, which even if approximated
increases communication). When we run PSI on $D_{X}^{+}$ and $D_{Y}^{+}$ we
leak the intersection size, which is $I+z_{x}+z_{y}$, to both parties. Party
$X$ privately knows $z_{x}$ so can subtract that and learn $I+z_{y}$ which is
still $(\epsilon_{y},\delta_{y})$-DP. Correspondingly, party $Y$ can subtract
$z_{y}$ and only learn $I+z_{x}$ which is $(\epsilon_{x},\delta_{x})$-DP. Thus
each party only learns a differentially private answer to the intersection
size, whose privacy-loss is controlled by their adversary.
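The following toy simulation illustrates the collision protocol; the set sizes and the fixed draws $z_{x},z_{y}$ are stand-ins for samples from a non-negative DP distribution of Section 3:

```python
import random

def padded_input(D, A_own, A_other, z):
    """A party pads its set with z random draws from its own public pool
    plus the entirety of the other party's pool (Section 4.1)."""
    return set(D) | set(random.sample(sorted(A_own), z)) | set(A_other)

# Toy universe: disjoint private data and public pools A_X, A_Y of size 2n.
n = 25
D_X = {f"d{i}" for i in range(100)}
D_Y = {f"d{i}" for i in range(60, 160)}          # true intersection size 40
A_X = {f"ax{i}" for i in range(2 * n)}
A_Y = {f"ay{i}" for i in range(2 * n)}

z_x, z_y = 23, 27   # stand-ins for non-negative DP noise draws
Dp_X = padded_input(D_X, A_X, A_Y, z_x)
Dp_Y = padded_input(D_Y, A_Y, A_X, z_y)

leaked = len(Dp_X & Dp_Y)                        # what the PSI protocol reveals
print(leaked == len(D_X & D_Y) + z_x + z_y)      # True: I + z_x + z_y
print(leaked - z_x)                              # X learns only I + z_y
```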
The additional computation and communication is low. Recall that for the
doubly truncated geometric distribution the expectation, $n$, is given in Eq.
15 as a function of $\epsilon$ and $\delta$, and is typically of size 10–100.
For simplicity assume both parties use the same privacy-loss parameters, and
thus the same $n$. Then the sets $A_{X}$ and $A_{Y}$ will be of size $2n$, the
expected padding to $D_{X}^{+},D_{Y}^{+}$ will each be $3n$ and the padding to
$\\{D_{X}^{+}\cap D_{Y}^{+}\\}$ will have expectation $2n$.
Many PSI protocols that reveal the size of the intersection also reveal the
size of the union $|D_{X}\cup D_{Y}|$ as a result of revealing the sizes of
the input sets $D_{X}$ and $D_{Y}$; this reveals the union size since
$|D_{X}\cup D_{Y}|=|D_{X}|+|D_{Y}|-|D_{X}\cap D_{Y}|$. When we apply our above
technique for DP intersection sizes, it does not give a DP protection to the
size of the union, but we can modify it to do so. This is because the size of
the union is revealed when running on our input sets $D_{X}^{+}=\\{D_{X}\cup
a_{X}\cup A_{Y}\\}$ and $D_{Y}^{+}=\\{D_{Y}\cup A_{X}\cup a_{Y}\\}$. The union
size is $|D_{X}^{+}\cup D_{Y}^{+}|=|D_{X}\cup D_{Y}|+2n$ where
$n=|A_{X}|=|A_{Y}|$, and since $n$ is public both parties learn the true size
of $|D_{X}\cup D_{Y}|$.
The reason this does not also leak the size of the intersection through the
relation $|D_{Y}^{+}\cup D_{X}^{+}|=|D_{X}^{+}|+|D_{Y}^{+}|-|D_{X}^{+}\cap
D_{Y}^{+}|$ is that party $X$ (without loss of generality) in the formula
$\displaystyle|D_{Y}^{+}\cup D_{X}^{+}|$
$\displaystyle=|D_{X}^{+}|+|D_{Y}^{+}|-|D_{X}^{+}\cap D_{Y}^{+}|$
$\displaystyle|D_{Y}\cup D_{X}|+2n$
$\displaystyle=(|D_{X}|+z_{x})+(|D_{Y}|+z_{y})-(|D_{X}\cap
D_{Y}|+z_{x}+z_{y})$
knows the value of the left-hand side, and the values of $|D_{X}|$, $z_{x}$, $|D_{Y}|+z_{y}$ and $|D_{X}\cap D_{Y}|+z_{y}$. So moving all the terms $X$
knows to the left side
$\displaystyle|D_{Y}\cup D_{X}|+2n-|D_{X}|$
$\displaystyle=(|D_{Y}|+z_{y})-(|D_{X}\cap D_{Y}|+z_{y})$
we see $X$ cannot solve for any of the values of $|D_{Y}|$, $z_{y}$, or $|D_{X}\cap D_{Y}|$ and can only conclude that $|D_{Y}|-|D_{X}\cap D_{Y}|=v$ where $X$ knows the value of $v$. Said another way, $X$ learns only a differentially private size of party $Y$’s input and of the intersection.
Figure 5: _The common pool of observations with example selections from the collision protocol. Party $X$, whose selections are shown on the right, randomly samples from $A_{X}$ and takes all of $A_{Y}$. Party $Y$, shown on left, randomly samples from $A_{Y}$ and takes all of $A_{X}$. Each party’s random samples are guaranteed to collide in the PSI protocol, creating padded users from the desired differentially private distribution._
### 4.2 DP union size for both parties
We can extend our method to also give differential privacy to union
cardinality by considering another two sets $B_{X}$ and $B_{Y}$ which are
public finite disjoint sets with no intersection to each other or the sets
$D$, $A_{X}$, or $A_{Y}$. Similar to before we let $p(\epsilon,\delta)$ be a
non-negative differentially private probability distribution on
$\mathbb{N}_{0}$ and have party $X$ draw an integer $v_{x}\sim
p(\epsilon_{x},\delta_{x})$ and then sample a random subset $b_{X}\subset
B_{X}$ of size $v_{x}$. They then submit the padded dataset
$D_{X}^{++}=\\{D_{X}\cup a_{X}\cup A_{Y}\cup b_{x}\\}$. In parallel $Y$ draws
$v_{y}\sim p(\epsilon_{y},\delta_{y})$ and samples $b_{Y}\subset B_{Y}$ of
size $v_{y}$ and submits the padded data $D_{Y}^{++}=\\{D_{Y}\cup A_{X}\cup
a_{Y}\cup b_{Y}\\}$. Now the additionally padded sets $b_{x}$ and $b_{Y}$ will
not have any collisions and so will not contribute to the intersection size
but they will contribute to the union size so that $|D_{X}^{++}\cup
D_{Y}^{++}|=|D_{X}\cup D_{Y}|+2n+v_{x}+v_{y}$. These sets are illustrated in
Figure 7 in the Appendix. Thus if a PSI protocol reveals or leaks the size of
the union, Party $X$ knows $v_{x}$ and $n$ and so can subtract to learn
$|D_{X}\cup D_{Y}|+v_{y}$ which is still $(\epsilon_{y},\delta_{y})$-DP.
Correspondingly, party $Y$ can subtract $v_{y}$ to learn $|D_{X}\cup D_{Y}|+v_{x}$ which is still
$(\epsilon_{x},\delta_{x})$-DP. Thus each party only learns a differentially
private answer to the union size whose privacy-loss is controlled by their
adversary.
### 4.3 DP intersection size for one party
In some compute settings, only one party, say $X$, observes the intersection
size from PSI. In this simpler setting we only need to conceal the
intersection size from that party. We can then have $Y$ submit
$D_{Y}^{+}=\\{D_{Y}\cup a_{Y}\\}$, where $a_{Y}$ is generated as before, and
$X$ submit $D_{X}^{+}=\\{D_{X}\cup A_{Y}\\}$.
### 4.4 Integration with the Private-ID protocol
The Private-ID protocol [3] allows the parties to privately compute a
set of pseudorandom universal identifiers (UID) corresponding to the records
in the union of their sets, where each party additionally learns which UIDs
correspond to which items in its set but not if they belong to the
intersection or not. This allows both parties to independently sort their UIDs
and the associated records and feed them to any general purpose MPC that
ignores the non-matching records and computes on the matching ones. This
protocol leaks the sizes of the input sets, union, and intersection to both
parties. Applying our techniques for DP intersection causes the sizes of the input sets and intersection to be differentially private. If we additionally
apply our techniques for DP unions, the size of the union becomes
differentially private. In the downstream usage of the UID, both parties will
input null associated records for the dummy rows created by the DP noise. This
will allow the DP noise added to secure the leakages of the Private-ID
protocol not to affect the actual outcomes computed in any downstream MPC
process, which may have its own separate DP mechanisms. Applying our
techniques to the multi-identifier version of the Private-ID protocol [10] is
a bit more complicated due to the greater leakage in that protocol; we leave
this as future work.
### 4.5 Integration with the PS3I protocol
The PS3I protocol is a decisional Diffie–Hellman (DDH) PSI variant that
attaches additive homomorphic encryptions (HE) of associated values to each
identifier; the output is secret shares of the associated values for matched
records. Both parties learn the sizes of each other’s input sets, the
intersection size, and thus the union size. If we apply our technique for DP
intersection sizes, we make the sizes of the input sets and the intersection
size differentially private, and further if we apply our technique for DP
union sizes, we make the union size differentially private. These additional
dummy rows can be assigned a zero associated value or other null value which
will be secret shared between the two parties and passed to some other
downstream MPC.
### 4.6 Integration with the Private Join and Compute protocol
The Private Join and Compute protocol [2] is similar to the PS3I protocol
except that attached HE encrypted values are not secret shared but rather
joined for users in the intersection. Downstream operations on these encrypted
values are thus limited by the additive HE scheme to being linear operations.
Both parties learn the sizes of each other’s input sets, the intersection size,
and thus the union size. If we apply our technique for DP intersection sizes,
we make the sizes of the input sets and the intersection size differentially
private, and further if we apply our technique for DP union sizes, we make the
union size differentially private. The dummy rows can include homomorphic
encryption of zero so as not to change the result of the downstream additive
HE calculation. Of course, the additive HE could be replaced with fully
homomorphic encryption enabling other additional operations in the downstream,
and depending on the nature of the downstream computation other values might
be included for the null values. For instance, if the downstream computation consisted of multiplications, then encrypting the multiplicative identity, 1, for dummy rows would ensure the downstream result was not affected by the noise.
## 5 Application to MPC side-channel attacks and DP Histograms
The blending of secure multiparty computation and differential privacy (MPDPC)
as interwoven privacy enhancing technologies has the promise to offer privacy
across stores, across computation, and across releases, that is, enabling a
computation to occur across the data of multiple parties while encrypting the
data during computation and offering differentially private answers at the
conclusion (sometimes described as input and output privacy respectively). In
simple settings this blend means the algorithm that is encoded in the MPC game
needs itself to have differentially private outputs, such as noise mechanisms
for the released values. However, in the typical DP threat models
(particularly in the centralized curator model) the “output” is only the final
answers released to the world at the end of the computation, whereas in the
MPC threat model, the act of computation is itself a continuous output under
observation by the adversarial parties. While MPC shields all the data values
and intermediate calculations, it is often susceptible to inference on the
data by side-channels, a risk receiving increasing attention.
Consider an MPC implementation that counts a grand sum of a predicate over
user records. This is a common generic task, as for example in the style of
[4], where a dataset consists of the events associated with each user, and we
are counting the sum of all user events that meet a filter. The grand sum can
be made differentially private by noise addition; however, oftentimes one loops
first over users, and then over those user’s events. Within the MPC
calculation either (1) the storage access pattern, or (2) the total time to
compute the predicate over that user’s records, can directly leak the number
of events for each user.
Thus while MPC encrypts the data, storage access and timing present common
side channels, which are themselves an output (as for example explicitly
considered by [5]). If the goal of MPDPC is to privatize all outputs, then we
need DP promises on all such channels. The main solution to timing attacks and
storage in MPC has been to enforce constant-time/constant-storage
computations, which typically entail lengthening compute time to the worst
case, often at severe efficiency loss.
### 5.1 Non-negative padding for side-channel solution
A partial solution to this is to shuffle the order in which users are
evaluated. This means if we leak compute time or record recall for the first
user in the dataset and determine they had five events, we don’t know which
user had those five events. However, at the end of the grand sum, we will have
witnessed the number of events for each user in the dataset, leaking the
histogram of the number of events. If an adversary can rerun the computation
with a differencing attack, say by removing Alice’s data, then the category in
the histogram that changed will reveal Alice’s data, deterministically.
The constant-time/constant-storage solution to this is to pad all users with
the maximum number of events (but make sure that the fake events fail the
predicate being counted). From the differencing attack perspective, now the
histogram has only one category and no information is revealed. However, the
computational cost of this can be enormous, particularly if a rare number of
important users have very many events.
Instead of padding each user’s events, non-negative DP noise can be used to
generate new users whose data makes the leaked histogram differentially
private. Consider a dataset, $D$, of $N$ individuals who can have up to
$\\#d_{i}\leq K$ events, and assume the histogram of $\\#d$ is leaked by
timing. For each $i=\\{0,\cdots,K\\}$ we draw $j_{i}\sim p(\epsilon,\delta)$
from a non-negative DP noise distribution on $\mathbb{N}_{0}$, and add $j_{i}$
users with $i$ (predicate failing) events to dataset $D$. The resulting
histogram is $(\epsilon,\delta)$-DP and masks the histogram of user events.
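A sketch of this padding step, reusing the `double_geometric_noise` sampler sketched in Section 3.2; dummy users must be given predicate-failing events:

```python
import numpy as np

def pad_event_histogram(event_counts, K, epsilon, delta,
                        rng=np.random.default_rng()):
    """Append dummy users so the leaked histogram of events-per-user is
    (eps, delta)-DP (Section 5.1). Assumes double_geometric_noise from
    the Section 3.2 sketch is in scope."""
    padded = list(event_counts)
    for i in range(K + 1):
        j_i = double_geometric_noise(epsilon, delta, rng)
        padded.extend([i] * j_i)   # j_i dummy users with i events each
    rng.shuffle(padded)            # also shuffle the evaluation order
    return padded

real = [1, 0, 3, 2, 2, 5]          # events per real user, K = 5
print(len(pad_event_histogram(real, K=5, epsilon=0.5, delta=1e-6)))
```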
### 5.2 Example
Assume users are roughly uniformly distributed in number of events across $\{0,\cdots,K\}$. This is conservative, as oftentimes such distributions have modes at zero and heavy tails. A constant-time solution requires each user now to have exactly $K$ events for MPC to cycle over, meaning we have padded the data with $NK/2$ events in expectation. If instead we add DP padded users to each event count, we add $\sum_{i=0}^{K}n\,i=nK(K+1)/2$ padded events in expectation, where $n$ is again the expectation of the DP distribution $p$ as in Eq. 15. For datasets with many more users than possible events, that is $N\gg K$, we have $n(K+1)\ll N$ and the number of padded events—and hence added MPC compute time—is much lower using DP non-negative padded users than using constant-time computation. (We note that this is not a complete analysis
of the timing difference. There might be costs in shuffling the data, which
are not required in the constant-time solution, as well as fixed timing costs
in overhead per user (unrelated to number of events). These can be added for
an exact comparison if the timing difference is close, however, the key logic
remains.)
### 5.3 Shuffle of Private-ID Universal Identifiers
As described in Section 4.4, the Private-ID protocol [3] lets both parties compute pseudorandom universal identifiers (UIDs) for the records in the union of their sets, sort their UIDs and associated records, and feed them to any general-purpose MPC that computes on the matching records. The number of associated records to be passed into the downstream MPC computation varies.
Non-matched records have zero associated records while matched records may
have many records depending on the application. In order to not reveal the
number of records per row, we can apply our padding for DP Histograms as long
as we can perform a shuffle on the Private-ID UIDs. The cost of an oblivious
shuffle of $n$ elements in MPC is often estimated as requiring $cn\log^{2}(n)$ operations, where $c$ is a constant in the range $[4,7]$.
Figure 6: _(a) Constant-time versus DP padded datasets. (b) Privacy-preserving histogram from DP padding. In the left figure we see a dataset padding every user to the worst case. In the center is the dataset plus the padded observations created by adding a DP non-negative number of users for each event count; these are separated in the center for visual clarity, but are shuffled along with the original data on the right. Far fewer padded events (blue items) are required with DP padding. In the right figure we see the privacy-preserving histogram that would result from this solution, broken into the contributions of the original data (gray) and padded data (blue). The resulting blue noise in this DP histogram defeats any differencing attack._
## 6 Prior Work
The combination of padded records and differential privacy in the context of
storage and joins has been recognized as a solution for side-channel attacks.
While we believe our mechanisms are lightweight and straightforward in
construction, and thus well tailored to individual queries, previous papers
have achieved positive padding for database stores in large scale ways that
permit repeated queries. Encrypted storage techniques, such as Oblivious RAM,
have been shown to be susceptible to side-channel attacks on data access
patterns and communication volume [13, 14, 15]. Bogatov et al. [16] considers
a DP sanitization solution that uses a DP tree hierarchy. This effectively
results in buckets of storage each of which have some padded users and combine
to offer a DP guarantee to queries on the store. Relatedly, Xu et al. [17]
implement a DP padding solution for the same problem that uses a shifted and
tail-censored Laplace. This is the closest to anything we present; however, their $\delta$ parameter covers censoring the entire left tail of the Laplace, whereas we use $\delta$ for a narrower purpose. Groce et al. [18]
also improve this Laplace approach within a Bayesian context. Allen et al. [19] use a padding procedure on Oblivious RAM that adds a sufficiently large number of fake records that the standard (zero-centered) $(\epsilon,0)$-DP Laplace mechanism is computationally ensured to produce answers that can be filled with all the true records plus some number of the fake records acting as padding; this is functionally quite similar to the shifted and tail-censored Laplace mechanism recurring in the literature.
Differential privacy has been considered in the context of solving side-
channel leakage in private record linkage by [20, 21, 22]. Within private
record linkage, He et al. [23] explicitly connect this to private set
intersection, using the shifted tail-censored Laplace, and [18] extends this
work. The side-channel they both consider is load estimates in blocks from
hashing.
Any DP noise distribution on $\mathbb{R}$ (or $\mathbb{Z}$) can be converted
to a non-negative distribution on $\mathbb{R}_{\geq 0}$ (or $\mathbb{N}_{0}$)
by procedures like those we used for truncating and shifting the Laplace and
geometric mechanisms. However, some mechanisms from prior work naturally
generate releases on a bounded interval, and like our use of the negative
binomial, these could be more readily converted. Quick [24, 25, 26] has a
differentially private formulation of the Poisson-gamma distribution that is
used to make all DP released counts strictly non-negative, and it would be
straightforward to convert this mechanism from making the release non-negative
to the noise to be non-negative. Similarly, implementations of the Dirichlet
distribution going back to Machanavajjhala et al. [27] could be so converted.
## 7 Conclusion
Differentially private noise mechanisms typically have symmetric noise about
the true sample value. However, there are useful applications where we require
an error with a known sign. Truncations of the Laplace and geometric
distributions, as well as count distributions with subexponential tails, such
as the negative binomial, can be converted into non-negative (or non-positive)
noise mechanisms under approximate differential privacy. Such mechanisms are
particularly appealing for operations where we need to pad the underlying data
with extra observations so as to make any system leakage differentially
private, as in oblivious storage, private set intersection, and timing and
storage side-channels in MPC.
## Appendix A Appendix
The derivation of the $\delta$ constraint for the geometric mechanism can be
shown as:
$\displaystyle Ae^{-n\epsilon}=Ar^{n}=\frac{r^{n}(1-r)}{1+r-2r^{n+1}}<\delta$ (27)
$\displaystyle\Rightarrow\quad r^{n}<\frac{\delta(1+r-2r^{n+1})}{1-r}$ (28)
$\displaystyle\Rightarrow\quad r^{n}\Big{(}1+\frac{2r\delta}{1-r}\Big{)}<\frac{\delta(1+r)}{1-r}$ (29)
$\displaystyle\Rightarrow\quad r^{n}\Big{(}\frac{1-r+2r\delta}{1-r}\Big{)}<\frac{\delta(1+r)}{1-r}$ (30)
$\displaystyle\Rightarrow\quad r^{n}(1-r+2r\delta)<\delta(1+r)$ (31)
$\displaystyle\Rightarrow\quad r^{n}<\frac{\delta(1+r)}{1-r+2r\delta}$ (32)
$\displaystyle\Rightarrow\quad e^{-\epsilon n}<\frac{\delta(1+r)}{1-r+2r\delta}$ (33)
$\displaystyle\Rightarrow\quad n=\Big{\lceil}-\epsilon^{-1}\ln\Big{(}\frac{\delta(1+r)}{1-r+2r\delta}\Big{)}\Big{\rceil}$ (34)
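As a sanity check of Eq. (34), the shift can be computed in a few lines; the
values of $\epsilon$ and $\delta$ below are illustrative only.

```python
import math

def geometric_shift(epsilon: float, delta: float) -> int:
    """Evaluate Eq. (34): the smallest shift n such that the censored
    left-tail mass of the two-sided geometric mechanism is below delta."""
    r = math.exp(-epsilon)
    return math.ceil(-math.log(delta * (1 + r) / (1 - r + 2 * r * delta)) / epsilon)

print(geometric_shift(0.1, 1e-6))  # -> 109
```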
Figure 7: _Union and intersection DP padded inputs._
## References
* [1] G. Kellaris, G. Kollios, K. Nissim, and A. O’Neill, “Accessing data while preserving privacy,” _arXiv preprint arXiv:1706.01552_ , 2017.
* [2] M. Ion, B. Kreuter, A. E. Nergiz, S. Patel, M. Raykova, S. Saxena, K. Seth, D. Shanahan, and M. Yung, “On deploying secure computing: Private intersection-sum-with-cardinality,” Cryptology ePrint Archive, Report 2019/723, 2019, https://ia.cr/2019/723.
* [3] P. Buddhavarapu, A. Knox, P. Mohassel, S. Sengupta, E. Taubeneck, and V. Vlaskin, “Private matching for compute.” _IACR Cryptol. ePrint Arch._ , vol. 2020, p. 599, 2020.
* [4] M. Movahedi, B. M. Case, J. Honaker, A. Knox, L. Li, Y. P. Li, S. Saravanan, S. Sengupta, and E. Taubeneck, “Privacy-preserving randomized controlled trials: A protocol for industry scale deployment,” _arXiv preprint_ , 2021.
* [5] A. Haeberlen, B. C. Pierce, and A. Narayan, “Differential privacy under fire.” in _USENIX Security Symposium_ , vol. 33, 2011.
* [6] C. Covington, J. Honaker, and M. Shoemate, “Smartnoise: Working with unknown dataset sizes,” Notebook, 2020, https://github.com/opendp/smartnoise-samples/blob/master/analysis/unknown_dataset_size.ipynb.
* [7] S. Kotz, T. Kozubowski, and K. Podgorski, _The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance_. Boston: Birkhäuser Basel, 2001.
* [8] E. De Cristofaro, P. Gasti, and G. Tsudik, “Fast and private computation of cardinality of set intersection and union,” in _International Conference on Cryptology and Network Security_. Springer, 2012, pp. 218–231.
* [9] M. Ion, B. Kreuter, A. E. Nergiz, S. Patel, M. Raykova, S. Saxena, K. Seth, D. Shanahan, and M. Yung, “On deploying secure computing: Private intersection-sum-with-cardinality,” Cryptology ePrint Archive, Report 2019/723, 2019, https://eprint.iacr.org/2019/723.
* [10] P. Buddhavarapu, B. M. Case, L. Gore, A. Knox, P. Mohassel, S. Sengupta, E. Taubeneck, and M. Xue, “Multi-key private matching for compute,” Cryptology ePrint Archive, Report 2021/770, 2021, https://ia.cr/2021/770.
* [11] G. Garimella, P. Mohassel, M. Rosulek, S. Sadeghian, and J. Singh, “Private set operations from oblivious switching,” Cryptology ePrint Archive, Report 2021/243, 2021, https://ia.cr/2021/243.
* [12] P. Miao, S. Patel, M. Raykova, K. Seth, and M. Yung, “Two-sided malicious security for private intersection-sum with cardinality,” in _Annual International Cryptology Conference_. Springer, 2020, pp. 3–33.
* [13] G. Kellaris, G. Kollios, K. Nissim, and A. O’Neill, “Generic attacks on secure outsourced databases,” in _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , 2016, pp. 1329–1340.
* [14] P. Grubbs, M.-S. Lacharité, B. Minaud, and K. G. Paterson, “Pump up the volume: Practical database reconstruction from volume leakage on range queries,” in _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_ , 2018, pp. 315–331.
* [15] Z. Gui, O. Johnson, and B. Warinschi, “Encrypted databases: New volume attacks against range queries,” in _Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security_ , 2019, pp. 361–378.
* [16] D. Bogatov, G. Kellaris, G. Kollios, K. Nissim, and A. O’Neill, “$\mathcal{E}\text{psolute}$: Efficiently querying databases while providing differential privacy,” 2021.
* [17] M. Xu, A. Papadimitriou, A. Haeberlen, and A. Feldman, “Hermetic: Privacy-preserving distributed analytics without (most) side channels,” 2019.
* [18] A. Groce, P. Rindal, and M. Rosulek, “Cheaper private set intersection via differentially private leakage,” _Proceedings on Privacy Enhancing Technologies_ , vol. 2019, no. 3, 2019.
* [19] J. Allen, B. Ding, J. Kulkarni, H. Nori, O. Ohrimenko, and S. Yekhanin, “An algorithmic framework for differentially private data analysis on trusted processors,” _Advances in Neural Information Processing Systems_ , vol. 32, pp. 13 657–13 668, 2019.
* [20] A. Inan, M. Kantarcioglu, G. Ghinita, and E. Bertino, “Private record matching using differential privacy,” in _Proceedings of the 13th International Conference on Extending Database Technology_ , 2010, pp. 123–134.
* [21] M. Kuzu, M. Kantarcioglu, A. Inan, E. Bertino, E. Durham, and B. Malin, “Efficient privacy-aware record integration,” in _Proceedings of the 16th International Conference on Extending Database Technology_ , 2013, pp. 167–178.
* [22] J. Cao, F.-Y. Rao, E. Bertino, and M. Kantarcioglu, “A hybrid private record linkage scheme: Separating differentially private synopses from matching records,” in _2015 IEEE 31st International Conference on Data Engineering_. IEEE, 2015, pp. 1011–1022.
* [23] X. He, A. Machanavajjhala, C. Flynn, and D. Srivastava, “Composing differential privacy and secure computation: A case study on scaling private record linkage,” in _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security_ , 2017, pp. 1389–1406.
* [24] H. Quick, “Generating poisson-distributed differentially private synthetic data,” _Journal of the Royal Statistical Society: Series A (Statistics in Society)_ , vol. 184, no. 3, pp. 1093–1108, 2021. [Online]. Available: https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssa.12711
* [25] H. Quick, K. Chen, and D. DeLara, “Comparison of poisson-gamma and laplace mechanisms for differential privacy,” in _2021 Workshop on Theory and Practice of Differential Privacy_ , 2021.
* [26] H. Quick, “Improving the utility of poisson-distributed, differentially private synthetic data via prior predictive truncation with an application to cdc wonder,” _arXiv preprint arXiv:2103.03833_ , 2021.
* [27] A. Machanavajjhala, D. Kifer, J. Abowd, J. Gehrke, and L. Vilhuber, “Privacy: Theory meets practice on the map,” in _2008 IEEE 24th international conference on data engineering_. IEEE, 2008, pp. 277–286.
# Looking forward to photon-coupled long-lived particles IV: neutralino-
ALPino/gravitino
Krzysztof Jodłowski <EMAIL_ADDRESS> Particle Theory and Cosmology
Group, Center for Theoretical Physics of the Universe, Institute for Basic
Science (IBS), Daejeon, 34126, Korea
###### Abstract
Various supersymmetric (SUSY) scenarios predict a sub-GeV neutralino decaying
into a single photon and an invisible state. This signature has recently been
studied in a number of intensity frontier experiments, finding constraints
complementary to the usual collider searches. In this work, we study the
prospects of searches for long-lived neutralinos coupled to an ALPino or
gravitino, where each can act as the lightest SUSY particle (LSP). In addition
to the neutralino decays into a LSP and a photon, we also consider three-body
decays into a pair of charged leptons, and signatures related to scattering
with electrons and secondary neutralino production. For both models, we find
that searches at FASER2 will allow the current bounds to be overcome, while
SHiP will extend these limits by more than an order of magnitude in the value
of the coupling constant.
## I Introduction
Searches for supersymmetric (SUSY) [1, 2, 3, 4, 5] particles with masses close
to the electroweak scale have yielded no conclusive detection thus far [6, 7,
8, 9, 10]. This fact motivates the exploration of alternative possibilities,
including the regime of low-mass neutralinos, possibly connected to other
species by interactions beyond those predicted by the Minimal Supersymmetric
Standard Model (MSSM) [11, 12, 13, 14].
In particular, the neutralino of bino composition is almost unconstrained by
collider searches because its couplings to gauge bosons vanish. Moreover, if
the neutralino decays, and therefore does not constitute dark matter (DM), it
could be very light, possibly in the sub-GeV range [15, 16].
Recent studies, e.g., [17, 18, 19, 20, 21] have investigated such light
neutralinos, which decay into SM states via R-parity-violating
interactions [22, 23, 24]. Consequently, the bino behaves as a long-lived
particle (LLP), making it especially well-suited for intensity frontier
searches [25, 26, 27] looking for light and feebly-interacting particles
beyond the Standard Model (BSM).
Another possibility is that the neutralino is in fact the next-to-lightest
SUSY particle (NLSP), decaying into the LSP and a photon. In this work, we
study two such scenarios, both of them respecting the R-parity.
The first one corresponds to the bino decaying into the SUSY partner of an
axion-like particle (ALP), called the ALPino. This is an attractive BSM
scenario that generalizes the axion models invoked as a solution of the strong
CP problem [28, 29, 30], while the SUSY sector can solve the hierarchy problem
[31, 32, 33, 34] and provide a DM candidate [35, 36].
The second scenario assumes that the LSP is composed of the gravitino, which
is the fermionic partner of the spin-2 graviton. It is predicted by
supergravity theories [37, 13, 38, 39], which incorporate local supersymmetry
transformations. For consistency, they necessarily combine SUSY and general
relativity, while at the same time mitigating the hierarchy problem. Moreover,
in a certain regime, the gravitino can be a DM candidate [40, 41, 42].
We note that the displaced bino decay signature has been investigated in
multiple previous works. In particular, ref. [20] considered bino decays
taking place at various fixed target experiments in the ALPino model,111We
extend their results by considering FASER, MATHUSLA, and NuCal detectors, and
by also investigating the scattering LLP signatures described in Sec. III.2.
while ref. [43], discussing the results of the SLAC beam dump experiment
E-137, considered its decay into gravitino and a photon.222It was an electron
beam dump experiment, so the leading production channel relied on the
t-channel selectron exchange. In light of LEP results on the masses of such
states, the bounds on gravitino mass obtained in [43] are not competitive with
LEP limits [44], while we will show that FASER2 and SHiP may improve them for
neutralinos with masses below $1\,\text{GeV}$. Other relevant works
considering $\sim$sub-GeV neutralinos at the intensity frontier are [45, 46,
47, 48, 49].
Our study is motivated by recent developments of the far-forward LHC detectors
[50], in particular the FASER experiment [51, 52, 53], which began operation
with the LHC Run 3. In particular, a number of papers [54, 55, 21, 56, 57, 58,
59, 60] considered the prospects of multiple BSM scenarios using the signature
of an LLP decaying into a single photon and an invisible state. Compared to
the usual signature of an LLP decay into a pair of charged leptons or photons,
a single high-energy photon is more challenging, since such photons can also
appear in the detector due to muon- or neutrino-induced backgrounds. However,
dedicated studies performed by the FASER collaboration [53] have shown that it
will be sensitive to such decays with the same threshold on the visible energy
deposited in the calorimeter by the decaying LLP; see also the extensive
discussion in [57].
In this work, we study such decays to investigate the prospects of sub-GeV
bino-like neutralino at the LHC-based detectors: FASER2 [61, 62, 63],
FASER$\nu$2 [64, 65, 66, 67], and MATHUSLA [68, 69], as well as in the beam
dump experiment CHARM [70], NuCal [71, 72], and SHiP [73, 74]. We also
consider additional LLP signatures - scattering with electrons and secondary
bino production by upscattering of the LSP with nuclei taking place in front
of decay vessel [75]. In particular, the latter process was shown to allow
FASER2 to cover a significant portion of the shorter-lived LLP regime,
characterized by LLP decay length of $d\sim 1\,\text{m}$, for many BSM
scenarios [57, 58, 59, 60].
The paper is organized as follows. In Sec. II, we introduce the SUSY models
for low-energy PQ or SUSY breaking scales. We describe how such a regime
naturally leads to long-lived neutralinos in both scenarios. In Sec. III, we
discuss the production mechanisms of light SUSY species - neutralino and LSP -
in intensity frontier searches. We also introduce signatures involving LLPs,
and we describe details of their simulation. In Sec. IV, we discuss our
results. The main one is the sensitivity reach of each detector considered for
both neutralino decay scenarios. In Sec. V, we summarize our findings.
Figure 1: Modes of neutralino production by meson decays as a function of
$m_{\tilde{\chi}_{0}}$ for ALPino (left) and gravitino (right). The decays
into neutralino-LSP depending on $f_{a}$ or $F_{\mathrm{SUSY}}$ are denoted by
solid lines, while decays into a pair of neutralinos, which are independent of
such couplings, are indicated by dotted lines; color coding indicates
contributions of each meson according to the legend. In fact, for ALPino, the
$f_{a}$-independent decays dominate for the allowed values of $f_{a}$, while
for gravitino, the decays depending on $F_{\mathrm{SUSY}}$ are the leading
ones.
## II Models
Since we are interested in the MeV-GeV mass range of a neutralino acting as
the NLSP, we assume that it consists of pure bino, while the LSP is either the
ALPino or the gravitino.333We note that a combined scenario with both ALPino
and gravitino is possible and has interesting cosmological implications - see,
e.g., [76, 77, 78]. In this scenario, the NLSP decay into an ALP and the LSP
is governed by the SUSY-breaking coupling, while the ALP decays into SM states
via PQ-scale-suppressed couplings. Clearly, the LLP phenomenology crucially
depends on the interplay between these two scales, which we leave for future
study. In both
of these models, the bino-LSP-photon coupling is the most relevant to our
analysis, while other interactions - or specifics of mass spectrum of other
SUSY states - play only a marginal role; see also the discussion in Sec.
III.1.
Such a coupling is responsible not only for bino decays,
$\tilde{\chi}_{0}\to\mathrm{LSP}+\gamma$, but also for the leading production
mechanism of $\tilde{\chi}_{0}$-$\mathrm{LSP}$ pairs - vector meson decays.
Moreover, it also governs the efficiency of the LSP-electron or LSP-nucleus
upscattering processes that result in NLSP production. When such secondary
production takes place in proximity to the decay vessel, it may allow the
sensitivity reach to be extended to shorter-lived LLPs [75].
The main similarity between our two SUSY scenarios is that the operator
responsible for the $\tilde{\chi}_{0}$-$\mathrm{LSP}$-$\gamma$ coupling is a
mass dimension-5 operator, hence it is suppressed by the New Physics energy
scale. In fact, this is the PQ breaking scale $f_{a}$ for the ALPino, and the
SUSY breaking scale $\sqrt{F_{\mathrm{SUSY}}}$ for the gravitino. Both of
these parameters, along with the mass of the neutralino, are among the free
parameters in our analysis.
On the other hand, while the ALPino mass is not necessarily strictly related
to the SUSY or PQ breaking scales, and thus depends on the specifics of the
SUSY scenario considered [79, 78, 80], the gravitino mass is fixed by the SUSY
breaking scale,
$m_{\tilde{G}}=F_{\mathrm{SUSY}}/(\sqrt{3}\,m_{\mathrm{Pl.}})$, where
$m_{\mathrm{Pl.}}=\sqrt{\hbar c/(8\pi G_{N})}=2.4\times 10^{18}\,\text{GeV}$
is the reduced Planck mass, and $G_{N}$ is Newton’s gravitational constant.
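For orientation - a worked example rather than a constraint derived here - the
benchmark value $\sqrt{F_{\mathrm{SUSY}}}=60\,\text{GeV}$ appearing in Eq. 5
below corresponds to an extremely light gravitino,
$\displaystyle m_{\tilde{G}}=\frac{(60\,\text{GeV})^{2}}{\sqrt{3}\times 2.4\times 10^{18}\,\text{GeV}}\simeq 8.7\times 10^{-16}\,\text{GeV}\simeq 0.9\,\mu\text{eV}.$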
### II.1 Neutralino-ALPino
The relevant part of the Lagrangian is [81, 82, 83]
$\displaystyle\\!\\!\mathcal{L}$
$\displaystyle\supset\frac{\alpha_{\mathrm{EM}}C_{a\gamma\gamma}}{16\pi
f_{a}}\overline{\tilde{a}}\gamma^{5}\left[\gamma^{\mu},\gamma^{\nu}\right]\tilde{\chi}_{0}F_{\mu\nu},$
(1)
where $\tilde{a}$ and $\tilde{\chi}_{0}$ denote the ALPino and neutralino
fields, respectively, $F_{\mu\nu}$ is the electromagnetic (EM) field strength
tensor, $\alpha_{\mathrm{EM}}$ is the fine structure constant,
$C_{a\gamma\gamma}\sim O(1)$ is a mixing constant that depends on the ALP
scenario [35, 36], and $f_{a}$ denotes the PQ breaking scale.
The ALPino mass is in general a model-dependent quantity [79, 78, 80], so it
essentially acts as a free parameter. However, since the value of the ALPino
mass will not significantly affect our discussion (as long as it is
significantly smaller than the neutralino mass and does not cause large phase
space suppression of the NLSP decay width), we follow [20] and set
$m_{\tilde{a}}=10\,\text{MeV}$.
Ref. [20] considered the prospects of bino decays into ALPino and photon in
various beam dump experiments, in which the NLSP production and decay points
are separated by a large distance, typically $L\sim 100\,\text{m}$. This
length scale largely determines the LLP decay length that can be probed in
such detectors [43, 84]. In the case of a sub-GeV bino, the following
benchmark corresponds to a short-lived NLSP that can be covered in this way:
$\displaystyle d_{\tilde{\chi}_{0}}\simeq$
$\displaystyle\,100\,\text{m}\times\left(\frac{E}{1000\,\text{GeV}}\right)\left(\frac{0.1\,\text{GeV}}{m_{\tilde{\chi}_{0}}}\right)^{4}\left(\frac{f_{a}}{30\,\text{GeV}}\right)^{2},$
(2)
where $d_{\tilde{\chi}_{0}}=c\tau\beta\gamma$, $\tau=1/\Gamma$ is the bino
lifetime, $\gamma=E/m$ is the boost factor in the laboratory reference frame,
and $\beta=\sqrt{1-1/\gamma^{2}}$.
The lifetime of a sub-GeV bino is determined by two-body decays given by Eq.
10, while three-body decays mediated by an off-shell photon typically
contribute less than a percent, see Eq. 11.
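For convenience, the scaling relation of Eq. 2 can be evaluated directly; the
sample inputs in the sketch below are illustrative.

```python
def alpino_decay_length_m(E_GeV: float, m_chi_GeV: float, f_a_GeV: float) -> float:
    """Lab-frame bino decay length in the ALPino scenario, from the
    scaling relation of Eq. 2 (normalized to 100 m)."""
    return 100.0 * (E_GeV / 1000.0) * (0.1 / m_chi_GeV) ** 4 * (f_a_GeV / 30.0) ** 2

print(alpino_decay_length_m(500.0, 0.2, 100.0))  # ~ 35 m
```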
### II.2 Neutralino-gravitino
The Lagrangian is [85, 38, 86]444Feynman rules for this model are given in
ref. [87].
$\displaystyle\\!\\!\mathcal{L}$
$\displaystyle\supset-\frac{i}{8m_{\mathrm{Pl.}}}\bar{\psi}_{\mu}[\gamma^{\rho},\gamma^{\sigma}]\gamma^{\mu}\tilde{\chi}_{0}F_{\rho\sigma},$
(3)
where $\psi_{\mu}$ denotes the gravitino wavefunction, and the Lorentz index
indicates the spin-$\frac{3}{2}$ character of the field.
Compared to the ALPino model, the gravitino mass is not a free parameter.
Instead, it is determined by the SUSY breaking energy scale through the
super-Higgs mechanism [37, 88, 89, 38, 90]. As a result, the gravitino mass is
$m_{\tilde{G}}=F_{\mathrm{SUSY}}/(\sqrt{3}\,m_{\mathrm{Pl.}})$.
Moreover, due to the SUSY Equivalence Theorem [91], the gravitino wavefunction
can be approximated at high energies as follows:555In our calculations, we
instead take into account all gravitino degrees of freedom through the
gravitino polarization tensor given by Eq. 13.
$\displaystyle\psi_{\mu}\simeq
i\sqrt{\frac{2}{3}}\frac{\partial_{\mu}\psi}{m_{\tilde{G}}},$ (4)
where $\psi$ is the spin-$\frac{1}{2}$ goldstino absorbed by the gravitino. As
a result, even though gravitino interactions are suppressed by the Planck mass
(due to its character as a SUSY partner of the graviton), cf. Eq. 3, the
massive gravitino compensates this suppression by the
$\frac{1}{m_{\tilde{G}}}$ factor. Effectively, the bino-gravitino-photon
coupling is therefore proportional to the inverse of the SUSY breaking scale,
$1/F_{\mathrm{SUSY}}$, instead of $1/m_{\mathrm{Pl.}}$.
For $\sim$sub-GeV neutralinos, the long-lived regime corresponds to low-energy
SUSY breaking scales,
$\displaystyle d_{\tilde{\chi}_{0}}\simeq$
$\displaystyle\,100\,\text{m}\times\left(\frac{E}{1000\,\text{GeV}}\right)\left(\frac{0.1\,\text{GeV}}{m_{\tilde{\chi}_{0}}}\right)^{5}\left(\frac{F_{\mathrm{\mathrm{SUSY}}}}{(60\,\text{GeV})^{2}}\right)^{2},$
(5)
where $d_{\tilde{\chi}_{0}}$ is the bino decay length in the laboratory
reference frame. Its lifetime is determined by decays into gravitino and
photon, while decays into gravitino and $e^{+}e^{-}$ pair are suppressed, cf.
Eqs. 12 and 15; see the bottom panels of Fig. 3.
## III Neutralino at intensity frontier searches
### III.1 Neutralino-LSP production modes
Light neutralinos with masses below the mass of a proton can be efficiently
produced in rare meson decays. These mesons are generated in large numbers in
both proton-proton (p-p) collisions at the LHC and proton-target collisions in
beam dump experiments.
The branching ratios of vector and pseudoscalar meson decays into a pair of
neutralinos taking place via $t$-channel squark exchange have been calculated
in [92, 48]. In fact, for the ALPino model, ref. [20] - see Eq. 9 and the
discussion therein - used these results assuming the following mass spectrum:
the squark masses were set to $m_{\tilde{q}}=3\,\text{TeV}$, while the masses
of the other SUSY particles were fixed at $10\,\text{TeV}$.666We adopt this
SUSY mass spectrum in the following discussion. Reducing the value of any of
these masses would not have a significant effect on our results, except for
the squark masses, which affect bino pair production; lowering them would
increase the detectors' sensitivity to the ALPino model, while typically not
affecting the gravitino scenario. As discussed in Sec. IV.1, this production
channel allows for coverage of a sizable part of the available parameter
space.
On the other hand, in a number of BSM scenarios with a higher-dimensional LLP
coupling to a photon, the leading LLP production channels are vector meson
decays mediated by an off-shell photon, see, e.g., [93, 94, 59]. Therefore, we
also calculated the branching ratios of such decays, which are described by
Eq. 16 for the two SUSY scenarios considered. We also considered the phase-
space suppressed decays of pseudoscalar mesons - the corresponding
differential branching ratios are given by Eq. 17.
As shown in Fig. 1, the leading decay channel among the photon-mediated decays
is $J/\psi$ (gold solid line). It is the heaviest meson among those produced
in sufficiently large quantities - hence our result is consistent with the
discussion in Sec. III in [94], which states that the vector meson branching
ratio is approximately proportional to the mass squared of the meson. The same
relation holds for ALPino, while for gravitino, this branching ratio actually
depends on the fourth power of the meson mass, cf. top and bottom lines of Eq.
16.
For the ALPino model, we found that indeed the dominant neutral meson decays
are those mediated by the exchange of heavy squarks, which are thus
independent of $f_{a}$, and result in the production of a neutralino pair.
These contributions are denoted by dotted lines in Fig. 1, and for the allowed
region of the parameter space, $f_{a}\gtrsim 200\,\text{GeV}$, they clearly
overtake the photon-mediated decays (indicated by solid lines).
On the other hand, for the gravitino model, in the range of allowed SUSY
breaking scales relevant for intensity frontier searches,
$200\,\text{GeV}\lesssim\sqrt{F_{\mathrm{SUSY}}}\lesssim 3\,\text{TeV}$, the
photon-mediated decays dominate - see the right panel in Fig. 1. Such a
dependence can be explained by comparing the Lagrangians of the two models,
given by Eqs. 1 and 3, respectively - the photon-coupling in the ALPino
scenario has an additional factor of $\alpha_{\mathrm{EM}}/(2\pi)\simeq
10^{-3}$ with respect to the gravitino scenario.
We also checked that the pair production of neutralinos occurring in p-p
collisions - either at the LHC or at beam dumps - does not improve the FASER
or SHiP sensitivity towards the larger mass regime, $m_{\tilde{\chi}_{0}}\gg
1\,\text{GeV}$. The former detector has too small an angular coverage, while
the beam energy of the latter is too low to produce heavy neutralinos in
sufficient abundance. On the other hand, this production channel was found to
allow MATHUSLA to cover long-lived very heavy neutralinos,
$m_{\tilde{\chi}_{0}}\gtrsim 100\,\text{GeV}$ - see the discussion in the
MATHUSLA physics case paper [69]. The difference between MATHUSLA and FASER is
that the former has a much larger decay vessel volume, is closer to the
interaction point, and is placed highly off-axis. As a result, it covers a
much larger part of the solid angle and probes LLPs produced with large
transverse momentum $p_{T}$, while FASER is positioned in the far-forward
direction.
Finally, we implemented the equations describing bino-LSP pair production,
Eqs. 16 and 17, and also the production of bino pairs, Eq. 9 from [20], in a
modified $\tt FORESEE$ package [95]. We use it to generate the spectrum of the
bino NLSP, which is then used to simulate bino decays taking place in each
detector considered, as well as the other signatures described in Sec. III.2.
We note that the opposite mass hierarchy between the bino and the ALPino or
gravitino is also possible, and our simulation is adapted to such a case as
well.
### III.2 Experiments and LLP signatures
In this section, we briefly describe the intensity frontier detectors
sensitive to single-photon LLP decays. We also introduce the main signatures
of LLPs that we use to study our two SUSY scenarios.
The characteristics of both of these topics have been discussed at length in
Sec. III in [59] for the dark axion portal, which is characterized by similar
phenomenology. In particular, in that scenario, the dark sector (DS) states
are connected to the SM by a dimension-5 coupling to a photon. Therefore, the
following presentation is brief, while details can be found in the
aforementioned work.
#### Experiments
We consider a number of intensity frontier experiments in order to take
advantage of their different features that may allow complementary coverage of
the parameter space. The specifics of each detector are given in Tab. 1 in
[59]. We use these parameters in our simulation.
Among the beam dumps, these are: CHARM [70], NuCal [71, 72], and SHiP [73,
74]. We also study detectors dedicated to LLP searches at the LHC, such as
FASER2 [61, 62, 63], FASER$\nu$ [64, 65], and MATHUSLA [68, 69].
Moreover, as a result of the efforts of the LHC-based LLP community, another
separate facility, called the Forward Physics Facility (FPF) [96, 67, 97], has
been proposed. It would contain not only a much larger version of the FASER2
detector, but it would also house additional ones. The detectors relevant to
our analysis are FASER$\nu$2 [66, 67] and FLArE [66].
All of these detectors use beams of protons that hit a stationary, dense
target in beam dumps or collide with other protons at the LHC. Since the
energy at the LHC is several orders of magnitude greater than that obtainable
in beam dumps, FASER, MATHUSLA and other such detectors probe much more
boosted LLPs than fixed target experiments. On the other hand, the luminosity
of the proton beam in dedicated beam dump experiments is significantly higher
than at the LHC. This usually results in their deeper reach toward smaller LLP
coupling values.
#### Bino decays
Our main LLP signature is bino decays occurring inside a decay vessel that is
separated from the LLP production point by a distance $L\sim 100\,\text{m}$.
Such a large separation, together with dedicated infrastructure, allows the SM
background to be largely removed, except perhaps for neutrinos or muons,
depending on the detector design.
Such decays take place with the following probability:
$\displaystyle p(E)=e^{-L/d}(1-e^{-\Delta/d}),$ (6)
where $\Delta$ is the detector length and $d=d(E)$ indicates the LLP decay
length in the laboratory reference frame.
It is well known that the distance $L$ sets the length scale for the minimal
decay length $d$ that can be probed in such a way [43, 98, 84]. When $d\ll L$,
$p(E)\simeq e^{-L/d}$, while in the opposite regime, $d\gg L$,
$p(E)\simeq\Delta/d$ [99, 26].
Consequently, only sufficiently long-lived species can be probed. Since the
detectors introduced in the previous paragraphs cover a wide range of the distance
$L$, some variation in their sensitivity to LLP decays can be expected.
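To make Eq. 6 and its two limiting regimes concrete, the snippet below
evaluates the decay probability; the geometry $L=480\,\text{m}$,
$\Delta=5\,\text{m}$ is an illustrative far-forward setup, not an official
detector specification.

```python
import math

def decay_probability(L: float, Delta: float, d: float) -> float:
    """Eq. 6: probability that an LLP with lab-frame decay length d decays
    inside a vessel of length Delta placed a distance L from production."""
    return math.exp(-L / d) * (1.0 - math.exp(-Delta / d))

L, Delta = 480.0, 5.0                       # illustrative geometry [m]
print(decay_probability(L, Delta, 10.0))    # d << L: exponentially suppressed
print(decay_probability(L, Delta, 5000.0))  # d >> L: p ~ Delta/d ~ 1e-3
```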
#### LSP-NLSP upscattering
LLPs connected to the SM by a photon can also be efficiently produced by
upscattering process of the lighter DS species into the heavier, unstable one
- see Fig. 1 in [75] for an illustration, and [75, 57, 59, 60] for dedicated
studies.
Such scattering is particularly efficient in the coherent regime,
characterized by a low momentum transfer of the off-shell photon, in which the
LSP scatters off the nucleus as a whole.
In fact, assuming a commonly used form of the form factor - given in Eq. B2 in
[58] - which accounts for the partial screening of the nucleus by the atomic
electrons, one can obtain closed forms of the upscattering cross section,
$\mathrm{LSP}+N\to\mathrm{NLSP}+N$:
$\displaystyle\sigma_{\tilde{a}N\to\tilde{\chi}_{0}N}\simeq\frac{\alpha_{\mathrm{EM}}^{3}\cos^{2}\theta_{W}Z^{2}}{16\pi^{2}f_{a}^{2}}\left(\log\left(\frac{d}{1/a^{2}-t_{\mathrm{max}}}\right)-2\right),$ (7)
$\displaystyle\sigma_{\tilde{G}N\to\tilde{\chi}_{0}N}\simeq\frac{\alpha_{\mathrm{EM}}\cos^{2}\theta_{W}Z^{2}}{2F_{\mathrm{SUSY}}^{2}}\left(d+m_{\tilde{\chi}_{0}}^{2}\left(\log\left(\frac{d}{1/a^{2}-t_{\mathrm{max}}}\right)-2\right)\right),$
where $N$ is the nucleus, $m_{e}$ is the electron mass, $Z$ and $A$ are the
atomic number and the mass number of the nucleus, respectively,
$a=111Z^{-1/3}/m_{e}$, $d=0.164\,\text{GeV}^{2}A^{-2/3}$, and
$t_{\mathrm{max}}\simeq-(m_{\mathrm{NLSP}}^{4}+m_{\mathrm{LSP}}^{4})/(4E_{\mathrm{LSP}}^{2})$.
We obtained these formulas by integrating the differential cross section
following the method used in [100] for photophilic ALP. In the calculation of
the squared amplitude, we included only the leading diagrams involving photon
exchange. In particular, we neglected the diagrams with sleptons in the
$t$-channel, which are negligible for slepton masses in the range of a few
hundred GeV.
In our case, the production process takes place on the tungsten (W) layers of
the emulsion detector FASER$\nu$2, located upstream of FASER2, which acts as
the main decay vessel. For $m_{\tilde{\chi}_{0}}=0.1\,\text{GeV}$, such
upscattering depends on the coupling constant in the following way:
$\displaystyle\sigma^{\text{W}}_{\tilde{a}N\to\tilde{\chi}_{0}N}\simeq\frac{1.5\times 10^{-4}}{\mathrm{GeV}^{2}}\times\left(\frac{\mathrm{GeV}}{f_{a}}\right)^{2},$ (8)
$\displaystyle\sigma^{\text{W}}_{\tilde{G}N\to\tilde{\chi}_{0}N}\simeq\frac{3}{\mathrm{GeV}^{2}}\times\left(\frac{1\,\mathrm{GeV}^{2}}{F_{\mathrm{SUSY}}}\right)^{2}.$
We note that the suppression of the cross section for the ALPino is again
caused by the $\alpha_{\mathrm{EM}}/(2\pi)\simeq 10^{-3}$ factor in the
Lagrangian given by Eq. 1 with respect to the gravitino scenario. Moreover,
even in the latter case, the upscattering cross section is not very large:
comparing Eq. 8 with the result obtained in [100] for the photophilic ALP, see
Eq. 15 therein, it is smaller by a factor of $\sim 30$.
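The tungsten estimates in Eq. 8 can be reproduced by directly evaluating
Eq. 7; in the sketch below, $\cos^{2}\theta_{W}\simeq 0.77$ and the LSP energy
$E=1000\,\text{GeV}$ entering $t_{\mathrm{max}}$ are assumed values, and the
prefactor shifts at the tens-of-percent level with the assumed energy.

```python
import math

ALPHA, COS2W, ME = 1 / 137.036, 0.77, 0.000511  # alpha_EM, cos^2(theta_W), m_e [GeV]

def sigma_alpino_on_tungsten(f_a: float, m_chi: float = 0.1, E: float = 1000.0) -> float:
    """Top line of Eq. 7 evaluated on tungsten (Z = 74, A = 184), in GeV^-2;
    the LSP mass is neglected in t_max, where it enters only at fourth power."""
    Z, A = 74, 184
    a = 111 * Z ** (-1 / 3) / ME        # atomic screening length [GeV^-1]
    d = 0.164 * A ** (-2 / 3)           # nuclear form-factor scale [GeV^2]
    t_max = -m_chi ** 4 / (4 * E ** 2)
    log_term = math.log(d / (1 / a ** 2 - t_max)) - 2
    return ALPHA ** 3 * COS2W * Z ** 2 / (16 * math.pi ** 2 * f_a ** 2) * log_term

print(sigma_alpino_on_tungsten(1.0))  # ~ 1.5e-4 GeV^-2, cf. the top line of Eq. 8
```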
In the next section, we also describe the FASER2 sensitivity reach obtained by
secondary neutralino production at FASER$\nu$2, followed by its decay inside
either the neutrino detector or the main decay vessel. Then, instead of Eq. 6,
we use Eq. 6 in [59] to obtain the probability of such a decay - for details,
see the discussion therein.
In addition to the coherent scattering, incoherent scattering with protons or
electrons is also possible. In fact, the latter signature is particularly
useful for sub-GeV BSM species and it has been investigated in many papers,
see, e.g., [101, 102, 66, 103].
Proceeding analogously to the procedure used to derive Eq. 7, we obtained the
following closed forms of the electron upscattering cross sections:
$\displaystyle\sigma_{\tilde{a}e^{-}\to\tilde{\chi}_{0}e^{-}}\simeq\frac{\alpha_{\mathrm{EM}}^{3}\cos^{2}\theta_{W}}{16\pi^{2}f_{a}^{2}}\times\log\left(\frac{E_{R}^{\mathrm{max}}}{E_{R}^{\mathrm{min}}}\right),$ (9)
$\displaystyle\sigma_{\tilde{G}e^{-}\to\tilde{\chi}_{0}e^{-}}\simeq\frac{\alpha_{\mathrm{EM}}\cos^{2}\theta_{W}}{2F_{\mathrm{SUSY}}^{2}}\times\left(2m_{e}(E_{R}^{\mathrm{max}}-E_{R}^{\mathrm{min}})+m_{\tilde{\chi}_{0}}^{2}\log\left(\frac{E_{R}^{\mathrm{max}}}{E_{R}^{\mathrm{min}}}\right)\right).$
For this signature, instead of Eq. 6, we used Eq. 8 in [59]. Since the
scattering signature is affected by a large neutrino background, we adopt the
angular and energy cuts found in [66]; see also Tab. 1 in [59].
Figure 2: The sensitivity of FASER2 to neutralino decays into ALPino and
photon for fixed $m_{\tilde{a}}=10\,\text{MeV}$. The FPF version of the
detector will exceed the current bounds set by NuCal and LEP due to its larger
size compared to the baseline version of FASER2.
Figure 3: The sensitivity of FASER2, MATHUSLA, and SHiP to the neutralino-
gravitino model. Leading two-body decays will allow FASER2 (solid black line)
to partly extend the LEP bound, while FPF FASER2 will even reach the LHC
(model-dependent) bound. We also present results for three-body neutralino
decays at MATHUSLA (brown) and FASER2 (red solid line), which cover high and
low $p_{T}$ regimes of LLPs produced in p-p collisions at the LHC,
respectively. Secondary neutralino production extends the sensitivity of
FASER2 (black dashed and dot-dashed lines) into the short-lived, higher mass
regime, while electron scattering at FASER$\nu$2 and FLArE (solid and dotted
gold lines, respectively) covers the lower mass regime, which, however, are
both already excluded by LEP.
## IV Results
### IV.1 Sensitivity reach for ALPino
In Fig. 2, we present our results for the scenario in which the ALPino is the
LSP. For beam dump experiments, we find agreement with the results of [20]. We
consider an additional detector of this type, NuCal, and find that it actually
improves on the NOMAD [104] sensitivity shown in that work.
Since the leading neutralino production channel is the $f_{a}$-independent
meson decay into a pair of binos, there is hardly any flux of ALPinos - see
the discussion in Sec. III.1, in particular the left panel of Fig. 1. As a
result, neither the secondary production given by the top line of Eq. 7, nor
the upscattering on electrons, given by the top line of Eq. 9, is efficient.
Consequently, FASER2 will not have sensitivity to these signatures.
On the other hand, bino-pair production can be quite efficient. While the
baseline version of FASER2, taking data during the High-Luminosity era of the
LHC, will not improve on NuCal (though its sensitivity is greater than that of
NOMAD), FPF FASER2 will extend the coverage in the
$m_{\tilde{\chi}_{0}}\simeq 0.1\,\text{GeV}$ mass regime. For decays of LLPs
coming from primary production, its main advantage over the baseline version
is simply its larger size.
### IV.2 Sensitivity reach for gravitino
On the other hand, when the gravitino acts as the LSP, the dominant production
modes produce equal fluxes of gravitinos and neutralinos, allowing the
additional upscattering signatures described in Sec. III.2. In fact, contrary
to the ALPino scenario, both neutralino production and decays are controlled
by the NLSP-LSP-photon coupling, which here depends on the SUSY breaking scale
as $1/F_{\mathrm{SUSY}}$.
This allows one to search not only for the displaced $\tilde{\chi}_{0}$
decays, but also for the electron scattering signature and for the decays of
$\tilde{\chi}_{0}$ produced by upscattering occurring at the FASER$\nu$2
detector located before FASER2.
In Fig. 3, we present our main results for this model. The areas shaded in
gray are excluded by NuCal or LEP [44, 105]. We also indicate a model-
dependent bound from the LHC [106, 107] by a dashed gray line.
As mentioned earlier, we consider two versions of the FASER2 detector - the
results for the baseline version are in the left panel, while the results for
FPF FASER2 are in the right panel. The sensitivity lines derived for the two-
body bino decays are marked by black lines for FASER and green for SHiP, while
those for three-body decays are indicated by red (FASER2) and brown (MATHUSLA)
lines. The sensitivity lines correspond to the number of bino decays (the
number of LLP signature events in the general case) given in Tab. 1 in [59]
for each detector considered.
As is clearly seen, FASER2 will be able to significantly extend the LEP limit
in the $m_{\tilde{\chi}_{0}}\gtrsim 0.1\,\text{GeV}$ mass range, while searches
for $e^{+}e^{-}$ pairs produced in the three-body decays at MATHUSLA and
FASER2 will be competitive with current LEP and NuCal bounds. Moreover, the
FPF version of FASER2 may even reach the strongest limit on light gravitinos
coming from the LHC.
The upscattering signatures make it possible to cover the shorter lifetime
regime, $d_{\tilde{\chi}_{0}}\sim 1\,\text{m}$, which, however, is already
excluded by LEP for both locations of the bino decays: FASER2 (black dashed
line) and FASER$\nu$2 (black dot-dashed line). Finally, the electron
scattering signature covers the low-mass region of the bino, which, however,
is also already excluded.
## V Conclusions
The neutralino can act as an LLP decaying into a single photon and the LSP in
various SUSY scenarios. Constraining such scenarios can be challenging in
high-energy detectors designed for much heavier and shorter-lived BSM species.
However, searches at the intensity frontier are well suited to this regime.
We found that FASER2 and SHiP, which are particularly well suited to such a
difficult signature [108, 57, 20, 21], will be able to meaningfully extend the
current constraints on the low-energy PQ or SUSY breaking scale in two
scenarios involving sub-GeV neutralinos. In the first, the LSP is the ALPino,
while in the second, the gravitino plays that role.
For the gravitino model, we also investigated additional LLP signatures:
secondary neutralino production due to upscattering taking place at the
FASER$\nu$2 detector in front of the main decay vessel, FASER2, and three-body
decays depositing visible energy into an $e^{+}e^{-}$ pair.
Finally, we considered the extended version of the FASER2 experiment, the
proposed FPF. Due to its larger size, the FPF scenario of FASER2 may reach the
strongest limit on light gravitinos coming from the LHC, while SHiP, due to
its higher luminosity, may improve them further.
###### Acknowledgements.
This work was supported by the Institute for Basic Science under the project
code, IBS-R018-D1.
## Appendix A Neutralino decays
In these appendices, we give the relevant decay widths and cross sections,
which we used in our simulation.
As in the rest of our paper, we assume the neutralino is composed of pure
bino, which leads to an additional factor of $\cos\theta_{W}$ - where
$\theta_{W}$ is the Weinberg angle - when reading the Feynman rules described
by the Lagrangians given by Eqs. 1 and 3.
### A.1 ALPino
The two-body decay width for bino decaying into an ALPino and a photon is [20]
$\displaystyle\Gamma_{\tilde{\chi}_{0}\to\tilde{a}\gamma}=\frac{\alpha_{\mathrm{EM}}^{2}\cos^{2}\theta_{W}}{128\pi^{3}}\frac{m_{\tilde{\chi}_{0}}^{3}}{f_{a}^{2}}\left(1-\frac{m_{\tilde{a}}^{2}}{m_{\tilde{\chi}_{0}}^{2}}\right)^{3}.$
(10)
Below, we give the integrated decay width for the leading three-body decay
into an ALPino and an electron-positron pair in the limit of
$m_{\tilde{\chi}_{0}}\gg m_{\tilde{a}},m_{e^{-}}$,
$\Gamma_{\tilde{\chi}_{0}\to\tilde{a}e^{+}e^{-}}\simeq\frac{\alpha_{\mathrm{EM}}^{3}\cos^{2}\theta_{W}}{1152\pi^{4}f_{a}^{2}m_{\tilde{\chi}_{0}}^{3}}\left(18m_{\tilde{\chi}_{0}}^{4}m_{e^{-}}^{2}-4m_{\tilde{\chi}_{0}}^{6}-32m_{e^{-}}^{6}+3m_{\tilde{\chi}_{0}}^{6}\log\left(\frac{m^{2}_{\tilde{\chi}_{0}}}{4m_{e^{-}}^{2}}\right)\right).$
(11)
The amplitude squared for the general mass spectrum, and the code evaluating
this decay width, can be found in the auxiliary materials of the paper.
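As a numerical cross-check, evaluating Eq. 10 and converting to a lab-frame
decay length reproduces the order of magnitude of the benchmark in Eq. 2; the
inputs below are the benchmark values of that equation, and
$\cos^{2}\theta_{W}\simeq 0.77$ is assumed.

```python
import math

ALPHA, COS2W = 1 / 137.036, 0.77  # alpha_EM and cos^2(theta_W) (assumed values)
HBARC = 1.973e-16                 # hbar * c [m * GeV]

def width_chi_to_alpino_photon(m_chi: float, m_a: float, f_a: float) -> float:
    """Two-body width of Eq. 10, in GeV."""
    return (ALPHA ** 2 * COS2W / (128 * math.pi ** 3)
            * m_chi ** 3 / f_a ** 2 * (1 - m_a ** 2 / m_chi ** 2) ** 3)

def lab_decay_length_m(width: float, E: float, m: float) -> float:
    """d = c * tau * beta * gamma, with beta * gamma = sqrt(E^2 - m^2) / m."""
    return HBARC / width * math.sqrt(E ** 2 - m ** 2) / m

gamma_chi = width_chi_to_alpino_photon(0.1, 0.01, 30.0)
print(lab_decay_length_m(gamma_chi, 1000.0, 0.1))  # ~ 1.8e2 m, same order as Eq. 2
```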
### A.2 Gravitino
The two-body decay width for bino decaying into a gravitino and a photon is
$\displaystyle\Gamma_{\tilde{\chi}_{0}\to\tilde{G}\gamma}=$
$\displaystyle\frac{\cos^{2}\theta_{W}m^{5}_{\tilde{\chi}_{0}}}{16\pi
F_{\mathrm{SUSY}}^{2}}\left(1-\frac{m^{2}_{\tilde{G}}}{m^{2}_{\tilde{\chi}_{0}}}\right)^{3}\left(1+\frac{m^{2}_{\tilde{G}}}{m^{2}_{\tilde{\chi}_{0}}}\right).$
(12)
We used the Feynman rules described in [87], where in particular we used the
full form of the gravitino polarization tensor. It is defined as the sum of
the gravitino field with momentum $k$ over its spin degrees of freedom,
$\displaystyle\Pi^{\pm}_{\mu\nu}(k)\equiv\sum_{s=\pm\frac{1}{2},\pm\frac{3}{2}}\psi^{\pm,s}_{\mu}(k)\overline{\psi}^{\pm,s}_{\nu}(k).$
(13)
In the high-energy limit, where
${\Pi^{\pm}_{\mu\nu}(k)\simeq-\not{k}\left(g_{\mu\nu}-\frac{2k_{\mu}k_{\nu}}{3m^{2}_{\tilde{G}}}\right)}$,
we obtain the well-known result [109, 110, 111],
$\displaystyle\Gamma_{\tilde{\chi}_{0}\to\tilde{G}\gamma}=\frac{\cos^{2}\theta_{W}m^{5}_{\tilde{\chi}_{0}}}{16\pi
F_{\mathrm{SUSY}}^{2}}\left(1-\frac{m^{2}_{\tilde{G}}}{m^{2}_{\tilde{\chi}_{0}}}\right)^{3}\left(1+3\frac{m^{2}_{\tilde{G}}}{m^{2}_{\tilde{\chi}_{0}}}\right).$
(14)
Since the MATHUSLA detector may not be sensitive to single-photon decays, we
also considered phase-space suppressed decays into a gravitino and an
electron-positron pair. In the limit of $m_{\tilde{\chi}_{0}}\gg
m_{\tilde{G}},m_{e^{-}}$, the following formula describes it:
$\displaystyle\Gamma_{\tilde{\chi}_{0}\to\tilde{G}e^{+}e^{-}}\simeq$
$\displaystyle\frac{\alpha_{\mathrm{EM}}\cos^{2}\theta_{W}m_{\tilde{\chi}_{0}}^{5}}{576\pi^{2}F_{\mathrm{SUSY}}^{2}}\times$
(15)
$\displaystyle\left(24\log\left(\frac{m_{\tilde{\chi}_{0}}}{m_{e^{-}}}\right)-25-12\log(4)\right),$
while the general formula for the amplitude squared can be found in the
Mathematica notebook linked to the paper.
## Appendix B Pseudoscalar and vector meson decays
### B.1 Vector meson decays
The following are formulas for vector meson decays mediated by an off-shell
photon that result in the production of an LSP-NLSP pair,
$V(p_{0})\to\gamma^{*}(p_{1}+p_{2})\to\mathrm{LSP}(p_{1})+\mathrm{NLSP}(p_{2})$:
$\displaystyle\frac{{\rm BR}_{V\to\tilde{a}\tilde{\chi}_{0}}}{{\rm BR}_{V\to e^{+}e^{-}}}=\cos^{2}\theta_{W}\,\frac{\alpha_{\text{EM}}\left(m_{V}^{2}+2(m_{\tilde{a}}-m_{\tilde{\chi}_{0}})^{2}\right)\left(m_{V}^{2}-(m_{\tilde{a}}+m_{\tilde{\chi}_{0}})^{2}\right)\sqrt{\left(-m_{V}^{2}+m_{\tilde{a}}^{2}+m_{\tilde{\chi}_{0}}^{2}\right)^{2}-4m_{\tilde{a}}^{2}m_{\tilde{\chi}_{0}}^{2}}}{128\pi^{3}f_{a}^{2}\sqrt{m_{V}^{2}-4m_{e}^{2}}\left(m_{V}^{3}+2m_{V}m_{e}^{2}\right)},$ (16)
$\displaystyle\frac{{\rm BR}_{V\to\tilde{G}\tilde{\chi}_{0}}}{{\rm BR}_{V\to e^{+}e^{-}}}=\cos^{2}\theta_{W}\,\frac{\left(m_{V}^{2}-(m_{\tilde{G}}+m_{\tilde{\chi}_{0}})^{2}\right)\sqrt{\left(-m_{V}^{2}+m_{\tilde{G}}^{2}+m_{\tilde{\chi}_{0}}^{2}\right)^{2}-4m_{\tilde{G}}^{2}m_{\tilde{\chi}_{0}}^{2}}}{8\pi F_{\mathrm{SUSY}}^{2}\alpha_{\text{EM}}\sqrt{m_{V}^{2}-4m_{e}^{2}}\left(m_{V}^{3}+2m_{V}m_{e}^{2}\right)}\times$
$\displaystyle\qquad\times\left(2m_{V}^{2}\left(m_{\tilde{G}}^{2}+m_{\tilde{G}}m_{\tilde{\chi}_{0}}-m_{\tilde{\chi}_{0}}^{2}\right)+m_{V}^{4}+(m_{\tilde{G}}-m_{\tilde{\chi}_{0}})^{2}\left(3m_{\tilde{G}}^{2}+m_{\tilde{\chi}_{0}}^{2}\right)\right),$
where ${\rm BR}_{V\to e^{+}e^{-}}$ is the branching ratio for decays into
$e^{+}e^{-}$, which we took from the PDG [112].
### B.2 Pseudoscalar meson decays
The following formulas describe the differential branching ratios of the
pseudoscalar meson decays into $\gamma(p_{1})$ and an
$\mathrm{LSP}(p_{2})$-$\mathrm{NLSP}(p_{3})$ pair. We use a form
particularly useful for Monte Carlo simulations, where
$q^{2}=(p_{2}+p_{3})^{2}$ is the momentum squared of the off-shell photon
mediating the decay, and $\theta$ is the angle between the LSP momentum in the
rest frame of the off-shell photon and the momentum of the off-shell photon in
the meson rest frame.
$\displaystyle\frac{d{\rm BR}_{P\to\gamma\tilde{a}\tilde{\chi}_{0}}}{dq^{2}\,d\cos\theta}={\rm BR}_{P\to\gamma\gamma}\cos^{2}\theta_{W}\times\Bigg{[}\frac{\alpha_{\mathrm{EM}}^{2}}{512\pi^{4}f_{a}^{2}m_{P}^{6}q^{6}}\left(q^{2}-m_{P}^{2}\right)^{3}\sqrt{\left(m_{\tilde{\chi}_{0}}^{2}+m_{\tilde{a}}^{2}-q^{2}\right)^{2}-4m_{\tilde{\chi}_{0}}^{2}m_{\tilde{a}}^{2}}$ (17)
$\displaystyle\qquad\times\left((m_{\tilde{\chi}_{0}}+m_{\tilde{a}})^{2}-q^{2}\right)\left(\cos(2\theta)\left((m_{\tilde{\chi}_{0}}-m_{\tilde{a}})^{2}-q^{2}\right)+3(m_{\tilde{\chi}_{0}}-m_{\tilde{a}})^{2}+q^{2}\right)\Bigg{]},$
$\displaystyle\frac{d{\rm BR}_{P\to\gamma\tilde{G}\tilde{\chi}_{0}}}{dq^{2}\,d\cos\theta}={\rm BR}_{P\to\gamma\gamma}\cos^{2}\theta_{W}\times\left[\frac{1}{64\pi^{2}F_{\mathrm{SUSY}}^{2}m_{P}^{6}q^{6}}(m_{P}^{2}-q^{2})^{3}(m_{\tilde{\chi}_{0}}^{2}-q^{2})^{4}(\cos(2\theta)+3)\right],$
where $m_{P}$ is the mass of pseudoscalar meson and ${\rm
BR}_{P\rightarrow\gamma\gamma}$ is the branching ratio of the decay into two
photons, which we took from the PDG [112].
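To illustrate why this parameterization is convenient for Monte Carlo
simulations, the sketch below rejection-samples $(q^{2},\cos\theta)$ from the
gravitino line of Eq. 17; the constant prefactors drop out of the
accept/reject ratio, and the meson parameters in the example call are
illustrative.

```python
import math
import random

def sample_gravitino_decay(m_P: float, m_G: float, m_chi: float, n: int = 1000):
    """Rejection-sample (q^2, cos(theta)) from the gravitino line of Eq. 17."""
    def f(q2: float, c: float) -> float:
        # cos(2*theta) + 3 = 2*c^2 + 2 for c = cos(theta); q^6 = (q^2)^3
        return (m_P**2 - q2) ** 3 * (q2 - m_chi**2) ** 4 * (2 * c**2 + 2) / q2**3
    q2_lo, q2_hi = (m_G + m_chi) ** 2, m_P**2
    # crude envelope from a grid scan, padded by 20% (adequate for a sketch)
    f_max = 1.2 * max(f(q2_lo + (q2_hi - q2_lo) * i / 200, 1.0) for i in range(1, 200))
    samples = []
    while len(samples) < n:
        q2, c = random.uniform(q2_lo, q2_hi), random.uniform(-1.0, 1.0)
        if random.random() * f_max < f(q2, c):
            samples.append((q2, c))
    return samples

events = sample_gravitino_decay(m_P=0.135, m_G=1e-9, m_chi=0.05)  # pi0-like, GeV
```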
## References
* [1] Y. A. Golfand and E. P. Likhtman, “Extension of the Algebra of Poincare Group Generators and Violation of p Invariance,” JETP Lett. 13 (1971) 323–326.
* [2] J.-L. Gervais and B. Sakita, “Field Theory Interpretation of Supergauges in Dual Models,” Nucl. Phys. B 34 (1971) 632–639.
* [3] P. Ramond, “Dual Theory for Free Fermions,” Phys. Rev. D 3 (1971) 2415–2418.
* [4] A. Neveu and J. H. Schwarz, “Factorizable dual model of pions,” Nucl. Phys. B 31 (1971) 86–112.
* [5] J. Wess and B. Zumino, “Supergauge Transformations in Four-Dimensions,” Nucl. Phys. B 70 (1974) 39–50.
* [6] ATLAS Collaboration, G. Aad et al., “Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in $\sqrt{s}=8$ TeV pp collisions with the ATLAS detector,” Eur. Phys. J. C 75 (2015) no. 7, 318, arXiv:1503.03290 [hep-ex]. [Erratum: Eur.Phys.J.C 75, 463 (2015)].
* [7] ATLAS Collaboration, M. Aaboud et al., “Search for squarks and gluinos in final states with jets and missing transverse momentum using 36 fb-1 of $\sqrt{s}=13$ TeV pp collision data with the ATLAS detector,” Phys. Rev. D 97 (2018) no. 11, 112001, arXiv:1712.02332 [hep-ex].
* [8] ATLAS Collaboration, “Search for direct pair production of sleptons and charginos decaying to two leptons and neutralinos with mass splittings near the $W$-boson mass in ${\sqrt{s}=13\,}$TeV $pp$ collisions with the ATLAS detector,” arXiv:2209.13935 [hep-ex].
* [9] CMS Collaboration, V. Khachatryan et al., “Searches for Supersymmetry using the MT2 Variable in Hadronic Events Produced in pp Collisions at 8 TeV,” JHEP 05 (2015) 078, arXiv:1502.04358 [hep-ex].
* [10] CMS Collaboration, A. M. Sirunyan et al., “Search for supersymmetry in final states with two oppositely charged same-flavor leptons and missing transverse momentum in proton-proton collisions at $\sqrt{s}=$ 13 TeV,” JHEP 04 (2021) 123, arXiv:2012.08600 [hep-ex].
* [11] S. Dimopoulos and H. Georgi, “Softly Broken Supersymmetry and SU(5),” Nucl. Phys. B 193 (1981) 150–162.
* [12] L. Girardello and M. T. Grisaru, “Soft Breaking of Supersymmetry,” Nucl. Phys. B 194 (1982) 65.
* [13] P. Fayet and J. Iliopoulos, “Spontaneously Broken Supergauge Symmetries and Goldstone Spinors,” Phys. Lett. B 51 (1974) 461–464.
* [14] S. P. Martin, “A Supersymmetry primer,” Adv. Ser. Direct. High Energy Phys. 18 (1998) 1–98, arXiv:hep-ph/9709356.
* [15] I. Gogoladze, J. D. Lykken, C. Macesanu, and S. Nandi, “Implications of a Massless Neutralino for Neutrino Physics,” Phys. Rev. D 68 (2003) 073004, arXiv:hep-ph/0211391.
* [16] H. K. Dreiner, S. Heinemeyer, O. Kittel, U. Langenfeld, A. M. Weber, and G. Weiglein, “Mass Bounds on a Very Light Neutralino,” Eur. Phys. J. C 62 (2009) 547–572, arXiv:0901.3485 [hep-ph].
* [17] D. Gorbunov and I. Timiryasov, “Decaying light particles in the SHiP experiment. II. Signal rate estimates for light neutralinos,” Phys. Rev. D 92 (2015) no. 7, 075015, arXiv:1508.01780 [hep-ph].
* [18] J. de Vries, H. K. Dreiner, and D. Schmeier, “R-Parity Violation and Light Neutralinos at SHiP and the LHC,” Phys. Rev. D 94 (2016) no. 3, 035006, arXiv:1511.07436 [hep-ph].
* [19] D. Dercks, J. De Vries, H. K. Dreiner, and Z. S. Wang, “R-parity Violation and Light Neutralinos at CODEX-b, FASER, and MATHUSLA,” Phys. Rev. D 99 (2019) no. 5, 055039, arXiv:1810.03617 [hep-ph].
* [20] K.-Y. Choi, T. Inami, K. Kadota, I. Park, and O. Seto, “Searching for Axino-Like Particle at Fixed Target Experiments,” Phys. Dark Univ. 27 (2020) 100460, arXiv:1902.10475 [hep-ph].
* [21] H. K. Dreiner, D. Köhler, S. Nangia, and Z. S. Wang, “Searching for a single photon from lightest neutralino decays in R-parity-violating supersymmetry at FASER,” JHEP 02 (2023) 120, arXiv:2207.05100 [hep-ph].
* [22] B. C. Allanach, A. Dedes, and H. K. Dreiner, “R parity violating minimal supergravity model,” Phys. Rev. D 69 (2004) 115002, arXiv:hep-ph/0309196. [Erratum: Phys.Rev.D 72, 079902 (2005)].
* [23] R. Barbier et al., “R-parity violating supersymmetry,” Phys. Rept. 420 (2005) 1–202, arXiv:hep-ph/0406039.
* [24] H. K. Dreiner, “An Introduction to explicit R-parity violation,” Adv. Ser. Direct. High Energy Phys. 21 (2010) 565–583, arXiv:hep-ph/9707435.
* [25] M. Battaglieri et al., “US Cosmic Visions: New Ideas in Dark Matter 2017: Community Report,” in U.S. Cosmic Visions: New Ideas in Dark Matter. 7, 2017. arXiv:1707.04591 [hep-ph].
* [26] J. Beacham et al., “Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report,” J. Phys. G 47 (2020) no. 1, 010501, arXiv:1901.09966 [hep-ex].
* [27] J. Alimena et al., “Searching for long-lived particles beyond the Standard Model at the Large Hadron Collider,” J. Phys. G 47 (2020) no. 9, 090501, arXiv:1903.04497 [hep-ex].
* [28] R. D. Peccei and H. R. Quinn, “CP Conservation in the Presence of Instantons,” Phys. Rev. Lett. 38 (1977) 1440–1443.
* [29] F. Wilczek, “Problem of Strong $P$ and $T$ Invariance in the Presence of Instantons,” Phys. Rev. Lett. 40 (1978) 279–282.
* [30] S. Weinberg, “A New Light Boson?,” Phys. Rev. Lett. 40 (1978) 223–226.
* [31] S. Weinberg, “Implications of Dynamical Symmetry Breaking,” Phys. Rev. D 13 (1976) 974–996. [Addendum: Phys.Rev.D 19, 1277–1280 (1979)].
* [32] E. Gildener, “Gauge Symmetry Hierarchies,” Phys. Rev. D 14 (1976) 1667.
* [33] M. J. G. Veltman, “The Infrared - Ultraviolet Connection,” Acta Phys. Polon. B 12 (1981) 437.
* [34] G. ’t Hooft, “Naturalness, chiral symmetry, and spontaneous chiral symmetry breaking,” NATO Sci. Ser. B 59 (1980) 135–157.
* [35] L. Covi, J. E. Kim, and L. Roszkowski, “Axinos as cold dark matter,” Phys. Rev. Lett. 82 (1999) 4180–4183, arXiv:hep-ph/9905212.
* [36] L. Covi, H.-B. Kim, J. E. Kim, and L. Roszkowski, “Axinos as dark matter,” JHEP 05 (2001) 033, arXiv:hep-ph/0101009.
* [37] D. V. Volkov and V. A. Soroka, “Higgs Effect for Goldstone Particles with Spin 1/2,” JETP Lett. 18 (1973) 312–314.
* [38] S. Deser and B. Zumino, “Broken Supersymmetry and Supergravity,” Phys. Rev. Lett. 38 (1977) 1433–1436.
* [39] D. Z. Freedman, P. van Nieuwenhuizen, and S. Ferrara, “Progress Toward a Theory of Supergravity,” Phys. Rev. D 13 (1976) 3214–3218.
* [40] J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive, and M. Srednicki, “Supersymmetric Relics from the Big Bang,” Nucl. Phys. B 238 (1984) 453–476.
* [41] J. R. Ellis, J. E. Kim, and D. V. Nanopoulos, “Cosmological Gravitino Regeneration and Decay,” Phys. Lett. B 145 (1984) 181–186.
* [42] F. D. Steffen, “Gravitino dark matter and cosmological constraints,” JCAP 09 (2006) 001, arXiv:hep-ph/0605306.
* [43] J. D. Bjorken, S. Ecklund, W. R. Nelson, A. Abashian, C. Church, B. Lu, L. W. Mo, T. A. Nunamaker, and P. Rassmann, “Search for Neutral Metastable Penetrating Particles Produced in the SLAC Beam Dump,” Phys. Rev. D 38 (1988) 3375.
* [44] DELPHI Collaboration, J. Abdallah et al., “Photon events with missing energy in e+ e- collisions at s**(1/2) = 130-GeV to 209-GeV,” Eur. Phys. J. C 38 (2005) 395–411, arXiv:hep-ex/0406019.
* [45] D. Choudhury, H. K. Dreiner, P. Richardson, and S. Sarkar, “A Supersymmetric solution to the KARMEN time anomaly,” Phys. Rev. D 61 (2000) 095009, arXiv:hep-ph/9911365.
* [46] A. Dedes, H. K. Dreiner, and P. Richardson, “Attempts at explaining the NuTeV observation of dimuon events,” Phys. Rev. D 65 (2001) 015001, arXiv:hep-ph/0106199.
# Attacks on Visualization-Based Malware Detection: Balancing Effectiveness and Executability

Hadjer Benkraouda, Jingyu Qian, Hung Quoc Tran, and Berkay Kaplan (Benkraouda, Qian, and Tran share co-first authorship)

University of Illinois at Urbana-Champaign, Urbana, IL 61801-2302, USA
###### Abstract
With the rapid development of machine learning for image classification,
researchers have found new applications of visualization techniques in malware
detection. By converting binary code into images, researchers have shown
satisfactory results in applying machine learning to extract features that are
difficult to discover manually. Such visualization-based malware detection
methods can capture malware patterns from many different malware families and
improve malware detection speed. On the other hand, recent research has also
shown adversarial attacks against such visualization-based malware detection.
Attackers can generate adversarial examples by perturbing the malware binary
in non-reachable regions, such as padding at the end of the binary.
Alternatively, attackers can perturb the malware image embedding and then
verify the executability of the malware post-transformation. One major
limitation of the first attack scenario is that a simple pre-processing step
can remove the perturbations before classification. For the second attack
scenario, it is hard to maintain the original malware’s executability and
functionality. In this work, we provide a literature review on existing malware
visualization techniques and attacks against them. We summarize the limitations
of previous work and design a new adversarial example attack against
visualization-based malware detection that can evade pre-processing filtering
and maintain the original malware functionality. We test our attack on a
public malware dataset and achieve a 98% success rate.
###### Keywords:
Malware visualization Adversarial machine learning Binary rewriting.
## 1 Introduction
With the proliferation of connectivity and smart devices in all aspects of
human life, these devices have become increasingly targeted by malicious
actors. One of the most common forms of attack is through malicious software
(i.e., malware). According to AVTEST, one of the leading independent research
institutes for IT security, more than 100 million new malware applications have
appeared in 2020 alone [20]. Inspired by the success of machine
learning in other fields, researchers have proposed using machine learning for
many security applications. With the rapid growth and evolving nature of new
malware applications, machine learning-based solutions are a natural fit for
malware detection and classification due to their robustness.
Several papers have designed malware detection systems using machine learning
(e.g., [35, 24]). The proposed solutions tackle malware detection from
different perspectives. The main differences are in the representation used
and the subsequent machine learning model selected for effective
classification. These representations include raw bytes, embeddings,
representative features, and binary visualization. Visualization methods, in
particular, have shown high accuracy in detecting malware compared to
conventional methods. These data reduction and visualization techniques for
detection have shown improvements in both speed and memory efficiency [29,
17]. Additionally, visualization-based techniques have achieved higher
detection accuracy, mainly attributed to the applicability of deep learning
techniques in detecting malware patterns [1]. We, therefore, focus our work on
visualization-based malware detection models.
Nevertheless, machine learning models are susceptible to adversarial example
attacks, which add imperceptible non-random perturbations to test samples,
causing machine learning models to misclassify them. Successful adversarial
examples have been seen to fool systems into misclassifying people [37], cause
systems to recognize street stop signs as speed limit signs [10], or cause
voice-controllable systems to misinterpret commands or perform arbitrary
commands [41].
Recent work has shown that machine learning-based techniques for malware
detection are also susceptible to adversarial examples. In these systems, the
attacks alter the malware binaries to cause the target model to classify the
malware sample as benign or vice versa. However, adversarial examples in this
domain are more challenging to produce. In addition to the constraint of
imperceptibility and minimal changes that conventional adversarial examples
must comply with, adversarial examples for malware detection must maintain the
original malware functionality. This means that the attacker cannot change the
bytes arbitrarily. Instead, the attacker has to understand the functionality
of the malware and perform careful changes.
There have been previous attempts to create adversarial examples against
visualization-based malware classifiers [22, 28]. These attacks produce
adversarial examples either by using conventional image-based techniques such
as the Fast Gradient Sign Method [15] or the Carlini and Wagner method [4], or by
injecting byte values to unreachable regions within the binary. These attacks
are simplistic and can be detected and removed easily with minimal
countermeasures [25].
In this paper, we propose a new adversarial example attack that combines
binary rewriting and adversarial attacks in image classification. We target a
convolutional neural network (i.e., CNN) model for malware detection. Because
there is no open-source implementation of visualization-based malware detection,
the first phase of the project constructs the malware detection model
(Figure 1, left). We apply a CNN structure similar to previous work on
visualization-based malware detection and achieve an overall accuracy of 99%.
In the second phase of the project (Figure 1 right), we design an adversarial
example attack against this malware detection model. Our attack performs
additive changes to the original malware and ensures that the added
instructions are semantic NOPs, i.e., they do not change values of any
register or manipulate the program state. Our attack achieves a 98% success
rate on a public malware dataset. The success of the proposed attack reveals
that it is necessary for visualization-based malware detection to perform more
advanced and robust protection against adversarial examples than simply
filtering the padding or the non-reachable header section.
Figure 1: The two project phases: constructing the malware detection model and
designing the adversarial example attack.
The rest of the paper is organized as follows. In Section 2, we introduce
background and related work on visualization-based malware detection and
adversarial machine learning. In Section 3, we provide our detailed design of
the attack against visualization-based malware detection and illustrate how we
solve the challenges of creating successful adversarial examples while
maintaining the original malware functionality. In Section 4, we discuss our
experiment setup and measure our attack success rate. In Section 5, we discuss
limitations of our attack and potential future work.
## 2 Background and Related Work
In this section, we introduce the background and related work on
visualization-based malware detection. We then discuss some traditional
methods to camouflage malware, adversarial machine learning and how attacks
against image classification and malware detection work. Finally, we include a
systematization-of-knowledge (SoK) table of the papers we discuss and point out
their limitations.
### 2.1 Malware Visualization
With the development of image processing technology, visualization-based
techniques are also proposed for malware detection and analysis. These
techniques can be applied directly to the binary without complicated
disassembly and execution process. Researchers have proposed approaches to
visualize malware as gray-scale or RGB-colored images. From these images,
machine learning techniques, such as CNN, can classify whether the tested
software is benign or malicious.
Figure 2 illustrates a typical approach to visualize the malware as an image.
The malware binary is grouped by 8-bit vectors. Each vector represents a value
from 0 to 255, which can be mapped to a gray-scale pixel value. The shape of
the final image depends on the width of the image, which is usually a tunable
parameter, and the size of the malware binary in bytes. This methodology can
be adapted to visualize the malware as an RGB-colored image, which considers
different feature types and represents them in different color channels [12].
Figure 2: Typical approach to visualize the binary in gray-scale [31]
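To make the byte-to-pixel mapping concrete, the following Python sketch converts a binary into a gray-scale image along the lines of Figure 2. It is a minimal illustration, not the implementation of any cited paper; the width of 64 pixels, the file names, and the use of NumPy/Pillow are illustrative choices.

```python
import numpy as np
from PIL import Image

def binary_to_image(path, width=64):
    """Map each byte of a binary (one 8-bit vector) to one gray-scale
    pixel in [0, 255]; the image width is a tunable parameter."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width              # drop the incomplete last row
    pixels = data[: height * width].reshape(height, width)
    return Image.fromarray(pixels, mode="L")

# Usage: binary_to_image("sample.bin", width=64).save("sample.png")
```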
Here we introduce a few projects from the literature that used several
different malware visualization techniques. Han et al. [18] proposed a malware
visualization method that converts the binary to a gray-scale bitmap image and
then generates the entropy graph for the entire malware binary. They then used
a histogram similarity measuring method to group malware within the same
malware family. Nataraj et al. [31] also visualized the malware binary as a
gray-scale image but extracted texture features to characterize and analyze
the malware. They used GIST to capture texture features, and apply the
K-nearest neighbors algorithm with Euclidean distance to classify the malware
into different malware families. Xiaofang et al. [40] mapped malware binaries
to gray-scale images, extracted a 64-dimension feature vector from the image,
and performed fingerprint matching to identify similar malware.
Unlike the other work, which converts the malware binary to a gray-scale
image, Fu et al. [12] took a different approach to visualize the malware
(Figure 3). Their approach extracted both local and global features to
generate an RGB-colored image for the malware. Specifically, they extracted
three types of features, including section entropy, raw byte values, and
relative size of each section to the whole file. For raw byte values, they use
the same approach to visualize malware as gray-scale images (Figure 2). Each
type of feature occupies a single color channel (i.e., either red, green, or
blue). For the final classification process, they compared different machine
learning techniques, including random forest, K-nearest neighbors, and support
vector machine.
Han et al. [19] proposed a novel method to classify malware and malware
families. They extracted opcode instruction sequences from the malware binary
as features and generated the RGB-colored image matrix from these features.
The image matrices are compared with each other using selective area matching
[19] to group malware into malware families.
Another work interestingly extends the field by visualizing the behavior of
the malware instead of the malware binary itself, and suggests that any
feature of a malware sample can be visualized for classification [36].
This indicates that the possibilities of malware visualization are broad,
since a program has many useful features ranging from its behavior to its
metadata. This work specifically focuses on malware behavior by running an
API call monitoring utility inside a virtual machine (i.e., VM) to examine
the APIs used while the program executes in user mode [36]. While there are
several other techniques to capture malware behavior, such as recording the
network activity of the malware or the changes in the operating system’s
resources, API monitoring was chosen in this study because of its
conciseness and the shortcomings of the other techniques, which are
discussed in detail in [36]. Afterwards, the calls are mapped to hot colors,
such as red or orange, for classification. Finally, classification is mostly
done through a similarity ratio of the tested software against known
malware’s color mappings [36].
Figure 3: Visualize the malware as an RGB image, considering both local and
global features [12]
### 2.2 Traditional Malware Camouflage
Although there are several methods of malware detection based on
visualization, attackers can still employ various methods such as encryption
and obfuscation to hide their malware in the targeted software’s code and
counter static malware detection methods [7, 27, 34].
Malware encryption intends to encrypt the malware body to hide its intentions
and avoid static analysis detection so that a direct signature matching
defense cannot detect the malware [7, 27, 21]. It relies on a decryption loop
(a.k.a., decryptor) to decrypt the malicious payload and execute the malware.
Therefore, if the defense can find out the decryption loops in the malware, it
can decrypt the malicious code first and then perform a simple signature
matching to detect the malware. In addition, the original defense can be
easily augmented with a signature checking component to identify suspicious
decryption loops within the malware. Visualization-based defense can be even
better at detecting malware encryption because it can extract locality
features specific to suspicious decryption loops of the malware. Even a more
complicated malware encryption technique that picks different decryptors for
different malware (i.e., oligomorphism) only prolongs the detection time [34].
Figure 4: Polymorphism virus structure [34].
Polymorphism is a more sophisticated method to hide the malicious payload
based on malware encryption (Figure 4). It uses several types of
transformations on the decryptor, such as changing the order of instructions
with additional jump instructions to maintain the original semantics and
permuting the register allocation to deceive anti-virus software [7]. It also
typically injects junk or dead codes to further mutate the decryptor so that
it is hard to recognize it [34]. However, after enough emulation and a simple
string matching algorithm, the underlying encrypted sections of the malware
can still be revealed [34].
Figure 5: Metamorphism structure: a metamorphic engine is responsible for
changing the malware instructions to equivalent ones probabilistically [34].
A major drawback of either malware encryption or polymorphism is that it
relies on an explicit decryptor to decrypt the malicious payload and execute
the malware. This leaves a significant mark on the malware that can be
relatively easily detected. On the other hand, metamorphism is a more advanced
technique to camouflage the malware without using any encrypted parts. Malware
metamorphism is a technique to mutate the malware binary using different
obfuscations by a metamorphic engine (Figure 5). In this way, the attacker
changes the syntax of the original malware but keeps the original malware
behavior. In particular, metamorphism allows the malware to change its opcode
with each execution of the infected program. Alam et al. [2] group some
typical obfuscations used in metamorphism into three categories. The first
category is the opcode level obfuscation, which includes instruction
reordering, dead code insertion, and register renaming [2]. The second
category is control flow level obfuscation, which includes changing the order
of instructions, and applying branch functions, opaque predicates, jump
tables, and exception tables [2]. The last category is obfuscation by self-
modifying code, which intends to change instructions during runtime in order
to hide malicious payload to avoid reverse engineering and detection by anti-
malware software [2].
It has been shown that malware metamorphism can easily defeat the signature-
based malware detection [6] because signature-based detection is unable to
capture the changes of the malware due to dynamic code obfuscation. However,
metamorphic malware is usually initiated from known malware, and with the
initial knowledge of existing malware, it is still possible to detect malware
metamorphism. Zhang et al. [42] proposed a defense to characterize the
semantics of the program and perform code pattern matching, based on static
analysis of control and data flow of call traces. Alam et al. [2] proposed a
metamorphic malware analysis framework that builds the behavioral signatures
to detect metamorphic malware in real-time. Chouchane et al. [5] proposed
another method to detect the existence of a metamorphic engine by checking
the likelihood that a piece of code was generated by some known metamorphic engine.
In addition, the metamorphic malware typically has a significant proportion of
binary related to the metamorphic engine, which can be recognized by a more
advanced detector, such as a visualization-based malware detector.
Go et al. discussed the importance of developing new approaches against
polymorphism and metamorphism specifically [13]. Their method converts the
binary to a gray-scale image and uses the ResNeXt CNN model to build
resiliency against such malware camouflage techniques [13]. However, their
paper does not explicitly discuss the method’s effectiveness against
obfuscation attacks: the researchers used the Malimg dataset but did not
mention whether the dataset contained examples of obfuscation [13]. Islam et
al. focused more on obfuscation detection by attempting to integrate static
analysis with dynamic analysis [21]. The paper acknowledges the ease of bypassing static analysis
with obfuscation but proposes integrating dynamic analysis using information
vectors derived from FLF, PSI and API calls [21]. Since obfuscating the
features of the code would result in outlier vectors, their approach can
detect such attacks [21].
### 2.3 Adversarial Machine Learning
In this section, we introduce adversarial machine learning. In general,
adversarial machine learning aims to fool the machine learning model by
carefully generating adversarial examples through evasion attacks or polluting
the training phase through poisoning attacks. We focus our discussion on
evasion attacks leveraged against image classification and malware detection
due to their many close similarities to visualization-based malware detection.
#### 2.3.1 Attacking Image Classification.
Previously, image classification has been used in many applications not
related to binary classification. In these fields, multiple attacks have been
produced to cause errors in detection. Goodfellow et al. [15] illustrated the
fast gradient sign method (i.e., FGSM) to generate adversarial examples.
Figure 6 shows how FGSM is applied to cause the image classifier to
misclassify a panda as a gibbon by adding a carefully crafted perturbation.
Carlini and Wagner [4] showed that carefully optimized perturbations to images
can significantly reduce the accuracy of classifiers while being imperceptible
to the human eye. These attacks can be mitigated to some degree by denoising
techniques, as proposed in Liao et al.[26]. Nevertheless, such mitigation
efforts do not fully reverse the perturbations and may introduce more noise
accidentally. Furthermore, as shown in Eykholt et al.[11], classification can
also be interrupted by altering small sections of an image, where the image
would still be readable by a human eye.
Figure 6: Adversarial example generated using FGSM [15].
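For reference, FGSM perturbs an input one step in the direction of the sign of the loss gradient. A minimal PyTorch sketch follows; `model`, `x`, `y`, and the step size `eps` are assumed inputs, and the clamp to [0, 1] is an illustrative choice of pixel range.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # step along the gradient sign
    return x_adv.clamp(0, 1).detach()        # keep pixels in a valid range
```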
A notable difference in image classification for binary visualization is that
the validator for images lies in code execution instead of human recognition.
As a result, perturbations intended for adversarial attacks on binary
visualization must continue to function as an unchanged code sample. A code
sample that maintains functionality after modification can be said to maintain
executability, as the code will execute as intended.
#### 2.3.2 Attacking Malware Detection.
Because malware detection from raw bytes relies heavily on the performance of
the machine learning model for classifying the image embedding of the malware
(e.g., RGB-colored image), it is also vulnerable to similar attacks against
image classification. However, there are more difficulties in generating valid
adversarial examples in the malware detection domain than in the image
classification domain. The main challenge is to solve the inverse feature-
mapping problem [33]. Pierazzi et al. [33] proposed a novel formalization of
problem-space attacks and a novel problem-space attack in the Android malware
domain. They used conditional statements that are never executed during
runtime to wrap the malicious code payload. In particular, they used opaque
predicates to ensure the obfuscated conditions always resolve to false but
look legitimate. They showed that their attack against Android is successful
against the Drebin classifier [3] and several SVM-based malware detectors.
Liu et al. proposed introducing perturbations to the visualized binary to
lower the success rate of ML-based malware detectors [28]. The paper
introduced a method that leverages gradient descent and L-norm optimization
methods. However, as it changes the image in potentially unexpected ways, it
cannot guarantee the executability of the perturbed malware.
Khormali et al. [22] showed simple adversarial examples to bypass
visualization-based malware detection. Their attack only attempts rudimentary
methods to generate adversarial examples, namely padding and injection [22].
It preserves the executability and functionality of the original malware, but
it is easy to detect and not scalable. Kolosnjaji et al. [23] proposed a
gradient-based attack to evade ML-based malware detection. Their attack also
injects padding bytes to the original malware but does not consider preserving
the functionality and does not target visualization-based malware detection.
Demetrio et al. [9] proposed a general framework to perform white-box and
black-box adversarial attacks on learning-based malware detection by injecting
the malicious payload to the DOS header. However, the authors also claimed
that their header attacks could be easily patched through filtering process
before classification. Grosse et al. [16] demonstrated the attack against a
deep neural network approach for Android malware detection. They crafted
adversarial examples by iteratively adding small gradient-guided perturbation
to the malware on application level instead of directly perturbing the binary.
They restrict the perturbation to a discrete set of operations that do not
interfere with the malware functionality. In addition, they discussed some
remedies against their attacks, including feature reduction, distillation, and
adversarial training (i.e., re-training the model with the addition of
adversarial examples). Their work focused more on attacking application-level
malware detection instead of visualization-based malware detection. Sharif et
al. [38] proposed an optimization-guided attack to mislead deep neural
networks for malware detection. Their attack is more invasive in that it
changes the reachable code section of the malware. Nevertheless, their attack
considers a limited set of transformation types of malware functions and does
not target visualization-based malware detection [38]. Overall, there is no
prior work proposing robust attacks against visualization-based malware
detection that preserve both the executability and the functionality of the
original malware and are hard to detect through pre-processing. We seek to
fill this gap in the research space.
### 2.4 SoK of Existing Literature
Finally, we provide Table 1 as our SoK of existing malware visualization
techniques and attacks against them. For each reviewed work, we summarize its
methodology and list its limitations.
Table 1: SoK of reviewed papers: malware visualization and adversarial attacks against malware visualization software. Category | Paper | Methodology | Limitations
---|---|---|---
Malware visualization on binary: Gray-Scale | Han et al. [18] | Converts binary to the bitmap image and generates the entropy graph from visualized malware | Hard to classify packed malware binaries
Nataraj et al. [31] | Extracts image texture features from visualized malware | Relying on global texture features can be beaten by attackers
Xiaofang et al. [40] | Extracts a 64-dimensional feature vector and performs fingerprint matching to identify similar malware | Relying on global image features can be beaten by attackers
Malware visualization on binary: RGB-Colored | Fu et al. [12] | Combines entropy, relative section size, and raw bytes to generate an RGB-colored image | Limited to PE format
Han et al. [19] | Extracts opcode instruction sequences | Classification is not yet automated
Malware visualization on behavioral features | Shaid et al. [36] | API call monitoring | Does not consider network behavior and does not work directly on malware binary
Attacking malware detection | Pierazzi et al. [33] | General problem-space attack for inverse feature-mapping | Attacks focus on Android malware and are not against neural network-based detectors
Liu et al. [28] | Gradient descent to perturb binary | Does not guarantee executability
Khormali et al. [22] | Padding and injection | Easy to detect and not scalable
Kolosnjaji et al. [23] | Padding and injection | Does not preserve functionality and does not target visualization-based malware detection
Demetrio et al. [9] | Injects the malicious payload to DOS header | Easy to patch through filtering process before classification
Grosse et al. [16] | Iteratively adds small gradient-guided perturbations | Only targets Android malware and does not attack on binary level
Sharif et al. [38] | Manipulates code section guided by an optimization function | Considers only a limited set of manipulation types
## 3 Robust Adversarial Example Attack against Visualization-Based Malware
Detection
In this section, we focus on the workflow of generating adversarial examples,
and we leave the construction of the malware detector to Section 4.1. We have
two main goals for our adversarial example attack. Firstly, we aim to find an
adversarial example generated from a single malware such that the malware
detector will misclassify it as benign software. Secondly, an adversarial
example generated in this way must maintain the functionality of the original
malware. An overview of the full workflow of our adversarial example
generation algorithm is shown in Figure 7. At a high level, the attack starts
with using a mask generator to add empty spaces to instruction boundaries
where perturbations are allowed. Then, the adversarial example generator
(i.e., AE generator) will generate the optimal adversarial example in the
image space. To ensure that the original malware functionality is not changed,
we use a NOP generator to produce a semantic NOP list and update the optimal
adversarial example to the closest matching viable one that preserves malware
functionality. If this processed adversarial example is still misclassified as
benign, then our attack succeeds. Otherwise, we relaunch the AE generator,
starting from the failed adversarial example, creating a new optimal AE, and
starting a new iteration. We iterate until we produce a successful adversarial
example or we reach a pre-set threshold of iterations. In the following sub-
sections, we discuss the details of each component in the workflow.
Figure 7: The overview of the workflow of the adversarial example generation
algorithm.
### 3.1 Mask Generator
The first step of our attack workflow aims at controlling and locating where
the perturbations can be added. This step is provided to ensure both
executability and robustness to simple pre-processing defenses while
maintaining the original semantic operation. The central intuition of this
step is to allow additional instructions to be embedded within the code
section so that they are not easily distinguishable from the rest of the code
section and, in the meantime, ensure that these added instructions do not
introduce any changes to the original malware instructions. These additional
instructions will serve as empty spaces to allow perturbing the malware in the
image representation.
Figure 8: Mask generator adds empty space to allow perturbation
To achieve this, we create a mask generator (Figure 8). The algorithm of the
mask generator is as follows. First, we extract the code section of the
malware sample and identify the instruction boundaries. Next, we need to
decide the size of each perturbation block, its location, and frequency. The
attacker can set these parameters to satisfy certain size limitations of the
perturbation relative to the original malware size to make it harder to
detect. On the other hand, the attacker can also increase the frequency of the
perturbation blocks to make the attack easier. The perturbations can be
initialized to random inputs or naive NOPs. After that, these perturbations
are inserted into the expected instruction boundaries. In this way, we make
sure that the original malware instructions are not changed at all because we
never add extra bytes to the middle of the binary of any instruction. With
this malware augmented with the perturbation sequences initialized to naive
NOPs, we use a binary-to-image converter to represent the malware in the image
space. The binary-to-image converter treats the consecutive 8 bits of the
binary as a single pixel to build a PNG file. Figure 8 illustrates how our
mask generator adds empty space to allow perturbation. In this example, we add
a single NOP every two instructions. The image representation of the binary is
expanded by two pixels (i.e., the pixels marked with ‘*’) due to the two added
NOPs.
Besides the malware image, the mask generator produces a mask in the form of
an array of the same dimension as the augmented malware image. The mask flags
the locations where perturbations are allowed with ones, while the rest of the
array is filled with zeros. We name this mask the perturbation mask.
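A simplified sketch of the mask generator is shown below. It assumes the capstone disassembler to find x86 instruction boundaries and inserts fixed-size blocks of naive NOPs (0x90); block size and insertion frequency are the attacker-chosen parameters described above. The sketch ignores the fixing of relative branch targets that a full binary-rewriting pipeline would need.

```python
import numpy as np
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

def add_perturbation_slots(code, block_size=8, every=2):
    """Insert `block_size` naive NOPs (0x90) after every `every`-th
    instruction and return the augmented code plus a 0/1 perturbation
    mask of the same length (1 marks bytes that may be perturbed)."""
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    out, mask = bytearray(), []
    for i, insn in enumerate(md.disasm(code, 0x0), start=1):
        out += insn.bytes                    # copy the original instruction
        mask += [0] * insn.size
        if i % every == 0:                   # instruction boundary: safe slot
            out += b"\x90" * block_size      # initialize with naive NOPs
            mask += [1] * block_size
    return bytes(out), np.array(mask, dtype=np.uint8)
```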
### 3.2 AE Generator
Once the perturbation mask and the augmented malware image are generated, we
launch a modified version of the CW attack [4] to generate the optimal
adversarial example (i.e., optimal AE) in the image space, which is
misclassified as benign by the malware detector. The only difference is the
application of the perturbation mask to further restrict the positions of
perturbations. The objective function is given as
$\min||M\delta||_{2}+C\cdot f(x+M\delta)\quad s.t.\quad x+M\delta\in[-1,1].$
(1)
Here $M$ is the perturbation mask, and $x$ is the augmented malware image
produced by the mask generator. The optimal AE is unlikely to correspond to
the perturbed malware that maintains the original malware functionality
because the CW attack only intends to attack the malware detector, which is
modeled as an image classifier. Our objective function does not place
additional restrictions to ensure that the perturbations will be either
executable or semantic NOPs after being reversed back to the binary, which is
required to preserve the malware functionality. One possible approach is to
apply a similar idea to that of ensuring printability of adversarial examples in a
real-world attack against image classification. However, the number of
semantic NOPs is much larger than the number of printable colors, which will
significantly increase the computation overhead of the objective function. On
the other hand, the optimal AE is a good starting point to guide us to
generate a viable adversarial example that keeps the malware functionality. We
also avoid complicating the objective function to reduce the runtime overhead
to generate the optimal adversarial examples.
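A bare-bones PyTorch sketch of the masked objective in Eq. (1) follows, optimized here with plain gradient descent; the full CW attack additionally uses a change of variables and a binary search over $C$, which are omitted. The logit ordering (index 0 = malware, index 1 = benign) and the hyperparameters are assumptions of the sketch.

```python
import torch

def masked_attack(model, x, mask, C=1.0, steps=200, lr=0.01):
    """Minimize ||M*delta||_2 + C * f(x + M*delta) subject to
    x + M*delta in [-1, 1], where f is the CW margin loss that pushes
    the 'benign' logit above the 'malware' logit."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + mask * delta).clamp(-1, 1)
        logits = model(adv)                               # [malware, benign]
        f = (logits[:, 0] - logits[:, 1]).clamp(min=0)    # margin loss
        loss = (mask * delta).norm(p=2) + C * f.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + mask * delta).clamp(-1, 1).detach()
```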
### 3.3 NOP Generator
To generate the viable adversarial example from the optimal AE, we need to
replace the perturbations introduced by the CW attack with binaries that keep
the malware functionality (i.e., semantic NOPs). The replacement algorithm
will be discussed in Section 3.4. This section shows how we create semantic
NOPs: instruction sequences that are semantically equivalent to a naive NOP,
i.e., they change neither the values stored in the registers nor the program
state, such as the stack. We do not use the strategy of [33], which used unreachable
conditional statements because that requires careful crafting of obfuscated
conditions to avoid being removed by the compiler optimization or the
filtering process, which makes the perturbation generation process more
complicated.
Figure 9: Semantic NOP seeds used to construct longer semantic NOPs.
We start by creating short semantic NOPs. We call these short semantic NOPs
semantic NOP seeds. Semantic NOP seeds fall into four categories (Figure 9):
1. 1.
Arithmetic sequences: Some neutral sequence of arithmetic operators which can
be performed on any register. An example is adding zero to the register.
2. 2.
Movement sequences: Instructions to move the register value back and forth or
jump to the defined locations to skip the regions that are not expected to
execute. Examples are moving the register value to itself or jumping to the
immediate next instruction.
3. 3.
Logical sequences: Some neutral sequence of logical operators which can be
performed on any register. An example is ANDing 1 to the register.
4. 4.
Other miscellaneous sequences: A sequence of simple NOPs or a sequence to
change and recover the flags.
Because the perturbation space is not pre-determined, we do not generate
semantic NOPs for any arbitrary size. Instead, we build the NOP generator to
combine semantic NOP seeds to make longer semantic NOPs that can fit larger
perturbation spaces. For instance, if the NOP generator is asked to compute
3-byte semantic NOPs and the naive NOP (i.e., byte 90 in heximal) is one of
the semantic NOP seeds, then it can combine three NOPs. Given the expected
size of each perturbation block, the NOP generator produces a list of semantic
NOPs of the given size.
It is necessary to keep the byte length of semantic NOP seeds as small as
possible to improve the ability of the AE optimizer to produce a viable
adversarial example that maintains the malware functionality. However, too
much restriction on the byte length also limits our ability to generate enough
semantic NOP seeds. In our design, we pick the minimum size of a semantic NOP
seed as one byte (i.e., the byte length of a naive NOP) and the maximum size
as eight bytes. We provide example semantic NOP seeds and their corresponding
byte sizes in Table 2. We admit that this is a simple design choice and is far
from an optimal selection. We also do not comprehensively list all possible
operations of each operation type, which would make the set of semantic NOP
seeds too large. However, our evaluation results reveal that this selection is
already enough to generate functionality-preserving adversarial examples with
a high success rate.
Table 2: Example of semantic NOP seeds and their byte length. Operation Type | Example in Hex | Byte Length
---|---|---
Nop | 90 | 1
Move register to itself | 89c0 | 2
Jump to next instruction | 7700 | 2
Push and pop | 5058 | 2
Not and not | f7d0f7d0 | 4
Add/subtract 0 | 9c83c0009d | 5
Logical AND with 1 | 9c83e0ff9d | 5
Logical OR with 0 | 9c83c8009d | 5
Logical XOR with 0 | 9c83f0009d | 5
Negate and negate | 9cf7d8f7d89d | 6
Increment and decrement | 9cffc0ffc89d | 6
Add x and subtract x | 9c83c00183e8019d | 8
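Returning to the seed-combination procedure described above, a minimal sketch of the NOP generator is shown below: it enumerates all concatenations of seeds that reach a requested byte length. The seed table is a small subset drawn from Table 2.

```python
SEEDS = {            # hex string -> byte length, drawn from Table 2
    "90": 1,         # nop
    "89c0": 2,       # mov eax, eax
    "5058": 2,       # push eax; pop eax
    "f7d0f7d0": 4,   # not eax; not eax
}

def semantic_nops(size, prefix=""):
    """Enumerate all seed concatenations of exactly `size` bytes."""
    if size == 0:
        return [prefix]
    out = []
    for seed, n in SEEDS.items():
        if n <= size:
            out += semantic_nops(size - n, prefix + seed)
    return out

# semantic_nops(3) -> ['909090', '9089c0', '905058', '89c090', '505890']
```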
### 3.4 AE Optimizer
In this step, we build a module, the AE optimizer, which produces a viable
adversarial example that maintains the original malware functionality. The AE
optimizer takes in the perturbation mask, the optimal AE generated from the CW
attack, and the list of semantic NOPs produced by the NOP generator. Next, the
AE optimizer locates the allowed positions for perturbations in the optimal AE
using the perturbation mask. Subsequently, the Euclidean distance between the
instruction sequences in the allowed perturbation spaces and the semantic NOPs
is calculated using the following equation:
$d\left(p,q\right)=\sqrt{\sum_{i=1}^{n}\left(q_{i}-p_{i}\right)^{2}}$ (2)
Here $p$ and $q$ are the generated instruction sequence and a candidate semantic NOP, respectively.
This process identifies the semantic NOPs closest to each of the sequences in
the allowed perturbation space of the optimal AE. In the current
implementation, this process is done sequentially for each perturbation block;
however, it can be easily parallelized to improve the runtime performance.
After that, the semantic NOPs with the minimum distance are used to replace
the perturbation blocks in the optimal AE. The new adversarial example is
called the optimal viable AE.
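The replacement step thus reduces to a nearest-neighbor search under Eq. (2). A minimal sketch is given below, treating one perturbation block of the optimal AE and each candidate semantic NOP as vectors of byte/pixel values; the candidate list is assumed to contain byte strings of the same length as the block.

```python
import numpy as np

def closest_semantic_nop(block, candidates):
    """Replace one perturbation block (array of pixel/byte values in the
    optimal AE) with the candidate semantic NOP minimizing Eq. (2)."""
    block = block.astype(np.float64)
    dists = [
        np.linalg.norm(block - np.frombuffer(c, dtype=np.uint8).astype(np.float64))
        for c in candidates      # candidates: bytes of the same length
    ]
    return candidates[int(np.argmin(dists))]
```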
Finally, we pass the optimal viable AE to our malware detector for
classification. If it is classified as benign, we stop the process because we
have already produced a successful adversarial example. If it is correctly
classified as malware, it will be used as the starting point for another
iteration of the CW attack, and the process is repeated. We expect that
starting from a failed optimal viable AE can better direct us toward a
successful one. However, it is possible for the AE to get stuck in a local
optimum. A more general alternative is to restart the whole process from the
visualized malware augmented with randomly initialized
semantic NOPs.
## 4 Evaluation
### 4.1 Experiment Setup
#### 4.1.1 Malware Detection Model.
We use a CNN-based malware detector as our attack target. Because there is no
open-source code for the visualization-based malware detector, we build our
own CNN following a structure similar to that of previous work. Specifically, we
rebuilt the structure described in [22, 23]. Our CNN is composed of an
adaptive average pooling layer to handle inputs of different dimensions, two
consecutive convolutional layers with max-pooling, and three fully connected
layers. We use ReLU as the activation function (Figure 10).
Figure 10: Malware detection neural network model structure.
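For concreteness, a PyTorch sketch of such an architecture is shown below. The channel widths, kernel sizes, and the 64×64 pooled input are illustrative assumptions, since the exact hyperparameters are not specified here.

```python
import torch.nn as nn

class MalwareCNN(nn.Module):
    """Adaptive average pooling -> two conv + max-pool blocks -> three
    fully connected layers, with ReLU activations throughout."""
    def __init__(self):
        super().__init__()
        self.pool_in = nn.AdaptiveAvgPool2d((64, 64))   # any input size
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),                            # malware vs. benign
        )

    def forward(self, x):
        return self.classifier(self.features(self.pool_in(x)))
```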
#### 4.1.2 Dataset.
We build our dataset by combining one public malware dataset and one public
benign software dataset. The malware dataset is from UCSB’s public malimg
dataset [30], which has a collection of around 9500 malware formatted as PNGs.
The benign dataset is a collection of around 150 benign software from the
Architecture Object Code Dataset [8]. The software was chosen from the AOCD
with various functionalities to ensure that the classifier did not learn
latent features representative of the type of benign code.
#### 4.1.3 Model Training.
A relatively small subset of the malimg dataset is used to acquire a $50\%$
split between malware and benign images in the training data. This subset was
retrieved randomly from the malimg dataset, and examples were chosen without
considering their corresponding malware family. In doing this, we prevent the
classifier from learning any latent features from any particular malware
family and improve its accuracy for any malware class in the validation set.
Validating this model’s accuracy, we are able to confirm the classification
accuracy of state-of-the-art models, with a $100\%$ accuracy on identifying
benign software and a $99\%$ accuracy on correctly identifying malware
samples.
#### 4.1.4 Attacking the Malware Detector.
We evaluate our attack using the malware from the malimg dataset [30]. For
each malware, we augment it with empty spaces per instruction initialized with
eight naive NOPs. If the AE generator fails to produce the optimal AE, we
consider it an attack failure. If the AE generator can produce the optimal AE,
we run the AE optimizer to replace perturbations with the closest semantic
NOPs. We set the iteration threshold to be ten. If the optimal viable AE is
generated within the ten iterations of the CW attack and AE optimization and
can be successfully misclassified as benign, we view it as a successful
attack.
### 4.2 Results
We first evaluate our attack on the malware family, “Dialplatform.B”, which
contains 174 malware samples. Malware in this family has a moderately large code
section, which gives us more flexibility to generate adversarial examples in a
short time. The corresponding image of each malware is either $216\times 64$
or $432\times 64$ in pixels. The average number of instructions in the code
section for each malware is 622. All samples are correctly classified as malware by
our detector.
We successfully generate adversarial examples for 172 malware out of the total
174 malware samples (98.9%). For the two malware samples for which we fail to
generate an adversarial example, our algorithm workflow oscillates around a
local optimum, so the iteration of CW attack and AE optimization (steps 2, 4, 5
in Figure 1) never terminates with a viable example. There are two potential
methods to avoid the local minimum
oscillation issue. First, we can initialize the empty spaces between
instructions with random semantic NOPs. Randomized initialization has already
been applied frequently to solve similar local optimum oscillation problems
before. Second, in the AE optimizer step (Section 3.4), we could pick a sub-
optimal viable adversarial example if we detect local optimum oscillation. In
this way, our algorithm can break the local optimum while also searching for a
valid adversarial example.
For all of the adversarial examples, the functionality of the original malware
is preserved by construction since we only instrument the original binary with
semantic NOPs at instruction boundaries. All of the adversarial examples can
be generated within five iterations of the CW attack and AE optimization. The
running time to generate the adversarial example is given in Figure 11. On
average, it takes 77.8 seconds to generate the adversarial example for the
malware with around 600 instructions, and the time overhead is dominated by the
AE optimization step. The expansion rate of the original malware due to
augmentation is shown in Figure 12. On average, the perturbation size is
35.82% of the size of the original malware as seen from Figure 12. We argue
that though the expansion rate is high, the added perturbation is still hard
to filter out due to the flexibility of our semantic NOPs. The attackers can
even build a much larger semantic NOP set than our current version. They can
also consider using dead code branching. In this way, unless the users know
the semantic NOPs and the initial size of the malware, it is hard to filter
out our attacks.
We achieve similar results for other malware families with similar sizes of
the code section. On the other hand, our algorithm does not work well for
malware with very small code sections. We argue that the spaces to add
semantic NOPs are too few and too small to allow enough perturbation to cause
misclassification due to the small code section. We expect that enlarging the
mask size can improve the attack success rate, but this might defeat the
attacker’s purpose for distributing small and easy-to-transmit malware.
Another potential way is to combine perturbations in code sections with those
in other sections, such as data sections. However, we argue that keeping
executability can be hard and naive padding is relatively easy to filter out.
We leave attacks for malware with a small code section as future work.
Figure 11: The runtime evaluation for the adversarial example attack. Figure
12: The size of the added perturbation of the adversarial example attack.
To further test the end-to-end methodology for generating adversarial examples
for malware with a larger number of instructions, we also run the same
experiment with 11 malware from the “Lolyda.AA3” class. Each malware contains
at least 200,000 instructions. We achieve an attack success rate of 81.8%
(i.e., 9 out of 11). The two failures are also due to local optimum
oscillation issues that we face when generating adversarial examples for
malware in the “Dialplatform.B” family. We expect that random initialization
and a sub-optimal AE optimizer can solve the problem, and we leave this to
future work. On the
other hand, the major problem for generating adversarial examples for large
malware is the time overhead. In our experiments, the time overhead reaches
about six hours to generate a single adversarial example. As a proof-of-
concept attack and assuming the attacker can tolerate this time overhead, our
attack can still work properly, but the practicality of the attack in real
life is still under question considering the size overhead added to the
original malware as well. We expect parallelism can improve the running time.
Specifically, our current AE optimizer finds viable semantic NOP sequences for
each empty space defined by the mask sequentially. However, each empty space
is independent of the other if we perturb it with self-contained NOP
sequences. Therefore, parallelization can be easily applied in the AE
optimization step. We leave its implementation and evaluation as future work.
Our high attack success rate is a conservative representation of how
vulnerable visualization-based malware detection can be to adversarial
examples. In our experiment, we set each perturbation block to be small (i.e.,
8 bytes). A more powerful attacker willing to accept a higher risk of detection
can further increase the size of each perturbation block so that the attack
becomes easier to launch.
## 5 Discussion
### 5.1 Limitations
While the proposed attack can successfully find adversarial examples with a
high success rate, due to the nature of the optimization algorithm, the time
necessary to find the optimal viable AE increases drastically with the size of
the original malware. In our evaluation, we performed adversarial example
attacks on the “DialPlatform.B” malware family, where each malware image is of
dimension $216\times 64$ or $432\times 64$ with an average of 622
instructions. Since the images are of reasonably small dimensions, the
potential perturbation spaces are fairly limited. As a result, an adversarial
example can be generated in less than two minutes. However, for larger
malware, as in the “Lolyda.AA3” family, producing a single optimal adversarial
example can take a few hours. As each perturbation block can be independently
replaced with the closest semantic NOPs, we expect parallelization to improve
the running time.
In addition, our attack only adds instructions to the malware’s code sections.
Therefore, when the code section for the original malware is small, it will be
hard to provide enough perturbations to cause the classifier to misclassify
the malware image. Similar issues occur when the data section is much larger
than the code section. One possible approach is to first identify the hot zones
of the machine learning detector on the malware image and then add semantic
NOPs only at these locations. Another approach to solving this challenge is to design a
mechanism to perturb the data section without affecting the original
functionality of the malware.
There are some potential defenses against general attacks to malware
detection. Tong et al. [39] proposed a method to boost the robustness of
feature-space models by identifying the conserved features and constraining
them in adversarial training. Their defense is evaluated only on PDF malware,
and we plan to evaluate further the defense on more malware families and more
diverse model types.
### 5.2 Future Work
The proposed attack algorithm is a proof-of-concept and a work-in-progress,
and there are several research directions we would like to explore as future
work:
1. 1.
In the current version of the attack, the size and frequency of the added
perturbations are chosen beforehand by the attacker. In the development of the
attack, we would like to explore the possibility of adding these two
parameters (i.e. size and frequency of the perturbations) to the optimization
process. We speculate that this can improve the performance of our AE
generation algorithm. First, it can lead to faster convergence into a viable
AE. Additionally, it can also lead to a smaller perturbation size that is
customized for each malware sample.
2. 2.
Another avenue of improvement to our baseline algorithm is in speed
performance as the baseline algorithm does not leverage any speed
optimizations. Our AE generation algorithm can allow batching and
parallelization to enhance the speed performance. First, we plan to batch
instructions to be processed at once instead of processing individually.
Additionally, since Euclidean distance calculations do not have to be done
sequentially, we plan to calculate all the distances in parallel.
3. 3.
Another avenue that we would like to explore is the intrinsic properties of
NOPs. Specifically, we would like to study the difference between NOPs and
whether some NOPs are better than others. Additionally, we would like to draw
from the field of software engineering in creating semantically similar code
using modifications that do not change the semantics at a higher-level
language (e.g. adding empty loops or if statements that would never be true).
4. 4.
In our current implementation, we restrict adding perturbation to the code
section. In the algorithm, we would like to explore the effects of adding
perturbations to other sections and understanding its effects on
executability, robustness to pre-processing, and maintaining semantic
operations.
5. 5.
We plan to perform a comprehensive comparison between our proposed attack with
the state-of-the-art attacks with respect to speed, stealthiness, and
deviation from the original binary. Additionally, we want to evaluate our
attack’s success against the defense and detection mechanisms available (e.g.
VirusTotal).
6. 6.
To have an end-to-end solution, we would like to add a module that checks the
executability of each binary (e.g., using the _execve_ system call).
7. 7.
Our current work has already revealed that malware detection based on binary
visualization can be beaten by an adversarial example with a high success
rate. In the meantime, the attacker can also maintain the functionality of the
original malware. Therefore, our work motivates future directions to propose
defenses against our attack. Previous defenses against adversarial examples,
such as defensive distillation [32] and adversarial training [14], do not
usually focus on real-world tasks such as defending malware detection models.
Whether these defenses are effective in this use case is worth exploring.
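As a concrete illustration of the parallel distance computation mentioned in item 2, the following minimal sketch vectorizes the Euclidean distances with NumPy. The fixed-length feature-vector representation of NOP candidates and all array names are assumptions made for illustration, not part of our implementation.

```python
# Minimal sketch: compute all candidate distances at once instead of looping.
# The fixed-length vector representation and array names are hypothetical.
import numpy as np

target = np.random.randn(64)            # perturbation vector to approximate
candidates = np.random.randn(500, 64)   # feature vectors of all NOP candidates

# One vectorized call replaces 500 sequential distance computations
dists = np.linalg.norm(candidates - target, axis=1)
best = int(np.argmin(dists))            # index of the closest candidate
```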
## 6 Conclusion
In this work, we provide a literature review on existing techniques and
attacks for malware detection. We summarize their limitations and propose a
novel end-to-end method for generating adversarial examples against
visualization-based malware detection. We design our attack workflow to ensure
that the malware sample remains executable while maintaining its original
functionality. Additionally, we ensure that the added perturbation is robust
against pre-processing by inserting semantic NOPs in the reachable code
section. We construct a dataset that includes both malware and benign software
samples and use it to build a visualization-based ML malware detector that
achieves high accuracy. Next, we design a workflow that generates semantic NOP
sequences and use them to construct viable adversarial examples. Our results
show that it is possible to successfully generate adversarial examples that
can bypass a highly accurate visualization-based ML malware detector while
maintaining executability and without changing the code operation. Our work
motivates the design for more robust visualization-based malware detection
against carefully crafted adversarial examples.
## References
* [1] Abuhamad, M., Abuhmed, T., Mohaisen, A., Nyang, D.: Large-scale and language-oblivious code authorship identification. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (2018)
* [2] Alam, S., Horspool, R.N., Traore, I., Sogukpinar, I.: A framework for metamorphic malware analysis and real-time detection. computers & security 48, 212–233 (2015)
* [3] Arp, D., Spreitzenbarth, M., Hubner, M., Gascon, H., Rieck, K., Siemens, C.: Drebin: Effective and explainable detection of android malware in your pocket. In: Ndss. vol. 14, pp. 23–26 (2014)
* [4] Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 ieee symposium on security and privacy (sp). pp. 39–57. IEEE (2017)
* [5] Chouchane, M.R., Lakhotia, A.: Using engine signature to detect metamorphic malware. In: Proceedings of the 4th ACM workshop on Recurring malcode. pp. 73–78 (2006)
* [6] Christodorescu, M., Jha, S.: Testing malware detectors. ACM SIGSOFT Software Engineering Notes 29(4), 34–44 (2004)
* [7] Christodorescu, M., Jha, S., Seshia, S.A., Song, D., Bryant, R.E.: Semantics-aware malware detection. In: 2005 IEEE Symposium on Security and Privacy (S&P’05). pp. 32–46. IEEE (2005)
* [8] Clemens, J.: Automatic classification of object code using machine learning. Digital Investigation 14, S156–S162 (08 2015). https://doi.org/10.1016/j.diin.2015.05.007
* [9] Demetrio, L., Coull, S.E., Biggio, B., Lagorio, G., Armando, A., Roli, F.: Adversarial exemples: A survey and experimental evaluation of practical attacks on machine learning for windows malware detection. arXiv preprint arXiv:2008.07125 (2020)
* [10] Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., Song, D.: Robust physical-world attacks on machine learning models. ArXiv abs/1707.08945 (2017)
* [11] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2018)
* [12] Fu, J., Xue, J., Wang, Y., Liu, Z., Shan, C.: Malware visualization for fine-grained classification. IEEE Access 6, 14510–14523 (2018)
* [13] Go, J.H., Jan, T., Mohanty, M., Patel, O.P., Puthal, D., Prasad, M.: Visualization approach for malware classification with resnext. In: 2020 IEEE Congress on Evolutionary Computation (CEC). pp. 1–7. IEEE (2020)
* [14] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
* [15] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. CoRR abs/1412.6572 (2015)
* [16] Grosse, K., Papernot, N., Manoharan, P., Backes, M., McDaniel, P.: Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435 (2016)
* [17] Han, K.S., Lim, J.H., Kang, B., Im, E.G.: Malware analysis using visualized images and entropy graphs. International Journal of Information Security 14, 1–14 (2014)
* [18] Han, K.S., Lim, J.H., Kang, B., Im, E.G.: Malware analysis using visualized images and entropy graphs. International Journal of Information Security 14(1), 1–14 (2015)
* [19] Han, K., Lim, J.H., Im, E.G.: Malware analysis method using visualization of binary files. In: Proceedings of the 2013 Research in Adaptive and Convergent Systems, pp. 317–321 (2013)
* [20] AV-TEST Institute: Malware statistics, https://www.av-test.org/en/statistics/malware/
* [21] Islam, R., Tian, R., Batten, L.M., Versteeg, S.: Classification of malware based on integrated static and dynamic features. Journal of Network and Computer Applications 36(2), 646–656 (2013)
* [22] Khormali, A., Abusnaina, A., Chen, S., Nyang, D., Mohaisen, A.: Copycat: practical adversarial attacks on visualization-based malware detection. arXiv preprint arXiv:1909.09735 (2019)
* [23] Kolosnjaji, B., Demontis, A., Biggio, B., Maiorca, D., Giacinto, G., Eckert, C., Roli, F.: Adversarial malware binaries: Evading deep learning for malware detection in executables. In: 2018 26th European Signal Processing Conference (EUSIPCO). pp. 533–537. IEEE (2018)
* [24] Krcál, M., Svec, O., Bálek, M., Jasek, O.: Deep convolutional malware classifiers can learn from raw executables and labels only. In: ICLR (2018)
* [25] Krügel, C., Robertson, W.K., Valeur, F., Vigna, G.: Static disassembly of obfuscated binaries. In: USENIX Security Symposium (2004)
* [26] Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.: Defense against adversarial attacks using high-level representation guided denoiser. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2018)
* [27] Liu, L., Wang, B.s., Yu, B., Zhong, Q.x.: Automatic malware classification and new malware detection using machine learning. Frontiers of Information Technology & Electronic Engineering 18(9), 1336–1347 (2017)
* [28] Liu, X., Zhang, J., Lin, Y., Li, H.: Atmpa: Attacking machine learning-based malware visualization detection methods via adversarial examples. In: 2019 IEEE/ACM 27th International Symposium on Quality of Service (IWQoS). pp. 1–10. IEEE (2019)
* [29] Makandar, A., Patrot, A.: Malware class recognition using image processing techniques. 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI) pp. 76–80 (2017)
* [30] Nataraj, L., Karthikeyan, S., Jacob, G., Manjunath, B.S.: Malware images: Visualization and automatic classification. In: Proceedings of the 8th International Symposium on Visualization for Cyber Security. VizSec ’11, Association for Computing Machinery, New York, NY, USA (2011). https://doi.org/10.1145/2016904.2016908, https://doi.org/10.1145/2016904.2016908
* [31] Nataraj, L., Karthikeyan, S., Jacob, G., Manjunath, B.S.: Malware images: visualization and automatic classification. In: Proceedings of the 8th international symposium on visualization for cyber security. pp. 1–7 (2011)
* [32] Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE symposium on security and privacy (SP). pp. 582–597. IEEE (2016)
* [33] Pierazzi, F., Pendlebury, F., Cortellazzi, J., Cavallaro, L.: Intriguing properties of adversarial ml attacks in the problem space. In: 2020 IEEE Symposium on Security and Privacy (SP). pp. 1332–1349. IEEE (2020)
* [34] Rad, B.B., Masrom, M., Ibrahim, S.: Camouflage in malware: from encryption to metamorphism. International Journal of Computer Science and Network Security 12(8), 74–83 (2012)
* [35] Raff, E., Barker, J., Sylvester, J., Brandon, R., Catanzaro, B., Nicholas, C.: Malware detection by eating a whole exe. In: AAAI Workshops (2018)
* [36] Shaid, S.Z.M., Maarof, M.A.: Malware behavior image for malware variant identification. In: 2014 International Symposium on Biometrics and Security Technologies (ISBAST). pp. 238–243. IEEE (2014)
* [37] Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.: Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (2016)
* [38] Sharif, M., Lucas, K., Bauer, L., Reiter, M.K., Shintre, S.: Optimization-guided binary diversification to mislead neural networks for malware detection. arXiv preprint arXiv:1912.09064 (2019)
* [39] Tong, L., Li, B., Hajaj, C., Xiao, C., Zhang, N., Vorobeychik, Y.: Improving robustness of ML classifiers against realizable evasion attacks using conserved features. In: 28th USENIX Security Symposium (USENIX Security 19). pp. 285–302 (2019)
* [40] Xiaofang, B., Li, C., Weihua, H., Qu, W.: Malware variant detection using similarity search over content fingerprint. In: The 26th Chinese Control and Decision Conference (2014 CCDC). pp. 5334–5339. IEEE (2014)
* [41] Zhang, G., Yan, C., Ji, X., Zhang, T., Zhang, T., Xu, W.: Dolphinattack: Inaudible voice commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (2017)
* [42] Zhang, Q., Reeves, D.S.: Metaaware: Identifying metamorphic malware. In: Twenty-Third Annual Computer Security Applications Conference (ACSAC 2007). pp. 411–420. IEEE (2007)
# Simple El Niño prediction scheme using the signature of climate time series
Nozomi Sugiura and Shinya Kouketsu
Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology, Yokosuka, Japan
###### Abstract
El Niño is a typical example of a coupled atmosphere–ocean phenomenon, but it
is unclear whether it can be described quantitatively by a correlation between
relevant climate events. To provide clarity on this issue, we developed a
machine learning-based El Niño prediction model that uses the time series of
climate indices. By transforming the multidimensional time series into the
path signature, the model is able to properly evaluate the order and
nonlinearity of climate events, which allowed us to achieve good forecasting
skill (root-mean-square error = 0.596 K for 6-month prediction). In addition, it is
possible to provide information about the sequence of climate events that tend
to change the future NINO3.4 sea surface temperature. In the forecasting
experiments conducted, changes in the North Pacific Index and several NINO
indices were found to be important precursors. The results suggest that El
Niño is predictable to some extent based on the correlation of climate events.
###### keywords:
signature, El Niño, time series analysis, machine learning
## 1 Introduction
El Niño is an important climate phenomenon that has an immense socio-economic
impact. Consequently, its onset/offset mechanism has been garnering intense
scientific interest (Neelin et al., 1998; Wallace et al., 1998; Timmermann et
al., 2018), and the relationships with the other climate modes and events has
been investigated (Bjerknes, 1969; Alexander et al., 2002; White et al., 2014)
for many years. The prediction of El Niño events is still under investigation
from various perspectives, including statistical inference from past time
series of climate records and results from climate models, initialization from
climate models, and data assimilation using oceanic or coupled
atmospheric–oceanic models.
Although previous studies suggest that predictions with expensive climate
models outperform purely statistical predictions (Wu et al., 2021),
statistical predictions still appear to have value because of their simplicity
(Penland and Magorian, 1993). Recently, elaborate and quite skillful
predictions have been performed based on machine learning, or deep learning,
with the use of past oceanic sea surface temperatures (SSTs) and subsurface
information (Wang et al., 2020; Ham et al., 2019). Hu et al. (2021)
extensively evaluated the climate network method that utilizes the
relationship between spatio-temporal points and concluded that it has some
prediction skill over one year. Dijkstra et al. (2019) also reported that
machine learning models can improve the prediction skill over one year.
Nonetheless, few practical prediction studies have employed only the series of
multidimensional climate indices as learning datasets. As a remarkable
exception, Yan et al. (2020) used the NINO3.4 and Southern Oscillation indices
exclusively and successfully performed a skillful prediction of the NINO3.4
index via a temporal convolutional network. In light of the well-known fact
that the climate events and variabilities in the extratropics can also
modulate El Niño and the Southern Oscillation (ENSO) variability (Vimont et
al., 2003; Nakamura et al., 2006), the use of the various climate indices,
which represent the typical patterns of the ocean surface variables (e.g., sea
surface temperature and sea level pressure) in the course of climate changes,
can simply clarify the relationships between ENSO and the other climate modes
as well as improve the prediction efficiency of ENSO. However, a possible
disadvantage of statistical predictions is that they typically provide little
information about the correlations among these climate events through their
evolution.
To rectify this issue, we propose a new statistical method that is simple,
considerably skillful, and provides process information to explain how climate
events evolved. This study was conducted to develop a practical machine
learning-based El Niño prediction scheme using the past time series of climate
indices. The key ingredient that enables the faithful interpretation of the
past time series, including their nonlinearities, is the signature of paths,
which is a central concept in rough path theory (Lyons et al., 2007).
Although several studies have been published on the methodology of time series
analysis using the signature method (Morrill et al., 2021), there appears to
be no application of the method to global-scale climate events, and thus this
paper opens a new field of research in geosciences.
## 2 Methodology
In this study, we apply supervised learning to a time series of past climate
indices. We utilize the fact that each segment of the time series is an
explanatory variable that is equipped with future values at that time, which
can be regarded as objective variables. The most significant aspect of the
proposed method is that each segment of the time series is transformed into a
signature. Therefore, our case study employs the simplest setting, utilizing
the signature method, to concentrate on the proof of concept. In this
section, after explaining the theoretical basis of why the signatures are
relevant, we present the machine learning procedure based on that theory.
Then, we discuss how to interpret the results and, finally, we present the
parameters used.
### 2.1 Approximating a function on a set of paths
In prediction studies based on learning from time series, it is crucial to
construct a predictor that links past time-series segments to future values.
A predictor is represented as a continuous function on a set of
multidimensional paths. To secure the performance of the predictor, it is
essential to choose an appropriate basis for functions of paths, because this
basis determines the expressive power of the predictor. Note that our concern
is not a basis for paths but a basis for functions of paths. In this sense,
the most mathematically justified candidate is the signature (Lyons et al.,
2007; Sugiura and Hosoda, 2020).
For a $d$-dimensional path $X=X_{[s,t]}:[s,t]\to\mathbb{R}^{d}$ that maps
$\tau$ to $X_{\tau}$, the $0$-th to $n$-th iterated integrals are defined
recursively, as follows (Lyons et al., 2007):
$\mathcal{S}^{()}(X_{[s,t]})=1,$ (1)
$\mathcal{S}^{(i_{1}\cdots i_{n})}(X_{[s,t]})=\int_{s}^{t}\mathcal{S}^{(i_{1}\cdots i_{n-1})}(X_{[s,t_{n}]})\,dX^{(i_{n})}_{t_{n}},\quad i_{1},\cdots,i_{n}=1,\cdots,d.$ (2)
The signature $\mathcal{S}(X)$ of path $X$ is the collection of all the
iterated integrals, and the operation $\mathcal{S}:X\mapsto\mathcal{S}(X)$ is
called the signature transform. In particular, its truncation up to the $n$-th
iterated integrals is called the step-$n$ signature,
$\mathcal{S}_{n}(X)\in\bigoplus_{k=0}^{n}(\mathbb{R}^{d})^{\otimes k}$, which
means that the multi-index $I$ in component $\mathcal{S}_{n}^{(I)}(X_{[s,t]})$
runs across
$I\in\phi\cup\{1,\cdots,d\}\cup\cdots\cup\{1,\cdots,d\}^{n}.$ (3)
Now, let $C(K,\mathbb{R})$ be the space of continuous functions on a compact
set $K$ of paths with at least one monotone coordinate. The subset $A\subset
C(K,\mathbb{R})$ is defined as
$A=\left\{g:X\mapsto\sum_{I}w^{(I)}\mathcal{S}^{(I)}(X)\,\middle|\,w\in\bigoplus_{k=0}^{n}(\mathbb{R}^{d})^{\otimes k}\text{ for some }n\geq 0\right\}.$ (4)
Then, $A$ satisfies the following conditions:
1.
Because the step-$n$ signature transform $K\ni X\mapsto\mathcal{S}_{n}(X)$ is
continuous for any $n>0$, $A\subset C(K,\mathbb{R})$.
2.
$g_{1},g_{2}\in A\text{ and }\lambda_{1},\lambda_{2}\in\mathbb{R}\implies\lambda_{1}g_{1}+\lambda_{2}g_{2}\in A$.
3.
The constant-valued function $\mathbf{1}\in A$.
4.
Based on the shuffle identity (Lyons et al., 2007), $g_{1},g_{2}\in A\implies g_{1}g_{2}\in A$.
5.
Based on the uniqueness theorem (Levin et al., 2013), for all $X,Y\in K$ with
$X\neq Y$, there exists $g\in A$ that satisfies $g(X)\neq g(Y)$.
From these conditions, we can apply the Stone–Weierstrass theorem (Stone,
1937) to the subset $A$ and conclude that $A$ is dense in $C(K,\mathbb{R})$,
which means that any function $f\in C(K,\mathbb{R})$ is uniformly approximated
by a function $g\in A$ with arbitrary accuracy (Levin et al., 2013; Fermanian,
2021).
From the above reasoning, we can construct a nonlinear predictor as a linear
combination of iterated integrals for each segment of the multidimensional
time series.
### 2.2 Procedure for machine learning of time series
In the proposed approach, the predictor is constructed as follows. Suppose we
have a time series of several climate indices defined at each calendar month
of $t_{m},~{}m=1,2,\cdots,M$. Any segment of the time series over a period,
say six months, can be viewed as a multidimensional path, which can be
represented by the signature.
For this supervised learning, the objective variable is the NINO3.4 index
$y_{m+m_{a}}$ at time $\tau=t_{m+m_{a}}$, while the explanatory variables
$x_{m}$ are the signature for the segment of time series $X$ in the period
$[t_{m-m_{b}+1},t_{m}]$. The approximation property described in the previous
section allows us to express the objective variable as a linear combination of
the explanatory variables:
$y_{m+m_{a}}=y_{m}+\left<w_{m},x_{m}\right>+\epsilon,$ (5)
$x_{m}:=\mathcal{S}_{n}(X_{[t_{m-m_{b}+1},t_{m}]}),$ (6)
where $\mathcal{S}_{n}(X_{[t_{0},t_{1}]})$ denotes the step-$n$ signature for
the $d$-dimensional time series in the interval $[t_{0},t_{1}]$,
$\left<a,b\right>$ denotes the scalar product $\sum_{I}a^{(I)}b^{(I)}$,
$w_{m}=\{w_{m}^{(I)}\,|\,I=\text{multi-index}\}$ is the weight vector for the
predictor, $\epsilon$ is a random variable representing prediction error,
$t_{m}$ is the starting time of the prediction, $t_{m-m_{b}+1}$ is the
starting point of the path segment, and $t_{m+m_{a}}$ is the target time for
prediction.
Figure 1: Schematic view of the training and prediction flow, assuming
$m_{b}=m_{a}=6$. Hatched squares represent the transformation into the
signature. The predicted value at time index $m+6$ is compared with the
validation data if available.
Before converting into the signature, a zero vector is added at the beginning
of each series $X_{[t_{m-m_{b}+1},t_{m}]}$ to account for the magnitude of the
value at the starting point (Morrill et al., 2021). We computed the signature
by using the Python library ${\tt esig}$ (Kormilitzin, 2017).
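As an illustration, a minimal sketch of this transformation for a single segment might look as follows; it assumes the esig interface `stream2sig`, and the segment array is a random placeholder.

```python
# Minimal sketch of the signature transform of one 6-month segment; the
# segment array is a placeholder, and we assume the esig.stream2sig interface.
import numpy as np
import esig

d, n, m_b = 12, 3, 6              # indices, signature depth, segment length
X_seg = np.random.randn(m_b, d)   # placeholder for one time-series segment

# Prepend a zero vector so the signature also reflects the starting value
path = np.vstack([np.zeros(d), X_seg])

x_m = esig.stream2sig(path, n)    # step-3 signature, flattened
assert x_m.size == (d**(n + 1) - 1) // (d - 1)   # = 1885 terms for d=12, n=3
```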
In the control case, we instead used the time series as is without converting
it into the signature:
$x_{m}:=X_{[t_{m-m_{b}+1},t_{m}]}=\left(X_{t_{m-m_{b}+1}},X_{t_{m-m_{b}+2}},\cdots,X_{t_{m}}\right).$ (7)
This corresponds to an auto-regressive (AR) model.
Using the training dataset available up to time $t_{m}$,
$D_{m}=\left\{\left(x_{\mu},\,y_{\mu+m_{a}}-y_{\mu}\right)\,\middle|\,\mu\in[m_{b},m-m_{a}]\right\},$ (8)
we first estimate the optimal weight $w=w_{m}$ that minimizes the cost
function with an $L_{1}$-penalty term:
$J_{m}(w)=\frac{1}{2|D_{m}|}\sum_{\mu=m_{b}}^{m-m_{a}}\left(y_{\mu+m_{a}}-y_{\mu}-\left<w,x_{\mu}\right>\right)^{2}+\alpha\sum_{I}\left|w^{(I)}\right|,$ (9)
where $|D_{m}|=m-m_{a}-m_{b}+1$ is the number of samples in $D_{m}$, and $I$
is the multi-index. The optimization problem is solved by the Lasso model fit
with least angle regression (Pedregosa et al., 2011), which is suitable for
problems with many parameters. We then predict a future NINO3.4 index as
$\widehat{y}_{m+m_{a}}=y_{m}+\left<w_{m},x_{m}\right>$ and compare it to
$y_{m+m_{a}}$. In other words, a cross-validation is made against the
validation data:
$D^{\prime}_{m}=\left\{\left(x_{m},\,y_{m+m_{a}}-y_{m}\right)\right\}.$ (10)
We repeat the above procedure after incrementing the time index $m$ by $1$.
Figure 1 shows a schematic view of the training and prediction flow. In this
flow, the weight $w_{m}$ is obtained using the training dataset $D_{m}$, and
the prediction from time $t_{m}$ using the signature $x_{m}$ yields the value
$\widehat{y}_{m+m_{a}}$, which is compared with the validation data
$y_{m+m_{a}}$. Note that the size of the training data
$|D_{m}|=m-m_{a}-m_{b}+1$ depends on the starting time $t_{m}$. The prediction
error can be obtained from the statistics of
$\widehat{y}_{m+m_{a}}-y_{m+m_{a}}$ for various starting times. Because
training and forecasting proceed progressively, moving the forecast starting
time forward and hiding the future beyond it, each forecast constitutes a fair
cross-validation.
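A minimal sketch of this training and prediction step at a single time index $m$ might look as follows, assuming the signatures $x_{\mu}$ and the NINO3.4 series $y_{\mu}$ have been precomputed as NumPy arrays (zero-based in the code); scikit-learn's LassoLars implements the Lasso fit with least angle regression, and all names are ours.

```python
# Minimal sketch of one training/prediction step (Eqs. 5, 8-10); x and y are
# assumed precomputed arrays of signatures and NINO3.4 values (names ours).
import numpy as np
from sklearn.linear_model import LassoLars

m_a, m_b, alpha = 6, 6, 2.0   # lead time, segment length, L1 penalty

def predict_at(m, x, y):
    mus = np.arange(m_b, m - m_a + 1)                  # training sample indices
    X_train = np.stack([x[mu] for mu in mus])          # signatures x_mu
    t_train = np.array([y[mu + m_a] - y[mu] for mu in mus])  # increments

    # Lasso fit with least angle regression; Eq. (9) has no intercept term
    model = LassoLars(alpha=alpha, fit_intercept=False)
    model.fit(X_train, t_train)

    # Forecast: current value plus predicted increment (Eq. 5 without noise)
    y_hat = y[m] + model.predict(x[m][None, :])[0]
    return y_hat, model.coef_, X_train, t_train
```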
### 2.3 Diagnosing the dominant event sequences
One difficulty with regular machine learning is that it provides little
explanation of its results. However, the signature-based method
allows us to mathematically extract from the path those properties that are
important in the prediction.
To diagnose the dominant event sequences that contribute to the prediction, we
compute the standard partial regression coefficients (SPRCs) $r_{m}^{(I)}$,
which represent the sensitivity of normalized value $y_{\mu+m_{a}}$ in the
future to each component of the normalized signature $x_{\mu}^{(I)}$ in the
past, as
$r_{m}^{(I)}=\frac{\sigma_{x_{m}^{(I)}}}{\sigma_{y_{m}}}\,w_{m}^{(I)},$ (11)
where $\sigma_{y_{m}}$ and $\sigma_{x_{m}^{(I)}}$ denote the standard
deviations of $y_{\mu+m_{a}}-y_{\mu}$ and $x_{\mu}^{(I)}$, respectively, among
the samples in $D_{m}$, which represents the learning data in the period from
time $t_{m_{b}}$ to time $t_{m-m_{a}}$.
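A minimal sketch of this diagnostic, reusing the training arrays and fitted weights from the sketch above (all names are ours):

```python
# Minimal sketch of Eq. (11): standard partial regression coefficients.
# w, X_train, t_train correspond to the fitting sketch above (names ours).
import numpy as np

def sprc(w, X_train, t_train):
    # Rescale each weight by the ratio of the predictor's standard deviation
    # to that of the target increments over the training samples
    return (X_train.std(axis=0) / t_train.std()) * w
```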
### 2.4 Setting of experimental parameters
We used a climate time series composed of the $d=12$ indices in Table 1,
retrieved from the NOAA site (NOAA, 2021). The time series starts at $t_{1}$
(January 1900) and ends at $t_{1459}$ (July 2021).
Table 1: Twelve climate indices and their abbreviations
Abbrev. | Climate Indices | References
---|---|---
NINO34 | Nino 3.4 (5N-5S, 170W-120W) SST | Rayner et al. (2003)
NINO12 | Nino 1+2 (0-10S, 90W-80W) SST | Rayner et al. (2003)
NINO3 | Nino 3 (5N-5S, 150W-90W) SST | Rayner et al. (2003)
NINO4 | Nino 4 (5N-5S, 160E-150W) SST | Rayner et al. (2003)
DMI | Dipole Mode Index | Hameed and Yamagata (2003)
AMO | Atlantic Multidecadal Oscillation index | Enfield et al. (2001)
NPI | North Pacific Index | Trenberth and Hurrell (1994)
SOI | Southern Oscillation Index | Ropelewski and Jones (1987)
NAO | North Atlantic Oscillation (NAO) index | Jones et al. (1997)
TPI | Tripole Index | Henley et al. (2015)
AO | Arctic Oscillation index | Thompson and Wallace (1998)
MON | Date elapsed (mid-day in month divided by 365) | -
The standard lead time for prediction is $6$ months ($m_{a}=6$), whereas each
past segment is of length $6$ months ($m_{b}=6$). The experiment duration was
from the prediction starting at $t_{961}$ (January 1980) to the one starting
at $t_{1453}$ (January 2021).
We used iterated integrals up to level $n=3$, which means that the total
number of terms in the linear combination was $N=(d^{n+1}-1)/(d-1)=1885$, and
the intensity of the $L_{1}$ penalty term was tuned to $\alpha=2.0$.
## 3 Results
Figure 2 shows the result of the $6$-month prediction, whereas Fig. 3 shows it
as anomalies from climatology. These figures indicate the overall superiority
of the prediction using the signature. The prediction error for each target month is
shown in Fig. 4. It is obvious that the predictions for July to September were
much better than those in the control case; however, they were comparable in
the other months. The overall prediction skill was $0.596\mathrm{K}$ for the
signature case and $0.663\mathrm{K}$ for the control case.
Wang et al. (2020) proposed an operator-theoretic technique called kernel
analog forecasting (KAF), which has a rigorous connection with Koopman
operator theory for dynamical systems, yielding statistically optimal
predictions as conditional expectations. They also compared it to the linear
inverse model (LIM), which is a linear evolution operator for modes. Note that
both methods employ as explanatory variables the dominant modes in
spatiotemporal SST, which we did not use in our study. Here, we use KAF and
LIM for the comparison of forecasting skill. For the comparison with KAF and
LIM, the root-mean square (rms) errors for 6-month prediction in the period
from 1998 to 2017 were computed. The signature model, AR model, KAF model, and
LIM had rms values of $0.617$, $0.686$, $0.62$, and $0.75\mathrm{K}$,
respectively. This comparison result suggests that the forecasting skill of
the signature model is comparable to that of the KAF model.
The spring prediction barrier is defined in Lai et al. (2018) as follows: “…
models have problems in predicting Boreal winter tropical Pacific sea surface
temperature (SST) when forecasts start in Boreal spring (February–May). This
is called the spring predictability barrier. ” Similarly, Zheng and Zhu (2010)
pointed out that “… errors have the largest values and the fastest growth
rates initialized before and during the NH spring.” In light of these
definitions, the spring predictability barrier, i.e., poor prediction skill
when starting from February and March, seems to disappear as indicated by the
rms error values in the target months of August to September.
Table 2 shows the dominant event sequences among iterated integrals. The
events with the first to the third indices are shown in each row. If the same
index appears twice in a row, then the event is intense. The top sequence in
the period from 1900 to 2020 is an intense NPI change followed by a Niño 1+2
SST change. The key indices are NPI and various NINO indices. In particular,
NPI, an atmospheric process, is involved in all the dominant sequences, which
should be a manifestation that El Niño is a coupled atmospheric–oceanic
process. In addition, the comparison between statistics for two different
periods suggests that the Nino1+2 index, corresponding to the region of
coastal South America, is becoming more important as a precursor in the 21st
century. Summarizing the above, Fig. 5 illustrates the sequences of dominant
climate events that lead to changes in the future NINO3.4 index.
Although the main indices in terms of the iterated integrals are related to
water temperature in the NINO regions, it appears that not only these but also
various other climate indices contribute incrementally, and the predictor is
built on a balance among them. In fact, if we perform an experiment with the
five main indices (NINO12, NINO3, NINO34, NINO4, and NPI), the rms error of
the prediction is $0.666\,\mathrm{K}$, with no improvement over the control
case. In this respect, the inference structure is considered to be different
from that of other SST-based predictions such as KAF and LIM.
Table 2: Top five dominant event sequences among iterated integrals. “1st” denotes the first index of the corresponding iterated integral: $\mathcal{S}^{(i_{1}i_{2}i_{3})}(X)=\int_{s}^{t}\int_{s}^{t_{3}}\int_{s}^{t_{2}}dX_{t_{1}}^{(i_{1})}dX_{t_{2}}^{(i_{2})}dX_{t_{3}}^{(i_{3})}$. Events happen from first to third: $t_{1}<t_{2}<t_{3}$. If the same index appears twice in a row, the event is intense. “SPRC” denotes the standard partial regression coefficient (Eq. 11).
Learning data from Jan. 1900 to Dec. 1999
No. | SPRC $r_{m}^{(i_{1}i_{2}i_{3})}$ | 1st ($i_{1}$) | 2nd ($i_{2}$) | 3rd ($i_{3}$)
---|---|---|---|---
1 | $5.55$ | NPI | NINO3 | NPI
2 | $-3.70$ | NINO3 | NPI | NPI
3 | $3.63$ | NPI | NPI | NINO12
4 | $-3.57$ | NINO34 | NINO34 | NINO34
5 | $3.34$ | NINO34 | NPI | NPI
Learning data from Jan. 1900 to Dec. 2020
No. | SPRC $r_{m}^{(i_{1}i_{2}i_{3})}$ | 1st ($i_{1}$) | 2nd ($i_{2}$) | 3rd ($i_{3}$)
---|---|---|---|---
1 | $4.53$ | NPI | NPI | NINO12
2 | $-4.17$ | NINO34 | NINO34 | NPI
3 | $-3.37$ | NPI | NINO34 | NINO12
4 | $-3.25$ | NINO34 | NPI | NINO12
5 | $2.88$ | NPI | NINO3 | NPI
Figure 2: Comparison of NINO3.4 for 6-month predictions. Red: signature case; blue: control case. Horizontal axis is the target year and month; vertical axis is temperature in $\mathrm{{}^{\circ}C}$.
Figure 3: The same as Fig. 2 but shown as anomalies, defined as the difference from the past 30-yr mean of monthly values. Red: signature case; blue: control case. Horizontal axis is the target year and month; vertical axis is temperature anomaly in $\mathrm{{}^{\circ}C}$.
Figure 4: Prediction error for each target month. Red: signature case; blue: control case. Horizontal axis is the target month (1 = January, 2 = February, $\cdots$, 12 = December); vertical axis is rms error in $\mathrm{K}$.
Figure 5: Typical climate event flows for predicting the future Nino3.4 index. Arrows indicate time order. Key indices include NINO12 (Nino1+2 SST), NINO34 (Nino3.4 SST), NINO4 (Nino4 SST), and NPI (North Pacific Index).
## 4 Conclusions
We developed a model that can statistically predict El Niño using only the
time series of past multidimensional climate indices. By converting the time
series into the signature, the accuracy of the machine learning algorithm is
improved and, thereby, the NINO3.4 SST can be predicted to some extent six
months in advance. An important byproduct of this approach is that the
correlation of climate events can be read from the dominant iterative
integral. For example, it was suggested that variations in the NPI, NINO12,
and other indices occur in a certain order, which leads to variations in the
NINO3.4 SST. It was also found that the signature method can learn the
nonlinear development of El Niño more accurately than the traditional AR model
and, thus, is less sensitive to the spring barrier of predictability. Future
research is required to improve the scheme by incorporating more detailed
oceanographic information, evaluating uncertainties, and considering other
factors.
The predictions obtained by this method do not come with error bars, but
because we know the prediction error for each month, as shown in Fig. 4, these
values can be taken as the prediction error. However, as it is not possible to
give a forecast error for each forecast individually, an ensemble could be
created by bootstrapping or other methods to improve this point, which may
also lead to a gain in forecast accuracy.
The length of the path segment used for the 6-month forecast, 6 months, was
confirmed in preliminary experiments (not shown) to be appropriate, but the
length of the path segment required for forecasts with other lead times may be
different. It is also necessary to confirm whether the step-$3$ signature is
optimal.
The dominant iterated integral for prediction may change from time to time
depending on the period covered, as shown in Table 2. It needs to be carefully
considered how this relates to the decadal changes in the Niño mechanism.
These points remain as future work.
## 5 Acknowledgments
This study was funded by JST-PROJECT-20218919.
Code availability section
Name: enso_signature
Contact: <EMAIL_ADDRESS>, phone: +81-46-867-9054
Hardware requirements: CPU
Program language: Python
Software required: Python libraries esig and scikit-learn
Program size: 188 lines
The source codes are available for downloading at the link:
https://github.com/nozomi-sugiura/enso_signature
## References
* Alexander et al. (2002) Alexander, M.A., Bladé, I., Newman, M., Lanzante, J.R., Lau, N.C., Scott, J.D., 2002\. The atmospheric bridge: The influence of ENSO teleconnections on air–sea interaction over the global oceans. Journal of climate 15, 2205–2231.
* Bjerknes (1969) Bjerknes, J., 1969. Atmospheric teleconnections from the equatorial Pacific. Monthly weather review 97, 163–172.
* Dijkstra et al. (2019) Dijkstra, H.A., Petersik, P., Hernández-García, E., López, C., 2019. The application of machine learning techniques to improve El Niño prediction skill. Frontiers in Physics 7, 153.
* Enfield et al. (2001) Enfield, D.B., Mestas-Nuñez, A.M., Trimble, P.J., 2001. The Atlantic multidecadal oscillation and its relation to rainfall and river flows in the continental U.S. Geophysical Research Letters 28, 2077–2080. doi:10.1029/2000GL012745.
* Fermanian (2021) Fermanian, A., 2021. Embedding and learning with signatures. Computational Statistics & Data Analysis 157, 107148.
* Ham et al. (2019) Ham, Y.G., Kim, J.H., Luo, J.J., 2019. Deep learning for multi-year enso forecasts. Nature 573, 568--572. doi:https://doi.org/10.1038/s41586-019-1559-7.
* Hameed and Yamagata (2003) Hameed, S., Yamagata, T., 2003. Possible impacts of Indian Ocean dipole mode events on global climate. Climate Research 25, 151–169. doi:10.3354/cr025151.
* Henley et al. (2015) Henley, B.J., Gergis, J., Karoly, D.J., Power, S., Kennedy, J., Folland, C.K., 2015\. A Tripole Index for the Interdecadal Pacific Oscillation. Climate Dynamics 45, 3077--3090. doi:10.1007/s00382-015-2525-1.
* Hu et al. (2021) Hu, X., Eichner, J., Faust, E., Kantz, H., 2021\. Benchmarking prediction skill in binary el niño forecasts. Climate Dynamics , 1--15.
* Jones et al. (1997) Jones, P.D., Jonsson, T., Wheeler, D., 1997. Extension to the north atlantic oscillation using early instrumental pressure observations from gibraltar and south-west iceland. International Journal of Climatology 17, 1433--1450. doi:https://doi.org/10.1002/(SICI)1097-0088(19971115)17:13<1433::AID-JOC203>3.0.CO;2-P.
* Kormilitzin (2017) Kormilitzin, A., 2017. the-signature-method-in-machine-learning. https://github.com/kormilitzin/.
* Lai et al. (2018) Lai, A.W.C., Herzog, M., Graf, H.F., 2018. Enso forecasts near the spring predictability barrier and possible reasons for the recently reduced predictability. Journal of Climate 31, 815--838.
* Levin et al. (2013) Levin, D., Lyons, T., Ni, H., 2013. Learning from the past, predicting the statistics for the future, learning an evolving system. ArXiv e-prints arXiv:1309.0260.
* Lyons et al. (2007) Lyons, T.J., Caruana, M., Lévy, T., 2007. Differential Equations Driven by Rough Paths. volume 1908 of Lecture Notes in Mathematics. Springer.
* Morrill et al. (2021) Morrill, J., Fermanian, A., Kidger, P., Lyons, T., 2021\. A Generalised Signature Method for Multivariate Time Series Feature Extraction. arXiv:2006.00873.
* Nakamura et al. (2006) Nakamura, T., Tachibana, Y., Honda, M., Yamane, S., 2006\. Influence of the Northern Hemisphere annular mode on ENSO by modulating westerly wind bursts. Geophysical Research Letters 33.
* Neelin et al. (1998) Neelin, J.D., Battisti, D.S., Hirst, A.C., Jin, F.F., Wakata, Y., Yamagata, T., Zebiak, S.E., 1998. ENSO theory. J. Geophys. Res. 103, 14261–14290.
* NOAA (2021) NOAA, 2021. Climate Timeseries at PSL. URL: https://psl.noaa.gov/gcos_wgsp/Timeseries/.
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E., 2011\. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825--2830.
* Penland and Magorian (1993) Penland, C., Magorian, T., 1993. Prediction of Niño 3 sea surface temperatures using linear inverse modeling. Journal of Climate 6, 1067–1076. doi:10.1175/1520-0442(1993)006<1067:PONSST>2.0.CO;2.
* Rayner et al. (2003) Rayner, N.A., Parker, D.E., Horton, E.B., Folland, C.K., Alexander, L.V., Rowell, D.P., Kent, E.C., Kaplan, A., 2003. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. Journal of Geophysical Research: Atmospheres 108. doi:10.1029/2002JD002670.
* Ropelewski and Jones (1987) Ropelewski, C.F., Jones, P.D., 1987\. An extension of the tahiti--darwin southern oscillation index. Monthly weather review 115, 2161--2165.
* Stone (1937) Stone, M., 1937. Applications of the theory of boolean rings to general topology. Transactions of the American Mathematical Society 41, 375--481.
* Sugiura and Hosoda (2020) Sugiura, N., Hosoda, S., 2020. Machine learning technique using the signature method for automated quality control of Argo profiles. Earth and Space Science 7, e2019EA001019. doi:10.1029/2019EA001019.
* Thompson and Wallace (1998) Thompson, D.W.J., Wallace, J.M., 1998. The Arctic Oscillation signature in the wintertime geopotential height and temperature fields. Geophysical Research Letters 25, 1297–1300. doi:10.1029/98GL00950.
* Timmermann et al. (2018) Timmermann, A., An, S.I., Kug, J.S., Jin, F.F., Cai, W., Capotondi, A., Cobb, K.M., Lengaigne, M., McPhaden, M.J., Stuecker, M.F., et al., 2018\. El niño--southern oscillation complexity. Nature 559, 535--545.
* Trenberth and Hurrell (1994) Trenberth, K., Hurrell, J., 1994\. Decadal atmosphere-ocean variations in the Pacific. Climate Dynamics 9, 303--319. doi:https://doi.org/10.1007/BF00204745.
* Vimont et al. (2003) Vimont, D.J., Wallace, J.M., Battisti, D.S., 2003. The seasonal footprinting mechanism in the Pacific: Implications for ENSO. Journal of Climate 16, 2668--2675.
* Wallace et al. (1998) Wallace, J.M., Rasmusson, E.M., Mitchell, T.P., Kousky, V.E., Sarachik, E.S., von Storch, H., 1998. On the structure and evolution of ENSO-related climate variability in the tropical Pacific: Lessons from TOGA. J. Geophys. Res. 103, 14241–14259.
* Wang et al. (2020) Wang, X., Slawinska, J., Giannakis, D., 2020. Extended-range statistical ENSO prediction through operator-theoretic techniques for nonlinear dynamics. Scientific Reports 10, 2636. doi:10.1038/s41598-020-59128-7.
* White et al. (2014) White, C.J., Hudson, D., Alves, O., 2014. ENSO, the IOD and the intraseasonal prediction of heat extremes across Australia using POAMA-2. Climate dynamics 43, 1791--1810.
* Wu et al. (2021) Wu, X., Okumura, Y.M., Deser, C., DiNezio, P.N., 2021\. Two-Year Dynamical Predictions of ENSO Event Duration during 1954--2015. Journal of Climate 34, 4069 -- 4087. URL: https://journals.ametsoc.org/view/journals/clim/34/10/JCLI-D-20-0619.1.xml, doi:10.1175/JCLI-D-20-0619.1.
* Yan et al. (2020) Yan, J., Mu, L., Wang, L., Ranjan, R., Zomaya, A.Y., 2020\. Temporal convolutional networks for the advance prediction of ENSO. Scientific reports 10, 1--15.
* Zheng and Zhu (2010) Zheng, F., Zhu, J., 2010. Spring predictability barrier of ENSO events from the perspective of an ensemble prediction system. Global and Planetary Change 72, 108--117.
# Topological insulators and enhancers in networks under generic problem-
solving dynamics
Johannes Falk, Edwin Eichler, Katja Windt, Marc-Thorsten Hütt
School of Science, Constructor University, Bremen, Germany; SMS Group GmbH, Düsseldorf, Germany; EICHLER Consulting AG, Weggis, Switzerland; School of Business, Social & Decision Sciences, Constructor University, Bremen, Germany
###### Abstract
The collective coordination of distributed tasks in a complex system can be
represented as decision dynamics on a graph. This abstract representation
allows studying the performance of local decision heuristics as a function of
task complexity and network architecture. Here we identify hard-to-solve and
easy-to-solve networks in a social differentiation task within the basic model
of small-world graphs. We show that, depending on the details of the decision
heuristic as well as the length of the added links, shortcuts can serve as
topological enhancers, which speed up the finding of a solution, but also as
topological insulators, which make the network more difficult to solve. Our
findings have implications for situations where, in distributed decision
systems, regional solutions emerge that are globally incompatible, as known,
e.g., from the emergence of standards.
###### keywords:
Graph Coloring Dynamics, Distributed Decision Strategies, Global Coordination
## 1 Introduction
Dynamics on graphs are an important concept to analyze distributed decision-
making and task coordination. Beyond the social sciences [1, 2, 3], logistics
[4] and computer science [5, 6, 7] are also interested in how distributed
decisions can efficiently lead to global coordination, e.g., to avoid queuing
or to minimize interference between WLANs. In the simplest coordination problems, a
node of the graph can select a decision (a ’color’) out of a list of allowed
decisions based on the observed decision states of its direct neighbors. The
local decision heuristics (i.e., the decision selection criteria at each node)
represent the goal of the systemic task. Such coordination tasks come in two
variants [8]: Either the task is related to some type of consensus across the
whole system; in this case, the graph is ’solved’ when no two linked nodes
have different colors. Alternatively, these coordination tasks can be related
to social differentiation, scheduling, or resource allocation; in this case,
the graph is ’solved’ when no two linked nodes have the same color. Here we focus on the second
scenario of social differentiation and scheduling. Its abstraction as color
dynamics on graphs, related to the graph coloring problem, has been made
popular by the seminal work of Kearns et al. [1]. This framework has led to
relevant insight into problem-solving dynamics and some ’stylized facts’ about
distributed decisions. Examples include the positive effect of random agents
in such a distributed decision system [3], the effect of a wave-like
organization of attention and strategic waiting on these decision dynamics
[9], and the effect of shortcuts in a small-world architecture on the
convergence toward a fully solved system. This is visible, both in experiments
with human subjects [1] and numerical simulations involving simple heuristics
[9].
The decision heuristics introduced in Hadzhiev et al. [9] furthermore provided
some insight into the interplay of centralized and autonomous, decentralized
control in manufacturing planning and control [10, 11].
However, a striking characteristic of graph coloring dynamics has not been
analyzed in the past: For a few or even a single shortcut (i.e., a small
rewiring probability in the Watts-Strogatz model [12]) one observes a dramatic
variability of runtimes. Here we show that – besides the random
initialization, as well as the general stochastic nature of these dynamics –
this high variability is due to the network topology: Depending on their exact
positions as well as the heuristic used, shortcuts in a ring graph can
generate easy-to-solve and difficult-to-solve graphs.
The problem addressed by our investigation is of relevance for many real-world
applications: Regional solutions emerge rapidly, but they are incompatible on
a global scale, and the remaining diffusing conflicts, which mark the
boundaries of incompatible solution regimes, require an excessive amount of
local reorganization until one region switches to a solution compatible with
another region. This problem of different locally valid solutions that are
globally incompatible can especially be observed in the emergence of
compatibility standards [13]. Different technical devices may be locally
compatible based on one standard, but incompatible with functionally
equivalent standards from other areas, leading to competition between
alternatives [14] and ultimately resulting in a global standard. Examples of
such battles are Blu-ray vs. HD DVD or Wi-Fi vs. HomeRF [15]. There already
exist some models to explain the success or failure of standards. But as
economic models, they are focused on the interplay of strategic factors,
business models, and business actors [16, 17]. Our investigation rather
contributes to understanding the spatial organization of standards and hence
the influence of the network topology on the time until a standard settles.
We show that – depending on the heuristic and the length of the links –
shortcuts can act as topological insulators and topological enhancers, i.e.
either delaying or accelerating the regional reorganization efforts towards a
trans-regionally compatible solution.
## 2 Methods
In this paper, we investigate heuristics that can solve the graph coloring
problem. In this problem from graph theory, the goal is to assign colors to
the vertices of a graph such that no two adjacent vertices have the same
color. The minimal number of colors that are needed to color a network in this
way is known as the chromatic number $\chi$ of the graph. In this section, we
explain how we generate graphs with a given chromatic number, introduce
different heuristics, and present a genetic algorithm that we use to generate
networks with specific properties.
### 2.1 Small-World Networks
In this analysis, we mainly focus on small-world networks with few inserted
links as a toy model for graphs with high clustering and small shortest path
length. The idea of the graph generation follows [12]. However, since the
networks are supposed to be solvable with a given number of $\chi$ colors (the
chromatic number), we generated them as follows: 40 (39 for $\chi=3$) nodes
are arranged as a circular graph, where every node $i$ is connected to its
$\chi-1$ closest neighbors in both directions. A given number of shortcuts is
then added such that every shortcut connects only nodes with a different value
of $mod(i,\chi)$, where $i$ is the node’s index, thus preserving the graph’s
chromatic number $\chi$. To compare how fast different network topologies can
be solved, we look at the number of color changes that have been performed
until the network is in a solved state. The color changes then set a time
scale where each time step is equal to one color change.
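A minimal sketch of this construction, using networkx (function and variable names are ours):

```python
# Minimal sketch of the chi-colourable small-world construction (names ours).
import random
import networkx as nx

def make_small_world(n_nodes=40, chi=2, n_shortcuts=5):
    G = nx.Graph()
    G.add_nodes_from(range(n_nodes))
    # Ring lattice: connect every node i to its chi-1 closest neighbours
    for i in range(n_nodes):
        for k in range(1, chi):
            G.add_edge(i, (i + k) % n_nodes)
    # Shortcuts may only join nodes with different i mod chi, which preserves
    # the chromatic number chi (the colouring i -> i mod chi remains valid)
    added = 0
    while added < n_shortcuts:
        u, v = random.sample(range(n_nodes), 2)
        if u % chi != v % chi and not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
    return G
```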
### 2.2 Other graph topologies with $\chi=2$
To extend our results to more general statements we generate three other types
of random networks (only for $\chi=2$):
* BA: For this network, we start with a simple path graph with 4 numbered nodes. Subsequently, we add nodes and links following preferential attachment as described in [18], where every new vertex (labeled with a consecutive number) is attached to existing nodes via two links. However, and in contrast to the reference, to ensure that the graph has a chromatic number of $2$, for an even (odd) number of already existing nodes, a newly added node can only connect to nodes with an odd (even) label.
* Random: The procedure to create this graph starts with a graph of $N$ unconnected nodes, labeled with an integer $i$. A given number of edges is then sampled randomly from all edges that would connect two nodes with an even and an odd label. This ensures a chromatic number of $\chi=2$. If the resulting graph is not connected, the procedure is repeated with a different set of randomly selected edges.
* Modular: This graph consists of two separate graphs of type Random that are connected by a single edge between two randomly selected nodes.
### 2.3 Neighborhood assessment strategies
Agent-based models to solve graph coloring problems have already been analyzed
in various variations. Inspired by the results from [1], Hadzhiev et al. [9]
developed a family of heuristics that allow agent-based networks to be solved
in reasonably short times. Following the concepts from [9], a graph coloring
heuristic consists of two components: One strategy for the temporal
organization (which node acts next) and one for the neighborhood assessment
(which color does the active node select). To simulate the behavior of
independent distributed systems as closely as possible, we use random
sequential updates (R) for the temporal organization, which means that every
time step the next node is selected at random from all available nodes. For
the neighborhood assessment heuristic, we first refer to three strategies from
[9], namely $R$ (random), $M$ (color minimizing), and $W$ (strategic waiting).
Subsequently, we develop a new ($N$) heuristic whose behavior can be
continuously tuned by a parameter $r$ (reasoning): For large values of $r$ the
agents always select their color by reasoned considerations. The smaller $r$,
the more often the color choice happens randomly. In all strategies, the
active node first assesses the colors of its connected neighbors. If possible,
the node randomly selects one of the colors that do not appear in its
neighborhood (conflict-free color). Otherwise, the different strategies
proceed as follows:
* R (random color): The node selects a color at random from all available colors.
* M (conflict-minimizing color): The node randomly selects a color from the set of colors that minimizes the number of conflicts. If the node already has the unique conflict-minimizing color, a color is selected at random.
* W (strategic waiting): Same as the M scheme, except that if the node already has the unique conflict-minimizing color, the present color is retained with probability $p=0.9$.
* N (reasoning): With a probability of $r$, the node randomly selects a color that minimizes the conflicts (reasoned acting). Otherwise (with a probability of $1-r$), it randomly selects a color from the list of all available colors.
The $N$ heuristic can hence be understood as a generalization of the three
other heuristics. For small $r$ the $N$ heuristic is similar to the $R$
heuristic, for intermediate $r$ it is similar to the $M$, and for large $r$ to
the $W$ heuristic.
In order to name the full heuristics, we follow the naming scheme that was
also used in [9]: $XY$ means that we used $X$ as temporal organization
strategy and $Y$ as neighborhood assessment strategy.
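A minimal sketch of a single update step of the $RN$ heuristic (random sequential order combined with the $N$ assessment) might look as follows; the graph is a networkx graph, and all names are ours.

```python
# Minimal sketch of one RN update step; G is a networkx graph, color a dict
# mapping node -> colour, colors the list of allowed colours (names ours).
import random

def update_node(G, color, colors, r):
    i = random.choice(list(G.nodes))           # R: random sequential order
    neighbour_colors = {color[j] for j in G[i]}
    free = [c for c in colors if c not in neighbour_colors]
    if free:                                   # a conflict-free colour exists
        color[i] = random.choice(free)
    elif random.random() < r:                  # reasoned: minimize conflicts
        conflicts = {c: sum(color[j] == c for j in G[i]) for c in colors}
        fewest = min(conflicts.values())
        color[i] = random.choice([c for c, k in conflicts.items() if k == fewest])
    else:                                      # random fallback
        color[i] = random.choice(colors)
```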
### 2.4 Genetic Algorithm
To assess how strongly the topology of a network (with a fixed number of
shortcuts) affects the runtime, we use a genetic algorithm that evolves to
easy-to-solve or hard-to-solve networks (with respect to a given heuristic).
The algorithm starts with an ensemble of six randomly selected small-world
networks with the given number $S$ of shortcuts and proceeds as follows:
* Each network of the ensemble is randomly colored and then solved by the respective strategy. The time until solved (measured in activation steps) is averaged over 500 runs.
* The two fastest (slowest) solved networks are kept for the next run; additionally, four networks are generated by mutation (re-connection of one shortcut) and by recombination (taking $n$ shortcuts from one network and $S-n$ shortcuts from the other) of these two fastest (slowest) networks.
* These six networks form the ensemble for the next iteration.
The process is terminated after 1000 evolution steps and the obtained
topologies are saved.
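A minimal sketch of this evolutionary loop might look as follows; solve_time, mutate, and recombine are hypothetical helpers standing in for the operations described above.

```python
# Minimal sketch of the genetic algorithm; solve_time(G) (mean colour changes
# over 500 runs), mutate(G) (re-connect one shortcut) and recombine(G1, G2)
# (mix shortcuts of two parents) are hypothetical helpers, not shown here.
import random

def evolve(ensemble, n_steps=1000, hard_to_solve=False):
    for _ in range(n_steps):
        # Rank by mean solving time; keep the two fastest (or slowest) graphs
        ranked = sorted(ensemble, key=solve_time, reverse=hard_to_solve)
        parents = ranked[:2]
        children = [mutate(p) for p in parents]              # two mutants
        children += [recombine(*parents) for _ in range(2)]  # two recombinants
        ensemble = parents + children                        # new ensemble of six
    return ensemble
```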
## 3 Results
In this investigation, we take the observed high variability of the
distributed graph-coloring problem as an opportunity to examine how the
network topology influences the runtime. To focus the analysis we limit
ourselves to networks with a chromatic number of $\chi=2$. In the last part of
the results section, we explain why networks with $\chi>2$ show a
significantly more complicated behavior, which results from the interaction of
different mechanisms and thus defies a simple mechanistic explanation.
We begin our investigation by looking at some results from [9]. The authors
analyzed how different graph coloring heuristics perform in small-world
networks if the number of shortcuts increases. In Fig. 1 we show the
performance of the three heuristics that use random neighborhood assessment
$R$ and $R$, $M$ or $W$ as neighborhood assessment (see 2.3 for details). With
the $RR$ and $RM$ heuristic, the more shortcuts the network has, the longer
(on average) the nodes need to organize and finally solve the network. In
contrast, using the $RW$ heuristic the solution is reached faster with more
added links, as it was also observed in human subject networks [1]. Looking at
Fig. 1, it is also noticeable that – for a fixed number of shortcuts – the
variance of the time steps required is strikingly high. Since the initial
conditions for each run are chosen randomly and the heuristic contains
stochastic components, a certain variance is to be expected. An open question,
however, is whether the topology, i.e. the location of the shortcuts, has an
impact on the solvability.
Figure 1: Mean number of time steps (color changes) until the network is
solved vs. the number of shortcuts for small-world networks using the $RR$,
$RM$, and $RW$ heuristic. The light area denotes the standard deviation.
(reproduced from [9])
To test and quantify the impact of the topology, we use a genetic algorithm
(see 2.4) that is designed to generate easy and hard-to-solve small-world
graphs with a small number of 5 added links. A strong difference between the
runtimes of the extreme graphs could indicate whether and how the topology
affects the runtime. Exemplary topologies for the $RR$, as well as the $RW$
heuristic, are presented in Fig. 2. The large difference between the fastest
and slowest networks ($120$ vs. $2531$ color changes for $RW$ heuristic, $406$
vs. $1206$ color changes for the $RR$ heuristic) indicates that – for a fixed
number of shortcuts – the runtimes depend strongly on the shortcuts’
positions. Additionally, the resulting topologies seem to have characteristic
features. However, these characteristic features are opposite: Long-range
links facilitate a fast solution finding for the $RW$ heuristic but create a
difficult-to-solve network for the $RR$ heuristic. Likewise, the easy-to-solve
network for the $RR$ heuristic is characterized by maximally short links,
whereas for the $RW$ heuristic the short links appear in the difficult graph.
Figure 2: Exemplary results of the genetic algorithm for (top) hard-to-solve
and (bottom) easy-to-solve network with 5 shortcuts optimizing for (left) $RW$
and (right) $RR$ heuristic. The numbers indicate how many time steps the
heuristics required to solve the respective graph (averaged over 500 random
initial conditions).
In what follows we will introduce a generalized heuristic and extract general
features that can explain the interdependence between topology and runtime.
Long-range links are often considered to be beneficial for a system-wide
organization because they allow transmitting information over a long distance.
Our analysis is based on the idea that the respective agent must be able to
process the additional information provided by a long link. When agents
evaluate the observations from their neighborhood in a reasoned way, the remote
information helps them to adapt to the global solution. If, on the
other hand, the agents do not act in a reasoned way, the additional source of
information creates confusion, which hinders the stabilization of local
solutions. To test this proposition, we introduce a new heuristic that can be
continuously adjusted from perfectly reasoned to random behavior. Our new
heuristic $N$ (details in Sec. 2.3) has one parameter $r$, the reasoning of
the respective agent. A large value of $r$ means that the nodes preferentially
select their color by minimizing the conflicts with their neighborhood
(similar to the $W$ heuristic). In contrast, for small values of $r$, the
nodes tend to select a color randomly from all available colors once they
observe a conflict within their neighborhood (similar to the $R$ heuristic).
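The precise interpolation is specified in Sec. 2.3; purely as an illustration, one plausible reading of the update rule is the following sketch, in which a conflicted node acts reasoned with probability $r$ and randomly otherwise:

```python
import random

def update(colors, neighbors, node, n_colors, r):
    """One asynchronous update of `node` under the tunable heuristic.

    colors:    dict node -> current color
    neighbors: dict node -> list of adjacent nodes
    r:         reasoning parameter; r ~ 1 mimics W, r ~ 0 mimics R
    """
    conflicts = sum(colors[v] == colors[node] for v in neighbors[node])
    if conflicts == 0:
        return  # locally conflict-free: keep the current color
    if random.random() < r:
        # reasoned move: pick a color minimizing conflicts in the neighborhood
        cost = {c: sum(colors[v] == c for v in neighbors[node])
                for c in range(n_colors)}
        colors[node] = min(cost, key=cost.get)
    else:
        # random move: pick uniformly from all available colors
        colors[node] = random.randrange(n_colors)
```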
We created a ring lattice with 40 nodes and added a single shortcut (with the
constraint that the chromatic number $\chi=2$ is conserved, see also Sec. 2.1).
For Fig. 3 we set $r$ to different values and analyzed how the runtime depends
on the relative length of the added shortcut (averaged over 10,000 runs each).
As expected, if the heuristic is very reasoned (large $r$), the time until the
graph is solved decreases for longer shortcuts. In contrast, if the heuristic
contains a lot of randomness (small $r$), long-range links deteriorate the
solvability of the graph. An additional observation is that the reasoned
strategies work poorly when the inserted link is very short (an increase of
the required time by about 30%).
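As an aside on the $\chi=2$ constraint: on an even ring, a chord keeps the graph bipartite (and hence $\chi=2$) exactly when it spans an odd number of ring edges, since only then does it connect the two bipartition classes (even vs. odd node indices). A quick check of this (networkx is our tooling choice here, not the paper's code):

```python
import networkx as nx

G = nx.cycle_graph(40)  # even ring lattice, chi = 2
# An odd-span chord joins the two bipartition classes, so chi stays 2;
# an even-span chord closes an odd cycle and forces chi = 3.
for span in (7, 8):
    H = G.copy()
    H.add_edge(0, span)
    print(span, nx.is_bipartite(H))  # 7 -> True, 8 -> False
```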
Figure 3: Relative extra time until ring graphs with 40 nodes and a single
shortcut are solved vs. the relative length of the added shortcut for
different values of $r$. A relative length of $1$ refers to a shortcut of
maximal length, hence spanning 20 nodes. The time is measured relative to the
time that is needed if the ring graph does not have any shortcut.
### 3.1 Reasoned-acting Agents (large $r$)
For large $r$ the results are in line with the slow network obtained for the
$RW$ heuristic in Fig. 2. The slow network is characterized by comparably
short links that create two densely connected areas. These clusters foster a
fast emergence of local solutions. Additionally, the short shortcuts stabilize
the local solution against fluctuations from the outside. Figure 4a shows an
example of such stabilization of a local solution. The larger the parameter
$r$, the more stable the locally solved areas. However, in the likely case
that the local solution is not compatible with the currently prevailing global
solution domain, the system is in a hard-to-solve state: The reasoned-acting
agents cling to their local solution, and the added link acts as a topological
insulator. In contrast, when evolving towards topologies that are easy to solve
for the $RW$ heuristic, the genetic algorithm produces networks characterized
by a few nodes that are connected to various areas of the network and act as
ordering nodes.
These ordering nodes synchronize the local solutions already during their
build-up. An example of the effect of a single long-range shortcut is shown in
Figure 4b. Without the shortcut, the node labeled with “A” could either stay
red or change its color to blue. In both cases, the result would be a single
conflict with one neighbor. However, due to the shortcut – that is by
definition inserted such that it does not alter the graph’s chromatic number
and, hence, a global solution is possible – a change to blue minimizes the
local color conflicts and acts as a local reference for the global solution
domain.
Figure 4: Comparison of the two effects a shortcut can have: (a) A short link
stabilizes a solution regime against perturbations from the outside. In the
example, there is a color conflict between the two red nodes (indicated by a
red link). The right red node has two blue neighbors (one direct and one via
the shortcut). If the node acts reasoned its color is stabilized since red
minimizes the conflicts. (b) The sketch shows two sections of a large ring
graph (indicated by the gray dashed line). The long shortcut connects two
distant sections and orders them. Without the shortcut, the node with the label
“A” would have a 50% chance of keeping its color or changing to
blue. Due to the shortcut, reasoned-acting nodes will change to blue, since
this is the conflict-minimizing color.
### 3.2 Irrational Agents (small $r$)
Things are different for the irrational agents with small $r$ (similar to the
$RR$ heuristic). Here, Fig. 1 tells us that shortcuts always create graphs
that are more difficult to solve than the pure ring graph, where the effect is
stronger the longer the added link. Consistently, the results from Fig. 2 show
that the fast networks are characterized by short links. For the $RR$
heuristic, the difficult-to-solve networks are characterized by long-range
links, very similar to the graphs that are easy to solve for the $RW$
heuristic. For irrationally acting agents (as in the $RR$ heuristic), the long
links that connect a single node to various areas of the graph act like a
source of noise: A color-fluctuation of the highly connected node immediately
destabilizes the colors of all connected nodes, spread over the full network.
### 3.3 Complex Topologies
Having analyzed the interplay between the length of added links and the
reasoning of the acting agents in small-world graphs, it is now natural to ask
whether this behavior can also be observed in more complex networks. As
described in Sec. 2.2 we generated a random graph (40 nodes, 80 edges), a
modular graph (2 x 20 nodes with 40 edges each), and a BA graph (40 nodes).
All graphs are generated such that $\chi=2$. In Fig. 5 we show the time until
solved vs. the reasoning of the heuristic. For both the random network and the
BA graphs, the more reasoned the agents act, the faster they are. Note,
however, that for $r=1.0$ deadlock situations are possible that cannot be
resolved (see e.g. Fig. 2 in [9]). The results confirm the observations from the
small-world networks: Random networks as well as BA networks have small
modularity and high connectivity. It is therefore unlikely that globally
incompatible solutions can stabilize against the rest of the network. The
modular network is, however, specifically designed to have two almost separate
modules. Fig. 5 indicates that in this case heuristics that act too reasoned
have a disadvantage: If the two modules converge to different solution
domains, it is difficult for the heuristic to overturn one solution.
Figure 5: Mean number of timesteps (color changes) until the network is solved
vs. the reasoning of the heuristic for three different graph topologies (see
Sec. 2.2).
### 3.4 Extension to $\chi=3$
The natural extension of our investigation would be to increase the chromatic
number of the graphs. For Fig. 6 we performed a similar analysis as for Fig. 3
but with a ring graph with 39 nodes and a chromatic number of $\chi=3$.
Depending on the length of the added shortcut, the system takes longer or less
time to solve than without a shortcut. The general behavior of the network is on
average also similar to the one with a chromatic number of two (short
shortcuts lead to longer times). However, there are also two drastic
differences: (1) The curve shows an alternating behavior that was not present
for the $\chi=2$ graph. The reason is a complicated interplay between the
shortcuts and the different possible solution regimes. For two colors there
are only two possible solutions: $abab$ or $baba$. However, for three colors
there are $3!=6$ possible solution domains that are facilitated or suppressed
depending on the position of the shortcut. (2) The relative effect of a single
shortcut is not as strong as for the $\chi=2$ graph. The main reason is that a
shortcut excludes only one color at each of its ends. If there are only two
colors, a single disallowed color directly determines the correct color:
$\neg\text{red}\rightarrow\text{blue}$. However, the more colors we have, the
weaker the effect of banning a single color. To overcome this problem one
would need to generalize the definition of a shortcut. For $\chi=3$ such a
generalized shortcut would hence consist of four conventional shortcuts that
all-to-all connect two adjacent nodes with two other adjacent nodes.
## 4 Conclusion
Within small-world networks, shortcuts decrease the average path length and
facilitate the transport of local information through the system [19]. One
would therefore expect that distributed coordination problems on graphs always
benefit from shortcuts, although the effect size might depend on the respective
length of the shortcut. In this manuscript, we discussed the graph coloring
problem as a simple form of distributed coordination problem. We analyzed how
shortcuts affect the time a local heuristic needs to solve the coloring
problem. Depending on how reasoned the agents act, added shortcuts give rise
to different mechanisms: They synchronize the solution domains between distant
sections of the network, stabilize parts of the network against fluctuations,
or they create perturbations. For reasoned heuristics, short shortcuts tend to
insulate locally solved but globally incompatible solutions against each
other, finally leading to an increase in the overall time until a solution is
found. We call shortcuts that create such separated domains topological
insulators. In contrast, long shortcuts foster early synchronization of
otherwise distant areas of the network, which is why we call them topological
enhancers.
The graph coloring problem is the simplest model to analyze the conflicts
between and the solvability of distributed logical systems: The conflicts
encountered in graph coloring dynamics on a ring arise due to two (or more)
coloring domains that are structurally equal (they are correctly colored) but
locally different (they follow a different color permutation). From a
mathematical point of view, this inconsistency between local logical systems
relates to distributed logic. Our results can hence be interpreted from the
perspective of Gotthard Günther’s theory of polycontexturality (often also
termed transclassical logic) [20].
In this theory, different subjects, $S_{1}$ and $S_{2}$, observing the same
object, $O$, may arrive at different but equally correct conclusions about its
properties. In our model, this corresponds to different solutions emerging at
different locations on the ring. According to Günther’s theory, a third
subject, $S_{3}$, observing the situation, is able to reflect on the different
conclusions of $S_{1}$ and $S_{2}$ observing $O$. In our model, this
corresponds to a node that is enabled to compare different local solution
regimes via a long-ranging (enhancer) shortcut. This addition to the
neighborhood is enough to empower this node to modify the local decision
pattern and thus facilitate the emergence of a global solution.
In spite of the simplicity of the local decision heuristics and the stylized
nature of the task, the different roles of nodes (subjects) proposed by the
theory of polycontexturality establish themselves due to the structural
differences between nodes with and without a long-ranging shortcut.
In this view, it also becomes intuitive why longer shortcuts serve as
enhancers and shorter shortcuts serve as insulators: For a node with a
shortcut, its role as the third subject requires a link to truly independent
information, transcending the local solution regime.
As a minimal model for the effects of links or information flow within
polycontextural systems, the analysis of the graph coloring problem can hence
contribute to heterarchical approaches in biology [21], consensus finding
[22], complex and reflexive relations in social systems [23, 24], or
transformations in physics [25].
We also believe that our findings have implications for the understanding of
the emergence of standards (here represented by globally compatible
solutions), as well as for the development of more robust scheduling schemes
in manufacturing and resource distribution [26].
Figure 6: Relative extra time until a ring graph with 39 nodes and a chromatic
number of 3 is solved vs. the length of the added shortcut for three different
values of $r$. A relative length of $1$ refers to a shortcut of maximal
length, hence spanning half of the system. The time is measured relative to
the time that is needed if the ring graph does not have any shortcut.
## Availability of data and materials
Not applicable
## Competing Interests
The authors declare that they have no competing interests
## Funding
Not applicable
## Authors’ contributions
M-T.H. and K.W. conceived this study. J.F. and M-T.H. developed the model, and J.F.
ran the simulations and analyzed the data. J.F. and M-T.H. wrote the
manuscript. E.E., K.W., and M-T.H. supervised the project. All authors
discussed the results and implications and commented on the manuscript at all
stages.
## Acknowledgments
Not applicable
## References
* [1] Kearns, M.: An Experimental Study of the Coloring Problem on Human Subject Networks. Science 313(5788), 824–827 (2006). doi:10.1126/science.1127207
* [2] Kearns, M., Judd, S., Tan, J., Wortman, J.: Behavioral Experiments on Biased Voting in Networks. Proceedings of the National Academy of Sciences 106(5), 1347–1352 (2009). Chap. Physical Sciences. doi:10.1073/pnas.0808147106
* [3] Shirado, H., Christakis, N.A.: Locally noisy autonomous agents improve global human coordination in network experiments. Nature 545(7654), 370–374 (2017). doi:10.1038/nature22332
* [4] Grolik, S.: Information Logistics. Decentralized Approaches of Information Allocation in Information Exchange Networks. ibidem Press, ??? (2012)
* [5] Butt, M.M., Dey, I., Dzaferagic, M., Murphy, M., Kaminski, N., Marchetti, N.: Agent-Based Modeling for Distributed Decision Support in an IoT Network. IEEE Internet of Things Journal 7(8), 6919–6931 (2020). doi:10.1109/JIOT.2020.2976802
* [6] Hernández, H., Blum, C.: Distributed graph coloring in wireless ad hoc networks: A light-weight algorithm based on japanese tree frogs’ calling behaviour. In: 2011 4th Joint IFIP Wireless and Mobile Networking Conference (WMNC 2011), pp. 1–7 (2011). doi:10.1109/WMNC.2011.6097216
* [7] Leith, D.J., Clifford, P.: A self-managed distributed channel selection algorithm for wlans. In: 2006 4th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, pp. 1–9 (2006). doi:10.1109/WIOPT.2006.1666484
* [8] Judd, S., Kearns, M., Vorobeychik, Y.: Behavioral dynamics and influence in networked coloring and consensus. Proceedings of the National Academy of Sciences 107(34), 14978–14982 (2010). Chap. Physical Sciences. doi:10.1073/pnas.1001280107
* [9] Hadzhiev, B., Windt, K., Bergholz, W., Hütt, M.-T.: A model of graph coloring dynamics with attention waves and strategic waiting. Advances in Complex Systems 12(06), 549–564 (2009). doi:10.1142/S0219525909002386
* [10] Windt, K., Hütt, M.-T.: Graph coloring dynamics: A simple model scenario for distributed decisions in production logistics. CIRP Annals 59(1), 461–464 (2010). doi:10.1016/j.cirp.2010.03.082
* [11] Blunck, H., Armbruster, D., Bendul, J., Hütt, M.-T.: The balance of autonomous and centralized control in scheduling problems. Applied Network Science 3(1), 16 (2018). doi:10.1007/s41109-018-0071-6
* [12] Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393(6684), 440–442 (1998). doi:10.1038/30918
* [13] van de Kaa, G., De Vries, H.J., van Heck, E., van den Ende, J.: The emergence of standards: A meta-analysis. In: 2007 40th Annual Hawaii International Conference on System Sciences (HICSS’07), pp. 173–173. IEEE, Waikoloa, HI (2007). doi:10.1109/HICSS.2007.529
* [14] Suárez, F.F., Utterback, J.M.: Dominant Designs and the Survival of Firms. Strategic Management Journal 16(6), 415–430 (1995)
* [15] van de Kaa, G., de Vries, H.J.: Factors for winning format battles: A comparative case study. Technological Forecasting and Social Change 91, 222–235 (2015). doi:10.1016/j.techfore.2014.02.019
* [16] Papachristos, G., van de Kaa, G.: A System Dynamics Model of Standards Competition. IEEE Transactions on Engineering Management 68(1), 18–32 (2021). doi:10.1109/TEM.2020.2983352
* [17] Casey, T.R., Töyli, J.: Dynamics of two-sided platform success and failure: An analysis of public wireless local area access. Technovation 32(12), 703–716 (2012). doi:10.1016/j.technovation.2012.08.003
* [18] Barabási, A.-L., Albert, R.: Emergence of Scaling in Random Networks. Science 286(5439), 509–512 (1999). doi:10.1126/science.286.5439.509. Publisher: American Association for the Advancement of Science. Accessed 2023-01-23
* [19] Marr, C., Hütt, M.-T.: Similar impact of topological and dynamic noise on complex patterns. Physics Letters A 349(5), 302–305 (2006). doi:10.1016/j.physleta.2005.08.096
* [20] Günther, G.: Beiträge zur Grundlegung Einer Operationsfähigen Dialektik: Metakritik der Logik, Nicht-aristotelische Logik, Reflexion, Stellenwerttheorie, Dialektik, Cybernetic Ontology, Morphogrammatik, Transklassische Maschinentheorie. Felix Meiner Verlag, ??? (1976)
* [21] Bruni, L.E., Giorgi, F.: Towards a heterarchical approach to biology and cognition. Progress in Biophysics and Molecular Biology 119(3), 481–492 (2015). doi:10.1016/j.pbiomolbio.2015.07.005
* [22] Falk, J., Eichler, E., Windt, K., Hütt, M.-T.: Collective patterns and stable misunderstandings in networks striving for consensus without a common value system. Scientific Reports 12(1), 3028 (2022). doi:10.1038/s41598-022-06880-7
* [23] Vogd, W.: Polykontexturalität: Die Erforschung komplexer systemischer Zusammenhänge in Theorie und Praxis. Familiendynamik 38(1), 32–41 (2013)
* [24] Jansen, T.: Beyond ANT: Towards an ‘infra-language’ of reflexivity. European Journal of Social Theory 20(2), 199–215 (2017). doi:10.1177/1368431016646506
* [25] Falk, J., Eichler, E., Windt, K., Hütt, M.-T.: Physics is Organized Around Transformations Connecting Contextures in a Polycontextural World. Foundations of Science (2021). doi:10.1007/s10699-021-09814-0
* [26] Nandhini, V.: A Study on Course Timetable Scheduling and Exam Timetable Scheduling using Graph Coloring Approach. International Journal for Research in Applied Science and Engineering Technology 7, 1999–2006 (2019). doi:10.22214/ijraset.2019.3368
Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Parks
Road, Oxford OX1 3PU, United Kingdom
# Avoided Deconfinement in Randall-Sundrum Models
Prateek Agrawal and Michael Nee
###### Abstract
We study first order phase transitions in Randall-Sundrum models in the early
universe dual to confinement in large-$N$ gauge theories. The transition rate
to the confined phase is suppressed by a factor $\exp(-N^{2})$, and the
transition may not complete for $N\gg 1$, instead leading to an eternally
inflating phase. The constraint on $N$ required to avoid this fate leaves the
RS effective field theory only marginally under control. We present a mechanism where the IR
brane remains stabilized at very high temperature, so that the theory stays in
the confined phase at all times after inflation and reheating. We call this
mechanism avoided deconfinement. The mechanism involves adding new scalar
fields on the IR brane which provide a stabilizing contribution to the radion
potential at finite temperature, in a spirit similar to Weinberg’s symmetry
non-restoration mechanism. Avoided deconfinement allows for a viable cosmology
for theories with parametrically large $N$. Early universe cosmological
phenomena such as WIMP freeze-out, axion abundance, baryogenesis, phase
transitions, and gravitational wave signatures are qualitatively modified.
## 1 Introduction
Large-$N$ gauge theories are important both theoretically as well as
phenomenologically. The large-$N$ limit makes many problems in strongly-
coupled gauge theories tractable, providing an expansion parameter for non-
Abelian theories Hooft1974a . It remarkably also plays a central role in
gauge-gravity duality Maldacena1999 ; Witten1998k ; Gubser1998 where a
large-$N$ gauge theory is dual to a gravitational theory in higher dimensions.
A major phenomenological application of the duality appears in the context of
the Randall-Sundrum (RS) model Randall1999 . The RS model provides an elegant
solution to the hierarchy problem, with an exponential separation of the
Planck and weak scales that is generated by the warping of the extra
dimension. The ratio of the two scales is set by the size of the extra
dimension, which is fixed by introducing a stabilisation mechanism, generating
a potential for the radion Goldberger1999 . This mechanism is dual to adding
an almost-marginal operator in the gauge theory which explicitly breaks the
scale invariance of the theory. The small anomalous dimension for this
operator, equivalent to a small bulk mass for the stabilising field
Witten1998k , generates an exponentially small scale in the IR in a manner
analogous to dimensional transmutation in QCD.
The RS model also provides an effective description for a number of string
constructions of realistic vacua with reduced supersymmetry Luty:2000ec ;
Verlinde2000 ; Chan2000 ; Brummer2006 ; Randall2019 . In the large-$N$ limit,
these explicit constructions are described by an effective quantum field
theory where gravity is weakly coupled. A prominent example is the KKLT
scenario Kachru:2003aw , which is partly based on the Klebanov-Witten
Klebanov:1998hh and Klebanov-Strassler Klebanov2000 ; Klebanov:2000hb
constructions. For this construction the stability of the de Sitter vacuum is
often justified in the probe approximation which is valid for parametrically
large $N$ Bena:2018fqc ; Kachru:2019dvo . Therefore, large-$N$ gauge theories
with gravitational duals are expected to be an important part of a UV complete
description of our universe.
Theories which are described by the RS model in an appropriate limit suffer
from a severe cosmological constraint. At high temperature the gauge theory is
in the deconfined phase, and the confined phase becomes thermodynamically
preferred below a critical temperature. However, the confinement phase
transition is first order and exponentially suppressed with a rate
proportional to $\exp(-N^{2}/\lambda)$, where $\lambda$ denotes a possible
weak coupling. The gravitational description of the deconfined phase is the
AdS-Schwarzschild (AdS-S) solution with a UV brane. The confinement transition
corresponds to the appearance of the IR brane from behind the AdS-S horizon
Hawking1983 ; Witten1998 ; Creminelli2002 . The confinement scale in the gauge
theory is dual to the vacuum expectation value of the radion field which sets
the size of the extra dimension in the RS model.
For large-$N$, the suppressed phase transition is much slower than the
expansion rate of the universe, leading to eternal inflation. Requiring the
phase transition to complete leads to a robust upper bound on $N$ Kaplan2006 :
$\displaystyle N\lesssim\sqrt{4\lambda\log\frac{M_{\rm pl}}{\Lambda_{c}}}\sim
12\sqrt{\lambda}\,.$ (1)
where $\Lambda_{c}\simeq 1\ {\rm TeV}$ is the confinement scale for the gauge
theory. This bound follows just from dimensional analysis and is independent
of the details of the RS model. The ratio $N/4\pi$ sets the hierarchy between
the curvature scale $k$ and the 5d reduced Planck scale $M_{5}$,
$N/4\pi\sim(M_{5}/k)^{3/2}$ Gubser2001 . A small value of this ratio, as
implied by the bound (1), then means that corrections due to Planck scale
physics become important, making the EFT control in the RS model delicate.
Gravitational loop corrections can be estimated by the following loop counting
parameter,
$\displaystyle\frac{N_{\rm species}k^{3}}{16\pi^{2}M_{5}^{3}}<1\Rightarrow
N^{2}\gtrsim N_{\rm species}$ (2)
which is in tension with equation (1) even with just the SM degrees of freedom
contributing to $N_{\rm species}\sim 100$.
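For instance, saturating equation (1) with $\lambda\simeq 1$ gives $N\lesssim 12$, i.e. $N^{2}\lesssim 150$, barely above the SM value $N_{\rm species}\sim 100$.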
In fact, the bound is much more stringent within the simplest version of the
RS model. In this setup, the backreaction of the stabilization mechanism and
breaking of scale invariance are assumed to be small even close to the
confinement scale. The gauge theory is an approximately conformal field theory
(CFT), with spontaneously broken conformal invariance in the confined phase.
The approximate conformality suppresses the phase transition further, making
$\lambda\ll 1$, so that the bound on $N$ in equation (1) is impossible to
satisfy.
There is a large body of work devoted to relaxing this more stringent
constraint on the RS model by changing the details of the stabilisation
mechanism Randall2006 ; Nardini2007 ; Konstandin2010 ; Konstandin2011 ;
Dillon2017 ; VonHarling2017 ; Bruggisser2018a ; Bruggisser2018 ; Megias2018a ;
Bunk2018 ; Baratella2019 ; Agashe2019 ; Fujikura2019 ; Azatov:2020nbe ;
Megias2020 ; Agashe2020 in such a way that $\lambda\simeq 1$ and the phase
transition occurs more rapidly. However, the $N^{2}$ dependence of the
tunnelling rate is a generic feature of the confinement phase transition.
While modifying the stabilisation mechanism can change the numerical value of
the bound on $N$, in all these models the phase transition is exponentially
suppressed at large-$N$ and therefore subject to the bound in equation (1). As
pointed out in Hassanain2007 , for Klebanov-Strassler type constructions, the
effective value of $N$ itself varies over the extra-dimensional coordinate –
the relevant $N$ in this case is the value near the confinement scale.
In this paper we present a simple modification to the RS model where $N$ can
be made parametrically large without running into this cosmological bound. We
construct a scenario where the confinement scale grows with temperature, and
hence the universe can remain in the confined phase at all times in early
cosmology. For this reason, we call our mechanism avoided deconfinement. In
order to achieve this we consider the RS I model with the IR brane stabilised
by a Goldberger-Wise (GW) field $\Phi$ Goldberger1999 . By introducing new
scalars to the IR brane, we can generate a potential which stabilises the IR
brane at high temperatures. The mechanism we use to achieve this is
reminiscent of non-restoration of electroweak symmetry at high temperatures,
as considered in refs Weinberg1974 ; Meade2018 ; Baldes:2018nel ;
Glioti:2018roy ; Matsedonskyi2020 . Similar mechanisms have also been proposed
to avoid monopoles Langacker1980 ; Salomonson1985 ; Dvali1995 or domain walls
Dvali1995a in Grand Unified theories, as models of CP violation at high
temperature Mohapatra1979 ; Mohapatra1979a , and in the $O(N)\times O(N)$
models of Orloff1996 ; Chai2020 .
Figure 1: The left-hand diagram shows the high temperature behaviour of the RS
model, while the right-hand diagram shows the high temperature behaviour in
the AD model. Beyond a certain temperature in the RS model the IR brane is
stabilised behind the location of the horizon in the AdS-S phase, indicating
that the model is unstable against black hole formation. In the AD model, this
instability is lifted by introducing a temperature dependence to the
stabilisation mechanism in such a way that the IR brane is stabilised outside
the would-be horizon at high temperatures.
The modification we make to the RS model can lead to dramatic departures from
its standard cosmological history. Above a critical temperature $T_{c}$, the
confinement scale varies almost linearly with temperature $T$ leading to
$T$-dependent mass scales on the IR brane,
$\displaystyle M_{{\rm
ir}}(T)\propto\mu(T)=\mu(0)\left(\frac{T}{cT_{c}}\right)^{1/(1+\epsilon)}$ (3)
with $\epsilon\ll 1,c\sim\mathcal{O}(1)$. Since for small $\epsilon$ the ratio
$T/M_{{\rm ir}}(T)\propto T^{\epsilon/(1+\epsilon)}$ grows only very slowly
with temperature, for a mass scale $M_{{\rm ir}}>T_{c}$ this ratio reaches 1
only at very high temperatures, or potentially not at all (similar to low reheating
temperature models). Taking the standard model (SM) to be localised on the IR
brane, the $T$-dependence of the electroweak and QCD scales is as in equation
(3). If $v_{\rm ew}>T_{c}$, the electroweak phase transition occurs at
temperatures far above the TeV scale or is completely avoided. Furthermore, at
high temperature, fields localised in the UV of the RS model may have had
significant overlap with fields localised towards the IR of the theory, a
feature which may have applications to models of baryogenesis and dark matter
production.
The initial condition for our mechanism to work is that the universe exits
inflation in the RS phase with a stabilized IR brane. A relatively simple way
to achieve this is to have inflation with Hubble rate below the confinement
scale of the gauge group, or with an additional stabilization of the IR brane
during inflation. It will be an interesting future direction to study the
interplay of AD with inflationary models. After inflation the universe reheats
and the AD mechanism prevents the brane from falling behind the would-be AdS-S
horizon (see figure 1). Note that at high enough temperatures the AdS-S phase
will still be the preferred thermodynamic phase of the theory, but in the
avoided deconfinement model the RS phase is classically stable. The
probability of tunnelling from the RS to the AdS-S phase is exponentially
suppressed by $N^{2}$ factors, and can be made vanishingly small in the
large-$N$ limit.
The rest of this paper is organised as follows. In section 2 we describe the
early universe cosmology and summarise the details of the confinement phase
transition in various generalizations of the RS model that have been
considered in the literature. We go on to describe the avoided deconfinement
(AD) model in section 3 and show how the model leads to a stabilised IR brane
at high temperatures. In section 4 we present the low energy effective
Lagrangian and discuss some of the experimental signatures of the model. In
section 5 we then discuss the unique early universe cosmology of the model and
how this relates to other non-standard cosmological histories in the
literature. We also discuss potential applications of the model to
baryogenesis, dark matter production and potential gravitational wave
signatures in this section, before concluding and summarising in section 6.
## 2 The Supercooled Randall-Sundrum Model
In this section we review the standard cosmology of the RS type I model and
Goldberger-Wise field, and its dual gauge theory description via the gauge-
gravity duality. In the standard treatment of gauge-gravity duality at finite
temperature, the gauge theory partition function is defined on a manifold
$\mathcal{M}=S_{1}\times R^{3}$ with the temporal direction compactified on a
circle of radius $\beta=\pi/T$. The corresponding gravitational theory is
defined on a 5-dimensional manifold with $\mathcal{M}$ as the boundary. In
computing the gravitational partition function, all possible geometries
$\Sigma$ which satisfy the boundary condition $\partial\Sigma=\mathcal{M}$
must be integrated over Witten1998 . The partition function will however be
dominated by classical gravity solutions. Each semi-classical gravitational
solution $\Sigma_{i}$ which satisfies the boundary condition is interpreted as
a different phase of the CFT. At a given temperature, the geometry which
minimises the Euclidean action will give the dominant contribution to the
partition function, and therefore correspond to the preferred phase of the
CFT.
In the RS model, the UV brane cuts off the AdS space, and hence plays the role
of the boundary $\partial\Sigma$. The dual gauge theory is interpreted as a
field theory coupled to 4D gravity, defined on the manifold $\mathcal{M}$. One
of the possible classical solutions is,
$\displaystyle ds^{2}_{\rm RS}$
$\displaystyle=k^{2}\rho^{2}dt^{2}-\frac{d\rho^{2}}{k^{2}\rho^{2}}-\rho^{2}k^{2}dx_{i}^{2},\,$
(4)
with the space in the $\rho$ direction cut off at the position of the IR and
UV branes so that $\rho_{\rm ir}<\rho<\rho_{\rm uv}$. Here and throughout this
paper, we work in a frame where $\rho_{\rm uv}$ is fixed to the value
$\rho_{\rm uv}=k^{-1}$, where $k$ is the AdS curvature. A convenient
definition of the temperature of the 5D theory is the local temperature at the
UV brane. We will simply refer to this temperature as $T$. Thermal effects
tend to push the IR brane towards the horizon, rendering the RS solution
unstable Creminelli2002 at arbitrarily small temperatures in the absence of
stabilization. This instability can be lifted using the GW mechanism
Goldberger1999 .
The quasi-conformal theory dual to the RS model is a strongly coupled gauge
theory Arkani-Hamed2001 ; Rattazzi2001 with $\mathcal{O}(N^{2})$ degrees of
freedom, where $N$ can be determined by matching the entropy of the black hole
with the entropy of the high temperature phase of the gauge theory Gubser2001
,
$\displaystyle\frac{N^{2}}{16\pi^{2}}\simeq
12\left(\frac{M_{5}}{k}\right)^{3}.$ (5)
This relation can be modified by $\mathcal{O}(1)$ factors depending on strong
coupling effects in different gauge theory models. We see that the large-$N$
aspect of the 4D gauge theory is a crucial feature of these models, since it
corresponds to the hierarchy between the curvature scale $k$ and the 5D Planck
scale $M_{5}$ in the 5D gravitational theory. The $\rho$ direction can be
thought of as the RG scale of the conformal theory, with small $\rho$
corresponding to the IR of the theory. The UV and IR branes of the RS model
correspond to UV and IR cut-offs in the gauge theory. The cutoff at the IR
brane represents a spontaneous breaking of conformality in the IR due to
confinement in the gauge theory, while the UV brane represents explicit
breaking by the cutoff at the Planck scale Gubser2001 . The RS model with the
IR brane therefore corresponds to the confined phase of the conformal theory.
The GW mechanism corresponds to introducing a nearly marginal operator to the
CFT which explicitly breaks the conformal symmetry of the theory. The coupling
of this operator is dual to a scalar field in the RS model with a small bulk
mass. Introducing the GW scalar generates an effective potential for the
radion (identified with $\mu=k^{2}\rho_{{\rm ir}}$ in co-ordinates where the
location of the UV brane is fixed Charmousis2000 ), with a minimum at small
$\mu$ – the IR brane will then be stabilised at the minimum of this potential.
The RS solution with the IR brane becomes classically unstable at high
temperatures. There is another classical solution that contributes to the
finite temperature partition function given by the AdS-Schwarzschild (AdS-S)
geometry,
$\displaystyle ds^{2}_{\rm AdS-S}$
$\displaystyle=f(\rho)dt^{2}-\frac{d\rho^{2}}{f(\rho)}-\rho^{2}k^{2}dx_{i}^{2},\quad
f(\rho)=k^{2}\left(\rho^{2}-\frac{\rho_{h}^{4}}{\rho^{2}}\right)\,.$ (6)
The position of the horizon $\rho_{h}$ is set by the temperature $\rho_{h}=\pi
T/k^{2}$. The solution is cut off at $\rho=\rho_{{\rm uv}}$ by the UV brane as
before. The AdS-S solution is dual to the deconfined phase of the gauge
theory, with the Hawking temperature and entropy of the AdS black hole equal
to the corresponding quantities in the gauge theory. The AdS-S solution is
classically stable for any non-zero temperature, and is the thermodynamically
preferred phase of the theory at high temperatures.
As the universe cools below a critical temperature, the RS phase with the IR
brane becomes preferred and there is a first order phase transition between
the two phases which proceeds through a tunnelling process connecting the two
solutions. This tunnelling process is strongly suppressed, however, due to the
large change in free-energy in the two phases. The requirement that this phase
transition completes places bounds on $N$ for the model to be cosmologically
viable. These bounds typically require $N\sim\mathcal{O}(1)$, which is in
tension with the assumption of working in the large-$N$ limit.
### 2.1 (De)confinement Phase Transition in the RS Model
We consider the RS model with the GW stabilization mechanism. The bulk
Lagrangian contains gravity and the GW field ($\Phi$),
$\displaystyle S_{\rm bulk,RS}[G_{AB},\Phi]$ $\displaystyle=\int
d^{4}x\,d\rho\,\sqrt{G}\left[-2M_{5}^{3}R+24M_{5}^{3}k^{2}+\frac{1}{2}G^{AB}\partial_{A}\Phi\partial_{B}\Phi-\frac{1}{2}m_{\Phi}^{2}\Phi^{2}\right]\,.$
(7)
We also include brane localized terms,
$\displaystyle S_{{\rm uv},\rm RS}$ $\displaystyle=\int d^{4}x\sqrt{-g_{{\rm
uv}}}\left[-\lambda_{{\rm uv}}(\Phi^{2}-v_{{\rm
uv}}^{2})^{2}-24M_{5}^{3}k+\delta\Lambda_{{\rm uv}}\right]$ (8) $\displaystyle
S_{{\rm ir},\rm RS}$ $\displaystyle=\int d^{4}x\sqrt{-g_{\rm
ir}}\left[-\lambda_{{\rm ir}}(\Phi^{2}-v_{{\rm
ir}}^{2})^{2}+24M_{5}^{3}k+\delta\Lambda_{{\rm ir}}\right]\,.$ (9)
where $g_{{\rm uv},{\rm ir}}$ are the induced metrics on the UV and IR branes.
In the presence of the GW stabilization mechanism, only one combination of the
brane tension detuning parameters $\delta\Lambda_{i}$ needs to be tuned,
corresponding to the tuning of the 4D cosmological constant. Depending on the
sign of $m_{\Phi}^{2}$, these parameters may be required to lie in a certain
range for there to be a local minimum for the radion away from $\mu=0$. For
simplicity, here we set each of the detuning parameters to 0 and assume
$m_{\Phi}^{2}>0$. We also assume that the stabilization occurs in the limit of
small backreaction. These assumptions are not crucial and do not affect the
qualitative results. With these assumptions, the metrics in equations (4) and
(6) continue to be approximate classical solutions.
We can obtain the potential for the radion by integrating over the classical
solution for the GW field. In the limit where the brane localized terms fix
$\Phi(\rho_{{\rm uv},{\rm ir}})=v_{{\rm uv},{\rm ir}}$, the 4D effective
potential for the radion, $\mu\equiv k^{2}\rho_{{\rm ir}}$ is Creminelli2002 ,
$\displaystyle V(\mu)$ $\displaystyle=\epsilon v_{{\rm
uv}}^{2}k^{4}+\left[(4+2\epsilon)\mu^{4}(v_{{\rm ir}}-v_{{\rm
uv}}(\mu/k)^{\epsilon})^{2}-\epsilon v_{{\rm
ir}}^{2}\mu^{4}\right]+\mathcal{O}(\mu^{8}/k^{4})\,,$ (10)
with $\epsilon=\sqrt{4+m_{\Phi}^{2}/k^{2}}-2$. The minimum is obtained for:
$\displaystyle\mu_{{\rm TeV}}$ $\displaystyle=f(\epsilon)k\left(\frac{v_{\rm
ir}}{v_{\rm uv}}\right)^{1/\epsilon}\,,$ (11) $\displaystyle f(\epsilon)$
$\displaystyle=\left[\frac{4+\epsilon+\sqrt{\epsilon(4+\epsilon)}}{4+2\epsilon}\right]^{1/\epsilon}\sim\mathcal{O}(1)\,.$
(12)
A relatively modest hierarchy in $v_{{\rm uv},{\rm ir}}$ and $\epsilon\sim
1/10$ can generate an exponential hierarchy between $k$ and $\mu_{{\rm TeV}}$.
At energies $\lesssim\mu_{\rm TeV}$ the effective theory is a 4D theory with a
tower of Kaluza-Klein (KK) states with masses $\sim\mu_{{\rm TeV}}$. In the 4D
theory, dimensionful parameters involving fields localized on the IR brane,
such as the Higgs mass parameter, scale with $\mu_{{\rm TeV}}$, thus
explaining the electroweak hierarchy elegantly.
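As a quick numerical cross-check of equations (10)–(12) — with illustrative parameter values of our own choosing, not a benchmark from this paper — a brute-force scan of the truncated potential reproduces the analytic minimum:

```python
import numpy as np

# Illustrative parameter choices (not a benchmark from this paper)
eps, v_uv, v_ir, k = 0.1, 1.0, 0.025, 1.0

def V(mu):
    # mu^4 part of the radion potential, eq. (10), dropping the constant term
    return (4 + 2*eps)*mu**4*(v_ir - v_uv*(mu/k)**eps)**2 - eps*v_ir**2*mu**4

mu = 10**np.linspace(-20, -10, 400001)   # scan in log space
mu_num = mu[np.argmin(V(mu))]

f_eps = ((4 + eps + np.sqrt(eps*(4 + eps)))/(4 + 2*eps))**(1/eps)   # eq. (12)
mu_an = f_eps*k*(v_ir/v_uv)**(1/eps)                                # eq. (11)

print(f"scan: mu = {mu_num:.2e} k   eq. (11): mu = {mu_an:.2e} k")
# both give ~3e-16 k: an exponential hierarchy from eps = 1/10
```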
At low-temperatures, $T<\mu_{\rm TeV}$, both classical solutions – the
stabilized RS solution with UV and IR branes, and the AdS-S solution with a UV
brane and a black hole horizon – are (meta)stable. However, at high-
temperatures $T\gg\mu_{\rm TeV}$, the minimum of the radion potential is
behind the AdS-S horizon $\mu_{\rm TeV}<\rho_{h}k^{2}$, indicating that the
AdS-S solution is the only classical solution. During the early universe the
universe is in the AdS-S phase; to get to the RS phase the universe needs to
undergo a first order phase transition.
The tunnelling rate per unit volume for the phase transition is,
$\displaystyle\Gamma$ $\displaystyle\simeq R_{c}^{-4}\exp(-S_{b})$ (13)
where $S_{b}$ is the bounce action for the tunnelling transition, and $R_{c}$
is the radius of the critical bubble Coleman1977 ; Linde1983 . The field
configuration for the transition from the AdS-S phase to the RS phase involves
moving the black hole horizon to the far IR, $\rho_{h}\to 0$, and then
nucleating the IR brane at $\rho=0$ and bringing it to larger values of
$\rho$. Therefore this field configuration probes the geometry in the region
where the local temperature is super-Planckian and stringy corrections would
be relevant. However, in the case where the transition temperature is low, and
there is an approximate conformal symmetry in the IR, the dominant
contribution to the bounce is dictated by the radion dynamics and can be
estimated while ignoring the gravitational contribution to the bounce
Agashe2020 . Even so, since the field configuration probes the geometry in the
far IR, the bounce action for this configuration can depend sensitively on the
details of the GW stabilisation, and other physics in the IR. We summarize the
results for the bounce action that have been considered in various limits in
the literature next.
### 2.2 Bounce action from the radion
In a large class of models, the phase transition is captured by the dynamics
of the radion Creminelli2002 ; Agashe2019 ; Agashe2020 . The general radion
effective field theory can be understood in terms of the dual 4D theory. The
4D theory is a near-conformal field theory coupled to gravity. The
gravitational sector breaks the conformal symmetry explicitly, but below the
gravitational cutoff an approximate conformal symmetry survives. For a
stabilised RS geometry with a light radion, the 4D effective theory below the
KK scale $\mu_{\rm TeV}$ is well-described by an effective theory of a
spontaneously broken (approximate) conformal symmetry, with the light
radion/dilaton as the pseudo-Nambu-Goldstone boson.
In this section we work in this 4D picture to study a few such generalisations
that have been studied in the literature. As we will see, in each case the
first order phase transition is highly suppressed. The effective Lagrangian
for the dilaton Creminelli2002 ; Coradeschi2013 can be written as,
$\displaystyle\mathcal{L}_{\rm eff}$
$\displaystyle=\frac{N^{2}}{16\pi^{2}}\left[(\partial\mu)^{2}-\lambda(g(\mu))\mu^{4}\right]\,,$
(14)
where the $\mu$-dependence in $g(\mu)$ denotes the explicit breaking of
conformal symmetry due to the GW deformation. We expect the dilaton to be the
lightest bound state of the gauge theory Goldberger2000 ; Pomarol2019 as it
is the pNGB of the broken dilatation symmetry, and so is the only relevant degree of
freedom in the IR of the theory. The $N^{2}$ factor makes explicit the fact
that the dilaton is interpreted as a glueball state in a 4D large-N gauge
theory.
The free energy in the (de)confined phase is well approximated by,
$\displaystyle F_{\rm confined}$ $\displaystyle=V(\mu_{{\rm
TeV}})=-\frac{N^{2}}{16\pi^{2}}\lambda_{\rm TeV}\mu_{\rm TeV}^{4}+V_{0}$
$\displaystyle F_{\rm deconfined}$
$\displaystyle=C-2\pi^{4}(M_{5}/k)^{3}T^{4}=C-\frac{\pi^{2}}{96}N^{2}T^{4}$
(15)
where we have defined $\lambda_{\rm TeV}\equiv|\lambda(g(\mu_{\rm TeV}))|$,
and added a constant $V_{0}$ to ensure that the vacuum energy at the minimum
is zero. Notice that $\lambda(g(\mu_{\rm TeV}))<0$ for $\mu_{\rm TeV}$ to be
the minimum of the potential. $C$ can be calculated by matching the free
energy at $\mu=T=0$. The critical temperature can be calculated by equating
the free energy in the two phases at the transition,
$\displaystyle C-\frac{\pi^{2}}{96}N^{2}T_{c}^{4}$
$\displaystyle\simeq-\frac{N^{2}}{16\pi^{2}}\lambda_{\rm TeV}\mu_{\rm
TeV}^{4}+V_{0}$ (16) $\displaystyle\Rightarrow T_{c}$
$\displaystyle\simeq\left[\frac{6\lambda_{\rm
TeV}}{\pi^{4}}\right]^{1/4}\mu_{\rm TeV}\,.$ (17)
When $\lambda_{\rm TeV}\ll 1$, the transition temperature $T_{c}\ll\mu_{\rm
TeV}$, and the approximation of radion domination is well justified.
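For example, $\lambda_{\rm TeV}=0.1$ already gives $T_{c}\simeq 0.28\,\mu_{\rm TeV}$.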
If the phase transition is prompt, it completes for $T\sim T_{c}$. In this
case the bubble has O(3) symmetry, and the action can be estimated in the
thin-wall regime (see e.g. Kaplan2006 ),
$\displaystyle\frac{S_{3}}{T}$
$\displaystyle\sim\frac{N^{2}}{8}\left[\frac{1}{\lambda_{\rm
TeV}}\right]^{3/4}\frac{T_{c}}{T}\left(1-\left(\frac{T}{T_{c}}\right)^{4}\right)^{-2}\,.$
(18)
This explicitly shows the general enhancement of the bounce action by $N^{2}$,
and often also by the weak coupling $\lambda_{\rm TeV}$.
We can evaluate the bounce action for the GW model considered in section 2.1
above. The quartic $\lambda_{\rm TeV}$ in this case is,
$\displaystyle\lambda_{\rm TeV}$
$\displaystyle=\frac{16\pi^{2}}{N^{2}}\epsilon^{3/2}v_{\rm ir}^{2}$ (19)
which leads to the following parametric form for the bounce action
Creminelli2002 ,
$\displaystyle\frac{S_{3}}{T}$
$\displaystyle\sim\frac{N^{7/2}}{\epsilon^{9/8}v_{\rm
ir}^{3/2}}\frac{T_{c}}{T}\left(1-\left(\frac{T}{T_{c}}\right)^{4}\right)^{-2}\,.$
(20)
The action is not only enhanced by the factor of $N^{2}$, but also by the
small quartic coupling of the radion, which increases the dependence on $N$ to
$N^{7/2}$. There is an additional enhancement by $1/\epsilon$, related to the
fact that the scale symmetry violation at $\mu_{\rm TeV}$ is parametrised by
$\epsilon$. The exact power of $\epsilon$ that appears can depend on the
implementation of the GW mechanism Creminelli2002 ; Nardini2007 ; Agashe2019 ;
Agashe2020 .
More generally, we can see that the action is enhanced for small scale
symmetry violation encoded in $\beta_{\lambda}\ll 1$. The zero temperature
minimum is determined by the running quartic, $\lambda(g(\mu))$,
$\displaystyle\partial V(\mu)/\partial\mu=[4\lambda(g(\mu_{\rm
TeV}))+\beta_{\lambda}(g(\mu_{\rm TeV}))]\mu^{3}=0\,.$ (21)
Thus for a nearly scale invariant theory at $\mu_{\rm TeV}$, $\lambda_{\rm
TeV}$ will be generically small.
If the transition is not prompt, it will take place in the
supercooled regime, $T\ll T_{c}$. In the case where there is a barrier in the
radion potential between $\mu_{\rm TeV}$ and $\mu\sim 0$, the bounce
configuration is essentially the same as the zero temperature tunnelling, and
has an O(4) symmetric bounce action Creminelli2002 ; Nardini2007
$\displaystyle S_{4}$ $\displaystyle\sim\frac{N^{2}}{16\pi^{2}\lambda_{\rm
TeV}}.$ (22)
As before, the explicit factor $N^{2}$ appears. Again, we see that in the case
of the simplest RS+GW model above, the parametric dependences are even
stronger,
$\displaystyle S_{4}$
$\displaystyle\sim\frac{N^{4}}{(4\pi)^{4}\epsilon^{3/2}v_{\rm ir}^{2}}.$ (23)
If there is no barrier between $\mu\sim 0$ and $\mu=\mu_{\rm TeV}$, then the
“release point” for the radion field can be very small $\mu\sim T$ even for
supercooled transition, so the smallest bounce action is still obtained by an
O(3) symmetric bounce. For example, in the case where conformal symmetry is
restored in the IR ($\epsilon>0$ for the GW field) Agashe2019 , the radion
potential near the origin is $V\sim\lambda(0)\mu^{4}$. The bounce action is
then the $T\ll T_{c}$ limit of equation (18),
$\displaystyle\frac{S_{3}}{T}$
$\displaystyle\sim\frac{N^{2}}{8[\lambda(0)]^{3/4}}$ (24)
which is no longer suppressed by the small parameter $\epsilon$. In the case
where $\epsilon\lambda_{\rm TeV}$ is not parametrically small, radion dynamics
no longer suffice to estimate the bounce. However, it may still be possible to
estimate the bounce in the 5D effective theory Agashe2020 and is found to be
$\mathcal{O}(N^{2})$.
When $\epsilon<0$, the GW field profile grows towards the IR. Consequently,
the higher order terms in the GW potential might become important and the
approximate conformality $\partial_{\log\mu}g(\mu)\sim\epsilon$ might be
broken as we approach $\mu\lesssim\mu_{\rm TeV}$. In such cases the
enhancement of the bounce action by $1/\epsilon$ will be absent, even though
the EW/Planck hierarchy is set by small epsilon. This can be explicitly seen
in explicit holographic constructions Bigazzi2020 ; Hassanain2007 , or in RS
models with more general stabilisation mechanisms Konstandin2010 ; Nardini2007
; Konstandin2011 ; Dillon2017 ; VonHarling2017 ; Bruggisser2018 ;
Bruggisser2018a ; Megias2018a ; Baratella2019 ; Agashe2019 ; Agashe2020 ;
Fujikura2019 ; Megias2020 ; Bunk2018 .
We see from the examples above that while the details of the bounce action
depend on the actual theory, it takes the form $S_{b}\simeq N^{2}/\lambda$ in
each case, with $\lambda\lesssim 1$. This has far-reaching consequences for
early universe cosmology. Either the universe is required to be reheated to
temperatures lower than the confinement scale, or there is a strong constraint
on the maximal $N$ allowed.
If the rate of tunnelling is smaller than Hubble, the universe will get stuck
in the false vacuum Guth1983 . Since the true vacuum at zero temperature is
assumed to have a (nearly) vanishing cosmological constant, the deconfined
vacuum has a large positive cosmological constant $C\sim N^{2}T_{c}^{4}$ at
low temperatures and starts to inflate with $H\simeq NT_{c}^{2}/M_{\rm pl}$.
In a Hubble volume, the probability of completing the phase transition within
a Hubble time is,
$\displaystyle P$ $\displaystyle=\Gamma/H^{4}$ (25)
If $P\ll 1$, then the universe eternally inflates. This gives us a bound on
$N$,
$\displaystyle N$ $\displaystyle\lesssim 2\sqrt{\lambda\log\frac{M_{\rm
pl}}{T_{c}}}$ (26)
We have replaced the inverse critical radius by $T_{c}$; unless the bubble
size is exponentially smaller, this is a reasonable approximation. In many
models considered above, $\lambda$ is parametrically small. For the RS+GW
model above, $S_{b}$ is enhanced both by $N$ as well as $1/\epsilon$, making
it impossible to satisfy the constraint above. Even without these
enhancements, the calculations above assume dilaton dominance, which requires
$\lambda_{\rm TeV}\ll 1$ Agashe2019 . Therefore, it is hard to get
$S_{b}\lesssim N^{2}$ in a controlled approximation. This translates into a
bound $N\lesssim 12$ for $T_{c}\sim 1\,{\rm TeV}$. From equation 5, we see
that the hierarchy between the 5D Planck scale and the AdS curvature is
$(M_{5}/k)\lesssim 1$. This lack of hierarchy makes the 5D effective
gravitational theory very delicate.
One avenue to evade this cosmological bound is to avoid reheating the universe
above the TeV scale. This may require a more intricate inflationary mechanism,
as well as solutions to baryogenesis at the electroweak scale or below
Baldes:2018nel ; Bruggisser2018 ; Bruggisser2018a . In the next section we
outline the avoided deconfinement mechanism, where the GW stabilisation of the
radion is temperature dependent and the IR brane is stabilised at arbitrarily
high temperatures. This allows for parametrically large hierarchies between
$M_{5}$ and $k$, and an early cosmology without a stringent restriction on the
reheat temperature.
## 3 5D Model for Avoided Deconfinement
In this section we modify the RS model with GW field ($\Phi$) by including
extra scalars localised to the IR brane (localised fields on the IR brane may
be required to arise from corresponding bulk modes with masses below the 5D
cutoff Fichet:2019owx ; these bulk modes will then have an associated tower of
KK states, but this detail will not affect our discussion). Given a suitable
set of parameters, the effect of this will be to realise a model where the new
scalars provide a metastable minimum for the radion at high temperature,
avoiding the formation of the AdS-S black hole.
We make a simple modification to the RS model described in equation (9) by
adding scalar field(s) $S$ to the IR brane. The action is:
$\displaystyle S$ $\displaystyle=S_{\rm bulk,RS}+S_{{\rm uv},\rm RS}+S_{{\rm
ir},\rm RS}+S_{{\rm ir},\rm AD}\,.$ (27)
where $S_{\rm bulk,RS}$ and $S_{{\rm uv}/{\rm ir},\rm RS}$ are the RS model
bulk and brane actions which are unchanged from their definition in equations
(7), (8),and (9). We continue to choose the detuning parameter
$\delta\Lambda_{\rm ir}=0$. As usual, this simplifying assumption can be
relaxed. The modified IR brane action, $S_{{\rm ir},\rm AD}$ includes $N_{s}$
real scalars $S$ localised to the brane. The additional terms in the IR brane
action is
$\displaystyle S_{{\rm ir},\rm AD}$ $\displaystyle=k^{4}\int_{\rho=\rho_{\rm
ir}}d^{4}x\sqrt{-g_{\rm ir}}\sum_{i=1}^{N_{s}}\left[\frac{1}{2k^{2}}g_{{\rm
ir}}^{\mu\nu}\partial_{\mu}S_{i}\,\partial_{\nu}S_{i}-\frac{\lambda_{s}}{4}(S_{i}^{2}-v_{s}^{2})^{2}-\frac{\gamma}{6}S_{i}^{3}\right],$
(28)
where we have explicitly included factors of $k$ so that the parameters
$\lambda_{s},\gamma,v_{s}$ as well as the field $S$ are dimensionless. We will
suppress the index $i$ on $S$ for notational simplicity. In order for the
potential to remain bounded from below, the coupling $\lambda_{s}$ must be
positive. The value of the masses and quartic couplings of each field $S$ do
not have to be equal, but for simplicity we will take them to be the same. For
unequal couplings our results below can be reinterpreted using statistical
averages over the $S$ ensemble. Each $S$ has an approximate $Z_{2}$ symmetry
that is spontaneously broken at zero temperature. The coupling $\gamma$ that
weakly breaks the $Z_{2}$ symmetry for each $S$ is introduced to avoid domain
wall problems, and can be very small in a technically natural way.
Before presenting the consequences of adding these extra scalars, we summarize
the choice of parameters for which our approximations are under theoretical
control. The primary goal of the AD setup is to generate a classically stable
minimum for the radion at high temperatures for arbitrary $N$, therefore
avoiding a confinement phase transition entirely and putting the large-$N$
approximation on a firmer footing. It is then worth highlighting the validity
of the large-$N$ expansion, especially in light of adding extra matter on the
IR brane. Requiring the gravitational loop counting parameter to be small,
$N_{\rm species}/N^{2}\lesssim 1$, restricts the number of scalars we can add.
As we show below, the AD mechanism does require $N_{s}>1$ to operate. However,
in order to obtain a classically stabilized radion at high temperatures,
$N_{s}$ can be parametrically smaller than $N^{2}$. In this case, the black
hole phase does have a lower free energy, but the tunneling rate from the AD
phase to the black hole phase is exponentially suppressed by a tunnelling
exponent of order $\sim N^{3}$. We present an estimate of the tunneling rate
in appendix A. Thus, the parameter $N$ can be taken arbitrarily large while
keeping other parameters in our model fixed, ensuring that the $1/N$ expansion
is well under control.
In order to understand the other parametric scalings in the Lagrangian, it is
useful to characterize the cosmological history of the AD model by the
following three scales:
1. 1.
the temperature $T_{s}$ at which the scalars $S$ undergo a (crossover) phase
transition;
2. 2.
the temperature $T_{c}$ at which the AD construction begins to take effect and
the position of the IR brane begins to vary with temperature; and
3. 3.
the zero-temperature radion vev, $\mu_{\rm TeV}$.
The AD model requires the hierarchy $T_{s}<T_{c}$, due to the fact that the
addition of the new scalars $S$ only generates the desired finite-temperature
effects in the symmetric phase. At temperatures $T>T_{c}$ the position of the
IR brane is moved from the GW minimum due to thermal effects. As mentioned
above, in this temperature range the confined phase is metastable, in contrast
to the usual RS model where the confined phase becomes classically unstable at
high temperature. Another condition on the parameter space comes from the
requirement that the IR brane is stabilized at a radius where the local
temperature is small enough such that the backreaction on the bulk geometry is
small. At the confinement scale, this condition implies
$\displaystyle T_{c}<\frac{\mu_{\rm TeV}}{\pi},$ (29)
This condition becomes stronger logarithmically in temperature, and can fail
at very high temperatures, which sets a maximum temperature $T_{\rm max}$ for
the AD mechanism to operate. As we explain in further detail in section 3.2,
this leads to the following condition on the parameters of the model:
$\displaystyle 1>\frac{\pi T_{c}}{\mu_{\rm
TeV}}>\sqrt{\frac{6}{N_{s}}}\frac{m_{\varphi}}{m_{s}}>v_{s}$ (30)
where $m_{\varphi},m_{s}$ are the masses of the radion and scalar ($s$)
fluctuations at zero temperature. Since the mass of the radion $m_{\varphi}$
cannot be too small phenomenologically, this leads us to require a moderately
large number of scalar fields $N_{s}$ for the AD model to work.
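As an illustration, for $m_{\varphi}\simeq m_{s}$ and $\pi T_{c}/\mu_{\rm TeV}\simeq 1/2$, the middle inequality in equation (30) requires $N_{s}\gtrsim 24$.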
### 3.1 Finite temperature effective potential for the radion
We work in a regime where the local 5D temperature remains below
$(M_{5}^{3}k^{2})^{1/5}$ everywhere in the 5th dimension, so that the finite
temperature effects have a negligible backreaction effect on the bulk
geometry. At any temperature $T$, with the IR brane at position $\rho_{\rm ir}(T)$, we can solve the equations of motion for the GW field on the background RS metric with the same boundary conditions, $\Phi(\rho_{{\rm ir}/{\rm uv}}(T))=k^{3/2}v_{{\rm ir}/{\rm uv}}$. The bulk solution is,
$\displaystyle\Phi(\rho)=A\rho^{-4-\epsilon}+B\rho^{\epsilon}$ (31)
where $A,B$ are fixed by the boundary conditions. As above, $\epsilon\approx
m_{\Phi}^{2}/(4k^{2})$, which we take to be positive. We expand in
fluctuations around this classical solution with the size of the extra
dimension equal to $\rho_{\rm ir}(T)$. We decompose the bulk fluctuations into
Kaluza-Klein modes and integrate over the 4D modes to derive the finite
temperature effective potential.
The temperature-dependent effective potential can be broken up into the tree-
level potential and the one-loop potential Delaunay2008 ; Curtin2018 :
$\displaystyle V_{\rm eff}(T)$ $\displaystyle=V_{\rm tree}+\Delta V^{\rm
CW}_{1}+\Delta V^{T}_{1}(T)$ (32)
where we have separated the 1-loop contribution into a piece that vanishes at zero temperature, $\Delta V^{T}_{1}(0)=0$, and a zero-temperature Coleman-Weinberg potential $\Delta V^{\rm CW}_{1}$, which includes the usual UV divergences one encounters in these calculations. The tree-level 4D action is obtained as above by integrating the classical solution over the extra
obtained as above by integrating the classical solution over the extra
dimension. The potential is given by,
$\displaystyle V_{\rm tree}(\mu,S_{i})$
$\displaystyle=\mu^{4}\left[(4+2\epsilon)(v_{\rm ir}-v_{\rm
uv}(\mu/k)^{\epsilon})^{2}-\epsilon v_{\rm
ir}^{2}+\sum_{i=1}^{N_{s}}\left(\frac{\lambda_{s}}{4}(S^{2}_{i}-v_{s}^{2})^{2}+\frac{\gamma}{6}S_{i}^{3}\right)\right]$
(33)
where we have used $\mu=k^{2}\rho_{\rm ir}(T)$. We have also suppressed the
$T$-dependence in the notation.
The one-loop contribution is obtained by integrating over the fluctuations.
The finite temperature contribution to the potential depends on the effective
mass of the fluctuations around the classical solution. The relevant particles
in our case are the radion, the new scalars and the SM fields. Including the
kinetic term for the radion Coradeschi2013 ; Chacko:2014pqa and the scalars,
we find the following action for the fluctuations:
$\displaystyle S$ $\displaystyle=\int d^{4}x\left[\mathcal{L}_{\rm
SM}+\frac{1}{2}(\partial\varphi)^{2}+\frac{1}{2}(\partial s_{i})^{2}-V_{\rm
tree}\left(\mu\left(1+\frac{\varphi}{F_{\varphi}}\right),S_{i}+\frac{s_{i}}{\mu}\right)\right]$
(34)
We have introduced the canonically normalized fluctuations for the radion
$\varphi$ and the scalars $s$. The radion decay constant is
$\displaystyle F_{\varphi}=\frac{N}{2\sqrt{2}\pi}\mu.$ (35)
The indices in the kinetic term are now contracted with the 4d Minkowski
metric.
The field-dependent masses of these particles are defined as the second
derivative of the potential w.r.t. the corresponding field. The SM particle
masses and the radion and $s$ masses all scale with $\mu$. For a large number
of scalars $N_{s}\gg 1$, the thermal potential is dominated by loops of
$s_{i}$†††Note that since the SM contribution to the $\mu$ potential is proportional to the mass of the SM field, only the contributions from $t,W,Z,h$ are sizeable.. The field-dependent masses of $s_{i}$ are,
$\displaystyle m_{s,i}^{2}$
$\displaystyle=\mu^{2}\left[-\lambda_{s}v_{s}^{2}+3\lambda_{s}S_{i}^{2}+\gamma
S_{i}\right]$ (36)
The other modes in the spectrum are the KK modes of the graviton, the GW field
and other fields in the bulk. The mass of the $n$th KK mode is approximated by
Gherghetta2000 :
$\displaystyle m_{n}$
$\displaystyle\simeq\left(n+\frac{2+\epsilon}{2}-\frac{3}{4}\right)\pi\mu.$
(37)
The $\rho$-coordinate of the would-be AdS-S horizon is $\rho_{h}=\pi T/k^{2}$.
Therefore, the condition that the IR brane is stabilised outside the AdS-S
horizon, $\rho_{\rm ir}(T)>\rho_{h}$ implies that the higher KK modes are not
excited at any $T$, and can be safely neglected in the thermal potential.
The Coleman-Weinberg potential is given by,
$\displaystyle V_{1}^{\rm CW}(\mu,S_{i})$
$\displaystyle=\sum_{i=1}^{N_{s}}\frac{1}{64\pi^{2}}m_{s,i}^{4}\left(\log\left[\frac{m_{s,i}^{2}}{\mu_{R}^{2}}\right]-\frac{3}{2}\right)$
(38)
where $\mu_{R}$ is a renormalisation scale. A convenient choice of the
renormalisation scale for the dynamics on the IR brane is $\mu$ itself
Sundrum:2003yt . With this choice, we do not generate large hierarchies of
scale on the IR brane and the one-loop corrections at zero-temperature stay
small. We have included all terms allowed by the scale symmetry of the radion
and the $Z_{2}$ symmetries in the $s$-sector, as well as leading terms
violating these symmetries parameterized by the small parameters
$\epsilon,\gamma$. Thus, the higher order terms can be safely neglected and we
will simply absorb the UV divergent pieces into a redefinition of couplings
and masses as renormalized quantities.
The finite-temperature one-loop contributions from $s_{i}$ are
$\displaystyle\Delta V_{1}^{T}(\mu(T),S_{i}(T),T)$
$\displaystyle=\sum_{i=1}^{N_{s}}\frac{T^{4}}{2\pi^{2}}\int
dk\,k^{2}\log\left[1-\exp\left(-\sqrt{k^{2}+\frac{m_{s,i}^{2}(\mu,S_{i})}{T^{2}}}\right)\right]$
(39)
$\displaystyle\equiv\sum_{i=1}^{N_{s}}\frac{T^{4}}{2\pi^{2}}J_{b}\left(\frac{m_{s,i}^{2}(\mu,S)}{T^{2}}\right)$
(40)
We approximate the thermal function $J_{b}$ by assuming that $m_{s}(T)\ll T$.
At high temperature the thermal function can be approximated as,
$\displaystyle J_{b}(y^{2})$
$\displaystyle\approx-\frac{\pi^{4}}{45}+\frac{\pi^{2}}{12}y^{2}-\frac{\pi}{6}y^{3}-\frac{1}{32}y^{4}\left(\log\frac{y^{2}}{\pi^{2}}+2\gamma_{E}-\frac{3}{2}\right)\,.$
(41)
where $\gamma_{E}$ is the Euler-Mascheroni constant. The field-dependent
$\log$ pieces cancel between the Coleman-Weinberg terms and the thermal
corrections. The renormalisation scale $\mu_{R}\simeq\mu$ scales with the
temperature (as we show below), and hence we do not get any enhanced large-log
pieces, and we can safely ignore the terms of $\mathcal{O}(y^{4})$. Then,
$\displaystyle\Delta V_{1}^{T}(\mu,S,T)$
$\displaystyle\simeq\frac{N_{s}}{24}T^{2}\mu^{2}\left(-\lambda_{s}v_{s}^{2}+3\lambda_{s}{S^{2}}+\gamma{S}\right)$
$\displaystyle\qquad-\frac{N_{s}}{12\pi}T\mu^{3}\left(-\lambda_{s}v_{s}^{2}+3\lambda_{s}{S^{2}}+\gamma{S}+\lambda_{s}\frac{T^{2}}{4\mu^{2}}\right)^{3/2}$
(42)
where we have used the fact that the scalar vevs are all equal, $S_{i}=S$, when $\{\lambda_{s},v_{s},\gamma\}$ are taken to be the same for each $s_{i}$.
The extra term involving $T^{2}$ in the second term above is a result of
performing the leading daisy resummation, where we replace the field-dependent
mass $m_{s,i}^{2}$ by $m_{s,i}^{2}+\Pi_{i}$ in equation (40), with $\Pi_{i}$
the leading temperature contribution to the one-loop thermal mass.
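As a sanity check on this truncation, the thermal function of equation (40) can be evaluated by direct numerical quadrature and compared with the high-temperature expansion (41). The following Python sketch is our illustration (the finite upper integration cutoff is an assumption, justified by the exponential decay of the integrand); the two expressions agree well for $y\lesssim 1$:

    import numpy as np
    from scipy.integrate import quad

    GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

    def Jb_exact(y2):
        # Bosonic thermal function of equation (40), evaluated by quadrature;
        # the integrand decays exponentially, so a cutoff at x = 50 suffices.
        f = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + y2)))
        return quad(f, 0.0, 50.0)[0]

    def Jb_highT(y2):
        # High-temperature expansion of equation (41), valid for y << 1.
        y = np.sqrt(y2)
        return (-np.pi**4/45 + np.pi**2/12*y2 - np.pi/6*y**3
                - y2**2/32*(np.log(y2/np.pi**2) + 2*GAMMA_E - 1.5))

    for y in (0.1, 0.5, 1.0):
        print(f"y = {y}: exact = {Jb_exact(y*y):+.4f}, expansion = {Jb_highT(y*y):+.4f}")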
Thus, we see that at high temperatures $T>T_{s}\simeq\frac{1}{2}v_{s}\mu_{\rm TeV}$, thermal effects drive the restoration of the (approximate) $Z_{2}$ symmetry in $S$, so that $\langle S\rangle\ll 1$. This generates a tachyonic direction for
$\mu$, providing a finite temperature stabilization. As the universe cools,
the $S$ symmetry gets broken, and the radion settles down close to its zero
temperature minimum dictated by the GW part of the potential. The thermal
potential can be minimized numerically, and for a range of parameters the
radion remains stabilized outside the would-be AdS-S horizon at high
temperatures.
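To illustrate this last statement, the minimization can be carried out with standard tools. The sketch below is our illustration, not code from the original analysis: it assembles the tree-level potential of equation (33) at $\langle S\rangle=0$ (with $\gamma\to 0$) plus the truncated thermal terms of equation (42), for the benchmark parameter values quoted in equation (50) of the next subsection; the temperature grid and bracketing bounds are our choices:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Benchmark parameters of equation (50); all dimensionful quantities in GeV.
    k, eps = 6e16, 4.13e-2
    v_uv, v_ir = 1e-3, 3e-4
    Ns, lam_s, v_s = 100, 1.0, 2e-3

    def V(mu, T):
        # Tree-level GW potential, eq. (33), evaluated at S_i = 0 (gamma -> 0)...
        tree = mu**4 * ((4 + 2*eps)*(v_ir - v_uv*(mu/k)**eps)**2
                        - eps*v_ir**2 + Ns*lam_s*v_s**4/4)
        # ...plus the leading thermal terms of eq. (42) at S = 0.
        m2 = -lam_s*v_s**2
        therm = (Ns/24*T**2*mu**2*m2
                 - Ns/(12*np.pi)*T*mu**3
                   * max(m2 + lam_s*T**2/(4*mu**2), 0.0)**1.5)
        return tree + therm

    for T in (1e4, 1e6, 1e8, 1e10):
        res = minimize_scalar(V, args=(T,), bounds=(1e-3*T, 1e3*T), method='bounded')
        print(f"T = {T:.0e} GeV : mu(T) = {res.x:.2e} GeV, mu/(pi T) = {res.x/(np.pi*T):.2f}")

The printed ratio $\mu/(\pi T)$ stays above unity, i.e. the minimum sits outside the would-be AdS-S horizon, and decreases only slowly with $T$, in line with equation (45) below.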
### 3.2 High temperature radion stabilization
The minimum of the radion potential can be simply approximated in two distinct
regimes:
$\displaystyle\mu(T)$ $\displaystyle=\left\{\begin{array}{ll}\mu_{{\rm TeV}}&T<T_{c}\\ \mu_{{\rm TeV}}\left(\frac{T}{cT_{c}}\right)^{\frac{1}{1+\epsilon}}&T\gg T_{c}\end{array}\right.$ (45)
where the constant $c$ is given by:
$\displaystyle c^{2}=\frac{4v_{\rm uv}^{2}}{\epsilon^{3/2}v_{\rm
ir}^{2}}\left(\frac{\mu_{\rm TeV}}{k}\right)^{2\epsilon}$ (46)
The zero temperature value of the radion minimum $\mu_{\rm TeV}$ is well approximated by equation 11, up to an $\mathcal{O}(\gamma v_{s}^{3})$ correction to the $\mu^{4}$ coefficient, which is a result of the potential for $S$ not vanishing at the zero-temperature minimum. The transition temperature $T_{c}$ is the temperature at which the radion starts to move and is given by,
$\displaystyle T_{c}^{2}$
$\displaystyle\simeq\frac{6}{N_{s}}\frac{m_{\varphi}^{2}}{m_{s}^{2}}\mu_{\rm
TeV}^{2}=\frac{24\mu_{\rm
TeV}^{2}}{N_{s}(\lambda_{s}v_{s}^{2})}\epsilon^{3/2}v_{\rm ir}^{2}$ (47)
where $m_{\varphi},m_{s}$ here are the zero-temperature masses for $\varphi,s$. Since $\mu(T)/T$ is slowly decreasing, there is a maximum temperature $T_{\rm max}$ beyond which the IR brane would fall behind the horizon,
$\displaystyle T_{\rm max}$ $\displaystyle\sim\mu(T_{\rm max})/\pi\Rightarrow
T_{\rm max}=\frac{\mu_{\rm TeV}}{\pi}\left(\frac{\mu_{\rm TeV}}{\pi
cT_{c}}\right)^{1/\epsilon}$ (48)
This sets a (mild) bound on the reheat temperature of the universe. This is at
an exponentially high scale for $\delta\equiv\pi T_{c}/\mu_{\rm TeV}\ll 1/c$.
The AD transition temperature should also be higher than the $Z_{2}$ symmetry
restoration temperature, $T_{c}>T_{s}$. These two requirements give us the
following inequalities on our parameter space,
$\displaystyle\delta>\sqrt{\frac{6}{N_{s}}}\frac{m_{\varphi}}{m_{s}}>v_{s}$
(49)
For illustration, we choose the following benchmark values‡‡‡The value of
$\epsilon$ was chosen such that $\mu_{\rm TeV}\simeq 100\ {\rm TeV}$. A more
generic choice works equally well.
$\displaystyle\left\{k=6\times 10^{16}\ {\rm GeV},\epsilon=4.13\times 10^{-2},v_{\rm uv}=10^{-3},v_{\rm ir}=3\times 10^{-4},\right.$ $\displaystyle\left.N_{s}=100,\lambda_{s}=1,v_{s}=2\times 10^{-3},\gamma=-10^{-8}\right\},$ (50)
and find the following parameters
$\displaystyle\mu_{\rm TeV}$ $\displaystyle\simeq 1.8\times 10^{-12}k\simeq
100\ {\rm TeV}$ (51) $\displaystyle T_{c}$ $\displaystyle\simeq 700\ \ {\rm
GeV}\,.$ (52)
The maximum temperature $T_{\rm max}$ for this case is around $5\times
10^{11}\ {\rm GeV}$. The temperature evolution of various scales in this
benchmark is illustrated in figure 2.
Figure 2: Plots showing the dependence of mass scales in the theory with
temperature, for parameters in equation (50). The left plot shows the radion
expectation value ($\mu$), the scalar expectation value ($S\mu$) and the QCD
scale, while the second plot shows the same quantities and the KK scale with
the dependence on the radion factored out. The vertical line shows the
critical temperature $T_{c}\sim 700\ \ {\rm GeV}$. The dashed black line in
the first plot shows the horizon location $k^{2}\rho_{h}$ as a function of
temperature.
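The quoted scales follow directly from equations (46)-(48). A short numerical check (our illustration; $\mu_{\rm TeV}=100$ TeV is taken from equation (51) rather than recomputed from the GW minimum) reproduces $c\simeq 24$, $T_{c}\simeq 670$ GeV and $T_{\rm max}\simeq 5\times 10^{11}$ GeV:

    import numpy as np

    # Benchmark of equation (50); mu_TeV ~ 100 TeV as quoted in equation (51).
    k, eps = 6e16, 4.13e-2          # GeV
    v_uv, v_ir = 1e-3, 3e-4
    Ns, lam_s, v_s = 100, 1.0, 2e-3
    mu_TeV = 1e5                    # GeV

    c    = np.sqrt(4*v_uv**2/(eps**1.5*v_ir**2)*(mu_TeV/k)**(2*eps))  # eq. (46)
    Tc   = np.sqrt(24*mu_TeV**2*eps**1.5*v_ir**2/(Ns*lam_s*v_s**2))   # eq. (47)
    Tmax = mu_TeV/np.pi*(mu_TeV/(np.pi*c*Tc))**(1/eps)                # eq. (48)

    print(f"c = {c:.1f}, T_c = {Tc:.0f} GeV, T_max = {Tmax:.1e} GeV")

    # Inequality chain of eq. (49): delta > sqrt(6/Ns)*m_phi/m_s > v_s,
    # where sqrt(6/Ns)*m_phi/m_s = T_c/mu_TeV by eq. (47).
    print(np.pi*Tc/mu_TeV, ">", Tc/mu_TeV, ">", v_s)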
Comparing the AD model to the usual RS model, the high temperature behaviour
is vastly different. In the RS model, as we move to high temperature, thermal
effects drive the IR brane towards the AdS boundary, where it eventually
collapses to form an AdS-S black hole Hebecker2001 . The model then remains in
the black hole phase until tunnelling to the RS phase through a first order
phase transition which is highly suppressed in calculable models, as was shown
in section 2. In contrast, the model presented here describes a situation
where the IR brane is stabilised closer to the UV end of the warped direction
at high temperatures, then falls approximately linearly with temperature into
the IR before stabilising at a constant value deep in the IR as in the RS
case. It should be noted that at very high temperatures, the black hole phase
can still have a lower free energy than the AD phase, but the IR brane remains
meta-stabilised. The only way the system can transition to the black hole
phase is through a first order phase transition which is exponentially
suppressed.
The AD model also introduces a novel temperature dependence of the mass scales
on the IR brane, which scale linearly with the radion. Assuming that the SM is
confined to the IR brane, the left panel of figure 2 shows the dependence of
the QCD scale with temperature in the AD model. The other dimensional
parameters of the SM, such as the electroweak vacuum value, exhibit a similar
scaling above the critical temperature, $T_{c}$, of the theory. Above $T_{c}$
the temperature only increases marginally relative to the other scales of the
theory. This leads to the KK modes being frozen out to arbitrarily high
temperature, as shown in the right panel of figure 2. A similar behaviour
would occur for any other scales of the theory which are above $T_{c}$ – they
are frozen out to much higher temperatures than in the usual RS model. In
section 5 we discuss this behaviour and its potential implications for BSM
phenomenology in more detail.
## 4 Low Temperature Phenomenology
In this section we study the constraints on the avoided deconfinement
model from collider results and ALP searches. For simplicity we have assumed
that the SM is confined to the IR brane. More realistic models typically have
some or all SM fields propagating in the bulk, which can offer an explanation
of the hierarchical Yukawa couplings in the SM Gherghetta2000 ; Agashe2003 ;
Casagrande2008 but also lead to new constraints from flavour-violating
processes Agashe2005 ; Bauer2009 . The constraints on the RS model have been
well-studied. Here we will focus on the new features that are required for the
AD mechanism to work.
The qualitative features that the mechanism requires can be estimated using
the inequality in equation 30. For $\delta\ll 1$, we need,
$\displaystyle\frac{m_{s}}{\mu_{\rm TeV}}$ $\displaystyle\simeq v_{s}<\delta$
$\displaystyle\frac{m_{\varphi}}{\mu_{\rm TeV}}$
$\displaystyle<\delta^{2}\sqrt{N_{s}}\,.$ (53)
Experimental constraints on a very light radion will lead us to require a
large number of scalars, $N_{s}\gg 1$, with masses below the confinement
scale.
In section 4.1 we write down the effective Lagrangian – the relevant degrees
of freedom being the SM fields, the radion $\varphi$, and the AD scalar(s)
$s$. We ignore higher dimensional operators which arise from integrating out
KK modes and $1/N$ suppressed stringy corrections, taking these to be
negligible. In section 4.2 we then describe the dominant experimental
constraints in different regions of parameter space.
### 4.1 Effective Lagrangian at zero temperature
The tree-level interactions of the radion $\varphi$ with the SM fields can be
written compactly by the replacing the mass terms in the SM by a $\varphi$
dependent mass,
$\displaystyle\mathcal{L}^{(\rm tree)}[m_{i}]\to\mathcal{L}^{(\rm tree)}_{\rm
int}\left[m_{i}\left(1+\frac{\varphi}{F_{\varphi}}\right)\right]$ (54)
where the decay constant $F_{\varphi}$ was defined in equation (35). This
produces Yukawa-like interactions of $\varphi$ with the fermions, as well as
trilinear and quartic couplings with the Higgs and the gauge bosons. This form
of the radion potential is dictated by the AdS isometries (and hence is valid
in the limit of $\epsilon\ll 1$). The self-interaction terms for the radion
are generated by the GW mechanism,
$\displaystyle\mathcal{L}_{\rm radion}$
$\displaystyle=\frac{1}{2}m_{\varphi}^{2}\varphi^{2}\left(1+\frac{5}{3}\frac{\varphi}{F_{\varphi}}+\frac{11}{12}\frac{\varphi^{2}}{F_{\varphi}^{2}}\right)$
(55)
where we have only kept terms to leading order in $\epsilon$. Finally, the
scalars $s$ interact with the SM through the radion portal.
$\displaystyle\mathcal{L}^{(\rm tree)}_{\rm s}$
$\displaystyle=\lambda_{s}v_{s}^{2}\mu_{\rm
TeV}^{2}s^{2}\left(1+\frac{\varphi}{F_{\varphi}}\right)^{2}+\lambda_{s}v_{s}\mu_{\rm
TeV}s^{3}\left(1+\frac{\varphi}{F_{\varphi}}\right)+\frac{1}{4}\lambda_{s}s^{4}+\mathcal{O}(\gamma)$
(56)
The terms suppressed by the explicit $Z_{2}$ violating coupling $\gamma$ are
assumed to be very small, and do not contribute significantly to the zero-
temperature phenomenology. Notice that we do not generate terms of the form of $\varphi$-$s$ mixing, or $s\varphi^{3}$. The classical solution sets the linear term in $s$ to zero, and since the $\varphi$ field couples through the combination $(1+\varphi/F_{\varphi})$, it does not have a linear coupling to $s$ around the vacuum. A small Higgs portal
coupling of the form $\kappa sH^{\dagger}H$ can be added in order for $s$ to
be able to decay safely before BBN.
At loop level, there are also induced couplings between the EM and QCD field
strengths proportional to their $\beta$-functions:
$\displaystyle\mathcal{L}^{(1-\rm loop)}_{\rm int}\supset\frac{\alpha_{\rm
EM}}{8\pi F_{\varphi}}b_{\rm
EM}\varphi\,F_{\alpha\beta}F^{\alpha\beta}+\frac{\alpha_{s}}{8\pi
F_{\varphi}}b_{\rm G}\varphi\,G^{a}_{\alpha\beta}G^{a\,\alpha\beta}\,,$ (57)
with the dominant contributions to these terms coming from quark and $W$-boson
loops. In the case where the SM is confined to the IR brane, $b_{\rm
EM}=11/3$, $b_{\rm G}=-\frac{11}{3}N_{c}+2n/3$, where $n$ is the number of
quarks lighter than the radion Blum2014 .
### 4.2 Experimental constraints
The low energy phenomenology of the model is largely determined by the
physical masses of the radion and $s$ fields, as well as the KK scale. These
are related to the fundamental parameters of the model by:
$\displaystyle m_{{\rm KK}}$ $\displaystyle\simeq\frac{5\pi}{4}\mu_{\rm
TeV}\,,$ (58) $\displaystyle m_{\varphi}$
$\displaystyle=2\sqrt{2}v\epsilon^{3/4}\mu_{\rm TeV}\,,$ (59) $\displaystyle
m_{s}$ $\displaystyle=\sqrt{2\lambda_{s}}v_{s}\mu_{\rm TeV}\,.$ (60)
Collider searches limit the KK scale in RS models to be above $m_{{\rm
KK}}\gtrsim 4.25$ TeV Sirunyan2018 ; Sirunyan2019 , requiring the KK
resonances to be out of the kinematic reach of current colliders. Due to the
approximate shift symmetry of the GW field (broken only by the small parameter
$\epsilon$), the radion is parametrically lighter than the KK scale. The AD
scalar masses are similarly suppressed, with $m_{s}$ proportional to the
combination $(\lambda_{s}v_{s}^{2})^{1/2}$, which is chosen to be small for
the $s$ phase transition to happen well before the deconfinement transition.
Therefore, the radion and the AD scalars can be kinematically accessible at
colliders Giudice2018 , however their couplings to the SM are suppressed by
$\mu_{\rm TeV}$. Collider constraints translate into a bound $\mu_{\rm
TeV}\gtrsim 2\,{\rm TeV}$ Blum2014 , which is weaker than direct bounds on the
KK scale.
If the radion mass is below the GeV scale, bounds on the
${\varphi\gamma\gamma}$ coupling from supernova cooling§§§Whether supernova
bounds on the radion coupling apply depends on the radion coupling to nucleons
Abu-Ajamieh2017 . In the case where the SM quarks and gluons are on the IR
brane, this coupling is too large for the radion to contribute significantly
to supernova cooling., cosmology and beam dump experiments can give the
strongest bounds on the model. These limits have been derived for axion-like
particles Masso1995 ; Jaeckel2015 ; Dobrich2015 , which translate into a bound
$\displaystyle F_{\varphi}\gtrsim 4.25\times 10^{7}\,{\rm TeV}\,.$ (61)
For a heavier radion $m_{\varphi}>1\ {\rm GeV}$, these constraints are no
longer applicable. The radion mass will be above a GeV for parameters,
$\displaystyle v\epsilon^{3/4}$ $\displaystyle>7\times 10^{-5}\
\left(\frac{\mu_{{\rm TeV}}}{5\ {\rm TeV}}\right)^{-1}\,.$ (62)
The AD scalars can couple to the SM fields through the radion portal. For
light AD scalars, the couplings to photons/gluons generated at higher loop
order might still provide significant constraints. If the AD scalars are above
the 1 GeV scale, these constraints are also absent.
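A quick numerical reading of these relations for the benchmark point (50) (our illustration; we read the $v$ in equation (59) as $v_{\rm ir}$, which is an assumption on our part):

    import numpy as np

    def spectrum(mu_TeV, eps, v_ir, lam_s, v_s):
        """Physical masses from equations (58)-(60); v in eq. (59) read as v_ir."""
        m_KK  = 5*np.pi/4*mu_TeV
        m_phi = 2*np.sqrt(2)*v_ir*eps**0.75*mu_TeV
        m_s   = np.sqrt(2*lam_s)*v_s*mu_TeV
        return m_KK, m_phi, m_s

    # Benchmark of equation (50), mu_TeV ~ 100 TeV:
    m_KK, m_phi, m_s = spectrum(1e5, 4.13e-2, 3e-4, 1.0, 2e-3)
    print(f"m_KK = {m_KK:.2e} GeV, m_phi = {m_phi:.1f} GeV, m_s = {m_s:.0f} GeV")

    # Bound of equation (62) for m_phi > 1 GeV at this mu_TeV:
    print(3e-4*(4.13e-2)**0.75, ">", 7e-5/(1e5/5e3))

With these inputs the radion comes out at a few GeV, above the region where the light-particle limits of equation (61) apply.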
## 5 Cosmology
The mechanism of avoided deconfinement has dramatic implications for early
universe cosmology. The main departure from a standard cosmology is due to the
scaling of the radion expectation value with temperature. This leads to the
interesting consequence that while the universe may be reheated to a very high
temperature (exciting heavy fields on the UV brane, for instance), from the IR
brane point-of-view, the cosmology resembles a low-reheat cosmology. We aim to
highlight some of the applications of the AD model to cosmology, but leave a
detailed study of these implications for future work.
Figure 2 shows the characteristic dependence on temperature of dimensionful
parameters on the IR brane. In particular, $m_{KK}>T$ to arbitrarily high
temperature, as required by the condition that the IR brane be stabilised
outside the horizon at a given temperature. Therefore, KK modes play no role
in early universe cosmology from the point of view of the IR brane. In the
high temperature regime $T>T_{c}$, the radion expectation value scales with
temperature as:
$\displaystyle\mu(T)\propto T^{\frac{1}{1+\epsilon}}\,.$ (63)
This introduces a scaling of the other dimensionful quantities of the theory
with $T$. The KK scale, the Higgs mass parameter and the QCD scale are all proportional to $\mu(T)$¶¶¶More generally, $\Lambda_{\rm QCD}\propto(\mu(T))^{n}$, where $n=1$ for the case where the SM is confined to the IR brane, and $n<1$ if some of the SM quark fields are bulk fields., which means that the ratio of the temperature to these mass scales
(denoted $\Lambda$) varies with $T$ as:
$\displaystyle\frac{T}{\Lambda}\propto T^{\frac{\epsilon}{1+\epsilon}}\,.$
(64)
The consequence of this scaling is that the ratio $T/\Lambda$ may reach unity
at significantly higher temperatures than is the case in standard RS
cosmology. For example, as we show below, if the critical temperature $T_{c}$
is below the electroweak symmetry breaking scale, then the electroweak
symmetry restoration phase transition may occur at much higher temperatures
than in the usual case, or never occur at all.
Figure 3: Higgs expectation value, $v_{\rm ew}$, as a function of temperature
for the parameters of equation (50) (blue) and equation (65) (red). The dashed
line is the critical temperature, $T_{c}=18\ {\rm GeV}$ for the choice of
parameters in equation (65).
### 5.1 Electroweak Phase Transition
In this section we show that the electroweak phase transition can occur in the
avoided deconfinement phase at temperatures much higher than the weak scale.
To illustrate some of these effects, we use a new set of parameters, with:
$\displaystyle v_{\rm ir}=1.5\times 10^{-4},\,v_{\rm uv}=7.5\times
10^{-4},\,\epsilon=0.05\,$ (65)
and all other parameters as in equation (50), which leads to a radion
stabilised at $\mu_{\rm TeV}=4.73$ TeV at zero temperature. We note that with
these parameters, the model in its simplest form does not satisfy the bound on
the radion mass (62). We expect that a more complete model with additional
breaking of scaling invariance can lead to an unsuppressed radion mass and a
less severe bound than equation (62). This could happen, for example, through
additional terms in the GW action Chacko:2014pqa , allowing more fields to
propagate in the bulk, or by considering a more general geometry for the fifth
dimension Hassanain2007 . We will leave the detailed model building for future
work.
The Higgs potential at finite temperature gets thermal corrections from the
top Yukawa, gauge couplings and its quartic coupling. In addition, the Higgs
mass parameter scales with $\mu(T)$. The Higgs thermal mass in the low- and
high-temperature limits is given by:
$\displaystyle\mu_{h}^{2}(T>T_{c})$
$\displaystyle=T^{2}\left(-\lambda\frac{v_{{\rm
ew}}^{2}}{c^{2}T_{c}^{2}}\left(\frac{cT_{c}}{T}\right)^{2\epsilon}+\frac{\lambda_{t}^{2}}{4}+\frac{3g^{2}}{16}+\frac{g^{\prime
2}}{16}+\frac{\lambda}{2}\right),$ (66) $\displaystyle\mu_{h}^{2}(T<T_{c})$
$\displaystyle=-\lambda v_{{\rm
ew}}^{2}+T^{2}\left(\frac{\lambda_{t}^{2}}{4}+\frac{3g^{2}}{16}+\frac{g^{\prime
2}}{16}+\frac{\lambda}{2}\right).$ (67)
where $v_{\rm ew}\simeq 246\ {\rm GeV}$. The electroweak phase transition (EWPT) happens at the temperature where the Higgs mass parameter changes sign. This happens for $T<T_{c}$ if $T_{c}$ is above the electroweak
scale, in which case there is no modification to the phase transition in
comparison to the SM. However, if $T_{c}$ is below the electroweak scale, the
EWPT will occur at a temperature:
$\displaystyle T_{*}=cT_{c}\left(\frac{T_{{\rm
ew}}}{cT_{c}}\right)^{\frac{1}{\epsilon}}$ (68)
where $T_{{\rm ew}}$ is the temperature of the EWPT in the SM. For small
$\epsilon$, even a modest ratio $T_{{\rm ew}}/T_{c}$ can lead to the EWPT
occurring at a temperature which is orders of magnitude above the scale
predicted by the SM. Figure 3 shows $-\mu_{h}^{2}$ as a function of temperature for the two sets of parameters defined in equations (50) & (65). For the second set of parameters the EWPT does not occur until the universe reaches a temperature of order $\sim 5\times 10^{3}$ GeV.
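Equation (68) makes the exponential sensitivity explicit. A minimal sketch (our illustration; the inputs $cT_{c}=130$ GeV and $T_{\rm ew}=160$ GeV are assumed values, not parameters quoted in the text):

    def T_star(c_Tc, T_ew, eps):
        """EWPT temperature from equation (68); applies when T_ew > c*T_c."""
        return c_Tc * (T_ew/c_Tc)**(1.0/eps)

    # A modest ratio T_ew/(c*T_c) ~ 1.2 is amplified by the power 1/eps = 20:
    print(f"T_* ~ {T_star(130.0, 160.0, 0.05):.1e} GeV")   # ~ 8e3 GeV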
A high temperature EWPT has been considered in refs Meade2018 ; Baldes:2018nel
; Glioti:2018roy ; Matsedonskyi2020 in the context of electroweak
baryogenesis. A primary motivation for these models is to avoid the bounds
which result from introducing new sources of CP violation around the weak
scale by having the EWPT occur at a temperature $T\gg v_{{\rm ew}}$. This
typically requires the introduction of a large number of fields coupled to the
Higgs sector in order to significantly increase the temperature of the phase
transition while satisfying collider bounds. In contrast, the AD model
provides a mechanism in which the electroweak phase transition can occur at
arbitrarily high temperatures due solely to the Higgs-radion interaction.
However, the Higgs potential must still be modified to make the EWPT first order, and new sources of CP violation must be introduced, in order to include a mechanism for electroweak baryogenesis in the AD framework. Further, even
though the electroweak phase transition happens at much higher temperatures,
the scales governing local physics on the IR brane also scale with $T$. Thus,
if the CP violating operators are localized on the IR brane, their effect at
$T=T_{*}$ will be the same as that at $T=T_{c}\lesssim v_{\rm ew}$. Therefore,
they will be subject to the very constraints the models of Meade2018 ;
Baldes:2018nel ; Glioti:2018roy ; Matsedonskyi2020 were constructed to avoid.
On the other hand, if the CP violating operators are on the UV brane/bulk,
their effect does become much more important at higher temperatures. It will
be interesting to construct and study a high-temperature electroweak
baryogenesis model using avoided deconfinement in further detail.
### 5.2 High Scale Baryogenesis
As discussed in section 2, in the usual RS model the high temperature phase is
described by an AdS black hole, before a phase transition to the RS phase at a
temperature around the TeV scale or below. The period of supercooling
accompanying the phase transition significantly dilutes any pre-existing
baryon asymmetry Baratella2019 . This has motivated consideration of
baryogenesis mechanisms that combine the electroweak and RS phase transitions
Bruggisser2018 ; Bruggisser2018a ; Konstandin2011b . Baryogenesis mechanisms
which operate at temperatures significantly above the TeV scale are difficult
to realise in the RS model. This is not the case, however, for the AD model,
as the universe is never in the BH phase after inflation and does not undergo
a period of supercooling.
At high temperature the radion is stabilised closer to the UV brane. This
means that fields localised toward the IR brane may have significant overlap with UV-localised fields at early times, with the UV and IR sectors then
decoupling at low temperature. This allows for the possibility of having
baryogenesis occur due to interactions between the IR and UV fields, which
have $\mathcal{O}(1)$ couplings in the early universe but whose interactions
are negligibly small after the radion has settled to its zero-temperature
expectation value.
Figure 4: Dark matter annihilation rate for $m_{\chi}(T=0)=10$ TeV, and
$\alpha_{\rm{DM}}\sim 10^{-2}$ in the AD model (solid blue line) and without
radion dependence (dashed blue line). The AD model parameters are as in
equation (65). The Hubble rate assuming radiation domination and
$g_{*}=106.75$ is shown in black.
### 5.3 WIMP Freeze-in
The relic abundance of a particle with weak scale mass and interactions in
standard cosmology turns out to be a good estimate for the observed dark
matter abundance. This has led to the WIMP paradigm, which is supported by the
idea that new physics at the weak scale is motivated by the electroweak
hierarchy problem. In this light, we expect a WIMP in the RS model to be
associated with a field localised to the IR brane. Thus, avoided deconfinement
may have significant implications for the WIMP freeze-out in such cases – in
fact, we show that particles with weak scale interactions can have a freeze-in
mechanism.
As noted above, the scales on the IR brane are proportional to $\mu(T)$ and if
the dark matter particle $\chi$ lives on the IR brane, the quantity
$m_{\chi}(T)/T$ changes by only an O(1) amount during the phase of avoided
deconfinement. In particular, the thermal abundance of $\chi$ can be Boltzmann
suppressed for the entire cosmic history.
If the annihilation rate of dark matter is set by a weak coupling
$\alpha_{\rm{DM}}$, then the equilibrium annihilation rate in early cosmology
can be estimated as (assuming $m_{\chi}(0)>T_{c}$),
$\displaystyle n_{eq}\langle\sigma v\rangle$
$\displaystyle\sim\frac{\pi\alpha_{\rm{DM}}^{2}}{m_{\chi}(T)^{2}}(m_{\chi}(T)T)^{3/2}\exp\left[-\frac{m_{\chi}(T)}{T}\right]\sim
T\frac{\pi\alpha_{\rm{DM}}^{2}}{m_{\chi}(0)^{2}}\exp\left[-\frac{m_{\chi}(0)}{cT_{c}}\right]+O(\epsilon\log(T/T_{c})),$
(69)
where we have ignored $\mathcal{O}(1)$ factors to highlight the scaling behavior. The form of the annihilation rate being nearly proportional to $T$
follows from the approximate conformal symmetry. Figure 4 shows the different
annihilation rates as a function of temperature in the AD model and without
radion dependence.
The annihilation rate decreases slower than the Hubble rate, which decreases
as $T^{2}$ in radiation domination, as shown in figure 4. Even though the
zero-temperature annihilation rate is weak-scale, the high-temperature
annihilation rate can be out of equilibrium because it is Boltzmann
suppressed. After the IR brane is stabilized at $T\sim T_{c}$,
$m_{\chi}(T)\sim m_{\chi}(0)$, and the equilibrium annihilation rate drops
exponentially with temperature. If the Hubble rate around $T=T_{c}$ is larger
than the equilibrium annihilation rate, this implies that:
$\displaystyle\frac{m_{\chi}(0)}{T_{c}}\gtrsim\log\frac{M_{\rm pl}}{T_{c}},$
(70)
and the annihilation is never in thermal equilibrium, so the abundance is set by freeze-in. Note that the DM-SM coupling can be sizeable, so that we can
detect $\chi$ in direct and indirect detection experiments as a WIMP. However
the usual relic abundance calculation would not apply. This can potentially
open up a large parameter space for simple WIMPs like the Wino-like
electroweak triplet, or other heavy WIMP candidates which would have a large
abundance in the standard freeze out history.
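This estimate can be made concrete with a small numerical comparison (our illustration). We use the dimensionally explicit form of the rate, $n_{eq}\langle\sigma v\rangle\sim\pi\alpha_{\rm DM}^{2}\,m_{\chi}(T)^{-2}(m_{\chi}(T)T)^{3/2}e^{-m_{\chi}(T)/T}$, with $m_{\chi}(T)/T\simeq m_{\chi}(0)/(cT_{c})$ during avoided deconfinement; $m_{\chi}(0)=10$ TeV and $\alpha_{\rm DM}=10^{-2}$ follow figure 4, while $cT_{c}=300$ GeV is an assumed value:

    import numpy as np

    alpha_DM, m_chi0 = 1e-2, 1e4    # GeV; values as in figure 4
    c_Tc = 300.0                    # GeV; assumed value of c*T_c
    M_pl, g_star = 1.22e19, 106.75

    def rate_eq(T):
        # Equilibrium annihilation rate during AD (T > T_c), cf. eq. (69):
        # m_chi(T) scales with mu(T), so m_chi(T)/T is approximately constant.
        m = m_chi0 * T / c_Tc
        return np.pi*alpha_DM**2/m**2 * (m*T)**1.5 * np.exp(-m/T)

    def hubble(T):
        # Radiation-domination Hubble rate, as in figure 4.
        return 1.66*np.sqrt(g_star)*T**2/M_pl

    for T in (1e3, 1e5, 1e7):
        print(f"T = {T:.0e} GeV : rate/H = {rate_eq(T)/hubble(T):.1e}")

The ratio stays well below one and falls as $1/T$, so the species never equilibrates, consistent with the freeze-in condition (70).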
### 5.4 QCD Phase Transition and the QCD Axion
The QCD phase transition in the AD model may also be modified from the usual
picture if $T_{c}$ is below the QCD scale. In order to achieve this in our
model while satisfying the bounds on the radion and $s$ masses requires a
large number of AD scalars. However, as was the case for the electroweak phase
transition, even for $T_{c}$ slightly below the QCD scale the QCD phase
transition may occur at temperatures far above the TeV scale. This may be able
to reproduce some of the non-standard QCD dynamics discussed in refs
Ipek:2018lhm ; Croon2020 ; Berger2020 .
A cosmology where QCD confinement occurs at high temperatures, when
$\Lambda_{{\rm QCD}}(T)\gg\Lambda_{{\rm QCD}}(0)$, can also have dramatic
consequences for the abundance of the QCD axion. The axion field can have
various 5D origins; one simple possibility is that it is the fifth component of a $U(1)$ gauge field in 5D. Irrespective of its origin, the large decay
constant of the axion suggests that its wavefunction is localised near the UV
brane. Therefore, we can safely assume that $f_{a}$ is largely temperature
independent. In the confined phase of QCD, but with a temperature-dependent
confinement scale, the axion mass is given by,
$\displaystyle m_{a}(T)=\frac{f_{\pi}(T)m_{\pi}(T)}{f_{a}}$ (71)
The axion starts oscillating around the epoch of QCD confinement at a
temperature $T_{\rm osc}$ which is defined by $m_{a}(T_{\rm osc})\simeq
H(T_{\rm osc})$. The axion abundance at the onset of oscillation is
$\displaystyle\rho_{a}(T_{\rm osc})$ $\displaystyle\sim m_{a}^{2}(T_{\rm
osc})f_{a}^{2}\theta_{i}^{2}$ (72)
where $\theta_{i}$ is the initial misalignment angle. The mass of the axion
continues to decrease due to the temperature dependence of $\Lambda_{{\rm
QCD}}(T)\sim\mu(T)$. In the adiabatic approximation $\dot{m}\ll m^{2}$, the
number density of the axion scales as $a^{-3}$, and the mass redshifts as
$\sim a^{-2}$, so the axion energy density redshifts approximately as $a^{-5}$
in this epoch, whereas the background energy density is redshifting as
$a^{-4}$. This can reduce the axion abundance dramatically.
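The net dilution of the relative abundance follows directly from the scalings above; a trivial numerical check (our illustration):

    import numpy as np

    # Adiabatic regime: n_a ~ a^-3 and m_a ~ Lambda_QCD(T)^2/f_a ~ a^-2,
    # so rho_a = m_a*n_a ~ a^-5, versus the radiation background rho_r ~ a^-4.
    a = np.logspace(0, 3, 50)
    rho_a = a**-2 * a**-3           # m_a * n_a
    rho_r = a**-4.0
    print(np.allclose(rho_a/rho_r, 1.0/a))   # True: relative abundance dilutes as 1/a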
### 5.5 Gravitational Waves
The absence of a first order confinement phase transition is a necessary
feature of this mechanism that distinguishes it from the standard RS model.
The RS phase transition results in a gravitational wave signal which will be
absent in the avoided deconfinement model Megias2018a ; Randall2006 . The RS
phase transition also leads to a drop in $g_{*}$ of order $N^{2}$ as a result
of degrees of freedom confining and freezing out. If there is an observable
background of gravitational waves, such as from a cosmic string network
Cui2017 ; Cui2018 or inflation Watanabe2006 ; Jinno2012 ; Saikawa2018 , this
change in $g_{*}$ is observable as a relative decrease in the power in modes
which were below the horizon scale prior to the phase transition. The absence
of these gravitational wave signals could be used to distinguish the AD model
from RS models which do undergo a phase transition. Furthermore, the addition
of the AD scalars, which are necessarily light degrees of freedom due to the
bound (53), also leads to a potentially observable change in $g_{*}$ in the
early universe for $N_{s}$ as low as $N_{s}\sim 10$ and masses around the GeV
scale.
In addition to modifying the RS phase transition, in our set up there are
additional phase transitions associated with the $s$ fields, which can be
first order and can each happen at slightly different temperatures. This can give an interesting forest of GW signals with a spectrum that is different
from the one expected from a single phase transition. As noted above, the
electroweak and/or the QCD phase transition may also be made first order and
can happen at very high temperatures, predicting a gravitational wave
signature from these phase transitions as well.
## 6 Discussion
In this work we have described a mechanism which addresses the cosmological
problem of eternal inflation due to suppressed confinement transitions in the
RS model. The standard RS model is described at high temperature by an AdS
black hole, with a transition to the RS phase proceeding via a first order
phase transition which is exponentially suppressed by the large number
$N^{2}$. We have shown that this situation can be avoided by introducing new
scalars localised in the IR which generate a potential that stabilises the IR
brane at high temperatures. Provided the universe exits inflation in the RS
phase, it remains there, never entering the BH phase.
There are a number of issues that would be worth exploring further. It would
be interesting to understand this phenomenon in a 4D field theory example. The
additional scalars that we have introduced on the IR brane are expected to be
emergent degrees of freedom in the 4D theory that appear after confinement. In
such an example the thermal effect of these scalars will be to drive the
confinement scale itself to higher values. This may provide new insights into the problem of confinement.
There are also various phenomenological applications of this mechanism which
we have merely touched upon in this paper. Avoided Deconfinement changes the
cosmological history in a unique way, where from the IR brane point of view it
is a model with effectively a low reheat temperature, but from the UV brane
point of view the temperature can get arbitrarily high. This allows us to
build realizations where the electroweak and/or QCD phase transitions happen
at very high temperatures. Since the cosmology at high temperatures is
modified, we have shown that the mechanism for generating the abundance of
various species such as WIMP dark matter, axion dark matter or baryogenesis
can be significantly modified. An interesting future direction would be to
build explicit models which realize these mechanisms, and study their
phenomenological signatures.
Gravitational waves are a powerful experimental tool for studying very early
universe cosmology, both in terms of new sources of the waves as well as
modifications of propagation of gravitational waves in the early universe
plasma. Modification of the Randall-Sundrum phase transition, or the
electroweak/QCD phase transition can change the expectations of GW signals
from these phase transitions; these phase transitions are also associated with
a change of number of degrees of freedom in the plasma, which may also be
detectable in the GW spectrum. Even if a large-$N$ confining gauge group is
part of a dark sector decoupled from the standard model, these gravitational
wave signatures can provide important information about these sectors. Thus,
the detailed phenomenological predictions of avoided deconfinement would be
important to study further even in this more general situation.
###### Acknowledgements.
We would like to thank Raman Sundrum and Soubhik Kumar for useful discussions
and comments on the manuscript. We are grateful to Anson Hook, Lisa Randall,
Matt Reece and John March-Russell for useful conversations. PA is supported by
the STFC under Grant No. ST/T000864/1. MN is funded by a joint Clarendon and
Sloane-Robinson scholarship from Oxford University and Keble college.
## Appendix A Bounce action for deconfining phase transition
In this appendix we estimate the bounce action, $B$, which determines the
transition rate from the AD phase in the high-temperature regime to the AdS-S
or deconfined phase. At high temperatures the phase transition from the AD
phase to the black hole phase proceeds at a rate
$\Gamma\simeq T^{4}e^{-B}.$ (73)
If this is larger than $H^{4}$, where $H$ is the Hubble rate, then this
indicates that the AD phase is unstable. This defines a maximum temperature
$T_{\rm max}$ above which the AD mechanism no longer works, but we will find
that tunnelling only becomes significant at temperatures equal to the
temperature which defines the classical instability of the model (defined in
equation (48)), up to $\mathcal{O}(1/N)$ corrections.
In order to determine $B$ we make the approximation that the action is
dominated by the dynamics of the radion and neglect the contribution from the
gravitational portion of the action. The justification for this is that the
gravitational action scales as $N^{2}$ with no further enhancement from small
or large parameters, while the contribution to the bounce from the radion, as
we show below, scales like $N^{2}\lambda^{-1/2}$ for a weak coupling
$\lambda$. In this approximation the relevant Euclidean action is
$\displaystyle S_{E}$
$\displaystyle=\frac{N^{2}}{4\pi}\int_{0}^{T^{-1}}dt_{E}\int
r^{2}dr\left[(\partial\mu)^{2}-\lambda_{1}T^{2}\mu^{2}+\lambda_{2}\mu^{4}\right],$
(74) $\displaystyle\lambda_{1}$
$\displaystyle=\frac{2\pi^{2}N_{s}\lambda_{s}v_{s}^{2}}{3N^{2}},$
$\displaystyle\lambda_{2}$ $\displaystyle=\frac{64\pi^{2}(v_{\rm ir}-v_{\rm
uv})^{2}}{N^{2}},$
where we have explicitly scaled out the factor of $N^{2}$. At high temperature
the minimum of the radion potential is well-approximated by equation (45)
$\mu\simeq\alpha(T)T,\qquad\alpha(T)=\frac{\mu_{\rm
TeV}}{(cT_{c}T^{\epsilon})^{\frac{1}{1+\epsilon}}}.$ (75)
After rescaling the co-ordinates and radion field as $\mu=\alpha
T\tilde{\mu}$, $x_{E}=T^{-1}\tilde{x}_{E}$ the action can be written as:
$\displaystyle S_{E}$
$\displaystyle=\frac{\alpha^{2}N^{2}}{4\pi}\int_{0}^{1}d\tilde{t}_{E}\int\tilde{r}^{2}d\tilde{r}\left[(\tilde{\partial}\tilde{\mu})^{2}-\lambda_{1}\tilde{\mu}^{2}+\lambda_{2}\alpha^{2}\tilde{\mu}^{4}\right].$
(76)
Figure 5: Plot showing the parametric dependence of the potential in the AD
model. The right-hand side shows the radion potential at high temperature,
with depth set by $N_{s}T^{4}$ and width set by $\langle\mu\rangle=\alpha(T)T$
and the left-hand side is the potential for the black hole Hawking temperature
$T_{h}$, with width set by $T$ and depth of order $N^{2}T^{4}$. The dashed
line shows the would-be horizon position on the radion side of the potential
and the break in the curve indicates the region where EFT control is lost.
The equation of motion for $\tilde{\mu}$ then implies that $\tilde{\mu}$
varies by an $\mathcal{O}(1)$ amount over a distance of order
$\Delta\tilde{r}\sim\lambda_{1}^{-1/2}\gg 1$, i.e.
$\displaystyle\left|\frac{\partial\tilde{\mu}}{\partial\tilde{r}}\right|\sim\lambda_{1}^{1/2}.$
(77)
We then make the conservative estimate that the bounce solution only requires
the radion to vary by an amount given by
$\displaystyle\Delta\mu=\alpha(T)T-k^{2}\rho_{h},$ (78)
which amounts to moving the IR brane from its stabilised location to the
position of the would-be horizon. In figure 5 this corresponds to the radion
varying from its value at the minimum of the potential to the dashed line, as
opposed to a bounce analogous to the one proposed in Creminelli2002 which
involves the radion varying to $\mu=0$ over the bounce trajectory. In terms of
$\tilde{\mu}$ this is
$\displaystyle\delta\tilde{\mu}=1-\frac{\pi}{\alpha},$ (79)
which approaches $0$ logarithmically (indicating that the IR brane is becoming
classically unstable) as $T$ approaches $T_{\rm max}$.
With this estimate the characteristic radius of the tunnelling configuration
will be $\tilde{R}_{b}\sim(\delta\tilde{\mu})\lambda_{1}^{-1/2}$. For
$\tilde{R}_{b}\gg 1$ we expect the bounce solution to be the $O(3)$ symmetric
($\tilde{t}_{E}$-independent) configuration, while for $\tilde{R}_{b}\ll 1$
the solution will obey an $O(4)$ symmetry and depend on the variable
$\tilde{\rho}=\sqrt{\tilde{t}_{E}^{2}+\tilde{r}^{2}}$. The $O(4)$ bounce
solution therefore only becomes dominant for
$\delta\tilde{\mu}\ll\lambda^{1/2}\sim 1/N$ at which point the IR brane is
close to becoming classically unstable anyway, so only the $O(3)$ symmetric
bounce is relevant for computing the lifetime of the AD phase. The integrals
determining the bounce action scale as
$\displaystyle\int\tilde{r}^{2}d\tilde{r}\left((\tilde{\partial}\tilde{\mu})^{2}-\lambda_{1}\tilde{\mu}^{2}\right)$
$\displaystyle=\lambda_{1}^{-1/2}c_{1}$ (80)
$\displaystyle\int\tilde{r}^{2}d\tilde{r}\left(\lambda_{2}\alpha^{2}\tilde{\mu}^{4}\right)$
$\displaystyle=\lambda_{1}^{-3/2}\lambda_{2}c_{2}.$
where $c_{1},c_{2}$ are $\mathcal{O}(1)$ coefficients. We can then estimate
the bounce to be:
$\displaystyle B$
$\displaystyle\simeq\frac{\alpha^{2}N^{2}(\delta\tilde{\mu})^{3}\lambda_{1}^{-3/2}}{4\pi}\left[c_{1}\lambda_{1}+c_{2}\lambda_{2}\alpha^{2}\right],$
(81)
$\displaystyle\simeq\frac{\sqrt{3}\alpha^{2}N^{3}(\delta\tilde{\mu})^{3}}{4\sqrt{2}\pi^{2}\left(N_{s}\lambda_{s}v_{s}^{2}\right)^{1/2}}\left[c_{1}+c_{2}\frac{96\alpha^{2}(v_{\rm
ir}-v_{\rm uv})^{2}}{N_{s}\lambda_{s}v_{s}^{2}}\right].$
The bounce action is therefore enhanced by the factor $N^{3}$ and additionally by inverse powers of $v_{s}$. The lifetime of the metastable AD
vacuum is much larger than Hubble for temperatures up until
$\delta\tilde{\mu}\lesssim\mathcal{O}(1/N)$, meaning we can safely ignore the
tunneling rate from the AD to deconfined phase and consider only the classical
instability of the model.
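For orientation, the size of $B$ can be evaluated along the trajectory of equation (75) for the benchmark point (50) (our illustration; $N=10$, $c_{1}=c_{2}=1$, and the values of $c,T_{c}$ from equations (46)-(47) are our choices):

    import numpy as np

    # Benchmark of equation (50); c and T_c as given by eqs. (46)-(47).
    eps = 4.13e-2
    v_uv, v_ir = 1e-3, 3e-4
    Ns, lam_s, v_s = 100, 1.0, 2e-3
    mu_TeV, c, Tc = 1e5, 23.8, 673.0    # GeV
    N, c1, c2 = 10, 1.0, 1.0            # assumed O(1) choices

    def bounce(T):
        alpha = mu_TeV/(c*Tc*T**eps)**(1/(1+eps))      # eq. (75)
        dmu   = 1 - np.pi/alpha                        # eq. (79)
        lam1  = 2*np.pi**2*Ns*lam_s*v_s**2/(3*N**2)    # eq. (74)
        lam2  = 64*np.pi**2*(v_ir - v_uv)**2/N**2
        return (alpha**2*N**2*dmu**3/(4*np.pi*lam1**1.5)
                * (c1*lam1 + c2*lam2*alpha**2))        # eq. (81)

    for T in (1e6, 1e9, 1e11):
        print(f"T = {T:.0e} GeV : B ~ {bounce(T):.1e}")

Even for this modest $N$, $B\sim\mathcal{O}(10^{4})$ at $T\sim 10^{6}$ GeV, dropping to $\mathcal{O}(10)$ only as $T$ approaches $T_{\rm max}\sim 5\times 10^{11}$ GeV, where $\delta\tilde{\mu}\to 0$ and the classical instability takes over.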
## References
* (1) G. ’t Hooft, A Planar Diagram Theory for Strong Interactions, Nucl. Phys. B72 (1974) 461.
* (2) J. Maldacena, The large-N limit of superconformal field theories and supergravity, International Journal of Theoretical Physics 38 (nov, 1999) 1113–1133, [hep-th/9711200].
* (3) E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2 (1998) 253–291, [hep-th/9802150].
* (4) S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, Gauge Theory Correlators from Non-Critical String Theory, Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics 428 (feb, 1998) 105–114, [hep-th/9802109].
* (5) L. Randall and R. Sundrum, A Large mass hierarchy from a small extra dimension, Phys. Rev. Lett. 83 (1999) 3370–3373, [hep-ph/9905221].
* (6) W. D. Goldberger and M. B. Wise, Modulus stabilization with bulk fields, Phys. Rev. Lett. 83 (1999) 4922–4925, [hep-ph/9907447].
* (7) M. A. Luty and R. Sundrum, Hierarchy stabilization in warped supersymmetry, Phys. Rev. D 64 (2001) 065012, [hep-th/0012158].
* (8) H. Verlinde, Holography and compactification, Nuclear Physics B 580 (jul, 2000) 264–274, [hep-th/9906182].
* (9) C. S. Chan, P. L. Paul, and H. Verlinde, A note on warped string compactification, Nuclear Physics B 581 (aug, 2000) 156–164, [hep-th/0003236].
* (10) F. Brümmer, A. Hebecker, and E. Trincherini, The throat as a Randall-Sundrum model with Goldberger-Wise stabilization, Nuclear Physics B 738 (mar, 2006) 283–305, [hep-th/0510113].
* (11) L. Randall, The Boundaries of KKLT, Fortsch. Phys. 68 (2020), no. 3-4 1900105, [arXiv:1912.06693].
* (12) S. Kachru, R. Kallosh, A. D. Linde, and S. P. Trivedi, De Sitter vacua in string theory, Phys. Rev. D 68 (2003) 46005, [hep-th/0301240].
* (13) I. R. Klebanov and E. Witten, Superconformal field theory on three-branes at a Calabi-Yau singularity, Nucl. Phys. B 536 (1998) 199–218, [hep-th/9807080].
* (14) I. R. Klebanov and A. A. Tseytlin, Gravity Duals of Supersymmetric SU(N) x SU(N+M) Gauge Theories, Nuclear Physics B 578 (feb, 2000) 123–138, [hep-th/0002159].
* (15) I. R. Klebanov and M. J. Strassler, Supergravity and a confining gauge theory: Duality cascades and chi SB resolution of naked singularities, JHEP 08 (2000) 52, [hep-th/0007191].
* (16) I. Bena, E. Dudas, M. Graña, and S. Lüst, Uplifting Runaways, Fortsch. Phys. 67 (2019), no. 1-2 1800100, [arXiv:1809.06861].
* (17) S. Kachru, M. Kim, L. McAllister, and M. Zimet, de Sitter Vacua from Ten Dimensions, arXiv:1908.04788.
* (18) S. W. Hawking and D. N. Page, Thermodynamics of black holes in anti-de Sitter space, Communications in Mathematical Physics 87 (dec, 1983) 577–588.
* (19) E. Witten, Anti-de Sitter space, thermal phase transition, and confinement in gauge theories, Adv. Theor. Math. Phys. 2 (1998) 505–532, [hep-th/9803131].
* (20) P. Creminelli, A. Nicolis, and R. Rattazzi, Holography and the electroweak phase transition, JHEP 03 (2002) 51, [hep-th/0107141].
* (21) J. Kaplan, P. C. Schuster, and N. Toro, Avoiding an Empty Universe in RS I Models and Large-N Gauge Theories, hep-ph/0609012.
* (22) S. S. Gubser, AdS / CFT and gravity, Phys. Rev. D63 (2001) 84017, [hep-th/9912001].
* (23) L. Randall and G. Servant, Gravitational Waves from Warped Spacetime, Journal of High Energy Physics 2007 (jul, 2006) [hep-ph/0607158].
* (24) G. Nardini, M. Quiros, and A. Wulzer, A Confining Strong First-Order Electroweak Phase Transition, JHEP 09 (2007) 077, [arXiv:0706.3388].
* (25) T. Konstandin, G. Nardini, and M. Quiros, Gravitational Backreaction Effects on the Holographic Phase Transition, Physical Review D 82 (jul, 2010) [arXiv:1007.1468].
* (26) T. Konstandin and G. Servant, Cosmological consequences of nearly conformal dynamics at the TeV scale, Journal of Cosmology and Astroparticle Physics (2011) [arXiv:1104.4791].
* (27) B. M. Dillon, B. K. El-Menoufi, S. J. Huber, and J. P. Manuel, A rapid holographic phase transition with brane-localized curvature, Physical Review D 98 (aug, 2017) [arXiv:1708.02953].
* (28) B. von Harling and G. Servant, QCD-induced Electroweak Phase Transition, Journal of High Energy Physics 2018 (nov, 2017) [arXiv:1711.11554].
* (29) S. Bruggisser, B. von Harling, O. Matsedonskyi, and G. Servant, The Baryon Asymmetry from a Composite Higgs, Physical Review Letters 121 (mar, 2018) [arXiv:1803.08546].
* (30) S. Bruggisser, B. von Harling, O. Matsedonskyi, and G. Servant, Electroweak Phase Transition and Baryogenesis in Composite Higgs Models, Journal of High Energy Physics 2018 (apr, 2018) 17–229, [arXiv:1804.07314].
* (31) E. Megías, G. Nardini, and M. Quirós, Cosmological phase transitions in warped space: gravitational waves and collider signatures, Journal of High Energy Physics 2018 (sep, 2018) [arXiv:1806.04877].
* (32) D. Bunk, J. Hubisz, and B. Jain, A perturbative RS I cosmological phase transition, European Physical Journal C 78 (2018), no. 1 [arXiv:1705.00001].
* (33) P. Baratella, A. Pomarol, and F. Rompineve, The Supercooled Universe, JHEP 03 (2019) 100, [arXiv:1812.06996].
* (34) K. Agashe, P. Du, M. Ekhterachian, S. Kumar, and R. Sundrum, Cosmological Phase Transition of Spontaneous Confinement, Journal of High Energy Physics 2020 (oct, 2019) [arXiv:1910.06238].
* (35) K. Fujikura, Y. Nakai, and M. Yamada, A more attractive scheme for radion stabilization and supercooled phase transition, JHEP 02 (2020) 111, [arXiv:1910.07546].
* (36) A. Azatov and M. Vanvlasselaer, Phase transitions in perturbative walking dynamics, JHEP 09 (2020) 085, [arXiv:2003.10265].
* (37) E. Megías, G. Nardini, and M. Quirós, Gravitational imprints from heavy Kaluza-Klein resonances, Physical Review D 102 (2020), no. 5 [arXiv:2005.04127].
* (38) K. Agashe, P. Du, M. Ekhterachian, S. Kumar, and R. Sundrum, Phase Transitions from the Fifth Dimension, arXiv:2010.04083.
* (39) B. Hassanain, J. March-Russell, and M. Schvellinger, Warped Deformed Throats have Faster (Electroweak) Phase Transitions, JHEP 10 (2007) 89, [arXiv:0708.2060].
* (40) S. Weinberg, Gauge and global symmetries at high temperature, Physical Review D 9 (jun, 1974) 3357–3378.
* (41) P. Meade and H. Ramani, Unrestored Electroweak Symmetry, Physical Review Letters 122 (jul, 2018) [arXiv:1807.07578].
* (42) I. Baldes and G. Servant, High scale electroweak phase transition: baryogenesis & symmetry non-restoration, JHEP 10 (2018) 53, [arXiv:1807.08770].
* (43) A. Glioti, R. Rattazzi, and L. Vecchi, Electroweak Baryogenesis above the Electroweak Scale, JHEP 04 (2019) 27, [arXiv:1811.11740].
* (44) O. Matsedonskyi and G. Servant, High-Temperature Electroweak Symmetry Non-Restoration from New Fermions and Implications for Baryogenesis, arXiv:2002.05174.
* (45) P. Langacker and S. Y. Pi, Magnetic monopoles in grand unified theories, Physical Review Letters 45 (jul, 1980) 1–4.
* (46) P. Salomonson, B. S. Skagerstam, and A. Stern, On the primordial monopole problem in grand unified theories, Physics Letters B 151 (feb, 1985) 243–246.
* (47) G. Dvali, A. Melfo, and G. Senjanovic, Is there a monopole problem?, Physical Review Letters 75 (jul, 1995) 4559–4562, [hep-ph/9507230].
* (48) G. Dvali and G. Senjanovic, Is there a domain wall problem?, Physical Review Letters 74 (jan, 1995) 5178–5181, [hep-ph/9501387].
* (49) R. N. Mohapatra and G. Senjanović, Soft CP-invariance violation at high temperature, Physical Review Letters 42 (jun, 1979) 1651–1654.
* (50) R. N. Mohapatra and G. Senjanović, Broken symmetries at high temperature, Physical Review D 20 (dec, 1979) 3390–3398.
* (51) J. Orloff, The U.V. Price for Symmetry Non-Restoration, Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics 403 (nov, 1996) 309–315, [hep-ph/9611398].
* (52) N. Chai, et al., Thermal Order in Conformal Theories, arXiv:2005.03676.
* (53) N. Arkani-Hamed, M. Porrati, and L. Randall, Holography and phenomenology, JHEP 08 (2001) 17, [hep-th/0012148].
* (54) R. Rattazzi and A. Zaffaroni, Comments on the holographic picture of the Randall-Sundrum model, JHEP 04 (2001) 21, [hep-th/0012248].
* (55) C. Charmousis, R. Gregory, and V. A. Rubakov, Wave function of the radion in a brane world, Phys. Rev. D62 (2000) 67505, [hep-th/9912160].
* (56) S. R. Coleman, The Fate of the False Vacuum. 1. Semiclassical Theory, Phys. Rev. D15 (1977) 2929–2936.
* (57) A. D. Linde, Decay of the False Vacuum at Finite Temperature, Nucl. Phys. B216 (1983) 421.
* (58) F. Coradeschi, P. Lodone, D. Pappadopulo, R. Rattazzi, and L. Vitale, A naturally light dilaton, JHEP 11 (2013) 57, [arXiv:1306.4601].
* (59) W. D. Goldberger and M. B. Wise, Phenomenology of a stabilized modulus, Phys. Lett. B475 (2000) 275–279, [hep-ph/9911457].
* (60) A. Pomarol, O. Pujolas, and L. Salas, Holographic conformal transition and light scalars, JHEP 10 (2019) 202, [arXiv:1905.02653].
* (61) F. Bigazzi, A. Caddeo, A. L. Cotrone, and A. Paredes, Fate of false vacua in holographic first-order phase transitions, arXiv:2008.02579.
* (62) A. H. Guth and E. J. Weinberg, Could the Universe Have Recovered from a Slow First Order Phase Transition?, Nucl. Phys. B212 (1983) 321–364.
* (63) S. Fichet, Braneworld effective field theories — holography, consistency and conformal effects, JHEP 04 (2020) 016, [arXiv:1912.12316].
* (64) C. Delaunay, C. Grojean, and J. D. Wells, Dynamics of Non-renormalizable Electroweak Symmetry Breaking, JHEP 04 (2008) 29, [arXiv:0711.2511].
* (65) D. Curtin, P. Meade, and H. Ramani, Thermal Resummation and Phase Transitions, Eur. Phys. J. C78 (2018), no. 9 787, [arXiv:1612.00466].
* (66) Z. Chacko, R. K. Mishra, D. Stolarski, and C. B. Verhaaren, Interactions of a Stabilized Radion and Duality, Phys. Rev. D 92 (2015), no. 5 56004, [arXiv:1411.3758].
* (67) T. Gherghetta and A. Pomarol, Bulk fields and supersymmetry in a slice of AdS, Nuclear Physics B 586 (oct, 2000) 141–162, [hep-ph/0003129].
* (68) R. Sundrum, Gravity’s scalar cousin, hep-th/0312212.
* (69) A. Hebecker and J. March-Russell, Randall-Sundrum II cosmology, AdS/CFT, and the bulk black hole, Nuclear Physics B (2001) [hep-ph/0103214].
* (70) K. Agashe, A. Delgado, M. J. May, and R. Sundrum, RS1, custodial isospin and precision tests, Journal of High Energy Physics 7 (2003), no. 8 1167–1197, [hep-ph/0308036].
* (71) S. Casagrande, F. Goertz, U. Haisch, M. Neubert, and T. Pfoh, Flavor Physics in the Randall-Sundrum Model: I. Theoretical Setup and Electroweak Precision Tests, Journal of High Energy Physics 2008 (jul, 2008) [arXiv:0807.4937].
* (72) K. Agashe, G. Perez, and A. Soni, Flavor structure of warped extra dimension models, Physical Review D - Particles, Fields, Gravitation and Cosmology 71 (2005), no. 1 [hep-ph/0408134].
* (73) M. Bauer, S. Casagrande, U. Haisch, and M. Neubert, Flavor Physics in the Randall-Sundrum Model: II. Tree-Level Weak-Interaction Processes, Journal of High Energy Physics 2010 (dec, 2009) [arXiv:0912.1625].
* (74) K. Blum, M. Cliche, C. Csaki, and S. J. Lee, WIMP Dark Matter through the Dilaton Portal, Journal of High Energy Physics 2015 (oct, 2014) [arXiv:1410.1873].
* (75) A. M. Sirunyan, et al., Search for physics beyond the standard model in high-mass diphoton events from proton-proton collisions at $\sqrt{s}=13$ TeV, Physical Review D 98 (nov, 2018) [arXiv:1809.00327].
* (76) A. M. Sirunyan, et al., Combination of Searches for Higgs Boson Pair Production in Proton-Proton Collisions at $\sqrt{s}=13$ TeV, Physical Review Letters 122 (mar, 2019) [arXiv:1811.09689].
* (77) G. F. Giudice, Y. Kats, M. McCullough, R. Torre, and A. Urbano, Clockwork/linear dilaton: structure and phenomenology, JHEP 06 (2018) 009, [arXiv:1711.08437].
* (78) F. Abu-Ajamieh, J. S. Lee, and J. Terning, The Light Radion Window, Journal of High Energy Physics 2018 (nov, 2017) [arXiv:1711.02697].
* (79) E. Masso and R. Toldra, On a Light Spinless Particle Coupled to Photons, Physical Review D 52 (mar, 1995) 1755–1763, [hep-ph/9503293].
* (80) J. Jaeckel and M. Spannowsky, Probing MeV to 90 GeV axion-like particles with LEP and LHC, Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics 753 (sep, 2015) 482–487, [arXiv:1509.00476].
* (81) B. Döbrich, J. Jaeckel, F. Kahlhoefer, A. Ringwald, and K. Schmidt-Hoberg, ALPtraum: ALP production in proton beam dump experiments, Journal of High Energy Physics 2016 (dec, 2015) 1–27, [arXiv:1512.03069].
* (82) T. Konstandin and G. Servant, Natural Cold Baryogenesis from Strongly Interacting Electroweak Symmetry Breaking, JCAP 07 (2011) 024, [arXiv:1104.4793].
* (83) S. Ipek and T. M. P. Tait, Early Cosmological Period of QCD Confinement, Phys. Rev. Lett. 122 (2019), no. 11 112001, [arXiv:1811.00559].
* (84) D. Croon, J. N. Howard, S. Ipek, and T. M. Tait, QCD baryogenesis, Physical Review D (2020) [arXiv:1911.01432].
* (85) D. Berger, S. Ipek, T. M. P. Tait, and M. Waterbury, Dark Matter Freeze Out during an Early Cosmological Period of QCD Confinement, arXiv:2004.06727.
* (86) Y. Cui, M. Lewicki, D. E. Morrissey, and J. D. Wells, Cosmic Archaeology with Gravitational Waves from Cosmic Strings, Physical Review D 97 (nov, 2017) [arXiv:1711.03104].
* (87) Y. Cui, M. Lewicki, D. E. Morrissey, and J. D. Wells, Probing the pre-BBN universe with gravitational waves from cosmic strings, Journal of High Energy Physics 2019 (aug, 2018) [arXiv:1808.08968].
* (88) Y. Watanabe and E. Komatsu, Improved Calculation of the Primordial Gravitational Wave Spectrum in the Standard Model, Physical Review D 73 (apr, 2006) [astro-ph/0604176].
* (89) R. Jinno, T. Moroi, and K. Nakayama, Probing dark radiation with inflationary gravitational waves, Physical Review D 86 (aug, 2012) [arXiv:1208.0184].
* (90) K. Saikawa and S. Shirai, Primordial gravitational waves, precisely: The role of thermodynamics in the Standard Model, Journal of Cosmology and Astroparticle Physics 2018 (mar, 2018) [arXiv:1803.01038].
|
*
# Equations of motion governing the dynamics of the exceptional points of parametrically dependent nonhermitian Hamiltonians
Milan Šindelka, Institute of Plasma Physics of the Czech Academy of Sciences, Za Slovankou 1782/3, 18200 Prague, Czech Republic
Pavel Stránský and Pavel Cejnar, Institute of Nuclear and Particle Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 18000 Prague, Czech Republic
###### Abstract
We study exceptional points (EPs) of a nonhermitian Hamiltonian $\hat{H}(\lambda,\delta)$ depending on parameters $\lambda\in{\mathbb{C}}$ and $\delta\in{\mathbb{R}}$. As the real control parameter $\delta$ is varied, the $k$-th EP (or $k$-th cluster of simultaneously existing EPs) of $\hat{H}(\lambda,\delta)$ moves in the complex plane of $\lambda$ along a continuous trajectory, $\lambda_{k}(\delta)$. We derive a self-contained set of equations of motion (EOM) for the trajectory $\lambda_{k}(\delta)$, interpreting $\delta$ as the propagation time. Such EOM become of interest whenever one wishes to study the response of EPs to external perturbations or continuous parametric changes of the pertinent Hamiltonian. This is e.g. the case of EPs emanating from hermitian curve crossings/degeneracies (which turn into avoided crossings/near-degeneracies when the Hamiltonian parameters are continuously varied). Besides their theoretical merits, the presented EOM for EPs possess substantial practical relevance: the approach can serve as an efficient numerical method for generating the EPs of a broad class of complex quantum systems encountered in atomic, nuclear and condensed matter physics. The performance of the method is tested here numerically on a simple yet nontrivial toy model.
Keywords:
nonhermitian degeneracies, dynamics of exceptional points, avoided crossings.
## 1 Introduction
Nonhermitian Hamiltonians give rise to a special kind of degeneracies (the so-called exceptional points, EPs) which are not encountered within standard hermitian quantum mechanics. Namely, not only the (complex) eigenvalues, but also the corresponding eigenvectors become degenerate (coalescent) at the EP [1, 2, 3, 4, 5, 6]. Mathematical peculiarities of such a situation include self-orthogonality, an unusual closure property, and multivaluedness of the involved eigenvalues when encircling an EP in the parameter space of the Hamiltonian. Importantly, EPs arise not only in toy models, but also in a vast number of physically relevant and experimentally accessible contexts (quantum mechanics of laser driven atoms, waveguide optics, acoustics, electric circuit theory, elasticity), where they imply surprising counter-intuitive phenomena; see e.g. the short reviews [7, 8, 9] and also Refs. [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. Moreover, the relevance of EPs in the context of quantum chaos and quantum phase transitions has been demonstrated theoretically [11, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. The role of EPs in the superradiance phenomenon has also been recognized [45, 46]. Higher-order EPs have been explored e.g. in Refs. [47, 48, 49].
Substantial effort has been invested into developing computational methods for finding the EPs explicitly for a given nonhermitian Hamiltonian [50, 51, 52, 53]. In spite of the great ingenuity and insightfulness of such algorithms, their application to concrete systems is not always straightforward. Difficulties arise especially when the Hamiltonian under study supports the existence of many EPs, which are typically associated with avoided crossings encountered in the framework of the pertinent hermitian theory.
The purpose of our present article is to further contribute both to the theory
and computational methodology related to EPs. Namely, we study the response of
EPs to continuous changes of the Hamiltonian parameters. Our intention is then
to calculate EPs of a given problem by means of a continuous parametric
propagation, starting from an arrangement where the EPs are trivial (or at
least easy) to find.
Our basic idea can be sketched as follows. Let us consider a parametrically $\lambda$-dependent Hamiltonian of the general form
$\hat{H}(\lambda)\;=\;\hat{H}_{0}\;+\;\lambda\,\hat{V}\,,\qquad\lambda\in{\mathbb{C}}\,.$ (1)
We are looking for the EPs of $\hat{H}(\lambda)$ in the complex
$\lambda$-plane. The present paper pursues the following strategy: We
conveniently express $\hat{V}$ as a sum $\hat{V}=\hat{V}_{0}+\hat{V}_{1}$,
where the component $\hat{V}_{0}$ is chosen so that the eigenvalue problem and
the EPs of $\hat{H}_{0}+\lambda\hat{V}_{0}$ are either trivially resolvable or
at least easy to handle. A prototypical example (which will be elaborated
fully explicitly below in Section 3) corresponds to cases when
$[\hat{H}_{0},\hat{V}_{0}]=\hat{0}$. Then $\hat{H}_{0}$ and $\hat{V}_{0}$
possess the same eigenvectors, and situations closely linked to the EPs are
encountered due to the exact level crossings of
$\hat{H}_{0}+\lambda\hat{V}_{0}$. Our original Hamiltonian (1) can now be redisplayed as
$\hat{H}(\lambda)\;=\;\hat{H}_{0}\;+\;\lambda\,\hat{V}_{0}\;+\;\lambda\,\hat{V}_{1}\,.$ (2)
Formula (2) motivates us to think of a slightly more general Hamiltonian
$\hat{H}(\lambda,\delta)\;=\;\hat{H}_{0}\;+\;\lambda\,\hat{V}_{0}\;+\;\lambda\,\delta\,\hat{V}_{1}\,;$ (3)
where $\delta\in[0,1]$ serves as an auxiliary switching parameter of the $\lambda\,\hat{V}_{1}$ term. Importantly, one has
$\hat{H}(\lambda,0)=\hat{H}_{0}+\lambda\hat{V}_{0}$ and
$\hat{H}(\lambda,1)=\hat{H}(\lambda)$ of Eq. (2). Moreover, the EPs of
$\hat{H}(\lambda,\delta)$ of Eq. (3) move continuously in the complex
$\lambda$-plane when the real-valued switching control parameter $\delta$ is set to increase gradually from 0 to 1. If so, it seems natural to examine the possibility of finding the equations of motion (EOM) governing the "flux" or "dynamical propagation" of the mentioned EPs along the "time coordinate" $\delta\in[0,1]$. One may even anticipate that an explicit solution of such
EOM (where the initial conditions at $\delta=0$ are provided by the presumably
known EPs of $\hat{H}(\lambda,0)=\hat{H}_{0}+\lambda\hat{V}_{0}$) would lead
to finding the desired EPs of $\hat{H}(\lambda)=\hat{H}(\lambda,1)$. It is the
purpose of our present article to adequately explore both theoretical and
practical merits of the just sketched approach.
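For orientation, the augmented family (3) and the two parameter derivatives needed later in the equations of motion can be assembled in a few lines. The following is a minimal NumPy sketch of ours (the function names are illustrative, not part of any established package):

```python
import numpy as np

def H(lam, delta, H0, V0, V1):
    # Augmented Hamiltonian of Eq. (3): H0 + lam*V0 + lam*delta*V1
    return H0 + lam * V0 + lam * delta * V1

def dH_dlam(delta, V0, V1):
    # Partial derivative of Eq. (3) with respect to the complex coupling lam
    return V0 + delta * V1

def dH_ddelta(lam, V1):
    # Partial derivative of Eq. (3) with respect to the switching parameter delta
    return lam * V1
```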
The idea just presented essentially amounts to a systematic implementation of nonhermitian perturbation theory in the presence of EPs, where the perturbation is invoked by the parametric shift $\delta\mapsto\delta+{\rm d}\delta$. Let us mention in this context that the
merits of nonhermitian perturbation theory in the presence of an EP have been
recently exploited e.g. in Ref. [16] (see the corresponding Supplementary
material). We also point out that our approach is intimately related to the
Dyson-Pechukas theory of level dynamics (see e.g. Ref. [43]), which however
has been pursued so far just within the framework of hermitian Hamiltonian
formalism.
The paper is organized as follows. Section 2 provides a self-contained, systematic and fully explicit theoretical derivation of the sought EOM for the EPs; as such, Section 2 represents the core material communicated by the present article. The issue of choosing appropriate initial
conditions for the EOM is conveniently relegated to Appendix A. Section 3
describes a conceptually simple yet certainly nontrivial toy model, intended
to serve as a relatively strict test of our obtained EOM. Symmetry of this toy
model (resulting in simultaneous existence of multiple EPs at particular
values of $\lambda$) is discussed. Subsequently, we present in Section 3 the outcome of the numerical solution of our EOM for the aforementioned toy model, in order to highlight the suitability of our EOM method for practical computations of the EPs. Finally, Section 4 contains the concluding remarks.
## 2 Mathematical formulation
### 2.1 Preliminaries
Let us consider an $N$-by-$N$ complex symmetric Hamiltonian matrix $\hat{H}(\lambda,\delta)$ depending upon two parameters $\lambda\in{\mathbb{C}}$ and $\delta\in{\mathbb{R}}$. (An extension of our considerations to general non-symmetric Hamiltonians is also possible and relatively straightforward. However, in the present article we prefer to deal only with symmetric Hamiltonians for the sake of maximum simplicity.) The Hamiltonian $\hat{H}(\lambda,\delta)$ is acting in the linear space ${\mathbb{C}}^{N}$ of $N$-component ket (column) vectors
$|v)\;=\;\begin{pmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{N}\end{pmatrix}\,.$ (4)
The associated bra (row) vectors are simply
$(v|=(v_{1}\;v_{2}\;\cdots\;v_{N})$. The adequate scalar product (the so
called $c$-product) is defined by the prescription
$(v|v^{\prime})\;=\;\sum_{n=1}^{N}\,v_{n}\,v^{\prime}_{n}\;=\;(v^{\prime}|v)\,.$ (5)
Recall that the self-overlap $(v|v)$ is generally complex-valued, and that $(v|v)=0$ does not imply that $|v)$ equals the zero vector $|\emptyset)$; see Chapter 9 of Ref. [4] for details.
Consistently with our motivational considerations outlined in the
Introduction, we shall hereafter assume that there exists a function
$\lambda(\delta)$ such that, for each $\delta\in{\mathbb{R}}$, an eigenproblem
of $\hat{H}(\lambda(\delta),\delta)$ gives rise to $M$ distinct binary EPs $[\,N\geq 2M$; each binary EP is formed via coalescence of two eigenvectors of $\hat{H}(\lambda\to\lambda(\delta),\delta)\,]$. (In the case when $\hat{H}(\lambda(\delta),\delta)$ does not possess any kind of symmetry, we expect $M=1$. On the other hand, symmetries of $\hat{H}(\lambda(\delta),\delta)$ might imply $M>1$, see Section 3 for an example.) We define for later convenience
$\hat{H}(\delta)\;\equiv\;\hat{H}(\lambda(\delta),\delta)\,;$ (6)
and also
$\hat{V}(\delta)\;\equiv\;{\rm d}_{\delta}\,\hat{H}(\delta)\;=\;\partial_{\lambda}\,\hat{H}(\lambda(\delta),\delta)\,\dot{\lambda}(\delta)\;+\;\partial_{\delta}\,\hat{H}(\lambda(\delta),\delta)\,;$ (7)
where $\dot{\lambda}(\delta)={\rm d}_{\delta}\,\lambda(\delta)$, with ${\rm d}_{\bullet}=\frac{{\rm d}}{{\rm d}\bullet}$ and $\partial_{\bullet}=\frac{\partial}{\partial\bullet}$.
The eigenproblem of our interest then looks as follows. In accordance with the assumption made above, for each $\delta\in{\mathbb{R}}$ there exist $M$ binary EPs of $\hat{H}(\delta)$, satisfying
$\hat{H}(\delta)\,|\tilde{c}_{m}^{\delta})\;=\;\tilde{E}_{m}^{\delta}\,|\tilde{c}_{m}^{\delta})\,,\qquad 1\leq m\leq M\,;$ (8)
with obvious notations. (The upper tilde superscript indicates here entities associated inherently with the EPs, whereas all the non-EP entities are conveniently left without tilde. In this manner we distinguish e.g. between $\tilde{E}_{1}^{\delta}$ of Eq. (8) and $E_{1}^{\delta}$ of Eq. (9).) Besides these $M$ EPs, there exist also $(N-2M)$ ordinary non-degenerate non-EP eigenvectors of $\hat{H}(\delta)$, satisfying
$\hat{H}(\delta)\,|c_{j}^{\delta})\;=\;E_{j}^{\delta}\,|c_{j}^{\delta})\,,\qquad 1\leq j\leq(N-2M)\,.$ (9)
Since the just listed ensemble of $(N-M)$ Hamiltonian eigenvectors
$|\tilde{c}_{m}^{\delta})$ and $|c_{j}^{\delta})$ does not form a complete
basis set of ${\mathbb{C}}^{N}$, one needs to include into the game also $M$
complementary basis vectors (see Section 9.2 of Ref. [4]), satisfying
$\left(\hat{H}(\delta)\,-\,\tilde{E}_{m}^{\delta}\,\hat{1}\right)\,|\tilde{b}_{m}^{\delta})\;=\;f_{m}^{\delta}\,|\tilde{c}_{m}^{\delta})\,,\qquad 1\leq m\leq M\,.$ (10)
Here $f_{m}^{\delta}$ are nonzero coefficients arising from imposing suitable normalization conventions for $|\tilde{c}_{m}^{\delta})$ and $|\tilde{b}_{m}^{\delta})$ (see equation (18) below and the accompanying discussion; Subsection 2.2 and Appendix A describe an unambiguous gauge fixing of $f_{m}^{\delta}$ and all related matters); the other notations are again self-explanatory.
The corresponding orthonormality relations take the following explicit appearance:
$(c_{j}^{\delta}|c_{j^{\prime}}^{\delta})\;=\;\delta_{jj^{\prime}}\,;$ (11)
$(c_{j}^{\delta}|\tilde{c}_{m}^{\delta})\;=\;0\,;$ (12)
$(c_{j}^{\delta}|\tilde{b}_{m}^{\delta})\;=\;0\,;$ (13)
$(\tilde{c}_{m}^{\delta}|\tilde{c}_{m^{\prime}}^{\delta})\;=\;0\,;$ (14)
$(\tilde{c}_{m}^{\delta}|\tilde{b}_{m^{\prime}}^{\delta})\;=\;\delta_{mm^{\prime}}\,;$ (15)
$(\tilde{b}_{m}^{\delta}|\tilde{b}_{m^{\prime}}^{\delta})\;=\;0\,.$ (16)
Relations (14), (16) show that the eigenvectors $|\tilde{c}_{m}^{\delta})$ and their complements $|\tilde{b}_{m}^{\delta})$ are self-orthogonal and normalized via (15) and (10), as opposed to the eigenvectors $|c_{j}^{\delta})$ which are unit normalizable through (11). The pertinent closure property is built up accordingly (see again Section 9.2 of Ref. [4]); we have
$\sum_{j}\,|c_{j}^{\delta})(c_{j}^{\delta}|\;+\;\sum_{m}\,\Bigl[\,|\tilde{c}_{m}^{\delta})(\tilde{b}_{m}^{\delta}|\,+\,|\tilde{b}_{m}^{\delta})(\tilde{c}_{m}^{\delta}|\,\Bigr]\;=\;\hat{1}\,;$ (17)
where $\hat{1}$ stands for the $N$-by-$N$ unit matrix.
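To make the structure (8)-(17) tangible, it may help to inspect the smallest nontrivial case explicitly. The complex symmetric family $\begin{pmatrix}1&\lambda\\ \lambda&-1\end{pmatrix}$ evaluated at its EP $\lambda=i$ has the doubly degenerate eigenvalue $\tilde{E}=0$, the self-orthogonal eigenvector $|\tilde{c})\propto(1,i)^{T}$ and, in the gauge $f=1$, the complementary vector $|\tilde{b})=\frac{1}{2}(1,-i)^{T}$. The short sketch below (our illustration, not taken from the main derivation) verifies relations (10) and (14)-(17) numerically:

```python
import numpy as np

cprod = lambda u, v: u @ v               # c-product of Eq. (5): no complex conjugation

H = np.array([[1, 1j], [1j, -1]])        # EP of [[1, lam], [lam, -1]] at lam = i
c = np.array([1, 1j])                    # EP eigenvector: H @ c = 0 = E~ * c
b = np.array([0.5, -0.5j])               # complementary vector, gauge-fixed so f = 1
f = 1.0

print(np.allclose(H @ c, 0))             # Eq. (8) with E~ = 0
print(np.isclose(cprod(c, c), 0))        # self-orthogonality, Eq. (14)
print(np.isclose(cprod(c, b), 1))        # normalization, Eq. (15)
print(np.isclose(cprod(b, b), 0))        # Eq. (16)
print(np.allclose(H @ b, f * c))         # Eq. (10) with E~ = 0
closure = np.outer(c, b) + np.outer(b, c)   # Eq. (17); here N = 2M, so no |c_j) terms
print(np.allclose(closure, np.eye(2)))
```

All six checks print True, illustrating in particular that the closure (17) is carried entirely by the mixed dyads $|\tilde{c})(\tilde{b}|+|\tilde{b})(\tilde{c}|$ when no non-EP eigenvectors are present.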
The normalization of self-orthogonal vectors $|\tilde{c}_{m}^{\delta})$ and
$|\tilde{b}_{m}^{\delta})$ is not unambiguously fixed by the formulas (10),
(15), (17). Indeed, these relations are invariant with respect to the rescalings
$|\tilde{c}_{m}^{\delta,{\rm new}})\;=\;g_{m}^{\delta}\;|\tilde{c}_{m}^{\delta})\,,\qquad|\tilde{b}_{m}^{\delta,{\rm new}})\;=\;\left(g_{m}^{\delta}\right)^{-1}|\tilde{b}_{m}^{\delta})\,,\qquad f_{m}^{\delta,{\rm new}}\;=\;\left(g_{m}^{\delta}\right)^{-2}f_{m}^{\delta}\,;$ (18)
where $g_{m}^{\delta}$ stands for any nonzero complex-valued factor.
Our above outlined formulas (8)-(17) indicate that the full solution of an eigenvalue problem of $\hat{H}(\delta)$ is determined by seven fundamental entities
$\lambda(\delta)\,,\quad\tilde{E}_{m}^{\delta}\,,\quad|\tilde{c}_{m}^{\delta})\,,\quad|\tilde{b}_{m}^{\delta})\,,\quad E_{j}^{\delta}\,,\quad|c_{j}^{\delta})\,,\quad f_{m}^{\delta}\,.$ (19)
The just displayed entities (19) depend continuously upon the parameter
$\delta\in{\mathbb{R}}$. An infinitesimal shift of $\delta$ changes our
Hamiltonian (6) into
$\hat{H}(\delta+{\rm d}\delta)\;=\;\hat{H}(\delta)\;+\;\hat{V}(\delta)\,{\rm
d}\delta\hskip 14.22636pt.$ (20)
This invokes the corresponding infinitesimal changes in the eigensolutions
(19). The associated rates of change
$\dot{\lambda}(\delta)\,,\quad\dot{\tilde{E}}_{m}^{\delta}\,,\quad|\dot{\tilde{c}}_{m}^{\delta})\,,\quad|\dot{\tilde{b}}_{m}^{\delta})\,,\quad\dot{E}_{j}^{\delta}\,,\quad|\dot{c}_{j}^{\delta})\,,\quad\dot{f}_{m}^{\delta}$ (21)
are obtainable by examining how the eigensolutions (19) respond to the Hamiltonian perturbation $\hat{V}(\delta)\,{\rm d}\delta$ in equation (20). An explicit analytic elaboration of such a perturbation theory is by no means conventional or trivial, since the considered eigenproblems of $\hat{H}(\delta)$ and $\hat{H}(\delta+{\rm d}\delta)$ do support $M$ binary EPs, as highlighted above in (8)-(17). Nevertheless, the just mentioned task is feasible to perform, and results in explicit analytic prescriptions for the "velocities" (21) determining the "dynamics" or "motion" of the seven fundamental eigensolution entities (19) in the flux of "time" $\delta\in{\mathbb{R}}$. These "equations of motion for the EPs" (or briefly EOM) are worked out in a self-contained manner in the next Subsection 2.2, which represents the core material communicated by the present paper; see the resulting equations (24), (26), (31), (43), (44), (45), (46) below. Furthermore, an additional Appendix A describes in a self-contained fashion the construction of adequate initial conditions for these EOM, corresponding to the frequently encountered situation when the sought EPs emanate from hermitian curve crossings/degeneracies of $\hat{H}(\lambda,\delta)$.
### 2.2 Equations of motion for the exceptional points
Assume that the seven fundamental entities (19) are known for a given $\delta\in{\mathbb{R}}$. Let us now derive, in a self-contained manner, explicit analytic formulas for the corresponding (as yet unknown) derivatives (21). These need to be expressed solely in terms of the known quantities (19).
#### 2.2.1 An equation of motion for $\dot{\lambda}(\delta)$
Take equation (8) for a given value of $m$ ($1\leq m\leq M$). Differentiate both sides with respect to $\delta$, so as to get
$\hat{V}(\delta)\,|\tilde{c}_{m}^{\delta})\;+\;\hat{H}(\delta)\,|\dot{\tilde{c}}_{m}^{\delta})\;=\;\dot{\tilde{E}}_{m}^{\delta}\,|\tilde{c}_{m}^{\delta})\;+\;\tilde{E}_{m}^{\delta}\,|\dot{\tilde{c}}_{m}^{\delta})\,.$ (22)
Substitute (7), multiply subsequently by $(\tilde{c}_{m}^{\delta}|$ from the left, and exploit the self-orthogonality property (14) at $m^{\prime}=m$, whose $\delta$-differentiation implies also
$(\tilde{c}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;0\,.$ (23)
This yields the compelling formula
$\dot{\lambda}(\delta)\;=\;-\,\frac{(\tilde{c}_{m}^{\delta}|\partial_{\delta}\,\hat{H}(\lambda(\delta),\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{c}_{m}^{\delta}|\partial_{\lambda}\,\hat{H}(\lambda(\delta),\delta)|\tilde{c}_{m}^{\delta})}\,;$ (24)
which represents perhaps the most important result of the present paper. Outcome (24) should be regarded as the equation of motion for $\dot{\lambda}(\delta)$. The r.h.s. of (24) must be independent of $m$ as long as our assumption of having $M$ binary EPs holds; this $m$-independence of (24) serves as a useful check of internal consistency in our numerical calculations of Section 3. From now on, $\dot{\lambda}(\delta)$ will be regarded as explicitly known (and presumably finite; it is beyond the scope of the present article to examine if, or under which circumstances, the denominator of (24) can ever become zero), and the same applies also to the perturbation $\hat{V}(\delta)$ of Eq. (7).
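In a numerical implementation, the right-hand side of (24) is just a ratio of two c-product expectation values. A minimal sketch (our code; the variable names are illustrative) could read:

```python
import numpy as np

def lambda_dot(c_ep, dH_dlam, dH_ddelta):
    # Eq. (24): dot-lambda = -(c~|dH/ddelta|c~) / (c~|dH/dlam|c~)
    # c_ep      : EP eigenvector |c~) as a 1D complex array
    # dH_dlam   : matrix of partial_lambda H at (lambda(delta), delta)
    # dH_ddelta : matrix of partial_delta H at (lambda(delta), delta)
    # The c-product uses plain transposition, so no conjugation appears.
    num = c_ep @ dH_ddelta @ c_ep
    den = c_ep @ dH_dlam @ c_ep
    return -num / den
```

Evaluating lambda_dot separately for each of the $M$ EP eigenvectors and comparing the results implements precisely the $m$-independence consistency check mentioned above.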
#### 2.2.2 Equations of motion for $\dot{\tilde{E}}_{m}^{\delta}$, $\dot{E}_{j}^{\delta}$, and $\dot{f}_{m}^{\delta}$, plus other accompanying elaborations
Take equation (22) and multiply from the left by $(\tilde{c}_{m^{\prime}}^{\delta}|$ where $m^{\prime}\neq m$. Exploit subsequently (14). This yields an overlap element
$(\tilde{c}_{m^{\prime}}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\qquad[\,m^{\prime}\neq m\,]\,.$ (25)
The denominator of (25) is nonsingular as long as the considered $M$ binary EPs are distinct.
Take again (22) and multiply from the left by $(\tilde{b}_{m}^{\delta}|$. Exploit subsequently (10), (23), and also (15) for $m^{\prime}=m$. This yields the as yet unknown energy derivative
$\dot{\tilde{E}}_{m}^{\delta}\;=\;(\tilde{b}_{m}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})\,.$ (26)
This is the sought equation of motion for $\dot{\tilde{E}}_{m}^{\delta}$.
Take again (22) and multiply from the left by $(\tilde{b}_{m^{\prime}}^{\delta}|$ where $m^{\prime}\neq m$. Exploit subsequently (10) and (15) together with (25). This yields an overlap element
$(\tilde{b}_{m^{\prime}}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\;+\;f_{m^{\prime}}^{\delta}\,\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\qquad[\,m^{\prime}\neq m\,]\,.$ (27)
The denominators are nonsingular for the same reason as in (25). In passing we note that
$(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m^{\prime}}^{\delta})\;=\;-\,(\tilde{b}_{m^{\prime}}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\,;$ (28)
valid as an immediate consequence of (15).
Take again (22) and multiply from the left by $(c_{j}^{\delta}|$ where $1\leq j\leq(N-2M)$. Exploit subsequently (9) and (12). This yields an overlap element
$(c_{j}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;\frac{(c_{j}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-E_{j}^{\delta}}\,.$ (29)
The denominator is nonsingular as long as the considered $m$-th EP eigenvalue $\tilde{E}_{m}^{\delta}$ does not coincide with the non-EP eigenvalues $E_{j}^{\delta}$.
Proceeding further, take equation (9) for a given value of $j$ ($1\leq j\leq N-2M$). Differentiate both sides with respect to $\delta$, so as to get
$\hat{V}(\delta)\,|c_{j}^{\delta})\;+\;\hat{H}(\delta)\,|\dot{c}_{j}^{\delta})\;=\;\dot{E}_{j}^{\delta}\,|c_{j}^{\delta})\;+\;E_{j}^{\delta}\,|\dot{c}_{j}^{\delta})\,.$ (30)
Multiply subsequently from the left by $(c_{j}^{\delta}|$, exploit then (9) and (11) for $j^{\prime}=j$. This yields the as yet unknown energy derivative
$\dot{E}_{j}^{\delta}\;=\;(c_{j}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})\,.$ (31)
This is the sought equation of motion for $\dot{E}_{j}^{\delta}$.
Take again (30) and multiply from the left by $(c_{j^{\prime}}^{\delta}|$ where $j^{\prime}\neq j$. Exploit then (9) and (11). This yields an overlap element
$(c_{j^{\prime}}^{\delta}|\dot{c}_{j}^{\delta})\;=\;\frac{(c_{j^{\prime}}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-E_{j^{\prime}}^{\delta}}\qquad[\,j^{\prime}\neq j\,]\,.$ (32)
The denominator is nonsingular as long as the non-EP eigenvalues are non-degenerate.
Take again (30) and multiply from the left by $(\tilde{c}_{m}^{\delta}|$. Exploit then (8) and (12). This yields an overlap element
$(\tilde{c}_{m}^{\delta}|\dot{c}_{j}^{\delta})\;=\;\frac{(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-\tilde{E}_{m}^{\delta}}\,.$ (33)
The denominator is nonsingular as long as $E_{j}^{\delta}$ does not coincide with the EP eigenvalues $\tilde{E}_{m}^{\delta}$.
Take again (30) and multiply from the left by $(\tilde{b}_{m}^{\delta}|$. Exploit then (10) and (13) together with (33). This yields an overlap element
$(\tilde{b}_{m}^{\delta}|\dot{c}_{j}^{\delta})\;=\;\frac{(\tilde{b}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-\tilde{E}_{m}^{\delta}}\;+\;f_{m}^{\delta}\,\frac{(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{(E_{j}^{\delta}-\tilde{E}_{m}^{\delta})^{2}}\,.$ (34)
Again, the denominators are nonsingular as long as $E_{j}^{\delta}$ does not coincide with the EP eigenvalues $\tilde{E}_{m}^{\delta}$. In passing we note that
$(c_{j}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\;=\;-\,(\tilde{b}_{m}^{\delta}|\dot{c}_{j}^{\delta})\,;$ (35)
valid as an immediate consequence of (13).
Proceeding further, take equation (10) for a given value of $m$ ($1\leq m\leq M$). Differentiate both sides with respect to $\delta$, so as to get
$\hat{V}(\delta)\,|\tilde{b}_{m}^{\delta})\;+\;\hat{H}(\delta)\,|\dot{\tilde{b}}_{m}^{\delta})\;-\;\dot{\tilde{E}}_{m}^{\delta}\,|\tilde{b}_{m}^{\delta})\;-\;\tilde{E}_{m}^{\delta}\,|\dot{\tilde{b}}_{m}^{\delta})\;=\;\dot{f}_{m}^{\delta}\,|\tilde{c}_{m}^{\delta})\;+\;f_{m}^{\delta}\,|\dot{\tilde{c}}_{m}^{\delta})\,.$ (36)
Multiply subsequently from the left by $(\tilde{b}_{m}^{\delta}|$, exploit then (10) and (15) for $m^{\prime}=m$, as well as (16) for $m^{\prime}=m$. This yields another important relation
$(\tilde{b}_{m}^{\delta}|\hat{V}(\delta)\,|\tilde{b}_{m}^{\delta})\;=\;\dot{f}_{m}^{\delta}\;+\;f_{m}^{\delta}\,(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;-\;f_{m}^{\delta}\,(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\,.$ (37)
Take again (36) and multiply from the left by $(\tilde{b}_{m^{\prime}}^{\delta}|$ where $m^{\prime}\neq m$. Exploit subsequently (10) and (15), (16), together with (27) and (28). This yields an overlap element
$(\tilde{b}_{m^{\prime}}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\;=\;\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\;+\;f_{m^{\prime}}^{\delta}\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\;-\;f_{m}^{\delta}\;\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\;-\;2\,f_{m}^{\delta}\,f_{m^{\prime}}^{\delta}\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{3}}\qquad[\,m^{\prime}\neq m\,]\,.$ (38)
Much like pointed out before, the denominator is nonsingular as long as the considered $M$ binary EPs are distinct.
Take again (36) and multiply from the left by $(\tilde{c}_{m}^{\delta}|$. Exploit subsequently (8), (14) for $m=m^{\prime}$, (15) for $m=m^{\prime}$, and (23). This yields $\dot{\tilde{E}}_{m}^{\delta}=(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})$, as already known from (26).
Take again (36) and multiply from the left by $(\tilde{c}_{m^{\prime}}^{\delta}|$ where $m^{\prime}\neq m$. Exploit subsequently (8), (14), (15), together with (25). This yields an overlap element $(\tilde{c}_{m^{\prime}}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$ in a form which also follows immediately from (27) and (28).
Take again (36) and multiply from the left by $(c_{j}^{\delta}|$ where $1\leq j\leq(N-2M)$. Exploit subsequently (9), (12), (13), together with (29). This yields an overlap element $(c_{j}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$ in a form which also follows immediately from (34) and (35).
To complete all our technical elaborations regarding the overlap elements, we need to specify the as yet undetermined quantities $(c_{j}^{\delta}|\dot{c}_{j}^{\delta})$, $(\tilde{b}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$, $(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})$, $(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$. Recall that $(\tilde{c}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})$ is already fixed by (23). Clearly, property (11) for $j^{\prime}=j$ implies immediately
$(c_{j}^{\delta}|\dot{c}_{j}^{\delta})\;=\;0\,.$ (39)
Similarly, property (16) for $m^{\prime}=m$ yields immediately
$(\tilde{b}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\;=\;0\,.$ (40)
An appropriate discussion of $(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})$ and $(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$ is a bit more intriguing. Property (15) for $m^{\prime}=m$ yields immediately
$(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;-\,(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\,;$ (41)
hence it is sufficient to determine just $(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})$. Importantly, the self-orthogonal vectors $|\tilde{c}_{m}^{\delta})$ and $|\tilde{b}_{m}^{\delta})$, as well as the factor $f_{m}^{\delta}$, have been introduced in the main text only modulo the rescaling transformation (18). It is a trivial matter to verify that the as yet arbitrary rescaling coefficients $g_{m}^{\delta}$ can always be chosen in such a particular manner as to arrange for having
$(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})\;=\;0\;=\;(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})\,.$ (42)
This is our suitably chosen gauge fixing convention for $(\tilde{b}_{m}^{\delta}|\dot{\tilde{c}}_{m}^{\delta})$ and $(\tilde{c}_{m}^{\delta}|\dot{\tilde{b}}_{m}^{\delta})$. Having imposed (42), equation (37) simplifies into the finalized equation of motion for $\dot{f}_{m}^{\delta}$, namely
$\dot{f}_{m}^{\delta}\;=\;(\tilde{b}_{m}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})\,.$ (43)
For the sake of completeness and clarity, let us also point out here that $(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})=0$; this is equivalent to (24).
#### 2.2.3 Equations of motion for $|\dot{\tilde{c}}_{m}^{\delta})$, $|\dot{c}_{j}^{\delta})$, $|\dot{\tilde{b}}_{m}^{\delta})$
The closure property (17) combined with (23), (25), (27), (29), (42) provides immediately the desired equation of motion for $|\dot{\tilde{c}}_{m}^{\delta})$. One has
$|\dot{\tilde{c}}_{m}^{\delta})\;=\;\sum_{j}\,|c_{j}^{\delta})\;\frac{(c_{j}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-E_{j}^{\delta}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\,\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\,f_{m^{\prime}}^{\delta}\,\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{b}_{m^{\prime}}^{\delta})\,\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\,.$ (44)
Similarly, the closure property (17) combined with (32), (33), (34), (39) provides immediately the desired equation of motion for $|\dot{c}_{j}^{\delta})$. One has
$|\dot{c}_{j}^{\delta})\;=\;\sum_{j^{\prime}\neq j}\,|c_{j^{\prime}}^{\delta})\,\frac{(c_{j^{\prime}}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-E_{j^{\prime}}^{\delta}}\;+\;\sum_{m}\,|\tilde{c}_{m}^{\delta})\,\frac{(\tilde{b}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-\tilde{E}_{m}^{\delta}}\;+\;\sum_{m}\,|\tilde{c}_{m}^{\delta})\,f_{m}^{\delta}\,\frac{(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{(E_{j}^{\delta}-\tilde{E}_{m}^{\delta})^{2}}\;+\;\sum_{m}\,|\tilde{b}_{m}^{\delta})\,\frac{(\tilde{c}_{m}^{\delta}|\hat{V}(\delta)|c_{j}^{\delta})}{E_{j}^{\delta}-\tilde{E}_{m}^{\delta}}\,.$ (45)
Finally, the closure property (17) combined with (27), (28), (34), (35), (38), (40), (42) provides immediately the desired equation of motion for $|\dot{\tilde{b}}_{m}^{\delta})$. One has
$|\dot{\tilde{b}}_{m}^{\delta})\;=\;\sum_{j}\,|c_{j}^{\delta})\;\frac{(c_{j}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-E_{j}^{\delta}}\;-\;f_{m}^{\delta}\,\sum_{j}\,|c_{j}^{\delta})\;\frac{(c_{j}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-E_{j}^{\delta})^{2}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\;\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\;f_{m^{\prime}}^{\delta}\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\;-\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\;f_{m}^{\delta}\;\frac{(\tilde{b}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\;-\;\sum_{m^{\prime}\neq m}\,|\tilde{c}_{m^{\prime}}^{\delta})\;2\,f_{m}^{\delta}\,f_{m^{\prime}}^{\delta}\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{3}}\;+\;\sum_{m^{\prime}\neq m}\,|\tilde{b}_{m^{\prime}}^{\delta})\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{b}_{m}^{\delta})}{\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta}}\;-\;f_{m}^{\delta}\,\sum_{m^{\prime}\neq m}\,|\tilde{b}_{m^{\prime}}^{\delta})\;\frac{(\tilde{c}_{m^{\prime}}^{\delta}|\hat{V}(\delta)|\tilde{c}_{m}^{\delta})}{(\tilde{E}_{m}^{\delta}-\tilde{E}_{m^{\prime}}^{\delta})^{2}}\,.$ (46)
In summary, we now have in hand a self-contained collection of seven mutually coupled equations of motion (24), (26), (31), (43), (44), (45), (46) for the derivatives (21). These EOM determine the flux of the seven fundamental entities (19) in the "time" $\delta$, and can be propagated numerically along $\delta\in{\mathbb{R}}$ once appropriate initial conditions are specified. As already pointed out above, the issue of initial conditions is addressed in Appendix A.
## 3 Test in a simple toy model
### 3.1 Introducing the toy model
The general mathematical formalism introduced above in Section 2 (and supplemented by Appendix A) will be tested below on a conceptually simple yet quite nontrivial toy model. We deliberately choose here a quantum system whose state space is finite dimensional, and which therefore gives rise to a finite number of EPs. Moreover, an inherent symmetry of our toy model allows simultaneous existence of multiple EPs ($M>1$), as anticipated already in Section 2, and as explained in detail in the figures below.
Our considered toy model corresponds to a theory of two distinct mutually
coupled angular momenta, $\hat{\vec{I}}=(\hat{I}_{1},\hat{I}_{2},\hat{I}_{3})$
and $\hat{\vec{J}}=(\hat{J}_{1},\hat{J}_{2},\hat{J}_{3})$, which possess the
conventional commutation properties. The associated starting Hamiltonian is defined through the prescription
$\hat{H}(\lambda)\;=\;\hat{H}_{0}\;+\;\lambda\,\hat{V}\,;$ (47)
with
$\hat{H}_{0}\;=\;\omega\,\Bigl(\hat{I}_{3}+\hat{J}_{3}\Bigr)\,;$ (48)
and
$\hat{V}\;=\;\hat{I}_{+}\,\hat{J}_{-}\,+\,\hat{I}_{-}\,\hat{J}_{+}\,+\,\hat{I}_{+}\,\hat{J}_{+}\,+\,\hat{I}_{-}\,\hat{J}_{-}\;=\;4\,\hat{I}_{1}\,\hat{J}_{1}\,.$ (49)
Here $\omega>0$ and $\lambda\in{\mathbb{C}}$, and of course
$\hat{I}_{\pm}\;=\;\hat{I}_{1}\pm i\,\hat{I}_{2}\,,\qquad\hat{J}_{\pm}\;=\;\hat{J}_{1}\pm i\,\hat{J}_{2}\,.$ (50)
The pertinent state space is spanned by basis vectors $|\,I_{\rm
T}\,I_{3}\,J_{\rm T}\,J_{3}\,\rangle$, where $(I_{\rm T}(I_{\rm T}+1),I_{3})$
are eigenvalues of $(\hat{I}^{2},\hat{I}_{3})$ and similarly $(J_{\rm
T}(J_{\rm T}+1),J_{3})$ are eigenvalues of $(\hat{J}^{2},\hat{J}_{3})$.
Clearly, both $I_{\rm T}$ and $J_{\rm T}$ are good quantum numbers for the
Hamiltonian (47). The dimension of a particular $(I_{\rm T},J_{\rm T})$ sector equals ${\cal N}_{I}\,{\cal N}_{J}$, where ${\cal N}_{I}=2\,I_{\rm T}+1$ and similarly for ${\cal N}_{J}$. Hereafter we shall assume for definiteness
$I_{\rm T}=\frac{N}{2}$ and $J_{\rm T}=\frac{1}{2}$, where $N$ is an odd
positive integer (correspondingly, ${\cal N}_{I}=N+1$ and ${\cal N}_{J}=2$).
The parity of $(I_{3}+J_{3})$ is then another good quantum number.
Our primary interest consists in finding all the EPs of $\hat{H}(\lambda)$ of
Eq. (47) in the complex $\lambda$-plane. To accomplish this goal, we follow
the general strategy outlined in the Introduction, include into the game an
auxiliary switching parameter $\delta\in[0,1]$, and focus on investigating the
parametrically $\delta$-dependent EPs of an augmented Hamiltonian (3), where by definition
$\hat{V}_{0}\;=\;\hat{I}_{+}\,\hat{J}_{-}\,+\,\hat{I}_{-}\,\hat{J}_{+}\,;$ (51)
$\hat{V}_{1}\;=\;\hat{I}_{+}\,\hat{J}_{+}\,+\,\hat{I}_{-}\,\hat{J}_{-}\,.$ (52)
Written explicitly, we have
$\hat{H}(\lambda,\delta)\;=\;\omega\,\Bigl(\hat{I}_{3}+\hat{J}_{3}\Bigr)\;+\;\lambda\,\left\{\hat{I}_{+}\,\hat{J}_{-}\,+\,\hat{I}_{-}\,\hat{J}_{+}\,+\,\delta\,\Bigl(\hat{I}_{+}\,\hat{J}_{+}\,+\,\hat{I}_{-}\,\hat{J}_{-}\Bigr)\right\}\,.$ (53)
Again, both $I_{\rm T}$ and $J_{\rm T}$ are good quantum numbers for the
Hamiltonian (53), and the parity of $(I_{3}+J_{3})$ is another good quantum
number.
Before proceeding further, let us highlight an additional symmetry of the
Hamiltonian $\hat{H}(\lambda,\delta)$ of Eq. (53). The eigenvalue spectrum of
$\hat{H}(\lambda,\delta)$ is clearly invariant with respect to any similarity
transformation. One such particular transformation is represented by the unitary operator
$\hat{U}\;=\;e^{-i\pi\hat{I}_{1}}\;e^{-i\pi\hat{J}_{2}}\,;$ (54)
which corresponds to rotation of $\hat{\vec{I}}$ by angle $\pi$ around the
first coordinate axis, and to rotation of $\hat{\vec{J}}$ by angle $\pi$
around the second coordinate axis. Direct calculation yields
$\hat{U}^{\dagger}\,\hat{I}_{1}\,\hat{U}\;=\;+\,\hat{I}_{1}\,,\qquad\hat{U}^{\dagger}\,\hat{I}_{2}\,\hat{U}\;=\;-\,\hat{I}_{2}\,,\qquad\hat{U}^{\dagger}\,\hat{I}_{3}\,\hat{U}\;=\;-\,\hat{I}_{3}\,;$ (55)
and similarly
$\hat{U}^{\dagger}\,\hat{J}_{1}\,\hat{U}\;=\;-\,\hat{J}_{1}\,,\qquad\hat{U}^{\dagger}\,\hat{J}_{2}\,\hat{U}\;=\;+\,\hat{J}_{2}\,,\qquad\hat{U}^{\dagger}\,\hat{J}_{3}\,\hat{U}\;=\;-\,\hat{J}_{3}\,.$ (56)
Hence
$\hat{U}^{\dagger}\,\hat{I}_{\pm}\,\hat{U}\;=\;+\,\hat{I}_{\mp}\,,\qquad\hat{U}^{\dagger}\,\hat{J}_{\pm}\,\hat{U}\;=\;-\,\hat{J}_{\mp}\,.$ (57)
If so, then our Hamiltonian $\hat{H}(\lambda,\delta)$ of Eq. (53) is converted into
$\hat{U}^{\dagger}\,\hat{H}(\lambda,\delta)\,\hat{U}\;=\;-\,\hat{H}(\lambda,\delta)\,.$ (58)
The just derived symmetry property (58) reveals that both
$\hat{H}(\lambda,\delta)$ and $-\hat{H}(\lambda,\delta)$ must possess the same
spectrum. Thus, if $E(\lambda,\delta)$ is an eigenvalue, then also
$-E(\lambda,\delta)$ is an eigenvalue. Note that another unitary
transformation $\hat{U}=e^{-i\pi\hat{I}_{2}}\,e^{-i\pi\hat{J}_{1}}$ leads to
the same conclusion, since the Hamiltonian (53) is invariant under an
interchange $\hat{\vec{I}}\leftrightarrow\hat{\vec{J}}$.
The matrix elements of $\hat{H}(\lambda,\delta)$ in a given $(I_{\rm T},J_{\rm T})$ sector can be trivially calculated with the aid of familiar formulas for the involved angular momentum operators. Let us write them down here explicitly for the sake of maximum clarity:
$\langle I_{\rm T}\,I_{3}\,J_{\rm T}\,J_{3}|\,\hat{H}(\lambda,\delta)\,|I_{\rm T}\,I^{\prime}_{3}\,J_{\rm T}\,J^{\prime}_{3}\rangle\;=\;\delta_{I_{3}I^{\prime}_{3}}\,\delta_{J_{3}J^{\prime}_{3}}\;\omega\,\Bigl(I_{3}+J_{3}\Bigr)\;+\;\delta_{I_{3}(I^{\prime}_{3}+1)}\,\delta_{J_{3}(J^{\prime}_{3}-1)}\;\lambda\,\gamma_{+}(I_{\rm T},I^{\prime}_{3})\,\gamma_{-}(J_{\rm T},J^{\prime}_{3})\;+\;\delta_{I_{3}(I^{\prime}_{3}-1)}\,\delta_{J_{3}(J^{\prime}_{3}+1)}\;\lambda\,\gamma_{-}(I_{\rm T},I^{\prime}_{3})\,\gamma_{+}(J_{\rm T},J^{\prime}_{3})\;+\;\delta_{I_{3}(I^{\prime}_{3}+1)}\,\delta_{J_{3}(J^{\prime}_{3}+1)}\;\lambda\,\delta\,\gamma_{+}(I_{\rm T},I^{\prime}_{3})\,\gamma_{+}(J_{\rm T},J^{\prime}_{3})\;+\;\delta_{I_{3}(I^{\prime}_{3}-1)}\,\delta_{J_{3}(J^{\prime}_{3}-1)}\;\lambda\,\delta\,\gamma_{-}(I_{\rm T},I^{\prime}_{3})\,\gamma_{-}(J_{\rm T},J^{\prime}_{3})\,;$ (59)
where by definition
$\gamma_{\pm}(l,l^{\prime})\;=\;\sqrt{l(l+1)-l^{\prime}(l^{\prime}\pm 1)}\,.$ (60)
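As a sketch of how these formulas translate into an actual computation, the matrix (59) can be assembled and the spectral symmetry (58) checked numerically as follows (our illustration; basis states are ordered as pairs $(I_{3},J_{3})$, and a small odd $N$ is used for speed):

```python
import numpy as np
from itertools import product

def gamma(l, l3, sign):
    # Ladder coefficient of Eq. (60): sqrt(l(l+1) - l3(l3 +/- 1))
    return np.sqrt(l * (l + 1) - l3 * (l3 + sign))

def build_H(lam, delta, N, omega=1.0):
    # Toy-model Hamiltonian (53)/(59) in the |I_T I3 J_T J3> basis, I_T = N/2, J_T = 1/2
    IT, JT = N / 2, 0.5
    basis = list(product(np.arange(-IT, IT + 1), np.arange(-JT, JT + 1)))
    H = np.zeros((len(basis), len(basis)), dtype=complex)
    for a, (I3, J3) in enumerate(basis):
        H[a, a] = omega * (I3 + J3)
        for b, (I3p, J3p) in enumerate(basis):
            if I3 == I3p + 1 and J3 == J3p - 1:      # I+ J- term
                H[a, b] += lam * gamma(IT, I3p, +1) * gamma(JT, J3p, -1)
            if I3 == I3p - 1 and J3 == J3p + 1:      # I- J+ term
                H[a, b] += lam * gamma(IT, I3p, -1) * gamma(JT, J3p, +1)
            if I3 == I3p + 1 and J3 == J3p + 1:      # delta * I+ J+ term
                H[a, b] += lam * delta * gamma(IT, I3p, +1) * gamma(JT, J3p, +1)
            if I3 == I3p - 1 and J3 == J3p - 1:      # delta * I- J- term
                H[a, b] += lam * delta * gamma(IT, I3p, -1) * gamma(JT, J3p, -1)
    return H

# spectral symmetry of Eq. (58): for real lam the matrix is real symmetric,
# and the eigenvalues must come in (E, -E) pairs
E = np.linalg.eigvalsh(build_H(0.3, 0.7, N=3).real)
print(np.allclose(np.sort(E), np.sort(-E)))          # expected: True
```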
In the case of $\delta=0$, the sum $K=(I_{3}+J_{3})$ becomes another good
quantum number. Correspondingly, the $(I_{\rm T},J_{\rm T})$ sector is divided
into subsectors associated with $K=(-I_{\rm T}-J_{\rm T}),(-I_{\rm T}-J_{\rm
T}+1),\cdots,(+I_{\rm T}+J_{\rm T})$. Moreover, $\hat{H}_{0}$ commutes both
with $\hat{I}_{+}\,\hat{J}_{-}$ and with $\hat{I}_{-}\,\hat{J}_{+}$, hence
$\Bigl[\hat{H}_{0}\,,\,\hat{V}_{0}\Bigr]\;=\;\hat{0}\,;$ (61)
exactly as mentioned in the Introduction. Thereby an eigenvalue problem of the Hamiltonian
$\hat{H}(\lambda,0)\;=\;\hat{H}_{0}\;+\;\lambda\,\hat{V}_{0}$ (62)
is solvable trivially, provided only that an eigenproblem of $\hat{V}_{0}$ has
been resolved. Accordingly, all the EPs of $\hat{H}(\lambda,0)$ are trivially
known (see the $\delta=0$ panels of Figs. 1 and 2 below, which consist just of
intersecting straight lines). This confirms that our definition of the
augmented Hamiltonian (53) satisfies the general requirements imposed on
$\hat{H}(\lambda,0)$ in the Introduction and in Appendix A.
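Since $\hat{H}_{0}$ and $\hat{V}_{0}$ share common eigenvectors, each eigenvector $n$ contributes a straight line $E_{n}(\lambda)=h_{n}+\lambda\,v_{n}$ to the $\delta=0$ spectrum, and the crossings that seed the propagation lie at $\lambda_{ab}=(h_{a}-h_{b})/(v_{b}-v_{a})$. A brief sketch of ours that enumerates these crossings from the two sets of common eigenvalues:

```python
import numpy as np
from itertools import combinations

def line_crossings(h, v, tol=1e-12):
    # Crossing points of the lines E_n(lambda) = h[n] + lambda * v[n];
    # h, v: 1D arrays of common eigenvalues of H0 and V0 (same eigenvector order).
    # Returns (lambda_cross, E_cross) pairs for all non-parallel line pairs.
    out = []
    for a, b in combinations(range(len(h)), 2):
        if abs(v[a] - v[b]) > tol:                  # parallel lines never cross
            lam = (h[a] - h[b]) / (v[b] - v[a])
            out.append((lam, h[a] + lam * v[a]))
    return out
```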
For illustration, let us present now explicitly the calculated eigenvalue
spectrum of $\hat{H}(\lambda,\delta)$ for $N=19$, $\lambda\in[0,1]$,
$\delta\in[0,1]$, even parity of $(I_{3}+J_{3})$, and $\omega=1.0$. The
obtained results are shown in Fig. 1. An analogous case of odd parity is then
depicted in Fig. 2.
Figure 1: The calculated eigenvalue spectrum of $\hat{H}(\lambda,\delta)$ of Eq. (53) for $N=19$, even parity of $(I_{3}+J_{3})$, and $\omega=1.0$. The horizontal axis corresponds to $\lambda$, the vertical axis to the energy variable $E$ associated with the eigenvalues. Note the reflection symmetry of the spectrum with respect to the horizontal $E=0$ axis. This kind of symmetry is explained by equation (58) above.

Figure 2: The calculated eigenvalue spectrum of $\hat{H}(\lambda,\delta)$ of Eq. (53) for $N=19$, odd parity of $(I_{3}+J_{3})$, and $\omega=1.0$. The horizontal axis corresponds to $\lambda$, the vertical axis to the energy variable $E$ associated with the eigenvalues. Note the reflection symmetry of the spectrum with respect to the horizontal $E=0$ axis. This kind of symmetry is explained by equation (58) above.
The sought EPs of our starting Hamiltonian $\hat{H}(\lambda)$ of Eq. (47) can now be identified with the EPs of $\hat{H}(\lambda,\delta)$ of Eq. (53) at $\delta=1$. Yet the EPs of $\hat{H}(\lambda,\delta)$ are obtainable numerically from the hermitian straight line crossings of $\hat{H}(\lambda,0)$ of Eq. (62) via the parametric $\delta$-propagation $(\delta=0\mapsto\delta=1)$ of the EOM, exactly as formulated in a self-contained fashion in Section 2 above and in Appendix A.
### 3.2 Numerical propagation of the EOM and the obtained results
The seven mutually coupled equations of motion (24), (26), (31), (43), (44), (45), (46) derived in Subsection 2.2 are propagated numerically using the simplest possible first-order difference scheme, starting from the initial conditions established in Appendix A. At each propagation step, the internal consistency of the obtained results is strictly checked. Namely, the seven entities (19) calculated for a given particular value of $\delta\in[0,1]$ are required to satisfy (up to a prescribed numerical accuracy) the three eigenvalue equations (8), (9), (10), the six orthonormality relations (11)-(16), and the closure property (17). In this manner our numerical results presented below are guaranteed to be reliably converged.
Our illustrative numerical calculations are performed for the toy model of
Subsection 3.1, assuming $N=19$ and $\omega=1$ much as in Figs. 1-2. The range
$[0,1]$ of $\delta$ is discretized by $G=10^{7}$ equidistant grid points. This
ensures that our aforementioned test relations (8), (9), (10), (11)-(16), (17)
are fulfilled at each value of $\delta$ with the maximum error not exceeding
$0.0005$.
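Before turning to the results, it may be instructive to see the propagation scheme at work in a setting where everything is analytically controllable. The following self-contained toy check (our construction, unrelated to the model of Subsection 3.1) integrates Eq. (24) with the same first-order scheme for the $2\times 2$ family $\hat{H}(\lambda,\delta)=\begin{pmatrix}1&\lambda(1+\delta)\\ \lambda(1+\delta)&-1\end{pmatrix}$, whose EP trajectory is known exactly, $\lambda(\delta)=i/(1+\delta)$. For $N=2$ the EP eigenvector is rigidly fixed to the isotropic direction $(1,i)^{T}$ by the constraint (14), so only $\lambda$ itself needs to be propagated:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])     # off-diagonal structure of the family
c = np.array([1.0, 1.0j])                  # EP eigenvector, fixed for this 2x2 family

def lam_dot(lam, delta):
    # Eq. (24) for this family: partial_delta H = lam*X, partial_lambda H = (1+delta)*X
    return -(c @ (lam * X) @ c) / (c @ ((1 + delta) * X) @ c)

G = 10**5                                  # number of equidistant delta grid points
dd = 1.0 / G
lam = 1j                                   # known EP of H(lambda, 0)
for step in range(G):
    lam += lam_dot(lam, step * dd) * dd    # first-order (Euler) update
print(lam, 1j / 2)                         # propagated value vs exact EP at delta = 1
```

With $G=10^{5}$ steps the propagated value agrees with the exact $\lambda(1)=i/2$ to roughly $10^{-5}$, consistent with the first-order global error of the scheme.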
#### 3.2.1 Results for the odd parity
For maximum clarity of presentation, it is convenient to start by discussing our results obtained for the case of odd parity. Our propagation starts from the hermitian crossings indicated by red bullets in Fig. 3.
Figure 3: The hermitian straight line crossings corresponding to $\hat{H}(\lambda,0)$, again for $N=19$, odd parity of $(I_{3}+J_{3})$, and $\omega=1.0$. One may observe that there often (though not always) exist multiple (twofold) crossings for a given value of $\lambda$ (see the vertical black dashed lines). These are exactly the multiplets described theoretically in Appendix A.
Our explicit numerical propagation of the EOM provides the following outcomes:
* $\star$ Each isolated (onefold) hermitian crossing of Fig. 3 provides for $\delta>0$ an isolated (onefold) EP.
* $\star$ Each twofold hermitian crossing of Fig. 3 provides for $\delta>0$ the corresponding pair (twofold cluster) of distinct binary EPs which share the same dependence $\lambda(\delta)$. This is a direct consequence of the symmetry of $\hat{H}(\lambda,\delta)$ highlighted by equation (58) above.
Figs. 4, 5 and 6 present explicitly our most important numerical results, namely, the trajectories of the EPs in the $\lambda$-plane and in the plane of complex energy. Note that Fig. 4 and Fig. 5 display essentially the same data, just with a different layout convention. Specifically, Fig. 4 is plotted using a random (machine generated) sign convention for the imaginary part of each obtained curve $\lambda_{k}(\delta)$, whereas Fig. 5 corresponds to imposing the fixed convention $\Im\lambda_{k}(\delta)\geq 0$. (Recall in this context that each curve $\lambda_{k}(\delta)$ displayed in Figs. 4-5 gives rise to another legitimate curve $\lambda_{k}^{*}(\delta)$ which penetrates into the opposite side of the imaginary $\lambda$-plane; this curve $\lambda_{k}^{*}(\delta)$ is not plotted.) Compared to Fig. 4, the overall appearance of Fig. 5 is somewhat less transparent. In particular, some curves $\lambda_{k}(\delta)$ do intersect (albeit at mutually distinct values of $\delta$). This is the only reason why we hereafter prefer to present all our numerical results using the layout convention analogous to Fig. 4.
Figure 4: The EP trajectories $\lambda_{k}(\delta)$ emanating from the hermitian straight line crossings of Fig. 3. The dark blue trajectories correspond to a pair (twofold cluster) of distinct binary EPs which share the same $\lambda_{k}(\delta)$, see our discussion in the main text. On the other hand, the light blue trajectories are associated with a single binary EP. Note also that each curve $\lambda_{k}(\delta)$ gives rise to another legitimate curve $\lambda_{k}^{*}(\delta)$, which departs from the same (cluster of) red bullet(s) of Fig. 3, but which corresponds to the complex conjugated initial conditions at $\delta=0$. [This means that each curve $\lambda_{k}(\delta)$ plotted explicitly here in the present figure has been obtained via adopting a particular (machine generated) sign convention for the $\sigma_{1}$-factor from Appendix A.]

Figure 5: The same data as in Fig. 4, just with the sign convention $\Im\lambda_{k}(\delta)\geq 0$ imposed a posteriori. Note that each curve $\lambda_{k}(\delta)$ displayed here gives rise to another legitimate curve $\lambda_{k}^{*}(\delta)$ which penetrates into the negative imaginary plane of $\lambda$. Compared to Fig. 4, the overall appearance of the present figure is somewhat less transparent. In particular, some curves $\lambda_{k}(\delta)$ do intersect (albeit at mutually distinct values of $\delta$). This is the only reason why we hereafter prefer to display all our numerical results using the layout convention analogous to Fig. 4.

Figure 6: The EP trajectories $\tilde{E}_{m}(\delta)$ emanating from the hermitian straight line crossings of Fig. 3 and corresponding to all the curves $\lambda_{k}(\delta)$ plotted explicitly in Fig. 4. Importantly, all the onefold hermitian crossings of Fig. 3 are associated with $\tilde{E}_{m}(0)=0$, and actually provide $\tilde{E}_{m}(\delta)=0$ for all $\delta\in[0,1]$. This fact (arising as a trivial consequence of the symmetry property (58) of $\hat{H}(\lambda,\delta)$) is highlighted by the presence of a red bullet at the origin of the energy plane. On the other hand, the present figure depicts also a progression of several nonzero trajectories $\tilde{E}_{m}(\delta)$, which possess reflection symmetry with respect to the origin. Each pair of these symmetry related trajectories corresponds inevitably to a pair (twofold cluster) of distinct binary EPs which share the same $\lambda_{k}(\delta)$.
#### 3.2.2 Results for the even parity
Let us now move on to the case of even parity. Our propagation starts from the hermitian crossings indicated by red bullets in Fig. 7.
Figure 7: The hermitian straight line crossings corresponding to $\hat{H}(\lambda,0)$, again for $N=19$, even parity of $(I_{3}+J_{3})$, and $\omega=1.0$. One may observe that there often (though not always) exist multiple (twofold, fourfold) crossings for a given value of $\lambda$ (see the vertical black dashed lines). These are exactly the multiplets described theoretically in Appendix A.
Our explicit numerical propagation of the EOM provides the following outcomes:
* $\star$ Each isolated (onefold) hermitian crossing of Fig. 7 provides for $\delta>0$ an isolated (onefold) EP.
* $\star$ Each twofold hermitian crossing of Fig. 7 provides for $\delta>0$ the corresponding pair (twofold cluster) of distinct binary EPs which share the same dependence $\lambda(\delta)$. This is a direct consequence of the symmetry of $\hat{H}(\lambda,\delta)$ highlighted by equation (58) above.
* $\star$ The fourfold crossings behave for $\delta>0$ as two separate twofold crossings. Each of these twofold crossings reflects again the symmetry property (58) of $\hat{H}(\lambda,\delta)$.
Figs. 8 and 9 present explicitly our most important numerical results, namely, the trajectories of the EPs in the $\lambda$-plane and in the plane of complex energy. We use the same layout convention as in Fig. 4 above.

Figure 8: The EP trajectories $\lambda_{k}(\delta)$ emanating from the hermitian straight line crossings of Fig. 7. The dark blue trajectories correspond to a pair (twofold cluster) of distinct binary EPs which share the same $\lambda_{k}(\delta)$, see our discussion in the main text. On the other hand, the light blue trajectories are associated with a single binary EP. Note also that each curve $\lambda_{k}(\delta)$ gives rise to another legitimate curve $\lambda_{k}^{*}(\delta)$, which departs from the same (cluster of) red bullet(s) of Fig. 7, but which corresponds to the complex conjugated initial conditions at $\delta=0$. [This means that each curve $\lambda_{k}(\delta)$ plotted explicitly here in the present figure has been obtained via adopting a particular (machine generated) sign convention for the $\sigma_{1}$-factor from Appendix A.]

Figure 9: The EP trajectories $\tilde{E}_{m}(\delta)$ emanating from the hermitian straight line crossings of Fig. 7 and corresponding to all the curves $\lambda_{k}(\delta)$ plotted explicitly in Fig. 8. Importantly, all the onefold hermitian crossings of Fig. 7 are associated with $\tilde{E}_{m}(0)=0$, and actually provide $\tilde{E}_{m}(\delta)=0$ for all $\delta\in[0,1]$. This fact (arising as a trivial consequence of the symmetry property (58) of $\hat{H}(\lambda,\delta)$) is highlighted by the presence of a red bullet at the origin of the energy plane. On the other hand, the present figure depicts also a progression of several nonzero trajectories $\tilde{E}_{m}(\delta)$, which possess reflection symmetry with respect to the origin. Each pair of these symmetry related trajectories corresponds inevitably to a pair (twofold cluster) of distinct binary EPs which share the same $\lambda_{k}(\delta)$.
Summarizing the contents of Section 3, we have employed a nontrivial toy model
to explicitly test the performance of our computational algorithm based upon
solving the EOM for the EPs. We hope that our illustrative calculations demonstrate the practical usefulness of our EOM method for finding the EPs of nontrivial Hamiltonians.
## 4 Concluding remarks
In summary, the present article establishes the equations of motion (EOM) governing the dynamics (or flux) of EPs of parametrically dependent nonhermitian Hamiltonians. This motion of EPs in the parameter space is triggered here by a continuous change of an additional external control parameter of the Hamiltonian. Our analysis covers a relatively broad class of problems (1), where the search for EPs can be reinterpreted as the solution of EOM pertaining to an augmented Hamiltonian $\hat{H}(\lambda,\delta)$ of Eq. (3), with $\delta$ playing the role of the dynamical "time".
From the theoretical point of view, Section 2 represents the most important new material brought in by our paper. The resulting EOM (24), (26), (31), (43), (44), (45), (46) are based essentially upon implementing a nontraditional perturbation theory of nonhermitian quantum mechanics in the presence of multiple EPs. The elaboration of such EOM, and in particular the derivation of equation (24), brings further theoretical insights into the properties of the EPs, and thus represents a contribution in its own right. Furthermore, our EOM can be exploited even in a purely pragmatic fashion, merely as an efficient numerical tool for obtaining all the EPs of interest for a given Hamiltonian $\hat{H}(\lambda)$ of Eq. (1). Such an approach lends itself to immediate application e.g. whenever the sought EPs emanate from avoided crossings of the particular hermitian Hamiltonian under study. Section 3 demonstrates very explicitly the practical merits of our EOM method in the just mentioned situation.
We hope that the EOM formalism developed here can motivate or facilitate
further studies of EPs in atomic, nuclear, optical and condensed matter
physics.
Acknowledgements
We acknowledge financial support of the Czech Science Foundation under grant
Nos. 20-21179S (M. Š.) and 20-09998S (P. S. and P. C.), and of the Charles
University in Prague under project UNCE/SCI/013 (P. S. and P. C.).
## References
* [1] T. Kato, Perturbation Theory of Linear Operators, Springer, New York (1966).
* [2] C. M. Bender and T. T. Wu, Phys. Rev., 184, 1231 (1969).
* [3] N. Moiseyev and S. Friedland, Phys. Rev. A, 22, 618 (1980).
* [4] N. Moiseyev, Non-Hermitian Quantum Mechanics, Cambridge University Press (2011).
* [5] A. P. Seyranian, O. N. Kirillov, and A. Mailybaev, J. Phys. A: Math. Gen., 38, 1723 (2005).
* [6] M.-A. Miri and A. Alù, Science, 363, eaar7709 (2019).
* [7] W. D. Heiss, J. Phys. A: Math. Gen., 37, 2455 (2004).
* [8] I. Rotter, J. Phys. A: Math. Theor., 42, 153001 (2009).
* [9] W. D. Heiss, J. Phys. A: Math. Theor., 45, 444016 (2012).
* [10] M. R. Zirnbauer, J. J. M. Verbaarschot, and H. A. Weidenmüller, Nucl. Phys. A, 411, 161 (1983).
* [11] W. D. Heiss and A. L. Sannino, J. Phys. A: Math. Gen., 23, 1167 (1990).
* [12] W. D. Heiss and W. H. Steeb, J. Math. Phys., 32, 3003 (1991).
* [13] M. V. Berry, Czech. J. Phys., 54, 1039 (2004).
* [14] S. Garmon, M. Gianfreda, and N. Hatano, Phys. Rev. A, 92, 022125 (2015).
* [15] S. Klaiman, U. Günther, and N. Moiseyev, Phys. Rev. Lett., 101, 080402 (2008).
* [16] A. Pick, P. R. Kaprálová-Žďánská, N. Moiseyev, J. Chem. Phys., 150, 204111 (2019).
* [17] C. Shi, M. Dubois, et al., Nature Communications, 7, 11110 (2016).
* [18] Y. Choi, C. Hahn, et al., Nature Communications, 9, 2182 (2018).
* [19] R. El-Ganainy, K. G. Makris, et al., Nature Physics, 14, 11 (2018).
* [20] G. Shmuel, N. Moiseyev, Phys. Rev. Applied, 13, 024074 (2020).
* [21] A. Ben-Asher, D. Šimsa, T. Uhlířová, M. Šindelka, and N. Moiseyev,
Phys. Rev. Lett., 124, 253202 (2020).
* [22] N. Moiseyev and M. Šindelka, Phys. Rev. A, 103, 033518 (2021).
* [23] P. R. Kaprálová, M. Šindelka, and N. Moiseyev, J. Phys. A: Math. Gen., 55, 284001 (2022).
* [24] P. R. Kaprálová, Annals of Physics, 443, 168939 (2022).
* [25] H. Hodaei, M. A. Miri, M. Heinrich, D. N. Christodoulides, M. Khajavikhan,
Science, 346, 975–978 (2014).
* [26] L. Feng, Z. J. Wong, R.-M. Ma, Y. Wang, X. Zhang, Science, 346, 972–975 (2014).
* [27] A. Regensburger, C. Bersch, M.-A. Miri, G. Onishchukov, D. N. Christodoulides, U. Peschel,
Nature, 488, 167 (2012).
* [28] Z. Lin, H. Ramezani, T. Eichelkraut, T. Kottos, H. Cao, D. N. Christodoulides,
Phys. Rev. Lett., 106, 213901 (2011).
* [29] X. Wang, X. Fang, D. Mao, Y. Jing, Y. Li, Phys. Rev. Lett., 123, 214302 (2019).
* [30] W. Chen, S. K. Özdemir, G. Zhao, J. Wiersig, L. Yang, Nature, 548, 192 (2017).
* [31] M. P. Hokmabadi, A. Schumer, D. N. Christodoulides, M. Khajavikhan, Nature, 576, 70 (2019).
* [32] H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides,
M. Khajavikhan, Nature, 548, 187 (2017).
* [33] J. Wiersig, Phys. Rev. Lett., 112, 203901 (2014).
* [34] W. D. Heiss, Z. Phys. A, 329, 133 (1989).
* [35] W. D. Heiss and M. Müller, Phys. Rev. E, 66, 016217 (2002).
* [36] W. D. Heiss, F. G. Scholtz, and H. B. Geyer, J. Phys. A: Math. Gen., 38, 1843 (2005).
* [37] P. Cejnar, S. Heinze, and J. Dobeš, Phys. Rev. C, 71, 011304 (2005).
* [38] P. Cejnar, S. Heinze, and M. Macek, Phys. Rev. Lett., 99, 100601 (2007).
* [39] P. Stránský, M. Dvořák, and P. Cejnar, Phys. Rev. E, 97, 012112 (2018).
* [40] T. E. Lee, F. Reiter, and N. Moiseyev, Phys. Rev. Lett., 113, 250401 (2014).
* [41] D. I. Borisov, F. Ružička, and M. Znojil, Int. J. Theor. Phys., 54, 4293 (2015).
* [42] M. Znojil, Proc. R. Soc. A, 476, 20190831 (2020).
* [43] S.-J. Wang and S. Y. Chu, Phys. Rev. A, 47, 3546 (1993).
* [44] M. Šindelka, L. F. Santos, N. Moiseyev, Phys. Rev. A, 95, 010103(R) (2017).
* [45] C. Jung, M. Müller, and I. Rotter, Phys. Rev. E, 60, 114 (1999).
* [46] P. Stránský and P. Cejnar, Phys. Rev. E, 100, 042119 (2019).
* [47] E.-M. Graefe, U. Günther, H. J. Korsch, and A. E. Niederle,
J. Phys. A: Math. Theor., 41, 255206 (2008).
* [48] G. Demange and E.-M. Graefe, J. Phys. A: Math. Theor., 45, 025303 (2012).
* [49] M. Znojil, Phys. Rev. A, 100, 032124 (2019).
* [50] A. Mailybaev, Numer. Linear Algebra Appl., 13, 419 (2006).
* [51] R. Uzdin and R. Lefebvre, J. Phys. B: At. Mol. Opt. Phys., 43, 235004 (2010).
* [52] O. N. Kirillov, Entropy, 20, 502 (2018).
* [53] B. Nennig, E. Perrey-Debain, J. Comp. Phys., 412, 109425 (2020).
## Appendix A Initial conditions for the EOM
Equations of motion (24), (26), (31), (43), (44), (45), (46) need to be
supplemented with appropriate initial conditions (ICS), i.e., by the seven
fundamental entities (19) provided at some starting value
$\delta=\delta_{\rm in}$. The choice of $\delta_{\rm in}$ is of course
governed by the concrete nature of the problem under study. In the present paper,
we describe a relatively frequently encountered situation in which the
mentioned ICS are determinable (semi)trivially at $\delta_{\rm in}$, due to a
particularly simple form of $\hat{H}(\lambda,\delta_{\rm in})$. Namely, we
shall be concerned with an arrangement in which the starting Hamiltonian
$\hat{H}(\lambda,\delta_{\rm in})$ ($\lambda\in{\mathbb{R}}$) of the studied
physical model is hermitian (actually, even real symmetric) and possesses
exact crossings (accidental degeneracies). These crossings play the role of
origins from which the sought EPs emanate into the complex $\lambda$-plane as
$\delta$ is set to depart continuously from $\delta_{\rm in}$.
Let us assume for now $\lambda\in{\mathbb{R}}$, and consider the hermitian
Hamiltonian $\hat{H}(\lambda,\delta_{\rm in})$. Suppose that there exists some
particular value $\lambda_{\rm in}\in{\mathbb{R}}$ at which the eigenvalue
spectrum of $\hat{H}(\lambda,\delta_{\rm in})$ contains $M_{\rm in}$ simple
binary crossings ($1\leq M_{\rm in}\leq N/2$); one may of course also analyze
more general situations of multiple degeneracies, but this is beyond the scope
of the present paper. This means that
$E_{1}(\lambda_{\rm in},\delta_{\rm in})=E_{2}(\lambda_{\rm in},\delta_{\rm in})\,;\quad E_{3}(\lambda_{\rm in},\delta_{\rm in})=E_{4}(\lambda_{\rm in},\delta_{\rm in})\,;\quad\ldots\quad E_{2M_{\rm in}-1}(\lambda_{\rm in},\delta_{\rm in})=E_{2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\,;$
where the twice-degenerate levels $E_{1}(\lambda_{\rm in},\delta_{\rm
in}),\ldots,E_{2M_{\rm in}-1}(\lambda_{\rm in},\delta_{\rm in})$ are all
distinct, and satisfy also
$\partial_{\lambda}E_{1}(\lambda,\delta_{\rm in})\bigr|_{\lambda=\lambda_{\rm in}}\neq\partial_{\lambda}E_{2}(\lambda,\delta_{\rm in})\bigr|_{\lambda=\lambda_{\rm in}}\,;\quad\ldots\quad\partial_{\lambda}E_{2M_{\rm in}-1}(\lambda,\delta_{\rm in})\bigr|_{\lambda=\lambda_{\rm in}}\neq\partial_{\lambda}E_{2M_{\rm in}}(\lambda,\delta_{\rm in})\bigr|_{\lambda=\lambda_{\rm in}}\,.$
These slope conditions say that each of the listed simple binary crossings
corresponds to an intersection of two $\lambda$-dependent eigenvalue lines
with unequal slopes. All the remaining eigenvalues $E_{j>2M_{\rm
in}}(\lambda_{\rm in},\delta_{\rm in})$ are assumed to be nondegenerate. Figs.
1, 2, 3, 7 in the main text neatly illustrate the presence of the just
discussed simple binary crossings in the case of our toy model Hamiltonian at
$\delta_{\rm in}=0$. In fact, Figs. 1, 2, 3, 7 depict several distinct
occurrences of $\lambda_{\rm in}$ together with their pertinent values of
$M_{\rm in}$ (one may observe that $M_{\rm in}\in\{1,2,4\}$ in these plots).
Let us now explore what happens with a particular multiplet of simple binary
crossings $(\delta_{\rm in},\lambda_{\rm in},M_{\rm in})$ once
$\delta\in{\mathbb{R}}$ is set to depart slightly from $\delta_{\rm in}$, and
once $\lambda$ is set to deviate slightly from $\lambda_{\rm in}$ while being
allowed to penetrate into the complex plane. As a matter of fact, each
crossing $\kappa\in\{1,2,\cdots,M_{\rm in}\}$ survives inside the
complex $\lambda$-plane in the form of a binary EP, which moves with $\delta$
along a certain well defined trajectory $(\delta,\lambda_{\kappa}(\delta))$.
Generally speaking, the resulting trajectories $\lambda_{\kappa}(\delta)$ will
be $\kappa$-dependent. However, possible symmetries of
$\hat{H}(\lambda,\delta)$ may also cause (some of) these trajectories to be
exactly identical. Under these more peculiar circumstances, our $M_{\rm in}$
binary crossings can be classified into subgroups (clusters), such that
$\lambda_{\kappa}(\delta)$ is the same within each subgroup (cluster).
Different clusters will hereafter be labeled by index $k$.
Consider now a particular $k$-th cluster of $M$ binary EPs ($1\leq M\leq
M_{\rm in}$). As explained in the previous paragraph, this cluster of $M$ EPs
(whose elements we are going to label by index $m\in\{1,2,\cdots,M\}$)
emanates from a subset of the simple binary hermitian crossings $(\delta_{\rm
in},\lambda_{\rm in},M_{\rm in})$. All the mentioned $M$ EPs are associated
with the same complex $\lambda$-trajectory, $\lambda_{k}(\delta)$. At
$\delta=\delta_{\rm in}$, one has
$\lambda_{k}(\delta_{\rm in})\;=\;\lambda_{\rm in}\,;$ (65)
and
$E_{1}(\lambda_{\rm in},\delta_{\rm in})=E_{2}(\lambda_{\rm in},\delta_{\rm in})\,;\quad E_{3}(\lambda_{\rm in},\delta_{\rm in})=E_{4}(\lambda_{\rm in},\delta_{\rm in})\,;\quad\ldots\quad E_{2M-1}(\lambda_{\rm in},\delta_{\rm in})=E_{2M}(\lambda_{\rm in},\delta_{\rm in})\,.$ (66)
We have conveniently adopted here the same kind of notation as above.
Let the orthonormalized eigenvectors corresponding to $E_{1}(\lambda_{\rm
in},\delta_{\rm in})$, $E_{2}(\lambda_{\rm in},\delta_{\rm in})$,
$E_{3}(\lambda_{\rm in},\delta_{\rm in})$, etc. be denoted by the symbols
$|v_{1}(\lambda_{\rm in},\delta_{\rm in})\rangle$, $|v_{2}(\lambda_{\rm
in},\delta_{\rm in})\rangle$, $|v_{3}(\lambda_{\rm in},\delta_{\rm
in})\rangle$, etc. Note that we use here the standard ket-notation, since
$\hat{H}(\lambda_{\rm in},\delta_{\rm in})$ is hermitian (real symmetric) and
thus the conventional definition of the scalar product applies. Since
$E_{1}(\lambda_{\rm in},\delta_{\rm in})=E_{2}(\lambda_{\rm in},\delta_{\rm
in})$, the sought initial condition for the $m=1$ EP must inevitably look as
follows:
$\tilde{E}_{1}^{\delta_{\rm in}}\;=\;E_{1}(\lambda_{\rm in},\delta_{\rm in})\,;$ (67)
and
$|\tilde{c}_{1}^{\delta_{\rm in}})\;=\;\frac{1}{\sqrt{2}}\,\Bigl(|v_{1}(\lambda_{\rm in},\delta_{\rm in})\rangle+\sigma_{1}\,i\,|v_{2}(\lambda_{\rm in},\delta_{\rm in})\rangle\Bigr)\,;$ (68)
$|\tilde{b}_{1}^{\delta_{\rm in}})\;=\;\frac{1}{\sqrt{2}}\,\Bigl(|v_{1}(\lambda_{\rm in},\delta_{\rm in})\rangle-\sigma_{1}\,i\,|v_{2}(\lambda_{\rm in},\delta_{\rm in})\rangle\Bigr)\,.$ (69)
In (68)-(69), the sign factor $\sigma_{1}\in\{-1,+1\}$. Similarly for all
the other EPs $m=2,3,\cdots,M$. Written down explicitly, we set
$\tilde{E}_{m}^{\delta_{\rm in}}\;=\;E_{2m-1}(\lambda_{\rm in},\delta_{\rm in})\,;$ (70)
and
$|\tilde{c}_{m}^{\delta_{\rm in}})\;=\;\frac{1}{\sqrt{2}}\,\Bigl(|v_{2m-1}(\lambda_{\rm in},\delta_{\rm in})\rangle+\sigma_{m}\,i\,|v_{2m}(\lambda_{\rm in},\delta_{\rm in})\rangle\Bigr)\,;$ (71)
$|\tilde{b}_{m}^{\delta_{\rm in}})\;=\;\frac{1}{\sqrt{2}}\,\Bigl(|v_{2m-1}(\lambda_{\rm in},\delta_{\rm in})\rangle-\sigma_{m}\,i\,|v_{2m}(\lambda_{\rm in},\delta_{\rm in})\rangle\Bigr)\,;$ (72)
where $1\leq m\leq M$ and $\sigma_{m}\in\{-1,+1\}$. An assignment of the
sign factors $(\sigma_{1},\sigma_{2},\cdots,\sigma_{M})$ in (71)-(72)
must be performed in such a consistent way that the velocity
$\dot{\lambda}(\delta_{\rm in})\;=\;-\,\frac{(\tilde{c}_{m}^{\delta_{\rm in}}|\partial_{\delta}\,\hat{H}(\lambda_{\rm in},\delta_{\rm in})|\tilde{c}_{m}^{\delta_{\rm in}})}{(\tilde{c}_{m}^{\delta_{\rm in}}|\partial_{\lambda}\,\hat{H}(\lambda_{\rm in},\delta_{\rm in})|\tilde{c}_{m}^{\delta_{\rm in}})}$ (73)
predicted by equation (24) comes out as being independent of $m$. We shall
return to the sign factors $(\sigma_{1},\sigma_{2},\cdots,\sigma_{M})$
below (see item (ii) in the last paragraph).
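For concreteness, the above prescription can be sketched numerically as follows (a minimal illustration of ours, not code from the original work; numpy, central finite differences, and the function names are our assumed choices). The sketch builds the ICS (71)-(72) from the two degenerate hermitian eigenvectors and evaluates the velocity (73), using the $c$-product $(a|b)=a^{T}b$ without complex conjugation, as appropriate for complex symmetric Hamiltonians:

```python
import numpy as np

def c_product(a, b):
    # c-product (a|b) = a^T b, i.e., no complex conjugation
    return a @ b

def ep_initial_conditions(v1, v2, sigma):
    # Eqs. (71)-(72): ICs for one binary EP from the degenerate pair (v1, v2)
    c = (v1 + sigma * 1j * v2) / np.sqrt(2)
    b = (v1 - sigma * 1j * v2) / np.sqrt(2)
    return c, b

def ep_velocity(H, lam, delta, c, h=1e-6):
    # Eq. (73): dot(lambda) at delta_in; H is any callable returning the
    # Hamiltonian matrix, differentiated here by central finite differences
    dH_dlam = (H(lam + h, delta) - H(lam - h, delta)) / (2 * h)
    dH_ddel = (H(lam, delta + h) - H(lam, delta - h)) / (2 * h)
    return -c_product(c, dH_ddel @ c) / c_product(c, dH_dlam @ c)
```

A consistent sign assignment can then be found by evaluating `ep_velocity` for both choices $\sigma_{m}=\pm 1$ of every EP in the cluster and keeping the combination for which all $M$ velocities coincide.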
We also need to specify the ICS for all the ordinary non-EP eigenstates of
$\hat{H}(\lambda_{k}(\delta_{\rm in}),\delta_{\rm in})$. This task is
straightforward in the case of the non-degenerate energy levels $E_{j+2M_{\rm
in}}(\lambda_{\rm in},\delta_{\rm in})$ (where $1\leq j\leq N-2M_{\rm in}$).
One obviously sets
$E_{j}^{\delta_{\rm in}}\;=\;E_{j+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\,;\qquad 1\leq j\leq N-2M_{\rm in}\,,$ (74)
and
$|c_{j}^{\delta_{\rm in}})\;=\;|v_{j+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\rangle\,;\qquad 1\leq j\leq N-2M_{\rm in}\,,$ (75)
where $|v_{j+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\rangle$ stands of
course for the unit normalized eigenvector of $\hat{H}(\lambda_{\rm
in},\delta_{\rm in})$ associated with the level $E_{j+2M_{\rm in}}(\lambda_{\rm
in},\delta_{\rm in})$.
The situation becomes somewhat more delicate in the case of the doubly
degenerate energy eigenvalues listed among the crossings above but not
included in the cluster (66), namely, in the case of the levels
$E_{2M+1}(\lambda_{\rm in},\delta_{\rm in})=E_{2M+2}(\lambda_{\rm in},\delta_{\rm in})\,;\quad E_{2M+3}(\lambda_{\rm in},\delta_{\rm in})=E_{2M+4}(\lambda_{\rm in},\delta_{\rm in})\,;\quad\ldots\quad E_{2M_{\rm in}-1}(\lambda_{\rm in},\delta_{\rm in})=E_{2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\,.$
Consider any given doubly degenerate eigenvalue
$E_{2M+j-N+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\;=\;E_{2M+j-N+2M_{\rm in}+1}(\lambda_{\rm in},\delta_{\rm in})\,,$ (76)
where $N-2M_{\rm in}+1\leq j\leq N-2M-1$. Let the two pertinent orthonormal
eigenvectors be
$|v^{(1)}\rangle\;\equiv\;|v_{2M+j-N+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\rangle\,;$ (77)
$|v^{(2)}\rangle\;\equiv\;|v_{2M+j-N+2M_{\rm in}+1}(\lambda_{\rm in},\delta_{\rm in})\rangle\,.$ (78)
We set of course
$E_{2M+j-N+2M_{\rm in}}^{\delta_{\rm in}}\;=\;E_{2M+j-N+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})\;=\;E_{2M+j-N+2M_{\rm in}+1}^{\delta_{\rm in}}\,;$ (79)
much as in (74). Yet an assignment of the corresponding non-EP eigenvectors
$|c_{2M+j-N+2M_{\rm in}}^{\delta_{\rm in}})\quad{\rm and}\quad|c_{2M+j-N+2M_{\rm in}+1}^{\delta_{\rm in}})$ (80)
needs a bit more care. Clearly, the entities (80) must be built up as
$c$-orthonormalized linear combinations of the two eigenstates (77)-(78). In
addition, however, one must ensure that the two sought non-EP eigenvectors
(80) are not mutually coupled by the Hamiltonian $\delta$-derivative (7),
i.e., by the operator
$\hat{V}(\delta_{\rm in})\;=\;\partial_{\lambda}\,\hat{H}(\lambda_{\rm in},\delta_{\rm in})\,\dot{\lambda}(\delta_{\rm in})\;+\;\partial_{\delta}\,\hat{H}(\lambda_{\rm in},\delta_{\rm in})\,.$ (81)
Indeed, the just imposed extra requirement
$(c_{2M+j-N+2M_{\rm in}}^{\delta_{\rm in}}|\hat{V}(\delta_{\rm in})|c_{2M+j-N+2M_{\rm in}+1}^{\delta_{\rm in}})\;=\;0$ (82)
is indispensable, since it guarantees that our EOM (45) does not possess a
singularity at $\delta=\delta_{\rm in}$. Hence an appropriate kind of
regularization or rectification must be implemented here. In fact, an explicit
construction of the two non-EP eigenvectors (80) is conceptually
straightforward. Namely, we diagonalize the 2-by-2 matrix
$\begin{pmatrix}\langle v^{(1)}|\hat{V}(\delta_{\rm in})|v^{(1)}\rangle&\langle v^{(1)}|\hat{V}(\delta_{\rm in})|v^{(2)}\rangle\\ \langle v^{(2)}|\hat{V}(\delta_{\rm in})|v^{(1)}\rangle&\langle v^{(2)}|\hat{V}(\delta_{\rm in})|v^{(2)}\rangle\end{pmatrix}$ (83)
(the matrix (83) is surely diagonalizable: if it were non-diagonalizable, the
just investigated crossing of eigenvalues $E_{2M+j-N+2M_{\rm in}}(\lambda_{\rm in},\delta_{\rm in})=E_{2M+j-N+2M_{\rm in}+1}(\lambda_{\rm in},\delta_{\rm in})$
would bring an additional $(M+1)$-th EP into the cluster (66), contrary to our
starting assumption)
and in this way access the two associated eigenvectors
$\vec{w}^{(1)}\;=\;\begin{pmatrix}w_{1}^{(1)}\\ w_{2}^{(1)}\end{pmatrix}\,,\qquad\vec{w}^{(2)}\;=\;\begin{pmatrix}w_{1}^{(2)}\\ w_{2}^{(2)}\end{pmatrix}\,.$ (84)
Subsequently we set
$|c_{2M+j-N+2M_{\rm in}}^{\delta_{\rm in}})\;=\;w_{1}^{(1)}\,|v^{(1)}\rangle\;+\;w_{2}^{(1)}\,|v^{(2)}\rangle\,;$ (85)
$|c_{2M+j-N+2M_{\rm in}+1}^{\delta_{\rm in}})\;=\;w_{1}^{(2)}\,|v^{(1)}\rangle\;+\;w_{2}^{(2)}\,|v^{(2)}\rangle\,;$ (86)
while tacitly implementing the $c$-normalization.
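As an illustration of this rectification step, the construction (83)-(86) may be implemented along the following lines (a sketch of ours, assuming a real symmetric $\hat{H}(\lambda_{\rm in},\delta_{\rm in})$ so that the matrix elements reduce to real bilinear forms; the function name is illustrative):

```python
import numpy as np

def rectify_degenerate_pair(v1, v2, V):
    # v1, v2: orthonormal eigenvectors of the degenerate hermitian level
    # V: matrix of the operator (81) in the full Hilbert space
    V2 = np.array([[v1 @ V @ v1, v1 @ V @ v2],
                   [v2 @ V @ v1, v2 @ V @ v2]])   # the 2-by-2 matrix (83)
    _, w = np.linalg.eig(V2)                      # columns are w^(1), w^(2) of (84)
    c1 = w[0, 0] * v1 + w[1, 0] * v2              # Eq. (85)
    c2 = w[0, 1] * v1 + w[1, 1] * v2              # Eq. (86)
    return c1, c2   # the tacit c-normalization is still to be applied
```

By construction, the returned combinations satisfy the decoupling requirement (82).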
What remains to be done is to supply the values of $f_{m}^{\delta_{\rm in}}$.
Equation (10) implies
$f_{m}^{\delta_{\rm in}}\;=\;0\,;\qquad 1\leq m\leq M\,,$ (87)
valid simply because $|\tilde{b}_{m}^{\delta_{\rm in}})$ is an eigenvector of
$\hat{H}(\lambda_{k}(\delta_{\rm in}),\delta_{\rm in})$ with eigenvalue
$\tilde{E}_{m}^{\delta_{\rm in}}$.
Summarizing, in this Appendix A we have described in a self-contained fashion
how to adequately specify the ICS for the seven fundamental entities (19). The
resulting ICS are given above in equations (65), (70), (71), (72), (74), (75),
(79), (85), (86), and (87). The just mentioned ICS must satisfy by
construction the basic eigenvalue and eigenvector properties (8), (9), (10),
(11)-(16), (17) listed in Section 2 of the main text; this may serve as a
useful consistency check.
Let us finally mention two important questions which still need to be
addressed:
* (i)
A nontrivial puzzle arises as to how a given multiplet of simple binary crossings
$(\delta_{\rm in},\lambda_{\rm in},M_{\rm in}\geq 2)$ should be split into
specific subgroups (clusters) characterized by the same $\lambda_{k}(\delta)$.
* (ii)
Another nontrivial puzzle concerns the consistent choice of the $M$ sign factors
$(\sigma_{1},\sigma_{2},\cdots,\sigma_{M})$ for a given $k$-th cluster in
equations (71)-(72); see the above discussion of requirement (73).
Both puzzles (i)-(ii) are resolved correctly iff an explicit solution of our
EOM (24), (26), (31), (43), (44), (45), (46), starting from the just discussed
ICS, provides unique outcomes (19) which do possess the basic properties (8),
(9), (10), (11)-(16), (17) of Section 2 for all values of $\delta$ considered
in the propagation. On the other hand, any inconsistency detected during the
propagation of our EOM, manifested e.g. by violation of any of the
properties (8), (9), (10), (11)-(16), (17), would inevitably imply an
incorrect resolution of one or both of the aforementioned issues (i)-(ii).
Hence the two puzzles (i)-(ii) can be uniquely resolved simply on a trial-
and-error basis, even in situations where a direct answer to (i)-(ii) is not
a priori obvious.
# Beam Loss Monitors at LHC
B. Dehning
CERN, Geneva, Switzerland
###### Abstract
One of the main functions of the LHC beam loss measurement system is the
protection of equipment against damage caused by impacting particles, which
create secondary showers that dissipate their energy in matter. Reliability
requirements are scaled according to the acceptable consequences and the
frequency of particle impact events on equipment. Increasing reliability often
leads to more complex systems. The downside of complexity is a reduction of
availability; therefore, an optimum has to be found for these conflicting
requirements. A detailed review of selected concepts and solutions for the LHC
system will be given to show approaches used in various parts of the system,
from the sensors, signal processing, and software implementations to the
requirements for operation and documentation.
Keywords
Machine protection; equipment protection; beam loss; dependability.
## 1 Introduction
After an LHC beam loss project study phase, a functional specification is
compiled. The specification introduces the subject, first viewing the project
globally by treating:
* •
location of monitors;
* •
time response;
* •
dynamic range;
* •
safety and reliability requirements.
The safety and reliability requirements need to be discussed at the system
level, to define the overall quantitative requirements. The time response,
dynamic range, safety, and reliability requirements limit the choice of
sensors and define the acquisition chain. With the knowledge obtained in the
project study phase, the following choices are made:
* •
sensor: ionization chamber;
* •
acquisition chain: distributed system with local and independent beam inhibit
functionality.
A more detailed treatment of the global safety and reliability requirements
has been covered in study groups and thesis projects. The subjects treated
include:
* •
acquisition chain with:
* –
parallel and voting for safety and reliability requirements;
* –
radiation-tolerant electronics;
* •
fail-safe system;
* •
data flow path;
* •
management of settings;
* •
functional tests;
* •
preventive actions;
* •
firmware updates;
* •
reliability software;
* •
human errors;
* •
documentation.
Several of these aspects will be discussed in this paper, and examples will be
presented from the LHC beam loss monitoring system.
## 2 Global beam loss measurement requirements
For a beam loss protection system, the possible loss locations and therefore
also the potential damage location are unknown parameters, to be addressed by
particle tracking and particle shower simulations. In a second step, the
optimal sensor locations are also determined by particle shower simulations.
For the LHC, the considerations are illustrated in Fig. 1. The electrodes of
the beam position monitors are retracted to be shielded by the nearby vacuum
chamber walls against particle impacts, which could create electrical charges
on the electrodes and disturb the beam position measurement.
Figure 1: Loss location considerations: aperture between a LHC bending magnet
(MB) and a quadrupole magnet (MQ). The change in aperture is mainly controlled
by the connection bellow and the beam position monitor (BPM) location. BLM,
beam loss monitor.
An aperture limitation results in a concentration of losses if off-orbit
protons approach the aperture. At the LHC, this is the case for every
transition between a bending and a quadrupole magnet. This can be visualized
by the tracking simulation (Fig. 2), resulting in a maximum at the beginning
of the quadrupole magnet.
Figure 2: Number of lost protons and beta function values, with a schematic of
an LHC regular cell, as a function of the location along the lattice. MB,
bending magnet; MQ, quadrupole magnet.
These loss locations are most probable, because:
* •
the beta function, and therefore the beam size, is maximal;
* •
orbit bumps have a maximum at this location, because of the location of a
dipole corrector magnet near to the quadrupole magnet;
* •
alignment errors are possible, causing an additional aperture limitation.
The shower particles initiated by lost protons can be best observed outside of
the magnet yoke about a metre downstream of the proton impact location (see
Fig. 3).
Figure 3: Number of secondary particles as a function of location along the
lattice. MB, bending magnet; MQ, quadrupole magnet.
A second maximum occurs at the downstream transition between the quadrupole
and bending magnet, owing to the reduced material in the transition region. To
make use of the high particle signal, resulting in the lowest statistical
measurement error, the ionization chambers are located at or near to particle
shower maxima (see Fig. 3, red and blue rectangular areas). A separation
between the losses from beams 1 and 2 is given by the different locations of
the shower particle maxima, owing to their opposite directions.
The LHC ionization chambers are cylindrical, with a sensitive volume of
$\Unit{1.5}{l}$, covered by a yellow insulating tube and are mounted on the
outside of the magnets or near collimators (see Fig. 4, bottom right, red and
blue rectangular areas).
Figure 4: LHC tunnel photos with ionization chambers (yellow tubes) mounted on
the outside of magnets and schematic of an ionization chamber near a
collimator. BLM, beam loss monitor; IC, ionization chamber; SEM, secondary
emission monitor.
The limits of the time response and dynamic range requirements for LHC
protection are mostly defined by the quench curves of the bending magnets. The
quench levels of the magnets are orders of magnitude lower than the damage
levels of the magnets. Magnet quenching is avoided, because of the gain in
operational efficiency, by extracting the beam from the ring and therefore
ending the deposition of heat in the coil before quenching can occur. In the
case of a quench, the magnet coil warms up, and the subsequent cool-down takes
between 6 and $10\Uh$.
Figure 5: Proton density rate as function of loss duration. Different curves
indicate the functional dependence for different energies and the defined
observation range. Red arrow, required proton density rate dynamic; blue
arrow, duration dynamic.
The allowed particle loss rate (see Fig. 5), in protons per metre per second, is
shown as a function of the loss duration. The characteristic superconducting
magnet quench level curves are due to the quench margin of the superconducting
cable filaments and the superfluid cooling of the cables and the whole magnet
coil. For short-duration losses, the quench level is about four orders of
magnitude higher than for steady-state losses, and between the two LHC nominal
beam energies of $450\UGeV$ and $7\UTeV$ a variation of about two orders of
magnitude is seen.
The time resolution of the loss measurement system, $40\Uus$, is determined by
the duration of the extraction of the beam from the LHC, $89\Uus$, together
with signal propagation and synchronization considerations. The maximum
duration is given by the point at which the steady-state quench level is
reached, at about $80\Us$ (see Fig. 5, blue arrow).
The maximal signal value is defined by the crossing of the $89\Uus$ line and
the quench level at $450\UGeV$. Owing to an optimization process for the LHC
acquisition electronics, the value has been chosen a little lower (see Fig. 5,
vertical dashed black line ($89\Uus$) and thin green line). The lower limit of
the dynamic range is given by the steady-state quench level for $7\UTeV$ and
the need to observe losses, for accelerator tuning purposes, below the quench
level (see Fig. 5, thin blue line, $80\Us$). These considerations led to a
required signal dynamic of over seven orders of magnitude (see Fig. 5, red
arrow). Operational experience required that the dynamic upper value be
extended by two orders of magnitude for short-term losses in injection areas.
## 3 Safety system design approach
All considerations start with the recognition that non-conformal behaviour,
with its probable frequency and probable magnitude, could damage the system
integrity. The combined likelihood of frequency and magnitude
determines the risk for a certain system (see Fig. 6, first column). The risk
can be reduced by using a safety system providing protection, but increased
complexity reduces the availability of the protected system (see Fig. 6, first
row). To arrive at a quantitative demand for a safety level, the probable
frequency of events and the probable magnitude of their consequences are
utilized by the SIL (safety integrity level) approach [1] or the 'as low as
reasonably practicable' (ALARP) approach.
Figure 6: LHC protection system design approach (items in green are discussed
in this paper). ALARP, as low as reasonably practicable; SIL, safety integrity
level.
For both approaches, a failure probability per unit time is estimated by
calculating the risk of damage and the resulting downtime of the equipment
[2]. A failure in the safety system itself should result in a fail-safe state,
with the consequence of reducing the operational efficiency. The main design
criteria for the safety system are listed in the safety column of Fig. 6:
fail-safe, redundancy, survey, and functional check. The protection column of
Fig. 6 lists the methods for the protection of an accelerator: stop of next
injection applicable for a one-path particle guiding system (linac, transfer
line) and extraction of the beam for a multipath system (storage ring). The
accelerator safety system consists of a beam loss measurement system, an
interlock system, and a beam dump system. If superconducting magnets are used,
some beam loss protection could also be provided by the quench protection
system. The availability column of Fig. 6 lists the means used in the design
of the safety system to decrease the number of transitions of the system into
the fail-safe state. The effect of the number of components added to a system
to increase the probability of a safe operation results in a reduction in the
availability of the system. This negative consequence of the safety-increasing
elements is partially compensated by the choice of reliable components, by
redundancy, voting, and the monitoring of drifts of the safety system
parameters.
## 4 Failure probability and failure rate reduction
To illustrate the available means of increasing safety, the system's basic
functional dependencies are discussed. An often-valid assumption is an
exponential time dependence of the failure probability $F(t)$ (Fig. 7).
With increasing time, the probability of the occurrence of a failure in the
system approaches 1. The failure rate, $\lambda$, is assumed to be
time-independent (Fig. 8, magenta curve). In a next step, two systems with the
same functionality are assumed to be working in parallel, to allow redundant
operation. The failure rate then decreases drastically for short times,
but finally approaches the failure rate of a single system (Fig. 8, blue
line).
Figure 7: Exponential failure probability.
Figure 8: Failure rates of different systems as a function of time (arbitrary
units). Magenta: single system. Blue: two systems in parallel. Green: parallel
systems with survey. Red: parallel systems with survey and with regular tests.
It should be noted that the failure rate curve changes from time-independent
to time-dependent behaviour. A further reduction in the failure rate can be
reached by a survey of the system. With a system survey, some failure modes
can be detected in advance and a repair can be planned (see Fig. 8, green and
red lines). This procedure results in a shift of the failure rate curve to
lower values, which no longer approaches the single-system rate at infinite
times. Another strong reduction can be reached if the system can be regarded
as new after a certain time period. The failure rate curve then repeats the
time dependence of the surveyed system in the period $t_{0}=0$ to $t=t_{1}$
after every such period (see Fig. 8, red lines). The conclusion that a system
can be regarded as new after a certain time is justified if the system is
subjected to a test. Functional tests verify, on request, that the system has
the defined functionality. In the case of an internal system failure, the very
basic requirement is fail-safe behaviour. An internal failure will then not
contribute to the unsafeness of the system, but will contribute to its
non-availability.
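The qualitative behaviour of the single and redundant failure rate curves can be reproduced in a few lines (a sketch of ours, with an arbitrary rate rather than LHC values): for two identical units in parallel, the survival probability is $R(t)=1-(1-e^{-\lambda t})^{2}$, and the hazard rate follows as $h(t)=2\lambda(1-e^{-\lambda t})/(2-e^{-\lambda t})$.

```python
import numpy as np

lam = 1.0                         # failure rate of a single unit (arbitrary units)
t = np.linspace(0.01, 5.0, 500)

# Single system: F(t) = 1 - exp(-lam*t), constant hazard rate h(t) = lam
h_single = np.full_like(t, lam)

# Two identical units in parallel (redundant 1-out-of-2 operation):
# R(t) = 1 - (1 - exp(-lam*t))**2  and  h(t) = -d ln R / dt
h_parallel = 2 * lam * (1 - np.exp(-lam * t)) / (2 - np.exp(-lam * t))

# h_parallel starts near zero and approaches lam at large t, reproducing
# the qualitative shape of the blue curve in Fig. 8
```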
## 5 Protection system overview
As an example of a protection system, the CERN LHC beam loss monitoring (BLM)
system will be used. The discussion will focus on protection, reliability, and
availability aspects.
The main purpose of the BLM system is to convert particle shower information
into electrical signals, which are then compared with limits. If the limits
are exceeded, extraction of the LHC beam from the ring is initiated to stop
the irradiation of equipment. In the case of the LHC, the protection function
is often linked to the quench prevention of the superconducting magnets, since
the threshold levels for beam extraction are lower (orders of magnitude) than
for the damage protection of equipment [3].
The very first element of the protection system is the sensor that detects the
irradiation of equipment. The conversion of the particle shower magnitude is
done by ionization chambers [4] or secondary emission detectors [5] (see Fig.
9, left block). The front-end acquisition electronics convert the analogue
detector signal into a digital signal and transmit the signal to the back-end
acquisition and control unit, which is the decision-making centre of the whole
system. The measured signals arrive here and are compared with the limits. In
addition, beam permit signals are generated (see Fig. 9, red block), taking
the information of the system settings (see Fig. 9, right-hand blocks) into
account. The measurement data and all setting information are also distributed
to the display and the logging databases (see Fig. 9, bottom blocks) from this
unit. The control functionality is linked to the survey and test
functionality, which are discussed later.
Figure 9: Information flow from the sensor up to the beam permit signal
transmission. The red framed (back-end acquisition and control) unit is the
local decision-making centre.
In the LHC, ionization chambers [4] and secondary emission detectors [5] are
used. Their signals are digitized using a current-to-frequency converter [6,
7] (see Fig. 10, front-end acquisition unit in the tunnel). Up to the end of
the analogue signal chain, the signal path is not redundant, because no
technical solution has been found to the problem of splitting the detector
signal while simultaneously preserving a large signal dynamic (nine orders of
magnitude). To cope with this requirement, a low-failure-rate circuit concept
has been chosen for the analogue front-end unit. To avoid the consequences of
single event effects, and to increase the availability of a channel, the
signal is trebled in the front-end logic. Two voting blocks are used to
generate the signal transmitted over a redundant optical link. A redundant
optical link has been chosen to increase the availability of the link, which
is limited by the mean time between failures of the transmission laser.
Figure 10: CERN LHC beam loss measurement and protection system. CRC, cyclic
redundancy check.
The signals are decoded and cyclic redundancy checks (CRCs) are calculated for
both signal chains (see Fig. 10, back-end acquisition unit at the surface). At
the front-end unit, CRCs are also calculated and transmitted, to enable the
CRCs of each line and also the CRCs for both lines to be compared. This
procedure ensures high reliability and also maximizes the availability of the
data link [8, 9].
The effects of implementing redundancy and trebling in the data transmission
and treatment, and of the verification of loss-free data transmission, are
listed in Table 1. The most important technique for increasing the reliability
of a system is a fail-safe design. In the case of an internal failure of a
system, it should make the transition to a state that ensures the protection
of the system. This can be done by assigning the active state to 'system is
allowed to operate'. In the case of an internal failure, or if no power is
supplied, the state will switch to a passive state and the system will be
protected.
Table 1: Procedures and techniques to increase the reliability and availability of acquisition systems

Technique | Comment | Safety gain | Availability gain
---|---|---|---
Fail-safe | Active state = beam permit | Yes | No
Voting | | Yes | Yes
Redundancy | | Yes | Yes
CRC | Cyclic redundancy check | Yes | No
## 6 Fault tree analysis
The fault tree treatment of the system has been chosen to calculate, from the
component level up to the system level, the damage risk, the false alarm
probability, and the warning probability [10], taking into account the
component failure, repair, and inspection rates.
The false alarm slice of the fault tree (see Fig. 11) shows the signal chain
for the different false alarm generators (memory, beam energy from the control
unit (combiner), and energy transceiver) of the back-end electronics [11]. The
different inputs are linked with a Boolean 'OR' gate, so that every single
input generates a false alarm in the same way and, therefore, downtime of
the system and the LHC.
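Numerically, an 'OR' gate combines the input probabilities as $P_{\rm OR}=1-\prod_i(1-p_i)$; the sketch below (ours, with illustrative probabilities rather than the values obtained for the LHC) makes this explicit:

```python
# Illustrative per-interval false alarm probabilities of the OR-gate inputs
p_inputs = {"memory": 1e-5,
            "beam_energy_from_combiner": 2e-5,
            "energy_transceiver": 5e-6}

# Probability that at least one input fires, i.e., the OR-gate output
p_no_alarm = 1.0
for p in p_inputs.values():
    p_no_alarm *= (1.0 - p)
p_false_alarm = 1.0 - p_no_alarm
print(f"OR-gate false alarm probability: {p_false_alarm:.2e}")
```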
Figure 11: Image section of the false alarm generation fault tree of the LHC
BLM system, showing the part describing the back-end acquisition unit.
The results of the fault tree analysis have been essential for the design of
the hardware and software, especially for the estimates of the failure rates
of the optical links and their propagated consequences up to the system damage
and false alarm probabilities. An optimization process has been undertaken to
balance the probabilities of damage and false alarms. The failure rate
calculations also lead to the definition of functional tests and their
frequencies. Failure modes are also defined for the limit values, detector
names, channel assignments, and much more data needed by the system.
Therefore, setting management and metadata verification tests are also treated
in the fault tree analysis.
## 7 Functionality checks
As an example of a check, the signal distribution inside the VME crate for the
beam energy and the beam permit line test is discussed [12, 13] (see Fig. 12).
The test is initiated by a client, to allow optimal scheduling. The control
unit (combiner card) holds a downtime counter requiring the execution of
functional tests every 24 hours. If the tests are not completed in time, the
downtime counter inhibits the beam permit: immediately if no beam is
circulating, or when the beam present flag becomes false. For the tests, the
whole system changes its status to test mode and the control unit sends a
request to inhibit the beam permit line to each acquisition card (threshold
card) in sequence (see Fig. 12), as sketched below.
Figure 12: Beam permit line functionality check.
Figure 13: Check of the whole acquisition chain.
The test results are analyzed by the controller; if a false status is
detected, a manual intervention is required to repair the system before the
test can be passed. The distribution of the beam energy levels between the
controller and the acquisition card is tested by changing the energy levels in
the test mode; this should result in the acquisition card returning the
appropriate threshold settings for comparison with the settings sent.
In a second example, the test of the whole acquisition chain is presented [14,
15]. An electrical signal is introduced in the sensor by capacitive coupling
of the sensor electrodes and by a harmonic modulation of the applied high
voltage supply (see Fig. 13). This test includes the complete signal chain,
except for the ionization process in the ionization chambers and the secondary
electron emission in the secondary emission monitor detectors. The conversion
of the particle shower to an electrical signal in the detector is tested every
few years with a radioactive source placed outside the detector. The long
interval between these tests is possible because the failure modes of a
complete gas exchange with air (ionization chamber) or loss of the vacuum
(secondary emission detector) will still result in an appropriate signal,
without loss of protection functionality. This test, too, is initiated, and
its results analyzed, by the back-end unit (survey and control) (see Fig. 13),
allowing the beam permit line to be inhibited directly in the case of a
negative result.
## 8 Setting management
The system setting management controls the settings for the beam permit
thresholds and also the settings used for system operation [16, 17]. These
operational settings include hardware and firmware information, to verify that
the configuration stored in the database mirrors the installed system. Table 2
illustrates the variety of the metadata needed to interpret the measured
values or to check the configuration of the system. For example, a match
between the measured value, channel official names, channel expert names, DCUM
(position of monitor), and monitor coefficient needs to be given and tested.
To reduce the complexity of the metadata information chain (see Fig. 9, right
blocks), a single path is defined for the flow of metadata and measurement
values into the back-end unit. The back-end unit distributes the measurement
values together with the metadata, to ensure consistency and to have only one
location where the data integrity needs to be tested. This concept is
essential to reduce the number of possible failure modes for metadata
corruption.
Table 2: Parameters deployed on each back-end unit (threshold comparator module)

Parameters | Data (32-bit words) | Description
---|---|---
Threshold values | 8192 | 16 channels $\times$ 12 sums $\times$ 32 energies
Channel connected | 1 | Generating (or not) a beam permit
Channel mask | 1 | 'Maskable' or 'unmaskable'
Serial A | 1 | Card's serial number (channels 1–8)
Serial B | 1 | Card's serial number (channels 9–16)
Serial | 2 | Threshold comparator
Firmware version | 1 | Threshold comparator's firmware
Expert names | 128 |
Official names | 128 |
DCUM | 16 | Position of monitor
Family names | 128 | Threshold family name
Monitor coefficients | 16 | Monitor threshold coefficients
Last link-state advertisement update | 2 | Time stamp: master table
Last flash update | 2 | Time stamp: non-volatile memory
Flash checksum | 1 | CRC value for table integrity
Figure 14: Comparison of descriptive metadatabase reference settings with
settings in the back-end acquisition and control unit. The decision logic is
indicated in the flow diagram. FPGA, field-programmable gate array.
Having stressed the importance of a failure-mode-optimized metadata flow, the
data check is achieved by comparing the data stored in a reference setting
database (Oracle) with the data stored in the memory of the back-end
electronics field-programmable gate arrays (see Fig. 14). In this test, too,
a downtime counter located in the back-end unit (survey and control) requests
a comparison of the data stored at both locations every $24\Uh$. If the test
is not initiated, or if the test result is negative, the beam permit is
inhibited. Since the comparison is made in a different software environment,
the additional functionality required in the back-end unit is marginal, but it
is necessary to test the comparison code from time to time.
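Conceptually, the daily check amounts to comparing a checksum of the reference table with that of the deployed table, as in this sketch (ours; a CRC over a canonical serialization stands in for the flash checksum of Table 2, and JSON-serializable rows are assumed):

```python
import json
import zlib

def tables_consistent(reference_rows, deployed_rows) -> bool:
    # Canonical serialization so that equal tables yield equal checksums
    ref = zlib.crc32(json.dumps(reference_rows, sort_keys=True).encode())
    dep = zlib.crc32(json.dumps(deployed_rows, sort_keys=True).encode())
    return ref == dep  # a mismatch (or a missed 24 h check) inhibits the beam permit
```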
### 8.1 Descriptive metadata
Metadata need to be generated, and the option for required changes needs to be
provided. To reduce human error, the graphical user interfaces (GUI) accessing
the setting database (see Fig. 9, right block) need to be optimized by
allowing all data manipulation steps to include comparisons with previous
data, checks on the magnitudes of changes, and several confirmation steps. The
last confirmation steps require the electronic signatures of two independent
persons.
The generation of the sets of metadata required initially, and for larger
changes during the operation periods, is done for the LHC system by a GUI for
database access. The generation of metadata, such as limits for the beam abort
thresholds, is parameterized, and the calculation is made by code loaded into
the database (Oracle) (see Fig. 9, rightmost block). Performing the calculation
in the database environment, where database software changes and updates are
made in a coherent manner, should ensure long-term maintainability [18].
### 8.2 Documentation
In a complex system, designed for operation over decades, sufficient
documentation is essential to describe the system for knowledge transfer. For
a safety system, the function of the documentation is to avoid failure modes
and failures. The design documentation, from the specification to
documentation on operation and system changes, needs to be distributed for
review, comment, and final approval by each client. At the LHC, standardized
forms, electronic procedures, and signatures are in use to organize the
process; an engineering change request outlines the motivation for a change,
the description of the proposed change, and an estimate of the impact of the
change on the functionality of the concerned system and other systems.
## 9 Snapshots of loss measurements triggered by events
The loss measurement recording has been set up at different speeds, with
$40\Uus$, $80\Uus$, $80\Ums$, and $1.3\Us$ integration times. The first two
periods are event-triggered, to cope with the amount of data, while the latter
periods are read out at $12\UHz$ and $1\UHz$. The event-triggered measurements
are used to analyze losses occurring at particular times during operation, or
following data-acquisition freezing events that depend on measurements and
output analysis. The $12\UHz$ measurements are used for the collimator
positioning feedback system and the $1\UHz$ measurements are used for
continuous observation of the accelerator status.
Figure 15: Example of a particle loss triggered event recording. The trigger
has been generated at $1.74\Us$. The measurements recorded before the trigger
event reveal loss precursors. The losses are caused by collisions between the
beam and dust particles.
High-resolution data have been used not only for detailed studies of beam
losses caused by dust events (see Fig. 15), but also to check for
non-conformities of the acquisition system. When testing the system under
extreme conditions, high loss levels with a large leading signal transition
give an insight into the system performance. The advantage of publishing
different measurement signals is that it enables consistency checks to be
performed. In the LHC, several clients are used to check the consistency of
measurement data.
## 10 Acquisition database
The storage and fast retrieval of measurement data and metadata is also
essential for system checks. Besides the examples discussed previously, for
which extended data storage was required, an extreme case is the check of the
noise amplitudes of the system (see Fig. 16). For a protection system with
limits leading automatically to a beam abort and to accelerator downtime,
there is a strong requirement to avoid false aborts caused by rare events
(noise). This case is extreme because rare signals need to be retrieved from
measurement data stored over acquisition periods lasting weeks. The
measurements with the shortest integration period of $40\Uus$ show the
largest signal fluctuation, because there is no signal averaging to reduce
the fluctuations. To reduce the amount of data to be stored, an
on-line measurement data reduction algorithm has been implemented in the
back-end unit. Only the maximum values of the short integration times are
stored for the $1\UHz$ read-out. This procedure reduces the quantity of data
to be stored by over four orders of magnitude. In addition, a database
structure optimized for retrieval time has been implemented for this purpose.
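The reduction step itself is a running maximum, as in this sketch (ours): at a $40\Uus$ integration time there are $25\,000$ fast samples per second, and keeping only their maximum for the $1\UHz$ read-out collapses them to a single value, i.e., a reduction by more than four orders of magnitude.

```python
import numpy as np

SAMPLES_PER_SECOND = 25_000   # 1 s / 40 us integration time

def reduce_to_1hz(samples_40us: np.ndarray) -> np.ndarray:
    # Keep only the maximum of the short integrations in each 1 s window
    n = len(samples_40us) // SAMPLES_PER_SECOND * SAMPLES_PER_SECOND
    return samples_40us[:n].reshape(-1, SAMPLES_PER_SECOND).max(axis=1)
```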
Figure 16: Noise level determination of all beam loss monitor channels. The
LHC loss monitor channels are grouped by the observed loss, creating elements
of cold and warm magnets and collimators. Top: Beam loss monitor noise signal
taken with no beam circulating versus beam abort thresholds. The blue line
indicates the threshold value and the red line the maximum noise goal set to
avoid any noise false beam aborts. Bottom: Beam loss monitor spectrum
normalized to the beam abort threshold.
## 11 Preventive action
The discussion in Section 4 emphasized the reduction in failure rate achieved
by surveying the system, to anticipate possible failure modes. In the LHC
system, this survey task is realized by the daily retrieval of relevant
database information and an automatic comparison with limits for initiating
actions. Reports containing different levels of abstraction are produced daily
and weekly.
Figure 17: Optical link failures and printed circuit board temperatures versus
time of day
An example of this procedure is given by the survey of the optical links. The
links are redundant (see Fig. 10), and the calculation of the different CRCs
enables the differences between the CRC values to be recorded and correlated
with board temperature variations (see Fig. 17). The limits for actions are
set empirically, to minimize downtime and maintenance efforts.
## 12 Summary
A systematic design approach for machine protection systems will start with
the determination of the system failure rate. The failure rate magnitude can
be based on well-established standards first developed for the design of
military equipment, the aircraft industry, space missions, or nuclear power
stations. The effect of increasing complexity by adding protection
functionalities, and thereby reducing availability, is best studied with
reliability software packages [19]. The basic means of reducing the failure
rate are provided by a system layout with parallel, redundant information
treatment, in combination with a regular survey of the system status and
functional tests. A survey allows preventive actions, which reduce the failure
rate. For a protection system, a fail-safe design is essential, so that
protection is ensured in the case of a failure.
Functionality checks staged at all levels of the signal treatment are
implemented for the LHC BLM system. The checks of the information exchange
inside the VME crate and of the analogue and digital signal chain have been
discussed. Examples have been given to emphasize the importance of the
metadata information flow. Combining measurement data and metadata as early as
possible in the signal chain is important for the reduction of failure modes
and for simplified test options. To attain low failure rates, rigorous
metadata tests have to be implemented, to ensure metadata conformity. The
generation of metadata, and the options for changing them through a graphical
interface, also need to be analyzed in terms of failure modes, taking into
account long-term usage and the maintainability of tests and validation
procedures in the future. For the LHC, the most stringent requirement for
avoiding human error is the request of two signatures to validate metadata
changes. Although listed last, documentation tasks should be started first,
including planning for reliability measures, and have to be continued for as
long as the system exists.
## References
* [1] International Electrotechnical Commission, IEC 61508. IEC, 2010.
* [2] G. Guaglio, Reliability of beam loss monitors system for the Large Hadron Collider, 11th Beam Instrumentation Workshop, Knoxville 2004 (AIP, 2004), vol. 732, p. 141, http://hal.in2p3.fr/in2p3-00025196
* [3] B. Dehning et al., Overview of LHC beam loss measurements, 2011, p. THOAA03, https://cds.cern.ch/record/1379469
* [4] M. Stockner, Ph.D. thesis, Technische Universität Wien, 2006.
* [5] D. Kramer, Ph.D. thesis, Technical University of Liberec, 2008.
* [6] E. Effinger et al., The LHC beam loss monitoring system’s data acquisition card, 12th Workshop on Electronics for LHC and Future Experiments, Valencia, Spain, 2006, p. 108, http://cdsweb.cern.ch/record/1027422
* [7] E. Effinger et al., Single gain radiation tolerant LHC beam loss acquisition card, Proc. DIPAC, Venice, Italy, 2007. p. 319. http://accelconf.web.cern.ch/Accelconf/d07/papers/wepc06.pdf
* [8] C. Zamantzas, et al., An FPGA based implementation for real-time processing of the LHC beam loss monitoring system’s data, San Diego, 2006, IEEE Nucl. Sci. Symposium Conf. Record (2006), vol. 2, p. 950, http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4179157
* [9] C. Zamantzas, Ph.D. thesis, Brunel University, 2006.
* [10] G. Guaglio, Ph.D. thesis, Université Blaise Pascal, Clermont-Ferrand II, 2005.
* [11] Reliability software from Isograph – world leaders in reliability, maintenance and safety, http://www.isograph.com
* [12] C. Zamantzas et al., Reliability tests of the LHC beam loss monitoring FPGA firmware, 14th Beam Instrumentation Workshop, Santa Fe, New Mexico, 2010, https://cds.cern.ch/record/1268403
* [13] B. Dehning et al., Self testing functionality of the LHC BLM system, 10th European Workshop on Beam Diagnostics and Instrumentation for Particle Accelerators, Hamburg, Germany, 2011, p. 152, https://cds.cern.ch/record/1375171
* [14] J. Emery, et al., First experiences with the LHC BLM sanity checks, Topical Workshop on Electronics for Particle Physics 2010, Aachen, Germany, 2010 [J. Instrum. 5 (2010) C12044. http://dx.doi.org/10.1088/1748-0221/5/12/c12044], https://cds.cern.ch/record/1321592
* [15] J. Emery et al., LHC BLM single channel connectivity test using the standard installation, Beam Diagnostics and Instrumentation for Particle Accelerators, Basel, Switzerland, 2009, https://cds.cern.ch/record/1183414
* [16] E. Nebot Del Busto et al., Handling of BLM abort thresholds in the LHC, 2nd International Particle Accelerator Conference, San Sebastian, Spain, 2011, p. WEPC170, https://cds.cern.ch/record/1379461
* [17] E. B. Holzer et al., Generation of 1.5 million beam loss threshold values, 11th European Particle Accelerator Conference, Genoa, Italy, 2008, p. THPC147, https://cds.cern.ch/record/1124306
* [18] M. Nemcic, B.Sc. thesis, University of the West of England, Bristol 2012, http://ab-div-bdi-bl-blm.web.cern.ch/ab-div-bdi-bl-blm/talks_and_papers/Nemcic
* [19] S. Bhattacharyya, Ph.D. thesis, Ohio State University, 2012.
# Control-DAG: Constrained Decoding for Non-Autoregressive Directed Acyclic T5
using Weighted Finite State Automata
Jinghong Chen, Weizhe Lin, Jingbiao Mei, Bill Byrne
Department of Engineering
University of Cambridge
{jc2124, wl356, jm2245<EMAIL_ADDRESS>
###### Abstract
The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that
performs well in Neural Machine Translation. Two issues prevent its
application to general Natural Language Generation (NLG) tasks: frequent Out-
Of-Vocabulary (OOV) errors and the inability to faithfully generate entity
names. We introduce Control-DAG, a constrained decoding algorithm for our
Directed Acyclic T5 (DA-T5) model which offers lexical, vocabulary and length
control. We show that Control-DAG significantly enhances DA-T5 on the Schema
Guided Dialogue and the DART datasets, establishing strong NAR results for
Task-Oriented Dialogue and Data-to-Text NLG.
## 1 Introduction
Non-autoregressive (NAR) models for text generation offer the promise of much
faster generation than auto-regressive (AR) models. However NAR models have
been largely developed for Neural Machine Translation (NMT) Xiao et al.
(2022), with other Natural Language Generation (NLG) tasks less well studied.
We will show how a NAR model developed for NMT, the Directed Acyclic
Transformer (DAT) (Huang et al., 2022), can be used for generation in Task-
Oriented Dialogue (TOD) and Data-to-Text (D2T) scenarios.
DATs as originally developed for NMT perform poorly in NLG on TOD and D2T
tasks: they fail to generate specified entity names in up to 40% of responses
and frequently (>20%) produce Out-Of-Vocabulary (OOV) words. Practical systems
must operate at zero error rate in these aspects to be deployable at scale.
Previous NAR studies report similar error patterns Xiao et al. (2022). Unless
these shortcomings are addressed, NAR models will not be usable for general
NLG.
We introduce three constrained decoding procedures for NLG using DATs. Our
approach converts the Directed Acyclic Graphs (DAGs) generated by DAT into
Weighted Finite State Automata (WFSA). We then intersect these WFSAs with
other automata defined so as to ensure that designated entities are generated
(lexical constraints) and OOVs are eliminated (vocabulary constraints). To
avoid generating responses that are too short, we employ a Viterbi decoding
algorithm to control the target length of the generated text (length
constraints).
We refer to the decoding procedure that incorporates all these steps as
Control-DAG. We evaluate extensively on the Schema Guided Dialogue (SGD)
(Rastogi et al., 2020) and the Data Record To Text (DART) datasets (Nan et
al., 2021) for NLG in TOD and D2T domains. Our Directed Acyclic T5 model, when
decoded with Control-DAG, is free from OOV error, faithfully generates all
specified entity names, and achieves marked BLEU and BLEURT gains on both
datasets. We use pynini Gorman (2016) for WFSA operations. Our contributions
are summarized below:
1. 1.
We introduce Control-DAG, a constrained decoding algorithm which
simultaneously offers lexical, vocabulary, and length controls for Directed
Acyclic models, addressing key limitations in NAR text generation.
2. 2.
We demonstrate the effectiveness of Control-DAG on two major NLG tasks: Task-
Oriented Dialogues and Data-to-Text. To our knowledge, DA-T5 with Control-DAG
is the first practical NAR benchmark on the SGD and the DART datasets (code: https://github.com/EriChen0615/ControlDAG).
Figure 1: Control-DAG with lexical, vocabulary, and length constraints. 1.
Directed Acyclic T5 (DA-T5) takes the input text to generate a Directed
Acyclic Graph (DAG). 2. The DAG is pruned by likelihood, keeping $K_{e}$ most
likely output tokens and $K_{t}$ most likely out-going arcs, and converted
into a Weighted Finite State Automaton (WFSA). We show WFSA vertices and arcs
in the upper-right corner. 3. For lexical and vocabulary constraints,
constraint FSAs are built from equivalent regular expressions (Sec.3.1). The
length target predictor is a simple linear predictor based on the input
sequence length (Sec.4). 4. We intersect the WFSA with constraint FSAs to
obtain a constrained WFSA which only contains hypotheses that satisfy all
lexical and vocabulary constraints. 5. DFS-Viterbi is used to obtain the most
likely string in the constrained WFSA that satisfies the length constraint.
## 2 Related Work
The Directed Acyclic Transformer (DAT) Huang et al. (2022) performs on par
with AR baselines in NMT and has attracted much interest. Shao et al. (2022)
developed a Viterbi decoding algorithm for DAT. Ma et al. (2023) introduced a
fuzzy alignment objective to improve DAT training. In NLG, PreDAT (Huang et
al., 2023) pretrains a DAT for open-domain dialogue, notably with high word
error rate reported even after extensive pre-training. Our work highlights the
links between DATs and automata, and shows well-studied WFSA algorithms Mohri
et al. (2002) can be used in constrained decoding to eliminate OOV errors.
Enforcing lexical constraints in auto-regressive decoding has been studied
extensively. Constrained beam search (CBS) Post and Vilar (2018); Hu et al.
(2019); Li et al. (2020) is a widely used family of lexically constrained
decoding procedure. We show how CBS can be adapted to NAR Directed Acyclic
models.
## 3 Constrained Decoding with DA-T5
The architecture of our DA-T5 model follows that of the DAT by Huang et al.
(2022). Conceptually, DAT takes an input sequence and generates a DAG with a
pre-determined number of DAG vertices. Vertex embeddings are produced first,
and then token emission probabilities and state transition probabilities are
generated from these vertex embeddings via softmax and self-attention, respectively.
Each vertex has a token emission distribution. These vertices and transitions
define a weighted DAG that contains output string hypotheses. DAT uses a
vanilla Transformer to produce vertex embeddings whereas we use T5, hence the
name DA-T5.
In training DA-T5, we use ‘glancing training’ Qian et al. (2021), as in DAT. In
inference, DAGs are generated with DA-T5 and converted to WFSAs. The procedure
is simply Moore-to-Mealy Machine conversion (Appendix B.1). Prior to the
conversion, we perform likelihood-based pruning of each vertex, keeping
$K_{e}$ most likely output tokens and $K_{t}$ most likely out-going arcs. This
pruning balances coverage against decoding speed, with larger thresholds
leading to a more complete WFSA at the cost of slower decoding.
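As an illustration, a minimal sketch of this per-vertex pruning in Python; `emission_logp` and `transition_logp` are assumed to be arrays of log-probabilities for one vertex, and `prune_vertex` is a hypothetical helper, not part of the released code:

```python
import numpy as np

def prune_vertex(emission_logp, transition_logp, K_e, K_t):
    """Likelihood-based pruning for one DAG vertex: keep the K_e most
    likely output tokens and the K_t most likely out-going arcs."""
    top_tokens = np.argsort(emission_logp)[-K_e:]   # indices of top-K_e tokens
    top_arcs = np.argsort(transition_logp)[-K_t:]   # indices of top-K_t successors
    return top_tokens, top_arcs
```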
### 3.1 Constrained Decoding
# | Decoding | BLEURT | BLEU | BLEU-BP | NEO$\downarrow$ | SER$\downarrow$ | Time | Spd. Up
---|---|---|---|---|---|---|---|---
_T5-small (Auto-regressive)_
1 | Greedy | 69.7 | 28.8 | 1.00 | 0.0 | 0.49 | 13:30 | x1.6
2 | Beam search (BS) | 70.2 | 29.1 | 1.00 | 0.0 | 0.12 | 16:05 | x1.4
3 | Constrained beam (CBS) | 65.6 | 22.5 | 1.00 | 0.0 | 0.0 | 22:15 | x1.0
_Directed Acyclic T5-small (Non-Autoregressive)_
4 | Greedy | 56.0 | 18.3 | 0.92 | 29.7 | 46.3 | 2:52 | x7.8
5 | Beam search | 55.6 | 16.0 | 0.60 | 20.7 | 20.6 | 6:50 | x3.3
6 | CBS-DAG | 59.8 | 21.7 | 0.73 | 19.2 | 0.0 | 5:57 | x3.7
7 | WFSA shortest path | 53.8 | 13.0 | 0.44 | 12.2 | 34.8 | 3:04 | x7.3
8 | w/ HLC | 58.1 | 20.2 | 0.58 | 11.0 | 0.0 | 5:16 | x4.2
9 | w/ VC | 54.0 | 14.1 | 0.45 | 0.0 | 47.5 | 4:18 | x5.2
10 | w/ LC (DFS-Viterbi) | 58.5 | 20.8 | 1.00 | 21.9 | 45.8 | 3:31 | x6.3
11 | Control-DAG | 60.0 | 22.9 | 1.00 | 0.0 | 0.0 | 13:14 | x1.7
Table 1: Main results on the SGD dataset. For reference, auto-regressive
T5-small by Kale and Rastogi (2020) achieves 26.2 BLEU and 0.80 SER. BP stands
for the brevity penalty term in computing BLEU. SER stands for Slot Error Rate
in percentage. All speed ups are computed against auto-regressive constrained
beam search. Constrained beam search (Row 3) forces the replication of slot
values that need to appear exactly and hence has zero slot error rate. CBS-DAG
(Row 6) refers to Constrained beam search adapted for Directed Acyclic Graph
introduced in Sec.3.1. HLC refers to Hard Lexical Constraint; VC is Vocabulary
Constraint; and LC is Length Constraint. Control-DAG (Row 11) is WFSA shortest
path decoding with HLC, VC, and LC applied simultaneously.
For hard lexical and vocabulary constraints we build corresponding Finite
State Automata (FSA). Intersecting the WFSA with these constraint FSAs
produces a WFSA that only contains hypotheses that satisfy all constraints
Mohri et al. (2002). For length constraints, we propose a pruned version of
DAT Viterbi decoding by Shao et al. (2022) to search for strings with
a specified length. Appendix B gives implementation details and complexity
analyses. Figure 1 illustrates our Control-DAG system with an example.
#### Hard Lexical Constraints (HLC)
For each phrase $C_{i}$ that must appear in the generation, we construct a
constraint FSA $A_{i}$ that accepts and only accepts strings where the phrase
$C_{i}$ appears at least once, corresponding to the regular expression
“$.\ast(C_{i}).\ast$” IEEE (2004). We then intersect the WFSA converted from
the DAG with all of the constraint FSAs. The resulting WFSA $W_{HLC}$ contains
only hypotheses that satisfy all lexical constraints.
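A minimal pynini sketch of this intersection, assuming byte-level acceptors (the actual system operates on T5 tokens) and assuming `sigma_star` is the Kleene closure over the WFSA's full symbol alphabet; `apply_hlc` is a hypothetical helper:

```python
import pynini

def apply_hlc(wfsa, phrases, sigma_star):
    """Intersect the WFSA with one ".*(C_i).*" acceptor per phrase."""
    for phrase in phrases:
        # Constraint FSA accepting exactly the strings containing `phrase`.
        constraint = sigma_star + pynini.accep(phrase) + sigma_star
        wfsa = pynini.intersect(wfsa, constraint)
    return wfsa.optimize()  # standard pynini FST optimizations
```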
#### Vocabulary Constraints (VC)
We build a vocabulary FSA $A_{vocab}$ that accepts and only accepts strings of
words from a valid vocabulary; intersection with $A_{vocab}$ prevents OOV
errors. $A_{vocab}$ is obtained from three FSAs: a dictionary FSA $A_{dict}$
that accepts and only accepts English words; a special token FSA $A_{spec}$
that accepts and only accepts numbers, punctuation, and special tokens; and a
dynamic FSA $A_{dyn}$ that accepts and only accepts entity names specified in
the input. The final vocabulary FSA $A_{vocab}$ is obtained by unioning the
three FSAs and taking the Kleene closure (Eq.1).
$A_{vocab}=(A_{dict}\cup A_{spec}\cup A_{dyn})^{*}$ (1)
For efficiency, we perform a one-time determinization and minimization Mohri
et al. (2002) of the union ($A_{dict}\cup A_{spec}$) and store the optimized
FSA in memory.
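A minimal pynini sketch of Eq. 1, again over byte strings rather than T5 tokens; `build_vocab_fsa` is a hypothetical helper:

```python
import pynini

def build_vocab_fsa(dict_words, special_tokens, entity_names):
    """A_vocab = (A_dict | A_spec | A_dyn)* as in Eq. 1."""
    a_dict = pynini.union(*(pynini.accep(w) for w in dict_words))
    a_spec = pynini.union(*(pynini.accep(t) for t in special_tokens))
    # The static part is determinized/minimized once and cached (Sec. 3.1).
    static = pynini.union(a_dict, a_spec).optimize()
    a_dyn = pynini.union(*(pynini.accep(e) for e in entity_names))
    return pynini.closure(pynini.union(static, a_dyn)).optimize()
```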
#### Length Constraints (LC)
Shao et al. (2022) introduced a Viterbi decoding procedure for DAT that finds
the highest scoring hypothesis for each string length. We find this exact
Viterbi procedure to be impractical because the number of WFSA states can be
large (>30,000) after intersection with the constraint FSAs. We introduce a
pruned version of this procedure, _Depth-First Search Viterbi (DFS-Viterbi)_.
DFS-Viterbi searches the WFSA with DFS and keeps the best hypotheses of all
possible string lengths at each vertex to avoid repeated computation. During
DFS, we only explore the minimal set of out-going edges such that their
cumulative probability is bigger than a threshold $p$. This pruning is
inadmissible but works well in practice. We also introduce an exponential
length penalty that penalizes strings shorter than target length $L_{tgt}$ and
select the hypothesis with the lowest overall cost. In experiments to follow,
$L_{tgt}$ is obtained via simple linear regression.
#### HLC with CBS
In addition to automata-based methods, we introduce CBS-DAG, a constrained
beam search algorithm for our NAR DA-T5. CBS-DAG is straightforwardly adapted
from AR CBS by Hu et al. (2019) (Appendix B.4).
## 4 Experiments and Results
We evaluate on the SGD and the DART datasets. In SGD, the aim is to generate
natural utterances from dialogue actions (e.g., INFORM(destination=Cambridge))
that contain the specified information. DART is a more general data-to-text
task that takes triplets of (SUBJECT, RELATION, OBJECT) to generate natural
texts. Hyper-parameters and implementation details are in Appendix A.
#### Metrics
We use BLEURT Sellam et al. (2020) and BLEU Papineni et al. (2002) to measure
text quality relative to ground truth text. We also report the BLEU _Brevity
Penalty (BP)_, as a small BP indicates overly short generations. For SGD, we use
Slot Error Rate (SER) Kale and Rastogi (2020) to evaluate lexical
faithfulness. A slot error occurs when a slot value that should be reproduced
exactly (e.g., a phone number) is not in the generated text. For DART, we use
subjects/objects whose string values are always in the ground-truth training
text as hard lexical constraints and propose Exact Occurrence error Rate (EOR)
for evaluation. EOR is the percentage of model responses where at least one of
the string values from these subjects/objects is missing. For OOV errors, we
define _neologism rate (NEO)_ to be the percentage of the model’s responses that
contain at least one OOV generation.
We emphasize that SER, EOR, and OOV are critical metrics as even a small error
rate could lead to an intolerable number of misleading responses for systems
deployed at scale. ‘Speed up’ is measured against auto-regressive CBS
implemented by Li et al. (2020) with batch size of 1 to reflect a realistic
NLG system that operates at zero SER/EOR.
#### Training
We train DA-T5 from scratch with glancing training Qian et al. (2021) on the
SGD and the DART datasets for 30 and 50 epochs, respectively. Auto-regressive
T5 is trained following Chen et al. (2023).
#### Decoding configurations
We use $K_{t}=K_{e}=3$ and $K_{t}=K_{e}=5$ for DAG-to-WFSA conversion on SGD
and DART, respectively. For LC, we fit a simple linear regression model on the
training set to predict the target token length given the input token length.
Decoding hyper-parameters are determined on the validation sets.
### 4.1 Non-Autoregressive NLG on SGD
Decoding | BLEURT | BLEU | NEO | SER
---|---|---|---|---
Greedy | 56.0 | 18.3 | 29.7 | 46.3
Lookahead | 56.6 | 19.3 | 23.0 | 44.6
Viterbi | 52.7 | 13.4 | 12.4 | 50.5
Joint Viterbi | 52.1 | 12.6 | 10.5 | 50.6
Control-DAG | 60.0 | 22.9 | 0.00 | 0.00
Table 2: Performance on the SGD dataset using Control-DAG and other decoding
algorithms in the literature. NEO stands for Neologism rate. Huang et al.
(2022) proposed Lookahead. Shao et al. (2022) introduced Viterbi and Joint
Viterbi.
Table 1 reports NLG performance on SGD with auto-regressive T5 decoding in
Rows 1-2 with greedy and beam search. Although these systems yield high BLEURT
and BLEU, they still commit slot errors (SER=0.12%). Constrained Beam Search
(CBS) eliminates slot errors by forcing the generation of designated slot
values, but with longer decoding times (16:05 $\rightarrow$ 22:15) and a
degradation in BLEU ($-6.6$) and BLEURT ($-4.6$) compared to unconstrained
beam search. This constraint-quality trade-off is also observed in previous
studies Post and Vilar (2018); see Appendix D for CBS failure modes. Auto-
regressive T5 is completely free from OOV errors (NEO=0.0).
Turning to non-autoregressive NLG, generation with DA-T5 using common decoding
methods (greedy, beam search) leads to very high SER (> 20%) and OOV errors in
at least 20% of the generated responses (Rows 4, 5). Although our CBS-DAG (Row
6) eliminates SER by design and enhances quality as measured by BLEURT (+3.8)
and BLEU (+3.4), its neologism rate is still unusably high (19.2%).
We now discuss the performance of our constrained decoding methods.
Unconstrained WFSA shortest path decoding (Row 7) is as fast as greedy
decoding, showing that DAGs can be efficiently converted to WFSAs. However,
unconstrained generation directly from the WFSA frequently leads to slot
errors (SER=34.8%), OOV errors (NEO=12.2%), and a harsh brevity penalty
(BP=0.44). These aspects of text quality can be improved individually by
constrained decoding (Rows 8-10): Hard Lexical Constrained decoding eliminates
slot errors (SER=0); Vocabulary constraints eliminate OOV errors (NEO=0); and
Length constrained decoding leads to better text lengths (BP=1.0). Control-DAG
(Row 11) combines these methods to achieve zero SER and zero neologism rate
while satisfying the length requirement and yielding a speed advantage of x1.7
relative to auto-regressive CBS.
Table 2 shows the performance of using existing decoding procedures developed
for DA-Transformer to decode DA-T5 on the SGD dataset. Control-DAG has the
overall best BLEU (22.9) and BLEURT (60.0).
### 4.2 Results on DART
# | Model | BLEURT | BLEU | BP | NEO$\downarrow$ | EOR$\downarrow$ | Time | Spd. Up
---|---|---|---|---|---|---|---|---
_T5-small (Auto-regressive)_
1 | Greedy | 71.2 | 31.3 | 0.95 | 4.1 | 5.0 | 24:50 | x1.3
2 | Beam search | 72.8 | 31.9 | 0.93 | 3.2 | 3.9 | 30:53 | x1.1
3 | Constrained beam | 70.5 | 29.3 | 0.95 | 3.3 | 0.0 | 33:10 | x1.0
_Directed Acyclic T5-small (Non-Autoregressive)_
4 | Greedy | 45.0 | 18.2 | 1.00 | 48.9 | 39.5 | 3:17 | x10.1
5 | Beam search | 45.6 | 14.0 | 0.53 | 34.3 | 43.6 | 9:29 | x3.5
6 | CBS-DAG | 46.0 | 18.9 | 0.80 | 36.1 | 0.0 | 7:26 | x4.5
7 | WFSA shortest | 42.1 | 10.8 | 0.38 | 27.3 | 45.4 | 3:49 | x8.7
8 | w/ HLC | 46.8 | 14.4 | 0.46 | 24.4 | 0.0 | 9:39 | x3.4
9 | w/ VC | 39.3 | 7.7 | 0.28 | 0.0 | 45.1 | 10:38 | x3.1
10 | w/ LC (DFS-Viterbi) | 46.8 | 18.3 | 0.86 | 44.4 | 40.3 | 5:26 | x6.1
11 | Control-DAG | 46.8 | 19.0 | 1.00 | 0.0 | 0.0 | 24:03 | x1.4
Table 3: Results on the DART dataset. The naming convention for metrics and decoding methods follows that in Table 1. EOR is the Exact Occurrence error Rate.
The results on DART (Table 3) validate our findings on the SGD dataset:
Control-DAG yields the best performance while maintaining a speed advantage
and each constrained decoding step contributes as expected. We now contrast
performance on DART and SGD to show how Control-DAG performs on tasks with
very different characteristics.
DART has a challenging vocabulary that causes even AR models to commit OOV
errors. This is also reflected by the much higher neologism rate when decoding
DA-T5 with greedy (48.9% versus 29.7% in SGD). This explains why less
aggressive pruning (top-5) is needed for DART relative to SGD (top-3). We find
the simple procedure of searching the training data for subjects/objects whose
values are exactly reproduced and using them as lexical constraints boosts
DA-T5 performance by +4.7 BLEURT and +3.6 BLEU (Row 8, Table 3). This
demonstrates that hard lexical constraints are effective and easy to apply for
less lexically constrained NLG tasks such as DART.
## 5 Conclusion
We propose Control-DAG for decoding non-autoregressive Directed Acyclic models
with lexical, vocabulary, and length constraints, addressing key limitations
in NAR text generation. Constrained decoding is efficiently performed via
well-studied Weighted Finite State Automata algorithms. DA-T5 with Control-DAG
establishes strong NAR results on the Schema Guided Dialogue and the DART
datasets, bridging gaps in NAR research.
## 6 Acknowledgement
Jinghong Chen is supported by the Warwick Postgraduate Studentship from
Christ’s College and the Huawei Hisilicon Studentship for the undertaking of
the PhD in Engineering at the University of Cambridge.
Weizhe Lin was supported by a Research Studentship funded by Toyota Motor
Europe (RG92562(24020)).
Prof. Bill Byrne holds concurrent appointments as a Professor of Information
Engineering at Cambridge University and as an Amazon Scholar. This publication
describes work performed at Cambridge University and is not associated with
Amazon.
We would also like to thank all the reviewers for their knowledgeable reviews.
## 7 Limitation
Given our focus on decoding algorithms, we leave further training and model
scaling to future work. It is possible to further improve inference speed by
writing the DAG-to-WFSA conversion and the DFS-Viterbi algorithm in the C
programming language to reduce overhead from the Python interface. In this paper, we demonstrate that substantial speed-ups can be achieved without these optimizations and leave further speed-up techniques to future work.
## 8 Ethical Statement
We trained two versions of the DA-T5 model: one on the training set of Schema
Guided Dialogue and one on the training set of the DART dataset. These are
English datasets and do not contain sensitive personal information or
offensive language. Detailed statistics of the SGD and DART datasets can be
found in Rastogi et al. (2020) and Nan et al. (2021), respectively. We note
that the model may hallucinate information or generate language that appears
offensive. Some linguistic phenomena of our DA-T5 models are in Appendix D. It
is vital that developers test DA-T5 fully before deployment.
All software packages that our code builds on are used as originally intended. Our code is released under the MIT license.
## References
* Chen et al. (2023) Jinghong Chen, Weizhe Lin, and Bill Byrne. 2023. Schema-guided semantic accuracy: Faithfulness in task-oriented dialogue response generation. _CoRR_ , abs/2301.12568.
* Gorman (2016) Kyle Gorman. 2016. Pynini: A Python library for weighted finite-state grammar compilation. In _Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata_ , pages 75–80, Berlin, Germany. Association for Computational Linguistics.
* Hu et al. (2019) J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , pages 839–850. Association for Computational Linguistics.
* Huang et al. (2023) Fei Huang, Pei Ke, and Minlie Huang. 2023. Directed acyclic transformer pre-training for high-quality non-autoregressive text generation. _CoRR_ , abs/2304.11791.
* Huang et al. (2022) Fei Huang, Hao Zhou, Yang Liu, Hang Li, and Minlie Huang. 2022. Directed acyclic transformer for non-autoregressive machine translation. In _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 9410–9428. PMLR.
* IEEE (2004) The Open Group IEEE. 2004. _Chapter 9: Regular Expressions_ , ieee std 1003.1, 2004 edition edition, volume 6, chapter 9. IEEE. Archived from the original on 2011-12-02. Retrieved 2011-12-13.
* Kale and Rastogi (2020) Mihir Kale and Abhinav Rastogi. 2020. Template guided text generation for task-oriented dialogue. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 6505–6520. Association for Computational Linguistics.
* Li et al. (2020) Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, and Benjamin Van Durme. 2020. Guided generation of cause and effect. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020_ , pages 3629–3636. ijcai.org.
* Ma et al. (2023) Zhengrui Ma, Chenze Shao, Shangtong Gui, Min Zhang, and Yang Feng. 2023. Fuzzy alignments in directed acyclic graph for non-autoregressive machine translation. In _The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023_. OpenReview.net.
* Mohri et al. (2002) Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. _Comput. Speech Lang._ , 16(1):69–88.
* Nan et al. (2021) Linyong Nan, Dragomir R. Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: open-domain structured data record to text generation. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021_ , pages 432–447. Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting on Association for Computational Linguistics_ , ACL ’02, page 311–318, USA. Association for Computational Linguistics.
* Post and Vilar (2018) Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers)_ , pages 1314–1324. Association for Computational Linguistics.
* Qian et al. (2021) Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for non-autoregressive neural machine translation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_ , pages 1993–2003. Association for Computational Linguistics.
* Rastogi et al. (2020) Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 8689–8696.
* Sellam et al. (2020) Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: learning robust metrics for text generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020_ , pages 7881–7892. Association for Computational Linguistics.
* Shao et al. (2022) Chenze Shao, Zhengrui Ma, and Yang Feng. 2022. Viterbi decoding of directed acyclic transformer for non-autoregressive machine translation. In _Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022_ , pages 4390–4397. Association for Computational Linguistics.
* Tyler Barrus (2018) Tyler Barrus. 2018. Pyspellchecker: Pure Python Spell Checking. https://pypi.org/project/pyspellchecker/. Python version: 3.
* Xiao et al. (2022) Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, and Tie-Yan Liu. 2022. A survey on non-autoregressive generation for neural machine translation and beyond. _CoRR_ , abs/2204.09269.
## Appendix A Experiment setup details
#### Metrics details
For BLEURT, we use the BLEURT-20 checkpoint. For BLEU, we use the sacrebleu
implementation. Decoding times are averages of three runs on a single A100 GPU
for the SGD dataset and on a single V100 GPU for the DART dataset.
#### Vocabulary for neologism evaluation
From the entire corpus, we extract all space-delimited words, strip
punctuation and numbers, and maintain true cases. All words in the test corpus
are also added to the evaluation vocabulary without pre-processing. Note that
they are not added to the constraint vocabulary for VC decoding to avoid
leakage. For the SGD, we also add all words in the slot names, slot values,
and slot descriptions from the schema, resulting in a vocabulary of 19,126
words. In evaluation, we only strip punctuation from words in the generated
texts. We also use the pyspellchecker library Tyler Barrus (2018) to check
that the word in question is indeed OOV.
#### Exact Occurrence Error
We go through the training data to identify subjects/objects that are always
present in the ground-truth text. For example, we find that the subject of the
relation priceRange always appears in the ground-truth text. Whenever
priceRange appears during testing, we treat the string value of its subject as
a hard lexical constraint. If the string cannot be found in the generated text,
an exact occurrence error is flagged.
#### Data Preprocessing
We linearize the input dialogue actions or triplets to strings as input to our
DA-T5 model. On the SGD, we follow the Schema Guided Linearization by Kale and
Rastogi (2020) to process our input data. On DART, we process the triplets
into arrays of ‘‘<h> SUBJECT <r> RELATION <t> OBJECT’’ where <h>, <r>, and <t>
are special tokens.
#### Training hyper-parameters
The DAG vertex size $L$ is determined by the upsample factor $\lambda$
($L=\lambda\times N$ where $N$ is the input length) with $\lambda=5$ for both
the SGD and the DART datasets. We use the T5-small architecture with randomly
initialized weights to generate vertex embeddings (79.3M trainable
parameters). We train the model with a learning rate of 1e-4 and a batch size of 8 using the AdamW optimizer. Glancing training is used to facilitate training
with a constant annealing factor $\tau=1.0$. SGD training took around 13 hours
(25 minutes per epoch) on a single A100 GPU including all validation runs.
DART training took 24 hours on a single V100 GPU. We find that glancing
training is critical to successful training. Without it the model performs
poorly (4.6 BLEU on the SGD when decoded with Greedy).
#### Target length predictor
Let $x$ be the input length in tokens, $L_{tgt}=\lceil 26.1x+0.4\rceil$ for
the SGD and $L_{tgt}=\lceil 0.5x+11.9\rceil$ for DART. Coefficients are fitted
on the validation set. We use strictness $A=1$ in LC decoding.
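A minimal sketch of fitting this predictor, assuming `val_in` and `val_out` are arrays of input/output token lengths from the validation set:

```python
import numpy as np

# Fit L_tgt = ceil(a * x + b) on validation-set token lengths.
a, b = np.polyfit(val_in, val_out, deg=1)

def predict_target_length(x):
    """Predict the target token length from the input token length x."""
    return int(np.ceil(a * x + b))
```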
#### Beam search
Auto-regressive Beam Search (BS) and Constrained Beam Search (CBS) use beam
size $=5$. CBS-DAG uses a base beam size of $4$ with dynamic adjustment
(Sec.B.4).
## Appendix B Algorithmic details
### B.1 DAG-to-WFSA conversion
A Weighted FSA (WFSA) consists of states and weighted directed arcs connecting
the states. The outputs (tokens) are labeled on the arcs. DAG-to-WFSA is
simply Moore Machine to Mealy Machine conversion by treating DAG vertices as
WFSA states and exploding the output tokens at DAG vertices to WFSA arc
labels. WFSA arc weights are the sum of negative log-likelihood for state
transition and token emission, so the shortest path has maximal likelihood.
We prune the DAG before conversion to reduce the number of WFSA arcs. For each
vertex $u$ in the DAG, we only keep the top $K_{e}$ tokens and top $K_{t}$
transitions in descending probabilities. We also keep tokens that appear in
the constraint phrases, ensuring there exist paths that realize the lexical
constraints in the WFSA (Algo.2). Algo.1 shows pseudo-code. $\times$ denotes
Cartesian product.
Algorithm 1 DAG to WFSA conversion
1:Inputs: DAG vertices $V$, transition matrix $E$, emission matrix $P$,
emission degree $K_{e}$ and transition degree $K_{t}$. Lexical constraint
phrases $\mathcal{C}=[C_{1},...,C_{M}]$.
2:$\mathcal{E}\leftarrow\emptyset$
3:for $u\in\text{topological\\_sort}(V)$ do
4: $\mathcal{T}[u]\leftarrow\arg\text{topk}(P[u,:],K_{e})$
5: $\mathcal{S}[u]\leftarrow\arg\text{topk}(E[u,:],K_{t})$
6: $\mathcal{T}[u]\leftarrow\mathcal{T}[u]\ \cup$ ForceEmit($u,\mathcal{C}$)
7:$\triangleright$ Forced emission (Algo.2)
8: for $t,v\in\mathcal{T}[u]\times\mathcal{S}[u]$ do
9: $w=-(\log P[u,t]+\log E[u,v])$
10: $e\leftarrow\left(u,t,w,v\right)$
11: $\mathcal{E}\leftarrow\mathcal{E}\cup\{e\}$
12: end for
13:end for
14:Construct the WFSA with edge set $\mathcal{E}$
Finding the shortest path has linear complexity in the number of edges because
our WFSA is acyclic. The pruning parameters, $K_{t}$ and $K_{e}$, trade off completeness against decoding speed. Larger values lead to a more complete WFSA at the cost of longer decoding time.
Algorithm 2 The ForceEmit function
1:Inputs: Vertex predecessors under top-K transition pruning
$N_{K_{t}}^{-}(v)$. Lexical constraint phrases
$\mathcal{C}=[C_{1},...,C_{M}]$. Emission tokens at all predecessor vertices
$\mathcal{T}[\cdot]$
2:function ForceEmit($u,\mathcal{C}$)
3: $\mathcal{F}\leftarrow\emptyset$
4: for phrase $C_{i}\in\mathcal{C}$ do
5: for token $t_{j}$ in $C_{i}[:-1]$ do
6: for $v\in N_{K_{t}}^{-}(u)$ do
7: if $t_{j}\in\mathcal{T}[v]$ then
8: $\mathcal{F}\leftarrow\mathcal{F}\cup\{t_{j+1}\}$
9:$\triangleright$ Force-emit the next token $t_{j+1}$ in phrase $C_{i}$
10: end if
11: end for
12: end for
13: end for
14: return $\mathcal{F}$
15:end function
### B.2 Vocabulary Constraint
We elaborate on how to construct the FSAs for vocabulary constraints below:
#### Dictionary FSA
From the training corpus, we extract space-delimited unigrams, strip numbers
and punctuation, sort them in descending frequency, and cutoff at 90%
cumulative frequency. This results in a vocabulary $V$ of 1129 words on the
SGD dataset. We then tokenize each unigram with the T5 tokenizer, build an FSA
that accepts and only accepts the tokenized sequence (e.g. ‘‘photosynthesis’’
$\rightarrow$ ‘‘_photo’’, ‘‘synthesis’’), and union these FSAs to form the
dictionary FSA $A_{dict}$.
#### Special token FSA
$A_{spec}$ accepts and only accepts punctuation “$&’()*+,-./:;=>?@[]_”, start-
of-sentence <s>, end-of-sentence token </s>, and T5 tokenizer’s start-of-word
mark (u2581 “_”).
#### Dynamic FSA
$A_{dyn}$ is built for each input. Given the entity names, we tokenize them, build an FSA that accepts and only accepts the token sequence for each entity,
and take the union. Note that entity names may include space. For example,
$A_{dyn}$ may accept “Hong Kong” but not the constituent unigrams “Hong” and
“Kong”.
### B.3 Length Constraint
Algo.3 lists the DFS-Viterbi algorithm and the symbol definitions. The
recursive relation is given in Eq.2. For each vertex, we memoize the current
best string of each length and their costs. The shortest path is recovered
with parent pointers.
$\delta(u,l+1)=\min_{v\in N^{+}_{p}(u)}w(u,v)+\delta(v,l)$ (2)
We fit a first-order linear model to predict target length $L_{tgt}$ from
input length. Length is measured in tokens and coefficients are given in
Appendix A. Enforcing a strict length constraint can lead to incomplete
sentences. Therefore, we find the best $l$-length string for $l=1,\ldots,L_{upper}$, where $L_{upper}=\min(L_{tgt}+5,L_{tgt}\times 1.5)$, and introduce an exponential length penalty (Eq.3) similar to BLEU. The candidate with the lowest overall cost $C^{\prime}$ (Eq.4) is chosen as the final generation.
$\displaystyle LP=\begin{cases}\exp\big{(}A(L_{tgt}/l-1)\big{)},&\text{if
}l<L_{tgt}\\\ 1,&\text{otherwise}\end{cases}$ (3) $\displaystyle
C^{\prime}=LP\times\delta(u_{s},l)$ (4)
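A minimal Python sketch of Eqs. 3-4; `delta` is the Viterbi cost (a negative log-likelihood, hence positive) of an $l$-length candidate and `A` the strictness (set to 1 in our experiments):

```python
import math

def penalized_cost(delta, l, L_tgt, A=1.0):
    """Apply the exponential length penalty of Eq. 3 and return C' (Eq. 4)."""
    lp = math.exp(A * (L_tgt / l - 1.0)) if l < L_tgt else 1.0
    return lp * delta  # costs are positive, so LP >= 1 penalizes short strings
```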
The WFSA software implementation, pynini Gorman (2016), allows us to
efficiently traverse the WFSA as graphs. Prior to running DFS-Viterbi, we sort
the WFSA states topologically and perform epsilon-removal Mohri et al. (2002).
Epsilon transitions do not have actual token labels, and are removed to
prevent over-counting the output length. The WFSA can be topologically sorted
because intersection preserves the acyclic property of its input: any cycles
will result in strings of unbounded length which cannot be accepted by the
acyclic WFSA.
Let $|V|$ be the number of WFSA states. The space complexity of memoization is
$O(L_{tgt}\times|V|)$. The worst-case time complexity is exponential
$O(L_{tgt}^{|V|})$. However, we observe a linear time complexity of
$O(L_{tgt})$ when applying DFS-Viterbi to our trained DA-T5 model. We
attribute the efficiency to (1) memoization and (2) transition probabilities being concentrated on a few successors. We find that the number of out-going edges after pruning, $|N_{p}^{+}(u)|$, is close to 1 when $p=0.7$, leading to very
efficient search.
Algorithm 3 DFS-Viterbi finds the shortest path with exactly $L_{tgt}$ edges.
1:function DFS-Viterbi($u$, $l$, $\delta$, $F$, $N^{+}_{p}$, $w$)
2: Arguments:
3: $u$: current vertex.
4: $l$: number of edges on the path from the initial vertex to $u$.
5: $\delta$: memoization table storing shortest distance to vertex $u$ with
exactly $l$ edges.
6: $F$: set of final states (vertices).
7: $N^{+}_{p}(u)$: minimal set of successors of vertex $u$ with cumulative
probability $>p$.
8: $w(u,v)$: edge weight from vertex $u$ to $v$.
9: if $u$ is in $F$ then
10: return $0$
11: end if
12: if $\delta[u,l]$ is not NULL then
13: return $\delta[u,l]$
14: end if
15: $\text{min\_distance}\leftarrow\infty$
16: for all $v\in N^{+}_{p}(u)$ do
17: $\text{dist}\leftarrow w(u,v)+$ DFS-Viterbi($v,l+1,\delta,F,N^{+}_{p},w$)
18: if $\text{dist}<\text{min\_distance}$ then
19: $\text{min\_distance}\leftarrow\text{dist}$
20: end if
21: end for
22: $\delta[u,l]\leftarrow\text{min\_distance}$
23: return min_distance
24:end function
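For concreteness, a minimal Python sketch of the recursion, reformulated with $l$ as the number of edges still to traverse (so Eq. 2 reads $\delta(u,l)=\min_{v} w(u,v)+\delta(v,l-1)$); the WFSA is assumed given as plain dictionaries, and the length penalty, $N^{+}_{p}$ pruning, and backpointers are omitted:

```python
def dfs_viterbi(succ, weight, finals, start, L_tgt):
    """Min-cost path from `start` to a final state using exactly L_tgt edges.
    succ[u]: successor list of vertex u; weight[(u, v)]: arc cost."""
    memo = {}
    def best(u, l):
        if l == 0:
            return 0.0 if u in finals else float("inf")
        if (u, l) not in memo:
            memo[(u, l)] = min(
                (weight[(u, v)] + best(v, l - 1) for v in succ.get(u, [])),
                default=float("inf"),
            )
        return memo[(u, l)]
    return best(start, L_tgt)
```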
### B.4 Constrained Beam Search for Directed Acyclic Graphs (CBS-DAG)
CBS-DAG follows the beam expansion and pruning rules in Dynamic Beam
Allocation (DBA) Post and Vilar (2018). Let $K$ be the beam size. At each
vertex transition, CBS-DAG extends the beam with the top-$K$ tokens from model
prediction, the next token in active constraints, and the first token in non-
active constraints. Active constraints are identified by the KMP string-
matching algorithm. After beam expansion, we regroup the candidates into
“banks” by the number of unmet constraint tokens and retain the most likely
candidate within each bank. We dynamically adjust the beam size such that beam
size is always larger than the number of non-empty banks (i.e., the number of
constraint tokens plus one).
## Appendix C Further Analysis
#### DA-T5 produces sparse DAGs
We find that DA-T5 learns to produce a sparse DAG in the following sense: on
average, each vertex has 1.68 transitions with probability $>0.2$ and 1.58
emissions with probability $>0.2$ after training. These statistics are
computed over the validation set, and explain why we can prune aggressively during DAG-to-WFSA conversion (top-3 for the SGD and top-5 for DART) for speed without much loss of information.
## Appendix D Qualitative Study
Figure 2: Case study comparing DA-T5 with Control-DAG, Joint Viterbi, and CBS-
DAG decoding on the SGD dataset.
# Using network structure and community detection to discover important
website features when distinguishing between phishing and legitimate ones
Arash Negahdari Kia, Finbarr Murphy, Zahra Dehghani Mohammadabadi, & Parisa
Shamsi
###### Abstract
In this paper, we uncover the essential features of websites that allow
intelligent models to distinguish between phishing and legitimate sites.
Phishing websites mimic the user interface of trustworthy websites and use near-identical addresses in order to persuade users to input their private data for potential future misuse by attackers. Detecting
phishing websites with intelligent systems is an important goal to protect
users, companies, and other online services that use the HTTP protocol. An
intelligent model needs to identify which input features are important for predicting phishing sites. In this research, using correlation-based networks, we
provide a novel network-based method to find features that are more important
in phishing detection. The networks are trained and tested on an established
phishing dataset. Three different networks are made by partitioning the
dataset by its data instance labels. The important features are found by
discovering the hubs of these networks, and the results are presented and
analysed. To our knowledge, this is the first time a network-based approach has been used for feature selection, and it offers a fast and accurate way to do so.
###### Index Terms:
Phishing Detection, Knowledge Graph, Community Detection
## I Introduction
The internet has become ubiquitous for all private and commercial activity. In
many instances, websites may require personal information such as usernames
and passwords; and there is a general degree of implicit trust associated with
this information transfer. This allows malicious hackers to steal private data
for illicit gain. To do so, they can try to trick people by making them think
that they are passing their information to a trustworthy website by displaying
a fake website with similar characteristics to the legitimate website and a
near similar URL address. Such websites are called phishing websites. Many
people fall into this trap and face substantial negative consequences [1].
According to the IBM threat index 2020, phishing is the most common cyber-attack globally [2].
Significant efforts have been made to detect phishing websites and prevent
such fraud. However, there is as yet no clear way to distinguish legitimate
websites from phishing websites. Therefore, efforts are focused on methods
that can detect phishing websites with higher accuracy.
Phishing detection is approached in a variety of ways. Most contemporary
approaches for detecting phishing websites are based on machine learning and
intelligent models, like using a classification method on website features.
One way to optimize the results of these approaches is to find the most
salient features of a website to identify its legitimacy [3, 4, 5, 6]. Another study investigated the effectiveness of feature selection for detecting phishing websites using two methods: wrapper-based feature selection and correlation-based feature selection [7]. In correlation-based feature selection, a correlation criterion is calculated and an important subset of features is selected according to it. Wrapper-based feature selection requires a supervised algorithm and labels for each instance of the dataset: a subset of features that yields the most accurate prediction/classification is selected. The researchers finally compared the performance of both methods in their
study. In this paper, we propose a heuristic method to find the most important
features of a website to help intelligent models in phishing detection.
Knowledge graph representation has helped us find the most important features
in distinguishing between phishing and legitimate websites. In our approach, nodes of the network represent the features; nodes with more connections have more influence on other nodes and therefore represent
more important features. In section I-A, we give a brief explanation and
examples of some concepts used in our proposed method.
In section I-B, we investigate some preliminary methods. We explain our proposed method in section II and discuss its results in section III. Section IV concludes the paper and presents suggestions for future research.
### I-A Related Works
#### I-A1 Phishing Detection
Considerable research has been undertaken to improve the detection of
phishing websites. The approaches can be classified into the blacklist
approach and the heuristic approach.
1. 1.
Blacklist Approach:
In the blacklist approach, a list of malicious URLs is formed as a blacklist.
When a user requests a website, the domain will be compared with the list to
find a match. If a match is found, the connection is not allowed. The disadvantage is that the blacklist must be updated frequently, and some
phishing websites may not be discovered.
Another study proposed a blacklist approach that keeps the blacklist up to
date by using search engine results to detect suspicious domains [8]. This
way, the website’s legitimacy can be checked. Another research proposed a
system (PhishNet) using an algorithm to find a close match in the blacklist
[9]. Another study proposed an approach that uses the redirection URLs from
phishing websites for completing the blacklist [10].
2. 2.
Heuristic Approach:
In the heuristic approach, some techniques like machine learning are used to
find phishing websites based on general phishing features. The advantage of
this approach is that new phishing websites can be detected.
In one study, researchers propose a heuristic method that compares the website’s logo with the legitimate logo [11]. Another study proposed a phishing detection technique based on machine learning using an analysis of the URL’s features, website host, and interpretation of the visual
appearance [3]. Mao et al. propose a heuristic phishing detection method using
machine learning techniques to find the similarity between the website’s user
interface and a legitimate website’s user interface [4]. Chiew et al. propose
a heuristic feature-based method that uses machine learning to detect phishing
websites, called the Hybrid Ensemble Feature Selection (HEFS) [5]. HEFS
comprises two steps: the first uses the Cumulative Distribution Function gradient algorithm to find the optimal number of features, and the second selects a subset of features with the hybrid framework. HEFS had high
performance when using a random forest algorithm. Rao et al. propose a method
based on URL features and the TF-IDF property to detect phishing websites [6]. The dataset used in this paper also has URL-based features such as URL$\_$length. Zhang et al. also propose a method using only URL
addresses for phishing detection. Techniques used in the method are
bidirectional LSTM, skip-gram, and CNN [12]. Chavan et al. propose a phishing
detection method using deep learning techniques and feature engineering, reducing the dataset’s features from 19 to 10 [13]. One of the big problems in
phishing detection is the scarcity of phishing data relative to legitimate data. Shirazi et al. used data augmentation to address this problem [14].
#### I-A2 Network Structures
Many real-world phenomena can be modelled by networks such as social networks
and information networks.
Social networks show the interaction of people or groups of people in the form
of nodes and edges connecting them [15]. Barabasi et al. discuss scientific
collaborations as complex networks [16]. Nekovee et al. propose a model to
show the spread of rumours [17] and Potts et al. propose a market-based
definition of creative industries, both based on complex social networks [18].
In another research, Schimit used complex networks for modelling a population
to show how people connect and analysed it as a disease spreading model [19].
Information networks show interactions between items of data: concepts in the outside world are interpreted as nodes and their interactions as links [15]. We can refer to the World Wide Web as the best-known information
network. A citation network is a network that is based on paper citations. Son
et al. propose a method for an academic paper recommender system based on
citation networks [20]. Their proposed network is a multilevel simultaneous
citation network, and this method is useful when citation information is not
enough.
In this paper, we analyse the information network built from the phishing and
legitimate websites data for feature selection. The features can be used for
better phishing detection models in both the literature and applications like
intrusion detection systems.
#### I-A3 Community Detection
Community detection is a procedure used to group network nodes in a way to
make nodes in each community have dense connections. As a result, a better
understanding of the network’s structure and function is discovered. There is
a broad application of community detection used in the researches. Kanavos et
al. propose an efficient methodology for community detection to analyse the
behaviour of users on an emotional level based on their tweets [21]. In this
paper, we use community detection to cluster the website features in order to
analyse their similarities. We deploy a similarity knowledge graph using
different characteristics and features of phishing/legitimate websites. This
is a unique approach that, to our knowledge, has not been used in the phishing detection research area so far. Network modelling has been found useful and effective in
different areas of research, and this is the first time it is used in phishing
research.
### I-B Preliminaries
#### I-B1 Constructing a Similarity Graph
We define
$Correlation(feature_{i},feature_{j})=1-\frac{6\sum d_{k}^{2}}{n(n^{2}-1)},$
(1)
where $d_{k}$ is the difference between the ranks of the two features for the $k$-th
instance of the dataset, and $n$ is the number of instances in the dataset. By
using Equation 1, we form the correlation matrix. The dataset features are
categorical, so we use Spearman’s rank-order correlation [22].
The distance between $feature_{i}$ and $feature_{j}$ can be obtained from
Equation 2.
$d_{i,j}=\sqrt{2\ (1-Correlation(feature_{i},feature_{j}))}.$ (2)
Finally, we use Equation 3 to form the similarity matrix.
$similarity\ measure=e^{-d_{i,j}}.$ (3)
This approach to constructing a similarity matrix is used in other studies, such as those of Bonanno et al. [23], Wang et al. [24], and Song et al. [25]. Most of these studies are in the financial data mining and
analysis domain. Our research employs the same models to determine the
existence of phishing.
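A minimal Python sketch of Equations 1-3, assuming the dataset is a 2-D array `X` with features as columns:

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_matrix(X):
    """Spearman correlation (Eq. 1) -> distance (Eq. 2) -> similarity (Eq. 3)."""
    rho, _ = spearmanr(X)              # features are the columns of X
    d = np.sqrt(2.0 * (1.0 - rho))     # Eq. 2
    return np.exp(-d)                  # Eq. 3
```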
#### I-B2 Louvain Community Detection
Community detection determines the similarity amongst features of the phishing
dataset. Louvain is a greedy and extendable community detection method that
divides a large network into communities [26], and because of its greedy
nature, it is a fast method in comparison with other methods, especially when
dealing with complex networks [27]. Louvain is based on optimizing the
modularity, i.e., detecting communities such that nodes within a community have dense connections while nodes in different communities have sparse connections. The Louvain algorithm is described as follows (a code sketch is given after the steps):
1. 1.
Consider each node as a community.
2. 2.
Merge two communities if it raises the modularity.
3. 3.
Repeat step 2 until no other changes could be done, and that means the
modularity is optimized.
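A minimal sketch using networkx (version 2.8 or later, which ships a Louvain implementation); `G` is assumed to be the weighted similarity graph from section I-B1:

```python
import networkx as nx

# Detect communities on the weighted similarity graph.
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```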
#### I-B3 Maximum Spanning Tree (MST)
Using the maximum spanning tree helps us find the strongest relationship
structure amongst the features of our phishing dataset when modelled into a
correlation network. A maximum spanning tree ($T(V_{T},E_{T})$) is a subgraph
of an edge-weighted undirected graph ($G(V_{G},E_{G})$) that,
$V_{T}=V_{G},$ (4)
and
$E_{T}\subset E_{G},$ (5)
with the maximum possible total edge weight where $V_{T}$ and $E_{T}$ are sets
of the tree’s vertices and edges and $V_{G}$ and $E_{G}$ are sets of the
graph’s vertices and edges.
We will use Kruskal’s algorithm [28] to form the maximum spanning tree. The
algorithm is described as follows:
1. 1.
Sort the graph’s edges in descending order.
2. 2.
Pick the first edge.
3. 3.
Pick the next edge if the set of selected edges up to this step does not form
a cycle. A cycle is a non-empty trail in which the only repeated vertices are
the first and last vertices [29].
4. 4.
If the number of selected edges is one unit less than the number of the main
graph’s vertices, stop the algorithm. Else, repeat step 3.
Using thresholding instead of a maximum spanning tree may lead to expert bias. Even using statistical significance testing would require a distribution assumption, which would also lead to expert bias. A code sketch of the graph construction, maximum spanning tree, and hub extraction follows.
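A minimal sketch combining these steps, assuming `sim` is the similarity matrix from section I-B1 and `feature_names` the feature labels; networkx's Kruskal-based maximum spanning tree is used:

```python
import networkx as nx

def mst_hubs(sim, feature_names):
    """Build the similarity graph, take its maximum spanning tree, and
    return the hub features (nodes with degree > 2, section I-B4)."""
    G = nx.Graph()
    n = len(feature_names)
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(feature_names[i], feature_names[j], weight=sim[i, j])
    T = nx.maximum_spanning_tree(G, weight="weight")  # Kruskal by default
    hubs = [v for v, d in T.degree() if d > 2]
    return T, hubs
```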
#### I-B4 Centrality Measures Used In The Research
Centrality measures are used to find the most important nodes in a knowledge
graph. There are many centrality measures defined in network science, and what counts as important in this context depends on the mathematical definition of each centrality measure [15]. These measurements help us understand which features of the phishing dataset are more important than others.
* •
Degree:
In a graph, the number of edges connected to each node is called its degree.
In an undirected graph (G=(V, E)), the relationship between the number of
edges (E) and the number of nodes (V) is
$\sum_{v\in V}\deg(v)=2|E|.$ (6)
* •
Hub:
To find the most important nodes in a graph, we can use different measures and
definitions. For example, we can say if a node’s degree is higher, it is more
influential in the graph. In network science, a node with a degree much higher
than the average is called hub [16].
In this research, we consider nodes with a degree higher than two as a hub.
#### I-B5 Gamma Value
Gamma value is a measurement in network structures that shows the scale-
freeness of the network. In some networks, connections between nodes are based
on a power-law distribution called preferential attachment. In these networks,
called scale-free networks, the gamma value in Equation 8 is a parameter in
the range $2<\gamma<3$ [15]. Social networks are a kind of scale-free
networks. In social networks, there are few nodes with dense connections and
many nodes with few connections. In the case of higher gamma values, there
will be fewer hub nodes with higher degree and more nodes connected to the
hubs with less degrees. This means that in higher gamma values, we have some
important features and many other features that relate to these hub features.
Therefore, it may be possible that they can be ignored when constructing
intelligent phishing detection models. Imagine a network with $n$ nodes. If
the number of nodes with degree k is $n_{k}$, then the probability that a node
is of degree k is equal to
$p(k)=\frac{n_{k}}{n}.$ (7)
The proper distribution function for the above expression in a network is as
follows:
$P(k)\sim k^{-\gamma}.$ (8)
In this paper, first, we will calculate the gamma value for each network
structure constructed from the phishing dataset. Subsequently, we provide a
network analysis of the nodes and their connections to discover important
nodes which correspond to important features of websites.
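A minimal sketch of estimating gamma by a log-log linear fit of the empirical degree distribution (Eqs. 7-8); more robust estimators exist, but this illustrates the computation:

```python
import numpy as np
from collections import Counter

def gamma_value(degrees):
    """Fit P(k) ~ k^(-gamma) via linear regression in log-log space."""
    counts = Counter(degrees)
    k = np.array(sorted(d for d in counts if d > 0), dtype=float)
    p = np.array([counts[int(d)] for d in k]) / len(degrees)   # Eq. 7
    slope, _ = np.polyfit(np.log(k), np.log(p), deg=1)
    return -slope                                              # Eq. 8
```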
## II Method
In this section, we explain the design of the study, as shown in Figure 1 and
network construction mechanism in Figure 2. We apply a process on the salient
features of a website and find the most effective ones that can help
intelligent models to detect phishing websites. These features are described
in the Appendix.
Figure 1: An overview of the research methodology
### II-A Design of the study
In the following, we describe the research methodology of Figure 1 for each sub-procedure, which is enumerated in the figure.
1. 1.
We divided data into three parts. The first group contains all the websites.
In the second group, there are only legitimate websites, and the last group
includes only phishing websites. All the features are the same in the three
parts of the data. The following steps apply to all three groups. For each, we
build a network from the dataset features. This is to capture the
characteristics of different categories of websites along with a network for
all the websites together.
2. 2.
For building the networks, as it can be seen in Figure 2, first, the
correlation matrix must be calculated as described in section I-B1. Then, we
calculate the distance matrix, and by doing so, we calculate the similarity
matrix for each part of the dataset. The reason for doing this is to construct
a similarity graph where nodes represent the features of websites in the
dataset and the links with their weights represent the similarity between each
pair of features. The code for network construction is available in the paper’s GitHub repository: https://github.com/dmresearches/phishing.
The network construction procedure has a successful history of representing similarity between data in the finance domain. A full and comprehensive description of these correlation networks can be found in the network science literature [23].
3. 3.
For finding the features that are most related to each other, we apply a
community detection algorithm on all three graphs extracted in the previous
step. In this research, we use the Louvain modularity, which is described in
section I-B2.
4. 4.
We find the maximum spanning tree of all three networks built in part 2 of
Figure 1, in which the edge weights are the similarities between the
features (described in section I-B3). The reason for doing so is to capture
the most important feature relationships in each dataset. The maximum spanning
tree finds the strongest relations among nodes in each graph.
5. 5.
As described in section I-B4, we find hubs for each maximum spanning tree. By
doing this, we find the most important features for each category of websites.
These features are the most related ones to other features in their dataset.
In other words, these hub features can be seen as the candidates in a feature
selection procedure for the future supervised or semi-supervised prediction of
phishing or legitimate websites.
6. 6.
Finally, we find the gamma values for each network, as described in section
I-B5. The value shows if the hub features are good representatives of other
features. As described in section I-B5, it is known that in scale-free
networks with high gamma values, there are fewer nodes with high degree and a
lot of other nodes with low degree. This means that the minority nodes with
high degree, which we call hubs, are those that relate to many
other nodes which represent the features in our dataset.
Figure 2: An overview of network building steps in the research methodology
In the next section, the graphs, trees, and numerical results achieved from
our proposed method are presented to discover the essential features in
phishing, legitimate, and the whole dataset.
## III Results and Discussion
In this section, we provide an analysis and discussion based on the results
achieved from the methodology outlined in section II. As discussed, we
construct a maximum spanning tree for features of each network; all websites
presented in Figure 3, legitimate websites presented in Figure 4, and phishing
websites presented in Figure 5.
Figure 3: Maximum spanning tree for the graph extracted from the dataset of
all websites
In each maximum spanning tree, we find hub features as described in section
I-B4. These hubs are listed in tables I, II, III.
TABLE I: Website features discovered as hubs (nodes with high connections) in the Maximum Spanning Tree built from all the websites Hub label | Degree | Community
---|---|---
Shortening Service | 4 | 0
SSLfinal State | 4 | 2
URL Length | 4 | 2
Double Slash Redirecting | 3 | 0
Links Pointing To Page | 3 | 2
Port | 3 | 1
Submitting To Email | 3 | 1
URL Of Anchor | 3 | 2
By finding the maximum spanning tree, we are specifying the strongest relations among the features. For example, in Figure 3, which shows the maximum spanning tree of features in the dataset of all websites, if a website has the feature “Shortening Service”, it is more probable that it has the feature “Double Slash Redirecting”, and in turn more probable that it has the feature “HTTPS Token”. In general, it can be said that hub features are more important than other website features in distinguishing between phishing and legitimate websites, and changes in the values of these features affect the values of other features.
Figure 4: Maximum spanning tree for the graph extracted from the dataset of
legitimate websites
The maximum spanning tree of features in legitimate websites (Figure 4) has six hubs, listed in table II. These features are the most effective ones for assessing the legitimacy of a website. In the Appendix, we describe in which state each feature is effective in determining the legitimacy of a website.
TABLE II: Website features discovered as hubs (nodes with high connections) in the Maximum Spanning Tree built from the legitimate websites Hub label | Degree | Community
---|---|---
Double Slash Redirecting | 6 | 2
URL Length | 4 | 1
Iframe | 3 | 0
Links Pointing To Page | 3 | 1
Port | 3 | 0
Submitting To Email | 3 | 0
In Figure 5, the maximum spanning tree of features in the phishing websites dataset is displayed, with the six features listed in table III as hubs. That means that, for checking whether a website is a phishing one, we can focus on these features in prediction models and gain better performance.
Figure 5: Maximum spanning tree for the graph extracted from the dataset of
phishing websites
Some features, such as "URL Length", appear among the hub features of both the legitimate and the phishing websites. This means that the length of a website's URL helps in judging both legitimacy and phishing; in other words, the URL length feature discriminates between the phishing and legitimate classes.
TABLE III: Website features discovered as hubs (nodes with high connections) in the maximum spanning tree built from the phishing websites

Hub label | Degree | Community
---|---|---
Port | 4 | 0
Shortening Service | 4 | 2
URL Length | 4 | 1
Age Of Domain | 3 | 1
Double Slash Redirecting | 3 | 2
Page Rank | 3 | 1
Table IV presents the gamma values, calculated (as described in section I-B5) for the features of all websites, the legitimate websites, and the phishing websites; a minimal sketch of such an estimate is given after the table. The higher the gamma value, the fewer the hubs and the higher their degrees, so those hubs are better candidates for predicting the website type (legitimate or phishing). The gamma value of the features in legitimate websites is higher than that in phishing websites. As a result, checking the legitimacy of a website with the hub features in table II performs better than checking whether a website is phishing with the features in table III, because the gamma value is higher and the tree is more scale-free (as described in section I-B5).
TABLE IV: Gamma values of the maximum spanning trees for the three datasets (all data, legitimate data, and phishing data). A higher gamma indicates a more scale-free network of features, whose hub features are more effective for website type prediction.

Group | Gamma values
---|---
All Of Data | 0.09
Legitimate Data | 0.13
Phishing Data | 0.08
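The paper's exact gamma computation is specified in section I-B5, which is not reproduced here. As a hedged stand-in, the sketch below estimates a power-law exponent from a tree's degree histogram by a log-log least-squares fit; this is a common but rough proxy for scale-freeness, and the numbers it produces need not match table IV.

```python
import numpy as np
import networkx as nx

def degree_gamma(tree: nx.Graph) -> float:
    """Least-squares estimate of the exponent gamma in P(k) ~ k^(-gamma)
    from the tree's degree histogram (a rough scale-freeness proxy)."""
    degrees = np.array([d for _, d in tree.degree()])
    ks, counts = np.unique(degrees, return_counts=True)
    slope, _intercept = np.polyfit(np.log(ks), np.log(counts), 1)
    return -slope
```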
Tables I, II, and III also show some results of the community detection algorithm run on the networks built in the second part of the method. Community detection algorithms cluster similar entities into the same community; accordingly, features appearing in the same community can be considered of similar importance in phishing/legitimate detection systems, so that one feature can stand in for the others. As table II shows, most features belong to community 0, so the features in this community play the most effective role in detecting legitimate websites. In table III, most features belong to community 1, so the features in this community play the most effective role in detecting phishing websites.
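A minimal sketch of this clustering step, assuming the weighted feature graph `g` from the earlier sketch and networkx >= 2.8, whose `louvain_communities` implements the Louvain method of [26]:

```python
import networkx as nx

def feature_communities(g: nx.Graph, seed: int = 0) -> dict:
    """Louvain clustering [26] of the feature graph; returns a mapping
    feature name -> community id. Requires networkx >= 2.8."""
    parts = nx.community.louvain_communities(g, weight="weight", seed=seed)
    return {feature: cid
            for cid, members in enumerate(parts)
            for feature in members}
```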
The above method was also applied to different random subsets of the dataset, and the results remained the same, which indicates their reliability.
To evaluate the proposed method, we proceed as follows. Table I shows three nodes with degree 4; these are the most important features for deciding the legitimacy or illegitimacy of a website, so we worked with them. Among the nodes with degree 3, we chose those directly connected to a node of degree 4, as can be seen in Figure 3. As a result, we dealt with five features: "SSLfinal_State", "Shortening_Service", "URL_Length", "URL_Of_Anchor", and "Double_Slash_Redirecting". We chose the eXtreme Gradient Boosting (XGBoost) [30] algorithm for classification with the selected features, since it is one of the strongest ensemble methods. The classification accuracy with these five selected features was 0.917. For comparison, we used Principal Component Analysis (PCA) [31] with five components for feature selection and ran XGBoost on the resulting components; the accuracy with the PCA features was 0.899.
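The comparison just described can be reproduced along the following lines. The 80/20 train/test split and the XGBoost hyperparameters are our assumptions, since the paper does not state its exact protocol, so the accuracies obtained this way need not match 0.917 and 0.899 exactly.

```python
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def compare_selection(X, y, hub_idx):
    """Accuracy of XGBoost on hub-selected columns vs. on 5 PCA components."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # (a) XGBoost on the five hub-selected features
    clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
    clf.fit(X_tr[:, hub_idx], y_tr)
    acc_hubs = accuracy_score(y_te, clf.predict(X_te[:, hub_idx]))

    # (b) XGBoost on five PCA components fitted on the training split
    pca = PCA(n_components=5).fit(X_tr)
    clf_pca = XGBClassifier(n_estimators=200, eval_metric="logloss")
    clf_pca.fit(pca.transform(X_tr), y_tr)
    acc_pca = accuracy_score(y_te, clf_pca.predict(pca.transform(X_te)))
    return acc_hubs, acc_pca
```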
In cybersecurity we face a complex system whose elements are not linearly related to one another, and some of the relationships are not even known. Network-based approaches are therefore well suited to modelling such a system, since they can discover hidden patterns in it. Methods like Principal Component Analysis (PCA), when used for feature selection and for finding the most important features in phishing detection, implicitly assume that the features are linearly related, which is not the case here.
## IV Conclusion
In order to find the most important features for intelligent models that detect phishing websites, we propose a method that finds these features and discovers the connections between them. In this way, we can prevent data loss to phishing websites and provide information security both for users of the HTTP protocol and for machines, such as smart routers, that filter malicious HTTP traffic.
In this paper, we built correlation-based networks of the features of the phishing, legitimate, and combined website datasets. We subsequently identified the important features in each network by finding its hubs, the nodes with the greatest effect on the other features.
By extracting the relation networks out of the datasets and finding the hub
nodes and gamma values for scale-freeness, we showed which features have a
stronger effect on the website class (phishing or legitimate) and which
website class is more dependent on particular features.
In the network made from phishing instances, the important features were Port, Shortening Service, URL Length, Age Of Domain, Double Slash Redirecting, and Page Rank. In the network made from legitimate instances, the important features were Double Slash Redirecting, URL Length, Iframe, Links Pointing To Page, Port, and Submitting To Email. In the network made from the whole dataset, the important features were Shortening Service, SSLfinal State, URL Length, Double Slash Redirecting, Links Pointing To Page, Port, Submitting To Email, and URL Of Anchor.
The results of our study can be used in smart routers and intrusion detection systems that monitor HTTP traffic and filter out phishing websites. Future research could analyse different supervised models with our reduced feature sets for phishing detection, and could employ similarity functions other than correlation to produce different networks that capture different information from the dataset.
## Appendix A Data Gathering
The dataset studied here is mainly gathered from the archives of "PhishTank" and "MillerSmiles" and from Google search operators [32]. It includes 11055 samples, 30 features, and labels indicating, according to the listed features, whether a website is phishing or not. Feature values are coded as 1, 0, or -1: the value 1 indicates that a website is legitimate, 0 that it is suspected of phishing, and -1 that it is phishing.
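For reference, a typical way to load the UCI copy of this dataset [32] and binarise the labels is sketched below; the file name and the "Result" column name are assumptions about the local copy.

```python
import pandas as pd

# Hypothetical local CSV export of the UCI "Phishing Websites" dataset [32];
# the file name and the "Result" label column are assumptions.
df = pd.read_csv("phishing_websites.csv")
X = df.drop(columns=["Result"]).to_numpy()      # 30 features valued in {-1, 0, 1}
y = (df["Result"] == 1).astype(int).to_numpy()  # 1 = legitimate, 0 = phishing (-1)
```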
Figure 6 shows all dataset features, together with the percentage of samples in each feature that are phishing, suspicious, or legitimate.
Figure 6: Features in the dataset and the percentage of them appearing in
phishing, legitimate, and suspicious classes.
To understand the dataset better, we briefly explain each feature, drawing on references such as the research of [33] and a computer networks reference book [34].
#### Having_IP_Address
If a URL includes an IP address in the domain name, the website is phishing. Note that the IP address is sometimes converted to hexadecimal, which makes it harder for users to notice.
#### URL_Length
Phishers use long URLs to hide the suspicious part in the address bar. Studies show that if a URL is shorter than 54 characters, the website is legitimate with high probability; if its length is between 54 and 75 characters, the website is suspicious; and if it is longer than 75 characters, the website is more likely to be phishing.
#### Shortening_Service
URL shortening significantly shortens a URL address. When users click a shortened URL, they are first referred to the website offering the shortening service and then forwarded to the target website. In this dataset, a shortened URL is considered a probable sign of a phishing website.
#### Having_At_Symbol
If a URL includes the symbol "@", the website is phishing; otherwise, it is legitimate.
#### Double_Slash_Redirecting
If a URL includes the symbol "//", it is a sign of redirecting users to a new website. Studies show that if a URL starts with HTTP, the symbol "//" should be at the sixth position, and if it starts with HTTPS, at the seventh position. Thus, if "//" appears at a later position, the website is phishing; otherwise, it is legitimate.
#### Prefix-Suffix
It rarely happens that a legitimate URL includes the symbol "-". Phishers often add a prefix or suffix separated by "-" to the URL so that users assume they are facing a legitimate website.
#### Having_Sub_Domain
Consider the URL "http://www.hud.ac.uk/students". By omitting "www." and counting the dots in the remaining part, we can derive a rule: if the number of remaining dots is 1, the website is legitimate; if it is 2, the website is suspicious (because it has a subdomain); and if it is greater than 2 (multiple subdomains), the website is more likely to be phishing.
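The four URL-string rules above translate directly into code. The sketch below uses the dataset's {1, 0, -1} value coding and, for simplicity, ignores the country-code TLD handling that a production subdomain counter would need.

```python
from urllib.parse import urlparse

# 1 = legitimate, 0 = suspicious, -1 = phishing (the dataset's coding).

def url_length(url: str) -> int:
    n = len(url)
    return 1 if n < 54 else (0 if n <= 75 else -1)

def having_at_symbol(url: str) -> int:
    return -1 if "@" in url else 1

def double_slash_redirecting(url: str) -> int:
    # "//" belongs at 1-based position 6 ("http://") or 7 ("https://");
    # a later occurrence signals redirection.
    return -1 if url.rfind("//") > 6 else 1

def having_sub_domain(url: str) -> int:
    host = urlparse(url).netloc
    if host.startswith("www."):
        host = host[4:]
    dots = host.count(".")
    # NB: the stated rule ignores country-code TLDs such as ".ac.uk".
    return 1 if dots == 1 else (0 if dots == 2 else -1)
```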
#### SSLfinal_State
If a URL supports HTTPS, the probability that the website is legitimate increases significantly. However, the existence of HTTPS is not by itself enough; for more assurance, features such as the source and age of the SSL (Secure Sockets Layer) certificate should be considered. Studies show that if a website uses HTTPS, the certificate's source is valid, and the certificate is more than one year old, the website is legitimate; if the website uses HTTPS but the certificate's source is not valid, it is suspected of phishing; otherwise, it is phishing with high probability.
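This three-way rule can be encoded as follows, assuming the HTTPS flag, the certificate-source trust flag, and the certificate age have already been extracted from the TLS certificate:

```python
def sslfinal_state(uses_https: bool, issuer_trusted: bool,
                   cert_age_years: float) -> int:
    """Inputs are assumed to be pre-extracted from the TLS certificate."""
    if uses_https and issuer_trusted and cert_age_years >= 1:
        return 1   # legitimate
    if uses_https and not issuer_trusted:
        return 0   # suspicious
    return -1      # phishing
```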
#### Domain_Registration_Length
Most phishing websites do not stay on a given domain for long, whereas a legitimate website that wants to keep a domain for a long time prepays the cost. Thus, if a domain is registered for only a short period, the website is more likely to be phishing; otherwise, it is legitimate.
#### Favicon
A favicon is a small graphical image placed as an icon beside the address bar. If this icon is loaded from a domain other than the website's own domain, the website is more likely to be phishing; otherwise, it is legitimate.
#### Port
When a user sends a request to a server, the port number on which the reply is expected is sent along with the request. To protect users' information, websites must keep their ports under control: if all ports are open, hackers can threaten users' information, whereas if all ports except 80 and 443 are closed, the probability of a break-in is reduced and the website is more likely to be legitimate.
#### HTTPS_Token
Phishers use HTTPS in the URL but not in the right place: they put it after HTTP to make the website look legitimate, as in the URL http://https-www-paypal-it-webapps-mpp-home.soft-hair.com/. So, if HTTPS appears in the domain part, the website is phishing; otherwise, it is legitimate.
#### Request_URL
In legitimate websites containing photos, videos, and similar objects, the objects' sources should share the website's domain. In general, if the percentage of objects loaded from a different domain is less than 22%, the website is legitimate; if it is between 22% and 61%, the website is suspicious; otherwise, the website is more likely to be phishing.
#### URL_Of_Anchor
If we need to link from our website to another website, we use the "<a>" tag. Two situations are problematic:

1. The "<a>" tag's domain differs from the website's domain.
2. The "<a>" tag does not link to any website (for example, <a href="#">).

If the percentage of anchors in either situation is less than 31% of the whole HTML code, the website is legitimate; if it is between 31% and 67%, the website is suspicious; otherwise, the website is more likely to be phishing.
#### Links_In_Tags
In HTML, programmers use the "<Meta>", "<Script>", and "<Link>" tags. In legitimate websites, these tags are expected to link within the same website domain. If the percentage of links to different domains is less than 17%, the website is legitimate; if it is between 17% and 81%, the website is suspicious; otherwise, the website is more likely to be phishing.
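Request_URL, URL_Of_Anchor, and Links_In_Tags all follow the same two-threshold pattern, so a single bucketing helper covers them; the percentages are assumed to be precomputed from the page's HTML.

```python
def bucket(pct: float, low: float, high: float) -> int:
    """Two-threshold rule: below `low` %, legitimate (1); up to `high` %,
    suspicious (0); above, phishing (-1)."""
    return 1 if pct < low else (0 if pct <= high else -1)

request_url   = lambda pct: bucket(pct, 22, 61)  # share of off-domain objects
url_of_anchor = lambda pct: bucket(pct, 31, 67)  # share of bad <a> anchors
links_in_tags = lambda pct: bucket(pct, 17, 81)  # share of off-domain tag links
```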
#### SFH (Server Form Handler)
This is a field containing the address to which submitted form data is sent. If the SFH is empty, the website is phishing; if its domain differs from the website's domain, the website is suspicious; otherwise, the website is legitimate.
#### Submitting_To_Email
If a website asks users to enter personal information, such as an email address, via the "mail()" or "mailto:" mechanisms, it is likely an attempt to harvest their information, so the website is more likely to be phishing.
#### Abnormal_URL
This feature can be extracted from the "WHOIS" database. If the URL does not include the host's name, the website is phishing; otherwise, it is legitimate.
#### Redirect
Legitimate websites redirect at most once. If the number of redirections is between 2 and 4, the website is suspicious; if it is greater, the website is phishing.
#### On_Mouseover
If the URL shown in the status bar changes on "onMouseOver", the website is phishing; otherwise, it is legitimate.
#### Right_Click
If right-clicking is disabled on the website, it is more likely to be phishing; otherwise, it is legitimate.
#### Pop_Up_Windows
In legitimate websites, it is not common to ask users to enter personal information in a pop-up window; such windows are used to welcome or warn users. In general, if users are not asked to enter text in pop-up windows, the website is legitimate with high probability; otherwise, it is phishing.
#### Iframe
HTML provides a tag for displaying one website inside another. Phishers may use this feature and make the frame invisible. Thus, if a website uses the <iframe> tag, it is more likely to be phishing; otherwise, it is legitimate.
#### Age_Of_Domain
Phishing websites are usually available only for a short time. Studies show that websites older than six months are legitimate; otherwise, they are phishing. This feature can be obtained from the "WHOIS" database.
#### DNS_Record (Domain Name Server Record)
This feature can be obtained from the "WHOIS" database. If it is empty or absent from "WHOIS", the website is phishing; otherwise, it is legitimate.
#### Web_Traffic
"Alexa" is a database that ranks websites by the number of views. Even in the worst case, legitimate websites rank among the top 100,000. Thus, if a website ranks within the top 100,000, it is legitimate; if it ranks beyond 100,000, it is suspicious; and if it does not appear in the "Alexa" ranking at all, it is more likely to be phishing.
#### Page_Rank
PageRank indicates the importance of a website and takes values between 0 and 1. In the examined dataset, 95% of the phishing websites had no PageRank, and the remaining 5% had a PageRank below 0.2. Thus, if the PageRank value is above 0.2, the website is legitimate; otherwise, it is phishing.
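The two ranking rules (Web_Traffic above and Page_Rank) can be sketched as below, with `alexa_rank` set to None when the site does not appear in the ranking:

```python
def web_traffic(alexa_rank) -> int:
    if alexa_rank is None:      # absent from the ranking altogether
        return -1
    return 1 if alexa_rank <= 100_000 else 0

def page_rank(pr: float) -> int:
    return 1 if pr > 0.2 else -1
```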
#### Google_Index
Because of their short lifetime, phishing websites are usually absent from Google's index. Thus, if a website is not in Google's index, it is more likely to be phishing; otherwise, it is legitimate.
#### Links_Pointing_To_Page
In general, the more links from other websites point to the website being tested, the more likely it is to be legitimate rather than phishing. In this dataset, if no other website points to the tested website, we consider it phishing; if fewer than 2 websites point to it, the website is suspicious; otherwise, it is legitimate.
#### Statistical_Report
"PhishTank Stats" and "StopBadware" are two institutions that provide statistical reports on phishing websites. If the website's host appears in the lists of phishing IPs or domains published by these two institutions, the website is phishing; otherwise, it is legitimate.
Figure 7: The proportion of phishing and legitimate websites in the dataset
After this brief overview of the features, we review the class labels. In the dataset, the phishing class is labelled -1 and the legitimate class 1; there are 4898 phishing samples and 6157 legitimate samples. Figure 7 shows the proportions of the phishing and legitimate classes in the whole dataset.
## List of abbreviations
HEFS: Hybrid Ensemble Feature Selection; HTTP: Hypertext Transfer Protocol;
URL: Uniform Resource Locator; HTTPS: Hypertext Transfer Protocol Secure; SSL:
Secure Sockets Layer; IP: Internet Protocol; SFH: Server Form Handler; DNS:
Domain Name System; HTML: HyperText Markup Language; MST: Maximum Spanning
Tree
## References
* [1] M. Wu, “Fighting phishing at the user interface,” Ph.D. dissertation, Massachusetts Institute of Technology, 2006.
* [2] M. Alvarez, D. Bales, J. Chung, S. Craig, K. Dahl, C. DeBeck, A. Eitan, B. Faby, R. Gates, D. Harz, L. Kessem, C. Lee, D. McMillen, S. Moore, G. Prassinos, C. Singleton, M. Usher, A. Vila, H. Virani, C. Zaboeva, and J. Zorabedian, “Ibm x-force threat intelligence index 2020,” _United States of America_ , 2020.
* [3] V. Patil, P. Thakkar, C. Shah, T. Bhat, and S. Godse, “Detection and prevention of phishing websites using machine learning approach,” in _2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA)_. IEEE, 2019, pp. 1–5.
* [4] J. Mao, J. Bian, W. Tian, S. Zhu, T. Wei, A. Li, and Z. Liang, “Phishing page detection via learning classifiers from page layout feature,” _EURASIP Journal on Wireless Communications and Networking_, vol. 2019, no. 1, p. 43, 2019.
* [5] K. L. Chiew, C. L. Tan, K. Wong, K. S. Yong, and W. K. Tiong, “A new hybrid ensemble feature selection framework for machine learning-based phishing detection system,” _Information Sciences_, vol. 484, pp. 153–166, 2019.
* [6] R. S. Rao, T. Vaishnavi, and A. R. Pais, “Catchphish: detection of phishing websites by inspecting urls,” _Journal of Ambient Intelligence and Humanized Computing_ , vol. 11, no. 2, pp. 813–825, 2020.
* [7] R. B. Basnet, A. H. Sung, and Q. Liu, “Feature selection for improved phishing detection,” in _International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems_. Springer, 2012, pp. 252–261.
* [8] M. Sharifi and S. H. Siadati, “A phishing sites blacklist generator,” in _2008 IEEE/ACS International Conference on Computer Systems and Applications_. IEEE, 2008, pp. 840–843.
* [9] P. Prakash, M. Kumar, R. R. Kompella, and M. Gupta, “Phishnet: predictive blacklisting to detect phishing attacks,” in _2010 Proceedings IEEE INFOCOM_. IEEE, 2010, pp. 1–5.
* [10] L.-H. Lee, K.-C. Lee, H.-H. Chen, and Y.-H. Tseng, “Poster: Proactive blacklist update for anti-phishing,” in _Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security_. ACM, 2014, pp. 1448–1450.
* [11] W. Yao, Y. Ding, and X. Li, “Deep learning for phishing detection,” in _2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom)_. IEEE, 2018, pp. 645–650.
* [12] L. Zhang and P. Zhang, “Phishtrim: Fast and adaptive phishing detection based on deep representation learning,” in _2020 IEEE International Conference on Web Services (ICWS)_. IEEE, 2020, pp. 176–180.
* [13] S. Chavan, A. Inamdar, A. Dorle, S. Kulkarni, and X.-W. Wu, “Phishing detection: Malicious and benign websites classification using machine learning techniques,” in _Proceeding of International Conference on Computational Science and Applications_. Springer, 2020, pp. 437–446.
* [14] H. Shirazi, S. R. Muramudalige, I. Ray, and A. P. Jayasumana, “Improved phishing detection algorithms using adversarial autoencoder synthesized data,” in _2020 IEEE 45th Conference on Local Computer Networks (LCN)_. IEEE, 2020, pp. 24–32.
* [15] M. Newman, _Networks: an introduction_. Oxford university press, 2010.
* [16] A.-L. Barabasi _et al._ , _Network science: Graph Theory, p. 27_. Cambridge university press, 2016.
* [17] M. Nekovee, Y. Moreno, G. Bianconi, and M. Marsili, “Theory of rumour spreading in complex social networks,” _Physica A: Statistical Mechanics and its Applications_ , vol. 374, no. 1, pp. 457–470, 2007.
* [18] J. Potts, S. Cunningham, J. Hartley, and P. Ormerod, “Social network markets: a new definition of the creative industries,” _Journal of cultural economics_ , vol. 32, no. 3, pp. 167–185, 2008.
* [19] P. H. Schimit and F. H. Pereira, “Disease spreading in complex networks: a numerical study with principal component analysis,” _Expert Systems with Applications_ , vol. 97, pp. 41–50, 2018.
* [20] J. Son and S. B. Kim, “Academic paper recommender system using multilevel simultaneous citation networks,” _Decision Support Systems_ , vol. 105, pp. 24–33, 2018.
* [21] A. Kanavos, I. Perikos, I. Hatzilygeroudis, and A. Tsakalidis, “Emotional community detection in social networks,” _Computers & Electrical Engineering_, vol. 65, pp. 449–460, 2018.
* [22] S. Kumar and I. Chong, “Correlation analysis to identify the effective data in machine learning: Prediction of depressive disorder and emotion states,” _International journal of environmental research and public health_ , vol. 15, no. 12, p. 2907, 2018.
* [23] G. Bonanno, G. Caldarelli, F. Lillo, and R. N. Mantegna, “Topology of correlation-based minimal spanning trees in real and model markets,” _Physical Review E_ , vol. 68, no. 4, p. 046130, 2003.
* [24] G.-J. Wang, C. Xie, and H. E. Stanley, “Correlation structure and evolution of world stock markets: Evidence from pearson and partial correlation-based networks,” _Computational Economics_, vol. 51, no. 3, pp. 607–635, 2018.
* [25] D.-M. Song, M. Tumminello, W.-X. Zhou, and R. N. Mantegna, “Evolution of worldwide stock markets, correlation structure, and correlation-based graphs,” _Physical Review E_ , vol. 84, no. 2, p. 026108, 2011.
* [26] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast unfolding of communities in large networks,” _Journal of statistical mechanics: theory and experiment_ , vol. 2008, no. 10, p. P10008, 2008.
* [27] P. Chejara and W. W. Godfrey, “Comparative analysis of community detection algorithms,” in _2017 Conference on Information and Communication Technology (CICT)_. IEEE, 2017, pp. 1–5.
* [28] J. B. Kruskal, “On the shortest spanning subtree of a graph and the traveling salesman problem,” _Proceedings of the American Mathematical society_ , vol. 7, no. 1, pp. 48–50, 1956.
* [29] E. A. Bender and S. G. Williamson, _Lists, Decisions and Graphs_. S. Gill Williamson, 2010.
* [30] T. Chen, “Story and lessons behind the evolution of XGBoost,” 2016.
* [31] K. Pearson, “Liii. on lines and planes of closest fit to systems of points in space,” _The London, Edinburgh, and Dublin philosophical magazine and journal of science_ , vol. 2, no. 11, pp. 559–572, 1901.
* [32] R. Mohammad, L. McCluskey, and F. Thabtah, “Uci machine learning repository: phishing websites data set (2015),” 2016. [Online]. Available: https://archive.ics.uci.edu/ml/datasets/Phishing+Websites
* [33] R. M. Mohammad, F. Thabtah, and L. McCluskey, “Phishing websites features,” unpublished, available at eprints.hud.ac.uk, 2015.
* [34] A. S. Tanenbaum, _Computer networks_, 4th ed. Prentice Hall, 2003.
& \; +\E_t\bigg[ \int_t^T \e^{cr}\Big( L_{\rm u} |\sigma^\t_r \partial u_r^s|^2+4 L_z |\sigma^\t_r z_r|^2 + 2 L_v |\sigma^\t_r v_r^r|^2+2 L_v |\sigma^\t_r v_r^s|^2+ L_{\rm v}|\sigma^\t_r \partial v_r^s|^2\Big) \d r \bigg].
\end{align*}
where we recall the notation $L_\star=\max\{L_y,L_u,L_{\rm u},L_z,L_v,L_{\rm v}\}$. Thus, for any $c>0$ we obtain
\begin{align*}
& \max\big\{ \e^{\frac{c}2 t} |\Yc_t| ,\, \e^{\frac{c}2 t} |\Uc_t| ,\,\e^{\frac{c}2 t} |U_t^s|,\,\e^{\frac{c}2 t} |\partial U_t^s| \big\} \\
\leq &\ \|\xi\|_{\Lc^{\infty,c}} + \|\tilde h\|_{\L^{1,\infty,2,c}}+ 2\big( \|\eta \|_{\Lc^{\infty,2,c}} + \|\tilde g\|_{\L^{1,\infty,2,c}} \big) +(1+T+TL_{{\rm u}} )\big( \|\partial_s \eta \|_{\Lc^{\infty,2,c}}+ \|\nabla \tilde g\|_{\L^{1,\infty,2,c}} \big) \\
& \; + (4+T+L_{\rm u} T) L_{\star} T \Big( \|y\|^2_{\Sc^{2\infty,c}}+\|u\|^2_{\Sc^{\infty,2,c}}+ \|\partial u\|^2_{\Sc^{\infty,2,c}}\Big) + (4+T+L_{\rm u} T) L_{\star} \Big( \|z\|^2_{\H^{2,c}_{{\rm BMO}}}+\|v\|^2_{\overline \H^{2,2,c}_{{\rm BMO}}}+ \|\partial v\|^2_{\H^{2,2,c}_{{\rm BMO}}}\Big),
\end{align*}
* We show $(\Zc,\Vc,\Nc,\Mc)\in \big(\H^{2,c}_{{\rm BMO}}\big)^2\times \big(\M^{2,c}\big)^2$ and $\| V\|_{ \H^{2,2,c}_{{\rm BMO}}}^2+\|M\|_{{\M}^{2,2,c}}^2+\| \partial V\|_{ \H^{2,2,c}_{{\rm BMO}}}^2+\|\partial M\|_{{\M}^{2,2,c}}^2<\infty $.
From $(iii)$, <Ref> and <Ref>, together with Young's inequality, yield that, for any $\eps_i>0$, $i\in\{1,2\}$, and defining $ C_{\eps_{1}}:=\eps_1^{-1} 7T L_{\rm u}^2$ and $ C_{\eps_{2}}:= \eps_2^{-1} 7T$, we have
\begin{align*}
2 \Yc_r \cdot h_r-c |\Yc_r|^2 & \leq 2 \|\Yc\|_{\Sc^{\infty,c}} \big( L_y |y_r|^2+L_z |\sigma^\t_r z_r|^2+L_u |u_r^r|^2+L_v |\sigma^\t_r v_r^r|^2+ |\tilde h_r|\big)\\
&\quad + \eps_1(7T)^{-1} |\partial U_r^r|^2+ ( C_{\eps_{1}}-c) |\Yc_r|^2, \\[0.5em]
2 \Uc_r \cdot g_r-c |\Uc_r|^2 & \leq 2 \|\Uc\|_{\Sc^{\infty,c}} \big( L_y | y_r|^2+L_z |\sigma^\t_r z_r|^2+ | u_r^r|^2+L_v |\sigma^\t_r v_r^r|^2+ |\tilde g_r|\big)\\
&\quad + \eps_2(7T)^{-1} |\partial U_r^r|^2 + ( C_{\eps_{2}}-c) |\Uc_r|^2 , \\[0.5em]
2 U_r^s \cdot g_r(s)-c |U_r^s|^2 & \leq 2 \|U\|_{\Sc^{\infty,2,c}} \big( L_u | u_r^s|^2 + L_{ v} |\sigma^\t_r v_r^s|^2+L_y | y_r|^2+L_z |\sigma^\t_r z_r|^2+ |\tilde g_r(s)| \big)-c |U_r^s|^2 , \\[0.5em]
2 \partial U_r^s \cdot \nabla g_r(s)-c |\partial U_r^s|^2& \leq 2\|\partial U\|_{\Sc^{\infty,c,2}} \big(L_{\rm u} | \partial u_r^s|^2 + L_{ v} |\sigma^\t_r \partial v_r^s|^2+ L_u | u_r^s|^2 + L_{\rm v} |\sigma^\t_r v_r^s|^2+ |\nabla \tilde g_r(s)| \big)\\
&\quad +2\|\partial U\|_{\Sc^{\infty,c,2}} \big( L_y | y_r|^2+L_z |\sigma^\t_r z_r|^2\big) -c |\partial U_r^s|^2.
\end{align*}
These inequalities, in combination with the analogous version of <Ref> (which holds for $c>2L_{\rm u}$), Young's inequality, and Itô's formula, as in (<ref>), show that for any $\eps_i>0$, $i\in\{3,\dots,24\}$,
\begin{align*}
&\sum_{i=1}^4 \e^{ct} |\Yf_t^i|^2+\E_t\bigg[ \int_t^T \e^{cr} |\sigma^\t_r \Zf_r^i|^2 \d r+\int_t^T\e^{c r-} \d \Tr [\Nf^i]_r\bigg]+ \E_t\bigg[ \int_t^T \e^{cr}\big( |\Yc_r|^2 (c-C_{\eps_{1}})+|\Uc_r|^2 (c- C_{\eps_{2}} )\big)\d r\bigg] \\
&\; +\sup_{s\in [0,T]} \E_t\bigg[ \int_t^T c \e^{cr}|U_r^s|^2 \d r\bigg] +\sup_{s\in [0,T]} \E_t\bigg[ \int_t^T c \e^{cr} |\partial U_r^s|^2 \d r\bigg]\\
\leq & \ \E_t\Big[ \e^{cT}\big( |\xi|^2+ |\eta(T)|^2+|\eta(s)|^2+|\partial_s \eta(s)|^2\big)\Big] + ( \eps_1+\eps_2) \Big( \|\partial_s \eta \|_{\Lc^{\infty,2,c}}^2 +\|\nabla \tilde g\|^2_{\L^{1,\infty,2,c}} \Big) \\
&\; +( \eps_1+\eps_2) \Big( L_\star T^2 \|y\|_{\Sc^{\infty,c}}^4+L_\star T^2 \|u\|_{\Sc^{\infty,c}}^4+L_\star T^2 \|\partial u\|_{\Sc^{\infty,c}}^4+ 2 L_\star^2 \Big( \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}} + \|v\|^4_{\H^{2,2,c}_{{\rm BMO}}}+ \|z\|^4_{\H^{2,c}_{{\rm BMO}}}\Big) \Big)\\
&\; +\big(\eps_{3}^{-1}+ \eps_{7}^{-1}+\eps_{8}^{-1}+\eps_{9}^{-1} +\eps_{10}^{-1}\big) \|\Yc\|^2_{\Sc^{\infty,c}} +\big(\eps_{4}^{-1}+\eps_{11}^{-1}+ \eps_{12}^{-1}+ \eps_{13}^{-1}+\eps_{14}^{-1}\big) \|\Uc\|^2_{\Sc^{\infty,c}} \\
&\; +\big(\eps_{5}^{-1}+ \eps_{15}^{-1}+\eps_{16}^{-1}+\eps_{17}^{-1} +\eps_{18}^{-1}\big) \|U\|^2_{\Sc^{\infty,c,2}} +\big(\eps_{6}^{-1}+\eps_{19}^{-1}+\eps_{20}^{-1} +\eps_{21}^{-1}+\eps_{22}^{-1}+\eps_{23}^{-1}+\eps_{24}^{-1}\big) \|\partial U\|^2_{\Sc^{\infty,c,2}} \\
&\; +\E_t\bigg[\eps_3 \bigg|\int_t^T \e^{cr} |\tilde h_r|\d r\bigg|^2 + \eps_{4} \bigg|\int_t^T \e^{cr} |\tilde g_r|\d r\bigg|^2 + \eps_{5} \bigg|\int_t^T \e^{cr} |\tilde g_r(s)|\d r\bigg|^2+ \eps_{6} \bigg|\int_t^T \e^{cr} |\nabla \tilde g_r|\d r\bigg|^2 \bigg]\\
&\; +( \eps_{7} +\eps_{11}+\eps_{15}+\eps_{19} )L_\star T^2\|y\|^4_{\Sc^{\infty,c}}
+( \eps_{9} +\eps_{13} +\eps_{17}+\eps_{21})L_\star T^2 \|u\|^4_{\Sc^{\infty,2,c}}+ \eps_{23}L_\star T^2 \|\partial u\|^4_{\Sc^{\infty,2,c}}\\
&\; +( \eps_{8}+\eps_{12}+\eps_{16}+\eps_{20} ) L_z^2\E_t\bigg[ \bigg|\int_t^T \e^{cr} |\sigma^\t_r z_r|^2 \d r\bigg|^2\bigg]+( \eps_{10}+\eps_{14} ) L_v^2\E_t\bigg[ \bigg|\int_t^T \e^{cr} |\sigma^\t_r v_r^r|^2 \d r\bigg|^2\bigg]\\
&\; + (\eps_{18}+\eps_{22}) L_v^2\E_t\bigg[ \bigg|\int_t^T \e^{cr} |\sigma^\t_r v_r^s|^2 \d r\bigg|^2\bigg] +\eps_{24} L_{\rm v}^2\E_t\bigg[ \bigg|\int_t^T \e^{cr} |\sigma^\t_r \partial v_r^s|^2 \d r\bigg|^2\bigg]
\end{align*}
We now let $\tau \in \Tc_{0,T}$. In light of (<ref>), for
\begin{align}\label{Eq:cZwelldefinedq}
\begin{split}
c\geq \max &\ \{ \eps_1^{-1} 7T L_{\rm u}^2, \eps_2^{-1} 7T , 2L_{\rm u}\} ,
\end{split}
\end{align}
<Ref> yields
\begin{align*}
&\sum_{i=1}^4 \e^{ct} |\Yf_t^i|^2+\E_t\bigg[ \int_t^T \e^{cr} |\sigma^\t_r \Zf_r^i|^2 \d r+\int_t^T\e^{c r-} \d \Tr [\Nf^i]_r\bigg]\\
\leq&\ \|\xi\|_{\Lc^{\infty,c}}^2+2 \|\eta \|_{\Lc^{\infty,2,c}}^2 \! + (1+ \eps_1+\eps_2)\|\partial_s \eta \|_{\Lc^{\infty,2,c}}^2\! + \eps_3 \| \tilde h\|^2_{\L^{1,\infty,c}} \!+ ( \eps_{4} +\eps_{5}) \| \tilde g\|^2_{\L^{1,\infty,2,c}} \! + ( \eps_1+\eps_2+ \eps_{6}) \| \nabla \tilde g\|^2_{\L^{1,\infty,2,c}} \\
&\; + L_\star^2 T^2( \eps_1+\eps_2+\eps_{7} +\eps_{11}+\eps_{15}+\eps_{19} ) \|y\|^4_{\Sc^{\infty,c}} + L_\star^2 T^2( \eps_1+\eps_2+ \eps_{9}+\eps_{13}+\eps_{17}+\eps_{21} ) \|u\|^4_{\Sc^{\infty,2,c}}\\
&\; +L_\star^2 T^2(\eps_1+\eps_2+\eps_{23})\|\partial u\|_{\Sc^{\infty,2,c}}^4 +2 L_\star^2 ( \eps_1+\eps_2+ \eps_{8}+\eps_{12}+\eps_{16}+\eps_{20} ) \|z\|^4_{\H^{2,c}_{{\rm BMO}}} \\
&\; +2 L_\star^2 ( \eps_1+\eps_2+ \eps_{10}+\eps_{14}+\eps_{18}+\eps_{22} ) \|v\|^4_{\overline \H^{2,2,c}_{{\rm BMO}}} + 2 L_\star^2 ( \eps_1+\eps_2+\eps_{24}) \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}}\\
&\; +\big(\eps_{3}^{-1}+ \eps_{7}^{-1}+\eps_{8}^{-1}+\eps_{9}^{-1} +\eps_{10}^{-1}\big) \|\Yc\|^2_{\Sc^{\infty,c}} +\big(\eps_{4}^{-1}+\eps_{11}^{-1}+ \eps_{12}^{-1}+\eps_{13}^{-1} + \eps_{14}^{-1}\big) \|\Uc\|^2_{\Sc^{\infty,c}} \\
&\; +\big(\eps_{5}^{-1}+ \eps_{15}^{-1}+\eps_{16}^{-1}+\eps_{17}^{-1}+\eps_{18}^{-1} \big) \|U\|^2_{\Sc^{\infty,c,2}} +\big(\eps_{6}^{-1}+\eps_{19}^{-1}+\eps_{20}^{-1} +\eps_{21}^{-1}+\eps_{22}^{-1}+\eps_{23}^{-1}+\eps_{24}^{-1}\big) \|\partial U\|^2_{\Sc^{\infty,c,2}}
\end{align*}
which in turn leads to
\begin{align}\label{Eq:thm:wdq:ineq:final}
\begin{split}
&\frac{1}{10}\Big(\|\Yc\|^2_{\Sc^{\infty,c}} +\|\Uc\|^2_{\Sc^{\infty,c}} +\|U\|^2_{\Sc^{\infty,2,c}}+\|\partial U\|^2_{\Sc^{\infty,2,c}} + \|\Zc\|_{\H^{2,c}_{ {\rm BMO}}}^2 \\
&\quad + \|V\|_{\overline \H^{2,2,c}_{{\rm BMO}}}^2+ \|\partial V\|_{\H^{2,2,c}_{{\rm BMO}}}^2 + \|\Nc\|_{{\M}^{2,c}}^2+ \|M\|_{{\M}^{2,2,c}}^2+ \|\partial M\|_{{\M}^{2,2,c}}^2
\Big) \\
\leq&\ \|\xi\|_{\Lc^{\infty,c}}^2+2 \|\eta \|_{\Lc^{\infty,2,c}}^2 \! + (1+ \eps_1+\eps_2)\|\partial_s \eta \|_{\Lc^{\infty,2,c}}^2\! + \eps_3 \| \tilde h\|^2_{\L^{1,\infty,c}} \\
&\; + ( \eps_{4} +\eps_{5}) \| \tilde g\|^2_{\L^{1,\infty,2,c}} \! + ( \eps_1+\eps_2+ \eps_{6}) \| \nabla \tilde g\|^2_{\L^{1,\infty,2,c}} \\
&\; + L_\star^2 T^2( \eps_1+\eps_2+\eps_{7} +\eps_{11}+\eps_{15}+\eps_{19} ) \|y\|^4_{\Sc^{\infty,c}} + L_\star^2 T^2( \eps_1+\eps_2+ \eps_{9}+\eps_{13}+\eps_{17}+\eps_{21} ) \|u\|^4_{\Sc^{\infty,2,c}}\\
&\; +L_\star^2 T^2(\eps_1+\eps_2+\eps_{23})\|\partial u\|_{\Sc^{\infty,2,c}}^4 +2 L_\star^2 ( \eps_1+\eps_2+ \eps_{8}+\eps_{12}+\eps_{16}+\eps_{20} ) \|z\|^4_{\H^{2,c}_{{\rm BMO}}} \\
&\; +2 L_\star^2 ( \eps_1+\eps_2+ \eps_{10}+\eps_{14}+\eps_{18}+\eps_{22} ) \|v\|^4_{\overline \H^{2,2,c}_{{\rm BMO}}} + 2 L_\star^2 ( \eps_1+\eps_2+\eps_{24}) \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}}\\
&\; +\big(\eps_{3}^{-1}+ \eps_{7}^{-1}+\eps_{8}^{-1}+\eps_{9}^{-1} +\eps_{10}^{-1}\big) \|\Yc\|^2_{\Sc^{\infty,c}} +\big(\eps_{4}^{-1}+\eps_{11}^{-1}+ \eps_{12}^{-1}+\eps_{13}^{-1} + \eps_{14}^{-1}\big) \|\Uc\|^2_{\Sc^{\infty,c}} \\
&\; +\big(\eps_{5}^{-1}+ \eps_{15}^{-1}+\eps_{16}^{-1}+\eps_{17}^{-1}+\eps_{18}^{-1} \big) \|U\|^2_{\Sc^{\infty,c,2}} +\big(\eps_{6}^{-1}+\eps_{19}^{-1}+\eps_{20}^{-1} +\eps_{21}^{-1}+\eps_{22}^{-1}+\eps_{23}^{-1}+\eps_{24}^{-1}\big) \|\partial U\|^2_{\Sc^{\infty,c,2}}
\end{split}
\end{align}
From (<ref>) we conclude $(Z,N)\in \H^{2,c}_{{\rm BMO}}\times {\M}^{2,c}$, $\| V\|_{\overline \H^{2,2,c}_{{\rm BMO}}}^2+\| \partial V\|_{\H^{2,2,c}_{{\rm BMO}}}^2+ \|M\|_{{\M}^{2,2,c}}^2+ \|\partial M\|_{{\M}^{2,2,c}}^2<\infty$.
Defining $C_{\eps}$ analogously, if for some $\gamma\in(0,\infty)$
\begin{align}\label{Eq:thm:wpq:smalldatacond}
\begin{split}
I_0^\eps\leq \gamma R^2/10,
\end{split}
\end{align}
then substituting back into (<ref>) we obtain
\begin{align*}
& \|(Y,Z,N,U,V,M,\partial U,\partial V,\partial M)\|^2_{\Hc^{c}}\\
\leq &\ C_{\eps}^{-1} \Big( 10 I_0^\eps +10L_\star^2 \max\{2,T^2\} \big(
( \eps_1+\eps_2+\eps_{7} +\eps_{11}+\eps_{15}+\eps_{19} ) \|y\|^4_{\Sc^{\infty,c}} +( \eps_1+\eps_2+ \eps_{9}+\eps_{13}+\eps_{17}+\eps_{21} ) \|u\|^4_{\Sc^{\infty,2,c}}\\
&\qquad +(\eps_1+\eps_2+\eps_{23})\|\partial u\|_{\Sc^{\infty,2,c}}^4 + ( \eps_1+\eps_2+ \eps_{8}+\eps_{12}+\eps_{16}+\eps_{20} ) \|z\|^4_{\H^{2,c}_{{\rm BMO}}} \\
&\qquad + ( \eps_1+\eps_2+ \eps_{10}+\eps_{14}+\eps_{18}+\eps_{22} ) \|v\|^4_{\overline \H^{2,2,c}_{{\rm BMO}}} + ( \eps_1+\eps_2+\eps_{24}) \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}}\Big) \\
\leq &\ C_{\eps}^{-1} R^2 \bigg(\gamma +10L_\star^2 \max\{2,T^2\} R^2 \bigg( \eps_1+\eps_2+\sum_{i=7}^{24} \eps_{i} \bigg) \bigg)
\end{align*}
Therefore, to obtain $\Tf(\Bc_R)\subseteq \Bc_R$, i.e. that the image under $\Tf$ of the ball of radius $R$ is contained in the ball of radius $R$, it suffices to find $R^2$ such that the term in parentheses above is less than or equal to $C_{\eps}$, i.e.
\[
R^2 \leq \frac1{1 0 L_\star^2\max\{ 2, T^2\} }\frac{ C_{\eps} - \gamma }{ \eps_1+\eps_2+ \sum_{i=7}^{24} \eps_{i} }
\]
which, after optimising the choice of the $\eps$'s, yields
\begin{align}\label{Eq:Rwelldefinedq}
R^2 < \frac{1}{2^6\cdot 3\cdot 5^2\cdot 7\cdot L^2_\star\cdot \max\{ 2, T^2\} }
\end{align}
* The continuity of the maps $([0,T],\Bc([0,T])) \longrightarrow (\Sc^{\infty,c},\|\cdot \|_{ \Sc^{\infty,c}})$ $\big($resp. $(\H_{{\rm BMO}}^{2,c},\|\cdot \|_{\H_{{\rm BMO}}^{2,c}})$, $({\M}^{2,c},\|\cdot \|_{{\M}^{2,c}} )\big)$: $s \longmapsto \varphi^s$ for $\varphi=U,\partial U$ $($resp. $V,\partial V, M,\partial M)$ follows analogously to the proof of <Ref>.
We conclude, $\Tf(\Bc_R)\subseteq \Bc_R$ for all $R$ satisfying (<ref>).
Step 2: We now argue that $\Tf$ is a contraction in $\Bc_R\subseteq \Hc$ for the norm $\| \cdot \|_{\Hc^c}$. Let
\begin{align*}
\delta h_t&:=h_t(y^1_t,z^1_t,u_t^{1,t}, v_t^{1,t},\partial U_t^{1,t})-h_t(y^2_t,z^2_t,u_t^{2,t}, v_t^{2,t},\partial U_t^{2,t}),\\
\delta g_t&:=g_t(t,u_t^{1,t},v^{1,t}_t,y_t^1,z_t^1)-\partial U_t^{1,t} -g_t(t,u_t^{2,t},v^{2,t}_t,y_t^2,z_t^2)+\partial U_t^{2,t},\\
\delta \tilde h_t&:=h_t(y^1_t,z^1_t,u_t^{1,t}, v_t^{1,t},\partial U_t^{2,t})-h_t(y^2_t,z^2_t,u_t^{2,t}, v_t^{2,t},\partial U_t^{2,t}),\\
\delta \tilde g_t&:=g_t(t,u^{1,t}_t,v^{1,t}_t,y_t^1,z_t^1) -g_t(t,u^{2,t}_t,v^{2,t}_t,y_t^2,z_t^2),\\
\delta \tilde g_t(s)&:=g_t(s,u^{1,s}_t,v^{1,s}_t,y_t^1,z_t^1)-g_t(s,u^{2,s}_t,v^{2,s}_t,y_t^2,z_t^2),\\
\delta \nabla \tilde g_t(s)&:=\nabla g_t(s,\partial u^{1,s}_t,\partial v^{1,s}_t,u^{1,s}_t,v^{1,s}_t,y_t^1,z_t^1)-\nabla g_t(s,\partial u^{2,s}_t,\partial v^{2,s}_t,u^{2,s}_t,v^{2,s}_t,y_t^2,z_t^2).
\end{align*}
Applying Itô's formula we obtain that for any $t\in[0,T]$
\begin{align*}
&\sum_{i=1}^4 \e^{ct} |\delta \Yf_t^i|^2+\int_t^T \e^{cr} |\sigma^\t_r \delta \Zf_r^i|^2 \d r+\int_t^T\e^{c r-} \d \Tr [\delta \Nf^i]_r +\delta \widetilde \Mf_t -\delta \widetilde \Mf_T\\
=&\ \int_t^T \e^{c r} \bigg( 2 \delta \Yc_r \cdot \delta h_r + 2 \delta \Uc_r \cdot \delta g_r + 2 \delta U_r^s \cdot \delta \tilde g_r(s)+ 2 \delta \partial U_r^s \cdot \delta \nabla \tilde g_r(s)\bigg) \d r \\
\leq &\ \int_t^T \e^{c r} \bigg(2 | \delta \Yc_r| \big( L_{\rm u} |\delta \partial U_r^r|+|\delta \tilde h_r|\big) +2 | \delta \Uc_r| \big( |\delta \partial U_r^r|+|\delta \tilde g_r| \big) +2 | \delta U_r^s| |\delta \tilde g_r(s)| +2 | \delta \partial U_r^s| |\delta \nabla \tilde g_r(s)| -c \sum_{i=1}^4 |\delta \Yf_r^i|^2 \bigg) \d r
\end{align*}
where $\delta \widetilde \Mf$ denotes the corresponding martingale term. Let $\tau \in \Tc_{0,T}$; as in <Ref>, for $c>2L_{\rm u}$ we obtain
\begin{align}\label{Eq:ineqdeltaUtt:quadratic}
\begin{split}
\E_\tau\bigg[ \int_\tau^T \frac{ \e^{cr}}{3 T} |\delta \partial U_r^r|^2\d r\bigg] &\leq \sup_{s\in [0,T]} \es_{\tau \in \Tc_{0,T}} \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \nabla \tilde g_r(s)|\d r \bigg] \bigg|^2
\end{split}
\end{align}
We now take conditional expectations with respect to $\Fc_\tau$ in the expression above and use <Ref> in combination with (<ref>). Young's inequality then yields that, for any $\tilde \eps_i\in (0,\infty)$, $i\in \{1,2\}$, and
\begin{align}\label{Eq:c:contraction1q}
\begin{split}
c\geq \max&\ \{ \tilde \eps_1^{-1} 3T L_{\rm u}^2,\; 3T\tilde \eps_2^{-1},\; 2 L_{\rm u} \},
\end{split}
\end{align}
it follows that
\begin{align}\label{Eq:contractionItoq}\begin{split}
& \sum_{i=1}^4 \e^{ct} |\delta \Yf_t^i|^2+\E_\tau\bigg[\int_t^T \e^{cr} |\sigma^\t_r \delta \Zf_r^i|^2 \d r+\int_t^T\e^{c r-} \d \Tr [\delta \Nf^i]_r \bigg] \\
\leq &\ \tilde \eps_3^{-1} \|\delta Y\|^2_{\Sc^{\infty,c}}+ \tilde \eps_4^{-1} \|\delta \Uc\|^2_{\Sc^{\infty,2,c}} + \tilde \eps_{5}^{-1} \|\delta U\|^2_{\Sc^{\infty,2,c}} + \tilde \eps_{6}^{-1} \|\delta \partial U\|^2_{\Sc^{\infty,2,c}} \\
&\; + ( \tilde \eps_1+\tilde \eps_2+\tilde \eps_6) \sup_{s\in [0,T]} \es_{\tau \in \Tc_{0,T}} \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \nabla \tilde g_r(s)|\d r \bigg] \bigg|^2 + \tilde \eps_3 \es_{\tau \in \Tc_{0,T}} \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \tilde h_r |\d r \bigg] \bigg|^2 \\
&\; + \tilde \eps_4 \es_{\tau \in \Tc_{0,T}} \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \tilde g_r|\d r \bigg] \bigg|^2 + \tilde \eps_{5} \sup_{s\in [0,T]} \es_{\tau \in \Tc_{0,T}} \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \tilde g_r(s)|\d r \bigg] \bigg|^2
\end{split}
\end{align}
We now estimate the terms on the right-hand side of (<ref>). Note that in light of <Ref> we have
\begin{align*}
&\, \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \nabla \tilde g_r(s)|\d r \bigg] \bigg|^2 \\
\leq &\, \bigg| \E_\tau \bigg[ \int_\tau^T \e^{c r} \Big( L_{\rm u} | \delta \partial u^s_r|\big( | \partial u_r^{1,s}|+| \partial u_r^{2,s}|\big)+ L_{\rm v} |\sigma^\t_r \delta \partial v^s_r|\big( |\sigma^\t_r \partial v_r^{1,s}|+|\sigma^\t_r \partial v_r^{2,s}|\big)\\
&\hspace{6em} + L_u| \delta u^s_r|\big( | u_r^{1,s}|+| u_r^{2,s}|\big) + L_v|\sigma^\t_r \delta v^s_r|\big( |\sigma^\t_r v_r^{1,s}|+|\sigma^\t_r v_r^{2,s}|\big) \\
&\hspace{6em}+ L_y| \delta y_r|\big( | y_r^1|+| y_r^2|\big) + L_z|\sigma^\t_r \delta z_r|\big( |\sigma^\t_r z_r^1|+|\sigma^\t_r z_r^2|\big)\Big) \d r \bigg] \bigg|^2 \\
\leq &\ 6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta \partial u^s_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( | \partial u_r^{1,s}|+| \partial u_r^{2,s}|\big)^2\d r\bigg]\\
&+6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta \partial v^s_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( |\sigma^\t_r \partial v_r^{1,s}|+|\sigma^\t_r \partial v_r^{2,s}|\big)^2\d r\bigg]\\
&+6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta u^s_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( | u_r^{1,s}|+| u_r^{2,s}|\big)^2\d r\bigg]\\
& + 6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta v^s_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( |\sigma^\t_r v_r^{1,s}|+|\sigma^\t_r v_r^{2,s}|\big)^2\d r\bigg]\\
&+6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta y_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( | y_r^{1}|+| y_r^{2}|\big)^2\d r\bigg]\\
&+ 6 L_\star^2 \E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta z_r|^2\d r \bigg] \E_\tau \bigg[ \int_\tau^T \e^{c r} \big( |\sigma^\t_r z_r^1|+|\sigma^\t_r z_r^2|\big)^2\d r\bigg]\\
\leq&\ 6 L_\star^2 R^2\max\{ 2, T\} \bigg( \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta \partial u^s_r|^2\d r \bigg]+ \E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta \partial v^s_r|^2\d r \bigg] + \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta u^s_r|^2\d r \bigg]\\
&\hspace{8em} +\E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta v^s_r|^2\d r \bigg]+ \E_\tau \bigg[ \int_\tau^T \e^{c r} | \delta y_r|^2\d r \bigg]+\E_\tau \bigg[ \int_\tau^T \e^{c r} |\sigma^\t_r \delta z_r|^2\d r \bigg]\bigg)\\
\leq&\ 6 L_\star^2 R^2\max\{ 2, T^2\} \Big( \|\delta \partial u\|_{\Sc^{\infty,2,c}}^2+ \| \delta \partial v\|_{\H^{2,2,c}_{{\rm BMO}}}^2 +\|\delta u\|_{\Sc^{\infty,2,c}}^2+ \| \delta v\|_{\overline \H^{2,2,c}_{{\rm BMO}}}^2 +\|\delta y\|_{\Sc^{\infty,c}}^2+\| \delta z\|_{\H^{2,c}_{{\rm BMO}}}^2\Big)
\end{align*}
where in the second inequality we used (<ref>) and the Cauchy–Schwarz inequality. Similarly,
\begin{align*}
&\max\bigg\{ \bigg|\E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta\tilde h_r|\d r \bigg] \bigg|^2 , \bigg|\E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \tilde g_r(s)|\d r \bigg] \bigg|^2, \bigg|\E_\tau \bigg[ \int_\tau^T \e^{c r} |\delta \tilde g_r|\d r \bigg] \bigg|^2\bigg\}\\
&\ \leq 4 L_{\star}^2 R^2\max\{ 2, T^2\} \Big(\|\delta y\|_{\Sc^{\infty,c}}^2+ \| \delta z\|_{\H^{2,c}_{{\rm BMO}}}^2+ \|\delta u\|_{\Sc^{\infty,2,c}}^2+ \| \delta v \|_{\overline \H^{2,c}_{{\rm BMO}}}^2\Big)
\end{align*}
Overall, substituting back into (<ref>), we obtain
\begin{align*}
& \sum_{i=1}^4 \e^{ct} |\delta \Yf_t^i|^2+\E_\tau\bigg[\int_t^T \e^{cr} |\sigma^\t_r \delta \Zf_r^i|^2 \d r+\int_t^T\e^{c r-} \d \Tr [\delta \Nf^i]_r \bigg] \\
\leq &\ \tilde \eps_3^{-1} \|\delta Y\|^2_{\Sc^{\infty,c}}+ \tilde \eps_4^{-1} \|\delta \Uc\|^2_{\Sc^{\infty,2,c}} + \tilde \eps_{5}^{-1} \|\delta U\|^2_{\Sc^{\infty,2,c}} + \tilde \eps_{6}^{-1} \|\delta \partial U\|^2_{\Sc^{\infty,2,c}} \\
&\; + 6 ( \tilde \eps_1+\tilde\eps_2+\tilde\eps_{6}) L_\star^2 R^2\max\{ 2, T^2\} \Big( \|\delta \partial u\|_{\Sc^{\infty,2,c}}^2+ \| \delta \partial v\|_{\H^{2,2,c}_{{\rm BMO}}}^2 +\|\delta u\|_{\Sc^{\infty,2,c}}^2+ \| \delta v\|_{\overline \H^{2,2,c}_{{\rm BMO}}}^2 +\|\delta y\|_{\Sc^{\infty,c}}^2+\| \delta z\|_{\H^{2,c}_{{\rm BMO}}}^2\Big) \\
& \; +4( \tilde \eps_3+\tilde\eps_4+\tilde\eps_{5}) L_{\star}^2 R^2\max\{ 2, T^2\} \ \Big(\|\delta y\|_{\Sc^{\infty,c}}^2+ \| \delta z\|_{\H^{2,c}_{{\rm BMO}}}^2+ \|\delta u\|_{\Sc^{\infty,2,c}}^2+ \| \delta v \|_{\overline \H^{2,c}_{{\rm BMO}}}^2\Big)
\end{align*}
If we define, for $\tilde \eps_i>10$, $i\in \{3,4,5,6\}$, $ C_{\tilde \eps}:= \min\big\{ 1-10/ \tilde \eps_{3 },\; 1-10/ \tilde \eps_{4 },\;1-10/ \tilde \eps_{5},\;1-10/ \tilde \eps_{6}\big\}$, we deduce
\begin{align}\label{Eq:thm:contq:final}\begin{split}
\| \delta \mathfrak{H}\|_{\Hc^c}^2 \leq 20 C_{\tilde \eps}^{-1} L_{\star}^2 R^2\max\{ 2, T^2\} (3 \tilde \eps_1 + 3 \tilde \eps_2 +2 \tilde \eps_3+2 \tilde \eps_4+2 \tilde \eps_{5}+3 \tilde \eps_{6 })\|\delta \mathfrak{h}\|_{\Hc^c}^2.
\end{split}
\end{align}
Minimising for $\tilde \eps_1$ and $\tilde\eps_2$ fixed, we find that letting
\[
R^2 < \frac{1}{2^6\cdot 3\cdot 5^2\cdot 7\cdot L^2_\star\cdot \max\{ 2, T^2\} } ,\; c\geq \max \{ \eps_1^{-1} 7T L_{\rm u}^2, \eps_2^{-1} 7T , \tilde \eps_1^{-1} 3T L_{\rm u}^2,\; 3T\tilde \eps_2^{-1}, 2 L_{\rm u} \}
\]
we have that
\[
\|\delta \mathfrak{H}\|_{\Hc^c}^2 < \ \frac{20}{2^4\cdot 3\cdot 7\cdot 10^2}3(\sqrt{30+(\tilde\eps_1+\tilde\eps_2)}+\sqrt{30})^2 \|\delta \mathfrak{h}\|_{\Hc^c}^2 = \frac{ (\sqrt{30+(\tilde\eps_1+\tilde\eps_2)}+\sqrt{30})^2}{2^3\cdot 7\cdot 10} \|\delta \mathfrak{h}\|_{\Hc^c}^2.
\]
Thus, choosing $(\sqrt{30+(\tilde\eps_1+\tilde\eps_2)}+\sqrt{30})^2 \leq 2^3\cdot 7\cdot 10$, $\Tf$ is contractive.
Step 3: We consolidate our results. In light of
(<ref>) and (<ref>), taking $\eps_i=\tilde\eps_i, i\in \{1,2\}$, $c$ must satisfy
\begin{align}\label{eq:cfinalq}
c\geq \max \{ \eps_1^{-1} 7T L_{\rm u}^2, \eps_2^{-1} 7T , \tilde \eps_1^{-1} 3T L_{\rm u}^2,\; 3T\tilde \eps_2^{-1},\; 2 L_{\rm u} \}= \max \{ \eps_1^{-1} 7T L_{\rm u}^2, \eps_2^{-1} 7T ,\; 2 L_{\rm u} \}
\end{align}
Altogether we find that, given $\gamma\in(0,\infty)$, $\eps_i\in(0,\infty)$, $i\in\{1,2\}$, and $c\in (0,\infty)$ such that $ \eps_1+\eps_2 \leq (4\sqrt{35}-\sqrt{30})^2-30$, $\Tf$ is a well-defined contraction on $\Bc_{ R}\subseteq \Hc^c$ for the norm $\| \cdot \|_{\Hc^c}$, provided: $(i)$ $\gamma$, $\eps_i$, $i\in \{1,2\}$, and the data of the problem satisfy (<ref>); $(ii)$ $c$ satisfies (<ref>).
§ PROOFS OF SECTION <REF>
First note that for $Z\in \H^2_{{\rm BMO}}(\R^{n\times \tilde d})$, $Z\bullet X$ is a continuous local martingale, thus we have that
\[ \| Z\bullet X\|_{{\rm BMO}^{2,c}}=\sup_{\tau\in\Tc_{0,T}} \Big \| \E\big[ \big \langle\e^{\frac{c}2} Z\bullet X\big\rangle_T- \big \langle \e^{\frac{c}2} Z\bullet X\big \rangle_\tau \big|\Fc_\tau\big]\Big\|_\infty<\infty.\]
Therefore, letting $A:=\langle \e^{\frac{c}2 } Z\bullet X\rangle$ and $N_t:=\E[ A_T- A_t |\Fc_t]$, we have: $(i)$ $|N_t|\leq \| Z\bullet X\|_{{\rm BMO}^{2,c}}=\| Z\|_{\H^{2,c}_{{\rm BMO}}}^2$;
$(ii)$ $A$ is nondecreasing with $A_0=0$. Indeed, note $N_t=\E\big[ A_T \big|\Fc_t\big]- A_t$. The result then follows immediately from the energy inequality, i.e.
\[ \E\bigg[ \bigg(\int_0^T\e^{cr }|\sigma^\t_r Z_r|^2 \d r\bigg)^p\bigg]=\E[ A_T^p] \leq p !\| Z\|_{\H^{2,c}_{{\rm BMO}}}^{2p}.\]
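Here the energy inequality is used in the following form: if $A$ is a nondecreasing process with $A_0=0$ and $\E[A_T-A_\tau|\Fc_\tau]\leq c$ for every stopping time $\tau\in\Tc_{0,T}$, then for every integer $p\geq 1$
\[ \E[A_T^p]\leq p!\, c^p,\]
which follows by induction on $p$.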
To obtain the second part of the statement, recall that by definition of $\Ho(\R^{n\times \tilde d})$, $s\longmapsto \partial Z^s$ is the density of $s\longmapsto Z^s$ with respect to the Lebesgue measure and $\Zc$ is given as in <Ref>. By definition of $\Zc$, Fubini's theorem and Young's inequality we have that for $\eps>0$
\begin{align*}
\int_t^T \e^{cu} | \sigma^\t_uZ_u^u|^2-\e^{cu} |\sigma^\t_u Z_u^t|^2 \d u& = \int_t^T \int_r^T 2 \e^{cu} \Tr\Big [ {Z_u^r}^\t{\sigma_u} {\sigma^\t_u} \partial Z_u^r \Big ] \d u \d r\\
& \leq \int_t^T\int_r^T \eps \e^{cu} |\sigma_u^\t Z^r_u|^2+ \eps^{-1}\e^{cu} |\sigma_u^\t \partial Z^r_u |^2 \d u \d r .
\end{align*}
This proves the first statement. For the second claim, we may use (<ref>) and (<ref>) to obtain
\begin{align*}
\E_t\bigg[ \bigg( \int_t^T \e^{cu} |\sigma^\t_u \Zc_u|^2 \d u\bigg)^2\bigg] & \leq 3\bigg(\E_t\bigg[ \bigg( \int_t^T\e^{cu} | \sigma^\t_uZ_u^t|^2 \d u\bigg)^2\bigg]\\
&\quad + T \int_t^T\E_t\bigg[ \bigg(\int_t^T \e^{cu} |\sigma_u^\t Z^r_u|^2\d u \bigg)^2\bigg] \d r + T \int_t^T \E_t\bigg[ \bigg( \int_t^T \e^{cu} |\sigma_u^\t \partial Z^r_u |^2 \d u \bigg)^2\bigg]\d r\bigg)\\
& \leq 6 \big( (1+T^2)\|Z\|_{\H^{2,2,c}_{\rm BMO}}^4+T^2 \|\partial Z\|_{\H^{2,2,c}_{\rm BMO}}^4\big).
\end{align*}
The inequality for the $\H^2$ norm is argued similarly, taking expectations.
§ PROOFS OF SECTION <REF>
The next lemma helps derive appropriate auxiliary estimates for the terms $U_t^t$ and $\partial U_t^t$ as in <Ref>.
Let $\partial U$ satisfy the equation
\[
\partial U_t^s= \partial_s \eta (s,X_{\cdot\wedge T})+\int_t^T \nabla g_r(s,X,\partial U_r^s,\partial v_r^s,U_r^s,v_r^s, \Yc_r, z_r) \d r-\int_t^T \partial {V_r^s}^\t \d X_r-\int_t^T \d \partial M^s_r,
\]
and let $c\geq \max\{ 2L_u, 2L_{\rm u}\}$. Then the following estimates hold for $t\in [0,T]$:
\begin{align*}
\E_t\bigg[ \int_t^T \frac{ \e^{cr}}{7 T} |\partial U_r^r|^2\d r\bigg]& \leq \|\partial_s \eta \|_{\Lc^{\infty,2,c}}^2+ \|\nabla \tilde g\|^2_{\L^{1,\infty,2,c}} + T L_y^2 \E_t\bigg[ \int_t^T \e^{cr}|Y_r|^2\d r\bigg]+ T L_u^2 \sup_{s\in [0,T]} \E_t\bigg[ \int_t^T \e^{cr}|U_r^s|^2\d r\bigg]\\
& \quad + 2 L_{\star}^2 \Big( \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}} + \|z\|^4_{\H^{2,c}_{{\rm BMO}}}+\|v\|^4_{\H^{2,2,c}_{{\rm BMO}}}\Big)\\
\E_t\bigg[ \int_t^T \frac{ \e^{\frac{c}2 r}}{ T} |\partial U_r^r|\d r\bigg] & \leq \|\partial_s \eta \|_{\Lc^{\infty,2,c}}+ \|\nabla \tilde g\|_{\L^{1,\infty,2,c}} + L_y \E_t\bigg[ \int_t^T \e^{\frac{c}2 r}|Y_r|\d r\bigg]+ L_u \sup_{s\in [0,T]} \E_t\bigg[ \int_t^T \e^{cr}|U_r^s|\d r\bigg]\\
&\quad + L_{\star} \Big(\|\partial v\|^2_{\H^{2,2,c}_{{\rm BMO}}}+ \|v\|^2_{\H^{2,2,c}_{{\rm BMO}}}+ \|z\|^2_{\H^{2,c}_{{\rm BMO}}}\Big)
\end{align*}
By Meyer–Itô's formula for $\e^{\frac{c}2 t} |\partial U_t^s|$, see <cit.>
\begin{align}\label{eq:eq1}
\begin{split}
&\e^{\frac{c}2 t}|\partial U_t^s|+ L_T^0 -\int_t^T \e^{\frac{c}2 r} \sgn( \partial U_r^s)\cdot \partial {V_r^s}^\t \d X_r -\int_t^T \e^{\frac{c}2 r} \sgn( \partial U_{r-}^s)\cdot \d \partial M_r^s \\
&\; =\e^{\frac{c}2 T} |\partial_s \eta (s)| + \int_t^T \e^{\frac{c}2 r} \bigg( \sgn( \partial U_r^s) \cdot \nabla g_r(s,\partial U_r^s,\partial v_r^s,U_r^s,v_r^s,\Yc_r,z_r)-\frac{c}2 |\partial U_r^s| \bigg) \d r ,\; t\in[0,T],
\end{split}
\end{align}
where $L^0:=L^0(\partial U^s)$ denotes the non-decreasing and pathwise-continuous local time of the semi-martingale $\partial U^s$ at $0$, see <cit.>. We also notice that for any $s\in [0,T]$ the last two terms on the left-hand side are martingales; recall that $\partial V^s\in \H^2$ by <cit.>.
In light of <Ref>, letting $ \nabla g_r(s):=\nabla g_r(s,\partial U_r^s,\partial v_r^s,U_r^s,v_r^s,Y_r, z_r)$, we have that $\d t\otimes \d \P\ae$
\begin{align}\label{Eq:ineqLipUts0}
\begin{split}
|\nabla g_r(s) |\leq & L_{\rm u} |\partial U_r^s| +L_{\rm v} |\sigma^\t_r \partial v_r^s|^2+L_u |U_r^s| +L_v |\sigma^\t_r v_r^s|^2+L_y |Y_r|+L_z |\sigma^\t_r z_r|^2+ |\nabla \tilde g_r(s)|.
\end{split}
\end{align}
We now take conditional expectation with respect to $\Fc_t$ in <Ref>. We may use (<ref>) and the fact that $L^0$ is non-decreasing to derive that for $c>2 L_{\rm u}$ and $t\in[0,T]$
\begin{align}\label{Eq:ineqUst}
\e^{\frac{c}2 t}| \partial U_t^s| & \leq \E_t \bigg[ \e^{\frac{c}2 T} |\partial \eta(s)|+\int_t^T \e^{\frac{c}2 r} \big( |\nabla \tilde g_r(s)| +L_{\rm v} |\sigma^\t_r \partial v_r^s|^2+L_u |U_r^s| +L_v |\sigma^\t_r v_r^s|^2+L_y |Y_r|+L_z |\sigma^\t_r z_r|^2\big) \d r \bigg].
\end{align}
Squaring in (<ref>), we may use (<ref>) and Jensen's inequality to derive that for $t\in [0,T]$
\begin{align*}
\frac{\e^{ct}}{7} |\partial U_t^t|^2 \leq &\ \E_t\bigg[ \e^{cT} |\partial_s \eta(t)|^2+ \bigg(\int_t^T \e^{\frac{c}2 r} |\nabla \tilde g_r(t)|\d r\bigg)^2+ T L_{u}^2 \int_t^T \e^{c r} |U_r^t|^2 \d r + T L_y^2 \int_t^T \e^{c r} |Y_r|^2 \d r\\
& + L_{\rm v}^2 \bigg(\int_t^T \e^{\frac{c}2 r} |\sigma^\t_r \partial v_r^t|^2\d r \bigg)^2+ L_v^2 \bigg(\int_t^T \e^{\frac{c}2 r} |\sigma^\t_r v_r^t|^2\d r\bigg)^2+ L_z^2 \bigg(\int_t^T \e^{\frac{c}2 r} |\sigma^\t_rz_r|^2\d r\bigg)^2 \bigg].
\end{align*}
By integrating the previous expression and taking conditional expectation with respect to $\Fc_t$, it follows from the tower property that for any $t\in[0,T]$
\begin{align*}
\frac{ 1}7\E_t\bigg[\int_t^T \e^{cr}|\partial U_r^r|^2\d r\bigg]\leq &\ \E_t\bigg[ \int_t^T \e^{cT} |\partial_s \eta(r)|^2\d r\bigg]+\E_t\bigg[ \int_t^T \bigg( \int_r^T \e^{\frac{c}2 u}|\nabla \tilde g_u(r)|\d u\bigg)^2\d r\bigg] \\
& + T L_{u}^2 \E_t\bigg[ \int_t^T \int_r^T \e^{c u} |U_u^r|^2 \d u \d r\bigg] + T L_y^2 \E_t\bigg[ \int_t^T \int_r^T \e^{cu} |Y_u|^2\d u \d r\bigg] \\
& + L_{\rm v}^2 \int_t^T\E_t\bigg[ \bigg(\int_r^T \e^{\frac{c}2 u} |\sigma^\t_u \partial v_u^r|^2\d u\bigg)^2\bigg] \d r + L_v^2 \int_t^T\E_t\bigg[ \bigg(\int_r^T \e^{\frac{c}2 u} |\sigma^\t_u v_u^r|^2\d u\bigg)^2\bigg] \d r\\
& + L_z^2 \int_t^T\E_t\bigg[ \bigg(\int_r^T \e^{\frac{c}2 u} | \sigma^\t_u z_u|^2\d u\bigg)^2\bigg] \d r\\
\leq &\ T \sup_{r\in [0,T]} \bigg\{ \big\| \e^{cT} |\partial_s \eta(r)|^2\big\|_\infty+ \bigg \| \int_r^T \e^{\frac{c}2 u} |\nabla \tilde g_u(r)|\d u \bigg \|_\infty^2 \bigg\} +T^2L_y^2 \E_t\bigg[ \int_t^T \e^{cu}|Y_u|^2\d u\bigg]\\
& + T^2 L_{u}^2 \sup_{r\in [0,T]} \E_t\bigg[ \int_t^T \e^{c u} |U_u^r|^2 \d u\bigg] + T L_{\rm v}^2 \sup_{r\in [0,T]} \E_t\bigg[ \bigg(\int_t^T \e^{\frac{c}2 u} |\sigma^\t_u \partial v_u^r|^2\d u\bigg)^2\bigg] \\
&+ T L_v^2 \sup_{r\in [0,T]} \E_t\bigg[ \bigg(\int_t^T \e^{\frac{c}2 u} |\sigma^\t_u v_u^r|^2\d u\bigg)^2\bigg] + TL_z^2 \E_t\bigg[ \bigg(\int_t^T \e^{\frac{c}2 u} | \sigma^\t_u z_u|^2\d u\bigg)^2\bigg],
\end{align*}
and by (<ref>) we obtain for $c>2L_u$, and any $t\in[0,T]$
\begin{align*}
\E_t\bigg[ \int_t^T \frac{ \e^{cr}}{7 T} |\partial U_r^r|^2\d r\bigg]& \leq \|\partial_s \eta \|_{\Lc^{\infty,2,c}}^2+ \|\nabla \tilde g\|^2_{\L^{1,\infty,2,c}} + T L_y^2 \E_t\bigg[ \int_t^T \e^{cr}|Y_r|^2\d r\bigg]+ T L_{u}^2 \sup_{r\in [0,T]} \E_t\bigg[ \int_t^T \e^{c u} |U_u^r|^2 \d u\bigg] \\
&\; + 2 L_\star^2 \Big( \|\partial v\|^4_{\H^{2,2,c}_{{\rm BMO}}} + \|z\|^4_{\H^{2,c}_{{\rm BMO}}}+\|v\|^4_{\H^{2,2,c}_{{\rm BMO}}}\Big) .
\end{align*}
Evaluating at $s=t$ in (<ref>) and integrating with respect to $t$ we derive the second estimate.
$({\rm OPT1})=1/(2^{4}\cdot 5)$, where
\begin{align*}
\sup\; &\frac{ \min\big \{ \alpha(\eps_3,\eps_{12},\eps_{13}),\; \alpha(\eps_4,\eps_{14},\eps_{15}) ,\; \alpha(\eps_{5},\eps_{16},\eps_{17}) ,\; \alpha(\eps_{6},\eps_{18},\eps_{19},\eps_{20}) \big \} -\gamma}{ \eps_1+\eps_2+\sum_{i=12}^{20} \eps_i }\\
{\rm s.t.}\; & \alpha(\eps_8,\eps_{12},\eps_{13})=1-10(\eps_{8}^{-1}+\eps_{12}^{-1}+\eps_{13}^{-1}) \in (0,1], \;
\alpha(\eps_9,\eps_{14},\eps_{15})=1-10(\eps_{9}^{-1}+\eps_{14}^{-1}+\eps_{15}^{-1}) \in (0,1], \\
& \alpha(\eps_{10},\eps_{16},\eps_{17}) =1-10(\eps_{10}^{-1}+\eps_{16}^{-1}+\eps_{17}^{-1}) \in (0,1], \; \alpha(\eps_{11},\eps_{18},\eps_{19},\eps_{20})=1-10(\eps_{11}^{-1}+\eps_{18}^{-1}+\eps_{19}^{-1}+\eps_{20}^{-1}) \in (0,1], \\
& \gamma\in (0,\infty);\; \eps_i\in (0,\infty), \forall i .
\end{align*}
We begin by noticing that as a function of $(\gamma,\eps_1,\eps_2,\eps_3,\eps_4,\eps_{5},\eps_{6})$ the objective is bounded by the value when $(\gamma,\eps_1,\eps_2,\eps_3,\eps_4,\eps_{5},\eps_{6})\longrightarrow (0,0,0,\infty,\infty,\infty,\infty)$. Thus, we will maximise
\[\frac{\min\{ 1-10( \eps_{12}^{-1}+\eps_{13}^{-1}) ,\;1-10( \eps_{14}^{-1}+\eps_{15}^{-1}) ,\;1-10( \eps_{16}^{-1}+\eps_{17}^{-1}) ,\;1-10(\eps_{18}^{-1}+\eps_{19}^{-1}+\eps_{20}^{-1}) \} }{\sum_{i=12}^{20} \eps_i } . \]
From this we observe that the optimal value is positive. Indeed, there is a feasible solution with positive value, and the $\min$ in the objective function does not involve common $\eps_i$ terms, so the minimum is attained at one of the terms. Since the value function is symmetric in the variables inside each term of the $\min$, we may assume without loss of generality
\[
\eps_{12}=\eps_{13}=2\alpha_1,\;\eps_{14}=\eps_{15}=2\alpha_2,\; \eps_{16}=\eps_{17}=2\alpha_3, \; \eps_{18}=\eps_{19}=\eps_{20}=3\alpha_4,\; (\alpha_1,\alpha_2,\alpha_3,\alpha_4) \in (0,\infty)^4.
\]
So we can write the objective function as $\min\{ 1-10\alpha_1^{-1} ,\;1-10\alpha_2^{-1} ,\;1-10\alpha_3^{-1} ,\;1-10\alpha_4^{-1} \} /( 4\alpha_1+4\alpha_2+4\alpha_3+9\alpha_4)$. Now, without loss of generality, the $\min$ is attained by the first quantity. That is,
the optimisation problem becomes
\begin{align*}
\sup\;\frac{ 1-10\alpha_1^{-1} }{ 4\alpha_1+4\alpha_2+4\alpha_3+9\alpha_4 }\; {\rm s.t.}\; \alpha_1\leq \min\{\alpha_2,\alpha_3,\alpha_4\}, 1-10\alpha_i^{-1}\in (0,1], \alpha_i\in (0,\infty), i\in \{1,2,3,4\}.
\end{align*}
Now, as the objective function is decreasing in $\alpha_2,\alpha_3,\alpha_4$, and $\alpha_1\leq \min\{\alpha_2,\alpha_3,\alpha_4\}$, we see $\alpha_1=\alpha_2=\alpha_3=\alpha_4$. Thus
\begin{align*}
\sup\;\frac{ 1-10\alpha_1^{-1} }{ 21 \alpha_1 }\; {\rm s.t.}\; 1-10\alpha_1^{-1}\in (0,1], \alpha_1\in (0,\infty).
\end{align*}
Let $f(\alpha_1):= \frac{ \alpha_1 -10}{21 \alpha_1^2}$. By first-order analysis,
\[ \partial_{\alpha_1}f(\alpha_1) = \frac{20-\alpha_1}{21 \alpha_1^3 }=0 \text{ yields } \alpha_1= 20.
\]
By inspecting the sign of the derivative, one sees that $f$ increases on $(10,20)$ and decreases on $(20,\infty)$, so $\alpha_1=20$ is the maximiser, and it is feasible. Thus we obtain that
\begin{align*}
f\big(\alpha_1^\star\big)=\frac{1}{2^3\cdot 3\cdot 5\cdot 7},
\end{align*}
We conclude that the maximum is approached when $(\eps_{12},\eps_{13},\eps_{14},\eps_{15},\eps_{16},\eps_{17},\eps_{18},\eps_{19},\eps_{20})=(40,40,40,40,40,40,60,60,60)$. Evaluating the value function at these values and letting $(\gamma,\eps_1,\eps_3,\eps_8,\eps_9,\eps_{10},\eps_{11})\longrightarrow (0,0,0,\infty,\infty,\infty,\infty)$, we obtain this bound. That is, $f$ does not attain its supremum, but within the feasible region it can be approached arbitrarily closely.
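A quick numerical spot-check of this maximiser (ours, not part of the proof):

```python
import numpy as np

# Spot-check: maximise f(alpha) = (alpha - 10)/(21 alpha^2) over alpha > 10.
alpha = np.linspace(10.001, 200.0, 2_000_000)
f = (alpha - 10) / (21 * alpha**2)
i = f.argmax()
print(alpha[i])                       # ~ 20
print(f[i], 1 / (2**3 * 3 * 5 * 7))   # both ~ 1/840
```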
Case 1: $1-10\eps_{12}^{-1} <\min\{ 1-10\eps_{13}^{-1} ,\;1-10\eps_{14}^{-1} ,\;1-10\eps_{15}^{-1} \}$. We refer to the previous inequality as $($C1$)$. Then the $\min$ is attained by the first expression and we optimise on $\eps_{12}$.
We then write the objective as
\[f(\eps_{12}):= \frac{ \eps_{12} -10}{\eps_{12}( \eps_{12}+ \eps_{13}+\eps_{14}+\eps_{15} )}.\]
By first order analysis
\[ \partial_{\eps_{12}}f(\eps_{12})= \frac{- \eps_{12}^2 +20\eps_{12} +10(\eps_{13}+\eps_{14}+\eps_{15})}{\eps_{12}^2(\eps_{12}+\eps_{13}+\eps_{14}+\eps_{15} )^2}=0 \text{ yields } \eps_{12}^{\pm}= 10 \pm \sqrt{100+10 (\eps_{13}+\eps_{14}+\eps_{15})}.
\]
Therefore, as the optimal value is positive we must have $1 -10\eps_{12}^{-1} >0$, which implies that the maximum is attained at
\[\eps_{12}^{\star}(\eps_{13},\eps_{14},\eps_{15}):=10 + \sqrt{100+10 (\eps_{13}+\eps_{14}+\eps_{15})},\]
and by inspecting the sign of $\partial_{\eps_{12}} f$ we verify $\eps_{12}^\star(\eps_{13},\eps_{14},\eps_{15})$ is indeed a maximum. Thus we obtain that
\begin{align}\label{Eq:c1:1}
f\big(\eps_{12}^\star(\eps_{13},\eps_{14},\eps_{15})\big)=\frac{1}{(\sqrt{10}+\sqrt{10+ \eps_{13}+\eps_{14}+\eps_{15} })^2},
\end{align}
and note
$f\big(\eps_{12}^\star(\eps_{13},\eps_{14},\eps_{15})\big)$ is maximised whenever $(\eps_{13},\eps_{14},\eps_{15})\longrightarrow (0,0,0)$. However, this choice does not satisfy $($C1$)$, so we now enforce it. We note that (<ref>) is symmetric in $\eps_{13},\eps_{14},\eps_{15}$; therefore, we set $\eps_{13}=\eps_{14}=\eps_{15}=\eps$. It then follows from $($C1$)$ that
\[ \eps_{12}^\star(\eps,\eps,\eps)< \eps \Longleftrightarrow \eps>50. \]
so the constrained value is bounded by
\[f(\eps_{12}^\star(50,50,50))= \frac{1}{2\cdot 5^3}. \]
Case 2: $1 -5\eps_7^{-1}>1 -5\eps_5^{-1}-5\eps_6^{-1}$; we refer to this inequality as $($C2$)$. In this case we note the objective is completely symmetric in $\eps_5$ and $\eps_6$, and so are its partial derivatives. Thus we assume, without loss of generality, $\eps_5=\eps_6=2 \lambda$, and optimise for $\lambda$. The analysis is completely analogous to the previous case.
We write the objectives as
\[ f(\lambda):= \frac{ \lambda -5}{ \lambda (4\lambda+\eps_7 ) }, \]
and first-order analysis yields the candidates $\lambda^\pm(\eps_7)= 5 \pm\frac12 \sqrt{10^2+5\eps_7}$. Now, $1-5\lambda^{-1}>0$ yields $\lambda^\star(\eps_7)=\lambda^+(\eps_7)$ and
\[ f(\lambda^\star(\eps_7)):= \frac{ 1}{(10+\sqrt{10^2+ 5\eps_7 })^2},\]
thus the supremum $1/(2^4\cdot 5^2)$ is approached as $\eps_7 \longrightarrow 0$. However, this choice does not satisfy $($C2$)$. Enforcing it, we find $\eps_7(4 \eps_7-45)>0$, i.e. $\eps_7>45/4$, and thus $f(\lambda^\star(45/4))=2^2/(3^4\cdot 5^2)$. The maximum remains the one from Case 1.
$({\rm OPT2})=3(\sqrt{30+(\tilde\eps_1+\tilde\eps_2)}+\sqrt{30})^2$, where
\begin{align*}
({\rm OPT2}):=&\ \inf \bigg\{ \big(3 \tilde \eps_1 + 3 \tilde \eps_2 +2\tilde \eps_3+2\tilde \eps_4 +2\tilde \eps_{5}+3\tilde \eps_{6}\big) \min \bigg\{ \frac{\tilde \eps_3}{ \tilde \eps_3 -10} ,\; \frac{\tilde \eps_4}{ \tilde \eps_4 -10},\; \frac{\tilde \eps_{5}}{ \tilde \eps_{5} -10},\; \frac{\tilde \eps_{6}}{ \tilde \eps_{6} -10} \bigg\}\bigg\} \\
{\rm s.t.}\; & 1-10\tilde\eps_i^{-1} \in (0,1] ,\; \tilde \eps_i \in (0,\infty),\; i\in\{3,4,5,6\}.
\end{align*}
Without loss of generality, let us assume the $\min$ is attained by the first quantity, i.e., the optimisation problem becomes
\[ \inf \big(3(\tilde\eps_1+\tilde\eps_2)+ 2\tilde \eps_3+2\tilde \eps_4 +2\tilde \eps_{5}+3\tilde \eps_{6}\big) \frac{\tilde \eps_3}{ \tilde \eps_3-10} , \; \text{ s.t. } \tilde \eps_3\leq \min\{\tilde\eps_4,\tilde\eps_{5},\tilde\eps_{6}\},\; 1-10\tilde\eps_3^{-1} \in (0,1],\; \tilde \eps_i \in (0,\infty)\ \forall i.\]
As the value function is increasing in $(\tilde\eps_4,\tilde\eps_{5},\tilde\eps_{6})$, the constraint $\tilde \eps_3\leq \min\{\tilde\eps_4,\tilde\eps_{5},\tilde\eps_{6}\}$ implies we must have $\tilde \eps_3=\tilde\eps_4=\tilde\eps_{5}=\tilde\eps_{6}$, and thus we minimise
\[ f(\tilde \eps):=3\frac{3 \tilde \eps^2 +\tilde\eps(\tilde \eps_1+\tilde \eps_2)}{\tilde \eps-10}.\]
First-order analysis renders
\[ \partial_{\tilde\eps}f(\tilde\eps) = \frac{9\tilde\eps^2-180\tilde\eps-30(\tilde\eps_1+\tilde\eps_2)}{(\tilde\eps-10)^2}=0, \text{ which yields } \tilde\eps^\pm= 10\pm \frac{1}6\sqrt{60^2+120(\tilde\eps_1+\tilde\eps_2)}.
\]
The minimum occurs at $\tilde\eps^\star=10+ \frac{1}6\sqrt{60^2+120(\tilde\eps_1+\tilde\eps_2)}$, and $f(\tilde\eps^\star)=3(\sqrt{30+(\tilde\eps_1+\tilde\eps_2)}+\sqrt{30})^2$. We conclude that the minimum is approached when $(\tilde\eps_3,\tilde\eps_4,\tilde\eps_{5},\tilde\eps_{6})=(20,20,20,20)$. Evaluating the value function at these values and letting $(\tilde \eps_1,\tilde \eps_2)\longrightarrow (0,0)$, we obtain this bound. That is, $f$ does not attain its infimum, but within the feasible region it can be approached arbitrarily closely.
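A numerical spot-check of this minimiser for a sample value of $\tilde\eps_1+\tilde\eps_2$ (ours, not part of the proof):

```python
import numpy as np

# Spot-check of (OPT2) with b := tilde_eps1 + tilde_eps2 fixed at 7.
b = 7.0
eps = np.linspace(10.001, 200.0, 2_000_000)
f = 3 * (3 * eps**2 + b * eps) / (eps - 10)
i = f.argmin()
print(eps[i], 10 + np.sqrt(60**2 + 120 * b) / 6)     # matching minimisers
print(f[i], 3 * (np.sqrt(30 + b) + np.sqrt(30))**2)  # matching values
```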
Let $f(\tilde \eps_1):= \tilde \eps_1^{-1} 2T L_u^2 +\tilde \eps_1 T L_y^2$ and consider the problem
\begin{align*}
({\rm OPT3})&:= \inf\big\{ f(\tilde \eps_1)\big\}\\
{\rm s.t.}\; & (\sqrt{15} + \sqrt{15 + 2\tilde \eps_1})^2 \leq 2^2 5^2, \; \tilde \eps_1 \in (0,\infty).
\end{align*}
Let $2\alpha:= (10-\sqrt{15})^2-15$ and $\tilde \eps_1^\star:={\rm min} \big \{ \sqrt{2}L_u L_y^{-1}, \alpha\big\}$. Then $({\rm OPT3})=f(\tilde \eps_1^\star)$.
We note that the constraint can be rewritten as $\tilde \eps_1 \leq \alpha$, and therefore
\begin{align*}
({\rm OPT3})= \inf\{ f(\tilde \eps_1)\},\; {\rm s.t.}\; \tilde \eps_1 \in (0,\alpha].
\end{align*}
By first order analysis we find
\[ \partial_{\tilde \eps_1} f(\tilde \eps_1)=-2TL_u^2 \tilde \eps_1^{-2}+TL_y^2=0, \text{ which yields } \tilde \eps_1=\sqrt{2}L_u L_y^{-1} .\]
Since $f$ decreases to the left of this point and increases to the right, we conclude $\tilde \eps_1^\star=\min\big \{ \sqrt{2}L_u L_y^{-1}, \alpha\big\}.$
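A numerical spot-check for sample constants (ours; the values of $T$, $L_u$, $L_y$ below are arbitrary):

```python
import numpy as np

# Spot-check of (OPT3): the minimiser is min{sqrt(2) L_u / L_y, alpha}.
T, Lu, Ly = 1.0, 2.0, 3.0
alpha = ((10 - np.sqrt(15))**2 - 15) / 2
eps = np.linspace(1e-4, alpha, 1_000_000)
f = 2 * T * Lu**2 / eps + eps * T * Ly**2
print(eps[f.argmin()], min(np.sqrt(2) * Lu / Ly, alpha))  # matching minimisers
```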
Consider the problem
\begin{align*}
({\rm OPT4})&:= \inf\big\{ \max\{a+\eps,b+c\eps^{-1}\}\big\}\\
{\rm s.t.}\; & \eps \in (0,\infty).
\end{align*}
$({\rm OPT4})=\min\{ b+c(\eps^{+})^{-1}, a+\eps^{-}\}$, where $\eps^-\leq 0\leq \eps^+$ denote the real-valued roots of $f(\eps):= \eps^2+(a-b)\eps-c$.
Case 1: $a+\eps\leq b+c \eps^{-1}$. In this case, we must have $f(\eps)\leq 0$; together with $\eps>0$ this yields $\eps\in (0,\eps^+]$. Back in the objective, we obtain
\[ \inf \ \{b+c\eps^{-1}\},\ {\rm s.t. }\ \eps\in (0,\eps^+]. \]
Thus $\eps^\star=\eps^+$, and the value is $b+c(\eps^{+})^{-1}$.
Case 2: $a+\eps>b+c \eps^{-1}$. Arguing similarly, we obtain $\eps^\star=\eps^-$, and the value $a+\eps^{-}$.
We conclude $({\rm OPT4})=\min\{ b+c(\eps^{+})^{-1}, a+\eps^{-}\}$.
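A brute-force check for sample values with $c>0$ (ours); note that at $\eps^{+}$ the two branches cross, so $a+\eps^{+}=b+c(\eps^{+})^{-1}$:

```python
import numpy as np

# Brute-force check of (OPT4) for sample a, b, c with c > 0.
a, b, c = 1.0, 2.0, 3.0
eps_plus = ((b - a) + np.sqrt((a - b)**2 + 4 * c)) / 2   # positive root of f
eps = np.linspace(1e-4, 50.0, 5_000_000)
brute = np.maximum(a + eps, b + c / eps).min()
print(brute, b + c / eps_plus, a + eps_plus)             # all three agree
```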
# Existence of global-in-time solutions to a system
of fully nonlinear parabolic equations
Takahiro Kosugi (Tottori University of Environmental Studies, Tottori,
Japan) and Ryuichi Sato (Fukuoka University, Fukuoka, Japan)
###### Abstract
We consider the Cauchy problem for a system of fully nonlinear parabolic
equations. In this paper, we shall show the existence of global-in-time
solutions to the problem. Our condition to ensure the global existence is
specific to the fully nonlinear parabolic system.
Keywords: viscosity solutions, fully nonlinear parabolic systems, global-in-
time solutions, comparison principle
MSC: 35A01, 35D40, 35K45, 35K55
## 1 Introduction
Let us consider the Cauchy problem for a weakly coupled system of nonlinear
parabolic equations
$\left\\{\begin{aligned}
\partial_{t}u_{1}+F_{1}(x,D^{2}u_{1})=|u_{2}|^{p-1}u_{2},\quad
x\in\bm{R}^{N},\ t>0,\\\
\partial_{t}u_{2}+F_{2}(x,D^{2}u_{2})=|u_{1}|^{q-1}u_{1},\quad
x\in\bm{R}^{N},\ t>0,\end{aligned}\right.$ (1.1)
with initial condition
$u_{i}(x,0)=u_{i0}(x),\quad x\in\bm{R}^{N}\mbox{ for }i=1,2,$ (1.2)
where $N\geq 1$, $p,q>0$, $F_{1},F_{2}\in C(\bm{R}^{N}\times S^{N})$ are
uniformly elliptic and homogeneous of order one, and $u_{10}$, $u_{20}\in
BUC(\bm{R}^{N})$ are nonnegative. Here $\partial_{t}u_{i}$ denotes the
derivative $\partial u_{i}/\partial t$ and $D^{2}u_{i}$ denotes the Hessian
matrix of $u_{i}$ in the variable $x$. Throughout this paper, we let $S^{N}$
denote the $N\times N$ real symmetric matrices and let $BUC(\bm{R}^{N})$
denote the set of bounded uniformly continuous functions on $\bm{R}^{N}$.
In [4], Escobedo and Herrero considered the Cauchy problem for a system of
semilinear parabolic equations
$\partial_{t}u_{1}-\bigtriangleup
u_{1}=u_{2}^{p},\quad\partial_{t}u_{2}-\bigtriangleup u_{2}=u_{1}^{q},\quad
x\in\bm{R}^{N},t>0$ (1.3)
with (1.2), where $N\geq 1$, $p,q>0$, and $\bigtriangleup$ denotes the Laplace
operator, that is,
$\bigtriangleup:=\sum_{j=1}^{N}\frac{\partial^{2}}{\partial x_{j}^{2}}.$
The system (1.3) agrees with the case $F_{1}(x,X)=F_{2}(x,X)=-\mathrm{tr}(X)$
for $x\in\bm{R}^{N}$, $X\in S^{N}$. Escobedo and Herrero proved that if $pq>1$
and
$\frac{\max\\{p,q\\}+1}{pq-1}\geq\frac{N}{2},$
then every nontrivial nonnegative solution to (1.3) blows up in a finite time.
On the other hand, if $pq>1$ and
$\frac{\max\\{p,q\\}+1}{pq-1}<\frac{N}{2},$ (1.4)
then there exists a global-in-time solution to (1.3) for some $u_{10}$,
$u_{20}$. These results show that the existence of nonnegative global-in-time
solutions to (1.3) is clarified by the curve
$\frac{\max\\{p,q\\}+1}{pq-1}=\frac{N}{2}.$ (1.5)
This Fujita type result for (1.3) was extended in [6, 18] to systems with
linear but unequal principal parts. In [18], $-\bigtriangleup
u_{1},-\bigtriangleup u_{2}$ are replaced by the linear operators of the form
$\displaystyle L_{1}u_{1}=-\sum_{j,k=1}^{N}\frac{\partial}{\partial
x_{j}}\left(a^{jk}\frac{\partial u_{1}}{\partial x_{k}}\right),\quad
L_{2}u_{2}=-\sum_{j,k=1}^{N}\frac{\partial}{\partial
x_{j}}\left(b^{jk}\frac{\partial u_{2}}{\partial x_{k}}\right),$
where the coefficients $a^{jk}$, $b^{jk}$ are sufficiently smooth, uniformly
elliptic and symmetric. In particular, the system with constant diffusion
coefficients $L_{1}u_{1}=-d_{1}\bigtriangleup u_{1}$, $L_{2}u_{2}=-d_{2}\bigtriangleup
u_{2}$, $d_{1},d_{2}>0$, is considered in [6] (see also [5] for another
context). The Fujita exponent for the system
$\partial_{t}u_{1}+L_{1}u_{1}=u_{2}^{p},\quad\partial_{t}u_{2}+L_{2}u_{2}=u_{1}^{q},\quad
x\in\bm{R}^{N},\ t>0$
is also given by (1.5). Namely, the Fujita exponent is given by (1.5) if the
principal parts are linear.
Let us introduce results for a single equation. Setting $u_{1}=u_{2}=u$,
$F_{1}=F_{2}=F$ and $p=q>1$, then (1.1) becomes a single nonlinear parabolic
equation
$\partial_{t}u+F(x,D^{2}u)=u^{p},\quad x\in\bm{R}^{N},\ t>0.$ (1.6)
Typical examples of $F$ are given below. When $F(D^{2}u)=-\bigtriangleup u$,
(1.6) is the Fujita equation. In [7], Fujita considered the Cauchy problem for
(1.6) with $F(x,D^{2}u)=-\bigtriangleup u$. He proved that the critical
exponent for the existence of nonnegative global-in-time solutions is given by
$\frac{1}{p-1}=\frac{N}{2}.$
More precisely, if $1<p<p_{F}:=1+2/N$, then all positive solutions blow up in
a finite time, while if $p>p_{F}$, then there exists a positive global-in-time
solution of (1.6). (Readers are referred to [3] for a survey of blow-up
problems.) When $F$ is fully nonlinear, the critical exponent for the
existence of global-in-time solutions to (1.6) was obtained in [16, 17]. We
employ viscosity solutions to treat fully nonlinear equations. To give
precise examples and to state the existence of viscosity solutions, we impose
the following assumptions on the $F_{i}$'s. The definition of viscosity solutions is
given in the next section.
For $i=1,2$, we assume that $F_{i}:\bm{R}^{N}\times S^{N}\to\bm{R}$ satisfies
the following properties.
1. (i)
$F_{i}$ is continuous in $\bm{R}^{N}\times S^{N}$, that is,
$\displaystyle F_{i}\in C(\bm{R}^{N}\times S^{N}).$ (1.7)
2. (ii)
There exist constants $0<\lambda_{i}\leq\Lambda_{i}$ such that
$\displaystyle\mathcal{P}_{i}^{-}(X-Y)\leq
F_{i}(x,X)-F_{i}(x,Y)\leq\mathcal{P}_{i}^{+}(X-Y)$ (1.8)
for $(x,X,Y)\in\bm{R}^{N}\times S^{N}\times S^{N}$, where
$\mathcal{P}^{\pm}_{i}$ are the Pucci extremal operators defined by
$\displaystyle\mathcal{P}^{+}_{i}(X)$
$\displaystyle=\mathcal{P}^{+}_{\lambda_{i},\Lambda_{i}}(X):=\max\\{\mathrm{tr}[-AX]\
|\lambda_{i}I\leq A\leq\Lambda_{i}I,\ A\in S^{N}\\},$
$\displaystyle\mathcal{P}^{-}_{i}(X)$
$\displaystyle=\mathcal{P}^{-}_{\lambda_{i},\Lambda_{i}}(X):=\min\\{\mathrm{tr}[-AX]\
|\lambda_{i}I\leq A\leq\Lambda_{i}I,\ A\in S^{N}\\},$
for $X\in S^{N}$.
3. (iii)
$F_{i}$ is Lipschitz continuous in $x$. Namely, there exists $L>0$ such that
$|F_{i}(x,X)-F_{i}(y,X)|\leq L(\|X\|+1)|x-y|$ (1.9)
for all $X\in S^{N}$ and $x,y\in\bm{R}^{N}$. Here $\|X\|$ stands for the
operator norm of $X$.
4. (iv)
$F_{i}$ is homogeneous of order one. Namely,
$F_{i}(x,\mu X)=\mu F_{i}(x,X)$ (1.10)
for $\mu\geq 0$, $x\in\bm{R}^{N}$, $X\in S^{N}$.
We shall give two examples of $F:\bm{R}^{N}\times S^{N}\to\bm{R}$ satisfying
the above conditions; a small numerical sketch follows the examples.
* •
Let $0<\gamma<1$. The operator
$F(D^{2}u)=\max\left\\{-\frac{\bigtriangleup
u}{1-\gamma},-\frac{\bigtriangleup u}{1+\gamma}\right\\}$
is nonlinear and convex. The equation $\partial_{t}u+F(D^{2}u)=0$ is called
the Barenblatt equation of elasto-plastic filtration. See [9] and [11].
* •
Let $N=2$. Then
$F(D^{2}u)=\min\left\\{\max\\{-\bigtriangleup u,-2\bigtriangleup
u\\},-u_{x_{1}x_{1}}-2u_{x_{2}x_{2}}\right\\}$
is a nonlinear and nonconvex operator.
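Although not needed in the proofs, the Pucci operators and the ellipticity condition (1.8) are easy to evaluate numerically through the eigenvalues of $X$; a minimal sketch (ours; the helper names are not from the paper):

```python
import numpy as np

# Pucci extremal operators via the eigenvalues e_k of X: the optimal A is
# diagonal in the eigenbasis of X, taking the value lam on one sign of
# eigenvalues and Lam on the other.
def pucci_plus(X, lam, Lam):
    e = np.linalg.eigvalsh(X)
    return -lam * e[e > 0].sum() - Lam * e[e < 0].sum()

def pucci_minus(X, lam, Lam):
    e = np.linalg.eigvalsh(X)
    return -Lam * e[e > 0].sum() - lam * e[e < 0].sum()

# Check (1.8) for the first example with gamma = 1/2 on random matrices;
# its ellipticity constants are lam = 1/(1+gamma), Lam = 1/(1-gamma).
rng = np.random.default_rng(0)
gam = 0.5
lam, Lam = 1 / (1 + gam), 1 / (1 - gam)
F = lambda X: max(-np.trace(X) / (1 - gam), -np.trace(X) / (1 + gam))
for _ in range(1000):
    A = rng.standard_normal((3, 3)); X = A + A.T
    B = rng.standard_normal((3, 3)); Y = B + B.T
    d = F(X) - F(Y)
    assert pucci_minus(X - Y, lam, Lam) - 1e-9 <= d <= pucci_plus(X - Y, lam, Lam) + 1e-9
```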
We now state the comparison principle. The proof is given in Section 5.
###### Theorem 1.1 (Comparison principle).
Assume that $p,q\geq 1$ and let $T>0$. Let $(u_{1},u_{2})\in USC\cap
L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$ be a viscosity subsolution and
$(v_{1},v_{2})\in LSC\cap L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$ be a
viscosity supersolution of (1.1), respectively. If
$\displaystyle u_{i}(\cdot,0)\leq v_{i}(\cdot,0)\quad\mbox{in
}\bm{R}^{N}\mbox{ for }i=1,2,$
then
$\displaystyle u_{i}\leq v_{i}\quad\mbox{in }\bm{R}^{N}\times(0,T)\mbox{ for
}i=1,2.$
Existence of viscosity solutions to (1.1) and (1.2) is guaranteed by the
following:
###### Theorem 1.2.
Assume that $p,q\geq 1$ and $pq>1$. Let $u_{10},u_{20}\in BUC(\bm{R}^{N})$.
There exist $T>0$ and a unique viscosity solution $(u_{1},u_{2})$ of (1.1)
satisfying (1.2) in $\bm{R}^{N}\times[0,T]$. Furthermore, if $u_{i0}\geq 0$
for $i=1,2$, then $u_{i}\geq 0$ for $i=1,2$, as long as the solution exists.
Moreover, $u_{i}\in BUC(\bm{R}^{N}\times[0,T))$.
We now return to the results obtained by Meneses and Quaas [16, 17].
They treated (1.6) in the case where $F=F(D^{2}u)$ is an $x$-independent operator
which satisfies (1.7)–(1.10) with the ellipticity constants
$0<\lambda\leq\Lambda$. Then there exists $\alpha=\alpha(F)>0$ such that if
$1<p\leq 1+1/\alpha$, then there exist no global-in-time solutions for any
$u_{0}\in BUC(\bm{R}^{N})$, while if $p>1+1/\alpha$, then there exists a
global-in-time solution for some $u_{0}\in BUC(\bm{R}^{N})$. These results
mean that $1+1/\alpha$ is the Fujita exponent for (1.6).
###### Remark 1.1.
We give several remarks about $\alpha=\alpha(F)$.
1. (i)
If $F$ and $G$ are uniformly elliptic homogeneous such that $F\leq G$, then
$\alpha(F)\leq\alpha(G).$
Moreover, if $0<\lambda\leq\Lambda$ are the ellipticity constants of $F$, then
it holds that
$\frac{N\lambda}{2\Lambda}\leq\alpha(\mathcal{P}_{\lambda,\Lambda}^{-})\leq\alpha(F)\leq\alpha(\mathcal{P}_{\lambda,\Lambda}^{+})\leq\frac{N\Lambda}{2\lambda}.$
See [1, (3.21)] and [17, Lemma 2.2].
2. (ii)
If $F=F(X)$ is convex, then $\alpha(F)\geq N/2$ and this inequality is strict
unless $F$ is linear (see [1, Example 3.12]).
3. (iii)
Note that $\alpha$ coincides with the eigenvalue of
$F(D^{2}\psi)-\frac{1}{2}y\cdot\nabla\psi=\alpha\psi,\quad y\in\bm{R}^{N}.$
(1.11)
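For the model case $F(X)=-\mathrm{tr}(X)$, the eigenpair of (1.11) is explicit: $\alpha=N/2$ with the Gaussian profile $\psi(y)=e^{-|y|^{2}/4}$, consistent with (ii) above, where $\alpha=N/2$ corresponds to the linear case. A one-dimensional symbolic check (ours):

```python
import sympy as sp

# Check: for F(X) = -tr(X) and N = 1, psi(y) = exp(-y^2/4) solves
# F(psi'') - (1/2) y psi' = alpha psi with alpha = 1/2 = N/2.
y = sp.symbols('y', real=True)
psi = sp.exp(-y**2 / 4)
lhs = -sp.diff(psi, y, 2) - sp.Rational(1, 2) * y * sp.diff(psi, y)
print(sp.simplify(lhs - sp.Rational(1, 2) * psi))  # prints 0
```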
We next give a remark on the case where $F=F(x,D^{2}u)$ depends on $x$. In [17],
it was shown that there exists $\tilde{\alpha}=\tilde{\alpha}(F)>0$ such that
all solutions of
$\partial_{t}w+F(x,D^{2}w)=0,\quad x\in\bm{R}^{N},\ t>0,\quad
w(x,0)=w_{0}(x),\quad x\in\bm{R}^{N}$
satisfy
$\lim_{t\to\infty}t^{\tilde{\alpha}}\|w(\cdot,t)\|_{L^{\infty}}<\infty,$
whenever $w_{0}\in BUC(\bm{R}^{N})$ satisfies $0\leq w_{0}(x)\leq
A\exp(-B|x|^{2})$ for some $A,B>0$, while if $\beta>\tilde{\alpha}$, it holds
that
$\lim_{t\to\infty}t^{\beta}\|w(\cdot,t)\|_{L^{\infty}}=\infty.$
This is well-known for the case $F(D^{2}u)=-\bigtriangleup u$ (see e.g. [15]).
In the $x$-dependent case, the Fujita exponent is also given by
$1+1/\tilde{\alpha}$. However, the critical case $p=1+1/\tilde{\alpha}$ has
not been treated in the $x$-dependent setting, even for a single equation.
In this paper, we would like to prove the existence of global-in-time
solutions for the system of fully nonlinear parabolic equations (1.1). In our
setting, we have the choices of a combination of $F_{1}$ and $F_{2}$. Let
$\alpha_{i}=\alpha(F_{i})>0$ be the corresponding eigenvalue of (1.11)
replacing $F$ by $F_{i}$. As mentioned above, $\alpha_{i}>N/2$ if $F_{i}$ is
convex and nonlinear. Therefore, we can expect a different condition to
guarantee the existence of a global-in-time solution from (1.4).
Our main result is the following:
###### Theorem 1.3.
Let $F_{1}$, $F_{2}$ be independent of $x$. Suppose that the $F_{i}$'s satisfy
(1.7)–(1.10). Further assume that $p,q\geq 1$ satisfy $pq>1$ and
$p>\frac{\Lambda_{2}}{\lambda_{1}},\quad q>\frac{\Lambda_{1}}{\lambda_{2}}.$
(1.12)
There exist positive constants $\alpha_{1}$ and $\alpha_{2}$ such that, if
$\frac{p+1}{pq-1}<\alpha_{1}\quad\mathit{and}\quad\frac{q+1}{pq-1}<\alpha_{2},$
(1.13)
then there exists a global-in-time solution to (1.1) and (1.2) for some
$u_{10}$, $u_{20}\in BUC(\bm{R}^{N})$.
Our theorem gives a sufficient condition for the existence of global-in-time
solutions to (1.1) similar to the Fujita type result. We consider a solution
$\psi$ of the problem
$F(D^{2}\psi)-\frac{1}{2}y\cdot D\psi=\mu\psi,\quad
y\in\bm{R}^{N},\quad\lim_{|y|\to\infty}\psi(y)=0,$
where $F$ satisfies (1.7)–(1.10). To prove our main theorem, let us apply an
estimate for $\psi$ of the form
$c\exp(-\delta|y|^{2})\leq\psi(y)\leq C\exp(-\delta|y|^{2}).$
Using this estimate under the condition (1.12), we can find a supersolution to
obtain global-in-time solutions. See Section 4. Note that (1.12) is not needed
for the single equation.
The rest of this paper is organized as follows. In Section 2, we give a
precise definition of viscosity solutions of the Cauchy problem and prepare
several notation. In Section 3, we give a proof of the existence of local-in-
time solutions of (1.1) and (1.2). In Section 4, we prove Theorem 1.3. We give
a proof of Theorem 1.1 in Section 5. In the appendix, for the convenience of
the reader, we give a detailed proof of Perron’s method.
## 2 Preliminaries
For a real valued function $f$ defined in $\bm{R}^{N}\times(0,T)$, define the
upper (resp. lower) semi-continuous envelope $f^{*}$ (resp. $f_{*}$) of $f$ by
$\displaystyle f^{*}(x,t):=\lim_{\varepsilon\to
0}\sup_{\begin{subarray}{c}y\in B(x,\varepsilon)\\\
|s-t|<\varepsilon\end{subarray}}f(y,s),\quad f_{*}(x,t):=\lim_{\varepsilon\to
0}\inf_{\begin{subarray}{c}y\in B(x,\varepsilon)\\\
|s-t|<\varepsilon\end{subarray}}f(y,s),$ (2.1)
for $x\in\bm{R}^{N}$, $t\in(0,T)$. It is well known that $f^{*}$ is upper
semi-continuous, $f_{*}$ is lower semi-continuous and $f_{*}\leq f\leq f^{*}$.
Furthermore, if $f$ is upper semi-continuous, then $f^{*}=f$. The same
property holds for $f_{*}$.
We prepare some notation. Let $A$ be a subset of $\bm{R}^{N}\times[0,\infty)$.
The sets $USC(A)$ and $LSC(A)$ stand for the set of upper semicontinuous
functions on $A$ and lower semicontinuous functions on $A$ respectively. Let
$\Omega\subset\bm{R}^{N}$ and let $BUC(\Omega)$ denote the set of bounded
uniformly continuous functions on $\Omega$. For $T>0$,
$C^{2,1}=C^{2,1}(\bm{R}^{N}\times(0,T))$ denotes the set of all functions
which are $C^{2}$ in the variable $x$ and $C^{1}$ in the variable $t$.
We recall the definition of viscosity solutions of general parabolic systems
$\displaystyle\partial_{t}u_{i}+G_{i}(x,t,u_{1},\ldots,u_{m},Du_{i},D^{2}u_{i})=0,\quad\mbox{in
}\Omega\times(0,T),\ \mbox{for }i=1,\ldots,m,$ (2.2)
where $T\in(0,\infty]$ and $\Omega$ is an open subset of $\bm{R}^{N}$.
###### Definition 2.1.
We call $u=(u_{1},\ldots,u_{m}):\Omega\times(0,T)\to\bm{R}^{m}$ a viscosity
subsolution (resp., supersolution) of (2.2) if for
$(i,x,t,\phi)\in\\{1,\ldots,m\\}\times\Omega\times(0,T)\times
C^{2,1}(\Omega\times(0,T))$,
$\displaystyle\partial_{t}\phi(x,t)+G_{i}(x,t,u_{1}^{*}(x,t),\ldots,u_{m}^{*}(x,t),D\phi(x,t),D^{2}\phi(x,t))$
$\displaystyle\leq 0,$ $\displaystyle(resp.\
\partial_{t}\phi(x,t)+G_{i}(x,t,{u_{1}}_{*}(x,t),\ldots,{u_{m}}_{*}(x,t),D\phi(x,t),D^{2}\phi(x,t))$
$\displaystyle\geq 0)$
provided that $u_{i}^{*}-\phi$ (resp. ${u_{i}}_{*}-\phi$) attains its local
maximum (resp., minimum) at $(x,t)\in\Omega\times(0,T)$. We call
$u:\Omega\times(0,T)\to\bm{R}^{m}$ a viscosity solution of (2.2) if $u$ is a
viscosity sub- and supersolution of (2.2).
We also define a solution to the Cauchy problem.
###### Definition 2.2.
Let $u=(u_{1},\ldots,u_{m}):\Omega\times(0,T)\to\bm{R}^{m}$ be a viscosity
subsolution of (2.2). We call $u$ a viscosity subsolution of the Cauchy
problem (2.2) and
$u_{1}(\cdot,0)=u_{10},\dots,u_{m}(\cdot,0)=u_{m0}$
if $u$ satisfies
$u_{1}(\cdot,0)\leq u_{10},\dots,u_{m}(\cdot,0)\leq u_{m0}\quad\mathrm{in}\
\Omega.$
A viscosity supersolution is also defined in the same way.
###### Definition 2.3.
Define parabolic semi-jet $PJ^{2,+}u(x,t)$ of a function
$u:\bm{R}^{N}\times(0,\infty)\to\bm{R}$ at
$(x,t)\in\bm{R}^{N}\times[0,\infty)$ by
$\displaystyle PJ^{2,+}u(x,t)$ (2.3)
$\displaystyle:=\biggl{\\{}(a,z,X)\in\bm{R}\times\bm{R}^{N}\times S^{N}\
\biggl{|}\ u(y,s)\leq u(x,t)+\langle z,y-x\rangle$
$\displaystyle\quad+\frac{1}{2}\langle
X(y-x),y-x\rangle+a(s-t)+o(|y-x|^{2}+|s-t|)\quad\mathrm{as}\ y\to x,s\to
t.\biggr{\\}},$
where $\langle\cdot,\cdot\rangle$ denotes the standard inner product on
$\bm{R}^{N}$. We also define $PJ^{2,-}u(x,t):=-PJ^{2,+}(-u(x,t))$. Moreover, a
sort of closure of semi-jet $\overline{PJ}^{2,\pm}u(x,t)$ is defined as
follows: $(a,z,X)\in\bm{R}\times\bm{R}^{N}\times S^{N}$ is a point of
$\overline{PJ}^{2,\pm}u(x,t)$ if there exist sequences
$(x_{k},t_{k})\in\bm{R}^{N}\times(0,\infty)$ and $(a_{k},z_{k},X_{k})\in
PJ^{2,\pm}u(x_{k},t_{k})$ such that
$x_{k}\to x,\quad t_{k}\to t,\quad u(x_{k},t_{k})\to u(x,t),\quad a_{k}\to
a,\quad z_{k}\to z,\quad X_{k}\to X$
as $k\to\infty$.
## 3 Existence of local-in-time solutions
In this section, we give a proof of Theorem 1.2. To prove the local existence of
viscosity solutions, we refer to important results from [2]. The following
lemma is modified for the convenience of our argument.
###### Lemma 3.1.
Let $F:S^{N}\to\bm{R}$ be continuous and satisfy the ellipticity condition
$F(Y)\leq F(X)\quad\mathrm{whenever}\ X\leq Y,\quad X,Y\in S^{N}.$ (3.1)
1. (i)
If $u_{0}$ is uniformly continuous on $\bm{R}^{N}$, then the Cauchy problem
$\partial_{t}u+F(D^{2}u)=0\quad\mathrm{in}\ \bm{R}^{N},\quad
u(\cdot,0)=u_{0}\quad\mathrm{in}\ \bm{R}^{N}$ (3.2)
has a unique viscosity solution $u\in C(\bm{R}^{N}\times[0,\infty))$, which is
uniformly continuous in $x\in\bm{R}^{N}$. Moreover, if $u_{0}\in
BUC(\bm{R}^{N})$, then the unique solution $u$ of (3.2) is bounded and
uniformly continuous in $x\in\bm{R}^{N}$.
2. (ii)
Assume $u_{0}\in BUC(\bm{R}^{N})$. Then the solution $u$ of (3.2) generates a
semigroup $\\{S(t)\\}_{t\geq 0}$ on $BUC(\bm{R}^{N})$, which satisfies the
following properties.
1. (1)
For any $\varphi,\psi\in BUC(\bm{R}^{N})$,
$\|S(t)\varphi-S(t)\psi\|_{L^{\infty}}\leq\|\varphi-\psi\|_{L^{\infty}},\quad
t>0.$
2. (2)
For any $\varphi\in BUC(\bm{R}^{N})$,
$\lim_{t\to+0}\|S(t)\varphi-\varphi\|_{L^{\infty}}=0.$
###### Proof of Theorem 1.2.
The proof is based on [16]. It can be seen that $\mathcal{P}_{i}^{-}$
satisfies (3.1) by the definition. Let $\\{S_{i}(t)\\}$ be an order preserving
semigroup generated by $\mathcal{P}_{i}^{-}$. Then, by Lemma 3.1 for each
$i=1,2$, $z_{i}(x,t)=[S_{i}(t)u_{i0}](x)$ is a viscosity solution to
$\partial_{t}z_{i}+\mathcal{P}_{i}^{-}(D^{2}z_{i})=0,\quad
x\in\bm{R}^{N},\,t>0,\quad z_{i}(x,0)=u_{i0}(x),\quad x\in\bm{R}^{N}.$
Furthermore, $S_{i}(t)$ satisfies
$\|S_{i}(t)\varphi-
S_{i}(t)\psi\|_{L^{\infty}}\leq\|\varphi-\psi\|_{L^{\infty}},\quad t>0$ (3.3)
for any $\varphi,\psi\in BUC(\bm{R}^{N})$ and
$\lim_{t\to+0}\|S_{i}(t)\varphi-\varphi\|_{L^{\infty}}=0.$
Let $T>0$. Define
$\Psi:(BUC(\bm{R}^{N}\times[0,T]))^{2}\to(BUC(\bm{R}^{N}\times[0,T]))^{2}$ by
$\Psi[v_{1},v_{2}](t):=(\Phi_{1}[v_{2}](t),\Phi_{2}[v_{1}](t)),\quad 0\leq t\leq
T,$
where
$\displaystyle\Phi_{1}[v_{2}](t)$
$\displaystyle:=S_{1}(t)u_{10}+\int_{0}^{t}S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))\,ds,$
$\displaystyle\Phi_{2}[v_{1}](t)$
$\displaystyle:=S_{2}(t)u_{20}+\int_{0}^{t}S_{2}(t-s)(|v_{1}(s)|^{q-1}v_{1}(s))\,ds.$
For the sake of convenience, we shall show that $\Psi$ is a contraction on a
closed subset of $(BUC(\bm{R}^{N}\times[0,T]))^{2}$. For $M>0$ and $T>0$, the
closed ball $B_{T,M}:=\\{v\in BUC(\bm{R}^{N}\times[0,T]):\sup_{0\leq t\leq
T}\|v(t)\|_{L^{\infty}(\bm{R}^{N})}\leq M\\}$ is a complete metric space.
Without loss of generality, we may assume $u_{10}\not\equiv 0$. Moreover, we
only need to consider $\Phi_{1}$ due to the symmetry. Set
$M:=2(\|u_{10}\|_{L^{\infty}(\bm{R}^{N})}+\|u_{20}\|_{L^{\infty}(\bm{R}^{N})})>0$.
Let $v_{2},\tilde{v}_{2}\in B_{T,M}$. Thanks to (3.3), we see that
$\|S_{1}(t)u_{10}\|_{L^{\infty}(\bm{R}^{N})}\leq\|u_{10}\|_{L^{\infty}(\bm{R}^{N})}$
for all $t\in[0,T]$, and so $S_{1}(t)u_{10}\in B_{T,M}$. Moreover, we have
$\left|\int_{0}^{t}S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))\,ds\right|\leq\int_{0}^{t}\||v_{2}(s)|^{p-1}v_{2}(s)\|_{L^{\infty}(\bm{R}^{N})}\,ds\leq
tM^{p}$
for $t\in[0,T]$. Thus,
$\|\Phi_{1}[v_{2}](t)\|_{L^{\infty}(\bm{R}^{N})}\leq\|u_{10}\|_{L^{\infty}(\bm{R}^{N})}+TM^{p}.$
(3.4)
We next use (3.3) to see
$\displaystyle|\Phi_{1}[v_{2}](t)-\Phi_{1}[\tilde{v}_{2}](t)|$
$\displaystyle\leq\int_{0}^{t}\|\\{(|v_{2}(s)|^{p-1}v_{2}(s))-(|\tilde{v}_{2}(s)|^{p-1}\tilde{v}_{2}(s))\\}\|_{L^{\infty}(\bm{R}^{N})}\,ds.$
By the mean value theorem, we see that there exists some $C>0$ such that
$|\Phi_{1}[v_{2}](t)-\Phi_{1}[\tilde{v}_{2}](t)|\leq CTM^{p-1}\sup_{0\leq
t\leq T}\|v_{2}(t)-\tilde{v}_{2}(t)\|_{L^{\infty}(\bm{R}^{N})}$
for $t\in[0,T]$. It follows that
$\sup_{0\leq t\leq
T}\|\Phi_{1}[v_{2}]-\Phi_{1}[\tilde{v}_{2}]\|_{L^{\infty}(\bm{R}^{N})}\leq
CTM^{p-1}\sup_{0\leq t\leq
T}\|v_{2}-\tilde{v}_{2}\|_{L^{\infty}(\bm{R}^{N})}.$ (3.5)
Therefore, taking $T>0$ small enough, we see that $\Psi$ is a contraction map
on $(B_{T,M})^{2}$. By (3.4) and (3.5), the Banach fixed point theorem
applies, and there exists a unique fixed point, i.e.,
$\Psi[v_{1},v_{2}]=(\Phi_{1}[v_{2}],\Phi_{2}[v_{1}])=(v_{1},v_{2})\in
B_{T,M}^{2}$. Namely, we have
$\displaystyle v_{1}(t)$
$\displaystyle=S_{1}(t)u_{10}+\int_{0}^{t}S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))\,ds,$
(3.6) $\displaystyle v_{2}(t)$
$\displaystyle=S_{2}(t)u_{20}+\int_{0}^{t}S_{2}(t-s)(|v_{1}(s)|^{q-1}v_{1}(s))\,ds$
in $\bm{R}^{N}\times[0,T]$. Furthermore, it follows from (3.6) that
$\|v_{1}(t)-S_{1}(t)u_{10}\|_{L^{\infty}(\bm{R}^{N})}\leq M^{p}t\to 0$
as $t\to+0$, hence
$\lim_{t\to+0}\|v_{1}(t)-u_{10}\|_{L^{\infty}(\bm{R}^{N})}=0.$
We have the same convergence of $v_{2}$ by the same argument.
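The contraction scheme above is also easy to run numerically. The following sketch (ours) mimics the Picard iteration for the mild formulation (3.6), with the one-dimensional periodic heat semigroup standing in for the semigroups $S_{i}(t)$ generated by $\mathcal{P}_{i}^{-}$, which have no closed form; the grid, exponents and initial data are illustrative choices only.

```python
import numpy as np

# Picard iteration for (3.6) with the heat semigroup replacing S_i(t);
# all parameters below are illustrative, not taken from the paper.
Ngrid, L, T, M = 256, 20.0, 0.05, 20
p, q = 2.0, 3.0
x = np.linspace(-L / 2, L / 2, Ngrid, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Ngrid, d=L / Ngrid)
u10, u20 = np.exp(-x**2), 0.5 * np.exp(-x**2)

def S(t, f):
    # heat semigroup e^{t*Laplacian} realised as a Fourier multiplier
    return np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(f)).real

ts, dt = np.linspace(0, T, M + 1), T / M
v1, v2 = np.tile(u10, (M + 1, 1)), np.tile(u20, (M + 1, 1))
for it in range(50):
    w1 = np.array([S(t, u10) + dt * sum(S(t - ts[j], np.abs(v2[j])**(p - 1) * v2[j])
                                        for j in range(m)) for m, t in enumerate(ts)])
    w2 = np.array([S(t, u20) + dt * sum(S(t - ts[j], np.abs(v1[j])**(q - 1) * v1[j])
                                        for j in range(m)) for m, t in enumerate(ts)])
    gap = max(np.abs(w1 - v1).max(), np.abs(w2 - v2).max())
    v1, v2 = w1, w2
    if gap < 1e-12:
        break
print(it, gap)  # the map contracts rapidly for small T, as in the proof
```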
By the regularity theory (see e.g. [13, Theorem 1.6, Chapter 13], [14, Theorem
14.10], [20, Theorem 4.13]), we know that $S_{i}(t)u_{i0}$ is a classical
solution of $\partial_{t}w+\mathcal{P}^{-}_{i}(D^{2}w)=0$. It follows from
(3.6) that $\partial_{t}v_{1},\partial_{t}v_{2}$ exist. Taking the derivative
of the right-hand side of (3.6), we see that
$\displaystyle\frac{\partial}{\partial
t}[S_{1}(t)u_{10}]=-\mathcal{P}_{1}^{-}(D^{2}[S_{1}(t)u_{10}]),$
$\displaystyle\frac{\partial}{\partial
t}\int_{0}^{t}S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))\,ds$
$\displaystyle=-\int_{0}^{t}\mathcal{P}^{-}_{1}(D^{2}[S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))])\,ds+|v_{2}(t)|^{p-1}v_{2}(t).$
The same argument shows that $\partial_{t}v_{2}$ exists. Thus, they satisfy
$\displaystyle\partial_{t}v_{1}+\mathcal{P}^{-}_{1}(D^{2}S_{1}(t)u_{10})$
$\displaystyle=-\int_{0}^{t}\mathcal{P}^{-}_{1}(D^{2}[S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))])\,ds+|v_{2}|^{p-1}v_{2}(t),$
$\displaystyle\partial_{t}v_{2}+\mathcal{P}^{-}_{2}(D^{2}S_{2}(t)u_{20})$
$\displaystyle=-\int_{0}^{t}\mathcal{P}^{-}_{2}(D^{2}[S_{2}(t-s)(|v_{1}(s)|^{q-1}v_{1}(s))])\,ds+|v_{1}|^{q-1}v_{1}(t).$
It follows from the property
$\mathcal{P}_{i}^{-}(X+Y)\geq\mathcal{P}_{i}^{-}(X)+\mathcal{P}_{i}^{-}(Y)$,
that
$\displaystyle-\int_{0}^{t}\mathcal{P}^{-}_{1}(D^{2}[S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))])\,ds$
$\displaystyle\geq-\mathcal{P}^{-}_{1}\left(\int_{0}^{t}D^{2}[S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))])\,ds\right).$
Furthermore, since $S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))$ is of $C^{2}$ as a
function of $x$, we have
$\int_{0}^{t}D^{2}[S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))]\,ds=D^{2}\int_{0}^{t}S_{1}(t-s)(|v_{2}(s)|^{p-1}v_{2}(s))\,ds,$
hence
$\displaystyle\partial_{t}v_{1}+\mathcal{P}_{1}^{-}\left(D^{2}\left[S_{1}(t)u_{10}+\int_{0}^{t}S_{1}(t-s)|v_{2}|^{p-1}v_{2}(s)\,ds\right]\right)$
$\displaystyle\geq|v_{2}|^{p-1}v_{2}(t),$ (3.7)
$\displaystyle\partial_{t}v_{2}+\mathcal{P}_{2}^{-}\left(D^{2}\left[S_{2}(t)u_{20}+\int_{0}^{t}S_{2}(t-s)|v_{1}|^{q-1}v_{1}(s)\,ds\right]\right)$
$\displaystyle\geq|v_{1}|^{q-1}v_{1}(t)\quad$
in $\bm{R}^{N}\times[0,T]$. Note that the interchange of $D^{2}$ and the integral above is justified by the regularity of the integrand.
We note that for $i=1,2$, $F_{i}$ satisfies
$\mathcal{P}^{-}_{i}(X)\leq F_{i}(x,X),\quad x\in\bm{R}^{N},\,X\in S^{N}.$
Consequently, we then deduce from (3.7) that
$\displaystyle\partial_{t}v_{1}+F_{1}\left(x,D^{2}v_{1}\right)\geq|v_{2}|^{p-1}v_{2},\quad\partial_{t}v_{2}+F_{2}\left(x,D^{2}v_{2}\right)\geq|v_{1}|^{q-1}v_{1}$
for $x\in\bm{R}^{N}$ and $t>0$. Namely, $(v_{1},v_{2})$ is a viscosity
supersolution of (1.1) and (1.2). Replacing $\mathcal{P}^{-}$ by
$\mathcal{P}^{+}$, we can also obtain a viscosity subsolution of (1.1)
satisfying (1.2).
By the Perron method and the comparison principle, there exists a continuous
viscosity solution $(u_{1},u_{2})$ of (1.1) satisfying (1.2). Nonnegativity of
solutions follows from the comparison principle.
Finally, we shall show that $u_{i}\in BUC(\bm{R}^{N}\times[0,T))$ for $i=1,2$.
We refer to [8, Section 3.5] for the method. Let
$(\underline{u}_{1},\underline{u}_{2})$ and
$(\overline{u}_{1},\overline{u}_{2})$ be the viscosity subsolution and
supersolution to (1.1) and (1.2) obtained above. We can see that
$\underline{u}_{i}(x,t)-u_{i0}(x)\leq
u_{i}(x,t)-u_{i0}(x)\leq\overline{u}_{i}(x,t)-u_{i0}(x)$
for $x\in\bm{R}^{N}$, $t>0$. Since $\underline{u}_{i},\overline{u}_{i}\in
BUC(\bm{R}^{N}\times[0,T))$, there exists a modulus of continuity
$\omega:[0,\infty)\to[0,\infty)$ with $\omega(0)=0$ such that
$\underline{u}_{i}(x,t)-u_{i0}(x)\geq-\omega(t)$
and
$\overline{u}_{i}(x,t)-u_{i0}(x)\leq\omega(t),$
which implies that
$\sup_{\begin{subarray}{c}{i=1,2}\\\
x\in\bm{R}^{n}\end{subarray}}|u_{i}(x,t)-u_{i0}(x)|\leq\omega(t).$ (3.8)
For fixed $h>0$, set
$\displaystyle\overline{v}_{i}(x,t)$
$\displaystyle:=u_{i}(x,t+h)+\omega(h),\quad x\in\bm{R}^{N},\ t\geq 0,$
$\displaystyle\underline{v}_{i}(x,t)$
$\displaystyle:=u_{i}(x,t+h)-\omega(h),\quad x\in\bm{R}^{N},\ t\geq 0,$
for $i=1,2$. Then $(\overline{v}_{1},\overline{v}_{2})$ is a viscosity
supersolution and $(\underline{v}_{1},\underline{v}_{2})$ is a viscosity
subsolution to (1.1) and (1.2). We see from (3.8) that
$\underline{v}_{i}(x,0)\leq u_{i0}(x)\leq\overline{v}_{i}(x,0)$. By Theorem
1.1, we see that
$\underline{v}_{i}(x,t)\leq u_{i}(x,t)\leq\overline{v}_{i}(x,t)\quad
x\in\bm{R}^{N},\ t\in[0,T].$
Therefore, we obtain
$|u_{i}(x,t)-u_{i}(x,t+h)|\leq\omega(h)$
for all $x\in\bm{R}^{N}$. This shows that $u_{i}$ is uniformly continuous with
respect to variable $t$. Since $u_{i0}\in BUC(\bm{R}^{N})$, there exists
another modulus of continuity $\omega$ so that
$\sup_{i=1,2}|u_{i0}(x)-u_{i0}(y)|\leq\omega(|x-y|)$
for $x,y\in\bm{R}^{N}$. Similarly to the above discussion, set for
$h\in\bm{R}^{N}$,
$\displaystyle\overline{w}_{i}(x,t):=u_{i}(x+h,t)+\omega(|h|),\quad
x\in\bm{R}^{N},\ t\geq 0,$
$\displaystyle\underline{w}_{i}(x,t):=u_{i}(x+h,t)-\omega(|h|),\quad
x\in\bm{R}^{N},\ t\geq 0.$
Then $(\underline{w}_{1},\underline{w}_{2})$ is a viscosity subsolution and
$(\overline{w}_{1},\overline{w}_{2})$ is a supersolution to (1.1) and (1.2).
Since $\underline{w}_{i}(x,0)\leq u_{i}(x,0)\leq\overline{w}_{i}(x,0)$, by
Theorem 1.1, we obtain
$\sup_{i=1,2}|u_{i}(x,t)-u_{i}(x+h,t)|\leq\omega(|h|)$
for all $t\in[0,T)$. Summarizing, $u_{i}$’s are uniformly continuous in
$\bm{R}^{N}\times[0,T)$. ∎
## 4 Existence of global-in-time solutions (proof of Theorem 1.3)
In this section, we shall prove the existence of global-in-time solutions to
(1.1) and (1.2). We use the following Lemma.
###### Lemma 4.1 ([1, Lemma 3.10]).
Let $0<\lambda\leq\Lambda$. Assume that $F$ satisfies (1.8) and (1.10). For
each $\delta<(4\Lambda)^{-1}$, there exists $C>0$ such that
$\psi(y)\leq C\exp(-\delta|y|^{2}),\quad y\in\bm{R}^{N},$
where $\psi$ is the profile function of a unique positive self-similar
solution $\Phi$ of $\partial_{t}u+F(D^{2}u)=0$ appearing as
$\Phi(x,t)=t^{-\alpha(F)}\psi(x/\sqrt{t})$. Likewise, for each
$\delta>(4\lambda)^{-1}$, there exists $C>0$ such that
$C\exp(-\delta|y|^{2})\leq\psi(y),\quad y\in\bm{R}^{N}.$
###### Proof of Theorem 1.3.
For $i=1,2$, let $\psi_{i}$ be a positive solution of the eigenvalue problem
$F_{i}(D^{2}\psi_{i})-\frac{1}{2}y\cdot D\psi_{i}=\mu\psi_{i},\quad
y\in\bm{R}^{N},\quad\lim_{|y|\to\infty}\psi_{i}(y)=0.$ (4.1)
Let $(\alpha(F_{i}),\psi_{i})$ be the eigenpair of (4.1). Set
$\alpha_{i}=\alpha(F_{i})$. The existence of solution to (4.1) is obtained in
[1, Section 3]. See also [16]. Let us look for a supersolution to (1.1) of
the form
$\overline{u_{1}}(x,t):=\varepsilon(t+1)^{a}\phi_{1}(x,t+1),\quad\overline{u_{2}}(x,t):=\tilde{\varepsilon}(t+1)^{b}\phi_{2}(x,t+1),$
where $\phi_{i}$ is defined by
$\phi_{i}(x,t):=t^{-\alpha_{i}}\psi_{i}(t^{-\frac{1}{2}}x),\quad
x\in\bm{R}^{N},\,t>0$
and $a,b>0$ will be determined below. We refer to [19, Section 32] for the case of linear
diffusion. For each $i=1,2$, $\phi_{i}$ satisfies
$\partial_{t}\phi_{i}+F_{i}(D^{2}\phi_{i})=0\quad\mathrm{in}\
\bm{R}^{N}\times(0,\infty)$ (4.2)
in the sense of viscosity solution. In fact, by the argument used in [16,
Lemma 3.1], we can see that $\phi_{i}$ satisfies (4.2).
In what follows, we shall find a sufficient condition that
$(\overline{u}_{1},\overline{u}_{2})$ becomes a viscosity supersolution of
(1.1). We have
$\displaystyle\partial_{t}\overline{u_{1}}$
$\displaystyle=a\varepsilon(t+1)^{a-1}\phi_{1}+\varepsilon(t+1)^{a}\partial_{t}\phi_{1}(x,t+1),\quad
x\in\bm{R}^{N},\ t\geq 0.$ (4.3)
Assume that $\overline{u}_{1}-\Phi$ attains its minimum at $(x,t)$ and
satisfies that
$(\overline{u}_{1}-\Phi)(x,t)=0.$
Note that the function
$\phi_{1}(\cdot,\cdot+1)-\frac{1}{\varepsilon(t+1)^{a}}\Phi$
also attains minimum at $(x,t)$. Since $\phi_{1}(\cdot,\cdot+1)\in
C^{1,1}(\bm{R}^{N}\times(0,\infty))$ and $\phi_{1}$ is a viscosity solution of
(4.2), it holds that
$\displaystyle\partial_{t}\Phi(x,t)$
$\displaystyle=a\varepsilon(t+1)^{a-1}\phi_{1}+\varepsilon(t+1)^{a}\partial_{t}\phi_{1}$
$\displaystyle\geq
a\varepsilon(t+1)^{a-1}\phi_{1}-\varepsilon(t+1)^{a}F_{1}\left(\frac{1}{\varepsilon(t+1)^{a}}D^{2}\Phi\right)$
$\displaystyle=a\varepsilon(t+1)^{a-1}\phi_{1}-F_{1}(D^{2}\Phi),$
where the last equality uses the homogeneity (1.10).
On the other hand, we have
$\displaystyle\overline{u_{2}}^{p}$
$\displaystyle=\tilde{\varepsilon}^{p}(t+1)^{bp}\phi_{2}^{p},\quad t\geq 0.$
(4.4)
Combining (4.3)–(4.4), we see that
$\partial_{t}\Phi+F_{1}(D^{2}\Phi)-\overline{u_{2}}^{p}\geq(t+1)^{bp-\alpha_{2}p}[\varepsilon
a(t+1)^{a-1-\alpha_{1}-bp+\alpha_{2}p}\psi_{1}-\tilde{\varepsilon}^{p}\psi_{2}^{p}]$
(4.5)
at $(x,t)$. In the same way, it holds that
$\partial_{t}\Phi+F_{2}(D^{2}\Phi)-\overline{u_{1}}^{q}\geq(t+1)^{aq-\alpha_{1}q}[\tilde{\varepsilon}
b(t+1)^{b-1-\alpha_{2}-aq+\alpha_{1}q}\psi_{2}-\varepsilon^{q}\psi_{1}^{q}]$
(4.6)
at $(x,t)$.
To ensure that the right-hand sides of (4.5), (4.6) become nonnegative, it
suffices that
$\displaystyle a-1-\alpha_{1}-bp+\alpha_{2}p\geq 0,\quad\varepsilon
a\psi_{1}\geq\tilde{\varepsilon}^{p}\psi_{2}^{p},$ $\displaystyle
b-1-\alpha_{2}-aq+\alpha_{1}q\geq
0,\quad\tilde{\varepsilon}b\psi_{2}\geq\varepsilon^{q}\psi_{1}^{q}.$
Solving
$a-1-\alpha_{1}-bp+\alpha_{2}p=0,\quad b-1-\alpha_{2}-aq+\alpha_{1}q=0,$
we find the conditions
$a=\alpha_{1}-\frac{p+1}{pq-1},\quad b=\alpha_{2}-\frac{q+1}{pq-1}.$ (4.7)
Under the conditions (4.7), $a>0$ and $b>0$ are equivalent to (1.13), that is,
$\alpha_{1}>\frac{p+1}{pq-1},\quad\alpha_{2}>\frac{q+1}{pq-1}.$
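The elementary algebra leading to (4.7) can be double-checked symbolically; a minimal sketch (ours):

```python
import sympy as sp

# Solve a - 1 - alpha1 - b*p + alpha2*p = 0 and
#       b - 1 - alpha2 - a*q + alpha1*q = 0 for (a, b); compare with (4.7).
a, b, p, q, al1, al2 = sp.symbols('a b p q alpha1 alpha2')
sol = sp.solve([a - 1 - al1 - b * p + al2 * p,
                b - 1 - al2 - a * q + al1 * q], [a, b])
print(sp.simplify(sol[a] - (al1 - (p + 1) / (p * q - 1))))  # prints 0
print(sp.simplify(sol[b] - (al2 - (q + 1) / (p * q - 1))))  # prints 0
```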
If $\varepsilon=\tilde{\varepsilon}$, then we can find $\varepsilon>0$ so
small that $a\geq\varepsilon^{p-1}(\psi_{2}^{p}/\psi_{1})$ and
$b\geq\varepsilon^{q-1}(\psi_{1}^{q}/\psi_{2})$. Note that
$\psi_{2}^{p}/\psi_{1}$ and $\psi_{1}^{q}/\psi_{2}$ are bounded. Indeed,
applying Lemma 4.1 for $F_{i}$, for each $a_{i}<(4\Lambda_{i})^{-1}$, there
exists $C_{i}^{+}>0$ such that
$\psi_{i}(y)\leq C_{i}^{+}\exp(-a_{i}|y|^{2}),\quad y\in\bm{R}^{N},\quad
i=1,2.$
We also see that for each $b_{i}>(4\lambda_{i})^{-1}$, there exists $C_{i}^{-}>0$
such that
$C_{i}^{-}\exp(-b_{i}|y|^{2})\leq\psi_{i}(y),\quad y\in\bm{R}^{N},$
hence
$C_{1}\frac{\exp(-b_{2}|y|^{2})^{p}}{\exp(-a_{1}|y|^{2})}\leq\frac{\psi_{2}^{p}}{\psi_{1}}\leq
C_{2}\frac{\exp(-a_{2}|y|^{2})^{p}}{\exp(-b_{1}|y|^{2})},$
where $C_{1}=(C_{2}^{-})^{p}/C_{1}^{+}$ and $C_{2}=(C_{2}^{+})^{p}/C_{1}^{-}$.
We also have a similar estimate for $\psi_{1}^{q}/\psi_{2}$. Finally, it
follows from (1.12) that $\psi_{2}^{p}/\psi_{1}$ and $\psi_{1}^{q}/\psi_{2}$
are bounded: the upper bound above remains bounded as $|y|\to\infty$ provided $pa_{2}\geq b_{1}$, and since $a_{2}$ can be taken arbitrarily close to $(4\Lambda_{2})^{-1}$ and $b_{1}$ arbitrarily close to $(4\lambda_{1})^{-1}$, this is possible exactly when $p>\Lambda_{2}/\lambda_{1}$; the argument for $q$ is symmetric. Therefore, assuming (1.13) and choosing
$u_{i0}(x):=\bar{u}_{i}(x,0)$, by the Perron method, there exists a global-in-
time solution $(u_{1},u_{2})$ to (1.1) and (1.2). ∎
## 5 Proof of Theorem 1.1
In this section, we prove Theorem 1.1.
###### Lemma 5.1 ([12, Proposition 3.8 (2)]).
Assume that $F$ satisfies (1.8) and (1.9). There exists a modulus of continuity
$\omega_{F}:[0,\infty)\to\bm{R}$ such that, if $X$, $Y\in S^{N}$, $\mu>1$
satisfy
$\displaystyle-3\mu\begin{pmatrix}I&O\\\
O&I\end{pmatrix}\leq\begin{pmatrix}X&O\\\ O&-Y\end{pmatrix}\leq
3\mu\begin{pmatrix}I&-I\\\ -I&I\end{pmatrix},$
then
$F(y,Y)-F(x,X)\leq\omega_{F}\left(|x-y|+\mu|x-y|^{2}\right)$
for all $x,y\in\bm{R}^{N}$.
###### Lemma 5.2.
Let $(u_{1},u_{2})\in USC\cap L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$ (resp.,
$LSC\cap L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$) be a viscosity subsolution
(resp., supersolution) of (1.1). We set for $i=1,2$,
$\displaystyle w_{i}:=e^{-\nu t}u_{i},$
where $\nu>0$ is a constant. Then, $(w_{1},w_{2})$ is a viscosity
subsolution (resp., supersolution) of
$\left\\{\begin{aligned} \partial_{t}w_{1}+F_{1}(x,D^{2}w_{1})+\nu
w_{1}-e^{(p-1)\nu t}|w_{2}|^{p-1}w_{2}=0,\quad x\in\bm{R}^{N},\ t>0,\\\
\partial_{t}w_{2}+F_{2}(x,D^{2}w_{2})+\nu w_{2}-e^{(q-1)\nu
t}|w_{1}|^{q-1}w_{1}=0,\quad x\in\bm{R}^{N},\ t>0.\end{aligned}\right.$ (5.1)
###### Proof.
We shall argue only for $w_{1}$. Let $\varphi\in C^{2,1}(\bm{R}^{N}\times[0,T))$ be
such that $w_{1}-\varphi$ achieves a maximum at $(x_{0},t_{0})$ and
$(w_{1}-\varphi)(x_{0},t_{0})=0.$
Then for all $(x,t)\in\bm{R}^{N}\times[0,T)$,
$e^{-\nu
t}u_{1}(x,t)-\varphi(x,t)=(w_{1}-\varphi)(x,t)\leq(w_{1}-\varphi)(x_{0},t_{0})=0.$
We have $u_{1}(x,t)-e^{\nu t}\varphi(x,t)\leq 0$ for all
$(x,t)\in\bm{R}^{N}\times[0,T)$. On the other hand, $u_{1}(x_{0},t_{0})-e^{\nu
t_{0}}\varphi(x_{0},t_{0})=0$. Thus, $u_{1}-e^{\nu t}\varphi$ attains a
maximum at $(x_{0},t_{0})$. Since $(u_{1},u_{2})$ is a viscosity subsolution
of (1.1), we have, at $(x,t)=(x_{0},t_{0})$,
$\displaystyle 0$ $\displaystyle\geq\nu e^{\nu t}\varphi+e^{\nu
t}\partial_{t}\varphi+F_{1}(x,e^{\nu t}D^{2}\varphi)-|u_{2}|^{p-1}u_{2}$
$\displaystyle=\nu e^{\nu t}\varphi+e^{\nu t}\partial_{t}\varphi+e^{\nu
t}F_{1}(x,D^{2}\varphi)-e^{p\nu t}|w_{2}|^{p-1}w_{2}.$
We here used (1.10). Therefore, we obtain
$0\geq\nu w_{1}+\partial_{t}\varphi+F_{1}(x,D^{2}\varphi)-e^{(p-1)\nu
t}|w_{2}|^{p-1}w_{2}.$
By the same argument, we also obtain
$0\geq\nu w_{2}+\partial_{t}\varphi+F_{2}(x,D^{2}\varphi)-e^{(q-1)\nu
t}|w_{1}|^{q-1}w_{1}.$
Consequently, $(w_{1},w_{2})$ is a viscosity subsolution of (5.1); the supersolution case is analogous. ∎
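The computation in the proof is mechanical and can be verified symbolically, e.g. for the linear model $F(X)=-\mathrm{tr}(X)$ in one space dimension (which is homogeneous of order one) and nonnegative $u_{2}$, so that $|u_{2}|^{p-1}u_{2}=u_{2}^{p}$; a sketch (ours):

```python
import sympy as sp

# Check the change of variables w_i = e^{-nu t} u_i from Lemma 5.2 with
# F(X) = -X (heat case): the residual of (5.1) for (w_1, w_2) equals
# e^{-nu t} times the residual of the original equation for (u_1, u_2).
t, x, nu, p = sp.symbols('t x nu p', positive=True)
u1 = sp.Function('u1')(x, t)
u2 = sp.Function('u2', positive=True)(x, t)
w1, w2 = sp.exp(-nu * t) * u1, sp.exp(-nu * t) * u2
res = (sp.diff(w1, t) - sp.diff(w1, x, 2) + nu * w1
       - sp.exp((p - 1) * nu * t) * w2**p)
orig = sp.diff(u1, t) - sp.diff(u1, x, 2) - u2**p
diff = sp.expand_power_base(res - sp.exp(-nu * t) * orig, force=True)
print(sp.simplify(diff))  # prints 0
```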
Theorem 1.1 is shown by proving the following proposition:
###### Proposition 5.1.
Let $p,q\geq 1$ and $T>0$. Let $(u_{1},u_{2})\in USC\cap
L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$ be a viscosity subsolution and
$(v_{1},v_{2})\in LSC\cap L^{\infty}(\bm{R}^{N}\times[0,T))^{2}$ be a
viscosity supersolution of (5.1), respectively. Assume that there exists a
constant $R>0$ such that for all $\lambda>0$,
$|e^{\nu t}u_{i}|,|e^{\nu t}v_{i}|\leq R\quad\mathrm{in}\
\bm{R}^{N}\times[0,T),i=1,2.$ (5.2)
If $u_{i0}\leq v_{i0}$ in $\bm{R}^{N}$ for $i=1,2$, then $u_{i}\leq v_{i}$ in
$\bm{R}^{N}\times[0,T)$ for $i=1,2$.
###### Proof.
For $\mu$, $\delta>0$, define
$\theta_{\mu,\delta}:=\sup_{i,x,t}\\{u_{i}(x,t)-v_{i}(x,t)-\frac{\mu}{T-t}-\delta|x|^{2}\\}.$
By the assumption (5.2), we see that $\theta_{\mu,\delta}\leq
2R$. Put $\displaystyle\theta:=\limsup_{\mu,\delta\to 0}\theta_{\mu,\delta}$.
If $\theta\leq 0$, for all $x$, $t$, $i$, we have
$u_{i}(x,t)-v_{i}(x,t)\leq\frac{\mu}{T-t}+\delta|x|^{2}+\theta_{\mu,\delta}.$
Taking $\limsup$, we then have $u_{i}(x,t)-v_{i}(x,t)\leq 0$.
To obtain a contradiction, suppose that $\theta>0$. There exists a subsequence
(expressed by the same symbol) such that $\theta_{\mu,\delta}\geq\theta/2>0$.
In what follows fix $\mu$ and $\delta$ sufficiently small. Consider the
doubling of the variables
$(i,x,y,t,s)\mapsto
u_{i}(x,t)-v_{i}(y,s)-\frac{\mu}{T-t}-\delta|x|^{2}-\frac{|x-y|^{2}}{2\varepsilon}-\frac{|t-s|^{2}}{2\varepsilon},$
where $\varepsilon\in(0,1)$ is a parameter. Assume that the doubling map
attains a maximum at
$(i_{\varepsilon},x_{\varepsilon},y_{\varepsilon},t_{\varepsilon},s_{\varepsilon})$.
We may assume
$(i_{\varepsilon},x_{\varepsilon},y_{\varepsilon},t_{\varepsilon},s_{\varepsilon})\in\\{1,2\\}\times\overline{B_{r_{\delta}}}^{2}\times[0,T-\tau_{\mu}]^{2}$
for some $r_{\delta}>0$ and $\tau_{\mu}>0$, where $B_{r}$ stands for the ball
centered at the origin with radius $r>0$.
It follows from
$\theta_{\mu,\delta}\leq
u_{i}(x,t)-v_{i}(y,s)-\frac{\mu}{T-t}-\delta|x|^{2}-\frac{|x-y|^{2}}{2\varepsilon}-\frac{|t-s|^{2}}{2\varepsilon}$
at
$(i_{\varepsilon},x_{\varepsilon},y_{\varepsilon},t_{\varepsilon},s_{\varepsilon})$
that
$\displaystyle\frac{|x_{\varepsilon}-y_{\varepsilon}|^{2}}{2\varepsilon}+\frac{|t_{\varepsilon}-s_{\varepsilon}|^{2}}{2\varepsilon}+\frac{\mu}{T-t_{\varepsilon}}+\delta|x_{\varepsilon}|^{2}$
$\displaystyle\leq
u_{i_{\varepsilon}}(x_{\varepsilon},t_{\varepsilon})-v_{i_{\varepsilon}}(y_{\varepsilon},s_{\varepsilon})-\theta_{\mu,\delta}$
(5.3) $\displaystyle\leq 4R.$
Taking a subsequence of $i_{\varepsilon}$ if necessary, we may assume
$i_{\varepsilon}\equiv\hat{i}\in\\{1,2\\}$ for sufficiently small
$\varepsilon$. Taking a further subsequence, we find
$(\hat{x},\hat{t},\hat{i})\in\overline{B_{r_{\delta}}}\times[0,T-\tau_{\mu})\times\\{1,2\\}$
such that
$x_{\varepsilon},y_{\varepsilon}\to\hat{x},\quad
t_{\varepsilon},s_{\varepsilon}\to\hat{t},\quad i_{\varepsilon}\to\hat{i}$
as $\varepsilon\to 0$. It follows from (5.3) that
$\displaystyle\limsup_{\varepsilon\to
0}\left(\frac{|x_{\varepsilon}-y_{\varepsilon}|^{2}}{2\varepsilon}+\frac{|t_{\varepsilon}-s_{\varepsilon}|^{2}}{2\varepsilon}\right)$
$\displaystyle\leq\limsup_{\varepsilon\to
0}\left(u_{\hat{i}}(x_{\varepsilon},t_{\varepsilon})-v_{\hat{i}}(y_{\varepsilon},s_{\varepsilon})-\frac{\mu}{T-t_{\varepsilon}}-\delta|x_{\varepsilon}|^{2}\right)-\theta_{\mu,\delta}$
$\displaystyle\leq
u_{\hat{i}}(\hat{x},\hat{t})-v_{\hat{i}}(\hat{x},\hat{t})-\frac{\mu}{T-\hat{t}}-\delta|\hat{x}|^{2}-\theta_{\mu,\delta}$
$\displaystyle\leq 0.$
Thus, we obtain
$\lim_{\varepsilon\to
0}\frac{|x_{\varepsilon}-y_{\varepsilon}|^{2}}{2\varepsilon}=\lim_{\varepsilon\to
0}\frac{|t_{\varepsilon}-s_{\varepsilon}|^{2}}{2\varepsilon}=0.$
To obtain a contradiction, suppose that $\hat{t}=0$. Then
$\displaystyle 0$ $\displaystyle\leq\limsup_{\varepsilon\to
0}\left(u_{\hat{i}}(x_{\varepsilon},t_{\varepsilon})-v_{\hat{i}}(y_{\varepsilon},s_{\varepsilon})-\frac{\mu}{T-t_{\varepsilon}}-\delta|x_{\varepsilon}|^{2}\right)-\theta_{\mu,\delta}$
$\displaystyle=u_{\hat{i}}(\hat{x},0)-v_{\hat{i}}(\hat{x},0)-\frac{\mu}{T}-\delta|\hat{x}|^{2}-\theta_{\mu,\delta}$
$\displaystyle\leq 0.$
Therefore, we obtain
$u_{\hat{i}}(\hat{x},0)-v_{\hat{i}}(\hat{x},0)-\frac{\mu}{T}-\delta|\hat{x}|^{2}=\theta_{\mu,\delta},$
which is impossible, since $u_{\hat{i}}(\cdot,0)\leq v_{\hat{i}}(\cdot,0)$ while $\theta_{\mu,\delta}>0$. Therefore, $\hat{t}$ must be positive.
By the Ishii lemma (see [10, Lemma 2.3.23], [8, Chapter 3]), it holds that
$\displaystyle\left(\frac{t_{\varepsilon}-s_{\varepsilon}}{\varepsilon},\frac{x_{\varepsilon}-y_{\varepsilon}}{\varepsilon},X\right)$
$\displaystyle\in\overline{PJ}^{2,+}\left(u_{\hat{i}}(x_{\varepsilon},t_{\varepsilon})-\delta|x_{\varepsilon}|^{2}-\frac{\mu}{T-t_{\varepsilon}}\right),$
$\displaystyle\left(\frac{t_{\varepsilon}-s_{\varepsilon}}{\varepsilon},\frac{x_{\varepsilon}-y_{\varepsilon}}{\varepsilon},Y\right)$
$\displaystyle\in\overline{PJ}^{2,-}v_{\hat{i}}(y_{\varepsilon},s_{\varepsilon})$
and
$\displaystyle-\frac{3}{\varepsilon}\begin{pmatrix}I&O\\\
O&I\end{pmatrix}\leq\begin{pmatrix}X&O\\\
O&-Y\end{pmatrix}\leq\frac{3}{\varepsilon}\begin{pmatrix}I&-I\\\
-I&I\end{pmatrix},$
where $\overline{PJ}^{2,+}$ and $\overline{PJ}^{2,-}$ are defined in (2.3). In general,
$\displaystyle\left(\frac{t_{\varepsilon}-s_{\varepsilon}}{\varepsilon},\frac{x_{\varepsilon}-y_{\varepsilon}}{\varepsilon},X\right)$
$\displaystyle\in\overline{PJ}^{2,+}\left(u_{\hat{i}}(x_{\varepsilon},t_{\varepsilon})-\delta|x_{\varepsilon}|^{2}-\frac{\mu}{T-t_{\varepsilon}}\right)$
is equivalent to
$\displaystyle\left(\frac{t_{\varepsilon}-s_{\varepsilon}}{\varepsilon}+\frac{\mu}{(T-t_{\varepsilon})^{2}},\frac{x_{\varepsilon}-y_{\varepsilon}}{\varepsilon}+2\delta
x_{\varepsilon},X+2\delta I\right)$
$\displaystyle\in\overline{PJ}^{2,+}u_{\hat{i}}(x_{\varepsilon},t_{\varepsilon}).$
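In particular, testing the second matrix inequality of the Ishii lemma with vectors of the form $(\xi,\xi)\in\bm{R}^{2N}$ records the standard one-sided bound (a worked step, included for the reader's convenience):
$\left\langle X\xi,\xi\right\rangle-\left\langle Y\xi,\xi\right\rangle=\left\langle\begin{pmatrix}X&O\\\
O&-Y\end{pmatrix}\begin{pmatrix}\xi\\\
\xi\end{pmatrix},\begin{pmatrix}\xi\\\
\xi\end{pmatrix}\right\rangle\leq\frac{3}{\varepsilon}\left\langle\begin{pmatrix}I&-I\\\
-I&I\end{pmatrix}\begin{pmatrix}\xi\\\
\xi\end{pmatrix},\begin{pmatrix}\xi\\\
\xi\end{pmatrix}\right\rangle=0,$
so that $X\leq Y$ in the sense of symmetric matrices.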
We now set $i:=\hat{i}$, $p_{1}=p$, $p_{2}=q$, and let $j$ denote the index with $j\neq i$. In what follows, we drop the subscript $\varepsilon$ for simplicity. Since
$(u_{1},u_{2})$ is a viscosity subsolution to (5.1) and $(v_{1},v_{2})$ is a
viscosity supersolution to (5.1), we have
$\displaystyle\frac{t-s}{\varepsilon}+\frac{\mu}{(T-t)^{2}}+F_{i}(x,X+2\delta
I)+\nu u_{i}(x,t)-e^{(p_{i}-1)\nu t}|u_{j}|^{p_{i}-1}u_{j}\leq 0,$
$\displaystyle\frac{t-s}{\varepsilon}+F_{i}(y,Y)+\nu
v_{i}(y,s)-e^{(p_{i}-1)\nu t}|v_{j}|^{p_{i}-1}v_{j}\geq 0.$
Combining these inequalities, we see that
$\displaystyle\frac{\mu}{(T-t)^{2}}+\nu(u_{i}(x,t)-v_{i}(y,s))-e^{(p_{i}-1)\nu
t}(|u_{j}|^{p_{i}-1}u_{j}-|v_{j}|^{p_{i}-1}v_{j})$ (5.4) $\displaystyle\leq
F_{i}(y,Y)-F_{i}(x,X+2\delta I).$
By Lemma 5.1, there exists a modulus of continuity
$\omega_{F_{i}}:[0,\infty)\to\bm{R}$ such that
$F_{i}(y,Y)-F_{i}(x,X)\leq\omega_{F_{i}}\left(|x-y|+\frac{|x-y|^{2}}{\varepsilon}\right)$
for all $x,y\in\bm{R}^{N}$. This together with (1.8) implies that
$\displaystyle F_{i}(y,Y)-F_{i}(x,X+2\delta I)$ $\displaystyle=F_{i}(y,Y)-F_{i}(x,X)+F_{i}(x,X)-F_{i}(x,X+2\delta I)$ (5.5)
$\displaystyle\leq\omega_{F_{i}}\left(|x-y|+\frac{|x-y|^{2}}{\varepsilon}\right)+\mathcal{P}_{i}^{+}(-2\delta
I)$
$\displaystyle=\omega_{F_{i}}\left(|x-y|+\frac{|x-y|^{2}}{\varepsilon}\right)+2\delta\Lambda_{i}N.$
If $u_{j}(x,t)\leq v_{j}(y,s)$, then, since $s\mapsto|s|^{p_{i}-1}s$ is nondecreasing,
$\nu(u_{i}-v_{i})-e^{(p_{i}-1)\nu
t}(|u_{j}|^{p_{i}-1}u_{j}-|v_{j}|^{p_{i}-1}v_{j})\geq\nu(u_{i}-v_{i})\geq\nu\frac{\theta}{2}>0.$
(5.6)
On the other hand, if $u_{j}(x,t)>v_{j}(y,s)$, it follows from the mean value
theorem and (5.2) that
$\displaystyle e^{(p_{i}-1)\nu
t}(|u_{j}|^{p_{i}-1}u_{j}-|v_{j}|^{p_{i}-1}v_{j})$
$\displaystyle=e^{(p_{i}-1)\nu
t}\frac{|u_{j}|^{p_{i}-1}u_{j}-|v_{j}|^{p_{i}-1}v_{j}}{u_{j}-v_{j}}(u_{j}-v_{j})$
(5.7) $\displaystyle\leq R^{p_{i}-1}(u_{j}-v_{j}).$
It follows from (5.6) and (5.7) that
$\nu(u_{i}-v_{i})-e^{(p_{i}-1)\nu
t}(|u_{j}|^{p_{i}-1}u_{j}-|v_{j}|^{p_{i}-1}v_{j})\geq\nu(u_{i}-v_{i})-R^{p_{i}-1}(u_{j}-v_{j}).$
Choose $\nu>0$ so large that
$\nu>\max\left\\{R^{p_{1}-1},R^{p_{2}-1}\right\\}$
to get
$\nu(u_{i}-v_{i})-R^{p_{i}-1}(u_{j}-v_{j})\geq(\nu-R^{p_{i}-1})(u_{i}-v_{i})\geq(\nu-R^{p_{i}-1})\frac{\theta}{2}>0.$
(5.8)
Summarizing (5.4)–(5.6) and (5.8), we obtain
$\frac{\mu}{(T-t)^{2}}+(\nu-R^{p_{i}-1})\frac{\theta}{2}\leq\omega_{F_{i}}\left(|x-y|+\frac{|x-y|^{2}}{\varepsilon}\right)+2\delta\Lambda_{i}N.$
Dropping the first term on the left-hand side and then passing to the limit
$\varepsilon\to 0$, we finally obtain
$(\nu-R^{p_{i}-1})\frac{\theta}{2}\leq 2\delta\Lambda_{i}N.$
Taking $\delta>0$ sufficiently small, this yields a contradiction. ∎
## Appendix A Perron’s method
In this appendix, we state Perron’s method and give its proof. Let $T>0$ and
let $S$ be the set of all viscosity solutions of (1.1) in
$\bm{R}^{N}\times(0,T)$.
###### Lemma A.1.
Let $T>0$. Assume that $S\neq\emptyset$. For $i=1,2$, set
$u_{i}(x,t):=\sup\\{v_{i}(x,t)|\ v=(v_{1},v_{2})\in S\\},\quad
x\in\bm{R}^{N},t\in(0,T).$
If
$\sup_{K}|u_{i}|<\infty,\quad i=1,2$
for any compact subset $K\subset\bm{R}^{N}\times(0,T)$, then $u=(u_{1},u_{2})$
is a viscosity subsolution of (1.1).
###### Proof.
For $i=1,2$ and $\varphi\in C^{2,1}(\bm{R}^{N}\times(0,T))$, we assume that
$u_{i}^{*}-\varphi$ attains strict maximum at
$(x_{0},t_{0})\in\bm{R}^{N}\times(0,T)$. Choose $r>0$ so that
$[t_{0}-r,t_{0}+r]\subset(0,T)$. Then for any $\varepsilon>0$, we have
$u_{i}^{*}(x_{0},t_{0})=\lim_{\varepsilon\to 0}\sup_{\begin{subarray}{c}x\in
B(x_{0},\varepsilon)\\\
|t-t_{0}|<\varepsilon\end{subarray}}u_{i}(x,t)\leq\sup_{\begin{subarray}{c}x\in
B(x_{0},\varepsilon)\\\ |t-t_{0}|<\varepsilon\end{subarray}}u_{i}(x,t),$
where $u_{i}^{*}$ is defined in (2.1). For all $\tau>0$, there exist sequences
$x_{\tau,\varepsilon}\in B(x_{0},\varepsilon)$ and
$t_{\tau,\varepsilon}\in(t_{0}-r,t_{0}+r)$ such that
$\sup_{\begin{subarray}{c}x\in B(x_{0},\varepsilon)\\\
|t-t_{0}|<\varepsilon\end{subarray}}u_{i}(x,t)\leq
u_{i}(x_{\tau,\varepsilon},t_{\tau,\varepsilon})+\tau.$
For any $\tau>0$ there exists $\delta_{\tau}>0$ such that for all
$x\in\bm{R}^{N}$, $t\in(0,T)$, if $|x-x_{0}|+|t-t_{0}|<\delta_{\tau}$, then
$|\varphi(x,t)-\varphi(x_{0},t_{0})|<\tau$. For each $k=1,2,\dots$ set
$\varepsilon=\min\left\\{\frac{\delta_{1/k}}{2},\frac{1}{k},r\right\\}.$
There exist $x_{k}\in B_{r}(x_{0})$ and $t_{k}\in(t_{0}-r,t_{0}+r)$ such that
$\displaystyle x_{k}\to x_{0},\quad t_{k}\to t_{0}\quad\mathrm{as}\
k\to\infty,$ $\displaystyle
u_{i}^{*}(x_{0},t_{0})<u_{i}(x_{k},t_{k})+\frac{1}{k},\quad|\varphi(x_{k},t_{k})-\varphi(x_{0},t_{0})|<\frac{1}{k}.$
Moreover, by the definition of $u_{i}$, there exists $(u_{1}^{k},u_{2}^{k})\in
S$ such that
$u_{i}(x_{k},t_{k})<u_{i}^{k}(x_{k},t_{k})+\frac{1}{k}.$
Choose $(y_{k},s_{k})\in\overline{B}_{r}(x_{0})\times[t_{0}-r,t_{0}+r]$ so
that $(u_{i}^{k})^{*}-\varphi$ attains its maximum at $(y_{k},s_{k})$. Taking
a subsequence (still denoted by the same symbol), we see that as $k\to\infty$,
$y_{k}\to\hat{y}$, $s_{k}\to\hat{t}$ for some
$\hat{y}\in\overline{B}_{r}(x_{0})$ and $\hat{t}\in[t_{0}-r,t_{0}+r]$. We
have
$\displaystyle(u_{i}^{*}-\varphi)(x_{0},t_{0})$
$\displaystyle<(u_{i}^{*}-\varphi)(x_{k},t_{k})+\frac{3}{k}$ (A.1)
$\displaystyle\leq((u_{i}^{k})^{*}-\varphi)(x_{k},t_{k})+\frac{3}{k}$
$\displaystyle\leq((u_{i}^{k})^{*}-\varphi)(y_{k},s_{k})+\frac{3}{k}$
$\displaystyle\leq(u_{i}^{*}-\varphi)(y_{k},s_{k})+\frac{3}{k}.$
Since $u_{i}^{*}$ is upper semicontinuous, we see that
$(u_{i}^{*}-\varphi)(x_{0},t_{0})\leq(u_{i}^{*}-\varphi)(\hat{y},\hat{t}).$
On the other hand, $u_{i}^{*}-\varphi$ has a strict maximum at
$(x_{0},t_{0})$, hence $\hat{y}=x_{0}$ and $\hat{t}=t_{0}$. Therefore,
$y_{k}\in B_{r}(x_{0})$ and $s_{k}\in(t_{0}-r,t_{0}+r)$ for
sufficiently large $k$. In addition, using (A.1) again, we have
$\lim_{k\to\infty}(u_{i}^{k})^{*}(y_{k},s_{k})=u_{i}^{*}(x_{0},t_{0})$
and
$\displaystyle\limsup_{k\to\infty}(u_{j}^{k})^{*}(y_{k},s_{k})\leq\limsup_{k\to\infty}u_{j}^{*}(y_{k},s_{k})\leq
u_{j}^{*}(x_{0},t_{0}),$
where $j\neq i$. Consequently, we obtain
$\partial_{t}\varphi(y_{k},s_{k})+F_{i}(y_{k},D^{2}\varphi(y_{k},s_{k}))\leq|u_{j}^{*}(y_{k},s_{k})|^{p_{i}-1}u_{j}^{*}(y_{k},s_{k}),$
hence
$\partial_{t}\varphi(x_{0},t_{0})+F_{i}(x_{0},D^{2}\varphi(x_{0},t_{0}))\leq|u_{j}^{*}(x_{0},t_{0})|^{p_{i}-1}u_{j}^{*}(x_{0},t_{0}),$
which completes the proof. ∎
###### Proposition A.1.
Assume that
$\xi=(\xi_{1},\xi_{2})\in(L^{\infty}_{\mathrm{loc}}(\bm{R}^{N}\times(0,T)))^{2}$
is a viscosity subsolution of (1.1) and
$\eta=(\eta_{1},\eta_{2})\in(L^{\infty}_{\mathrm{loc}}(\bm{R}^{N}\times(0,T)))^{2}$
is a viscosity supersolution of (1.1) for some $T>0$. If $\xi$ and $\eta$
satisfy
$\xi_{1}\leq\eta_{1},\quad\xi_{2}\leq\eta_{2}\quad\mathrm{in}\
\bm{R}^{N}\times(0,T),$
then
$u_{i}(x,t):=\sup\\{v_{i}(x,t)|v=(v_{1},v_{2})\in S,\ \xi\leq v\leq\eta\\}$
is a viscosity solution of (1.1) in $\bm{R}^{N}\times(0,T)$. Here $\xi\leq
v\leq\eta$ means that $\xi_{i}\leq v_{i}\leq\eta_{i}$ in
$\bm{R}^{N}\times(0,T)$ for $i=1,2$.
###### Proof.
By Lemma A.1, $u=(u_{1},u_{2})$ is a viscosity subsolution to (1.1). Suppose,
contrary to our claim, that there exist $\varphi\in C^{2,1}$,
$(x_{0},t_{0})\in\bm{R}^{N}\times(0,T)$ and $i\in\\{1,2\\}$ such that
$u_{i*}-\varphi$ attains a strict minimum at $(x_{0},t_{0})$ with
$(u_{i*}-\varphi)(x_{0},t_{0})=0$, and that there exists $\theta>0$ such that
$\partial_{t}\varphi+F_{i}(x_{0},D^{2}\varphi)-|u_{j*}|^{p_{i}-1}u_{j*}<-\theta$
(A.2)
at $(x_{0},t_{0})$, where $j\neq i$ and $p_{1}=p$, $p_{2}=q$.
We first show that $\varphi(x_{0},t_{0})<\eta_{i*}(x_{0},t_{0})$. Indeed, we
have $\varphi\leq u_{i*}\leq\eta_{i*}$ and $u_{j*}\leq\eta_{j*}$, so if
$\varphi(x_{0},t_{0})=\eta_{i*}(x_{0},t_{0})$, then $\eta_{i*}-\varphi$
attains a minimum at $(x_{0},t_{0})$, and the definition of a viscosity
supersolution yields
$\partial_{t}\varphi(x_{0},t_{0})+F_{i}(x_{0},D^{2}\varphi(x_{0},t_{0}))\geq|\eta_{j*}|^{p_{i}-1}\eta_{j*}(x_{0},t_{0})\geq|u_{j*}|^{p_{i}-1}u_{j*}(x_{0},t_{0}).$
This contradicts (A.2).
For any $\rho>0$, there exists $\varepsilon_{\rho}>0$ such that
$\displaystyle u_{j*}(x_{0},t_{0})-\rho$
$\displaystyle=\sup_{\varepsilon>0}\inf_{\begin{subarray}{c}|x-x_{0}|<\varepsilon\\\
|t-t_{0}|<\varepsilon\end{subarray}}u_{j}(x,t)-\rho$
$\displaystyle<\inf_{\begin{subarray}{c}|x-x_{0}|<\varepsilon_{\rho}\\\
|t-t_{0}|<\varepsilon_{\rho}\end{subarray}}u_{j}(x,t)$ $\displaystyle\leq
u_{j}(x,t)$
for $|x-x_{0}|<\varepsilon_{\rho}$ and $|t-t_{0}|<\varepsilon_{\rho}$. By
using the mean value theorem, there exists $\hat{\theta}\in(0,1)$ such that
$\displaystyle|u_{j}|^{p_{i}-1}u_{j}$
$\displaystyle\geq|u_{j*}(x_{0},t_{0})-\rho|^{p_{i}-1}(u_{j*}(x_{0},t_{0})-\rho)$
$\displaystyle=|u_{j*}(x_{0},t_{0})|^{p_{i}-1}u_{j*}(x_{0},t_{0})$
$\displaystyle\quad-\rho
p_{i}|\hat{\theta}u_{j*}(x_{0},t_{0})+(1-\hat{\theta})(u_{j*}(x_{0},t_{0})-\rho)|^{p_{i}-1}.$
We can choose $\rho>0$ and $s_{0}>0$ so small that
$\rho
p_{i}|\hat{\theta}u_{j*}(x_{0},t_{0})+(1-\hat{\theta})(u_{j*}(x_{0},t_{0})-\rho)|^{p_{i}-1}<\frac{\theta}{4}$
and
$|\partial_{t}\varphi(x_{0},t_{0})+F_{i}(x_{0},D^{2}\varphi(x_{0},t_{0}))-\partial_{t}\varphi(x,t)-F_{i}(x,D^{2}\varphi(x,t))|<\frac{\theta}{4}$
for $|x-x_{0}|<s_{0}$ and $|t-t_{0}|<s_{0}$. This together with (A.2) and the
continuity of $\partial_{t}\varphi$ and $F_{i}(\cdot,D^{2}\varphi)$ implies
that
$\displaystyle-\theta$
$\displaystyle>\partial_{t}\varphi(x,t)+F_{i}(x,D^{2}\varphi(x,t))-\frac{\theta}{4}-|u_{j*}|^{p_{i}-1}u_{j*}(x,t)-\frac{\theta}{4}$
for $|x-x_{0}|<s_{0}$ and $|t-t_{0}|<s_{0}$. Therefore,
$\partial_{t}\varphi(x,t)+F_{i}(x,D^{2}\varphi(x,t))-|u_{j*}|^{p_{i}-1}u_{j*}(x,t)<-\frac{\theta}{2}.$
(A.3)
It is already shown that
$u_{i*}(x_{0},t_{0})=\varphi(x_{0},t_{0})<\eta_{i*}(x_{0},t_{0})$. Set
$3\hat{\tau}:=\eta_{i*}(x_{0},t_{0})-u_{i*}(x_{0},t_{0})>0.$
Since $\eta_{i*}$ is lower semicontinuous and $\varphi$ is continuous, we can
find $s_{1}\in(0,s_{0})$ such that for all $|x-x_{0}|<s_{1}$ and
$t\in(t_{0}-s_{1},t_{0}+s_{1})$,
$\eta_{i*}(x,t)-\varphi(x,t)>\eta_{i*}(x_{0},t_{0})-\varphi(x_{0},t_{0})-\hat{\tau}=2\hat{\tau}.$
Therefore $\varphi(x,t)+2\hat{\tau}<\eta_{i*}(x,t)$ in $D$, where
$D:=B_{s_{1}}(x_{0})\times(t_{0}-s_{1},t_{0}+s_{1}).$
On the other hand, since $u_{i*}-\varphi$ attains a strict minimum at
$(x_{0},t_{0})$ and $(u_{i*}-\varphi)(x_{0},t_{0})=0$, there exist
$\varepsilon\in(0,s_{1}/2)$ and $\tau_{0}\in(0,\hat{\tau})$ such that
$(u_{i*}-\varphi)(x,t)\geq\min_{(y,s)\in
A}\\{(u_{i*}-\varphi)(y,s)\\}>\tau_{0}\quad\mathrm{for\ all}\ (x,t)\in A.$
Here we have set
$A:=\left(\overline{B_{s_{1}/2+\varepsilon}(x_{0})}\setminus
B_{s_{1}/2-\varepsilon}(x_{0})\right)\times\left\\{t\mid\frac{s_{1}}{2}-\varepsilon\leq|t-t_{0}|\leq\frac{s_{1}}{2}+\varepsilon\right\\}.$
We now define $(w_{1},w_{2})$ by
$\displaystyle w_{i}(x,t)$
$\displaystyle:=\begin{cases}\max\\{u_{i}(x,t),\varphi(x,t)+\tau_{0}\\}\quad&\mathrm{in}\
D/2,\\\ u_{i}(x,t)\quad\mathrm{in}\
(\bm{R}^{N}\times(0,T))\setminus(D/2),\end{cases}$ $\displaystyle w_{j}(x,t)$
$\displaystyle:=u_{j}(x,t)\quad\mathrm{in}\ \bm{R}^{N}\times(0,T),$
where
$D/2:=B_{s_{1}/2}(x_{0})\times\left(t_{0}-\frac{s_{1}}{2},t_{0}+\frac{s_{1}}{2}\right).$
In what follows, we shall show that $(w_{1},w_{2})$ is a viscosity subsolution
to (1.1) in $\bm{R}^{N}\times(0,T)$ satisfying $\xi_{k}\leq w_{k}\leq\eta_{k}$
for $k=1,2$. It follows from the definition of $w_{i}$ that we have
$\xi_{i}\leq u_{i}\leq w_{i}$ in $\bm{R}^{N}\times(0,T)$. Since
$\varphi(x,t)+\tau_{0}\leq\eta_{i*}$ in $D$, we see that
$\varphi+\tau_{0}\leq\eta_{i}$ in $D$, hence $w_{k}\leq\eta_{k}$ in
$\bm{R}^{N}\times(0,T)$ for $k=1,2$. Consequently, for $k=1,2$, we obtain
$\xi_{k}\leq w_{k}\leq\eta_{k}\quad\mathrm{in}\ \bm{R}^{N}\times(0,T).$
We can find $n\in\\{1,2,\dots\\}$ sufficiently large so that
$\frac{1}{n}<\frac{s_{1}}{2}-\varepsilon\quad\mathrm{and}\quad\frac{1}{n}<\frac{\tau_{0}}{2}$
and there exist $x_{n}\in B_{1/n}(x_{0})$ and $t_{n}\in\bm{R}$ with
$|t_{0}-t_{n}|<1/n$ such that
$u_{i}(x_{n},t_{n})<u_{i*}(x_{0},t_{0})+\frac{1}{n}.$
Moreover, it follows that
$u_{i*}(x_{0},t_{0})+\frac{1}{n}<u_{i*}(x_{0},t_{0})+\frac{\tau_{0}}{2}=\varphi(x_{0},t_{0})+\frac{\tau_{0}}{2}<\varphi(x_{0},t_{0})+\tau_{0}.$
Note that $(x_{n},t_{n})\in D/2$.
In what follows, we shall prove that $(w_{1},w_{2})$ is a viscosity
subsolution to (1.1) in $\bm{R}^{N}\times(0,T)$. Let us take
$(\hat{x},\hat{t})\in\bm{R}^{N}\times(0,T)$ and $\psi\in
C^{2,1}(\bm{R}^{N}\times(0,T))$ arbitrarily.
We firstly assume that $w_{i}^{*}-\psi$ attains a local maximum at
$(\hat{x},\hat{t})$. Consider the first case
$w_{i}^{*}(\hat{x},\hat{t})=u_{i}^{*}(\hat{x},\hat{t})$. Then
$\displaystyle u_{i}^{*}(\hat{x},\hat{t})-\psi(\hat{x},\hat{t})$
$\displaystyle=w_{i}^{*}(\hat{x},\hat{t})-\psi(\hat{x},\hat{t})$
$\displaystyle\geq w_{i}^{*}(x,t)-\psi(x,t)$ $\displaystyle\geq
u_{i}^{*}(x,t)-\psi(x,t)$
in $\bm{R}^{N}\times(0,T)$. Thus, $u_{i}^{*}-\psi$ attains its maximum at
$(\hat{x},\hat{t})$. Moreover, since $(u_{1},u_{2})$ is a subsolution to (1.1)
in $\bm{R}^{N}\times(0,T)$ and $u_{j}\equiv w_{j}$, we have
$\partial_{t}\psi+F_{i}(\cdot,D^{2}\psi)\leq|u_{j}^{*}|^{p_{i}-1}u_{j}^{*}=|w_{j}^{*}|^{p_{i}-1}w_{j}^{*}$
at $(\hat{x},\hat{t})$.
We next consider the second case
$w_{i}^{*}(\hat{x},\hat{t})=(\varphi+\tau_{0})^{*}(\hat{x},\hat{t})=\varphi(\hat{x},\hat{t})+\tau_{0}$.
Note that $(\hat{x},\hat{t})\in D/2$. The same argument above implies that
$\varphi+\tau_{0}-\psi$ attains its maximum at $(\hat{x},\hat{t})$. Thus, we
see that
$\partial_{t}\varphi(\hat{x},\hat{t})=\partial_{t}\psi(\hat{x},\hat{t}),\quad
D\varphi(\hat{x},\hat{t})=D\psi(\hat{x},\hat{t}),\quad
D^{2}\varphi(\hat{x},\hat{t})\leq D^{2}\psi(\hat{x},\hat{t}).$
It follows from (1.8) and (A.3) that
$\displaystyle\partial_{t}\psi+F_{i}(\cdot,D^{2}\psi)$
$\displaystyle\leq\partial_{t}\varphi+F_{i}(\cdot,D^{2}\varphi)$
$\displaystyle\leq|{u_{j}}_{*}|^{p_{i}-1}{u_{j}}_{*}$
$\displaystyle\leq|{u_{j}}^{*}|^{p_{i}-1}{u_{j}}^{*}$
$\displaystyle=|{w_{j}}^{*}|^{p_{i}-1}{w_{j}}^{*}$
at $(\hat{x},\hat{t})$.
We secondly assume that $w_{j}^{*}-\psi$ attains a local maximum at
$(\hat{x},\hat{t})$. Since $w_{j}=u_{j}$ in $\bm{R}^{N}\times(0,T)$,
$u_{j}^{*}-\psi$ also attains its maximum at $(\hat{x},\hat{t})$. Therefore,
we obtain
$\displaystyle\partial_{t}\psi+F_{j}(\cdot,D^{2}\psi)\leq|u_{i}^{*}|^{p_{j}-1}u_{i}^{*}\leq|w_{i}^{*}|^{p_{j}-1}w_{i}^{*}$
at $(\hat{x},\hat{t})$.
Consequently, $(w_{1},w_{2})$ is a viscosity subsolution to (1.1) in
$\bm{R}^{N}\times(0,T)$. This contradicts the definition of $(u_{1},u_{2})$. ∎
###### Remark A.1.
Let $f_{1}$ and $f_{2}$ be real-valued functions defined on a subset of
$\bm{R}^{M}$ with $M\in\\{1,2,\dots\\}$. Then
$\max\\{f_{1},f_{2}\\}^{*}=\max\\{f_{1}^{*},f_{2}^{*}\\}$. This fact allows us
to split the argument into the cases $w_{i}^{*}=u_{i}^{*}$ and
$w_{i}^{*}=(\varphi+\tau_{0})^{*}$.
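For completeness, here is a one-line justification of this fact: since $f_{k}\leq\max\\{f_{1},f_{2}\\}$ for $k=1,2$, taking upper semicontinuous envelopes gives $f_{k}^{*}\leq\max\\{f_{1},f_{2}\\}^{*}$, whence $\max\\{f_{1}^{*},f_{2}^{*}\\}\leq\max\\{f_{1},f_{2}\\}^{*}$; conversely, $\max\\{f_{1}^{*},f_{2}^{*}\\}$ is upper semicontinuous and dominates $\max\\{f_{1},f_{2}\\}$, so it also dominates the upper envelope $\max\\{f_{1},f_{2}\\}^{*}$.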
## Acknowledgements
TK was partially supported by Grant-in-Aid for Early-Career Scientists JSPS
KAKENHI Grant Number 18K13436 and Tottori University of Environmental Studies
Grant-in-Aid for Special Research. RS was partially supported by Grant-in-Aid
for Early-Career Scientists JSPS KAKENHI Grant Number 18K13435 and funding
from Fukuoka University (Grant No. 205004).
## References
* [1] S. N. Armstrong and M. Trokhimtchouk, Long-time asymptotics for fully nonlinear homogeneous parabolic equations, Calc. Var. Partial Differential Equations 38 (2010), 521–540.
* [2] M. G. Crandall, P.-L. Lions, Quadratic growth of solutions of fully nonlinear second order equations in $R^{n}$, Differential Integral Equations 3 (1990), 601–616.
* [3] K. Deng, H. A. Levine, The role of critical exponents in blow-up theorems: the sequel, J. Math. Anal. Appl. 243 (2000), 85–126.
* [4] M. Escobedo and M. A. Herrero, Boundedness and blow up for a semilinear reaction-diffusion system, J. Differential Equations 89 (1991), 176–202.
* [5] Y. Fujishima, K. Ishige, Blowing up solutions for nonlinear parabolic systems with unequal elliptic operators, J. Dynam. Differential Equations 32 (2020), 1219–1231.
* [6] Y. Fujishima, K. Ishige, Initial traces and solvability of Cauchy problem to a semilinear parabolic system, J. Math. Soc. Japan 73 (2021), 1187–1219.
* [7] H. Fujita, On the blowing up of solutions of the Cauchy problem for $u_{t}=\bigtriangleup u+u^{1+\alpha}$, J. Fac. Sci. Univ. Tokyo Sect. I 13 109–124 (1966).
* [8] Y. Giga, Surface evolution equations, A level set approach, Monographs in Mathematics, 99 Birkhäuser Verlag, Basel, 2006.
* [9] Y. Huang and J. L. Vázquez, Large-time geometrical properties of solutions of the Barenblatt equation of elasto-plastic filtration, J. Differential Equations 252 (2012), 4229–4242.
* [10] C. Imbert and L. Silvestre, An introduction to fully nonlinear parabolic equations, An introduction to the Kähler-Ricci flow, Lecture Notes in Math. 2086 Springer, Cham, 2013, 7–88.
* [11] S. Kamin, L. A. Peletier, J. L. Vázquez, On the Barenblatt equation of elastoplastic filtration, Indiana Univ. Math. J. 40 (1991), 1333–1362.
* [12] S. Koike, A Beginner’s Guide to the Theory of Viscosity Solutions, MSJ memoir 13, Math. Soc. Japan, 2004.
* [13] N. V. Krylov, Sobolev and viscosity solutions for fully nonlinear elliptic and parabolic equations, Mathematical Surveys and Monographs, 233, American Mathematical Society, Providence, RI, 2018.
* [14] G. M. Lieberman, Second order parabolic differential equations, World Scientific Publishing Co., Inc., River Edge, NJ, 1996.
* [15] P. Meier, On the critical exponent for reaction-diffusion equations, Arch. Rational Mech. Anal. 109 (1990), 63–71.
* [16] R. Meneses and A. Quaas, Fujita type exponent for fully nonlinear parabolic equations and existence results, J. Math. Anal. Appl. 376 (2011), 514–527.
* [17] R. Meneses and A. Quaas, Existence and non-existence of global solutions for uniformly parabolic equations, J. Evol. Equ. 12 (2012), 943–955.
* [18] Y. Uda, The critical exponent for a weakly coupled system of the generalized Fujita type reaction-diffusion equations, Z. Angew. Math. Phys. 46 (1995), 366–383.
* [19] P. Quittner and Ph. Souplet, Superlinear parabolic problems, Blow-up, global existence and steady states, Second edition, Birkhäuser/Springer, Cham, 2019.
* [20] L. Wang, On the regularity theory of fully nonlinear parabolic equations: II, Comm. Pure Appl. Math. 45 (1992), 141–178.
# Memory Efficient Patch-based Training for INR-based GANs
Namwoo Lee12 Hyunsu Kim1 Gayoung Lee1 Sungjoo Yoo2 Yunjey Choi1
1NAVER AI Lab 2Seoul National University
This work was done during an internship at NAVER AI Lab.
###### Abstract
Recent studies have shown remarkable progress in GANs based on implicit neural
representation (INR) - an MLP that produces an RGB value given its (x, y)
coordinate. They represent an image as a continuous version of the underlying
2D signal instead of a 2D array of pixels, which opens new horizons for GAN
applications (e.g., zero-shot super-resolution, image outpainting). However,
training existing approaches require a heavy computational cost proportional
to the image resolution, since they compute an MLP operation for every (x, y)
coordinate. To alleviate this issue, we propose a multi-stage patch-based
training, a novel and scalable approach that can train INR-based GANs with a
flexible computational cost regardless of the image resolution. Specifically,
our method generates and discriminates patch by patch to learn the local
details of the image, and learns global structural information via a novel
reconstruction loss to enable efficient GAN training. We conduct experiments
on several benchmark datasets to demonstrate that our approach reduces the
GPU memory consumption of baseline models while maintaining FIDs at a reasonable level.
## 1 Introduction
Recent advances in Generative Adversarial Networks (GANs) [8, 11, 12] enable
realistic image synthesis and show practical and diverse applicability such as
image-to-image translation [10, 5, 6, 16, 14], 3d-aware image generation [4,
19, 20, 9], real image editing [1, 26, 13], etc. Typical GANs view images as
2D pixel arrays and build them using convolutional filters. However, thanks to
the success of NeRF in 3D modeling, it is also getting popular to view images
as a continuous function in GANs. Implicit neural representation (INR) [22,
7, 21, 3, 18, 25] is a popular method that uses a neural network to approximate
the continuous function. A number of recent studies including CIPS [2] and
INR-GAN [23] have proposed a model that combines the INR concept and GANs.
These INR-based GANs can naturally and easily do what was difficult in
convolutional GANs, such as partial patch generation, zero-shot super-
resolution, and image extrapolation.
Despite the advantages of INR-based GANs, it is difficult to train them
because they are hardware-intensive, requiring an amount of network inference
proportional to the image size. Unlike convolutional GANs [12, 17] which use
upsampling and convolutional filters, pure INR-based GANs need to infer each
coordinate of an image, so it consumes much more GPU memory. For example, CIPS
requires 4 times more GPU memory than StyleGAN2. Therefore, reducing
computation costs is an important research topic to practically use INR-based
GANs. INR-GAN reduces the costs in the generator by factorizing the parameters
and progressively growing the feature maps similar to StyleGAN2. However,
their method is still computationally expensive because it starts with a
feature map of large size $(64^{2})$ and requires generating the entire image
for the discriminator.
(a) Traditional INR-based generator (b) Multi-stage patch-based training
(Ours)
Figure 1: Traditional vs. Multi-stage patch-based training. (a) Training
existing INR-based GANs [23, 2] is computationally expensive as they require
performing an MLP operation $G$ on all (x, y) coordinates for full resolution
($16^{2}$ in the example). (b) Our proposed multi-stage patch-based training
enables efficient training of INR-based GANs by performing $G$ only on a
predetermined small number of (x, y) coordinates ($4^{2}$ in the example)
regardless of resolution. In the early stage (Stage 1), a coarse global image
is generated from the sparse grid, and in the later stages (Stage 2, 3), local
patches with fine details are generated from the dense grids. The local patch
generated in each later stage is regularized to match the corresponding region
in the image generated in the previous stage. In multi-stage patch-based
training, we omit the mapping network $F$ for brevity.
In this paper, we propose a method that can dramatically reduce the training
costs for INR-based GAN using multi-stage patch-based training. During
training, our method generates small patches ($32^{2}$) instead of entire
images, and the generated patches are fed to the discriminator. This patch-
wise training can save a lot of GPU memory, but since the discriminator only
sees the patches, it cannot give feedback on global structures. To solve this
problem, we propose a novel multi-stage training method and progressively
reduce the receptive field of each stage patch. Specifically, in the initial
stage, the target patch is coarsely and globally sampled from an image,
whereas in the final stage, the patch of equal size is densely and locally
sampled. Then, in order to transfer the knowledge about the global structure
of the previous stage to the current stage, we apply the consistency loss
between the current generated patches and the patches cropped from the
previously generated patches. By doing this, the final generator can generate
a globally consistent image while it is trained using only local patches. We
conduct extensive experiments with various datasets and show that our method
reduces the required size of GPU memory and training time effectively while
maintaining the quality of generated images comparable to the existing
methods.
## 2 Multi-stage patch-based training
We propose multi-stage patch-based training, which reduces the computational
cost for training INR-based GANs. We build upon the INR-based GAN [2] and keep
every other component except the training strategy and patch regularization,
including the adversarial loss and hyperparameters. The overall framework is
shown in Figure 1.
For efficient training, we aim to generate local patches instead of full
images (e.g. generating $64^{2}$ patches instead of creating $256^{2}$ images
reduces computational costs such as GPU memory to $\tfrac{1}{16}$ of the original).
However, it is known that the generator $G$ cannot learn the global structure
of an image by providing only small patches to the discriminator [17]. To
alleviate this problem, we adopt multi-stage training in which the generator
learns to produce a coarse full image in the early stage of training (Stage 1
in Figure 1b) and learns to generate local patches with fine details in the
later stages (Stage 2, 3 in Figure 1b).
Sparse-to-dense coordinate sampling. During training, we sample $(x,y)$
coordinates in a sparse-to-dense manner. We first define a set of integer
pixel coordinates grid as:
$\texttt{grid}\left(H,W,N\right)=\\{\left(\tfrac{H}{N}k_{1},\tfrac{W}{N}k_{2}\right)\mid
0\leq k_{1},k_{2}<N\\}$ (1)
where $H,W$ are the height and width of the training image resolution,
respectively (e.g. $256^{2}$), and $k_{1},k_{2}$ are integer values. A small $N$ gives
sparsely sampled coordinates, while a large $N$ gives densely sampled ones. In
the first stage of training, we set $N$ to $\tfrac{H}{4}$ to reduce the size
of the coordinate grid to $\tfrac{1}{16}$ of its full resolution (sparse
sampling). In the second and third stages of training, we set $N$ to
$\tfrac{H}{2}$ and $H$, respectively (dense sampling). We apply appropriate
random cropping to reduce the computational cost in the later stages.
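As a concrete illustration, below is a minimal PyTorch-style sketch of this sampling schedule (our own illustration, not the authors' released code; `make_grid` is our name for Eq. (1), while `rcrop` follows the random-crop operation of Eq. (3) below):

```python
import torch

def make_grid(H, W, N):
    """N x N lattice of pixel coordinates spanning an H x W image (Eq. (1))."""
    ys = torch.arange(N) * (H / N)  # H/N * k1 for k1 = 0..N-1
    xs = torch.arange(N) * (W / N)  # W/N * k2 for k2 = 0..N-1
    return torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (N, N, 2)

def rcrop(grid, size):
    """Random size x size crop of a coordinate grid (the rcrop of Eq. (3))."""
    n = grid.shape[0]
    i = torch.randint(0, n - size + 1, (1,)).item()
    j = torch.randint(0, n - size + 1, (1,)).item()
    return grid[i:i + size, j:j + size]

# Sparse-to-dense schedule for a 256 x 256 training resolution.
H = W = 256
stage_N = {1: H // 4, 2: H // 2, 3: H}                  # N_1, N_2, N_3
coords_stage1 = make_grid(H, W, stage_N[1])             # 64x64 coarse global grid
coords_stage3 = rcrop(make_grid(H, W, stage_N[3]), 64)  # 64x64 dense local patch
```

In stage 1 the sparse grid is used in full, while in the later stages the dense grid is randomly cropped, so the generator only evaluates a fixed number of coordinates per step regardless of resolution.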
Coarse-to-fine patch generation. We train the generators to produce coarse
global images in the early stage of training and local patches with fine
details in the later stages. Here, we denote the generator for each stage
$i\in\left\\{1,2,3\right\\}$ as $G_{i}$ for clarity. Our generator $G_{i}$
takes as input a random Gaussian vector $\textbf{z}\in\mathds{R}^{128}$ shared
across all pixels and pixel coordinates $\left(x,y\right)\in\left\\{0\dots
W-1\right\\}\times\left\\{0\dots H-1\right\\}$.
(a) Image-based (86GB) | (b) Patch-based (30GB) | (c) Ours (31GB)
Figure 2: Qualitative comparison with the baselines and our method. The
first/second/third row shows samples from FFHQ/LSUN Church/AFHQ, respectively.
The image-based model offers the best quality but requires much GPU memory
(86GB), whereas the patch-based model needs much less GPU memory (30GB) but
generates globally inconsistent images. Our method uses the comparable amount
of GPU memory (31GB) to the patch-based model, while producing much better
image quality.
The first stage generator $G_{1}$ produces a coarse global image $I_{1}$ by
performing an MLP operation for each $\left(x,y\right)$ coordinates, while
keeping random vector $\mathbf{z}$ fixed:
$\displaystyle
I_{1}=\left\\{G_{1}\left(x,y;\mathbf{z}\right)\mid\left(x,y\right)\in\texttt{grid}\left(H,W,\tfrac{H}{4}\right)\right\\}.$
(2)
We train $G_{1}$ with an adversarial loss [8] to generate images that are
indistinguishable from real images of low resolution. Note that unlike
traditional INR-based GANs [2, 23], our method sets $N$ to $\tfrac{H}{4}$
instead of $H$, which efficiently reduces GPU memory.
Unlike $G_{1}$, we train the generators $G_{2}$, $G_{3}$ to produce local
patches instead of full images. We use the generator trained in the previous
stage to initialize the generator in the later stage (i.e. initialize $G_{2}$
with the weights of $G_{1}$). This helps to distill the global representation
learned in the previous stage. The equation is similar to that of $G_{1}$, but
the (x, y) coordinates are densely sampled and randomly selected:
$\displaystyle
I_{i}=\left\\{G_{i}\left(x,y;\mathbf{z}\right)\mid\left(x,y\right)\in\texttt{rcrop}(\texttt{grid}\left(H,W,N_{i}\right))\right\\},$
(3)
where rcrop indicates a random crop operation. We set $N_{2}$ to
$\tfrac{H}{2}$ and $N_{3}$ to $H$ for $G_{2}$ and $G_{3}$, respectively. We
obtain a coordinate grid of $\tfrac{1}{4}$ size compared to full resolution
through the rcrop operation, and use the small grid to efficiently train the
generators to produce local patches.
Patch regularization. In order to maintain consistency between the currently
generated patch $I_{i}$ and the region cropped from the previously generated
image (or patch) $I_{i-1}$, we apply patch regularization:
$\mathcal{L}_{patch}=\mathbb{E}\left[{\lVert\texttt{resize}(I_{i},\tfrac{1}{2})-\texttt{crop}(I_{i-1})\lVert}_{2}\right],$
(4)
where $\texttt{resize}(\cdot,\tfrac{1}{2})$ reduces the size of image in half.
The proposed patch regularization is simple and helps to distill the global
structure learned from the previous stage to the current stage.
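A minimal sketch of this loss is given below (our own illustration, assuming bilinear downsampling for resize and a mean-squared form of the $L_{2}$ penalty; the helper `patch_reg_loss` and the `top, left` crop bookkeeping are hypothetical):

```python
import torch
import torch.nn.functional as F

def patch_reg_loss(patch_cur, out_prev, top, left):
    """L_patch of Eq. (4): match the downscaled current patch to the
    corresponding crop of the previous-stage output (mean-squared form).

    patch_cur: (B, 3, h, w) patch generated at the current stage i.
    out_prev:  (B, 3, H, W) image/patch generated at the previous stage i-1.
    top, left: position in out_prev corresponding to patch_cur (the
               previous grid is half as dense, so the same region spans
               half as many pixels there).
    """
    # resize(I_i, 1/2): halve the current patch.
    down = F.interpolate(patch_cur, scale_factor=0.5, mode="bilinear",
                         align_corners=False)
    # crop(I_{i-1}): matching region of the previous-stage output.
    ref = out_prev[:, :, top:top + down.shape[2], left:left + down.shape[3]]
    return ((down - ref) ** 2).mean()
```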
Method | FFHQ FID$\downarrow$ (5 days) | Church FID$\downarrow$ (6 days) | AFHQ FID$\downarrow$ (4 days) | GPU mem.$\downarrow$ | sec/iter$\downarrow$
---|---|---|---|---|---
Image-based | 8.51 | 6.42 | 10.00 | 86GB | 3.04
Patch-based | 41.65 | 18.48 | 39.39 | 30GB | 0.82
Ours | 24.38 | 10.08 | 17.13 | 31GB | 0.71
Table 1: Comparison on FID score and computational costs for each method.
While the patch-based method is more memory-efficient than the original image-based
method, it produces worse quality images in terms of the FID score. Our method
requires the same amount of GPU memory as the patch-based model, but produces
higher quality images. We also report the running time for each training
iteration.
## 3 Experiments
Our multi-stage patch-based method effectively reduces the required size of
GPU memory ($2.8\times$ lower) in training. In this section, we conduct
experiments on various benchmark datasets (FFHQ, LSUN Church, and AFHQ) to
verify the effectiveness of our method. All experiments are conducted at
$256\times 256$ scale with the $G_{3}$ generator, and we use the Fréchet
inception distance (FID) metric to show that our method still retains
comparable performance in image generation.
### 3.1 Baseline Models
Since CIPS [2] is one of the state-of-the-art INR-based GANs, we demonstrate
the applicability of our method to the CIPS model. To show the effectiveness
of our method, we compare our method with three baselines.
Image-based method is the original version of CIPS network. We do not change
any configurations from its paper.
Patch-based method is the patch-based training without our multi-stage
training and patch regularization. The network is trained with $4\times$
smaller patches and only the adversarial loss term.
Gradient Accumulation is the same as Image-based method except for the batch
size. To avoid the GPU memory limitation, some recent works [11, 12] may use
small batch size and accumulate gradients. The network weights are updated
once every several batches, whose combined size is equal to that of the original
batch size.
### 3.2 Main results
Figure 2 and Table 1 show the qualitative and quantitative results. For a fair
comparison, we trained all baselines with the same training time; 4, 5, 6 days
for AFHQ, FFHQ, and LSUN Church, respectively. We set the training time in
proportion to the size of the data. Gradient Accumulation method is excluded
from Table 1 because it needs $n\times$ more time if we want to use $n\times$
smaller batch size. Ours shows visually comparable quality to the
original CIPS network while requiring $2.8\times$ less GPU memory. Without
our multi-stage training, image quality deteriorates significantly in the
patch-based method. Our method needs only $3\%$ additional GPU memory but
shows significantly better image generation quality than the patch-based method
according to the FID score; FIDs improve by 17.27, 8.40, and 22.26 on FFHQ,
LSUN Church, and AFHQ, respectively. Since our method and the patch-based
model generate only part of an image, each training iteration takes
significantly less time than the image-based model, and we can run more
training iterations in the same amount of time. Note that our method is
slightly faster than the patch-based model because we can skip random cropping
for the first stage.
Stage 1 | Stage 2 | Stage 3
Figure 3: Samples of each training phase in our multi-stage training method.
In the first stage, coarse and global contours are generated, and in the later
stage, more and more details are added. The ability to produce globally
consistent images is transferred by our patch regularization loss.
### 3.3 Effect of patch regularization
In multi-stage patch-based training, we propose patch regularization which
matches the generated patches of different training phases as we’ve discussed
in Section 2. Figure 3 shows our regularization makes the network produce
consistent structure in all stages. Stage 1 shows blurry but structurally
meaningful images, and stage 3 shows high-fidelity images while maintaining
the structure of the early stages. Without this loss term, our network cannot
fully exploit the advantage of the multi-stage training.
Figure 4: Extrapolation on LSUN Church using our method. The pixels in out-of-
boundary locations are properly generated.
### 3.4 Extrapolation Results
In Figure 4, we show the results of extrapolation on LSUN Church using our
method. Thanks to the advantages of INR-based model, our method can generate
an image of a size not seen during training by simply feeding the targeted
coordinates.
## 4 Conclusion and Discussion
In this paper, we propose multi-stage patch-based training, a novel and
scalable approach that can train INR-based GANs with a flexible computational
cost regardless of the image resolution. We conducted experiments on several
benchmark datasets and demonstrated that our method contributes to reducing
the required size of GPU memory in training INR-based GAN models.
Our method also has some limitations. The proposed patch regularization might
be too restrictive, as it forces the patch generated in the current stage to
strongly match the image from the previous stage. Also, the performance of multi-
stage training for a specific dataset (i.e. FFHQ) could be further improved.
Improving the performance and devising more flexible regularization to extract
global structures would be meaningful future work.
Acknowledgements. The authors thank NAVER AI Lab researchers for constructive
discussion. All experiments were conducted on NAVER Smart Machine Learning
(NSML) platform [15, 24].
## References
* [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In CVPR, 2019.
* [2] Ivan Anokhin, Kirill Demochkin, Taras Khakhulin, Gleb Sterkin, Victor Lempitsky, and Denis Korzhenkov. Image generators with conditionally-independent pixel synthesis. arXiv preprint arXiv:2011.13775, 2020.
* [3] Matan Atzmon and Yaron Lipman. Sal: Sign agnostic learning of shapes from raw data. In CVPR, 2020.
* [4] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In CVPR, 2021.
* [5] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
* [6] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In CVPR, 2020.
* [7] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In ICCV, 2019.
* [8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NeurIPS, 2014.
* [9] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. ICLR, 2022.
* [10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
* [11] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
* [12] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In CVPR, 2020.
* [13] Hyunsu Kim, Yunjey Choi, Junho Kim, Sungjoo Yoo, and Youngjung Uh. Exploiting spatial dimensions of latent in gan for real-time image editing. In CVPR, 2021.
* [14] Hyunsu Kim, Ho Young Jhoo, Eunhyeok Park, and Sungjoo Yoo. Tag2pix: Line art colorization using text tag with secat and changing loss. In ICCV, 2019.
* [15] Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. Nsml: Meet the mlaas platform with a real-world case study. arXiv preprint arXiv:1810.09957, 2018.
* [16] Junho Kim, Minjae Kim, Hyeonwoo Kang, and Kwang Hee Lee. U-gat-it: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. In ICLR, 2020.
* [17] Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, and Hwann-Tzong Chen. COCO-GAN: generation by parts via conditional coordinating. In ICCV, 2019.
* [18] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
* [19] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In CVPR, 2021.
* [20] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. 2022.
* [21] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In CVPR, 2019.
* [22] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. 2020.
* [23] Ivan Skorokhodov, Savva Ignatyev, and Mohamed Elhoseiny. Adversarial generation of continuous images. CVPR, 2020.
* [24] Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Donghyun Kwak, Jung-Woo Ha, et al. Nsml: A machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902, 2017.
* [25] Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, and Jinwoo Shin. Generating videos with dynamics-aware implicit generative adversarial networks. 2022.
* [26] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain gan inversion for real image editing. In ECCV, 2020.
# Superradiant Superconductivity
G. Baskaran The Institute of Mathematical Sciences, C.I.T. Campus, Chennai
600 113, India
Perimeter Institute for Theoretical Physics, Waterloo, Ontario, Canada N2L 2Y5
###### Abstract
We suggest possibility of Dicke superradiance in superconductors. The
necessary 2-level atoms are identified with Anderson pseudo spins in k-space,
seeing a k-dependent self consistent mean field. A way to couple these 2-level
bose atoms to a macroscopically excited coherent boson mode and create a novel
nonequilibrium superradiant superconductivity (SRSC) is suggested. Our
coherence transfer mechanism offers a hope to realize transient
superconductivity, even at room temperatures, in the pseudo gap phase of
certain underdoped cuprates. Recent experiments are briefly discussed in the
light of our theory. Quantum entanglement, QCP and superfluorescence
properties follow.
Introduction Superconductivity is a remarkable macroscopic manifestation of
quantum mechanics. A rich physics and phenomenology, including Meissner and
Josephson effects are parts of superconductivity BCS . Dicke’s Superradiance
Dicke1954 is another macroscopic manifestation, exhibited by a collection of
2-level atoms interacting with a single boson mode. The coupled system can
develop quantum coherence, enhanced emission properties and complex dynamics.
Certain phenomena in NMR, ESR, optics and cold atoms are related to
superradiance.
In the present work we suggest a way to combine superconductivity and
superradiance. We call the resultant non-equilibrium state superradiant
superconductivity (SRSC). In our proposal a macroscopically occupied long
wavelength single boson mode interacts with a collection of independent
2-level atoms located in k-space and creates a Dicke superradiant situation,
under certain conditions. In this state certain deformation of Cooper pair
wave function is entangled with a coherent external bosonic mode.
Interaction of coherent electromagnetic radiation and ultrasound with
superconductors is a well-studied subject MicrowaveExpt ; Eliashberg ;
OwenScalapino ; KumarSinha ; McIntoshLindesay . Our proposal of SRSC may have
relevance to some known results. An exciting recent development is
experimental observation of transient superconductivity well above Tc, induced
by certain femtosecond laser pulses, in the pseudo gap phase of cuprates
liscLBCO ; liscYBCO .
In a pioneering theoretical work Eliashberg Eliashberg in 1970 showed that
microwave-induced quasiparticle redistribution self-consistently enhances gap
values and $J_{c}$. Works by Scalapino, Owen and Chang OwenScalapino also focused
on quasiparticle redistribution. In a later theory in 1994, McIntosh and
Lindesey McIntoshLindesay showed that stimulated emission and reabsorption of
photon by correlated electron pairs play a fundamental role in
superconductivity enhancement. This key insight is one of the triggers for our
proposal. Interestingly, in 1968, there was a theoretical suggestion
KumarSinha for photon induced room temperature superconductivity.
In what follows, we start with an ideal BCS superconductor and show how Dicke
superradiance emerges, when the wavelength of the macroscopically occupied
external single boson mode $\lambda\geq L$, the sample size $L$. Then we
discuss how our mechanism could generate transient superconductivity above Tc
and discuss recent experiments liscLBCO (see also note liscYBCO ) in the light
of our mechanism.
In our work we make the tacit assumption that there are suitable relaxation
processes involving quasiparticles and phonons that drain energy to the heat
bath efficiently to avoid heating. At the same time some energy gets pumped to
the electronic sub system to help reach a new non equilibrium coherent state
for a short time scale. It is the nature of non equilibrium coherent state
that we are after. To achieve this we assume that the coherent state of the
single boson mode is long lived and does not radiate away its energy. It
exchanges its quanta with the electron subsystem only and gets quantum
entangled. Ours is an equilibrium statistical mechanics approximation tailored
to get a glimpse of a remarkable non equilibrium situation.
Model. To develop our theory we follow Anderson’s pseudo spin formulation of
BCS theory PWApseudoSpin . It helps us to view BCS mean field eigen states as
a k-space lattice containing 2-level bose atoms and free fermions. Consider
time reversed single particle states $({\bf k}\uparrow,{\bf-k}\downarrow)$,
with empty state written as $|0\rangle_{\bf k}$. To generate complete Fock
space, we need only 4 states in each $({\bf k}\uparrow,{\bf-k}\downarrow)$ :
i) $|0\rangle_{\bf k}$, ii)
$c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow}|0\rangle_{\bf k}$, iii)
$c^{\dagger}_{k\uparrow}|0\rangle_{\bf k}$ and iv)
$c^{\dagger}_{-k\downarrow}|0\rangle_{\bf k}$. BCS interaction mixes only the
0 and 2-fermion states. Resulting ground and excited paired states are two
orthogonal states:
$|g\rangle_{k}\equiv(u_{k}+v_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow})|0\rangle_{k}$
and
$|e\rangle_{k}\equiv(u_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow}-v_{k})|0\rangle_{k}$.
We call this 2-level bosonic system an Anderson atom or A-atom. An A-atom
carries zero total momentum. Single fermion states
$c^{\dagger}_{k\uparrow}|0\rangle_{\bf k}$ and
$c^{\dagger}_{-k\downarrow}|0\rangle_{\bf k}$, in $({\bf
k}\uparrow,{\bf-k}\downarrow)$ remain unaffected by BCS interaction.
An A-atom close to the Fermi surface is special (see note note2 ). It is a coherent
superposition of 0- and 2-electron states. The consequent nonzero value of the
product $u_{k}v_{k}$ around the Fermi surface quantifies superconductivity.
BCS mean field Hamiltonian has the familiar form:
$H_{mf}=\sum\varepsilon_{k}\alpha^{\dagger}_{k\sigma}\alpha_{k\sigma},$ (1)
where, Bogoliubov quasi particle operators $\alpha^{\dagger}_{k\sigma}\equiv
u_{k}c^{\dagger}_{k\sigma}+\sigma v_{k}c_{-k-\sigma}$ and
$\alpha_{k\sigma}\equiv u_{k}^{*}c_{k\sigma}+\sigma
v_{k}^{*}c^{\dagger}_{-k-\sigma}$. The quasiparticle energy
$\varepsilon_{k}\equiv\sqrt{(\frac{\hbar^{2}k^{2}}{2m}-\mu)^{2}+\Delta_{k}^{2}}$.
The complete set of BCS mean field eigenstates can be written as a product over
all pair subspaces $({\bf k}\uparrow,{\bf-k}\downarrow)$, each containing either an
A-atom in the ground or excited state or a single up-spin or down-spin fermion
eigen states. BCS vacuum,
$|BCS\rangle=\prod_{k}(u_{k}+v_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow})|0\rangle$
is annihilated by the annihilation operator, $\alpha_{q\sigma}|BCS\rangle=0$.
Bogoliubov creation operator, while acting on the BCS ground state, removes an
A-atom and replaces it by a fermion:
$\alpha^{\dagger}_{q\uparrow}|BCS\rangle=c^{\dagger}_{q\uparrow}\prod_{k\neq
q}(u_{k}+v_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow})|0\rangle$ and
$\alpha^{\dagger}_{-q\downarrow}|BCS\rangle=c^{\dagger}_{-q\downarrow}\prod_{k\neq
q}(u_{k}+v_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow})|0\rangle$.
What is the operator that excites an A-atom? The pair of Bogoliubov
quasiparticle operators
$\alpha^{\dagger}_{q\uparrow}\alpha^{\dagger}_{-q\downarrow}$, with total
momentum zero and total spin projection zero, acting within $({\bf
q}\uparrow,{\bf-q}\downarrow)$, excites an A-atom:
$\alpha^{\dagger}_{q\uparrow}\alpha^{\dagger}_{-q\downarrow}|BCS\rangle=(u_{q}c^{\dagger}_{q\uparrow}c^{\dagger}_{-q\downarrow}-v_{q})\prod_{k\neq
q}(u_{k}+v_{k}c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow})|0\rangle$.
The 2-level (bosonic) A-atom subspace can be studied using pseudo spin (Pauli)
operators. Pseudo spin operators (see note note3 ) are defined as,
$\sigma^{z}_{k}\equiv(1-\alpha^{\dagger}_{k\uparrow}\alpha_{k\uparrow}-\alpha^{\dagger}_{-k\downarrow}\alpha_{-k\downarrow}),\leavevmode\nobreak\
\sigma^{+}_{k}\equiv\alpha^{\dagger}_{k\uparrow}\alpha^{\dagger}_{-k\downarrow}$
and $\sigma^{-}_{k}\equiv\alpha_{-k\downarrow}\alpha_{k\uparrow}$.
The BCS mean field Hamiltonian (equation 1) in the boson subspace takes a
suggestive form :
$H_{mf}=-\sum\varepsilon_{k}\sigma_{k}^{z},$ (2)
It describes a collection of non-interacting pseudo spins in the presence of
k-dependent magnetic field of magnitude $\varepsilon_{k}$. Energy level
separation of a 2-level A-atom is 2$\varepsilon_{k}$. Notice that long range
interaction in k-space in the BCS Hamiltonian leads to free spins in the mean
field description, but in the presence of a self consistent mean field of
magnitude $\varepsilon_{k}$ in k-space. In our pseudo spin basis the BCS ground
state is a fully aligned ferromagnet, while in Anderson's basis the pseudo spins
twist to form a Bloch wall across the Fermi surface in k-space (see note note2
).
Now we consider a simple way to couple A-atoms selectively to a single
external boson mode, with creation and annihilation operators
($b^{\dagger},b$). Interaction of electrons with this mode, in the long wave
length (zero momentum transfer) limit, $\lambda>>L$, where $L$ is the size of
the sample, has a simple form:
$H_{int}=\frac{1}{{\sqrt{N}}}\sum
B_{k}(c^{\dagger}_{k\sigma}c_{k\sigma}+H.c.)(b+b^{\dagger})$ (3)
Here $B_{k}$ is a momentum-dependent coupling constant and $N\sim$ number of
electrons in the interaction region. In terms of Bogoliubov quasiparticle
operators,
$\displaystyle H_{int}=\frac{1}{{\sqrt{N}}}\sum(B_{k}u_{k}^{2}-B_{-k}v_{k}^{2})\alpha^{\dagger}_{k\sigma}\alpha_{k\sigma}(b+b^{\dagger})$ (4)
$\displaystyle+\frac{1}{{\sqrt{N}}}\sum(B_{k}+B_{-k})u_{k}v_{k}(\alpha^{\dagger}_{k\uparrow}\alpha^{\dagger}_{-k\downarrow}+H.c.)(b+b^{\dagger})$
We ignore non-resonant terms using the rotating wave approximation.
Furthermore, quasiparticle number operator terms can be taken care of using a
Hartree-type approximation. We are left with the important pair annihilation
and creation terms:
$H_{int}\approx\frac{1}{{\sqrt{N}}}\sum
B_{k}u_{k}v_{k}(\alpha^{\dagger}_{k\uparrow}\alpha^{\dagger}_{-k\downarrow}b+\alpha_{-k\downarrow}\alpha_{k\uparrow}b^{\dagger})$
(5)
In terms of pseudo spin operators it takes the form
$H_{int}\approx\frac{1}{{\sqrt{N}}}\sum
B_{k}u_{k}v_{k}(\sigma^{+}_{k}b+\sigma^{-}_{k}b^{\dagger})$. Thus the final
form of the Hamiltonian of the superconductor interacting with a single boson
mode is:
$H=\hbar\omega_{0}(b^{\dagger}b+\frac{1}{2})-\sum\varepsilon_{k}\sigma_{k}^{z}+\frac{1}{{\sqrt{N}}}\sum\lambda_{k}(\sigma_{k}^{+}b+\sigma_{k}^{-}b^{\dagger})$
(6)
where $\lambda_{k}\equiv(B_{k}+B_{-k})u_{k}v_{k}$. Equation 6 is a generalized
Dicke Hamiltonian Dicke1954 , where the 2-level atoms in k-space have a
k-dependent energy level separation $2\varepsilon_{k}$. The sum $N_{t}\equiv
N^{*}+N_{boson}$ of the number of excited A-atoms $N^{*}$ and the number of
boson quanta $N_{boson}$ commutes with the Hamiltonian (equation 6).
Finding a Dicke like Hamiltonian is a key result of our paper, from which
several consequences follow.
Notice that the A-atom-boson mode coupling
$\lambda_{k}\equiv(B_{k}+B_{-k})u_{k}v_{k}$ is appreciable only in regions
where the product $u_{k}v_{k}$ is appreciable. That is, the possibility of
superradiance is intimately connected with the pairing phenomenon. The matrix
element $B_{k}=-B_{-k}$ for electron-photon coupling, while $B_{k}=+B_{-k}$ for
electron-acoustic-phonon coupling BCS . Thus in simple geometries,
$\lambda_{k}=0$ for electron-electromagnetic radiation coupling.
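As a quick numerical illustration of this point (our own sketch, not from the paper), the standard BCS coherence factors give $u_{k}v_{k}=\Delta_{k}/(2\varepsilon_{k})$, which is peaked in a shell of width $\sim\Delta$ around the Fermi surface:

```python
import numpy as np

# BCS coherence factors for a constant s-wave gap Delta, illustrating that
# u_k v_k = Delta / (2 eps_k) -- and hence lambda_k ~ (B_k + B_{-k}) u_k v_k --
# is concentrated near the Fermi surface.
Delta = 1.0                       # gap, in units of Delta_0
xi = np.linspace(-10, 10, 2001)   # xi_k = hbar^2 k^2 / 2m - mu
eps = np.sqrt(xi**2 + Delta**2)   # quasiparticle energy eps_k
v2 = 0.5 * (1 - xi / eps)         # |v_k|^2
u2 = 0.5 * (1 + xi / eps)         # |u_k|^2
uv = np.sqrt(u2 * v2)             # u_k v_k = Delta / (2 eps_k)

assert np.allclose(uv, Delta / (2 * eps))
print("u_k v_k at the Fermi surface (xi = 0):", uv[xi.size // 2])  # = 0.5
print("u_k v_k at xi = 5 Delta:", uv[np.argmin(np.abs(xi - 5))])   # ~ 0.1
```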
Our restriction to the bosonic subspace and our effective Hamiltonian is a good
low-temperature approximation because i) $k_{B}T\ll\Delta_{0}$, the minimum
superconducting gap, so the density of thermal fermionic quasiparticles is
small, and ii) when $\lambda\gg L$, the boson mode excites only the A-atoms.
More importantly, we have ignored the back reaction, i.e., the self-consistent
modification of $u_{k}$, $v_{k}$ or the gap function $\Delta_{k}$ arising from
interaction with the boson mode. We will see later that this self-consistent
modification reinforces superradiant superconductivity.
To illustrate superradiance, consider a simple Dicke Hamiltonian, with
identical two level atoms in resonance with the boson mode,
$H_{D}=\hbar\omega_{0}(b^{\dagger}b+\frac{1}{2})-\frac{\hbar\omega_{0}}{2}\sum_{i}\sigma^{z}_{i}+\frac{g}{\sqrt{N}}\sum_{i}(b^{\dagger}\sigma^{-}_{i}+b\sigma^{+}_{i})$.
For every value of $N_{t}$ there is a unique ground state, a nodeless in-phase
superposition of degenerate states with real positive coefficients. The ground
state is a superradiant state capable of undergoing spontaneous emission
with an emission strength that scales as $N_{t}^{2}$.
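The following minimal numerical sketch (our own illustration, not from the paper) diagonalizes the simple Dicke Hamiltonian $H_{D}$ above in a fixed-$N_{t}$ sector built on the symmetric atomic states, and verifies the nodeless ground state:

```python
import numpy as np

def dicke_sector_ground_state(N, Nt, omega0=1.0, g=0.2):
    """Ground state of the resonant Dicke Hamiltonian H_D at fixed N_t.

    Basis: |k excited atoms, Nt - k boson quanta>, k = 0..min(N, Nt), using
    symmetric (collective) atomic states, for which the collective
    de-excitation matrix element is sqrt(k (N - k + 1)). We work in the gauge
    b -> -b, which makes the off-diagonal couplings negative, so the ground
    state is componentwise positive (nodeless) by Perron-Frobenius.
    """
    dim = min(N, Nt) + 1
    H = np.zeros((dim, dim))
    for k in range(dim):
        n = Nt - k                                   # boson occupation
        H[k, k] = omega0 * (n + 0.5) + 0.5 * omega0 * (2 * k - N)
        if k > 0:                                    # |k, n> <-> |k-1, n+1>
            t = (g / np.sqrt(N)) * np.sqrt((n + 1) * k * (N - k + 1))
            H[k, k - 1] = H[k - 1, k] = -t
    evals, evecs = np.linalg.eigh(H)
    return evals[0], evecs[:, 0]

E0, psi0 = dicke_sector_ground_state(N=20, Nt=10)
psi0 *= np.sign(psi0.sum())  # fix the overall phase
print("ground energy:", E0, "| nodeless:", bool(np.all(psi0 > 0)))
```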
For our purpose consider a superconductor at $T=0$ in the presence of a
macroscopically occupied single boson mode $|N_{b}\rangle$ and allow the
coupled system to evolve in time. When $\hbar\omega_{0}$ starts increasing
towards $2\Delta_{0}$, the minimum two-quasiparticle gap, a set of
k-points which are near resonance with the energy of a boson quantum actively
participates in superradiance and modifies the ground state wave function. The net
density of these active A-atoms depends on the quasiparticle density of states and
the coupling constant $\lambda_{k}$.
The Dicke Hamiltonian, equation 6, admits a Bethe Ansatz solution BetheAnsatz for
k-independent $\lambda_{k}=\lambda_{0}$. Using the approximation
$\lambda_{k}\approx\lambda_{0}$ for our set of near-resonant A-atoms, our
ground state wave function has the Bethe Ansatz form:
$\displaystyle|SRSC\rangle$ $\displaystyle\sim$
$\displaystyle(b^{\dagger}+\sum_{k}w_{k}\leavevmode\nobreak\
\alpha^{\dagger}_{k\uparrow}\alpha^{\dagger}_{-k\downarrow})^{N_{b}}|BCS\rangle\otimes|0_{b}\rangle$
(7) $\displaystyle\equiv$
$\displaystyle(b^{\dagger}+\sum_{k}w_{k}\leavevmode\nobreak\
\sigma^{+}_{k})^{N_{b}}|BCS\rangle\otimes|0_{b}\rangle$
Here $|0_{b}\rangle$ is the vacuum of the single boson mode. Superradiance
mixes (hybridizes or entangles) two nearly degenerate neutral modes. One is
the single-mode external Bose oscillator. The second is a coherent sum of zero
momentum Bogoliubov pair excitations,
$\sum_{k}w_{k}\alpha^{\dagger}_{k\uparrow}\alpha^{\dagger}_{-k\downarrow}$; or
equivalently an Anderson pseudo spin wave packet mode in k-space. It is easy
to show that the second boson mode is a dynamic deformation of the Cooper pair
wave function (in the relative coordinate of the two electrons, characterized
by $w_{k}$). The center of mass degree of freedom of the Cooper pairs, and hence
the phase of the superconducting order parameter, is not directly influenced by
the superradiance phenomenon.
The superradiance effect in an s-wave superconductor is maximal when the boson
energy $\hbar\omega_{0}$ passes through the minimum gap $2\Delta_{0}$, where the
quasiparticle density of states has a maximum. If a superconductor supports
excited Cooper bound states below $2\Delta_{0}$, then, depending on the symmetry of
the excited states, there will be enhanced superradiance around these bound
state energies.
It follows from our work that one should be able to see i) a well known
quantum phase transition HeppLieb as a function of $\omega_{0}$ and $\lambda$, ii)
enhanced quantum entanglement TobiasEntanglement around the transition point,
and iii) superfluorescence Superfluorescence .
Application to Pseudogap Phase of Cuprates Having theoretically suggested the
possibility of Dicke superradiance in a BCS superconductor, we now address the
recent experimental observations liscLBCO ; liscYBCO of femtosecond laser
induced transient superconductivity in the pseudogap normal state of some
cuprates. In the two experiments two different Cu-O bond stretching modes are
resonantly excited by an 80 meV ($\sim$ 20 THz) femtosecond laser. In view of
the resonance, the laser pumps its energy and coherence into the infrared phonon
mode. The electronic subsystem receives its energy and coherence from the
infrared mode. We have a phonon-photon polariton Hamiltonian:
$H=\hbar\omega_{0}a^{\dagger}a+\hbar\omega_{0}b^{\dagger}b+g(a^{\dagger}b+H.c.)$
(8)
Here $(a^{\dagger},a)$ and $(b^{\dagger},b)$ are the photon and phonon
operators respectively. As wavelengths of 20 THz infrared radiation and the
optic modes is $\sim 150$ microns, we will approximate the wavelengths by size
of the sample. The phonon optical mode coupling ‘g’ is of the order of 10 meV.
This coupling will lead to interesting Rabi oscillation between two modes,
after the femtosecond photon pulse impinges on the superconducting crystal.
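To make the Rabi time scale explicit (a back-of-the-envelope estimate on our part, not a number taken from the cited experiments): at resonance, the Hamiltonian (8) is diagonalized by the normal modes $c_{\pm}=(a\pm b)/\sqrt{2}$,
$H=(\hbar\omega_{0}+g)c_{+}^{\dagger}c_{+}+(\hbar\omega_{0}-g)c_{-}^{\dagger}c_{-},$
so a quantum initially in the photon mode oscillates between the two modes with period $T_{R}=2\pi\hbar/2g=\pi\hbar/g$. For $g\sim 10$ meV this gives $T_{R}\sim\pi\times(658\ {\rm meV\,fs})/(10\ {\rm meV})\approx 0.2$ ps.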
It is safe to assume that the Cu-O stretching lattice modes in both experiments
modulate i) the site energy and ii) the hopping matrix element $t$ of the tight
binding electronic Hamiltonian for cuprates. To leading order in the normal
coordinate displacement $u$ of this mode we have $t=t_{0}+\frac{\partial
t}{\partial u}\big|_{0}u\equiv t_{0}+\alpha_{t}(b^{\dagger}+b)$, where $t_{0}$
is the value of the hopping integral in the absence of resonant excitation of
the Cu-O stretching infrared mode.
As far as the pseudogap normal state of cuprates is concerned, there is
experimental evidence Ong ; RamanPseudoGap ; STM and theoretical support
BZAEmeryKivelson that this metallic state has a substantial pairing amplitude
and strong phase fluctuations. It is well described as a 2D vortex liquid
above a Kosterlitz-Thouless transition point. This is borne out by the Nernst
effect Ong and the Raman effect RamanPseudoGap , among other experiments. In
what follows we propose an effective Hamiltonian that expresses the fact that
the pseudogap phase supports local superconductivity. We assume the presence
of equal densities of positive and negative vortices that are quasi-static and
spatially random; that is, the thermal vortices move slowly compared to the
time scale of interest to us.
Our effective Hamiltonian for pseudogap normal state has the form:
$H_{\rm normal}=\sum_{m\sigma}\varepsilon_{m}\alpha^{\dagger}_{m\sigma}\alpha_{m\sigma}$
(9)
The index $m$ denotes the eigenmodes of the Bogoliubov quasiparticle operator
$\alpha^{\dagger}_{m\sigma}\equiv u_{m}c^{\dagger}_{m\sigma}+\sigma
v_{m}c_{-m-\sigma}$. In view of the presence of disordered vortices in the
background, the single particle eigenmodes are not Bloch states; some are
localized and the rest are extended. Our conclusions hold even for the d-wave
symmetry situation in cuprates.
In the absence of an external magnetic field we have pairs of degenerate single
particle eigenstates (m$\uparrow$, $-$m$\downarrow$), connected by time
reversal symmetry. As in the BCS case, a pair subspace (m$\uparrow$,
$-$m$\downarrow$) is occupied by an A-atom in its ground or excited state, or
by an unpaired fermion. By the same arguments as in the BCS case, the bosonic
excitation sector in the normal state pseudogap phase, coupled to the single
phonon mode, gives us a Dicke type pseudo spin Hamiltonian:
$H=\hbar\omega_{0}(b^{\dagger}b+\frac{1}{2})+\sum_{m}\varepsilon_{m}\sigma^{z}_{m}+\frac{1}{\sqrt{N}}\sum_{m}\lambda_{m}(\sigma^{+}_{m}b+\sigma^{-}_{m}b^{\dagger})$
(10)
Here the operator
$\alpha^{\dagger}_{m\uparrow}\alpha^{\dagger}_{{-m}\downarrow}\equiv\sigma^{+}_{m}$
excites an A-atom.
In terms of A-atoms and fermionic quasiparticles there is a key difference
between a BCS superconductor and the cuprate superconductors above $T_{c}$. In
a standard BCS superconductor, the pair subspace $({\bf
k}\uparrow,{\bf-k}\downarrow)$ is dominated by fermionic quasiparticles, with a
nearly vanishing density of A-atoms. In the pseudogap, by contrast, which
exists over a wide temperature range above $T_{c}$, the pair subspace
(m$\uparrow$, $-$m$\downarrow$) is dominated by ground and excited A-atoms,
with a nearly vanishing density of fermions. This makes the pseudogap phase
special and susceptible to transient superconductivity.
To understand how superradiance induces transient superconductivity in the
pseudogap phase, we have to go beyond our model Hamiltonian (equation 10) and
consider the self-consistent modification of the $u_{m}$'s and $v_{m}$'s. We
offer a feedback mechanism, which qualitatively works as follows. A subspace
(m$\uparrow$, $-$m$\downarrow$) contains A-atoms, in ground or excited states,
with high probability, and fermions with low probability. A fraction of the
excited A-atoms are in resonance with the macroscopically occupied phonon
mode. In view of the macroscopic occupancy, the boson mode stimulates a near
resonant excited A-atom to emit a boson and reach its ground state. In the
process we create an excess population of ground state A-atoms. An increase in
the density of ground state A-atoms means increased superconducting
correlations (an increase in the magnitude of $u_{m}v_{m}$) and consequently an
increase in the superradiance interaction. Thus there is a positive feedback,
which could establish a transient long range superconducting order. As the
pseudogap phase extends to room temperature in some of the underdoped
cuprates, our mechanism offers a possibility to observe room temperature
transient superconductivity. This is one more incentive for the authors of
reference liscYBCO to confirm their exciting observations.
To establish superconductivity in the normal state of a Kosterlitz-Thouless
superconductor, all we need is a spatial reorganization of the random thermal
vortices into either i) a fluid of bound vortex-antivortex pairs, as in the
Kosterlitz-Thouless phase, or ii) an ordered lattice of positive and negative
vortices (see note note5 ). The increased pairing correlation from
superradiance increases the core energy of the thermal vortices and
correspondingly the vortex pair binding energy. The resulting increase in the
population of paired vortices helps create transient superconductivity.
In addition to superconductors, it will be interesting to look for superradiant
superfluidity in pairing dominated fermion systems: superfluid He3, cold
atoms, heavy nuclei and nuclear matter.
Acknowledgement I thank N Kumar, K P Sinha, R K Shankar and R Nityananda for
early discussions on photoinduced superconductivity; P W Anderson and N P Ong
for an encouraging discussion; N P Ong for bringing to my attention reference
RamanPseudoGap ; B. Keimer for an encouraging information liscYBCO ; DAE,
India for a Raja Ramanna Fellowship. This research was supported by Perimeter
Institute for Theoretical Physics.
## References
* (1) J. Bardeen, L. N. Cooper and J. R. Schrieffer, Phys. Rev., 108, 1175 (1957); Introduction to Superconductivity, M. Tinkham (Dover, NY 2004)
* (2) R. H. Dicke, Phys. Rev., 93, 99 (1954); Super-radiance, M.G. Benedict et al., (IOP Publishing, Bristol 1996)
* (3) P. W. Anderson and A. H. Dayem, Phys. Rev. Lett., 13, 195 (1964); A. F. G. Wyatt et al., Phys. Rev. Lett., 16, 1166 (1966); A. H. Dayem, J. J. Wiegand, Phys. Rev., 155, 419 (1967); R. Escudero and H.J.T. Smith, Phys. Rev. B31, 2725 (1985); S. I. Vedeneev, D. K. Maude, and J. M. Byrne, Phys. Rev. B78, 052509 (2008)
* (4) G. M. Eliashberg, JETP Letters, 11, 114 (1970); B. I. Ivlev and G. M. Eliashberg, JETP Letters 13, 333 (1971)
* (5) C.S. Owen and D.J. Scalapino, Phys. Rev Lett. 25, 1559 (1972)
* (6) N Kumar and K P Sinha, Phys. Rev., 174, 482 (1968)
* (7) D. R. McIntosh and J. Lindesay, Phys. Rev., B50, 15852 (1994)
* (8) D. Fausti et al., Science, 331, 6014 (2011)
* (9) S. Kaiser et al., arXiv:1205.466 v2. According to version 3, in view of a calibration error, part of the claim needs to be verified; experiments are being repeated. However, the signal for transient superconductivity, seen as an appearance of the c-axis plasma edge, remains robust (B. Keimer, private communication)
* (10) P.W. Anderson, Phys. Rev.,112, 1900 (1958)
* (11) In circuit QED, a collective degree of freedom of a Josephson junction is called Josephson atom (see for example, M. Devoret, S. Girvin and R. Schoelkopf, Ann. Phys. (Leipzig), 16, 767 .(2007)). A-atom is different - it fills k-space and is a bulk property of the superconductor.
* (12) Our pseudo spin ${\vec{\sigma}}_{\bf k}$ is related to Anderson’s pseudo spin, $\tau^{z}_{k}\equiv(1-c^{\dagger}_{k\uparrow}c_{k\uparrow}-c^{\dagger}_{-k\downarrow}c_{-k\downarrow}),\leavevmode\nobreak\ \tau^{+}_{k}\equiv c^{\dagger}_{k\uparrow}c^{\dagger}_{-k\downarrow}$ and $\tau^{-}_{k}\equiv c_{-k\downarrow}c_{k\uparrow}$ by a ${\bf k}$-dependent rotation of quantization direction to $(\theta,\phi_{k})$, where $u_{k}\equiv\cos\frac{\phi_{k}}{2}$ and $v_{k}\equiv\sin\frac{\phi_{k}}{2}e^{i\theta}$.
* (13) M. Gaudin, J. Phys. (Paris), 37, 1087 (1976); A. Kundu, J. Phys. A: Math. Gen., 37, L281 (2004); J. Dukelsky et al., Phys. Rev. Lett., 93, 050403 (2004). Depending on wk, pseudo spin wave packet mode may have a overlap with the Higgs amplitude mode of the superconducting order parameter.
* (14) N. Lambert, C. Emary, and T. Brandes, Phys. Rev. Lett., 92, 073602 (2004)
* (15) K. Hepp and E. Lieb, Annals of Physics, 76, 360 (1973); Y. K. Wang and F. T. Hioe, Phys. Rev., A7, 831 (1973)
* (16) R. Bonifacio, L. A. Lugiato, Phys. Rev., A11 1507 (1975)
* (17) Z.A. Xu et al., Nature 406, 486 (2000); Y. Wang, Lu Li and N.P. Ong, Phys. Rev. B73, 024510 (2006)
* (18) I. Iguchi, T. Yamaguchi and A. Sugimoto NATURE, 412, 420 (2001); C.V. Parker et al., Nature, 468, 677 (2010) vanishing phase coherence J. Corson et al., Nature 398, 221-223 (18 March 1999)
* (19) A. Dubroka et al., Phys. Rev. Lett. 106, 047006 (2011)
* (20) G. Baskaran, Z. Zou and P. W. Anderson, Sol. St. Commn., 63, 973 (1987); V. Emery and S. Kivelson, Nature, 374, 434 (1995)
* (21) I thank P.W. Anderson for suggesting this possibility.
1 Center for Imaging Science, 2 Biomedical Engineering, Rochester Institute of
Technology, Rochester, NY USA
3 Bioengineering Graduate Program, 4 Electrical Engineering and Computer
Science, 5 Information and Telecommunication Technology Center, University of
Kansas, Lawrence, KS, USA
# CNN-based Cardiac Motion Extraction to Generate Deformable Geometric Left
Ventricle Myocardial Models from Cine MRI
Roshan Reddy Upendra$^{1}$, Brian Jamison Wentz$^{3,5}$, Richard Simon$^{2}$, Suzanne M. Shontz$^{3,4,5}$, Cristian A. Linte$^{1,2}$
###### Abstract
Patient-specific left ventricle (LV) myocardial models have the potential to
be used in a variety of clinical scenarios for improved diagnosis and
treatment plans. Cine cardiac magnetic resonance (MR) imaging provides high
resolution images to reconstruct patient-specific geometric models of the LV
myocardium. With the advent of deep learning, accurate segmentation of cardiac
chambers from cine cardiac MR images and unsupervised learning for image
registration for cardiac motion estimation on a large number of image datasets
is attainable. Here, we propose a deep learning-based framework for the
development of patient-specific geometric models of LV myocardium from cine
cardiac MR images, using the Automated Cardiac Diagnosis Challenge (ACDC)
dataset. We use the deformation field estimated from the VoxelMorph-based
convolutional neural network (CNN) to propagate the isosurface mesh and volume
mesh of the end-diastole (ED) frame to the subsequent frames of the cardiac
cycle. We assess the CNN-based propagated models against segmented models at
each cardiac phase, as well as models propagated using another traditional
nonrigid image registration technique.
###### Keywords:
Patient-specific Modeling · Deep Learning · Image Registration · Cine Cardiac MRI
## 1 Introduction
To reduce the morbidity and mortality associated with cardiovascular diseases
(CVDs) [3], and to improve their treatment, it is crucial to detect and
predict the progression of the diseases at an early stage. In a clinical
setup, population-based metrics, including measurements of cardiac wall motion,
ventricular volumes, cardiac chamber flow patterns, etc., derived from cardiac
imaging are used for diagnosis, prognosis and therapy planning.
In recent years, image-based computational models have been increasingly used
to study ventricular mechanics associated with various cardiac conditions. A
comprehensive review of patient-specific cardiovascular modeling and its
applications is described in [18]. Cardiovascular patient-specific modeling
includes a geometric representation of some or all cardiac chambers of the
patient’s anatomy and is derived from different imaging modalities [8].
The construction of patient-specific geometric models entails several steps:
clinical imaging, segmentation and geometry reconstruction, and spatial
discretization (i.e., mesh generation) [13]. For example, Bello et al. [2]
presented a deep learning-based framework for human survival prediction for
patients diagnosed with pulmonary hypertension using cine cardiac MR images.
Here, the authors employ a 4D spatio-temporal B-spline image registration
method to estimate the deformation field at each voxel and at each timeframe.
The estimated deformation field was used to propagate the ED surface mesh of
the right ventricle (RV), reconstructed from the segmentation map, to the rest
of the timeframes of a particular subject. Cardiac MRI is a current gold
standard to assess global (ventricle volume and ejection fraction) and
regional (kinematics and contractility) function of the heart under various
diseases. In particular, cardiac MRI enables the generation of high quality
myocardial models, which can, in turn, be used to identify reduced function.
In this work, we propose a deep learning-based pipeline to develop patient-
specific geometric models of the LV myocardium from cine cardiac MR images
(Fig. 1). These models may be used to conduct various simulations, such as
assessing myocardial viability. In our previous work [19], we introduced a
preliminary, proof of concept, CNN-based 4D deformable registration method for
cardiac motion estimation from cine cardiac MR images, using the ACDC dataset
[4]. Here, we demonstrate the use of the CNN-based 4D deformable registration
technique to build dynamic patient-specific LV myocardial models across
subjects with different pathologies, namely normal, dilated cardiomyopathy
(DCM), hypertrophic cardiomyopathy (HCM) and subjects with prior myocardial
infarctions (MINF). Following segmentation of the ED cardiac frame, we
generate both isosurface and volume LV meshes, which we then propagate through
the cardiac cycle using the CNN-based registration fields. In addition, we
demonstrate the generation of dynamic LV volume meshes depicting the heart at
various cardiac phases by warping a patient-specific ED volume mesh based on
the registration-based propagated surface meshes. Lastly, we compare these
meshes to those obtained by directly propagating the ED volume mesh using the
CNN-based deformation fields.
## 2 Methodology
### 2.1 Cardiac MRI Data
We use the 2017 ACDC dataset that was acquired from real clinical exams. The
dataset is composed of cine cardiac MR images from $150$ subjects, divided
into five equally-distributed subgroups: normal, MINF, DCM, HCM and abnormal
RV. The MR image acquisitions were obtained using two different MR scanners of
$1.5$ T and $3.0$ T magnetic field strength. The series of short axis slices
cover the LV from base to apex such that one image is captured every $5$ mm to
$10$ mm, with a spatial resolution of $1.37$ mm$^{2}$/pixel to $1.68$ mm$^{2}$/pixel.
### 2.2 Image Preprocessing
We first correct for the inherent slice misalignments that occur during the
cine cardiac MR image acquisition. We train a modified version of the U-Net
model [14] to segment the cardiac chambers, namely LV blood-pool, LV
myocardium and RV blood-pool, from 2D cardiac MR images. We identify the LV
blood-pool center, i.e., the centroid of the predicted segmentation mask and
stack the 2D cardiac MR slices collinearly to obtain slice misalignment
corrected 3D images [19, 7].
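A minimal sketch of this correction step (our illustration; the function and variable names are hypothetical, and we align to the mid-slice centroid as the reference):

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def correct_slice_misalignment(volume, lv_masks):
    """Re-stack the 2D short-axis slices so that the predicted LV blood-pool
    centroids are collinear along the slice axis."""
    centroids = np.array([center_of_mass(m) for m in lv_masks])
    ref = centroids[len(centroids) // 2]  # mid-slice centroid as reference
    return np.stack([
        shift(sl, ref - c, order=1, mode="nearest")
        for sl, c in zip(volume, centroids)
    ])
```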
Figure 1: Overview of the proposed CNN-based workflow to generate patient-
specific LV myocardial geometric model.
### 2.3 Deformable Image Registration
#### 2.3.1 CNN-based Image Registration.
We leverage our 4D deformable registration method described in [19] which
employs the VoxelMorph [1] framework to determine the optical flow
representation between the slice misalignment corrected 3D images. The CNN is
trained using the following loss function:
${L}={L}_{\text{similarity}}+\lambda{L}_{\text{smooth}},$ (1)
where ${L}_{\text{similarity}}$ is the mean squared error (MSE) between the
target frame and the warped frame, ${L}_{\text{smooth}}$ is the smoothing loss
function to spatially smooth the registration field, and $\lambda$ is the
regularization parameter, which is set to $10^{-3}$ in our experiments.
Inspired by Zhu et al. [20], we use the Laplacian operator in the smoothing
loss function, as it considers both global and local properties of the
objective function, instead of the traditional gradient operator, which
considers only local properties (illustrated in [20] with the simple objective
$y=x^{2}$). A detailed comparison of both smoothing loss functions with respect
to cardiac motion estimation from cine MR images is found in [19].
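One way to realize this loss in PyTorch is sketched below (our sketch, not the authors' released code; `flow` is the predicted displacement field of shape (batch, 3, D, H, W), and the Laplacian penalty is approximated by second finite differences):

```python
import torch
import torch.nn.functional as F

def laplacian_smoothing(flow):
    # Sum of squared second finite differences along each spatial axis,
    # a discrete stand-in for the Laplacian of the displacement field.
    loss = 0.0
    for dim in (2, 3, 4):
        loss = loss + flow.diff(dim=dim).diff(dim=dim).pow(2).mean()
    return loss

def registration_loss(warped, target, flow, lam=1e-3):
    # Eq. (1): MSE similarity plus the weighted smoothing term.
    return F.mse_loss(warped, target) + lam * laplacian_smoothing(flow)
```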
The 4D cine cardiac MRI datasets are composed of $28$ to $40$ 3D image frames
that cover the complete cardiac cycle. For this discussion, we shall refer to
the 3D images as $I_{ED}$, $I_{ED+1}$,…,$I_{ED+N_{T}-1}$ where $I_{ED}$ is the
end-diastole image frame, and $N_{T}$ is the total number of 3D images. We
employ the fixed reference frame registration method, wherein the task is to
find an optical flow representation between the image pairs
$\\{(I_{ED},I_{ED+t})\\}_{t=1,2,3,...,N_{T}-1}$.
During training, we use $110$ of the total $150$ MR image datasets for
training, $10$ for validation and the remaining $30$ for testing. The CNN for
cardiac motion estimation is trained using an Adam optimizer with a learning
rate of $10^{-4}$, halved at every $10^{th}$ epoch, for $50$ epochs. Both the
U-Net model used for slice misalignment correction and the VoxelMorph network
trained to estimate cardiac motion were trained on an NVIDIA RTX 2080 Ti GPU.
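This optimizer schedule maps directly onto standard PyTorch components; a hedged sketch (where `model`, `train_loader` and `registration_loss` are placeholders for the network, a loader of $(I_{ED},I_{ED+t})$ pairs and the loss of Eq. (1)):

```python
import torch

def train(model, train_loader, registration_loss, epochs=50):
    """Training schedule described above: Adam at 1e-4, halved every 10 epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    for epoch in range(epochs):
        for moving, fixed in train_loader:       # pairs (I_ED, I_ED+t)
            warped, flow = model(moving, fixed)  # VoxelMorph-style forward pass
            loss = registration_loss(warped, fixed, flow)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```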
#### 2.3.2 Conventional Image Registration.
We compare the performance of the VoxelMorph framework with that of the
B-spline free form deformation (FFD) nonrigid image registration algorithm
[15]. This iterative intensity-based image registration method was implemented
using SimpleElastix [12, 9], which enables a variety of image-registration
algorithms in different programming languages. The FFD algorithm was set to
use the adaptive stochastic gradient descent method as the optimizer, MSE as
the similarity measure and binding energy as the regularization function. The
FFD-based image registration was optimized in $500$ iterations, while sampling
$2048$ random points per iteration, on an Intel(R) Core(TM) i9-9900K CPU.
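For reference, a SimpleElastix call with the settings listed above might look as follows (a sketch under our assumptions: the file names are hypothetical, and the binding-energy regularizer would be added as a second metric in a multi-metric configuration):

```python
import SimpleITK as sitk  # requires the SimpleElastix build of SimpleITK

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(sitk.ReadImage("frame_ED.nii.gz"))
elastix.SetMovingImage(sitk.ReadImage("frame_ED_plus_t.nii.gz"))

pmap = sitk.GetDefaultParameterMap("bspline")
pmap["Metric"] = ["AdvancedMeanSquares"]  # MSE similarity
pmap["Optimizer"] = ["AdaptiveStochasticGradientDescent"]
pmap["MaximumNumberOfIterations"] = ["500"]
pmap["NumberOfSpatialSamples"] = ["2048"]
elastix.SetParameterMap(pmap)
elastix.Execute()
warped = elastix.GetResultImage()
```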
### 2.4 Mesh Generation and Propagation
We use the manual segmentation map of the ED frame to generate isosurface
meshes. The slice thickness of each MR image is $5$ mm to $10$ mm; to obtain
good quality meshes, the segmentation maps were therefore resampled to a slice
thickness of $1$ mm. We use the Lewiner marching cubes [11] algorithm to
generate the meshes from the resampled segmentation maps of the ED frames, and
then perform simplification techniques, such as vertex simplification and edge
collapse, using MeshLab $2020.07$ [5]. The simplification techniques are
repeated multiple times to reduce the number of vertices until the mesh has
been fully decimated, while preserving the anatomical integrity and aspect
ratio of the isosurface meshes.
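The isosurface extraction step can be reproduced with scikit-image (a sketch; the mask `seg` and the in-plane spacing are our placeholder assumptions):

```python
import numpy as np
from skimage.measure import marching_cubes

# Placeholder binary LV-myocardium mask of the ED frame (the real mask comes
# from the manual segmentation, resampled to 1 mm slice thickness).
seg = np.zeros((80, 128, 128), dtype=np.uint8)
seg[20:60, 40:90, 40:90] = 1

# spacing = (slice, row, column) voxel size in mm
verts, faces, normals, values = marching_cubes(
    seg.astype(np.float32), level=0.5, spacing=(1.0, 1.37, 1.37),
    method="lewiner")
print(verts.shape, faces.shape)
```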
Volume meshes at the end-diastolic phase for four patients with various heart
conditions were generated from the decimated patient-specific surface meshes
using TetGen 1.6 [17]. In particular, a constrained Delaunay mesh generation
algorithm was used to generate tetrahedral meshes based on the triangulated
surface meshes. Steiner points were added within the boundary of the surface
mesh so that the tetrahedra maintained a radius-edge ratio of $1.01$ and a
maximum volume of $9$ mm$^{3}$, as needed for the generation of valid meshes [17].
Mesh quality assessment was performed on the ED volume meshes using the
scaled Jacobian metric, which ranges from $-1$ to $+1$, where $+1$
indicates an ideal equilateral tetrahedron, while negative and zero scaled
Jacobian values indicate inverted and degenerate tetrahedral elements,
respectively. Tetrahedra with a scaled Jacobian greater than or equal to $0.2$
are considered acceptable [10]. The ED volume mesh has a minimum scaled
Jacobian value of $0.078$, which demonstrates a valid, non-tangled mesh.
However, the end-systole (ES) phase mesh contains some lower quality elements,
indicated by lower minimum scaled Jacobian values.
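For concreteness, one common variant of the per-element scaled Jacobian (cf. the Verdict metric definitions; our sketch, not necessarily the exact formula used by the authors' tooling) can be computed as:

```python
import numpy as np

def tet_scaled_jacobian(p):
    """Minimum over the four corners of det(emanating edges) divided by the
    product of the edge lengths, scaled by sqrt(2) so that a regular
    tetrahedron scores +1; negative values flag inverted elements."""
    p = np.asarray(p, dtype=float)                 # shape (4, 3)
    corners = [(0, (1, 2, 3)), (1, (2, 0, 3)),
               (2, (0, 1, 3)), (3, (1, 0, 2))]     # orientation-consistent
    vals = []
    for c, nbrs in corners:
        e = p[list(nbrs)] - p[c]
        vals.append(np.sqrt(2.0) * np.linalg.det(e)
                    / np.prod(np.linalg.norm(e, axis=1)))
    return min(vals)

# A regular tetrahedron gives +1 (up to rounding):
print(tet_scaled_jacobian([(1, 1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, 1)]))
```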
To demonstrate the VoxelMorph-based motion extraction and propagation to build
patient-specific LV myocardial models, we generate two sets of volume meshes
at each cardiac frame for each patient in each pathology group (Fig. 2).
Figure 2: Pipeline to generate dynamic volume meshes (at cardiac frames (ED +
k)) by direct CNN-based propagation, as well as volume mesh warping based on
dynamic boundary meshes.
The first set is produced by propagating the volume meshes at the ED frame to
all the subsequent frames of the cardiac cycle using the deformation field
estimated by the VoxelMorph-based registration method. For the second set, the
ED volume mesh generated with Tetgen was used to generate the volume meshes
corresponding to the other cardiac phases. We employed the log barrier-based
mesh warping (LBWARP) method [16] to deform the ED volume mesh onto the target
surface mesh for the new cardiac phase (Fig. 3). The method computes new
positions for the interior vertices in the ED volume mesh, while maintaining
the mesh topology and point-to-point correspondence [16].
Figure 3: LV volume meshes at three cardiac phases (a) end-diastole; (b) end-
systole; and (c) mid-diastole generated using LBWARP.
Briefly, LBWARP first calculates a set of local weights for each interior
vertex in the initial (ED) volume mesh based on the relative inverse distances
from each of its neighbors, which specify the representation of each interior
vertex in terms of its neighbors. Next, the vertices in the ED surface mesh
are mapped onto the new surface boundary. Finally, the interior vertices in
the ED volume mesh are then repositioned to reflect the updated positions of
the boundary nodes, while maintaining edge connectivity and point-to-point
correspondence, and ultimately yielding the volume meshes that correspond to
each new cardiac phase.
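A minimal sketch of these two linear-algebra stages (ours; the log-barrier safeguard of LBWARP against element inversion is omitted, and `neighbors`, `bnd`, `new_bnd_pos` are assumed inputs):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lbwarp_step(verts, neighbors, bnd, new_bnd_pos):
    n = len(verts)
    interior = np.setdiff1d(np.arange(n), bnd)
    # 1) inverse-distance weights writing each interior vertex as a
    #    convex combination of its mesh neighbors
    W = sp.lil_matrix((n, n))
    for i in interior:
        d = 1.0 / np.linalg.norm(verts[neighbors[i]] - verts[i], axis=1)
        W[i, neighbors[i]] = d / d.sum()
    W = W.tocsr()
    # 2) pin the boundary to the target surface and re-solve
    #    (I - W_II) x_I = W_IB x_B for the interior positions
    x = np.array(verts, dtype=float)
    x[bnd] = new_bnd_pos
    A = (sp.eye(len(interior)) - W[interior][:, interior]).tocsc()
    rhs = W[interior][:, bnd] @ x[bnd]
    x[interior] = np.column_stack(
        [spla.spsolve(A, rhs[:, k]) for k in range(rhs.shape[1])])
    return x
```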
## 3 Results and Discussion
To evaluate the registration performance, the LV isosurface (generated from
the ED image segmentation map) is propagated to all the subsequent cardiac
frames using the deformation field estimated by FFD and VoxelMorph. We then
compare these isosurfaces to those directly generated by segmenting all
cardiac image frames using a modified U-Net model [14] (Section 2.2), which we
refer to as the “silver standard”.
Table 1 summarizes the performance of the FFD and VoxelMorph registration by
assessing the Dice score and mean absolute distance (MAD) between the
propagated and directly segmented (i.e., “silver standard”) isosurfaces.
Fig. 4 illustrates the distance between the three sets of isosurfaces
(segmented, CNN-propagated and FFD-propagated) for one patient from each
pathology. The MAD between the surfaces is less than 2 mm at all frames, with
the CNN-propagated isosurfaces being closest to the “silver standard”
segmented surfaces.
Table 1: Mean Dice score (%) and mean absolute distance (MAD) (mm) between FFD and segmentation (FFD-SEG), CNN and segmentation (CNN-SEG), and FFD and CNN (FFD-CNN) results. Statistically significant differences were evaluated using the t-test (* for p $<$ 0.1 and ** for p $<$ 0.05).

| | Normal | | MINF | | DCM | | HCM | |
|---|---|---|---|---|---|---|---|---|
| | Dice | MAD | Dice | MAD | Dice | MAD | Dice | MAD |
| FFD-Segmentation | 74.80 | 1.53 | 77.69 | 1.09 | 80.41 | 0.91 | 77.39 | 1.97 |
| CNN-Segmentation | 80.41** | 1.15 | 81.21* | 0.87 | 83.39* | 0.91 | 82.46* | 1.09 |
| FFD-CNN | 77.81 | 1.13 | 82.12 | 0.75 | 81.67 | 0.97 | 77.34 | 1.77 |
Figure 4: MAD between FFD- and CNN-propagated, and segmented (i.e., “silver
standard”) isosurfaces at all cardiac frames for all patient pathologies.
Figure 5: Mean node-to-node distance at each cardiac frame between the CNN-
propagated and LBWARP-generated volume meshes (left); mean (std-dev) node
distance across all frames for each patient pathology (right).
As mentioned in Section 2.4 and shown in Fig. 2, we generate two sets of
volume meshes at each frame of the cardiac cycle. Fig. 5 (left) shows the mean
node distance between the two sets of volume meshes at each frame of the
cardiac cycle for one subject in each of the four pathologies, and Fig. 5
(right) shows the mean node distance across all cardiac frames for each
pathology. It can be observed that the two sets of volume meshes are in close
agreement with each other, exhibiting a mesh-to-mesh distance within 2 mm.
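The two evaluation metrics used throughout this section can be computed along these lines (a sketch; the surface points are assumed to be sampled from the compared meshes):

```python
import numpy as np
from scipy.spatial import cKDTree

def dice_score(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_absolute_distance(pts_a, pts_b):
    """Symmetric mean of nearest-neighbor distances between two surfaces,
    each given as an (N, 3) array of vertex coordinates in mm."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)
    d_ba, _ = cKDTree(pts_a).query(pts_b)
    return 0.5 * (d_ab.mean() + d_ba.mean())
```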
We also briefly investigated the effect of using initial-to-final frame vs.
adjacent frame-to-frame registration to extract the cardiac motion throughout
the cycle. Although the sequential registration method estimates smaller
deformation between two consecutive, adjacent image frames compared to the
larger deformations estimated by the initial-to-final frame registration,
their concatenation across several frames accumulates considerable
registration errors. As such, when using these concatenated registration-
predicted deformation fields to propagate the ED isosurfaces and volume meshes
to the subsequent cardiac phases, the Dice score and MAD between the
propagated and segmented geometries rapidly deteriorate, along with the
quality of the propagated surface and volume meshes.
Moreover, although the proposed VoxelMorph-based cardiac motion extraction
method can capture the frame-to-frame motion with sufficient accuracy, as
shown in this work, our ongoing and future efforts are focused on further
improving the algorithm by imposing diffeomorphic deformations [6]. This
improvement will help maintain a high quality of the meshes and prevent mesh
tangling and element degeneration, especially for the systolic phases.
## 4 Conclusion
In this work, we show that the proposed deep learning framework can be used to
build LV myocardial geometric models. The proposed framework is not limited to
any pathology and can be extended to LV and RV blood-pool geometry.
## Acknowledgments
This work was supported by grants from the National Science Foundation (Award
No. OAC 1808530, OAC 1808553 & CCF 1717894) and the National Institutes of
Health (Award No. R35GM128877).
## References
* [1] Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: Voxelmorph: A learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging 38(8), 1788–1800 (2019)
* [2] Bello, G.A., Dawes, T.J., Duan, J., Biffi, C., de Marvao, A., Howard, L.S., Gibbs, J.S.R., Wilkins, M.R., Cook, S.A., Rueckert, D., et al.: Deep-learning cardiac motion analysis for human survival prediction. Nature Machine Intelligence 1(2), 95–104 (2019)
* [3] Benjamin, E.J., Blaha, M.J., Chiuve, S.E., Cushman, M., Das, S.R., Deo, R., Floyd, J., Fornage, M., Gillespie, C., Isasi, C., et al.: Heart disease and stroke statistics-2017 update: a report from the american heart association. Circulation 135(10), e146–e603 (2017)
* [4] Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Ballester, M.A.G., et al.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging 37(11), 2514–2525 (2018)
* [5] Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., Ranzuglia, G.: Meshlab: an open-source mesh processing tool. In: Eurographics Italian chapter conference. vol. 2008, pp. 129–136. Salerno, Italy (2008)
* [6] Dalca, A.V., Balakrishnan, G., Guttag, J., Sabuncu, M.R.: Unsupervised learning of probabilistic diffeomorphic registration for images and surfaces. Medical image analysis 57, 226–236 (2019)
* [7] Dangi, S., Linte, C.A., Yaniv, Z.: Cine cardiac MRI slice misalignment correction towards full 3D left ventricle segmentation. In: Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling. vol. 10576, p. 1057607. International Society for Optics and Photonics (2018)
* [8] Gray, R.A., Pathmanathan, P.: Patient-specific cardiovascular computational modeling: diversity of personalization and challenges. Journal of cardiovascular translational research 11(2), 80–88 (2018)
* [9] Klein, S., Staring, M., Murphy, K., Viergever, M.A., Pluim, J.P.: Elastix: a toolbox for intensity-based medical image registration. IEEE transactions on medical imaging 29(1), 196–205 (2009)
* [10] Knupp, P.M.: Algebraic mesh quality metrics for unstructured initial meshes. Finite Elements in Analysis and Design 39(3), 217–241 (2003)
* [11] Lewiner, T., Lopes, H., Vieira, A.W., Tavares, G.: Efficient implementation of marching cubes’ cases with topological guarantees. Journal of graphics tools 8(2), 1–15 (2003)
* [12] Marstal, K., Berendsen, F., Staring, M., Klein, S.: Simpleelastix: A user-friendly, multi-lingual library for medical image registration. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops. pp. 134–142 (2016)
* [13] Morris, P.D., Narracott, A., von Tengg-Kobligk, H., Soto, D.A.S., Hsiao, S., Lungu, A., Evans, P., Bressloff, N.W., Lawford, P.V., Hose, D.R., et al.: Computational fluid dynamics modelling in cardiovascular medicine. Heart 102(1), 18–28 (2016)
* [14] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 234–241. Springer (2015)
* [15] Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L., Leach, M.O., Hawkes, D.J.: Nonrigid registration using free-form deformations: application to breast mr images. IEEE transactions on medical imaging 18(8), 712–721 (1999)
* [16] Shontz, S.M., Vavasis, S.A.: A mesh warping algorithm based on weighted laplacian smoothing. In: IMR. pp. 147–158 (2003)
* [17] Si, H.: Tetgen, a delaunay-based quality tetrahedral mesh generator. ACM Transactions on Mathematical Software (TOMS) 41(2), 1–36 (2015)
* [18] Smith, N., de Vecchi, A., McCormick, M., Nordsletten, D., Camara, O., Frangi, A.F., Delingette, H., Sermesant, M., Relan, J., Ayache, N., et al.: euheart: personalized and integrated cardiac care using patient-specific cardiovascular modelling. Interface focus 1(3), 349–364 (2011)
* [19] Upendra, R.R., Wentz, B.J., Shontz, S.M., Linte, C.A.: A convolutional neural network-based deformable image registration method for cardiac motion estimation from cine cardiac mr images. In: 2020 Computing in Cardiology. pp. 1–4. IEEE (2020)
* [20] Zhu, Y., Zhou Sr, Z., Liao Sr, G., Yuan, K.: New loss functions for medical image registration based on Voxelmorph. In: Medical Imaging 2020: Image Processing. vol. 11313, p. 113132E. International Society for Optics and Photonics (2020)
# Occupation times and areas derived from random sampling
Frank Aurzada Department of Mathematics, Technical University of Darmstadt
Leif Döring Mathematics Institute, University of Mannheim Helmut H. Pitters
Mathematics Institute, University of Mannheim
###### Abstract
We consider the occupation area of spherical (fractional) Brownian motion,
i.e. the area where the process is positive, and show that it is uniformly
distributed. For the proof, we introduce a new simple combinatorial view on
occupation times of stochastic processes that turns out to be surprisingly
effective. A sampling method is used to relate the moments of occupation times
to persistence probabilities of random walks that again relate to
combinatorial factors in the moments of beta distributions. Our approach also
yields a new and completely elementary proof of Lévy’s second arcsine law for
Brownian motion. Further, combined with Spitzer’s formula and the use of Bell
polynomials, we give a characterisation of the distribution of the occupation
times for all Lévy processes.
Keywords— Bell polynomials; fluctuation theory for random walks; Lévy process;
occupation time; spherical fractional Brownian motion
## 1 Introduction and main results
Consider a measure space $(I,\mathcal{I},\alpha)$, where $\alpha$ is a finite
measure with total mass $|\alpha|=\alpha(I)$, and a stochastic process
$X=\\{X_{t},t\in I\\}$ with index set $I$ whose state space $\mathscr{X}$ is
endowed with some sigma algebra $\mathcal{X}$. We do not assume $I$ to be an
ordered set. For a real-valued, non-negative, measurable function
$f:\mathscr{X}\to[0,\infty)$ consider the path integral
$\int_{I}f(X_{s})\alpha(ds).$
Path integrals for diverse stochastic processes have a rich history in several
areas of probability theory. In the present article, we deal with occupation
times $\int_{I}{\mathbf{1}}_{\\{X_{s}\in S\\}}\alpha(ds)$ for some measurable
set $S$. For $I=[0,t]$, $\alpha$ the Lebesgue measure, and $S$ measurable,
this is the portion of time that the process spends in the set $S$. Most
classically, the occupation time of the non-negative half-line $S=[0,\infty)$
during $[0,1]$ by a Brownian motion is well-known to be arcsine distributed,
i.e. it has the density $\pi^{-1}(x(1-x))^{-1/2}$ on $(0,1)$. This result goes
back to Paul Lévy [34] and is sometimes referred to as the second arcsine law
for Brownian motion, cf. [37]. Since Lévy’s seminal work, many proofs for this
result have been found (e.g. Kac’s derivation via the Feynman-Kac formula as
expounded in [37, application of Theorem 7.43], or via approximation by
(simple) random walks, cf. [37, Theorem 5.28]). Further, various
generalizations to other processes have been considered (see for instance [30,
11, 7, 21, 24, 32, 8, 31, 19, 38, 36]).
While the one-dimensional stochastic process setting is well-understood, many
open problems remain for multi-dimensional processes and processes with
general index sets. Most prominently, characterising the distribution of
occupation times of planar (and higher dimensional) Brownian motion (random
walks) in cones are open problems to this day (except for cases that may
trivially be reduced to one-dimensional problems). The major focus of this
paper is on random fields, i.e. on processes with multidimensional index sets.
Here, the Brownian sheet is a natural object to consider, and [31] derives
asymptotic bounds, but the exact distribution of the occupation ‘area’ of the
Brownian sheet remains unknown. For the Brownian pillow we refer to [25]. In
this paper we compute the distribution of the occupation area for the
fractional generalisation of Lévy’s spherical Brownian motion. This is our
main result.
To motivate our approach let us recall some attempts towards the occupation
times of planar Brownian motion using moments. Note that occupation times are
bounded random variables and as such are uniquely determined by their moments.
As a specific example consider the time that planar Brownian motion spends in
some fixed cone $C$. The problem to characterise the distribution of this time
was put forth in [8] and is still open. The authors were able to derive the
first three moments of this occupation time if $C$ is taken to be a quadrant.
Motivated by this work, [19] studied the time that planar Brownian motion
spends in the ‘hourglass’, i.e. the union of the first and third quadrant, and
rephrased this problem in the language of Kontorovich-Lebedev transforms.
Desbois [14] generalized the quadrant problem to wedges with apex at the
origin and some angle $\theta>0$. Employing methods from physics, the author
computed the first three moments in the case of a wedge with angle $\theta$,
the fourth moment in the quadrant case ($\theta=\pi/2$), and derived a general
formula for second moments in high-dimensional orthants. We follow these
research efforts and attack occupation time distributions through their
integer moments, introducing a simple sampling method.
Suppose that we were to ‘guess’ the proportion of time that the process $X$
spends in some set $S$ during $[0,t]$, and to this end we were allowed to
sample $X$ at $m$ instances chosen according to our liking. It seems rather
natural to choose the times $U_{1},\ldots,U_{m}$ independently (and
independent of $X$) and uniformly at random in $[0,t],$ and to take the
empirical probability $\texttt{\\#}\\{1\leq k\leq m\colon X_{U_{k}}\in S\\}/m$
as an estimator of said proportion. In fact, it turns out that the probability
that $X$ is in $S$ at all times $U_{1},\ldots,U_{m}$ agrees with the $m$-th
moment of the occupation time of $S$ (up to the factor $t^{m}$), a
generalization of which we will see in Proposition 1. Sampling a stochastic
process at random times is by no means a new idea, and has been employed in
various other contexts. For instance, the random tree may be constructed from
broken lines derived from Brownian excursion sampled at independent uniform
times [1, 33]. In [39] the author studied Brownian motion, bridge, excursion
and meander by sampling at i.i.d. uniform times, and the convex hull of
multidimensional Brownian motion was studied in [18] by sampling at the points
of an independent Poisson process.
A surprising consequence of the computation of moments by means of sampling at
random times is a completely elementary proof for the arcsine law of the
Brownian motion, the uniform distribution of the occupation time of Lévy
bridges, and also a new characterisation of the occupation times for all Lévy
processes. Our approach combines occupation time moments with random walk
probabilities and elementary combinatorics. The use of combinatorics is not
surprising, as beta distributions, which often appear as occupation time
distributions, have explicit moment expressions involving elementary
combinatorial factors.
For example, the $m$-th moments of the arcsine distribution are
$2^{-2m}\binom{2m}{m}$, combinatorial factors that appear in many
combinatorial problems, in particular in persistence probabilities of random
walks. This suggests to ask if the $m$-th moments of occupation times are
inherently related to combinatorial terms. Our answer is yes. The main insight
of this article is to realise that the following simple sampling formula is a
surprisingly effective link to relate occupation times, random walks, and,
depending on the situation, beta distributions.
###### Proposition 1.
Consider a stochastic process $(X_{t})_{t\in I}$ indexed by a measure space
$(I,\mathcal{I},\alpha)$ that attains values in a measurable space
$(\mathscr{X},\mathcal{X})$. Let $S\in\mathcal{X}$ and $m\in\mathbb{N}$. Then
$\displaystyle\mathbb{E}\left[\left(\int_{I}{\mathbf{1}}_{\\{X_{t}\in
S\\}}\alpha(dt)\right)^{m}\right]=\lvert\alpha\rvert^{m}\mathbb{P}\left\\{X_{U_{1}}\in
S,\ldots,X_{U_{m}}\in S\right\\},\quad m\in\mathbb{N},$ (1)
where $U_{1},U_{2},\ldots$ is an i.i.d. sequence independent of $X$ such that
$U_{1}$ has distribution $\alpha/\lvert\alpha\rvert$.
###### Proof.
Set $f(x):={\mathbf{1}}_{\\{x\in S\\}}$. Re-writing the expectation w.r.t. the
distribution $\alpha/|\alpha|$ (independent of $X$) as integrals, we obtain
$\mathbb{E}\big{[}f(X_{U_{1}})\cdots
f(X_{U_{m}})\big{]}=\mathbb{E}\left[\int_{I}\cdots\int_{I}f(X_{u_{1}})\cdots
f(X_{u_{m}})\frac{\alpha(du_{1})}{|\alpha|}\cdots\frac{\alpha(du_{m})}{|\alpha|}\right].$
Multiplying by $|\alpha|^{m}$, noticing that all the integrals are identical,
and inserting $f(x):={\mathbf{1}}_{\\{x\in S\\}}$ shows the claim. ∎
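As a quick numerical illustration of (1) (our addition, not part of the original argument), one can compare both sides for Brownian motion on $[0,1]$ with $S=(0,\infty)$, together with the arcsine moment $2^{-2m}\binom{2m}{m}$ anticipated in Section 1.1:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, paths, m = 2000, 20000, 3

# Discretized Brownian paths on [0, 1] and their occupation times of (0, inf)
B = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n), size=(paths, n)), axis=1)
occ = (B > 0).mean(axis=1)  # Riemann approximation of the occupation time

# Left side of (1): m-th moment; right side: positivity at m uniform times
U = rng.integers(0, n, size=(paths, m))
lhs = (occ**m).mean()
rhs = (B[np.arange(paths)[:, None], U] > 0).all(axis=1).mean()
print(lhs, rhs, comb(2 * m, m) / 4**m)  # all three are close to 0.3125
```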
In order to discuss the use of this result, let us consider the example
$I=[0,t]$, $\alpha$ the Lebesgue measure, and $S=[0,\infty)$. Then Proposition
1 shows that the occupation time of continuous-time processes $(X_{t})$ can be
characterised through the persistence probabilities
$\mathbb{P}\left\\{X_{U_{1}}\geq 0,...,X_{U_{m}}\geq 0\right\\}$. In many
situations these persistence probabilities may be reduced to persistence
probabilities $\mathbb{P}\left\\{S_{1}\geq 0,...,S_{m}\geq 0\right\\}$ for a
well-understood discrete-time process $(S_{n})$. For example for random walks,
there is a vast literature going back to seminal works of Spitzer [44] and
Sparre Andersen [42], see also the exposition in [28, Section 1.3] for more
recent results, where such probabilities were computed under different
assumptions on the set $S$. Unsurprisingly, the moments of arcsine
distributions appear naturally in persistence probabilities. While there is a
long tradition of deriving arcsine laws for continuous-time processes from
discrete-time processes using Donsker-type limiting arguments, the simple
connection between moments of occupation times and persistence probabilities
seems to be new.
###### Remark 1.
The sampling approach also shows that the $m$-th moment of the occupation time
of $d$-dimensional Brownian motion in some cone $C$ is equal to the
probability that a $d$-dimensional random walk stays in the cone $C$ up to
time $m.$ The exit time from a cone of a multi-dimensional random walk has
received great interest in mathematical research (cf. e.g. [23]), not least
because this quantity has connections to many areas such as representation
theory [5, 6], conditioned random walks [5, 6], random matrices [16], non-
colliding random walks [13, 17], and enumerative combinatorics [10, 20, 27].
We leave for future research whether this direct link between continuous-time
occupation times and discrete-time exit probabilities may help to solve open
problems for the planar and multidimensional Brownian motion.
Organisation of the article: In the following sections we illustrate the power
of this simple approach. The paper is structured as follows. In Section 1.1 we
give a very simple proof of the second arcsine law of Brownian motion. In
Section 1.2 we discuss the main result of this paper, i.e. we determine the
distribution of the occupation ‘area’ of Lévy’s Brownian motion on the sphere.
In Section 1.3 we characterise all occupation times of one-dimensional Lévy
processes using combinatorial expressions. The proofs are given in Section 2.
### 1.1 An elementary proof of Lévy’s arcsine law
As a first illustration of our line of attack we give a new, very elementary
proof of Lévy’s second arcsine law for Brownian motion.
###### Theorem 1 (Lévy [34]).
If $B$ is a standard Brownian motion, then
$t^{-1}\int_{0}^{t}{\mathbf{1}}_{\\{B_{s}>0\\}}ds$ is arcsine distributed.
In contrast to other proofs of the second arcsine law of Brownian motion, our
proof is completely elementary and in particular does not require any limiting
procedure, nor does it employ analytic computations or excursion theory, as
Lévy’s original proof does. At first sight our argument might resemble proofs
that approximate Brownian motion using discrete-time random walks. We use,
however, an entirely different connection between Brownian motion and the
so-called Laplace random walk. Instead of discretising $(B_{t})$ and studying
the same problem for random walks, the sampling method relates the moments of
the occupation time of continuous Brownian motion to discrete persistence
probabilities.
###### A simple proof of Theorem 1.
W.l.o.g. we may assume $t=1$, by the self-similarity of Brownian motion. Fix
$m\in\mathbb{N}$. The sampling formula (1) gives
$\mathbb{E}\left[\left(\int_{0}^{1}{\mathbf{1}}_{\\{B_{t}>0\\}}dt\right)^{m}\right]=\mathbb{P}\left\\{B_{U_{1}}>0,\ldots,B_{U_{m}}>0\right\\}=\mathbb{P}\left\\{B_{U_{m:1}}>0,\ldots,B_{U_{m:m}}>0\right\\},$
(2)
where $(U_{i})$ are i.i.d. uniform in $[0,1]$ independent of the Brownian
motion and $(U_{m:i})$ is the corresponding order statistics. Further, let
$(E_{i})$ be i.i.d. standard exponential random variables independent of the
$(U_{i})$ and of the Brownian motion and set $T_{k}:=\sum_{i=1}^{k}E_{i}$,
$k=0,1,2,\ldots$. Conditioning on $T_{m+1}$ and on the $(U_{i})$ (which are
independent of the Brownian motion $B$), we can use the self-similarity of
Brownian motion, $(B_{s})_{s\geq 0}=_{d}(T_{m+1}^{-1/2}B_{T_{m+1}s})_{s\geq
0}$, to see that the probability in (2) equals
$\mathbb{P}\left\\{B_{T_{m+1}U_{m:1}}>0,\ldots,B_{T_{m+1}U_{m:m}}>0\right\\}=\mathbb{P}\left\\{B_{T_{1}}>0,\ldots,B_{T_{m}}>0\right\\},$
(3)
where we used the independence of $(U_{i})$ and $(E_{i})$ from the Brownian
motion and the fact that the vector $(T_{m+1}U_{m:1},\ldots,T_{m+1}U_{m:m})$
has the same distribution as $(T_{1},\ldots,T_{m})$, see e.g. Theorem V.2.2 in
[15].
Thus, the moments of the occupation time of Brownian motion on the left-hand
side in (2) are given by the persistence probabilities on the right-hand side
in (3). We note that these are the persistence probabilities of the Laplace
random walk $R_{i}:=B_{T_{i}}$, $i=0,1,2,\ldots$. It is well-known that the
probabilities on the right-hand side in (3) are equal to
$2^{-2m}\binom{2m}{m}$, which are – in turn – the moments of the arcsine
distribution. Since the occupation times are bounded, the proof of the second
arcsine law of Brownian motion is complete.
To keep the proof self-contained let us also give an elementary argument for
the persistence probabilities in (3). Define
$\tau:=\min\\{j\in\\{0,\ldots,m\\}:R_{j}=\max_{k\in\\{0,\ldots,m\\}}R_{k}\\}$
to be the first (and only) index where the maximum of $(R_{k})_{k=0}^{m}$ is
attained. Since $\tau\in\\{0,\ldots,m\\}$ by construction, we must have (using
the continuity of the distribution of the $R_{k}$ in the second step):
$\displaystyle 1=$
$\displaystyle\sum_{j=0}^{m}\mathbb{P}\left\\{\tau=j\right\\}=\sum_{j=0}^{m}\mathbb{P}\left\\{R_{k}<R_{j},k=0,\ldots,j-1,j+1,\ldots,m\right\\}$
$\displaystyle=$
$\displaystyle\sum_{j=0}^{m}\mathbb{P}\left\\{R_{k}<R_{j},k=0,\ldots,j-1\right\\}\cdot\mathbb{P}\left\\{R_{k}<R_{j},k=j+1,\ldots,m\right\\}$
$\displaystyle=$
$\displaystyle\sum_{j=0}^{m}\mathbb{P}\left\\{R_{k}>0,k=1,\ldots,j\right\\}\cdot\mathbb{P}\left\\{R_{k}>0,k=1,\ldots,m-j\right\\},$
where we used the independence of increments of $(R_{k})$ in the third step
and the stationarity and the symmetry of the increments of $(R_{k})$ in the
fourth step. It is again elementary to show that the unique solution of this
recursive equation is given by
$\mathbb{P}\left\\{R_{k}>0,k=1,\ldots,j\right\\}=\frac{(2j-1)!!}{(2j)!!}=2^{-2j}\binom{2j}{j}$
for all $j=0,\ldots,m$. To see the latter, multiply the recursion by $x^{m}$
for $x\in[0,1)$, sum over $m$, and the generating function of the
probabilities in question is found to be $(1-x)^{-1/2}$, cf. [12] for similar
arguments. ∎
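As a numerical sanity check of the identity between (2), (3) and the arcsine moments (our addition), the Laplace random walk can be simulated directly; recall from the proof that the increments $B_{T_{i}}-B_{T_{i-1}}$ are i.i.d. Laplace distributed:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
m, paths = 5, 10**6

# Laplace random walk R_i = B_{T_i}: i.i.d. Laplace increments (the scale is
# irrelevant for persistence probabilities)
R = np.cumsum(rng.laplace(size=(paths, m)), axis=1)
print((R > 0).all(axis=1).mean(), comb(2 * m, m) / 4**m)  # both ~ 0.2461
```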
The proof does not fully use the Brownian properties, in particular,
continuity does not play a role in the sampling. Actually, exactly the same
argument works for symmetric strictly stable Lévy processes, recovering the
arcsine law first derived by Kac [29]. Below we also provide a simple proof
for the occupation time of a Brownian bridge to be $\mathcal{U}([0,1])$ but we
do so directly in the more general setting of Lévy bridges, cf. Theorem 4.
### 1.2 Spherical fractional Brownian motion
We now come to the main result of this article, the occupation ‘area’ law for
Lévy’s spherical Brownian motion and the fractional generalisation. Fix
$H\in(0,1/2]$ and $d\in\mathbb{N}$, $d\geq 2$, and let $\lVert
x\rVert\coloneqq\sqrt{x_{1}^{2}+\cdots+x_{d}^{2}}$ denote the Euclidean norm
of $x\in\mathbb{R}^{d}$. Recall that spherical fractional Brownian motion
(spherical fBM) $X\coloneqq(X_{t})_{t\in\mathbb{S}^{d-1}}$ is a centred
Gaussian process on the unit $(d-1)$-sphere
$\mathbb{S}^{d-1}\coloneqq\\{x\in\mathbb{R}^{d}\colon\lVert x\rVert=1\\}$ such
that $X_{O}=0$ a.s. for some arbitrary fixed point $O\in\mathbb{S}^{d-1}$ with
$\displaystyle\mathbb{E}[(X_{s}-X_{t})^{2}]=(d(s,t))^{2H},\qquad
s,t\in\mathbb{S}^{d-1},$ (4)
where $d(s,t)$ denotes the geodesic distance between two points $s,t$ on
$\mathbb{S}^{d-1}$. The special case $H=1/2$ was first studied by Paul Lévy
[35] and is sometimes referred to as Lévy’s spherical Brownian motion. Istas
[26] showed that there exists a Gaussian process indexed by $\mathbb{S}^{d-1}$
with covariance structure as in (4) if and only if $H\leq 1/2$. Let
$\displaystyle
A\coloneqq\int_{\mathbb{S}^{d-1}}{\mathbf{1}}_{\\{X_{s}>0\\}}\sigma^{d-1}(ds)$
denote the ‘area’ that $X$ spends positive, or rather the measure of the area
on $\mathbb{S}^{d-1}$ on which $X$ is positive as measured by the surface
measure $\sigma^{d-1}$.
###### Theorem 2 (Occupation time of spherical fractional Brownian motion).
Let $(X_{t})_{t\in\mathbb{S}^{d-1}}$ be a spherical fractional Brownian motion
$X$ with Hurst parameter $H\in(0,1/2]$. Then
$\lvert\sigma^{d-1}\rvert^{-1}\int_{\mathbb{S}^{d-1}}{\mathbf{1}}_{\\{X_{s}>0\\}}\sigma^{d-1}(ds),$
i.e. the ‘area’ that $X$ spends positive, is uniformly distributed on $(0,1)$,
where
$\lvert\sigma^{d-1}\rvert=\sigma^{d-1}(\mathbb{S}^{d-1})=2\pi^{\frac{d}{2}}/\Gamma(\frac{d}{2})$
is the surface area of the unit $(d-1)$-sphere.
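Theorem 2 can be probed numerically (a sketch we add for illustration, for $d=3$): since $X_{O}=0$, the covariance of the pinned field is $C(s,t)=\tfrac{1}{2}\big(d(O,s)^{2H}+d(O,t)^{2H}-d(s,t)^{2H}\big)$, which can be sampled by a Cholesky factorisation, and the occupied fraction is estimated by Monte Carlo over uniform points on the sphere:

```python
import numpy as np

rng = np.random.default_rng(3)
H, n, reps = 0.5, 400, 2000

def geodesic(X, Y):
    return np.arccos(np.clip(X @ Y.T, -1.0, 1.0))

P = rng.normal(size=(n, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # uniform points on S^2
O = np.array([[0.0, 0.0, 1.0]])                 # base point with X_O = 0

dO = geodesic(P, O).ravel()
C = 0.5 * (dO[:, None]**(2*H) + dO[None, :]**(2*H) - geodesic(P, P)**(2*H))
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # positive semidefinite for H <= 1/2

A = ((L @ rng.normal(size=(n, reps))) > 0).mean(axis=0)  # occupied fraction
print(np.quantile(A, [0.1, 0.25, 0.5, 0.75, 0.9]))  # ~ (0.1, ..., 0.9): uniform
```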
### 1.3 Lévy processes and bridges
In this section, we apply the sampling formula to compute all moments of
occupation times of one-dimensional Lévy processes, i.e. stochastic processes
with independent and stationary increments. Let $(X_{t})_{t\geq 0}$ be a Lévy
process. We characterize the distribution of the random variable
$A_{t}\coloneqq\int_{0}^{t}{\mathbf{1}}_{\\{X_{s}>0\\}}\,ds$ (5)
by working out explicitly all its moments. In order to state the result, let
us introduce some further notation. A partition of a set $S$ is a set, $\rho$
say, of nonempty pairwise disjoint subsets of $S$ whose union is $S$. The
members of $\rho$ are also called the blocks of $\rho$. Let $\texttt{\\#}S$
denote the cardinality of $S$ and for some natural number $n$ let
${\mathscr{P}}_{n}$ denote the set of all partitions of $\\{1,\ldots,n\\}$.
Further, we recall that $(f\ast g)(t):=\int_{0}^{t}f(t-s)g(s)ds$ is the
convolution of two functions $f,g:[0,\infty)\to\mathbb{R}$. Sampling the
occupations at Poisson times in combination with Spitzer’s identity and a Bell
polynomial trick yields the following moment formula:
###### Theorem 3 (Occupation time of a Lévy process).
Fix $m\geq 1$ arbitrarily. The $m$-th moment of the occupation time $A_{t}$ of
the real-valued Lévy process $X$ in the set $(0,\infty)$ is given by
$\displaystyle\mathbb{E}[A_{t}^{m}]$
$\displaystyle=\sum_{\rho\in{{\mathscr{P}}}_{m}}\int_{0}^{t}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}\left(u^{\texttt{\\#}B-1}\mathbb{P}\left\\{X_{u}>0\right\\}\right)(s)ds.$
(6)
In particular, the first two moments of $A_{t}$ are given by
$\displaystyle\mathbb{E}[A_{t}]$
$\displaystyle=\int_{0}^{t}\mathbb{P}\left\\{X_{s}>0\right\\}ds,$ (7)
$\displaystyle\mathbb{E}[A_{t}^{2}]$
$\displaystyle=\int_{0}^{t}s\mathbb{P}\left\\{X_{s}>0\right\\}ds+\int_{0}^{t}\int_{0}^{s}\mathbb{P}\left\\{X_{u}>0\right\\}\mathbb{P}\left\\{X_{s-u}>0\right\\}duds.$
Equations (6) and (7) still hold when their (strict) inequalities together
with the (strict) inequality in the definition of the occupation time (5) are
replaced by weak inequalities.
Theorem 3 shows how to work out explicitly the moments of the distribution of
the occupation time above zero of a Lévy process $X$. In particular, the
formula shows that the distribution of $A_{t}$ is completely determined by the
positivity function $s\mapsto\mathbb{P}\left\\{X_{s}>0\right\\}$. In fact, the
only ingredient coming from the Lévy process in the moment formula (6) is the
positivity function. Equivalently, the theorem shows that the first moment of
the occupation times already determines their entire distribution.
There are a few situations in which the moment formulas can be used to
compute the occupation time distributions. One example, which could not be
treated in the literature before, is the $\frac{1}{2}$-stable subordinator
with negative drift $\mu$, for which the positivity function is known to be
$\mathbb{P}\left\\{X_{t}>0\right\\}=\text{erf}(\sqrt{t/(4\mu)})$. The slightly
tedious computations will be presented in an accompanying article. A more
common situation is that of constant positivity, i.e.
$\mathbb{P}\left\\{X_{t}>0\right\\}=c$ for all $t>0$, which occurs for
instance in the case of strictly stable Lévy processes. Inserting into (6)
leaves us with a simple combinatorial expression for the moments of $A_{t}$. A
short computation shows that those expressions are precisely those of the
generalised arcsine distributions, i.e. a beta distribution with parameters
$(a,b)=(c,1-c)$ for some $c\in(0,1)$.
###### Corollary 1 (cf. [24]).
Fix $c\in(0,1)$. The following two statements are equivalent:
1. 1.
We have $\mathbb{P}\left\\{X_{t}>0\right\\}=c$ for all $t>0$.
2. 2.
The occupation time
$t^{-1}A_{t}=t^{-1}\int_{0}^{t}{\mathbf{1}}_{\\{X_{s}>0\\}}ds$ is generalised
arcsine distributed with parameter $c\in(0,1)$ for all $t>0$.
The symmetric case $c=\frac{1}{2}$ thus recovers the classical arcsine law.
The corollary can be deduced from Theorem 3 with a short combinatorial
computation because the moments of generalised arcsine distributions have the
combinatorial form
$\displaystyle\frac{\Gamma(m+c)}{\Gamma(m+1)\Gamma(c)}=\frac{c^{\overline{m}}}{m!},$
where $x^{\overline{m}}\coloneqq x(x+1)\cdots(x+m-1)$ denotes the $m$-th
rising factorial power of $x\in\mathbb{R}$, and the last identity is easily
seen by induction. The corollary was already proved by Getoor and Sharpe [24]
by guessing Laplace transforms. Our proof highlights once more the
combinatorial nature behind occupation times seen through their moments.
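The short combinatorial computation alluded to above is easy to verify symbolically (our sketch): with constant positivity $c$, the time integrals in (6) reduce to Beta functions, each partition $\rho$ contributes $c^{\texttt{\#}\rho}\prod_{B\in\rho}(\texttt{\#}B-1)!/m!$, and summing over partitions recovers the rising factorial $c^{\overline{m}}/m!$:

```python
from fractions import Fraction
from math import factorial, prod
from sympy.utilities.iterables import multiset_partitions

def moment_via_partitions(m, c):
    # E[(A_t/t)^m] from (6) with P{X_u > 0} = c for all u
    total = Fraction(0)
    for rho in multiset_partitions(list(range(m))):
        total += Fraction(c)**len(rho) * prod(factorial(len(B) - 1) for B in rho)
    return total / factorial(m)

def beta_moment(m, c):
    # c(c+1)...(c+m-1)/m!: the m-th moment of the Beta(c, 1-c) distribution
    return prod(Fraction(c) + k for k in range(m)) / factorial(m)

c = Fraction(1, 3)
print(all(moment_via_partitions(m, c) == beta_moment(m, c) for m in range(1, 7)))
```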
Finally, we use the sampling method to provide a simple proof of the
uniformity of occupation times for Lévy bridges.
###### Theorem 4 (Occupation time of a Lévy bridge; cf. [22] and [32]).
Let $X$ denote a Lévy process and consider the stochastic process
$\mathring{X}\coloneqq(\mathring{X}_{t})_{t\in[0,1]}$ defined by
$\mathring{X}_{t}\coloneqq X_{t}-tX_{1}$, that we refer to as the Lévy bridge
induced by $X$. Provided that the distribution of $X_{1}$ has no atoms, the
occupation time
$\mathring{A}\coloneqq\int_{0}^{1}{\mathbf{1}}_{\\{\mathring{X}_{t}>0\\}}dt$
of the Lévy bridge $\mathring{X}$ is uniformly distributed on $(0,1)$.
The result in Theorem 4 essentially goes back to Fitzsimmons and Getoor [22]
and Knight [32]. In fact, Knight [32, Theorem 2.1(a)] provides a complete
characterization of Lévy bridges with uniform occupation times. However, we
consider our derivation interesting in its own right, as the sampling approach
yields significantly simpler proofs.
## 2 Proofs of the theorems
### 2.1 Proofs for the spherical fractional Brownian motion result
Before we start with the proof of our results on spherical fBM let us first
examine its index set. Looking at $\mathbb{S}^{d-1}$ through the glasses of
Cartesian coordinates, there seems to be no natural way to order its elements
that suits our purposes. Instead, the spherical coordinates naturally suggest
an order on the sphere that is very useful. Let us recall the definition of
_spherical coordinates._ For any point $x\in\mathbb{R}^{d}$ with Euclidean
norm $r\coloneqq r(x)\coloneqq\lVert x\rVert$ its angles
$(\varphi_{1},\ldots,\varphi_{d-2},\theta)\coloneqq(\varphi_{1}(x),\ldots,\varphi_{d-2}(x),\theta(x))\in[0,\pi)^{d-2}\times[0,2\pi)$
are defined (cf. [9]) implicitly by
$x_{k}=r\cos\varphi_{k}\prod_{j=1}^{k-1}\sin\varphi_{j},\qquad 1\leq k\leq d-2,\qquad\qquad x_{d-1}=r\sin\theta\prod_{j=1}^{d-2}\sin\varphi_{j},$ (8)
which then implies $x_{d}=r\cos\theta\prod_{j=1}^{d-2}\sin\varphi_{j}$.
We refer to $(r(x),\varphi_{1}(x),\ldots,\varphi_{d-2}(x),\theta(x))$ as the
spherical coordinates of $x$. At times we allow ourselves to slightly misuse
terminology and refer to the angles
$(\varphi_{1}(x),\ldots,\varphi_{d-2}(x),\theta(x))$ as the spherical
coordinates of $x$, in particular if $r(x)=1$. In what follows we agree on
using the following (lexicographic) order $\leq$ on $\mathbb{S}^{d-1}$. For
$x,x^{\prime}\in\mathbb{S}^{d-1}$ we set $x\leq x^{\prime}$ if one of the
following (mutually exclusive) conditions holds:
1. i)
$x=x^{\prime}$,
2. ii)
$\theta(x)<\theta(x^{\prime})$,
3. iii)
$\theta(x)=\theta(x^{\prime})$, and there exists $1\leq k\leq d-2$ such that
$\varphi_{j}(x)=\varphi_{j}(x^{\prime})$ for all $1\leq j\leq k-1$, and
$\varphi_{k}(x)<\varphi_{k}(x^{\prime})$.
In what follows we will deal with a finite number $U_{1},\ldots,U_{m}$ of,
say, i.i.d. r.v.s sampled according to some continuous distribution with
support $\mathbb{S}^{d-1}$. Consequently, for any pair $U_{i},U_{j}$ all their
angles are distinct a.s. Therefore, their order, i.e. whether $U_{i}\leq
U_{j}$ or $U_{j}\leq U_{i}$, is completely determined by $\theta(U_{i})$ and
$\theta(U_{j})$. This means that the order statistics $U_{m:1}\leq\cdots\leq
U_{m:m}$ again only depends on the angles $\theta(U_{1}),\ldots,\theta(U_{m})$
a.s.
At the heart of our proof of Theorem 2 lies the following proposition on the
increments of spherical fBM that we consider of interest in its own right. We
call a finite permutation $\pi$ a cyclic permutation if there is a
decomposition of $\pi$ into one cycle only. For $m\in\mathbb{N}$ we denote by
$\operatorname{Cyc}(m)$ the set of all cyclic permutations of
$\\{1,\ldots,m\\}$. A finite sequence $(Y_{1},\ldots,Y_{m})$ of r.v.s is
called cyclically exchangeable if for any cyclic permutation
$\pi\in\operatorname{Cyc}(m)$ the random vectors $(Y_{1},\ldots,Y_{m})$ and
$(Y_{\pi(1)},\ldots,Y_{\pi(m)})$ have the same distribution. Intuition
suggests that the increments of spherical fractional Brownian motion $X$
induced by the order statistics of $m$ i.i.d. points $U_{1},\ldots,U_{m}$
sampled from $\mathbb{S}^{d-1}$ uniformly at random should be cyclically
exchangeable. (This is most easily seen first in the special case $d=2$.) Our
next proposition shows that this intuition is in fact true.
###### Proposition 2.
Fix $H\in(0,1/2]$, and let $(X_{t})_{t\in\mathbb{S}^{d-1}}$ denote spherical
fBM with Hurst index $H$ as defined by (4) with the property that $X_{O}=0$
a.s. for some fixed (deterministic) $O\in\mathbb{S}^{d-1}$ with $\theta(O)=0$.
Let $U_{1},U_{2},\ldots$ denote a sequence of i.i.d. r.v.s uniformly
distributed on $\mathbb{S}^{d-1}$. Fix $m\in\mathbb{N}$. Then the sequence of
increments
$\displaystyle(X_{U_{m:k}}-X_{U_{{m:k-1}}})_{k=1}^{m+1}$ (9)
is cyclically exchangeable, where we set $U_{m:0}\coloneqq U_{m:m+1}\coloneqq
O$.
Before we turn to the proof of Proposition 2 we make some further
observations.
###### Lemma 1.
Let $U$ be a point sampled uniformly at random from $\mathbb{S}^{d-1}$. Then
$(\varphi_{1}(U),\ldots,\varphi_{d-2}(U))$ and $\theta(U)$ are independent,
and $\theta(U)$ is uniformly distributed on $(0,2\pi)$.
###### Proof.
Fix some arbitrary $x\in\mathbb{R}^{d}$. Notice from the definition of
spherical coordinates in Equations (8) that the angles of $x$ and $cx$ agree
for any $c>0$, i.e.
$\displaystyle\varphi_{k}(x)$ $\displaystyle=\varphi_{k}\left(cx\right),\qquad
1\leq k\leq d-2,\quad\text{and}\quad\theta(x)=\theta\left(cx\right).$
Moreover, by (8) the angles $\varphi_{1},\ldots,\varphi_{d-2}$ are determined
by $x_{1},\ldots,x_{d-2}$ and $r$, and hence depend on $x_{d-1}$ and $x_{d}$
only through $\sqrt{x_{d-1}^{2}+x_{d}^{2}}$. Note also that the projection of
$x$ onto the plane $x_{1}=\cdots=x_{d-2}=0$ has distance
$\sqrt{x_{d-1}^{2}+x_{d}^{2}}=r\prod_{j=1}^{d-2}\sin\varphi_{j}$ from the
(Euclidean) origin by the Pythagorean identity
$\sin^{2}\varphi+\cos^{2}\varphi=1$, and since $\sin\varphi\geq 0$ for
$\varphi\in[0,\pi)$. Consequently,
$\sin\theta=x_{d-1}/\sqrt{x_{d-1}^{2}+x_{d}^{2}}$ and
$\cos\theta=x_{d}/\sqrt{x_{d-1}^{2}+x_{d}^{2}}$, so $\theta$ depends only on
the direction of $(x_{d-1},x_{d})$; geometrically, $\theta$ is the angle
enclosed by the positive $x_{d}$-axis and the line through the origin and the
projection $(0,\ldots,0,x_{d-1},x_{d})$ of $x$ onto the plane
$x_{1}=\cdots=x_{d-2}=0$. Recall now that $U=_{d}X/\lVert X\rVert$ with
$X=(X_{1},\ldots,X_{d})$ having i.i.d. standard Gaussian coordinates. Since
the distribution of $(X_{d-1},X_{d})$ is invariant under rotations in the
plane, $\theta(X)$ is uniformly distributed on $(0,2\pi)$ and independent of
$(X_{1},\ldots,X_{d-2},\sqrt{X_{d-1}^{2}+X_{d}^{2}})$, and hence of
$(\varphi_{1}(X),\ldots,\varphi_{d-2}(X))$. By the scale invariance noted
above, $(\varphi_{1}(U),\ldots,\varphi_{d-2}(U),\theta(U))=_{d}(\varphi_{1}(X),\ldots,\varphi_{d-2}(X),\theta(X))$,
and the claim follows. ∎
The last lemma allows us to show that the (geodesic) distances between
consecutively ordered i.i.d. uniformly distributed points on $\mathbb{S}^{d-1}$
are exchangeable. Recall that a finite sequence $(Y_{1},\ldots,Y_{m})$ of
r.v.s is called exchangeable if for any permutation $\pi$ the random vectors
$(Y_{1},\ldots,Y_{m})$ and $(Y_{\pi(1)},\ldots,Y_{\pi(m)})$ have the same
distribution. Clearly, if $(Y_{1},\ldots,Y_{m})$ is exchangeable, then it is
also cyclically exchangeable.
###### Proposition 3.
Fix $m\in\mathbb{N}$. Let $U_{1},\ldots,U_{m}$ be a sequence of i.i.d. r.v.s
with uniform distribution on $\mathbb{S}^{d-1}$. Then the random vector of
geodesic distances
$\displaystyle\left(d(U_{m:k},U_{m:k-1})\right)_{k=1}^{m+1}$
between consecutive order statistics $U_{m:0},U_{m:1},\ldots,U_{m:m+1}$ is
exchangeable, where $U_{m:0}\coloneqq U_{m:m+1}\coloneqq O\in\mathbb{S}^{d-1}$
is a fixed (deterministic) point with $\theta(O)=0$.
###### Proof.
We make use of the fact that the geodesic distance $d(x,x^{\prime})$ between
two points $x,x^{\prime}\in\mathbb{S}^{d-1}$ satisfies $\cos
d(x,x^{\prime})=x\cdot x^{\prime}$, where $x\cdot
x^{\prime}\coloneqq\sum_{k=1}^{d}x_{k}x^{\prime}_{k}$ denotes the scalar
product of $x$ and $x^{\prime}$, cf. [3, p. 141–142]. Thus it suffices to show
that $(U_{m:k}\cdot U_{m:k-1})_{k=1}^{m+1}$ is exchangeable. Now, denoting by
$(x)_{\ell}$ the $\ell$-th component of $x$, by definition of the scalar
product,
$\displaystyle U_{m:k}\cdot U_{m:k-1}=$
$\displaystyle\sum_{\ell=1}^{d}(U_{m:k})_{\ell}(U_{m:k-1})_{\ell},$
and by the implicit definition of spherical coordinates, Equations (8), the
last term equals
$\displaystyle=$
$\displaystyle\sum_{\ell=1}^{d-2}\cos\varphi_{\ell}(U_{m:k})\cos\varphi_{\ell}(U_{m:k-1})\prod_{j=1}^{\ell-1}\sin\varphi_{j}(U_{m:k})\sin\varphi_{j}(U_{m:k-1})$
$\displaystyle+\left(\sin\theta(U_{m:k})\sin\theta(U_{m:k-1})+\cos\theta(U_{m:k})\cos\theta(U_{m:k-1})\right)\prod_{j=1}^{d-2}\sin\varphi_{j}(U_{m:k})\sin\varphi_{j}(U_{m:k-1})$
$\displaystyle=$
$\displaystyle\sum_{\ell=1}^{d-2}\cos\varphi_{\ell}(U_{m:k})\cos\varphi_{\ell}(U_{m:k-1})\prod_{j=1}^{\ell-1}\sin\varphi_{j}(U_{m:k})\sin\varphi_{j}(U_{m:k-1})$
$\displaystyle+\cos(\theta(U_{m:k})-\theta(U_{m:k-1}))\prod_{j=1}^{d-2}\sin\varphi_{j}(U_{m:k})\sin\varphi_{j}(U_{m:k-1}),$
where we used the identity
$\cos(\varphi-\varphi^{\prime})=\cos\varphi\cos\varphi^{\prime}+\sin\varphi\sin\varphi^{\prime}$
in the last equation. By Lemma 1, $(\theta(U_{k}))_{k=1}^{m}$ is an i.i.d.
sequence of uniform $(0,2\pi)$ r.v.s. Consequently, the gaps
$(\theta(U_{m:k})-\theta(U_{m:k-1}))_{k=1}^{m+1}$ rescaled by $1/(2\pi)$ obey
a Dirichlet distribution with all parameters equal to one, cf. Theorem V.2.2
in [15]. In particular, the gaps are exchangeable. Moreover, since the
ordering is determined by the angles $\theta(U_{1}),\ldots,\theta(U_{m})$
alone, $(\varphi_{1}(U_{m:k}),\ldots,\varphi_{d-2}(U_{m:k}))_{k=1}^{m}$ is
again an i.i.d. sequence, which by Lemma 1 is independent of the gaps.
Combining this with the expansion of the dot product in the last display, we
see that $(U_{m:k}\cdot U_{m:k-1})_{k=1}^{m+1}$ is exchangeable. ∎
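The Dirichlet claim for the circular gaps used above is easy to probe by simulation. The following sketch (an illustration we add, with arbitrary parameters) verifies that the $m+1$ gaps of the ordered angles, anchored at $\theta(O)=0$ (identified with $2\pi$), share the common mean $2\pi/(m+1)$ predicted by the Dirichlet$(1,\ldots,1)$ distribution; of course this tests only a consequence of the claim, not the full distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_trials = 5, 200_000

theta = np.sort(rng.uniform(0.0, 2 * np.pi, size=(n_trials, m)), axis=1)

# Gaps including the anchor theta(O) = 0, identified with 2*pi:
padded = np.concatenate(
    [np.zeros((n_trials, 1)), theta, np.full((n_trials, 1), 2 * np.pi)], axis=1)
gaps = np.diff(padded, axis=1)

# Rescaled by 1/(2*pi), the m+1 gaps are Dirichlet(1,...,1); in particular
# they are exchangeable with common mean 1/(m+1).
print((gaps / (2 * np.pi)).mean(axis=0))   # every entry close to 1/(m+1) = 1/6
```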
We are now ready to show Proposition 2.
###### Proof of Proposition 2.
Set $d_{H}(s,t)\coloneqq(d(s,t))^{2H}$, and define the function
$\tilde{c}\colon(\mathbb{S}^{d-1})^{4}\to\mathbb{R}$ by
$\displaystyle\tilde{c}(s,s^{\prime},t,t^{\prime})$
$\displaystyle\coloneqq\operatorname{Cov}(X_{s^{\prime}}-X_{s},X_{t^{\prime}}-X_{t})$
$\displaystyle=\operatorname{Cov}(X_{s^{\prime}},X_{t^{\prime}})-\operatorname{Cov}(X_{s^{\prime}},X_{t})-\operatorname{Cov}(X_{s},X_{t^{\prime}})+\operatorname{Cov}(X_{s},X_{t})$
$\displaystyle=c(s^{\prime},t^{\prime})-c(s^{\prime},t)-c(s,t^{\prime})+c(s,t)$
$\displaystyle=\frac{1}{2}\left(d_{H}(s^{\prime},t)+d_{H}(s,t^{\prime})-d_{H}(s^{\prime},t^{\prime})-d_{H}(s,t)\right),$
where the covariance function of $X$,
$\displaystyle c(s,t)$
$\displaystyle=\frac{1}{2}(d_{H}(O,s)+d_{H}(O,t)-d_{H}(s,t)),\qquad
s,t\in\mathbb{S}^{d-1},$
can be computed from (4). Conditionally given $U\coloneqq(U_{1},\ldots,U_{m})$
the random vector of increments $(X_{U_{m:k}}-X_{U_{m:k-1}})_{k=1}^{m+1}$ has
characteristic function
$\displaystyle\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}s_{k}\left(X_{U_{m:k}}-X_{U_{m:k-1}}\right)\right)\Big{|}\,U\right]$
$\displaystyle=\exp\left(-\frac{1}{2}s^{\intercal}Rs\right)\qquad(s\in\mathbb{R}^{m+1}),$
(10)
where $R=(R_{ij})$ is the $(m+1)\times(m+1)$ covariance matrix (a random
matrix, as it depends on $(U_{1},\ldots,U_{m})$) defined by
$R_{ij}\coloneqq\tilde{c}(U_{m:i},U_{m:i-1},U_{m:j},U_{m:j-1})$. Let
$\pi\in\operatorname{Cyc}(m+1)$ be an arbitrary but fixed cyclic permutation.
If we can show that the matrices $(R_{ij})_{i,j\in\\{1,\ldots,m+1\\}}$ and
$(R_{\pi(i)\pi(j)})_{i,j\in\\{1,\ldots,m+1\\}}$ have the same distribution,
the claim is proved. Assume without loss of generality that $i\leq j-1$.
Notice that by definition of $\tilde{c}$
$\displaystyle\quad R_{ij}$
$\displaystyle=\frac{1}{2}\left(d_{H}(U_{m:i-1},U_{m:j})+d_{H}(U_{m:i},U_{m:j-1})-d_{H}(U_{m:i-1},U_{m:j-1})-d_{H}(U_{m:i},U_{m:j})\right)$
$\displaystyle=_{d}\frac{1}{2}\bigg{(}d_{H}(U_{m:\pi(i)-1},U_{m:\pi(j)})+d_{H}(U_{m:\pi(i)},U_{m:\pi(j)-1})$
$\displaystyle\qquad-
d_{H}(U_{m:\pi(i)-1},U_{m:\pi(j)-1})-d_{H}(U_{m:\pi(i)},U_{m:\pi(j)})\bigg{)}$
$\displaystyle=R_{\pi(i)\pi(j)},$
where we applied Proposition 3 in the second equality. The identity in the
last math display implies that $(R_{ij})_{i,j\in\\{1,\ldots,m+1\\}}$ has the
same distribution as $(R_{\pi(i)\pi(j)})_{i,j\in\\{1,\ldots,m+1\\}}$ and thus
the claim follows. ∎
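For $d=2$ (the circle), Proposition 2 admits a quick partial sanity check by Monte Carlo: the matrix of unconditional second moments of the increments must be invariant under a simultaneous cyclic permutation of its rows and columns. The sketch below (ours, not from the text; it tests only second moments, not the full distribution, and all parameters are arbitrary) averages the conditional increment covariance $DCD^{\intercal}$ over sampled angles.

```python
import numpy as np

rng = np.random.default_rng(2)
H, m, n_trials = 0.4, 4, 20_000

def dist(a, b):
    # geodesic distance on S^1 between points given by angles in [0, 2*pi)
    return np.pi - np.abs(np.pi - np.abs(a - b))

def cov(a, b):
    # covariance c(s,t) of spherical fBM on S^1 with X pinned to 0 at angle 0
    dH = lambda u, v: dist(u, v) ** (2 * H)
    return 0.5 * (dH(0.0, a) + dH(0.0, b) - dH(a, b))

# Increment map D: values at the m+2 ordered points (O at both ends)
# are mapped to the m+1 consecutive increments.
D = np.zeros((m + 1, m + 2))
D[np.arange(m + 1), np.arange(1, m + 2)] = 1.0
D[np.arange(m + 1), np.arange(m + 1)] = -1.0

R_mean = np.zeros((m + 1, m + 1))
for _ in range(n_trials):
    t = np.sort(rng.uniform(0.0, 2 * np.pi, m))
    pts = np.concatenate([[0.0], t, [0.0]])   # O, U_{m:1}, ..., U_{m:m}, O
    C = cov(pts[:, None], pts[None, :])       # conditional covariance of X at pts
    R_mean += D @ C @ D.T / n_trials          # average of R = D C D^T over U

shift = np.roll(np.arange(m + 1), 1)          # one cyclic shift
print(np.abs(R_mean - R_mean[np.ix_(shift, shift)]).max())  # Monte Carlo noise only
```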
The last ingredient in our derivation of the uniform distribution of the
occupation time of spherical fBM, Theorem 2, is a fluctuation result on random
walk bridges. We construct a random walk bridge
$S_{0}=0,S_{1},\ldots,S_{m},S_{m+1}=0$ from the (cyclically exchangeable)
increments in (9) by setting
$\displaystyle S_{k}$
$\displaystyle\coloneqq\sum_{\ell=1}^{k}(X_{U_{m:\ell}}-X_{U_{m:\ell-1}})=X_{U_{m:k}},\qquad
1\leq k\leq m+1.$
The event
$\\{X_{U_{1}}>0,\ldots,X_{U_{m}}>0\\}=\\{X_{U_{m:1}}>0,\ldots,X_{U_{m:m}}>0\\}$
may now be viewed as the event $\\{S_{1}>0,\ldots,S_{m}>0\\}$, i.e. that
$(S_{k})_{k=0}^{m+1}$ is positive (except for its two endpoints
$S_{0}=S_{m+1}=0$). The fluctuation result on random walk bridges (with
cyclically exchangeable increments) will be formulated quite generally and may
be of independent interest. We stress that a similar result by Sparre Andersen
[42], cf. the exposition in [28, Section 1.3], is not sufficient for our
purposes, as it assumes exchangeable increments rather than only cyclically
exchangeable increments. We refer to [4] for similar results, but they do not
apply to bridges, as needed in our case. Our notation partly follows the
exposition in [28].
Fix $m\in\mathbb{N}$. Let $\xi_{1},\ldots,\xi_{m}$ be real r.v.s. Define the
partial sums $(S_{k})_{k=0}^{m}$ by
$\displaystyle S_{0}\coloneqq 0,\qquad S_{k}$
$\displaystyle\coloneqq\xi_{1}+\cdots+\xi_{k},\qquad 1\leq k\leq m.$
We impose the following assumptions on the increments
$\xi_{1},\ldots,\xi_{m}$:
1. i)
Bridge property: $S_{m}=\xi_{1}+\ldots+\xi_{m}=0$ a.s.
2. ii)
Cyclic exchangeability: For every cyclic permutation
$\pi\in\operatorname{Cyc}(m)$ of $\\{1,\ldots,m\\}$ we have the distributional
identity
$\displaystyle(\xi_{1},\ldots,\xi_{m})=_{d}(\xi_{\pi(1)},\ldots,\xi_{\pi(m)}).$
3. iii)
For any $1\leq k\leq m-1$ the distribution of $S_{k}$ has no atoms.
We call $(S_{k})_{k=0}^{m}$ a random walk bridge with cyclically exchangeable
increments.
###### Proposition 4.
Fix $m\in\mathbb{N}$. Let $(S_{k})_{k=0}^{m}$ be a random walk bridge with
cyclically exchangeable increments, so that in particular $S_{k}$ has no atoms
for any $1\leq k\leq m-1$. Then
$\displaystyle\mathbb{P}\left\\{S_{1}>0,\ldots,S_{m-1}>0\right\\}$
$\displaystyle=\frac{1}{m}.$
###### Proof.
Define
$\tau:=\min\\{j\in\\{0,\ldots,m-1\\}:S_{j}=\min_{k\in\\{0,\ldots,m-1\\}}S_{k}\\}$
to be the index where the minimum of $S_{0}=0,S_{1},\ldots,S_{m-1}$ is
attained. We show that
$\mathbb{P}\left\\{\tau=j\right\\}=\mathbb{P}\left\\{\tau=0\right\\}$ for all
$j=0,1,\ldots,m-1$. Since
$1=\sum_{j=0}^{m-1}\mathbb{P}\left\\{\tau=j\right\\}$, this will imply
$\mathbb{P}\left\\{\tau=0\right\\}=\frac{1}{m}$. Noting further that
$\mathbb{P}\left\\{\tau=0\right\\}=\mathbb{P}\left\\{S_{1}>0,\ldots,S_{m-1}>0\right\\}$
we will have proved our claim. In order to prove
$\mathbb{P}\left\\{\tau=j\right\\}=\mathbb{P}\left\\{\tau=0\right\\}$ we will
use the cyclic permutation $\pi$ given by
$\pi(i):=\begin{cases}i+j&:i=1,\ldots,m-j,\\\
i-m+j&:i=m-j+1,\ldots,m.\end{cases}$
Note that by cyclic exchangeability
$\mathbb{P}\left\\{\tau=0\right\\}=\mathbb{P}\left\\{0<\sum_{i=1}^{k}\xi_{i},k=1,\ldots,m-1\right\\}=\mathbb{P}\left\\{0<\sum_{i=1}^{k}\xi_{\pi(i)},k=1,\ldots,m-1\right\\}.$
We are going to analyse the conditions $0<\sum_{i=1}^{k}\xi_{\pi(i)}$,
$k=1,\ldots,m-1$, and see that they are equivalent to the event
$\\{\tau=j\\}$. Indeed, first note that for $k=1,\ldots,m-j-1$
$0<\sum_{i=1}^{k}\xi_{\pi(i)}=\sum_{i=1}^{k}\xi_{i+j}=\sum_{i=j+1}^{k+j}\xi_{i}=S_{k+j}-S_{j}.$
This means that $S_{j}<S_{\ell}$ for all $\ell=j+1,\ldots,m-1$. Second, note
that for $k=m-j,\ldots,m-1$
$\displaystyle 0$ $\displaystyle<$
$\displaystyle\sum_{i=1}^{k}\xi_{\pi(i)}=\sum_{i=1}^{m-j}\xi_{\pi(i)}+\sum_{i=m-j+1}^{k}\xi_{\pi(i)}=\sum_{i=1}^{m-j}\xi_{i+j}+\sum_{i=m-j+1}^{k}\xi_{i-m+j}$
$\displaystyle=$
$\displaystyle\sum_{i=j+1}^{m}\xi_{i}+\sum_{i=1}^{k-m+j}\xi_{i}=S_{m}-S_{j}+S_{k-m+j}=0-S_{j}+S_{k-m+j}.$
This means that $S_{j}<S_{\ell}$ for all $\ell=0,\ldots,j-1$. Combining both
observations, the conditions $0<\sum_{i=1}^{k}\xi_{\pi(i)}$, $k=1,\ldots,m-1$,
hold if and only if $S_{j}<S_{\ell}$ for all
$\ell\in\\{0,\ldots,m-1\\}\setminus\\{j\\}$, i.e. if and only if $\tau=j$.
Hence $\mathbb{P}\left\\{\tau=j\right\\}=\mathbb{P}\left\\{\tau=0\right\\}$,
which completes the proof. ∎
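Proposition 4 is also easy to probe numerically. In the sketch below (our illustration, with arbitrary parameters), centring i.i.d. Gaussian increments produces a random walk bridge with exchangeable, hence cyclically exchangeable, atomless increments, and the empirical persistence probability matches $1/m$.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n_trials = 6, 400_000

xi = rng.standard_normal((n_trials, m))
xi -= xi.mean(axis=1, keepdims=True)   # centring forces S_m = 0: a bridge with
                                       # exchangeable, atomless increments
S = np.cumsum(xi, axis=1)              # S_1, ..., S_m (S_m = 0 up to rounding)

estimate = np.mean(np.all(S[:, : m - 1] > 0, axis=1))
print(estimate, 1 / m)                 # estimate is close to 1/m
```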
We are now ready to prove Theorem 2.
###### Proof of Theorem 2.
Recall that the uniform distribution on $(0,1)$ has moment sequence
$\int_{0}^{1}x^{m}dx=\frac{1}{m+1}$, $m\in\mathbb{N}$. For this reason,
according to Proposition 1 it is sufficient to show that
$\mathbb{P}\left\\{X_{U_{1}}>0,\ldots,X_{U_{m}}>0\right\\}=\frac{1}{m+1}$ for
$m\geq 1$. From Proposition 2 and the construction above we can view
$\\{X_{U_{1}}>0,\ldots,X_{U_{m}}>0\\}=\\{X_{U_{m:1}}>0,\ldots,X_{U_{m:m}}>0\\}$
as the event $\\{S_{1}>0,\ldots,S_{m}>0\\}$, where
$S_{0}=0,S_{1},\ldots,S_{m+1}=0$ is a random walk bridge with cyclically
exchangeable increments given by (9); note that each $S_{k}=X_{U_{m:k}}$,
$1\leq k\leq m$, is a mixture of non-degenerate centred Gaussian laws and
hence atomless. The claim thus follows from Proposition 4, applied with $m+1$
in place of $m$. ∎
### 2.2 Proofs for the results on Lévy processes and Lévy bridges
#### 2.2.1 Lévy processes
For the proof of Theorem 3 we rely on some well-known results of Spitzer and
some basic facts on Bell polynomials that we now recall. Let
$\xi_{1},\xi_{2},\ldots$ denote a sequence of i.i.d. real-valued random
variables with partial sums $S_{n}\coloneqq\sum_{k=1}^{n}\xi_{k}$,
$n\in\mathbb{N}$. As a consequence of what is now called Spitzer’s identity,
Spitzer obtained the following fact.
###### Corollary 2 (Corollary 2 in [44], Theorem 1 in [43]).
The survival probabilities of the partial sums $S_{1},S_{2},\ldots$ have
generating function
$\displaystyle\sum_{k=0}^{\infty}t^{k}\mathbb{P}\left\\{S_{1}\geq
0,\ldots,S_{k}\geq 0\right\\}$
$\displaystyle=\exp\left(\sum_{k=1}^{\infty}\frac{t^{k}}{k}\mathbb{P}\left\\{S_{k}\geq
0\right\\}\right),\qquad|t|<1.$ (11)
This identity still holds when the inequalities in (11) are replaced by strict
inequalities.
Let us rewrite this identity in a more combinatorial form that is better
suited for our purposes. To this end we utilize the Bell polynomials. For any
two sequences of real numbers $v_{\bullet}=(v_{k})_{k\in\mathbb{N}}$ and
$w_{\bullet}=(w_{k})_{k\in\mathbb{N}}$ let
$B_{k}(v_{\bullet},w_{\bullet})\coloneqq\sum_{\ell=1}^{k}v_{\ell}B_{k,\ell}(w_{\bullet}),\quad
k\in\mathbb{N},$
denote the $k$-th complete Bell polynomial (associated with
$(v_{\bullet},w_{\bullet})$), where
$B_{k,\ell}(w_{\bullet})\coloneqq\sum_{\rho\in{\mathscr{P}}_{k,\ell}}\prod_{B\in\rho}w_{\texttt{\\#}B},\quad
1\leq\ell\leq k,$
denotes the $(k,\ell)$-th partial Bell polynomial (associated with
$w_{\bullet}$) and ${\mathscr{P}}_{k,\ell}$ denotes the set of all partitions
of $\\{1,\ldots,k\\}$ that contain exactly $\ell$ blocks. We use the
well-known fact, cf. [40, Equation (1.11)], that for any two sequences
$v_{\bullet},w_{\bullet}$ the exponential generating function of the
associated complete Bell polynomials is given by
$\displaystyle\sum_{k=1}^{\infty}B_{k}(v_{\bullet},w_{\bullet})\frac{x^{k}}{k!}=v(w(x)),$
(12)
whenever either of these quantities is well-defined and where
$v(x)\coloneqq\sum_{k\geq 1}v_{k}\frac{x^{k}}{k!}$ and
$w(y)\coloneqq\sum_{k\geq 1}w_{k}\frac{y^{k}}{k!}$ denote the exponential
generating function of $v_{\bullet}=(v_{k})_{k\in\mathbb{N}}$ and
$w_{\bullet}=(w_{k})_{k\in\mathbb{N}}$, respectively. For more information on
Bell polynomials, the interested reader is referred to the lecture notes [40].
We will work with the following reformulation of Spitzer’s result:
###### Corollary 3.
For the survival probability of the sequence of partial sums we obtain
$\displaystyle\mathbb{P}\left\\{S_{1}\geq 0,\ldots,S_{k}\geq 0\right\\}$
$\displaystyle=\frac{1}{k!}\,\sum_{\rho\in{\mathscr{P}}_{k}}\prod_{B\in\rho}(\texttt{\\#}B-1)!\,\mathbb{P}\left\\{S_{\texttt{\\#}B}\geq
0\right\\};$ (13)
and the identity in (13) still holds when the inequalities are replaced by
strict inequalities.
###### Proof.
Define the sequences $v_{\bullet}\coloneqq(v_{k})_{k\in\mathbb{N}}$ and
$w_{\bullet}\coloneqq(w_{k})_{k\in\mathbb{N}}$ by setting
$\displaystyle v_{k}=1\quad\text{and}\quad
w_{k}=(k-1)!\,\mathbb{P}\left\\{S_{k}\geq 0\right\\},\quad k\geq 1.$
With this particular choice for $v_{\bullet},w_{\bullet}$, we find that
$v(x)=e^{x}-1$ and
$w(x)=\sum_{k=1}^{\infty}\frac{x^{k}}{k}\mathbb{P}\left\\{S_{k}\geq
0\right\\}$ and we can observe that the right hand side in (11) equals
$v(w(t))+1$. For $v(w(t))$ we can use the expansion (12). Comparing this to
the left hand side of (11) we find that
$\displaystyle\mathbb{P}\left\\{S_{1}\geq 0,\ldots,S_{k}\geq 0\right\\}$
$\displaystyle=\frac{1}{k!}\,B_{k}(v_{\bullet},w_{\bullet})$
$\displaystyle=\frac{1}{k!}\,\sum_{\ell=1}^{k}B_{k,\ell}(w_{\bullet})$
$\displaystyle=\frac{1}{k!}\,\sum_{\ell=1}^{k}\sum_{\rho\in{\mathscr{P}}_{k,\ell}}\prod_{B\in\rho}w_{\texttt{\\#}B}$
$\displaystyle=\frac{1}{k!}\,\sum_{\rho\in{\mathscr{P}}_{k}}\prod_{B\in\rho}(\texttt{\\#}B-1)!\,\mathbb{P}\left\\{S_{\texttt{\\#}B}\geq
0\right\\},$
as claimed, where we used that ${\mathscr{P}}_{k}$ is the disjoint union of
the ${\mathscr{P}}_{k,\ell}$ for $\ell=1,\ldots,k$. The claim for strict
inequalities follows from the same proof with strict inequalities throughout. ∎
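For symmetric steps without atoms (so that $\mathbb{P}\left\\{S_{k}\geq 0\right\\}=1/2$ for every $k$), formula (13) collapses to the classical Sparre Andersen value $\binom{2k}{k}4^{-k}$. The following sketch (our illustration; the small recursive partition generator is a hypothetical helper, not from the text) compares the right-hand side of (13), summed over all set partitions, with a Monte Carlo estimate of the left-hand side for Gaussian steps.

```python
import numpy as np
from itertools import combinations
from math import factorial

def set_partitions(elems):
    """Yield all partitions of the list elems as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for r in range(len(rest) + 1):
        for comb in combinations(rest, r):
            remaining = [e for e in rest if e not in comb]
            for part in set_partitions(remaining):
                yield [[first, *comb]] + part

k, n_trials = 4, 400_000
rng = np.random.default_rng(4)

# Left-hand side of (13) by Monte Carlo; Gaussian steps are symmetric and
# atomless, so P{S_j >= 0} = 1/2 for every j.
S = np.cumsum(rng.standard_normal((n_trials, k)), axis=1)
lhs = np.mean(np.all(S >= 0, axis=1))

# Right-hand side of (13).
rhs = sum(np.prod([factorial(len(B) - 1) * 0.5 for B in part])
          for part in set_partitions(list(range(k)))) / factorial(k)

print(lhs, rhs)   # both close to binom(2k, k)/4^k = 35/128 for k = 4
```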
We are now prepared to prove Theorem 3. The proof idea is similar to our proof
of the Brownian (or stable Lévy) arcsine law, Theorem 1. The moments are
rewritten as persistence probabilities at random times that come from the
normalised jump-times of an independent Poisson process. As we are not
assuming the scaling property we cannot scale out the terminal time. This
forces us to work with a Lévy process extension of the sampling formula at
Poisson times. Combined with the above variant of Spitzer’s identity the claim
can be deduced.
###### Proof of Theorem 3.
For the proof it is convenient to turn to a variant of the sampling method.
Specifically, instead of starting with the $m$-th moment of the sojourn time
of $X$ up until time $t$, we focus instead on its Laplace transform. We use
the Poisson sampling formula
$\displaystyle F(q)$
$\displaystyle\coloneqq\int_{0}^{\infty}e^{-qt}\mathbb{E}[A_{t}^{m}]dt=\frac{m!}{q^{m+1}}\mathbb{P}\left\\{X_{T_{1}^{(q)}}>0,\ldots,X_{T_{m}^{(q)}}>0\right\\},\qquad
m\in\mathbb{N},q>0,$ (14)
where $(T_{k}^{(q)})$ denotes the sequence of waiting times in a standard
Poisson process of intensity $q>0$ independent of $X$. The formula was used,
for instance, in [24], and has most certainly appeared elsewhere. Here is a
quick proof for completeness. First note that the Markov property of the
random walk $X_{T_{1}^{(q)}},X_{T_{2}^{(q)}},\ldots$ yields
$\displaystyle\mathbb{P}\left\\{X_{T_{1}^{(q)}}>0,\ldots,X_{T_{m}^{(q)}}>0\right\\}$
$\displaystyle=\int_{x_{1}>0}\cdots\int_{x_{m}>0}P(dx_{1})\cdots
P(dx_{m}-x_{m-1})$
$\displaystyle=q^{m}\int_{x_{1}>0}\cdots\int_{x_{m}>0}U^{q}(dx_{1})\cdots
U^{q}(dx_{m}-x_{m-1}),$
where $P(A):=\mathbb{P}\left\\{X_{T_{1}^{(q)}}\in A\right\\}$ is the jump
distribution of the random walk and the quantity
$U^{q}(A):=\int_{0}^{\infty}e^{-qt}\mathbb{P}\left\\{X_{t}\in A\right\\}dt$ is
the so-called $q$-potential measure of $X$. Here we used that $P=qU^{q}$. To
rewrite $F$ denote by $\bar{X}$ the killed process, i.e. the Lévy process
killed at an independent exponential time with parameter $q$. By $\bar{p}$
denote the transition kernel of the killed process. Then Fubini, monotone
convergence, the Markov property, and symmetry of the integrand yield
$\displaystyle\quad\int_{0}^{\infty}e^{-qt}\mathbb{E}[A_{t}^{m}]dt$
$\displaystyle=\frac{1}{q}\int_{0}^{\infty}\cdots\int_{0}^{\infty}\mathbb{E}[\mathbf{1}_{\bar{X}_{s_{1}}>0}\cdots\mathbf{1}_{\bar{X}_{s_{m}}>0}]ds_{m}\cdots
ds_{1}$
$\displaystyle=\frac{1}{q}\lim_{N_{1},...,N_{m}\to\infty}\int_{0}^{N_{1}}\cdots\int_{0}^{N_{m}}\mathbb{E}[\mathbf{1}_{\bar{X}_{s_{1}}>0}\cdots\mathbf{1}_{\bar{X}_{s_{m}}>0}]ds_{m}\cdots
ds_{1}$
$\displaystyle=\frac{m!}{q}\lim_{N_{1},...,N_{m}\to\infty}\int_{0}^{N_{1}}\int_{s_{1}}^{N_{2}}\cdots\int_{s_{m-1}}^{N_{m}}\mathbb{E}[\mathbf{1}_{\bar{X}_{s_{1}}>0}\cdots\mathbf{1}_{\bar{X}_{s_{m}}>0}]ds_{m}\cdots
ds_{1}$
$\displaystyle=\frac{m!}{q}\lim_{N_{1},...,N_{m}\to\infty}\int_{0}^{N_{1}}\int_{s_{1}}^{N_{2}}\cdots\int_{s_{m-1}}^{N_{m}}$
$\displaystyle\qquad\times\int_{x_{1}>0}\cdots\int_{x_{m}>0}\bar{p}_{s_{1}}(dx_{1})\cdots\bar{p}_{s_{m}-s_{m-1}}(dx_{m}-x_{m-1})ds_{m}\cdots
ds_{1}$
$\displaystyle=\frac{m!}{q}\lim_{N_{1},...,N_{m}\to\infty}\int_{0}^{N_{1}}\int_{0}^{N_{2}-s_{1}}\cdots\int_{0}^{N_{m}-s_{m-1}}$
$\displaystyle\qquad\times\int_{x_{1}>0}\cdots\int_{x_{m}>0}\bar{p}_{s_{1}}(dx_{1})\cdots\bar{p}_{s_{m}}(dx_{m}-x_{m-1})ds_{m}\cdots
ds_{1}$
$\displaystyle=\frac{m!}{q}\int_{x_{1}>0}\cdots\int_{x_{m}>0}U^{q}(dx_{1})\cdots
U^{q}(dx_{m}-x_{m-1}).$
Combining the two previous displays yields (14). Combined with Corollary 3 we
obtain
$\displaystyle F(q)$
$\displaystyle=\frac{1}{q^{m+1}}\sum_{\rho\in{\mathscr{P}}_{m}}\prod_{B\in\rho}(\texttt{\\#}B-1)!\,\mathbb{P}\left\\{S_{\texttt{\\#}B}>0\right\\},$
with $S_{k}\coloneqq X_{T_{k}^{(q)}}$. Since $T_{k}^{(q)}$ is gamma
distributed with parameters $k$ and $q$, i.e. with density $s\mapsto
q^{k}s^{k-1}e^{-qs}/(k-1)!\,\mathbf{1}\\{s>0\\}$, setting
$p_{s}\coloneqq\mathbb{P}\left\\{X_{s}>0\right\\}$ we conclude
$\displaystyle F(q)$
$\displaystyle=\frac{1}{q}\sum_{\rho\in{\mathscr{P}}_{m}}\prod_{B\in\rho}\int_{0}^{\infty}s^{\texttt{\\#}B-1}p_{s}e^{-qs}ds.$
The integral on the right-hand side is a Laplace transform, and we will denote
the Laplace transform of a function $f:[0,\infty)\to\mathbb{R}$ by
$(\mathcal{L}f)(q):=\int_{0}^{\infty}f(s)e^{-qs}ds$. Using basic properties of
Laplace transforms we can thus write
$\displaystyle F(q)$
$\displaystyle=\frac{1}{q}\sum_{\rho\in{\mathscr{P}}_{m}}\prod_{B\in\rho}\mathcal{L}\left(s^{\texttt{\\#}B-1}p_{s}\right)(q)$
$\displaystyle=\frac{1}{q}\mathcal{L}\left(\sum_{\rho\in{\mathscr{P}}_{m}}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}s^{\texttt{\\#}B-1}p_{s}\right)(q)$
$\displaystyle=\mathcal{L}\left(1\ast\sum_{\rho\in{\mathscr{P}}_{m}}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}s^{\texttt{\\#}B-1}p_{s}\right)(q).$
From this calculation, and by uniqueness of Laplace transforms, we find
$\displaystyle\mathbb{E}[A_{t}^{m}]$
$\displaystyle=\left(1\ast\sum_{\rho\in{\mathscr{P}}_{m}}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}s^{\texttt{\\#}B-1}p_{s}\right)(t),$
which is the claim. ∎
Before we turn to the proof of Corollary 1 we provide a helpful lemma.
###### Lemma 2.
Fix $m\in\mathbb{N}$ and positive real numbers $a_{1},\ldots,a_{m}>0.$ Then,
for $t>0$,
$\displaystyle\left(\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{k=1}^{m}\left(s^{a_{k}-1}\right)\right)(t)=\Gamma\left(\sum_{k=1}^{m}a_{k}\right)^{-1}\prod_{k=1}^{m}\Gamma(a_{k})\cdot
t^{\sum_{k=1}^{m}a_{k}-1}.$
###### Proof.
We show the claim by induction on $m$. It is clear that the claim holds for
$m=1.$ Assume now the claim is true for some positive integer
$m\in\mathbb{N}.$ Then, using the induction hypothesis in the second equality,
$\displaystyle\left(\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{k=1}^{m+1}\left(s^{a_{k}-1}\right)\right)(t)$
$\displaystyle=\left(\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{k=1}^{m}\left(s^{a_{k}-1}\right)\ast
s^{a_{m+1}-1}\right)(t)$
$\displaystyle=\left(\Gamma\left(\sum_{k=1}^{m}a_{k}\right)^{-1}\prod_{k=1}^{m}\Gamma(a_{k})\cdot
s^{\sum_{k=1}^{m}a_{k}-1}\ast s^{a_{m+1}-1}\right)(t)$
$\displaystyle=\Gamma\left(\sum_{k=1}^{m}a_{k}\right)^{-1}\prod_{k=1}^{m}\Gamma(a_{k})\int_{0}^{t}(t-s)^{\sum_{k=1}^{m}a_{k}-1}s^{a_{m+1}-1}ds$
$\displaystyle=\Gamma\left(\sum_{k=1}^{m}a_{k}\right)^{-1}\prod_{k=1}^{m}\Gamma(a_{k})\cdot
t^{\sum_{k=1}^{m+1}a_{k}-1}\int_{0}^{1}(1-s)^{\sum_{k=1}^{m}a_{k}-1}s^{a_{m+1}-1}ds$
$\displaystyle=\Gamma\left(\sum_{k=1}^{m}a_{k}\right)^{-1}\prod_{k=1}^{m}\Gamma(a_{k})\cdot
t^{\sum_{k=1}^{m+1}a_{k}-1}\frac{\Gamma(\sum_{k=1}^{m}a_{k})\Gamma(a_{m+1})}{\Gamma(\sum_{k=1}^{m+1}a_{k})}$
$\displaystyle=\Gamma\left(\sum_{k=1}^{m+1}a_{k}\right)^{-1}\prod_{k=1}^{m+1}\Gamma(a_{k})\cdot
t^{\sum_{k=1}^{m+1}a_{k}-1},$
where we transformed coordinates in the integral in the fourth equality and
used the beta integral in the fifth equality. ∎
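Lemma 2 can also be confirmed numerically; the following sketch (ours, with arbitrarily chosen exponents $a_{k}>1$ so that a plain Riemann sum suffices) compares the convolution integral with the closed form.

```python
import numpy as np
from math import gamma

a1, a2, t = 1.7, 2.3, 1.5
s = np.linspace(0.0, t, 200_001)[1:-1]   # interior grid points of [0, t]
ds = s[1] - s[0]

numeric = np.sum((t - s) ** (a1 - 1) * s ** (a2 - 1)) * ds
exact = gamma(a1) * gamma(a2) / gamma(a1 + a2) * t ** (a1 + a2 - 1)
print(numeric, exact)                    # agree to several digits
```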
###### Proof of Corollary 1.
Assume that $\mathbb{P}\left\\{X_{t}>0\right\\}=c\in(0,1)$ for all $t>0$. By
(6) we have
$\displaystyle\mathbb{E}[A_{t}^{m}]=\sum_{\rho\in{\mathscr{P}}_{m}}\int_{0}^{t}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}\left(u^{\texttt{\\#}B-1}\mathbb{P}\left\\{X_{u}>0\right\\}\right)(s)ds=\sum_{\rho\in{\mathscr{P}}_{m}}c^{\texttt{\\#}\rho}\int_{0}^{t}\mathop{\scalebox{1.5}{\raisebox{-0.77498pt}{$\ast$}}}_{B\in\rho}\left(u^{\texttt{\\#}B-1}\right)(s)ds$
and applying Lemma 2 the right hand side equals
$\displaystyle\quad\frac{1}{\Gamma(m)}\sum_{\rho\in{\mathscr{P}}_{m}}c^{\texttt{\\#}\rho}\prod_{B\in\rho}(\texttt{\\#}B-1)!\cdot\int_{0}^{t}s^{m-1}ds$
$\displaystyle=\frac{t^{m}}{m!}\sum_{\rho\in{\mathscr{P}}_{m}}c^{\texttt{\\#}\rho}\prod_{B\in\rho}(\texttt{\\#}B-1)!$
$\displaystyle=\frac{t^{m}}{m!}\sum_{b=1}^{m}c^{b}\sum_{\rho\in{\mathscr{P}}_{m,b}}\prod_{B\in\rho}(\texttt{\\#}B-1)!.$
Notice that $(k-1)!$ is the number of cyclic permutations of $k$ elements,
thus $\sum_{\rho\in{\mathscr{P}}_{m,b}}\prod_{B\in\rho}(\texttt{\\#}B-1)!$ is
the number of permutations of $\\{1,\ldots,m\\}$ with $b$ cycles, also known
as the $(m,b)$-th unsigned Stirling number, which we denote by
$\genfrac{[}{]}{0.0pt}{}{m}{b}$. Recall that the unsigned Stirling numbers
appear as the coefficients of the rising factorial,
$\displaystyle
c^{\overline{m}}=\sum_{b=0}^{m}\genfrac{[}{]}{0.0pt}{}{m}{b}c^{b},$
where $c^{\overline{m}}\coloneqq c(c+1)\cdots(c+m-1)$ and we recall that
$\genfrac{[}{]}{0.0pt}{}{m}{0}=0$ if $m>0$. Putting everything together, we
conclude that $\mathbb{E}[A_{t}^{m}]=t^{m}c^{\overline{m}}/m!$, which is the
$m$-th moment of $tA$ where the distribution of $A$ is the arcsine law on
$(0,1)$ with parameter $c$. Since this distribution is uniquely determined by
its moments, we are done with the proof of the first implication.
To see the opposite implication, assume that $t^{-1}A_{t}$ is arcsine
distributed with parameter $c$. In particular, the first moment has to take
the form $\mathbb{E}[A_{t}]=ct$ for all $t>0$. Differentiating the first
moment formula (7) in $t$, we conclude that the positivity function
$s\mapsto\mathbb{P}\left\\{X_{s}>0\right\\}$ of the Lévy process is constant. ∎
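For standard Brownian motion one has $\mathbb{P}\left\\{X_{t}>0\right\\}=1/2$ for all $t>0$, so Corollary 1 predicts that $t^{-1}A_{t}$ follows the arcsine law with parameter $c=1/2$, with moments $c^{\overline{m}}/m!$. The sketch below (our illustration, using a random-walk approximation with arbitrary discretisation parameters) checks the first few moments.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_paths, c = 1_000, 10_000, 0.5

# Occupation fraction of (0, infinity) on [0, 1] for approximate Brownian paths.
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps), axis=1)
A = np.mean(W > 0, axis=1)

for m in range(1, 4):
    rising = np.prod(c + np.arange(m))          # c(c+1)...(c+m-1)
    print(m, np.mean(A ** m), rising / math.factorial(m))
```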
#### 2.2.2 Lévy bridges
We now offer an elementary derivation of the occupation time distribution of
Lévy bridges. In some sense the proof is a simpler version of our proof for
the theorem on the occupation time of spherical Brownian motion. Here the
proof can rely on exchangeability, whereas the spherical situation is more
subtle and requires working with cyclic exchangeability only. The proof is
based on Baxter’s combinatorial lemma for permutations of vectors. Fix
$n\in\mathbb{N}$ and $z_{1},\ldots,z_{n}\in\mathbb{R}^{2}$. For any
permutation $\pi$ of $\\{1,\ldots,n\\}$ define the corresponding partial sums
$\displaystyle s_{0}[\pi]\coloneqq
0,\,\,\,s_{k}[\pi]\coloneqq\sum_{\ell=1}^{k}z_{\pi(\ell)},\quad 1\leq k\leq
n.$
For the sake of brevity we set $s_{k}\coloneqq
s_{k}[id_{n}]=\sum_{\ell=1}^{k}z_{\ell}$ for $0\leq k\leq n$, where
$id_{n}\colon[n]\to[n]$ denotes the identity permutation $id_{n}(k)=k$. Notice
that $s_{n}[\pi]=\sum_{\ell=1}^{n}z_{\pi(\ell)}=s_{n}$ does not depend on
$\pi$. Moreover, for any subset $M\subseteq\\{1,\ldots,n\\}$ let
$s_{M}\coloneqq\sum_{k\in M}z_{k}$. Following Baxter, we call
$z_{1},\ldots,z_{n}$ skew if the fact that $s_{M}$ and $s_{M^{\prime}}$ lie on
a common line (i.e. there exists a real $c\neq 0$ such that
$s_{M}=cs_{M^{\prime}}$) implies $M=M^{\prime}$. Any point
$z\in\mathbb{R}^{2}\setminus\\{0\\}$ together with the origin
$0\in\mathbb{R}^{2}$ defines a line in the plane through $0$ and $z$ that
divides the plane into two half planes. We call these the left and right half
planes induced by $z$ (left and right as seen when traversing the line from
$0$ towards $z$). Let $H(z)$ denote the left half plane induced by $z$,
including the line containing $z$. Then Baxter’s combinatorial lemma may be
stated as follows.
###### Lemma 3 (Baxter’s combinatorial lemma, cf. Lemma 1 in [2]).
Fix $n\in\mathbb{N}$ and let $z_{1},\ldots,z_{n}\in\mathbb{R}^{2}$ be skew.
Then
$\displaystyle\texttt{\\#}\left\\{\pi\in\operatorname{Cyc}(n)\colon\\{s_{k}[\pi]\\}_{k=1}^{n}\subseteq
H(s_{n})\right\\}$ $\displaystyle=1.$
In words, there is precisely one cyclic permutation $\pi$ of
$z_{1},\ldots,z_{n}$ such that the corresponding partial sums
$s_{1}[\pi],\ldots,s_{n}[\pi]$ all lie in the left half plane $H(s_{n})$
induced by $s_{n}$.
We note that Baxter’s lemma is rather elementary to prove; the proof is a
clever few-line computation. The way in which we will apply Baxter’s lemma is
the following. If the partial sums $(s_{k})_{k=1}^{n}$ induced by skew points
$z_{1},\ldots,z_{n}\in\mathbb{R}^{2}$ are such that $s_{n}$ lies on the
positive $x$-axis, then $H(s_{n})$ is the upper half-plane. Thus Baxter’s
lemma is well suited to approach persistence probabilities from a
combinatorial perspective.
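Baxter’s lemma itself is easy to test by brute force; the sketch below (our illustration; `is_one_cycle` is a hypothetical helper, not from the text) enumerates all cyclic permutations of a few random planar points, which are almost surely skew, and counts those whose partial sums stay in $H(s_{n})$.

```python
import numpy as np
from itertools import permutations

def is_one_cycle(p):
    """True if the permutation p (one-line notation on {0,...,n-1}) is one n-cycle."""
    n, i, steps = len(p), p[0], 1
    while i != 0:
        i, steps = p[i], steps + 1
    return steps == n

rng = np.random.default_rng(6)
n = 5
for _ in range(10):
    z = rng.standard_normal((n, 2))    # generic points are almost surely skew
    s_n = z.sum(axis=0)
    count = 0
    for p in permutations(range(n)):
        if not is_one_cycle(p):
            continue
        partial = np.cumsum(z[list(p)], axis=0)
        # w lies in H(s_n) iff the planar cross product s_n x w is >= 0
        if np.all(s_n[0] * partial[:, 1] - s_n[1] * partial[:, 0] >= 0):
            count += 1
    print(count)                       # equals 1 in every trial, as Lemma 3 predicts
```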
Moreover, we make the following observation. Consider a real function
$f:\mathbb{R}\to\mathbb{R}$ such that $f(0)=0$ and fix
$0=t_{0}<t_{1}<\cdots<t_{n}<t_{n+1}=1$. Define the function
$\mathring{f}\colon[0,1]\to\mathbb{R}$ by setting $\mathring{f}(t)\coloneqq
f(t)-tf(1)$, and call $\mathring{f}$ the bridge induced by $f$.
###### Lemma 4.
We have $\mathring{f}(t_{k})>0$ for all $1\leq k\leq n$ if and only if the
points $(t_{k},f(t_{k}))$, $1\leq k\leq n+1$, all lie in the left half plane
induced by $(1,f(1))$ (which is $H((1,f(1)))$).
###### Proof.
The line through the origin containing $(1,f(1))$ may be parameterised as
$\\{(t,tf(1))\colon t\in\mathbb{R}\\}$. Inserting the arguments $t_{k}$ gives
the claim. ∎
We are now ready to prove Theorem 4. Proposition 1 reduces the problem to
persistence probabilities of Lévy bridges, which is then reformulated with the
help of Lemma 4. Baxter’s lemma then simplifies the expressions to the moments
of the uniform distribution.
###### Proof of Theorem 4.
Recall that the uniform distribution on $(0,1)$ is uniquely identified by its
moment sequence $\int_{0}^{1}x^{m}dx=\frac{1}{m+1}$, $m\geq 1$. By Proposition
1 it suffices to show that, for any $m\in\mathbb{N}$,
$\displaystyle\mathbb{P}\left\\{\mathring{X}_{U_{m:1}}>0,\ldots,\mathring{X}_{U_{m:m}}>0\right\\}=\frac{1}{m+1},$
where $U_{1},U_{2},\ldots$ is an i.i.d. sequence of uniform $(0,1)$ random
variables independent of $(X_{t})$ and $U_{m:1}\leq\ldots\leq U_{m:m}$ is the
corresponding order statistics. We further set $U_{m:0}:=0$ and
$U_{m:m+1}:=1$.
Step 1: We first show that the random vector
$(X_{U_{m:k}}-X_{U_{m:k-1}})_{k=1}^{m+1}$ is exchangeable. For this it
suffices to show that for any permutation $\pi$ of $\\{1,\ldots,m+1\\}$ and
for any $t_{1},\ldots,t_{m+1}\in\mathbb{R}$
$\displaystyle\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}t_{k}(X_{U_{m:k}}-X_{U_{m:k-1}})\right)\right]$
$\displaystyle=\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}t_{k}(X_{U_{m:\pi(k)}}-X_{U_{m:\pi(k)-1}})\right)\right].$
(15)
Let $B\coloneqq\\{x\in\mathbb{R}\colon\lVert x\rVert\leq 1\\}$ denote the unit
ball in $\mathbb{R}$, and let $(a,\gamma,\nu)$ denote the generating triplet
of the law of $X_{1}$, where $a\geq 0$, $\gamma\in\mathbb{R}$, and $\nu$ is a
measure on $\mathbb{R}$ with $\nu(\\{0\\})=0$ and $\int_{\mathbb{R}}(\lvert
x\rvert\wedge 1)\nu(dx)<\infty$. Conditionally given
$U\coloneqq(U_{1},\ldots,U_{m})$, and using the fact that $X$ has independent
increments, we have
$\displaystyle\quad\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}t_{k}(X_{U_{m:k}}-X_{U_{m:k-1}})\right)\middle|U\right]$
$\displaystyle=\prod_{k=1}^{m+1}\mathbb{E}\left[\exp(it_{k}(X_{U_{m:k}}-X_{U_{m:k-1}}))\middle|U\right]$
$\displaystyle=\prod_{k=1}^{m+1}\exp\left((U_{m:k}-U_{m:k-1})\left(-\frac{1}{2}t_{k}^{2}a+i\gamma
t_{k}+\int_{\mathbb{R}}(e^{it_{k}x}-1-it_{k}x\mathbf{1}_{B}(x))\nu(dx)\right)\right),$
(16)
where in the last step we used the well-known Lévy-Khinchine representation of
an infinitely divisible distribution, cf. [41, Theorem 8.1]. Using the fact
that the $m+1$ gaps
$(U_{m:1}-U_{m:0},U_{m:2}-U_{m:1},\ldots,U_{m:m+1}-U_{m:m})$ induced by
$U_{1},\ldots,U_{m}$ obey a Dirichlet distribution with parameters
$1,\ldots,1$, and thus constitute an exchangeable random vector, we obtain
from (16) that
$\displaystyle\mathbb{E}\,\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}t_{k}(X_{U_{m:k}}-X_{U_{m:k-1}})\right)\middle|U\right]$
$\displaystyle=\mathbb{E}\,\mathbb{E}\left[\exp\left(i\sum_{k=1}^{m+1}t_{k}(X_{U_{m:\pi(k)}}-X_{U_{m:\pi(k)-1}})\right)\middle|U\right].$
By Fubini’s theorem, this shows (15). We can now come to the main argument of
the proof.
Step 2: Set
$S_{k}:=\left(U_{m:k},X_{U_{m:k}}\right)=\sum_{i=1}^{k}\left(U_{m:i}-U_{m:i-1},X_{U_{m:i}}-X_{U_{m:i-1}}\right),\quad
k=0,\ldots,m+1.$
Note that the events $\\{(U_{m:1},X_{U_{m:1}}),\ldots,(U_{m:m},X_{U_{m:m}})\in
H((1,X_{1}))\\}=\\{S_{1},\ldots,S_{m}\in H((1,X_{1}))\\}$ and
$\\{\mathring{X}_{U_{m:1}}>0,\ldots,\mathring{X}_{U_{m:m}}>0\\}$ are equal by
Lemma 4. Using the exchangeability established in Step 1 (note that
$X_{1}$ is not altered by the permutations), we obtain
$\displaystyle\mathbb{P}\left\\{S_{1},\ldots,S_{m}\in H((1,X_{1}))\right\\}$
$\displaystyle=\frac{1}{m+1}\sum_{\pi\in\operatorname{Cyc}(m+1)}\mathbb{P}\left\\{S_{1}[\pi],\ldots,S_{m}[\pi]\in
H((1,X_{1}))\right\\}$
$\displaystyle=\mathbb{E}\Big{[}\frac{1}{m+1}\sum_{\pi\in\operatorname{Cyc}(m+1)}\mathbf{1}\\{S_{1}[\pi],\ldots,S_{m}[\pi]\in
H((1,X_{1}))\\}\Big{]}$ $\displaystyle=\frac{1}{m+1},$
where in the second to last line the sum equals one a.s. by Baxter’s
combinatorial lemma. Here, we used that the points
$(U_{m:k}-U_{m:k-1},X_{U_{m:k}}-X_{U_{m:k-1}})$ are almost surely skew in the
application of Baxter’s combinatorial lemma, which is due to the assumption
that $X_{1}$ has no atoms. ∎
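Theorem 4 is also straightforward to probe by simulation. The sketch below (our illustration, with arbitrary discretisation parameters) builds Brownian bridges via $\mathring{W}_{t}=W_{t}-tW_{1}$ and compares the empirical moments of the occupation fraction with the uniform moments $1/(m+1)$.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_paths = 1_000, 10_000

t = np.arange(1, n_steps + 1) / n_steps
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps), axis=1)
B = W - t * W[:, -1:]                    # discrete Brownian bridge on (0, 1]

A = np.mean(B > 0, axis=1)               # occupation fraction of (0, infinity)
for m in range(1, 5):
    print(m, np.mean(A ** m), 1.0 / (m + 1))   # uniform moments 1/(m+1)
```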
###### Remark 2.
Note that the main argument applies to all stochastic processes whose
increments over gaps induced by i.i.d. sampled times are exchangeable.
## References
* [1] D. Aldous, _The continuum random tree. III_ , Ann. Probab. 21 (1993), no. 1, 248–289.
* [2] G. Baxter, _A combinatorial lemma for complex numbers_ , Ann. Math. Statist. 32 (1961), 901–904. MR 126290
* [3] M. Berger, _Geometry revealed_ , Springer, Heidelberg, 2010, A Jacob’s ladder to modern higher geometry, Translated from the French by Lester Senechal. MR 2724440
* [4] Q. Berger and L. Béthencourt, _An application of Sparre Andersen’s fluctuation theorem for exchangeable and sign-invariant random variables_ , Séminaire de Probabilités, to appear, 2023+.
* [5] P. Biane, _Quantum random walk on the dual of SU$(n)$_, Probab. Theory Relat. Fields 89 (1991), no. 1, 117–129.
* [6] P. Biane, _Minuscule weights and random walks on lattices_ , Quantum probability & related topics, QP-PQ, vol. VII, World Sci. Publ., River Edge, NJ, 1992, pp. 51–65. MR 1186654
* [7] N. H. Bingham, _Limit theorems for occupation times of Markov processes_ , Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 17 (1971), 1–22. MR 281255
* [8] N. H. Bingham and R. A. Doney, _On higher-dimensional analogues of the arc-sine law_ , J. Appl. Probab. 25 (1988), no. 1, 120–131. MR 929510
* [9] L. E. Blumenson, _Classroom Notes: A Derivation of $n$-Dimensional Spherical Coordinates_, Amer. Math. Monthly 67 (1960), no. 1, 63–66. MR 1530579
* [10] M. Bousquet-Mélou and M. Mishna, _Walks with small steps in the quarter plane_ , Algorithmic probability and combinatorics. Papers from the AMS special sessions, Chicago, IL, USA, October 5–6, 2007 and Vancouver, BC, Canada, October 4–5, 2008, Providence, RI: American Mathematical Society (AMS), 2010, pp. 1–39.
* [11] D. A. Darling and M. Kac, _On occupation times for Markoff processes_ , Trans. Amer. Math. Soc. 84 (1957), 444–458. MR 84222
* [12] A. Dembo, J. Ding, and F. Gao, _Persistence of iterated partial sums_ , Ann. Inst. Henri Poincaré Probab. Stat. 49 (2013), no. 3, 873–884. MR 3112437
* [13] D. Denisov and V. Wachtel, _Conditional limit theorems for ordered random walks_ , Electron. J. Probab. 15 (2010), 292–322, Id/No 11.
* [14] J. Desbois, _Occupation times for planar and higher dimensional Brownian motion_ , J. Phys. A, Math. Theor. 40 (2007), no. 10, 2251–2262.
* [15] L. Devroye, _Nonuniform random variate generation_ , Springer-Verlag, New York, 1986. MR 836973
* [16] F. J. Dyson, _A Brownian-motion model for the eigenvalues of a random matrix_ , J. Math. Phys. 3 (1962), 1191–1198.
* [17] P. Eichelsbacher and W. König, _Ordered random walks_ , Electron. J. Probab. 13 (2008), 1307–1336.
* [18] R. Eldan, _Volumetric properties of the convex hull of an $n$-dimensional Brownian motion_, Electron. J. Probab. 19 (2014), 34, Id/No 45.
* [19] P. A. Ernst and L. Shepp, _On occupation times of the first and third quadrants for planar Brownian motion_ , J. Appl. Probab. 54 (2017), no. 1, 337–342. MR 3632623
* [20] G. Fayolle and K. Raschel, _Some exact asymptotics in the counting of walks in the quarter plane_ , Proceeding of the 23rd international meeting on probabilistic, combinatorial, and asymptotic methods in the analysis of algorithms (AofA’12), Montreal, Canada, June 18–22, 2012, Nancy: The Association. Discrete Mathematics & Theoretical Computer Science (DMTCS), 2012, pp. 109–124.
* [21] W. Feller, _An introduction to probability theory and its applications. Vol. I_ , third ed., John Wiley & Sons, Inc., New York-London-Sydney, 1968. MR 228020
* [22] P. J. Fitzsimmons and R. K. Getoor, _Occupation time distributions for Lévy bridges and excursions_ , Stochastic Process. Appl. 58 (1995), no. 1, 73–89. MR 1341555
* [23] R. Garbit and K. Raschel, _On the exit time from a cone for random walks with drift_ , Rev. Mat. Iberoam. 32 (2016), no. 2, 511–532. MR 3512425
* [24] R. K. Getoor and M. J. Sharpe, _On the arc-sine laws for Lévy processes_ , J. Appl. Probab. 31 (1994), no. 1, 76–89. MR 1260572
* [25] E. Hashorva, _Boundary non-crossings of Brownian pillow_ , J. Theoret. Probab. 23 (2010), no. 1, 193–208. MR 2591910
* [26] J. Istas, _Spherical and hyperbolic fractional Brownian motion_ , Electron. Comm. Probab. 10 (2005), 254–262. MR 2198600
* [27] S. Johnson, M. Mishna, and K. Yeats, _A combinatorial understanding of lattice path asymptotics_ , Adv. Appl. Math. 92 (2018), 144–163.
* [28] Z. Kabluchko, V. Vysotsky, and D. Zaporozhets, _A multidimensional analogue of the arcsine law for the number of positive terms in a random walk_ , Bernoulli 25 (2019), no. 1, 521–548. MR 3892328
* [29] M. Kac, _On some connections between probability theory and differential and integral equations_ , Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1950, Univ. California Press, Berkeley-Los Angeles, Calif., 1951, pp. 189–215. MR 45333
* [30] G. Kallianpur and H. Robbins, _Ergodic property of the Brownian motion process_ , Proc. Nat. Acad. Sci. U.S.A. 39 (1953), 525–533. MR 56233
* [31] D. Khoshnevisan and R. Pemantle, _Sojourn times of Brownian sheet_ , Period. Math. Hungar. 41 (2000), no. 1-2, 187–194. MR 1812805
* [32] F. B. Knight, _The uniform law for exchangeable and Lévy process bridges_ , Astérisque (1996), no. 236, 171–188, Hommage à P. A. Meyer et J. Neveu. MR 1417982
* [33] J.-F. Le Gall, _The uniform random tree in a Brownian excursion_ , Probab. Theory Relat. Fields 96 (1993), no. 3, 369–383.
* [34] P. Lévy, _Sur certains processus stochastiques homogènes_ , Compositio Math. 7 (1939), 283–339. MR 919
* [35] P. Lévy, _Processus Stochastiques et Mouvement Brownien_ , Gauthier-Villars & Cie, Paris, 1965, Suivi d’une note de M. Loève, Deuxième édition revue et augmentée. MR 190953
* [36] T. Meyre and W. Werner, _On the occupation times of cones by Brownian motion_ , Probab. Theory Relat. Fields 101 (1995), no. 3, 409–419.
* [37] P. Mörters and Y. Peres, _Brownian motion_ , Cambridge Series in Statistical and Probabilistic Mathematics, vol. 30, Cambridge University Press, Cambridge, 2010, With an appendix by Oded Schramm and Wendelin Werner. MR 2604525
* [38] T. S. Mountford, _Limiting behaviour of the occupation of wedges by complex Brownian motion_ , Probab. Theory Relat. Fields 84 (1990), no. 1, 55–65.
* [39] J. Pitman, _Brownian motion, bridge excursion, and meander characterized by sampling at independent uniform times_ , Electron. J. Probab. 4 (1999), 33, Id/No 11.
* [40] J. Pitman, _Combinatorial stochastic processes_ , Lecture Notes in Mathematics, vol. 1875, Springer-Verlag, Berlin, 2006, Lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, July 7–24, 2002, With a foreword by Jean Picard. MR 2245368
* [41] K. Sato, _Lévy processes and infinitely divisible distributions_ , Cambridge Studies in Advanced Mathematics, vol. 68, Cambridge University Press, Cambridge, 1999, Translated from the 1990 Japanese original, Revised by the author. MR 1739520
* [42] E. Sparre Andersen, _On the fluctuations of sums of random variables_ , Math. Scand. 1 (1953), 263–285. MR 58893
* [43] E. Sparre Andersen, _On the fluctuations of sums of random variables. II_ , Math. Scand. 2 (1954), 195–223.
* [44] F. Spitzer, _A combinatorial lemma and its application to probability theory_ , Trans. Amer. Math. Soc. 82 (1956), 323–339. MR 79851
# Construction of explicit symplectic integrators in general relativity. I.
Schwarzschild black holes
Ying Wang1,2, Wei Sun1, Fuyao Liu1, Xin Wu1,2,3,† 1\. School of Mathematics,
Physics and Statistics, Shanghai University of Engineering Science, Shanghai
201620, China
2\. Center of Application and Research of Computational Physics, Shanghai
University of Engineering Science, Shanghai 201620, China
3\. Guangxi Key Laboratory for Relativistic Astrophysics, Guangxi University,
Nanning 530004, China. Emails: <EMAIL_ADDRESS> (Y. W.),
<EMAIL_ADDRESS> (W. S.), <EMAIL_ADDRESS> (F. L.); ${\dagger}$
Corresponding author: <EMAIL_ADDRESS> (X. W.)
###### Abstract
Symplectic integrators that preserve the geometric structure of Hamiltonian
flows and do not exhibit secular growth in energy errors are suitable for the
long-term integration of N-body Hamiltonian systems in the solar system.
However, the construction of explicit symplectic integrators is frequently
difficult in general relativity because all variables are inseparable.
Moreover, even if two analytically integrable splitting parts exist in a
relativistic Hamiltonian, all analytical solutions are not explicit functions
of proper time. Naturally, implicit symplectic integrators, such as the
midpoint rule, are applicable to this case. In general, these integrators are
numerically more expensive to solve than same-order explicit symplectic
algorithms. To address this issue, we split the Hamiltonian of Schwarzschild
space-time geometry into four integrable parts with analytical solutions as
explicit functions of proper time. In this manner, second- and fourth-order
explicit symplectic integrators can be readily constructed. The new algorithms
are also useful for modeling the chaotic motion of charged particles around a
black hole with an external magnetic field. They demonstrate excellent long-
term performance in maintaining bounded Hamiltonian errors and saving
computational cost when appropriate proper time steps are adopted.
_Unified Astronomy Thesaurus concepts_ : Black hole physics (159);
Computational methods (1965); Computational astronomy (293); Chaos (222)
## 1 Introduction
Black holes and gravitational waves were predicted in Einstein’s theory of
general relativity (Einstein 1915, 1916). The
Schwarzschild solution was obtained from the field equations of a nonrotating
black hole (Schwarzschild 1916). The Kerr solution was given to a rotating
black hole (Kerr 1963). The recent detection of gravitational waves (GW150914)
from a binary black hole merger (Abbott et al. 2016) and the images of a
supermassive black hole candidate at the center of the giant elliptical galaxy
M87 (EHT Collaboration et al. 2019) provide powerful evidence for confirming
the two predictions.
Although the relativistic equations of motion for test particles in the
Schwarzschild and Kerr metrics are highly nonlinear, they are separable in
variables and solved analytically via the Hamilton-Jacobi equation. Thus,
they are integrable and the motions of particles near the two black holes are
strictly regular. This integrability is attributed to the existence of four
independent constants of motion, namely, energy, angular momentum, four-
velocity relation of particles, and the Carter constant (Carter 1968).
However, beyond the integrability of these space-times, little additional
information about the solutions is known, because the solutions are expressed
in terms of quadratures rather than elementary functions. Good numerical
methods for
computing these geodesics are highly desirable. In particular, when magnetic
fields are included in curved space-times, the separation of variables in the
Hamilton-Jacobi equation associated with the equations of charged particle
motion generally fails. This condition may lead to the non-
integrability of systems and the chaotic behavior of motion (Takahashi $\&$
Koyama 2009; Kopáček et al. 2010; Kopáček $\&$ Karas 2014; Kološ et al. 2015;
Stuchlík $\&$ Kološ 2016; Tursunov et al. 2016; Azreg-Aïnou 2016; Li $\&$ Wu
2019). Numerical methods play an important role in analyzing the properties of
these non-integrable problems.
Supposedly, good numerical methods are integrators that provide reliable
results, particularly in the case of long-term integrations. In addition, the
preservation of structural properties, such as symplectic structures,
integrals of motion, phase-space volume and symmetries, should be desired.
Such structure-preserving algorithms belong to a class of geometric
integrators (Hairer et al. 1999). Among the properties, the most important
ones are the preservation of energy and symplecticity.
In many cases, checking energy accuracy is a basic reference for testing the
performance of numerical integration algorithms although energy conservation
does not necessarily yield high-precision numerical solutions. To demonstrate
this scenario, we present a two-body problem as an example. Energy errors from
the truncation or discretization errors of Runge-Kutta type algorithms in the
two-body problem typically increase linearly with integration time (Rein $\&$
Spiegel 2015). The growth speeds of in-track errors (Huang $\&$ Innanen 1983),
which correspond to errors along the tangent to a trajectory in phase space,
directly depend on the relative error in Keplerian energy (Avdyushev 2003).
Accordingly, the Keplerian orbit is Lyapunov unstable, which leads to an
increase in various errors. However, the stabilization or conservation of
energy along the orbit is more efficient in eliminating Lyapunov’s instability
and the fast drifting of in-track errors than the stabilization of other
integrals. The
energy stabilization method of Baumgarte (1972, 1973) includes known integrals
(such as an energy integral) in the equations of motion. The stabilization in
the perturbed two-body, restricted three-body problems of satellites,
asteroids, stars and planets has been demonstrated to improve the accuracy of
numerical integrations by several orders of magnitude (Avdyushev 2003). In
contrast with Baumgarte’s method, the manifold correction or projection method
of Nacozy (1971) applies a least-squares procedure to add a linear correction
vector to a numerical solution. This vector is computed from the gradient
vectors of the integrals involving the total energy. The application of
Nacozy’s method is generalized to quasi-Keplerian motions of perturbed two-
body or $N$-body problems with the aid of the integral invariant relation of
slowly varying individual Kepler energies (Wu et al. 2007; Ma et al. 2008a).
Some projection methods (Fukushima 2003a, 2003b, 2003c, 2004; Ma et al. 2008b;
Wang et al. 2016, 2018; Deng et al. 2020) for rigorously satisfying integrals,
including Kepler energy in a two-body problem, have been proposed and extended
to perturbed two-body problems, $N$-body systems, nonconservative elliptic
restricted three-body problems and dissipative circular restricted three-body
problems. In addition to explicit projection methods that exactly preserve the
energy integral, exact energy-preserving implicit integration methods that
discretize Hamiltonian gradients in terms of the average Hamiltonian
difference terms have been specifically designed for conservative Hamiltonian
systems (Feng $\&$ Qin 2009; Bacchini et al. 2018a, 2018b; Hu et al. 2019).
Although energy-preserving integrators and some projection methods exactly
conserve energy, they are non-symplectic. Symplectic algorithms (Wisdom 1982;
Ruth 1983; Feng 1986; Suzuki 1991; McLachlan $\&$ Atela 1992; Chin 1997;
Omelyan et al. 2002a, 2002b, 2003) do not exactly conserve the energy of a
Hamiltonian system, but they cause energy errors to oscillate and become
bounded as evolution time increases. In this manner, these algorithms are also
considered to conserve energy efficiently over long-term integrations.
Moreover, they preserve the symplectic structure of Hamiltonian flows. Given
the two advantages, symplectic integrators are widely used in long-term
studies on solar system dynamics. The most popular algorithms in solar system
dynamics are the second-order symplectic integrator of Wisdom $\&$ Holman
(1991) and its extensions (Wisdom et al. 1996; Chambers $\&$ Murison 2000;
Laskar $\&$ Robutel 2001; Hernandez $\&$ Dehnen 2017). Notably, the explicit
symplectic algorithms in a series of references (Suzuki 1991; Chin 1997;
Omelyan et al. 2002a, 2002b, 2003) require the integrated Hamiltonian to be
split into two parts with analytical solutions as explicit functions of time.
However, the two splitting parts from the Hamiltonian in Wisdom $\&$ Holman
(1991), Wisdom et al. (1996), Chambers $\&$ Murison (2000) and Laskar $\&$
Robutel (2001) should be the primary and secondary parts. For the secondary
part, the analytical solutions can be given in explicit functions of time. The
primary part also has explicit analytical solutions, but eccentric anomaly is
calculated using an iteration method, such as the Newton-Raphson method.
However, a relativistic gravitational Hamiltonian system, such as the
Schwarzschild space-time, is inseparable or has no two separable parts with
analytical solutions being explicit functions of proper time. This condition
leads to the difficulty in applying explicit symplectic integrators. By
extending the phase space of such an inseparable Hamiltonian system, Pihajoki
(2015) obtained a new Hamiltonian consisting of two sub-Hamiltonians equal to
the original Hamiltonian, where one sub-Hamiltonian is a function of the
original coordinates and new momenta, and the other is a function of the
original momenta and new coordinates. The two sub-Hamiltonians are separable
in variables; therefore, standard explicit symplectic leapfrog splitting
methods are applicable to the new Hamiltonian. Mixing maps of feedback between
the two sub-Hamiltonian solutions and a map for projecting a vector in the
extended phase space back to the original number of dimensions are necessary
and must be chosen suitably. Liu et al. (2016) confirmed that sequent
permutations of coordinates and momenta achieve good results in preserving the
original Hamiltonian without an increase in secular errors compared with the
permutations of momenta suggested by Pihajoki (2015). Luo et al. (2017) found
that midpoint permutations exhibit the best results. However, mixing maps
generally destroy symplecticity in extended phase space. In addition, extended
phase space leapfrogs are not symplectic for the use of any projection map.
Despite the absence of symplecticity, mixing and projection maps are used only
as output and exert no influence on the state in extended phase space.
Consequently, such leapfrogs, similar to partitioned multistep methods, can exhibit
good long-term behavior in stabilizing the original Hamiltonian (Liu et al.
2017; Luo $\&$ Wu 2017; Wu $\&$ Wu 2018). Thus, extended phase-space leapfrog
methods, including extended phase-space logarithmic Hamiltonian methods (Li
$\&$ Wu 2017), are called explicit symplectic-like integrators. In addition to
the two copies of the original system with mixed-up positions and momenta, a
third sub-Hamiltonian, as an artificial restraint to the divergence between
the original and extended variables, was introduced by Tao (2016). Neither
mixing nor projection maps are used in Tao’s method, and thus, explicit
leapfrog methods are still symplectic in the extended phase space. Two
problems exist. (_i_) The binding constant for controlling divergence has an
optimal choice. This choice cannot be determined theoretically; instead, many
values must be tested to find the one that minimizes the original Hamiltonian
error. (_ii_) Whether the original variables in the newly extended Hamiltonian
coincide with those in the original Hamiltonian is unclear.
To date, no standard explicit symplectic leapfrogs but only implicit
symplectic methods have been established in a relativistic Hamiltonian problem
because of the difficulty in separating variables. The second-order implicit
midpoint method (Feng 1986) is the most common choice among implicit
symplectic methods. It can function as a variational symplectic integrator for
constrained Hamiltonian systems (Brown 2006). To save computational cost,
explicit and implicit combined symplectic algorithms have been provided in
some references (Liao 1997; Preto $\&$ Saha 2009; Lubich et al. 2010; Zhong et
al. 2010; Mei et al. 2013a, 2013b). Notably, the symplectic integration scheme
for the post-Newtonian motion of a spinning black hole binary (Lubich et al.
2010) is noncanonical because of the use of noncanonical spin variables.
However, this scheme can become canonical when canonically conjugated
cylindrical-like spin coordinates (Wu $\&$ Xie 2010) are used. The symplectic
implicit Gauss-Legendre Runge-Kutta method has been applied to determine the
regular and chaotic behavior of charged particles around a Kerr black hole
immersed in a weak, asymptotically uniform magnetic field (Kopáček et al.
2010). Implicit symmetric schemes with adaptive step size control that
effectively conserve the integrals of motion are appropriate for studying
geodesic orbits in curved space-time backgrounds (Seyrich $\&$ Lukes-
Gerakopoulos 2012). Slimplectic integrators for general nonconservative
systems (Tsang et al. 2015) can share many benefits of traditional symplectic
integrators.
In general, implicit symplectic methods are numerically more expensive to
solve than same-order explicit symplectic integrators. The latter algorithms
should be used if possible. Accordingly, we intend to address the difficulty
in constructing explicit symplectic integrators for Schwarzschild type space-
times similar to the standard explicit symplectic leapfrogs for Hamiltonian
problems in solar system dynamics. If the Hamiltonians of Schwarzschild type
space-times are separated into two parts that resemble the splitting form of
Hamiltonian systems in the construction of standard symplectic leapfrogs, then
no explicit symplectic algorithms are available. The conditions for
constructing explicit symplectic schemes may require Hamiltonians to be split
into more parts with analytical solutions as explicit functions of proper
time.
The remainder of this paper is organized as follows. In Section 2, we briefly
introduce the standard explicit symplectic leapfrog and its extensions for a
separable Hamiltonian system. The Hamiltonian of charged particles moving
around a Schwarzschild black hole with an external magnetic field is described
in Section 3. Explicit symplectic schemes are designed for curved
Schwarzschild space-times in Section 4. The performance of explicit symplectic
integrators is tested numerically in Section 5. Section 6 concludes the major
results. A discrete difference scheme of the new second-order explicit
symplectic integrator is presented in Appendix A. Explicit and implicit
combined symplectic methods and extended phase-space explicit symplectic-like
methods are provided in Appendix B.
## 2 Standard explicit symplectic integrators for a separable Hamiltonian
Set $\mathbf{q}$ as an $N$-dimensional coordinate vector. Its corresponding
generalized momentum is $\mathbf{p}$. Let $\mathbf{Z}=(\mathbf{p},\mathbf{q})$
be a $2N$-dimensional phase-space variable. Consider the following Hamiltonian
$H(\mathbf{p},\mathbf{q})=H_{1}(\mathbf{p},\mathbf{q})+H_{2}(\mathbf{p},\mathbf{q}),$
(1)
where the two separable parts $H_{1}$ and $H_{2}$ are supposed to be
independently integrable. A typical splitting form of $H$ takes $H_{1}$ as
kinetic energy $T(\mathbf{p})$ and $H_{2}$ as potential $V(\mathbf{q})$.
Two differential operators are defined as follows:
$\mathcal{A}=\sum^{N}_{i=1}\Big(\frac{\partial H_{1}}{\partial\mathbf{p}_{i}}\frac{\partial}{\partial\mathbf{q}_{i}}-\frac{\partial H_{1}}{\partial\mathbf{q}_{i}}\frac{\partial}{\partial\mathbf{p}_{i}}\Big),\qquad\mathcal{B}=\sum^{N}_{i=1}\Big(\frac{\partial H_{2}}{\partial\mathbf{p}_{i}}\frac{\partial}{\partial\mathbf{q}_{i}}-\frac{\partial H_{2}}{\partial\mathbf{q}_{i}}\frac{\partial}{\partial\mathbf{p}_{i}}\Big).$
System (1) has the following formal solution
$\mathbf{Z}(h)=\mathcal{C}(h)\mathbf{Z}(0),$ (2)
where $\mathbf{Z}(0)$ denotes the value of $\mathbf{Z}$ at the beginning of
the time step $h$. The differential operator $\mathcal{C}=\mathcal{A}+\mathcal{B}$
is approximately expressed as a series of products of $\mathcal{A}$ and
$\mathcal{B}$:
$\mathcal{C}(h)\approx\Pi^{e}_{j=1}\mathcal{A}(h\alpha_{j})\mathcal{B}(h\beta_{j})+O(h^{d+1}),$
(3)
where coefficients $\alpha_{j}$ and $\beta_{j}$ are determined by the
conditions of order $d$. In this manner, symplectic numerical integrators of
arbitrary orders are built.
If $d=2$, then Equation (3) is the Verlet algorithm (Swope et al. 1982)
$\mathcal{S}_{2}(h)=\mathcal{A}(\frac{h}{2})\mathcal{B}(h)\mathcal{A}(\frac{h}{2}).$
(4)
This algorithm is an explicit standard symplectic leapfrog method. When $d=4$,
Equation (3) corresponds to the explicit symplectic algorithm of Forest $\&$
Ruth (1990)
$FR4(h)=\mathcal{A}(\frac{\gamma}{2}h)\mathcal{B}(\gamma h)\mathcal{A}(\frac{1-\gamma}{2}h)\mathcal{B}((1-2\gamma)h)\mathcal{A}(\frac{1-\gamma}{2}h)\mathcal{B}(\gamma h)\mathcal{A}(\frac{\gamma}{2}h),$ (5)
where $\gamma=1/(2-\sqrt[3]{2})$.
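These compositions translate directly into code. Below is a minimal Python sketch (the function names and the harmonic-oscillator illustration are ours, not part of the cited constructions); the user supplies the exact flows of the two integrable parts:

```python
def leapfrog(flow_A, flow_B, z, h):
    """Second-order leapfrog (4): S2(h) = A(h/2) B(h) A(h/2).

    flow_A(z, t) and flow_B(z, t) must be the exact flows of the two
    integrable parts H1 and H2 over a time span t.
    """
    z = flow_A(z, h / 2)
    z = flow_B(z, h)
    z = flow_A(z, h / 2)
    return z

def forest_ruth(flow_A, flow_B, z, h):
    """Fourth-order Forest-Ruth composition (5), gamma = 1/(2 - 2^(1/3))."""
    g = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    for a, b in ((g / 2, g), ((1 - g) / 2, 1 - 2 * g),
                 ((1 - g) / 2, g), (g / 2, 0.0)):
        z = flow_A(z, a * h)
        z = flow_B(z, b * h)   # the final B(0) is the identity
    return z

# Illustration: harmonic oscillator H = p^2/2 + q^2/2 with z = (q, p).
drift = lambda z, t: (z[0] + t * z[1], z[1])   # exact flow of T(p) = p^2/2
kick = lambda z, t: (z[0], z[1] - t * z[0])    # exact flow of V(q) = q^2/2
z = (1.0, 0.0)
for _ in range(1000):
    z = leapfrog(drift, kick, z, 0.01)
```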
Evidently, the construction of these explicit symplectic integrators is based
on the Hamiltonian having an analytically integrable decomposition. Can such an
operator-splitting technique be applied to strictly general relativistic
systems, such as the Schwarzschild space-time? The succeeding discussions
address this question.
## 3 Schwarzschild black holes
A Schwarzschild black hole with mass $M$ is a nonrotating black hole. In
spherical-like coordinates $(t,r,\theta,\phi)$, the Schwarzschild metric is
described by
$-c^{2}d\tau^{2}=ds^{2}=g_{\alpha\beta}dx^{\alpha}dx^{\beta}=-(1-\frac{2GM}{rc^{2}})c^{2}dt^{2}+(1-\frac{2GM}{rc^{2}})^{-1}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2},$ (6)
where $\tau$, $c$ and $G$ denote the proper time, the speed of light and the
gravitational constant, respectively. Geometrized units, $c=G=1$, are adopted.
The black hole mass can also be set to one unit, $M=1$, via the scale
transformations $t\rightarrow tM$, $r\rightarrow rM$ and $\tau\rightarrow\tau
M$. In this manner, the metric is transformed into the dimensionless form
$-d\tau^{2}=ds^{2}=-(1-\frac{2}{r})dt^{2}+(1-\frac{2}{r})^{-1}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}.$ (7)
This metric corresponds to a Lagrangian system
$\mathcal{L}=\frac{1}{2}(\frac{ds}{d\tau})^{2}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu},$
(8)
where $\mathbf{U}=\dot{x}^{\mu}$ is the four-velocity. The covariant
generalized momentum $\mathbf{p}$ is defined as
$p_{\mu}=\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}=g_{\mu\nu}\dot{x}^{\nu}.$
(9)
This Lagrangian does not explicitly depend on $t$ and $\phi$, and thus, two
constant momentum components exist. They are
$p_{t}=-(1-\frac{2}{r})\dot{t}=-E,$ (10)
$p_{\phi}=r^{2}\sin^{2}\theta\dot{\phi}=\ell,$ (11)
where $E$ and $\ell$ are the energy and angular momentum of a test particle
moving around a black hole, respectively.
In accordance with classical mechanics, a Hamiltonian derived from the
Lagrangian is expressed as
$\mathcal{H}=\mathbf{U}\cdot\mathbf{p}-\mathcal{L}=\frac{1}{2}g^{\mu\nu}p_{\mu}p_{\nu}=-\frac{1}{2}(1-\frac{2}{r})^{-1}E^{2}+\frac{1}{2}(1-\frac{2}{r})p^{2}_{r}+\frac{1}{2}\frac{p^{2}_{\theta}}{r^{2}}+\frac{1}{2}\frac{\ell^{2}}{r^{2}\sin^{2}\theta}.$ (12)
This Hamiltonian governs the motion of a test particle around the
Schwarzschild black hole.
A point is worth noting. A magnetic field arises due to the relativistic
motion of charged particles in an accretion disc around the central black hole
(Borm $\&$ Spaans 2013). Such a field also generates gigantic jets along the
magnetic axes. The magnetic field is too weak to change the gravitational
background and alter the metric tensor of the Schwarzschild black hole space-
time. However, it can exert a considerable influence on the motion of charged
test particles. Considering this point, we suppose that the particle has a
charge $q$ and the black hole is immersed in an external, asymptotically
uniform magnetic field. The magnetic field is parallel to the $z$-axis, and
its strength is $B$. The electromagnetic four-vector potential $A^{\alpha}$ in
the Lorentz gauge is a linear combination of the time-like and space-like
axial Killing vectors $\xi^{\alpha}_{(t)}$ and $\xi^{\alpha}_{(\phi)}$
(Abdujabbarov et al. 2013; Shaymatov et al. 2015; Tursunov et al. 2016;
Benavides-Gallego et al. 2019):
$A^{\alpha}=C_{1}\xi^{\alpha}_{(t)}+C_{2}\xi^{\alpha}_{(\phi)}.$ (13)
In Felice $\&$ Sorge (2003), the constants are set as $C_{1}=0$ and
$C_{2}=B/2$. In this manner, the four-vector potential has only one nonzero
covariant component
$A_{\phi}=\frac{B}{2}g_{\phi\phi}=\frac{B}{2}r^{2}\sin^{2}\theta.$ (14)
The charged particle motion is described by the Hamiltonian system
$K=\frac{1}{2}g^{\mu\nu}(p_{\mu}-qA_{\mu})(p_{\nu}-qA_{\nu})=-\frac{1}{2}(1-\frac{2}{r})^{-1}E^{2}+\frac{1}{2}(1-\frac{2}{r})p^{2}_{r}+\frac{1}{2}\frac{p^{2}_{\theta}}{r^{2}}+\frac{1}{2r^{2}\sin^{2}\theta}(L-\frac{\beta}{2}r^{2}\sin^{2}\theta)^{2},$ (15)
where $\beta=qB$. The energy $E$ is still determined using Equation (10).
However, the expression for the angular momentum differs from that of Equation
(11) and reads
$L=r^{2}\sin^{2}\theta\dot{\phi}+\frac{\beta}{2}r^{2}\sin^{2}\theta.$ (16)
One point should be clarified here. The dimensionless Hamiltonian (15) is
obtained after the scale transformations $B\rightarrow B/M$, $E\rightarrow mE$,
$p_{r}\rightarrow mp_{r}$, $q\rightarrow mq$, $L\rightarrow mML$,
$p_{\theta}\rightarrow mMp_{\theta}$ and $K\rightarrow m^{2}K$, where $m$ is
the particle’s mass. In addition, the Hamiltonian (15) describes the
Schwarzschild solution with an external magnetic field superposed; this
combined system is no longer an exact background solution of general
relativity.
The Hamiltonians $\mathcal{H}$ and $K$ always remain at given constant values:
$\mathcal{H}=-\frac{1}{2},$ (17)
$K=-\frac{1}{2}.$ (18)
These constants follow from the four-velocity relation
$\mathbf{U}\cdot\mathbf{U}=-1$. In addition, a second integral (i.e., the
Carter constant) can easily be found for the Hamiltonian $\mathcal{H}$ by
performing the separation of variables in the Hamilton-Jacobi equation. Thus,
this Hamiltonian is integrable and has formal analytical solutions. However,
the perturbation from the external magnetic field leads to the absence of such
a second integral. In this case, the Hamiltonian $K$ has no formal analytical
solutions.
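For later numerical use, the conserved value (18) fixes the initial momentum $p_{\theta}$ once $r$, $\theta$, $p_{r}$, $E$, $L$ and $\beta$ are chosen. A short Python sketch of this bookkeeping (the helper functions are ours):

```python
import math

def hamiltonian_K(r, theta, p_r, p_theta, E, L, beta):
    """Dimensionless Hamiltonian (15) of a charged particle; beta = qB."""
    f = 1.0 - 2.0 / r
    s2 = math.sin(theta) ** 2
    return (-0.5 * E ** 2 / f + 0.5 * f * p_r ** 2
            + 0.5 * p_theta ** 2 / r ** 2
            + (L - 0.5 * beta * r ** 2 * s2) ** 2 / (2.0 * r ** 2 * s2))

def initial_p_theta(r, theta, p_r, E, L, beta):
    """Positive root of K = -1/2, Equation (18), solved for p_theta."""
    rest = hamiltonian_K(r, theta, p_r, 0.0, E, L, beta)
    return math.sqrt(2.0 * r ** 2 * (-0.5 - rest))
```

With $\beta=0$ and $L=\ell$, this reduces to the Hamiltonian (12) and the constraint (17); the parameters used later in Section 5 ($E=0.995$, $L=4.6$, $r=11$, $\theta=\pi/2$, $p_{r}=0$) indeed yield a real, positive $p_{\theta}$.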
## 4 Construction of explicit symplectic integrators for Schwarzschild space-
times
Suppose that, in analogy with the Hamiltonian (1), the Hamiltonian (12) is
split into two parts:
$\mathcal{H}=\mathcal{T}+\mathcal{V},$ (19)
$\mathcal{T}=\frac{1}{2}(1-\frac{2}{r})p^{2}_{r}+\frac{1}{2}\frac{p^{2}_{\theta}}{r^{2}},$ (20)
$\mathcal{V}=-\frac{1}{2}(1-\frac{2}{r})^{-1}E^{2}+\frac{1}{2}\frac{\ell^{2}}{r^{2}\sin^{2}\theta}.$ (21)
The $\mathcal{V}$ part is analytically integrable, and its analytical
solutions $p_{r}$ and $p_{\theta}$ are explicit functions of proper time
$\tau$. Although the $\mathcal{T}$ part admits no separation of variables, it
is still analytically integrable; however, its analytical solutions $r$ and
$p_{r}$ are implicit rather than explicit functions of proper time $\tau$. In
this case, the explicit symplectic integrators in Equations (4) and (5) are
unsuitable for the Hamiltonian splitting form (19). Consequently, only
implicit symplectic integrators, rather than explicit ones, can be constructed
for relativistic Hamiltonian systems such as Equation (12) in the general
case. In most other cases in general relativity, the $\mathcal{V}$ part is
even more complicated and does not admit a separation of variables; thus, the
construction of explicit symplectic methods becomes still more difficult.
From the preceding demonstrations, the key to constructing explicit symplectic
integrators is that the Hamiltonian admits an analytically integrable
decomposition; in particular, the analytical solutions obtained for each
splitting part should be explicit functions of proper time $\tau$. Both points
must be satisfied. The Hamiltonian (12) with the above two analytically
integrable splitting parts fails to yield any explicit symplectic scheme.
Subsequently, we focus on splitting the Hamiltonian into more analytically
integrable parts.
We split the Hamiltonian $\mathcal{H}$ into four pieces:
$\mathcal{H}=\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}+\mathcal{H}_{4},$
(22)
where these sub-Hamiltonians are
$\mathcal{H}_{1}=\frac{1}{2}\frac{\ell^{2}}{r^{2}\sin^{2}\theta}-\frac{1}{2}(1-\frac{2}{r})^{-1}E^{2},$ (23)
$\mathcal{H}_{2}=\frac{1}{2}p^{2}_{r},$ (24)
$\mathcal{H}_{3}=-\frac{1}{r}p^{2}_{r},$ (25)
$\mathcal{H}_{4}=\frac{p^{2}_{\theta}}{2r^{2}}.$ (26)
For the sub-Hamiltonian $\mathcal{H}_{1}$, its canonical equations are
$\dot{r}=\dot{\theta}=0$ and
$\frac{dp_{r}}{d\tau}=-\frac{\partial\mathcal{H}_{1}}{\partial r}=\frac{\ell^{2}}{r^{3}\sin^{2}\theta}-\frac{E^{2}}{(r-2)^{2}},$ (27)
$\frac{dp_{\theta}}{d\tau}=-\frac{\partial\mathcal{H}_{1}}{\partial\theta}=\frac{\ell^{2}\cos\theta}{r^{2}\sin^{3}\theta}.$ (28)
Evidently, $r$ and $\theta$ are constants when proper time goes from
$\tau_{0}$ to $\tau_{1}=\tau_{0}+\tau$. Thus, $p_{r}$ and $p_{\theta}$ can be
solved analytically from Equations (27) and (28). They are explicit functions
of $\tau$ in the following forms
$p_{r}(\tau)=p_{r0}+\tau\Big[\frac{\ell^{2}}{r^{3}_{0}\sin^{2}\theta_{0}}-\frac{E^{2}}{(r_{0}-2)^{2}}\Big],$ (29)
$p_{\theta}(\tau)=p_{\theta 0}+\tau\frac{\ell^{2}\cos\theta_{0}}{r^{2}_{0}\sin^{3}\theta_{0}},$ (30)
where $r_{0}$, $\theta_{0}$, $p_{r0}$ and $p_{\theta 0}$ represent values of
$r$, $\theta$, $p_{r}$ and $p_{\theta}$ at the proper time $\tau_{0}$; and
$p_{r}(\tau)$ and $p_{\theta}(\tau)$ denote the values of $p_{r}$ and
$p_{\theta}$ at proper time $\tau_{1}$. A differential operator for solving
$\mathcal{H}_{1}$ is labeled as $\psi^{\mathcal{H}_{1}}_{\tau}$.
The canonical equations of the sub-Hamiltonians $\mathcal{H}_{2}$,
$\mathcal{H}_{3}$ and $\mathcal{H}_{4}$ are
$\mathcal{H}_{2}:~\frac{dr}{d\tau}=p_{r},~~\dot{p}_{r}=0;$ (31)
$\mathcal{H}_{3}:~\frac{dr}{d\tau}=-\frac{2}{r}p_{r},~~\frac{dp_{r}}{d\tau}=-\frac{p^{2}_{r}}{r^{2}};$ (32)
$\mathcal{H}_{4}:~\frac{d\theta}{d\tau}=\frac{p_{\theta}}{r^{2}},~~\frac{dp_{r}}{d\tau}=\frac{p^{2}_{\theta}}{r^{3}},~~\dot{r}=\dot{p}_{\theta}=0.$ (33)
Let $\psi^{\mathcal{H}_{2}}_{\tau}$, $\psi^{\mathcal{H}_{3}}_{\tau}$ and
$\psi^{\mathcal{H}_{4}}_{\tau}$ be three operators. We obtain the solutions
for Equations (31)-(33) as follows:
$\psi^{\mathcal{H}_{2}}_{\tau}:~r(\tau)=r_{0}+\tau p_{r0};$ (34)
$\psi^{\mathcal{H}_{3}}_{\tau}:~r(\tau)=[(r^{2}_{0}-3\tau p_{r0})^{2}/r_{0}]^{1/3},~~p_{r}(\tau)=p_{r0}[(r^{2}_{0}-3\tau p_{r0})/r^{2}_{0}]^{1/3};$ (35)
$\psi^{\mathcal{H}_{4}}_{\tau}:~\theta(\tau)=\theta_{0}+\tau p_{\theta 0}/r^{2}_{0},~~p_{r}(\tau)=p_{r0}+\tau p^{2}_{\theta 0}/r^{3}_{0}.$ (36)
It is clear that these solutions are explicit functions of proper time $\tau$.
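These four exact flows admit a direct transcription into code; a Python sketch of Equations (29), (30) and (34)-(36) (the state ordering $(r,\theta,p_{r},p_{\theta})$ and the function names are our choices):

```python
import math

def psi_H1(state, tau, E, ell):
    """Flow (29)-(30): r and theta are frozen; p_r and p_theta drift linearly."""
    r, th, pr, pth = state
    pr += tau * (ell ** 2 / (r ** 3 * math.sin(th) ** 2)
                 - E ** 2 / (r - 2.0) ** 2)
    pth += tau * ell ** 2 * math.cos(th) / (r ** 2 * math.sin(th) ** 3)
    return (r, th, pr, pth)

def psi_H2(state, tau):
    """Flow (34): free drift of r."""
    r, th, pr, pth = state
    return (r + tau * pr, th, pr, pth)

def psi_H3(state, tau):
    """Flow (35)."""
    r, th, pr, pth = state
    u = r ** 2 - 3.0 * tau * pr
    return ((u ** 2 / r) ** (1.0 / 3.0), th,
            pr * (u / r ** 2) ** (1.0 / 3.0), pth)

def psi_H4(state, tau):
    """Flow (36): theta drifts; p_r receives a centrifugal-like kick."""
    r, th, pr, pth = state
    return (r, th + tau * pth / r ** 2,
            pr + tau * pth ** 2 / r ** 3, pth)
```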
If the sum of $\mathcal{H}_{2}$ and $\mathcal{H}_{3}$ is regarded as an
independent sub-Hamiltonian, then it is still analytically solvable. However,
the analytical solutions of $r$ and $p_{r}$ for this sum cannot be expressed
as explicit functions of proper time $\tau$. Thus, such a composed
sub-Hamiltonian is not considered. Equation (22) is one possible Hamiltonian
splitting that satisfies this requirement; other appropriate splitting forms
of the Hamiltonian (12) may also exist.
The flow $\psi^{\mathcal{H}}_{h}$ of the Hamiltonian (12) over time step $h$
is approximately given by the symmetric composition of these operators
$\psi^{\mathcal{H}}_{h}\approx S^{\mathcal{H}}_{2}(h)=\psi^{\mathcal{H}_{4}}_{h/2}\circ\psi^{\mathcal{H}_{3}}_{h/2}\circ\psi^{\mathcal{H}_{2}}_{h/2}\circ\psi^{\mathcal{H}_{1}}_{h}\circ\psi^{\mathcal{H}_{2}}_{h/2}\circ\psi^{\mathcal{H}_{3}}_{h/2}\circ\psi^{\mathcal{H}_{4}}_{h/2}.$ (37)
The above construction is a second-order explicit symplectic integrator,
marked as $S^{\mathcal{H}}_{2}$. Its difference scheme is provided in Appendix A.
The order of algorithm (37) can be lifted to four by using the composition
scheme of Yoshida (1990). That is, a fourth-order symplectic composition
construction is
$S^{\mathcal{H}}_{4}(h)=S^{\mathcal{H}}_{2}(\gamma h)\circ S^{\mathcal{H}}_{2}(\delta h)\circ S^{\mathcal{H}}_{2}(\gamma h),$ (38)
where $\delta=1-2\gamma$ and $\gamma$ is the coefficient given below Equation (5).
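In code, the schemes (37) and (38) are plain compositions of the flows sketched after Equation (36); for example:

```python
def S2(state, h, E, ell):
    """Second-order composition (37)."""
    state = psi_H4(state, h / 2)
    state = psi_H3(state, h / 2)
    state = psi_H2(state, h / 2)
    state = psi_H1(state, h, E, ell)
    state = psi_H2(state, h / 2)
    state = psi_H3(state, h / 2)
    state = psi_H4(state, h / 2)
    return state

def S4(state, h, E, ell):
    """Fourth-order Yoshida triple jump (38) with delta = 1 - 2*gamma."""
    g = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    state = S2(state, g * h, E, ell)
    state = S2(state, (1.0 - 2.0 * g) * h, E, ell)
    state = S2(state, g * h, E, ell)
    return state
```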
The Hamiltonian (15) exhibits the following splitting form
$K=K_{1}+K_{2}+K_{3}+K_{4},$ (39)
where $K_{2}=\mathcal{H}_{2}$, $K_{3}=\mathcal{H}_{3}$,
$K_{4}=\mathcal{H}_{4}$, and the inclusion of $A_{\phi}$ only changes
$\mathcal{H}_{1}$ as
$K_{1}=\frac{1}{2r^{2}\sin^{2}\theta}(L-\frac{\beta}{2}r^{2}\sin^{2}\theta)^{2}-\frac{1}{2}(1-\frac{2}{r})^{-1}E^{2}.$ (40)
When $\mathcal{H}_{1}$ is replaced by $K_{1}$, the explicit symplectic
integrators $S_{2}$ and $S_{4}$ remain suitable for the non-integrable
Hamiltonian $K$ of the Schwarzschild solution with an external magnetic field;
the resulting schemes are labeled $S^{K}_{2}$ and $S^{K}_{4}$.
In summary, when the Hamiltonians (12) and (15) are split into four
analytically integrable parts, their explicit symplectic integrators are
easily constructed.
Table 1: Dependence of stable (S) or unstable (U) behavior of Hamiltonian errors for the seven algorithms on step size $h$. Chaotic Orbit 3 in Figure 2 is integrated until proper time $\tau=10^{8}$.
Method | S2 | EI2 | EE2 | S4 | EI4 | EE4 | RK4
---|---|---|---|---|---|---|---
$h=0.1$ | S | S | S | U | U | S | U
$h=1.0$ | S | S | U | S | U | S | U
$h=10$ | S | S | U | S | S | U | U
Table 2: Same as Table 1, but dependence of the largest absolute values of Hamiltonian errors on $h$.
Method | S2 | EI2 | EE2 | S4 | EI4 | EE4 | RK4
---|---|---|---|---|---|---|---
$h=0.1$ | 4e-8 | 4e-8 | 3e-8 | 7e-9 | 3e-12 | 1e-12 | 4e-12
$h=1.0$ | 6e-6 | 5e-6 | 2e-6 | 3e-8 | 7e-9 | 2e-8 | 4e-7
$h=10$ | 8e-4 | 6e-3 | 6e-3 | 4e-4 | 7e-5 | 4e-3 | 3e-2
Table 3: Same as Table 1, but dependence of computational cost, i.e., CPU times (minutes:seconds), on $h$.
Method | S2 | EI2 | EE2 | S4 | EI4 | EE4 | RK4
---|---|---|---|---|---|---|---
$h=0.1$ | 9:13 | 10:13 | 14:22 | 27:42 | 30:33 | 33:35 | 17:48
$h=1.0$ | 0:56 | 1:03 | 1:26 | 2:46 | 3:09 | 3:21 | 1:46
$h=10$ | 0:05 | 0:07 | 0:07 | 0:16 | 0:20 | 0:19 | 0:10
## 5 Numerical evaluations
In this section, we focus on checking the numerical performance of the
proposed integrators. For comparison, a conventional fourth-order Runge-Kutta
integrator (RK4), second- and fourth-order symplectic algorithms consisting of
explicit and implicit mixed methods (EI2 and EI4), and second- and fourth-
order extended phase-space explicit symplectic-like methods (EE2 and EE4) are
used. The details of EI2, EI4, EE2 and EE4 are provided in Appendix B.
### 5.1 Case of $\beta=0$
When no charges are assigned to test particles, the system (15) reduces to the
Schwarzschild problem (12). We consider the parameters $E=0.995$ and $\ell$
(or $L$) $=4.6$, and the proper time step size $h=1$. The initial conditions
are $r=11$, $\theta=\pi/2$ and $p_{r}=0$. The initial value of $p_{\theta}$
($>0$) is determined by using Equation (17). We conduct our numerical
experiments by applying each of the aforementioned algorithms to solve the
Hamiltonian (12).
As shown in Figure 1(a), the three second-order methods, namely, S2, EI2 and
EE2, keep the Hamiltonian errors $\Delta H=1+2\mathcal{H}$ from Equation (17)
at an order of $10^{-6}$ at the end of the integration time. Differences exist
among the algorithmic errors. The new symplectic algorithm S2 and the explicit
and implicit mixed symplectic method EI2 have nearly the same errors, which
remain bounded and stable. This result indicates the superiority of S2 in the
long-term stable behavior of energy (or Hamiltonian) errors. However, the
extended phase-space method EE2 exhibits a secular growth of errors. This
growth can be prevented if a smaller time step $h=0.1$ is used; in that case,
the errors (not plotted) are stabilized at an order of $10^{-8}$.
The four fourth-order algorithms, namely, S4, EI4, EE4 and RK4, yield the
Hamiltonian errors in Figures 1(b) and 1(c). The algorithms S4, EI4 and EE4
are accurate to an order of $10^{-8}$. The new method S4 and the extended
phase-space method EE4 have stable and bounded errors, and the explicit and
implicit mixed symplectic method EI4 also keeps the errors bounded. Meanwhile,
RK4 provides the lowest accuracy, with an order of $10^{-6}$, and its errors
increase linearly with time. This result is expected because RK4 is not a
geometric integrator.
The considered orbit, called Orbit 1, can be observed from the Poincaré
section map on the plane $\theta=\pi/2$ with $p_{\theta}>0$. The map is a
two-dimensional plane exhibiting the intersections of the particles’
trajectories with the surface of section in phase space (Lichtenberg $\&$
Lieberman 1983). If the plotted points form a closed curve, then the motion is
regular; this is because a regular trajectory moves on a torus in phase space,
and the curve is a cross section of that torus. By contrast, if the plotted
points are distributed randomly, then the motion is chaotic. With the aid of
the distribution of the points in the Poincaré map, we can thus determine the
phase-space structure and whether the motion is chaotic.
The Kolmogorov-Arnold-Moser (KAM) torus in the section in Figure 1(d),
provided by the new method S2, indicates the regularity of Orbit 1. In
addition, the structures of Orbits 2 and 3, with initial separations $r=70$
and 110, respectively, are described. The numerical performance of the
aforementioned algorithms acting on Orbit 1 is approximately consistent with
that acting on Orbits 2 and 3.
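A Poincaré section of this kind can be extracted from an integration by recording the upward crossings of the plane $\theta=\pi/2$. A sketch reusing the `S2` and `initial_p_theta` helpers defined earlier (the linear interpolation between steps is our simplification of more careful event location):

```python
import math

def poincare_section(state, h, E, ell, n_steps):
    """Collect (r, p_r) whenever theta crosses pi/2 with p_theta > 0."""
    points = []
    for _ in range(n_steps):
        prev = state
        state = S2(state, h, E, ell)
        f0, f1 = prev[1] - math.pi / 2, state[1] - math.pi / 2
        if f0 < 0.0 <= f1 and state[3] > 0.0:   # upward crossing
            w = f0 / (f0 - f1)                   # interpolation weight
            points.append((prev[0] + w * (state[0] - prev[0]),
                           prev[2] + w * (state[2] - prev[2])))
    return points

# Orbit 1: E = 0.995, ell = 4.6, r = 11, theta = pi/2, p_r = 0, beta = 0.
E, ell = 0.995, 4.6
state = (11.0, math.pi / 2, 0.0,
         initial_p_theta(11.0, math.pi / 2, 0.0, E, ell, 0.0))
section = poincare_section(state, 1.0, E, ell, 10 ** 6)
```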
### 5.2 Case of $\beta\neq 0$
When an external magnetic field with parameter $\beta=8.9\times 10^{-4}$ is
included in the vicinity of the black hole, the system is non-integrable. The
magnetic field causes the three orbits in Figure 1(d) to have different
phase-space structures in Figure 2(a). Although Orbit 1 remains a simple
closed torus, it shrinks drastically and becomes a small torus. By contrast,
Orbit 2 becomes a more complicated KAM torus, consisting of seven small loops
wherein the successive points jump from one loop to the next. These small
loops belong to the same trajectory and form a chain of islands (Hénon $\&$
Heiles 1964). Such a torus is regular but easily induces the occurrence of
resonance and chaos. In particular, Orbit 3, which is a small loop in Figure
1(d), is considerably enlarged and densely filled in the phase space. This
result indicates the onset of strong chaoticity.
Although the loop of Orbit 1 is considerably smaller under the interaction of
the electromagnetic forces in Figure 2(a) than in the case without
electromagnetic forces in Figure 1(d), each algorithm exhibits nearly the same
performance in the two cases because the tori of Orbit 1 in the two cases
belong to the same category of trajectories, namely, simple single regular
loops. Orbits 2 and 3 exhibit completely different dynamical behavior, but
correspond to approximately the same Hamiltonian errors for each integration
method. Figures 2(b)-2(d) plot the errors for chaotic Orbit 3. The errors of
the second-order methods for chaotic Orbit 3 shown in Figure 2(b) are
approximately consistent with those for regular Orbit 1 shown in Figure 1(a).
The fourth-order algorithms S4 and EE4 exhibit no dramatic differences in
errors in Figure 2(c), similar to that in Figure 1(b). This result indicates
that orbital chaoticity does not explicitly affect algorithmic accuracy.
However, the explicit and implicit mixed method EI4 presents a secular drift
in errors due to roundoff errors. This increase in errors can be prevented
when a larger time step $h=10$ is adopted; in that case, accuracy is
maintained at an order of $10^{-5}$. That is, EI4 exhibits secular drift in
the Hamiltonian errors for the smaller time step $h=1$ but not for the larger
time step $h=10$.
The following is a simple analysis. The errors of a symplectic integrator
mostly consist of truncation and roundoff errors. When truncation errors
exceed roundoff errors, the symplectic integrator keeps the Hamiltonian errors
bounded, without secular drift, in appropriate situations. Roundoff errors
increase with the number $N$ of calculations; they are approximately estimated
as $N\epsilon$, where $\epsilon\sim 10^{-16}$ is the machine precision of
double floating-point arithmetic. When roundoff errors completely dominate the
total errors, the Hamiltonian (or energy) errors increase linearly with time.
Assume that a
symplectic method has a truncation energy error of order $10^{-12}$. The
total errors in the energy are stabilized at that order of magnitude when
$N<10^{4}$, but grow linearly when $N\gg 10^{4}$. If a symplectic method has a
truncation energy error of order $10^{-8}$, then the total errors in the
energy remain bounded and approach the order of the truncation errors when
$N<10^{8}$, whereas they increase linearly when $N\gg 10^{8}$. These
results have been confirmed by numerical experiments on $N$-body problems in
the solar system (Wu et al. 2003; Deng et al. 2020). In the present numerical
simulations, the truncation Hamiltonian errors of EI4 are in the order of
$10^{-9}$ for $h=1$ but the roundoff errors are $10^{-8}$ after $10^{8}$
integration steps. Given that the former errors are smaller than the latter
ones, secular drift exists in the Hamiltonian errors. However, the truncation
Hamiltonian errors of EI4 are in the order of $10^{-5}$ for $h=10$. They are
larger than the roundoff errors after $10^{8}$ integration steps. Therefore,
no secular drift occurs in the Hamiltonian errors.
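The crossover step counts quoted above follow from a one-line estimate; a small sketch of the $N\epsilon$ bookkeeping (nothing more than the arithmetic of the preceding paragraph):

```python
eps = 1e-16   # order of magnitude of double-precision machine epsilon
for err_trunc in (1e-12, 1e-9, 1e-8, 1e-5):
    # roundoff ~ N * eps overtakes the truncation error once N > err_trunc / eps
    print(f"truncation {err_trunc:.0e}: roundoff dominates beyond "
          f"N ~ {err_trunc / eps:.0e} steps")
```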
A conclusion can be drawn from Figures 1 and 2 that the stable behavior and
magnitude of the Hamiltonian errors for each algorithm mostly depend on the
choice of step sizes. To demonstrate this fact clearly, we list them in Tables
1 and 2, where chaotic Orbit 3 is used as a test orbit. The two second-order
symplectic integrators S2 and EI2 can make the errors bounded for the three
time steps, $h=0.1,1,10$. A larger time step is also suitable for the two
fourth-order symplectic integrators S4 and EI4. However, a smaller time step
is suitable for the extended phase-space methods. The reason why EE2 does not
produce stable errors for $h=1$ but does for $h=0.1$ (or EE4 does not produce
stable errors for $h=10$ but does for $h=1$) differs from why S4 does not
provide stable errors for $h=0.1$ but does for $h=1$. The error stability or
instability in the former case depends mostly on the midpoint permutations,
which must be performed at appropriately small time intervals, whereas in the
latter case it is primarily related to roundoff errors. A smaller time step is
also necessary for RK4 to obtain higher accuracy, although RK4 never keeps the
energy errors stable or bounded.
Computational costs are listed in Table 3. For the smaller step sizes, notable
differences in CPU times exist among the same-order methods. The proposed
explicit symplectic integrators achieve the best computational efficiency
among the algorithms of the same order at the same time step. The explicit and
implicit mixed symplectic methods require only slightly more computational
labor than the same-order new integrators because only the solutions of $r$
and $p_{r}$ in IM2 of Equation (B2) need to be iterated. Such partially
implicit constructions are faster to compute than completely implicit
integrators.
## 6 Conclusions
The major contribution of this study is the successful construction of
explicit symplectic integration algorithms in general relativistic
Schwarzschild type space-time geometries. The construction is based on an
appropriate splitting form of the Hamiltonian corresponding to this space-
time. The Hamiltonian is separated into four integrable parts whose analytical
solutions are explicit functions of proper time. The solutions of the four
parts are symmetrically composed into second- and fourth-order explicit
symplectic integrators, in analogy with the standard explicit symplectic
leapfrog methods that split the considered Hamiltonian into two integrable
parts with analytical solutions as explicit functions of time. The proposed
algorithms remain valid when an external magnetic field is included in the
vicinity of the black hole.
Numerical tests show that the newly proposed integration schemes effectively
control Hamiltonian errors without secular changes when appropriate step sizes
are adopted. They are well-behaved in the simulation of the long-term
evolution of regular orbits with single or many loops and weakly or strongly
chaotic orbits. Appropriately larger step sizes are acceptable for such
explicit symplectic integrators to maintain stable or bounded energy (or
Hamiltonian) errors. Explicit constructions are generally superior to
same-order implicit methods in computational efficiency.
In summary, the new methods achieve good long-term performance. Therefore, they are
highly appropriate for the long-term numerical simulations of regular and
chaotic motions of charged particles in the present non-integrable magnetized
Schwarzschild space-time background (Felice $\&$ Sorge 2003; Kološ et al.
2015; Yi $\&$ Wu 2020). The methods are also useful for studying the chaotic
motion of a charged particle in a tokamak magnetic field (Cambon et al. 2014).
They are suitable for investigating the capture cross section of magnetized
particles and the magnetized particles’ acceleration mechanism near a black
hole with an external magnetic field (Abdujabbarov et al. 2014). These methods
are applicable to the simulation of the dynamics of charged particles around a
regular black hole with a nonlinear electromagnetic source (Jawad et al.
2016). This class of explicit symplectic integration algorithms will be
extended to address other black hole gravitational problems, such as the
Reissner-Nordström space-time.
## APPENDIX
## Appendix A Discrete difference scheme of algorithm $S^{\mathcal{H}}_{2}$
From the $(n-1)$th step to the $n$th step, algorithm $S^{\mathcal{H}}_{2}$
uses the following discrete difference scheme:
$\theta^{\mathcal{H}4}=\theta_{n-1}+\frac{h}{2}p_{\theta,n-1}/r^{2}_{n-1},\qquad p^{\mathcal{H}4}_{r}=p_{r,n-1}+\frac{h}{2}p^{2}_{\theta,n-1}/r^{3}_{n-1};$
$r^{\mathcal{H}3}=[(r^{2}_{n-1}-\frac{3}{2}hp^{\mathcal{H}4}_{r})^{2}/r_{n-1}]^{1/3},\qquad p^{\mathcal{H}3}_{r}=p^{\mathcal{H}4}_{r}[(r^{2}_{n-1}-\frac{3}{2}hp^{\mathcal{H}4}_{r})/r^{2}_{n-1}]^{1/3};$
$r^{\mathcal{H}2}=r^{\mathcal{H}3}+\frac{h}{2}p^{\mathcal{H}3}_{r};$
$p^{\mathcal{H}1}_{r}=p^{\mathcal{H}3}_{r}+h[\frac{\ell^{2}}{(r^{\mathcal{H}2})^{3}\sin^{2}\theta^{\mathcal{H}4}}-\frac{E^{2}}{(r^{\mathcal{H}2}-2)^{2}}],\qquad p_{\theta n}=p_{\theta,n-1}+h\frac{\ell^{2}\cos\theta^{\mathcal{H}4}}{(r^{\mathcal{H}2})^{2}\sin^{3}\theta^{\mathcal{H}4}};$
$r^{*\mathcal{H}2}=r^{\mathcal{H}2}+\frac{h}{2}p^{\mathcal{H}1}_{r};$
$r_{n}=[((r^{*\mathcal{H}2})^{2}-\frac{3}{2}hp^{\mathcal{H}1}_{r})^{2}/r^{*\mathcal{H}2}]^{1/3},\qquad p^{*\mathcal{H}3}_{r}=p^{\mathcal{H}1}_{r}[((r^{*\mathcal{H}2})^{2}-\frac{3}{2}hp^{\mathcal{H}1}_{r})/(r^{*\mathcal{H}2})^{2}]^{1/3};$
$\theta_{n}=\theta^{\mathcal{H}4}+\frac{h}{2}p_{\theta n}/(r_{n})^{2},\qquad p_{rn}=p^{*\mathcal{H}3}_{r}+\frac{h}{2}(p_{\theta n})^{2}/(r_{n})^{3}.$
In this manner, the solutions $(r_{n},\theta_{n},p_{rn},p_{\theta n})$ at the
$n$th step are presented. Let the integration continue from the $n$th step to
the $(n+1)$th step.
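The scheme transcribes line by line into code; the following one-step routine is our Python transcription and is equivalent to composing the flows of Section 4:

```python
import math

def s2_step(r, th, pr, pth, h, E, ell):
    """One step of the difference scheme of algorithm S2^H above."""
    th4 = th + 0.5 * h * pth / r ** 2
    pr4 = pr + 0.5 * h * pth ** 2 / r ** 3
    u = r ** 2 - 1.5 * h * pr4
    r3 = (u ** 2 / r) ** (1.0 / 3.0)
    pr3 = pr4 * (u / r ** 2) ** (1.0 / 3.0)
    r2 = r3 + 0.5 * h * pr3
    pr1 = pr3 + h * (ell ** 2 / (r2 ** 3 * math.sin(th4) ** 2)
                     - E ** 2 / (r2 - 2.0) ** 2)
    pth_n = pth + h * ell ** 2 * math.cos(th4) / (r2 ** 2 * math.sin(th4) ** 3)
    r2s = r2 + 0.5 * h * pr1
    v = r2s ** 2 - 1.5 * h * pr1
    r_n = (v ** 2 / r2s) ** (1.0 / 3.0)
    pr3s = pr1 * (v / r2s ** 2) ** (1.0 / 3.0)
    th_n = th4 + 0.5 * h * pth_n / r_n ** 2
    pr_n = pr3s + 0.5 * h * pth_n ** 2 / r_n ** 3
    return r_n, th_n, pr_n, pth_n
```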
## Appendix B Descriptions of algorithms EI4 and EE4
Algorithm EI4 was discussed in the references (Lubich et al. 2010; Zhong et
al. 2010; Mei et al. 2013a, 2013b). Here, it is used to solve the Hamiltonian
(15). Its construction requires splitting this Hamiltonian into two parts
$K=K_{1}+\Lambda,$ (B1)
where $\Lambda=K_{2}+K_{3}+K_{4}$. The sub-Hamiltonian $K_{1}$ does not depend
on the momenta $p_{r}$ and $p_{\theta}$; thus, it is easily solved explicitly
and analytically, and its flow is labeled as the operator $\psi^{K_{1}}_{h}$.
The other sub-Hamiltonian, $\Lambda$, does not admit explicit analytical
solutions, but it can be integrated using the second-order implicit midpoint
rule (Feng 1986), labeled as the operator $IM2(h)$. Similar to the explicit
algorithm $\mathcal{S}_{2}$ in Equation (4), a second-order explicit and
implicit mixed symplectic integrator is symmetrically composed of the explicit
and implicit operators:
$EI2(h)=\psi^{K_{1}}_{h/2}\circ IM2(h)\circ\psi^{K_{1}}_{h/2}.$ (B2)
Such a mixed symplectic method has an evident advantage in computational
efficiency over the implicit midpoint method acting on the complete
Hamiltonian $K$. The fourth-order explicit and implicit mixed symplectic
integrator EI4 is obtained by substituting EI2 for $S^{\mathcal{H}}_{2}$ in
Equation (38).
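A Python sketch of the composition (B2), with the midpoint equation solved by plain fixed-point iteration (the cited papers do not prescribe a particular nonlinear solver; the iteration below is our choice and converges for sufficiently small $h$):

```python
def implicit_midpoint(f, z0, h, tol=1e-14, max_iter=50):
    """IM2: solve z1 = z0 + h * f((z0 + z1)/2) for z1 by fixed-point iteration.

    f(z) returns the Hamiltonian vector field of Lambda at the point z.
    """
    z1 = list(z0)
    for _ in range(max_iter):
        mid = [(a + b) / 2.0 for a, b in zip(z0, z1)]
        z_new = [a + h * d for a, d in zip(z0, f(mid))]
        if max(abs(a - b) for a, b in zip(z_new, z1)) < tol:
            return z_new
        z1 = z_new
    return z1

def EI2(state, h, f_Lambda, flow_K1):
    """Composition (B2): psi_K1(h/2) o IM2(h) o psi_K1(h/2)."""
    state = flow_K1(state, h / 2)
    state = implicit_midpoint(f_Lambda, state, h)
    state = flow_K1(state, h / 2)
    return state
```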
Algorithm EE4 is based on the idea of Pihajoki (2015). Its construction relies
on extending the four-dimensional phase-space variables
$(r,\theta,p_{r},p_{\theta})$ of the Hamiltonian $K$ to eight-dimensional
phase-space variables $(r,\theta,\tilde{r},\tilde{\theta},p_{r},$
$p_{\theta},\tilde{p}_{r},\tilde{p}_{\theta})$ of a new Hamiltonian, i.e.,
$\Gamma=\kappa_{1}(r,\theta,\tilde{p}_{r},\tilde{p}_{\theta})+\kappa_{2}(\tilde{r},\tilde{\theta},p_{r},p_{\theta}),$
(B3)
where
$\kappa_{1}(r,\theta,\tilde{p}_{r},\tilde{p}_{\theta})=\kappa_{2}(\tilde{r},\tilde{\theta},p_{r},p_{\theta})=K(r,\theta,p_{r},p_{\theta})$.
Evidently, the two sub-Hamiltonians $\kappa_{1}$ and $\kappa_{2}$ are
independently, explicitly and analytically solved, and then labeled as
operators $\psi^{\kappa_{1}}_{h}$ and $\psi^{\kappa_{2}}_{h}$. The two
operators are used to yield the second-order symplectic method
$\mathcal{S}_{2}$ and the fourth-order Forest-Ruth algorithm FR4, which are
given by Equations (4) and (5), respectively, with $\mathcal{A}$ and
$\mathcal{B}$ replaced by $\psi^{\kappa_{1}}$ and $\psi^{\kappa_{2}}$.
If the two independent Hamiltonians $\kappa_{1}$ and $\kappa_{2}$ have the
same initial conditions, then they should have the same solutions, i.e.,
$r=\tilde{r}$, $\theta=\tilde{\theta}$, $\tilde{p}_{r}=p_{r}$ and
$\tilde{p}_{\theta}=p_{\theta}$. However, these solutions are not equal
because of their couplings in the methods $\mathcal{S}_{2}$ and FR4. To make
them equal, Pihajoki (2015), Liu et al. (2016), Luo et al. (2017), Liu et al.
(2017), Luo $\&$ Wu (2017), Li $\&$ Wu (2017) and Wu $\&$ Wu (2018) introduced
permutations between the original variables and their corresponding extended
variables after the implementation of $\mathcal{S}_{2}$ or FR4. A good choice
is the midpoint permutation method (Luo et al. 2017):
$\mathcal{M}:~\frac{r+\tilde{r}}{2}\rightarrow r=\tilde{r},\qquad\frac{\theta+\tilde{\theta}}{2}\rightarrow\theta=\tilde{\theta};\qquad\frac{p_{r}+\tilde{p}_{r}}{2}\rightarrow p_{r}=\tilde{p}_{r},\qquad\frac{p_{\theta}+\tilde{p}_{\theta}}{2}\rightarrow p_{\theta}=\tilde{p}_{\theta}.$ (B4)
By adding the midpoint permutation map $\mathcal{M}$ after $\mathcal{S}_{2}$
or FR4, Luo et al. (2017) obtained algorithms EE2 and EE4 as follows:
$EE2=\mathcal{M}\otimes\mathcal{S}_{2},~{}~{}EE4=\mathcal{M}\otimes FR4.$ (B5)
The inclusion of $\mathcal{M}$ destroys the symplecticity of $\mathcal{S}_{2}$
and FR4, but EE2 and EE4, similar to the symplectic schemes $\mathcal{S}_{2}$
and FR4, still exhibit good long-term stable behavior in energy errors because
of their symmetry. Thus, they are called explicit symplectic-like algorithms
for the newly extended phase-space Hamiltonian $\Gamma$.
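Schematically, EE2 can be coded as follows (a sketch; `flow_k1` and `flow_k2` stand for the exact flows of $\kappa_{1}$ and $\kappa_{2}$ on the extended phase space and must be supplied by the user):

```python
def midpoint_permutation(z, z_tilde):
    """Map (B4): replace both copies of the variables by their mean."""
    avg = tuple((a + b) / 2.0 for a, b in zip(z, z_tilde))
    return avg, avg

def EE2(z, z_tilde, h, flow_k1, flow_k2):
    """EE2 of Equation (B5): the leapfrog (4) on Gamma followed by M."""
    z, z_tilde = flow_k1(z, z_tilde, h / 2)
    z, z_tilde = flow_k2(z, z_tilde, h)
    z, z_tilde = flow_k1(z, z_tilde, h / 2)
    return midpoint_permutation(z, z_tilde)
```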
## Acknowledgments
The authors are very grateful to a referee for useful suggestions. This
research has been supported by the National Natural Science Foundation of
China [Grant Nos. 11533004, 11973020 (C0035736), 11803020, 41807437, U2031145]
and the Natural Science Foundation of Guangxi (Grant Nos. 2018GXNSFGA281007
and 2019JJD110006).
## References
* Abbott et al. (2016) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Phy. Rev. Lett., 116, 061102
* Abdujabbarov et al. (2013) Abdujabbarov, A. A., Ahmedov, B. J., & Jurayeva, N. B. 2013, Phys. Rev. D, 87, 064042
* Abdujabbarov et al. (2014) Abdujabbarov, A., Ahmedov, B., Rahimov, O., & Salikhbaev., U. 2014, Phys. Scr., 89, 084008
* Avdyushev (2003) Avdyushev, E. A. 2003, Celestial Mechanics and Dynamical Astronomy, 87, 383
* Azreg-Aïnou (2016) Azreg-Aïnou, M. 2016, Eur. Phys. J. C, 76, 414
* Bacchini et al. (2018a) Bacchini, F., Ripperda, B., Chen, A. Y., & Sironi, L. 2018a, Astrophys. J. Suppl., 237, 6
* Bacchini et al. (2018b) Bacchini, F., Ripperda, B., Chen, A. Y., & Sironi, L. 2018b, Astrophys. J. Suppl., 240, 40
* Baumgarte (1972) Baumgarte, J. 1972, Comput. Methods Appl. Mech. Eng., 1, 1
* Baumgarte (1973) Baumgarte, J. 1973, Celest. Mech., 5, 490
* Benavides-Gallego et al. (2019) Benavides-Gallego, C. A., Abdujabbarov, A., Malafarina, D., Ahmedov, B., & Bambi, C. 2019, Phys. Rev. D, 99, 044012
* Borm & Spaans (2013) Borm, C. V., & Spaans, M. 2013, Astron. Astrophys., 553, L9
* Brown (2006) Brown, J. D. 2006, Phys. Rev. D, 73, 024001
* Cambon et al. (2014) Cambon, B., Leoncini, X., Vittot, M., Dumont, R., & Garbet, X. 2014, Chaos, 24, 033101
* Carter (1968) Carter, B. 1968, Phy. Rev., 174, 1559
* Chambers (2000) Chambers, J. E., & Murison, M. A. 2000, AJ, 119, 425
* Chin (1997) Chin, S. A. 1997, Phys. Lett. A, 226, 344
* Deng et al. (2020) Deng, C., Wu, X., & Liang, E. 2020, MNRAS, 496, 2946
* EHT Collaboration et al. (2019) EHT Collaboration et al. 2019, ApJL, 875, L1
* Einstein (1915) Einstein, A. 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin: Deutsche Akademie der Wissenschaften zu Berlin)
* Einstein (1916) Einstein, A. 1916, Sitzungsber. K. Preuss. Akad. Wiss., 1, 688
* Felice & Sorge (2003) Felice, D. d, & Sorge, F. 2003, Class. Quantum Grav., 20, 469
* Feng (1986) Feng, K. 1986, Journal of Computational Mathematics, 44, 279
* Feng et al. (2009) Feng, K., & Qin, M. Z. 2009, Symplectic Geometric Algorithms for Hamiltonian Systems (Hangzhou, New York: Zhejiang Science and Technology Publishing House, Springer)
* Forest & Ruth (1990) Forest, E., & Ruth, R. D. 1990, Physica D, 43, 105
* Fukushima (2003a) Fukushima, T. 2003a, AJ, 126, 1097
* Fukushima (2003b) Fukushima, T. 2003b, AJ, 126, 2567
* Fukushima (2003c) Fukushima, T. 2003c, AJ, 126, 3138
* Fukushima (2004) Fukushima, T. 2004, AJ, 127, 3638
* Hairer et al. (1999) Hairer E., Lubich C. & Wanner G. 1999, Geometric Numerical Integration, Springer-Verlag, Berlin
* Hénon & Heiles (1964) Hénon, M., & Heiles, C. 1964, AJ, 69, 73
* Hernandez & Dehnen (2017) Hernandez, D. M., & Dehnen, W. 2017, MNRAS, 468, 2614
* Hu et al. (2019) Hu S., Wu X., Huang G., & Liang E. 2019, ApJ, 887, 191
* Huang & Innanen (1983) Huang, T. Y. & Innanen, K. 1983, AJ, 88, 870
* Jawad et al. (2016) Jawad, A., Ali, F., Jamil, M., & Debnath, U. 2016, Commun. Theor. Phys., 66, 509
* Kerr (1963) Kerr, R. P. 1963, Phy. Rev. Lett., 11, 237
* Kološ et al. (2015) Kološ, M., Stuchlík, Z., & Tursunov, A. 2015, Class. Quantum Grav., 32, 165009
* Kopáček et al. (2010) Kopáček, O., Karas, V., Kovář, J., & Stuchlík, Z. 2010, ApJ, 722, 1240
* Kopáček & Karas (2014) Kopáček, O., & Karas, V. 2014, ApJ, 787, 117
* Laskar & Robutel (2001) Laskar J., & Robutel, P. 2001, Celest. Mech. Dyn. Astron., 80, 39
* Li & Wu (2017) Li D., & Wu, X. 2017, Mon. Not. R. Astron. Soc., 469, 3031
* Li & Wu (2019) Li D., & Wu, X. 2019, Eur. Phys. J. Plus, 134, 96
* Liao (1997) Liao, X. H. 1997, Celest. Mech. Dyn. Astron., 66, 243
* Lichtenberg & Lieberman (1983) Lichtenberg, A. J., & Lieberman, M. A. 1983, Regular and Chaotic Dynamics (Springer-Verlag, New York)
* Liu et al. (2016) Liu, L., Wu, X., Huang, G. Q., & Liu, F. 2016, Mon. Not. R. Astron. Soc., 459, 1968
* Liu et al. (2017) Liu, L., Wu, X., & Huang, G. Q. 2017, Gen. Relativ. Gravit., 49, 28
* Lubich et al. (2010) Lubich, C., Walther, B., & Brügmann, B. 2010, Phys. Rev. D, 81, 104025
* Luo et al. (2017) Luo, J., Wu, X., Huang, G., & Liu, F. 2017, ApJ, 834, 64
* Luo & Wu (2017) Luo, J., & Wu, X. 2017, Eur. Phys. J. Plus, 132, 485
* Ma et al. (2008a) Ma, D. Z., Wu, X., & Zhong, S. Y. 2008a, ApJ, 687, 1294
* Ma et al. (2008b) Ma, D. Z., Wu, X., & Zhu, J. F. 2008b, New Astrom., 13, 216
* McLachlan & Atela (1992) McLachlan, R. I., & Atela, P. 1992, Nonlinearity, 5, 541
* Mei et al. (2013a) Mei, L., Ju, M., Wu, X., & Liu, S. 2013a, Mon. Not. R. Astron. Soc., 435, 2246
* Mei et al. (2013b) Mei, L., Wu, X., & Liu, F. 2013b, Eur. Phys. J. C, 73, 2413
* Nacozy (1971) Nacozy, P. E. 1971, Astrophys. Space Sci., 14, 40
* Omelyan et al. (2002a) Omelyan, I. P., Mryglod, I. M., & Folk, R. 2002a, Comput. Phys. Commun., 146, 188
* Omelyan et al. (2002b) Omelyan, I. P., Mryglod, I. M., & Folk, R. 2002b, Phys. Rev. E, 66, 026701
* Omelyan et al. (2003) Omelyan, I. P., Mryglod, I. M., & Folk, R. 2003, Comput. Phys. Commun., 151, 272
* Pihajoki (2015) Pihajoki, P. 2015, Celest. Mech. Dyn. Astron., 121, 211
* Preto & Saha (2009) Preto, M., & Saha, P. 2009, ApJ, 703, 1743
* Rein & Spiegel (2015) Rein, H., & Spiegel, D. S. 2015, MNRAS, 446, 1424
* Ruth (1983) Ruth, R. D. 1983, IEEE Trans. Nucl. Sci., NS-30, 2669
* Schwarzschild (1916) Schwarzschild, K. 1916, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin: Deutsche Akademie der Wissenschaften zu Berlin)
* Seyrich & Lukes-Gerakopoulos (2012) Seyrich, J., & Lukes-Gerakopoulos, G. 2012, Phys. Rev. D, 86, 124013
* Shaymatov et al. (2015) Shaymatov, S., Patil, M., Ahmedov, B., & Joshi, P. S. 2015, Phys. Rev. D, 91, 064025
* Stuchlík & Kološ (2016) Stuchlík, Z., & Kološ, M. 2016, Eur. Phys. J. C, 76, 32
* Suzuki (1991) Suzuki, M. 1991, J. Math. Phys. (N.Y.), 32, 400
* Swope et al. (1982) Swope, W. C., Andersen, H. C., Berens, P. H., & Wilson, K. R. 1982, J. Chem. Phys. 76, 637
* Takahashi & Koyama (2009) Takahashi, M., & Koyama, H. 2009, ApJ, 693, 472
* Tao (2016) Tao, M. 2016, J. Comput. Phys., 327, 245
* Tsang et al. (2015) Tsang, D., Galley, C. R., Stein, L. C., & Turner, A. 2015, ApJL, 809, L9
* Tursunov et al. (2016) Tursunov, A., Stuchlík, Z., & Kološ, M. 2016, Phys. Rev. D, 93, 084012
* Wang et al. (2016) Wang, S. C., Wu, X., & Liu, F. Y. 2016, MNRAS, 463, 1352
* Wang et al. (2018) Wang, S. C., Huang, G. Q., & Wu, X. 2018, AJ, 155, 67
* Wisdom (1982) Wisdom, J. 1982, AJ, 87, 577
* Wisdom & Holman (1991) Wisdom, J., & Holman, M. 1991, AJ, 102, 1528
* Wisdom et al. (1996) Wisdom, J., Holman, M., & Touma, J. 1996, Fields Inst. Commun., 10, 217
* Wu et al. (2003) Wu, X., Huang, T. Y., & Wan, X. S. 2003, Chinese Astronomy and Astrophysics, 27, 114
* Wu et al. (2007) Wu, X., Huang, T. Y., Wan, X. S., & Zhang, H. 2007, AJ, 133, 2643
* Wu & Xie (2010) Wu, X., & Xie, Y. 2010, Phys. Rev. D, 81, 084045
* Wu & Wu (2018) Wu, Y. L., & Wu, X. 2018, International Journal of Modern Physics C, 29, 1850006
* Yi & Wu (2020) Yi, M., & Wu, X. 2020, Phys. Scr., 95, 085008
* Yoshida (1990) Yoshida, H. 1990, Phys. Lett. A, 150, 262
* Zhong et al. (2010) Zhong, S. Y., Wu, X., Liu, S. Q., & Deng, X. F. 2010, Phys. Rev. D, 82, 124040
Figure 1: (a)-(c) Hamiltonian errors $\Delta H=1+2\mathcal{H}$ from Eq. (17)
for several algorithms solving the Schwarzschild problem (12). The adopted
algorithms are the new second-order explicit symplectic integrator S2 in
Equation (37), the second-order explicit and implicit mixed symplectic method
EI2 in Equation (B.2), the second-order explicit extended phase-space
symplectic-like algorithm EE2, the new fourth-order explicit symplectic
integrator $S_{4}$ in Equation (38), the fourth-order explicit and implicit
mixed symplectic method EI4, the fourth-order explicit extended phase-space
symplectic-like algorithm EE4 in Equation (B5), and the fourth-order Runge-
Kutta scheme RK4. The energy and angular momentum of particles are $E=0.995$
and $\ell$ (or $L)$=4.6, respectively, and the proper time-step is $h=1$. The
integrated orbit (called Orbit 1) has initial conditions $r=11$,
$\theta=\pi/2$ and $p_{r}=0$. The initial value of $p_{\theta}$ $(>0)$ is
given by Equation (17). (d) Poincaré sections on the plane $\theta=\pi/2$ and
$p_{\theta}>0$. Apart from Orbit 1, Orbits 2 and 3 with initial separations
$r=70$ and 110, respectively, are plotted. The initial values of $\theta$ and
$p_{r}$ for Orbits 2 and 3 are the same as those for Orbit 1. The three orbits
are regular tori because of the integrability of the system (12).
Figure 2: Same as Figure 1, but an external magnetic field with parameter
$\beta=8.9\times 10^{-4}$ is included within the vicinity of the black hole.
(a) Poincaré sections. Orbit 1 is still a regular torus, Orbit 2 has many
islands, and Orbit 3 is strongly chaotic. (b)-(d) Hamiltonian errors $\Delta
K=1+2K$ from Equation (18) for the algorithms solving the three orbits in the
system (15).
# On favourite sites of a random walk in moderately sparse random environment
Alicja Kołodziejska
###### Abstract.
We study the favourite sites of a random walk evolving in a sparse random
environment on the set of integers. The walker moves symmetrically apart from
some randomly chosen sites where we impose a random drift. We prove annealed
limit theorems for the time the walk spends in its favourite sites in two
cases. The first one, in which it is the distribution of the drift that
determines the limiting behaviour of the walk, is a generalization of known
results for a random walk in i.i.d. random environment. In the second case a
new behaviour appears, caused by the sparsity of the environment.
Mathematical Institute, University of Wrocław, Pl. Grunwaldzki 2, 50-384
Wrocław, Poland. E-mail<EMAIL_ADDRESS>
Keywords: random walk in random environment, branching process in random
environment, sparse random environment, local times.
MSC2020 subject classifications: primary: 60K37; secondary: 60F05.
## 1\. Introduction
One of the most classic and well studied stochastic processes is a simple
symmetric random walk on the set of integers, which models the movement of a
single particle in one-dimensional, homogeneous medium. The simplicity of the
model allows to analyse it with the help of such classic results as the strong
law of large numbers or the central limit theorem; however, its homogeneity is
not always desired. In many applications one would like to consider some
obstacles or impurities of the medium, possibly placed randomly, that would
have impact on the movement of the particle. One of the ways of defining such
random environment was proposed by Solomon in the seventies [16]. In his
model, called a random walk in a random environment (RWRE), one first samples
the environment by putting random drift independently at every integer, and
then the particle moves in such inhomogeneous, random medium. It soon
transpired that this additional noise leads to behaviour not observed in the
deterministic setting. Various authors described how the distribution of the
environment determines such properties of the walk as its transience and
asymptotic speed [16, 1], limit theorems [11, 13], or large deviations [8, 4].
In particular, under suitable distribution of the drift, the walk may be
transient, but with sub-linear speed, and no longer satisfy the central limit
theorem. This new behaviour is caused, heuristically speaking, by the traps
occurring in the environment, i.e. sites with unfavourable drift; the particle
is forced to make many attempts to cross such a site and this fact has
significant impact on the limiting behaviour of the walk.
The model studied in this article was introduced by Matzavinos, Roitershtein,
and Seol in [14] and is called a random walk in a sparse random environment
(RWSRE). The aim is to consider an environment in which the impurities appear
not at every site, as it is the case in the RWRE, but are put sparsely on the
set of integers. To this end, the environment is sampled by marking some sites
by a two-sided renewal process and putting random drifts only in the marked
points. In the unmarked sites the movement of the particle is symmetric.
Therefore the RWSRE may be seen as an interpolation between the simple
symmetric random walk and the RWRE, and one may expect that, depending on the
distribution of the environment, it should manifest properties resembling one
or the other. Indeed, this dichotomy was already observed in [6, 5, 7] in the
context of limit theorems for the position of the walk and the sequence of
first passage times. Under suitable assumptions on the distribution of the
environment, it is the drift that has major impact on the movement of the
particle and the limit theorems resemble results known for the RWRE. However,
under different assumptions, ones that favour long distances between marked
points, in most sites the walk behaves like a simple symmetric random walk and
this change is visible in the macroscopic scale of the limit theorems.
(a) The case of dominating drift: the particle spends most of its time trying
to cross sites with unfavourable drift.
(b) The case of dominating sparsity: in most of the sites the particle
performs a simple symmetric random walk.
Figure 1.1. Exemplary trajectories of a transient RWSRE. Horizontal lines
indicate marked sites; the darker the line, the stronger the drift to
$-\infty$.
The aim of this article is to study the sequence of maximal local times, i.e.
the amount of time spent by the particle in its favourite sites, in the case
of the transient walk in a sparse random environment. We prove annealed limit
theorems for this sequence under two sets of assumptions. In the first case it
is the drift that drives the limiting behaviour of the walk, and our results
may be seen as a generalization of those obtained by Dolgopyat and Goldsheid
in [10, Theorem 4] for the RWRE. However, the techniques used in [10] were
different from those presented here. In this article we follow the method
proposed by Kesten et al. in [13] when examining the hitting times, that is we
rephrase the question posed for the walk into the setting of the associated
branching process. This method proves useful both in the case of dominating
drift and the complementary case, in which the sparsity of the environment
plays the dominant role in determining the limiting behaviour of the walk.
The article is organized as follows: in the remaining part of this section we
define the examined model formally. Statement of our main results is given in
Section 2. Section 3 introduces the branching process associated with the walk
and presents some of its properties. The proofs of the main theorems are given
in Sections 4 and 5.
### 1.1. Random walk in sparse random environment
Let $\Omega=(0,1)^{\mathbb{Z}}$ and let ${\mathcal{F}}$ be the corresponding
cylindrical $\sigma$-algebra. A random element
$\omega=(\omega_{n})_{n\in{\mathbb{Z}}}$ of $(\Omega,{\mathcal{F}})$
distributed according to a probability measure ${\rm P}$ is called a random
environment. Let $\mathcal{X}={\mathbb{Z}}^{\mathbb{N}}$ be the set of
possible paths of a random walk on ${\mathbb{Z}}$, with corresponding
cylindrical $\sigma$-algebra $\mathcal{G}$. Then any $\omega\in\Omega$ and
$i\in{\mathbb{Z}}$ gives rise to a measure ${\rm P}_{\omega}^{i}$ on
$\mathcal{X}$ such that ${\rm P}_{\omega}^{i}[X_{0}=i]=1$ and
(1.1) ${\rm P}_{\omega}^{i}\left[X_{n+1}=j|X_{n}=k\right]=\begin{cases}\omega_{k},&\textnormal{if }j=k+1,\\ 1-\omega_{k},&\textnormal{if }j=k-1,\\ 0,&\textnormal{otherwise,}\end{cases}$
where $X=(X_{n})_{n\in\mathbb{N}}\in\mathcal{X}$. That is, under ${\rm
P}_{\omega}^{i}$, $X$ is a nearest-neighbour random walk starting from $i$
with transition probabilities given by the sequence $\omega$. In particular,
it is a time-homogeneous Markov chain.
Since the environment itself is random, it is natural to consider a measure
$\mathbb{P}^{i}$ on
$(\Omega\times\mathcal{X},{\mathcal{F}}\otimes\mathcal{G})$ such that
(1.2) $\mathbb{P}^{i}\left[F\times G\right]=\int_{F}{\rm
P}_{\omega}^{i}[G]\,{\rm P}(d\omega)$
for any $F\in{\mathcal{F}},G\in\mathcal{G}$. We shall write ${\rm
P}_{\omega}={\rm P}_{\omega}^{0}$ and $\mathbb{P}=\mathbb{P}^{0}$. Observe
that under $\mathbb{P}$ the walk $X$ may exhibit long-time dependencies and
thus no longer be a Markov chain.
The process $X$ defined above is called a random walk in a random environment
and was introduced by Solomon [16]. A well studied case is $\omega$ being an
i.i.d. sequence, which gives rise to a random walk in i.i.d. random
environment.
We will consider a specific choice of environment that was introduced first by
Matzavinos, Roitershtein, and Seol in [14]. Consider an i.i.d. sequence
$((\xi_{k},\lambda_{k}))_{k\in{\mathbb{Z}}}\in(\mathbb{N}_{+}\times(0,1))^{\mathbb{Z}}$
and define, for any $n,k\in{\mathbb{Z}}$,
(1.3) $S_{n}=\begin{cases}\sum_{j=1}^{n}\xi_{j},&n>0,\\ 0,&n=0,\\ -\sum_{j=n+1}^{0}\xi_{j},&n<0;\end{cases}\qquad\omega_{k}=\begin{cases}\lambda_{n+1},&\textnormal{if }k=S_{n}\textnormal{ for some }n\in{\mathbb{Z}},\\ 1/2,&\textnormal{otherwise.}\end{cases}$
The random walk evolving in an environment $\omega$ defined by (1.3) is called
a random walk in a sparse random environment. We shall refer to the random
sites $S_{n}$ as marked points and write $(\xi,\lambda)$ for a generic element
of the sequence $((\xi_{k},\lambda_{k}))_{k\in{\mathbb{Z}}}$. The environment
is called moderately sparse if ${\rm E}\xi<\infty$ and strongly sparse
otherwise.
Observe that if $\xi=1$ almost surely, then we obtain once again a random walk
in i.i.d. environment. Otherwise the environment is split into blocks of
lengths given by the sequence $(\xi_{k})_{k\in{\mathbb{Z}}}$; within every
block the particle performs a symmetric walk, while the random drift occurs at
the endpoints of blocks. Therefore the RWSRE model may be seen as an
interpolation between a simple symmetric random walk and a walk in i.i.d.
environment, or as a generalization of the latter.
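A direct simulation of the model is straightforward. The following Python sketch (the helper names are ours; for brevity the negative half-axis is left symmetric, which is harmless for a walk that is transient to $+\infty$) samples the environment of (1.3) on the nonnegative half-axis and runs the walk:

```python
import random

def sample_environment(n_blocks, xi_sampler, lam_sampler):
    """Environment (1.3): random drifts at the marked sites S_0 = 0 < S_1 < ..."""
    omega = {0: lam_sampler()}            # S_0 = 0 is marked
    s = 0
    for _ in range(n_blocks):
        s += xi_sampler()                 # block length xi >= 1
        omega[s] = lam_sampler()
    return omega

def walk_until(n, omega, rng=random):
    """Run the walk from 0 until it first hits n; return the whole path."""
    path = [0]
    while path[-1] != n:
        k = path[-1]
        p_right = omega.get(k, 0.5)       # unmarked sites are symmetric
        path.append(k + 1 if rng.random() < p_right else k - 1)
    return path

# Example with bounded block lengths and drifts pushing to +infinity.
env = sample_environment(500, lambda: random.randint(1, 10),
                         lambda: random.uniform(0.5, 0.9))
```

Under the convention (1.3), the drift at $S_{n}$ is $\lambda_{n+1}$; since the pairs $((\xi_{k},\lambda_{k}))$ are i.i.d., drawing an independent $\lambda$ at each marked site, as above, yields the same law whenever $\xi$ and $\lambda$ are independent.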
We should remark that the model we consider here is slightly different from
that defined originally in [14]. That is, due to (1.3), we allow for
dependence between the length of the block between marked sites and the drift
at its left end, while originally the dependence was allowed for the drift at
the right end. This change of convention arises naturally from time reversal
coming with the associated branching process which we introduce in Section 3,
and appeared also in [5, 6], where annealed limit theorems for the position of
the walk were proved.
For $k\in{\mathbb{Z}}$, let
$\rho_{k}=\frac{1-\lambda_{k}}{\lambda_{k}}.$
The variables $(\rho_{k})_{k\in{\mathbb{Z}}}$, which quantify the drift in the
environment, appear naturally when examining the properties of the walk. In
particular, as was shown in [14], if
(1.4) ${\rm E}\log\xi<\infty,\quad{\rm E}\log\rho<0,$
then RWSRE is transient to $+\infty$, $\mathbb{P}$-almost surely. From now on
we will assume that conditions (1.4) are satisfied.
## 2\. Annealed limit theorems for maximal local time
Consider a sequence of hitting times
(2.1) $T_{n}=\inf\\{k\geq 0:X_{k}=n\\}$
and let, for $k\leq n$,
(2.2) $L_{k}(n)=|\\{m\leq T_{n}\,:\,X_{m}=k\\}|$
be the local time, i.e. number of times the walk visits $k$ before reaching
$n$. Our object of interest is the limiting behaviour of maximal local time,
that is the variable $\max_{k\leq n}L_{k}(n)$, as $n\to\infty$. We shall
present two cases in which an annealed limit theorem holds for this sequence
of variables, with Fréchet distribution in the limit. We assume that (1.4)
holds, i.e. the walk is transient. Additionally, we consider two sets of
assumptions:
Assumptions $(A)$: For some $\alpha\in(0,2)$,
* •
${\rm E}\rho^{\alpha}=1$;
* •
${\rm E}\rho^{\alpha}\log^{+}\rho<\infty$;
* •
the distribution of $\log\rho$ is non-arithmetic;
* •
${\rm E}\xi^{(\alpha+\delta)\vee 1}<\infty$ for some $\delta>0$;
* •
${\rm E}\xi^{\alpha}\rho^{\alpha}<\infty$.
Recall that a distribution is non-arithmetic if it is not concentrated on any
lattice $c{\mathbb{Z}}$, $c>0$. Note that without loss of generality we may
assume that $\alpha+\delta\leq 2$. In this case the limiting behaviour of
maxima is determined mostly by the parameter $\alpha$, that is by properties
of $\rho$; it is a generalization of the result known for the walk in i.i.d.
environment. We shall prove the following:
###### Theorem 2.1.
Under assumptions $(A)$, there is a constant $c_{\alpha}>0$ such that for all
$x>0$,
$\lim_{n\to\infty}\mathbb{P}\left[\frac{\max_{k\leq
n}L_{k}(n)}{n^{1/\alpha}}>x\right]=1-e^{-c_{\alpha}x^{-\alpha}}.$
It turns out that the crucial assumption in this case is that ${\rm
E}\xi^{\alpha+\delta}<\infty$. Different behaviour appears when $\xi$ does not
have high enough moments. Consider the following:
Assumptions $(B)$: For some $\beta\in[1,2)$,
* •
${\rm P}[\xi>x]\sim x^{-\beta}\ell(x)$ for some slowly varying $\ell$;
* •
${\rm E}\rho^{\beta+\delta}<1$ for some $\delta>0$;
* •
$\xi$ and $\rho$ are independent;
* •
if $\beta=1$, assume ${\rm E}\xi<\infty$.
In this case we may also assume that $\beta+\delta\leq 2$. Observe that we do
not assume that there exists $\alpha$ such that ${\rm E}\rho^{\alpha}=1$.
However, if it does exist, then $\alpha>\beta$ and ${\rm
E}\xi^{\alpha}=\infty$. Since $\xi$ has regularly varying tails, a good
scaling for maxima of $(\xi_{n})_{n\in\mathbb{N}}$ is a sequence
$(a_{n})_{n\in\mathbb{N}}$ such that
(2.3) $\lim_{n\to\infty}n{\rm P}[\xi>a_{n}]=1.$
It turns out that this is also a good scaling for the maxima of the local times.
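For instance, in the pure Pareto case ${\rm P}[\xi>x]=x^{-\beta}$ for $x\geq 1$ (i.e. $\ell\equiv 1$), the defining relation (2.3) can be solved explicitly:
$n{\rm P}[\xi>a_{n}]=na_{n}^{-\beta}=1\quad\Longleftrightarrow\quad a_{n}=n^{1/\beta};$
in the general case $a_{n}$ grows like $n^{1/\beta}$ up to a slowly varying correction.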
###### Theorem 2.2.
Under assumptions $(B)$, there is a constant $c_{\beta}>0$ such that for all
$x>0$,
$\lim_{n\to\infty}\mathbb{P}\left[\frac{\max_{k\leq
n}L_{k}(n)}{a_{n}}>x\right]=1-e^{-c_{\beta}x^{-\beta}}.$
The exact forms of the constants $c_{\alpha}$ and $c_{\beta}$ will be given in the course of the proofs.
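Theorem 2.1 also lends itself to a quick Monte Carlo illustration. In the sketch below (Python; all distributional choices are hypothetical and serve only as an example satisfying $(A)$) we take $\rho=e^{\mu+\mathcal{N}(0,1)}$ with $\mu=-0.8$, for which ${\rm E}\rho^{\alpha}=e^{\alpha\mu+\alpha^{2}/2}=1$ at $\alpha=-2\mu=1.6$, together with light-tailed blocks $\xi\sim 1+\mathrm{Poisson}(2)$; the rescaled tail $x^{\alpha}\,\mathbb{P}[\max_{k}L_{k}(n)/n^{1/\alpha}>x]$ should then be roughly constant in $x$, in agreement with the Fréchet limit.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = -0.8
alpha = -2 * mu                  # E rho^alpha = 1 for lognormal rho = exp(mu + N(0,1))

def scaled_max_local_time(n_blocks):
    """One sample of max_k L_k(n) / n^(1/alpha), where n is the last marked site."""
    xi = 1 + rng.poisson(2.0, size=n_blocks)
    rho = np.exp(mu + rng.standard_normal(n_blocks))
    lam = 1.0 / (1.0 + rho)                        # inverts rho = (1 - lambda)/lambda
    S = np.concatenate(([0], np.cumsum(xi)))
    drift = dict(zip(S[:-1].tolist(), lam))
    target = int(S[-1])
    L = np.zeros(target + 1, dtype=np.int64)
    x = 0
    while x < target:
        if x >= 0:
            L[x] += 1
        x += 1 if rng.random() < drift.get(x, 0.5) else -1
    return L.max() / target ** (1 / alpha)

samples = np.array([scaled_max_local_time(600) for _ in range(300)])
for x in (1.0, 2.0, 4.0):
    print(f"x^alpha * P[. > {x}] ~", x ** alpha * (samples > x).mean())
```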
## 3. Auxiliary results
Instead of examining the local times explicitly, we pass to a branching process associated with the RWSRE. In this section we describe the construction of this process and prove auxiliary lemmas which will be used in both of the cases under examination.
### 3.1. Associated branching process
An important property of a transient nearest neighbour random walk on
${\mathbb{Z}}$ is its duality with a branching process. Consider a walk
$(X_{n})_{n\in\mathbb{N}}$ such that $X_{0}=0$ and $X_{n}\to\infty$ almost
surely, evolving in an environment $\omega=(\omega_{k})_{k\in{\mathbb{Z}}}$.
Recall that, for $n\in\mathbb{N}$,
$T_{n}=\inf\\{k\in\mathbb{N}\,:\,X_{k}=n\\}$
is the first passage time and, for $k\leq n$,
$L_{k}(n)=|\\{m\leq T_{n}:X_{m}=k\\}|$
is the local time, i.e. the number of times the walk visits site $k$ before
reaching $n$. First of all, note that the transience of the walk implies that,
almost surely, the walk spends only finite time on the negative half-axis.
That is, for any sequence $b_{n}\to\infty$,
$\frac{\max_{k<0}L_{k}(n)}{b_{n}}\to 0\quad\mathbb{P}\textnormal{-a.s.}$
Therefore, when examining the limit theorems, we may restrict our analysis to
the variables $L_{k}(n)$ for $k\geq 0$.
Figure 3.1. A sample path of a simple walk and the corresponding realization of a branching process. Immigrants (marked in red) correspond to arrivals at new sites. The subtrees correspond to the excursions of the walk; the first excursion from $7$ and its corresponding subtree are marked in blue.
The visits to $k\geq 0$ counted by $L_{k}(n)$ may be split into visits from
the left and from the right, that is,
$\begin{split}L_{k}(n)&=|\\{m\leq T_{n}\,:\,X_{m}=k\\}|\\\ &=|\\{m\leq
T_{n}\,:\,X_{m-1}=k-1,\,X_{m}=k\\}|+|\\{m\leq
T_{n}\,:\,X_{m-1}=k+1,\,X_{m}=k\\}|.\end{split}$
Moreover, since the walk is simple, it makes a step from $k-1$ to $k$ when it
visits site $k$ for the first time. After that, it may make some excursions to
the left from $k$; such an excursion always begins with a step from $k$ to
$k-1$ and ends with a step from $k-1$ to $k$. Therefore, to count all the
visits the walk makes to given sites, it is enough to count its steps to the
left. That is, for fixed $n\in\mathbb{N}$ and $0\leq k\leq n$,
$\begin{split}L_{k}(n)&=1+|\\{m\leq
T_{n}\,:\,X_{m-1}=k,\,X_{m}=k-1\\}|+|\\{m\leq
T_{n}\,:\,X_{m-1}=k+1,\,X_{m}=k\\}|\\\
&=1+\widetilde{Z}_{k-1}+\widetilde{Z}_{k},\end{split}$
where $\widetilde{Z}_{k}=|\\{m\leq T_{n}\,:\,X_{m-1}=k+1,\,X_{m}=k\\}|$ is the
number of visits to point $k$ from the right. The main observation is that the
process given by $Z_{k}=\widetilde{Z}_{n-k}$ has a branching structure. Every
step from $n-k$ to $n-k-1$ occurs either before the walk has discovered the site $n-k+1$, or between consecutive steps from $n-k+1$ to $n-k$. That is,
$Z_{k+1}\overset{{\rm d}}{=}\sum_{j=1}^{Z_{k}+1}G_{n,k}^{(j)},$
where $G_{n,k}^{(j)}$, for $j\leq Z_{k}$, counts the number of steps from $n-k$ to $n-k-1$ between the $j$’th and the $(j+1)$’th step from $n-k+1$ to $n-k$, and
$G_{n,k}^{(Z_{k}+1)}$ counts the number of steps from $n-k$ to $n-k-1$ before
the first visit to $n-k+1$. Observe that, due to the strong Markov property of
the walk, the variables $G_{n,k}^{(j)}$ are i.i.d., independent of $Z_{k}$,
and have geometric distribution with parameter $\omega_{n-k}$, i.e.
${\rm
P}_{\omega}\left[G_{n,k}^{(j)}=m\right]=\omega_{n-k}(1-\omega_{n-k})^{m}\quad\textnormal{for
}m=0,1,2,\dots.$
Therefore, $Z=(Z_{k})_{k\in\mathbb{N}}$ is a branching process in random
environment with unit immigration; note that we do not count the immigrant, so
that $Z_{0}=0$. Moreover, for any fixed $n\in\mathbb{N}$,
(3.1) $\left(L_{k}(n)\right)_{0\leq k\leq n}\overset{{\rm
d}}{=}\left(1+Z_{n-k+1}+Z_{n-k}\right)_{0\leq k\leq n}.$
In particular, if $X$ is a random walk in a sparse random environment, its
associated branching process is a branching process in a sparse random
environment (BPSRE). If in the above construction we consider the walk stopped
upon reaching a marked point $S_{n}$, the branching process starts from one
immigrant and evolves in the environment divided into blocks of lengths given
by $(\xi_{n-k})_{k\in\mathbb{N}}$; within the blocks the reproduction is given
by the law $Geo(1/2)$, while the particles in the $k$’th marked generation are
born with the law $Geo(\lambda_{n-k})$. When examining the process $Z$, it is
convenient – and valid, since the environment is given by an i.i.d. sequence –
to reverse the enumeration, so that the block lengths are given by
$(\xi_{k})_{k\in\mathbb{N}}$ and the reproduction law at the $k$’th marked point is $Geo(\lambda_{k})$. The process $Z$ may then be defined formally as follows:
for any fixed environment $\omega$, under ${\rm P}_{\omega}$,
$\displaystyle Z_{0}$ $\displaystyle=0,$ $\displaystyle Z_{k}$
$\displaystyle=\sum_{j=1}^{Z_{k-1}+1}G_{k}^{(j)},$
where the variables $(G_{k}^{(j)})_{j\in\mathbb{N}}$ are independent of
$Z_{k-1}$ and each other, and
$G_{k}^{(j)}\overset{{\rm
d}}{=}Geo(\omega_{k})\quad\textnormal{for}\quad\omega_{k}=\begin{cases}\lambda_{n}\quad&\textnormal{if
$k=S_{n}$ for some $n\in\mathbb{N}$;}\\\
1/2&\textnormal{otherwise.}\end{cases}$
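For a fixed environment $\omega$ the above recursion is straightforward to simulate. A minimal sketch follows (Python; the environment law is a hypothetical stand-in); note that numpy's geometric distribution is supported on $\\{1,2,\dots\\}$, so one is subtracted from every draw to obtain the $Geo(p)$ law on $\\{0,1,2,\dots\\}$ used here.

```python
import numpy as np

rng = np.random.default_rng(2)

def branching_Z(omega):
    """Z_0 = 0; Z_k is the sum of Z_{k-1} + 1 independent Geo(omega_k) variables,
    exactly as in the display above (the +1 accounts for the immigrant)."""
    Z = np.zeros(len(omega), dtype=np.int64)
    for k in range(1, len(omega)):
        parents = Z[k - 1] + 1
        Z[k] = rng.geometric(omega[k], size=parents).sum() - parents
    return Z

# hypothetical sparse environment: Geo(lambda_k) at the marked generations S_k,
# Geo(1/2) everywhere else
xi = 1 + rng.poisson(2.0, size=60)
lam = rng.uniform(0.5, 0.9, size=60)
S = np.cumsum(xi)
omega = np.full(S[-1] + 1, 0.5)
omega[S] = lam
Z = branching_Z(omega)
print("population at the first marked generations:", Z[S[:6]])
```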
Whenever examining a BPSRE, we will distinguish the population at marked generations by bold letters, writing, for example, ${\mathbb{Z}}_{n}=Z_{S_{n}}$.
Figure 3.2. Schematic picture of the process $Z$. Horizontal blue lines represent marked generations. Within each block between marked generations, the triangular area represents the progeny of the immigrants that arrived in this block. The coloured region represents the process $Y^{4}$.
For $k\in\mathbb{N}$, we will denote by $Y^{k}$ the process counting the progeny of the immigrants from the $k$’th block, i.e. those arriving at times $S_{k-1},S_{k-1}+1,\dots,S_{k}-1$. Let, for $j\geq 0$, $Y^{k}_{j}$ denote the
number of descendants of these immigrants present in generation $S_{k-1}+j$.
Observe that the process $Y^{k}$ starts with one immigrant at time $j=0$; it
evolves with unit immigration and $Geo(1/2)$ reproduction law up until time
$j=\xi_{k}-1$. The last immigrant arrives at this time, and the particles at
time $j=\xi_{k}$ are born with the law $Geo(\lambda_{k})$. From there on the
process $Y^{k}$ evolves without immigration (see Figure 3.2).
We will use the convention that $Y^{k}_{j}=0$ for $j<0$, so that
$Z_{n}=\sum_{k\in\mathbb{N}}Y^{k}_{n-S_{k-1}}.$
Observe that the processes $Y^{k}$ are independent under ${\rm P}_{\omega}$
and identically distributed under $\mathbb{P}$.
The branching process in a sparse random environment was studied in [6] for
the purpose of proving annealed limit theorems for the first passage times. An
important observation is that the transience of the walk implies quick
extinctions of the branching process. Let
$\tau_{0}=0,\quad\tau_{n}=\inf\\{k>\tau_{n-1}\,:\,{\mathbb{Z}}_{k}=0\\}$
be the extinction times (note that we only consider the extinctions at marked
generations). Observe that when an extinction occurs, the process starts anew
from one immigrant. Thus the sequence $(\tau_{n}-\tau_{n-1})_{n\geq 1}$ is
i.i.d. under $\mathbb{P}$, and the extinction times split the process $Z$ into
independent epochs. The following is Lemma 4.1 from [6]; it implies that the
extinctions occur rather often in the case of transient RWSRE.
###### Lemma 3.1.
Assume that ${\rm E}\log\rho<0$ and ${\rm E}\log\xi<\infty$. Then
$\mathbb{E}\tau_{1}<\infty$. If additionally ${\rm
E}\rho^{\varepsilon}<\infty$ and ${\rm E}\xi^{\varepsilon}<\infty$ for some
$\varepsilon>0$, then there exists $c>0$ such that
$\mathbb{E}e^{c\tau_{1}}<\infty$.
Observe that due to (3.1) we have, for any $n\in\mathbb{N}$,
(3.2) $\max_{0\leq k\leq S_{n}}L_{k}(S_{n})\overset{{\rm d}}{=}1+\max_{0\leq
k\leq S_{n}}(Z_{k}+Z_{k+1}).$
Therefore, to obtain limit theorems for the sequence of maximal local times
along the marked points, one may examine the maximal generations of the
corresponding branching process. We conclude this section by remarking that in
the setting of moderately sparse environment this is sufficient also to obtain
annealed limit theorems for the sequence $(\max_{k\leq
n}L_{k}(n))_{n\in\mathbb{N}}$. Note that $(a_{n})_{n\in\mathbb{N}}$ given by
(2.3) is regularly varying with index $1/\beta$.
###### Lemma 3.2.
Assume that ${\rm E}\xi<\infty$. If there exist constants $c>0$, $\gamma>0$
and a sequence $(b(n))_{n\in\mathbb{N}}$ which is regularly varying with index
$1/\gamma$ such that for every $x>0$,
$\lim_{n\to\infty}\mathbb{P}\left[\frac{\max_{k\leq
S_{n}}L_{k}(S_{n})}{b(n)}>x\right]=1-e^{-cx^{-\gamma}},$
then for every $x>0$,
$\lim_{n\to\infty}\mathbb{P}\left[\frac{\max_{k\leq
n}L_{k}(n)}{b(n)}>x\right]=1-e^{-(c/{\rm E}\xi)x^{-\gamma}}.$
###### Proof.
Denote, for $n\in\mathbb{N}$,
$\nu_{n}=\inf\\{k>0\,:\,S_{k}>n\\}.$
Then the assumption ${\rm E}\xi<\infty$ and the law of large numbers guarantee
that ${\rm P}$-almost surely
$\frac{\nu_{n}}{n}\xrightarrow{n\to\infty}\frac{1}{{\rm E}\xi}.$
Denote, for $m\in\mathbb{N}$, $M(m)=\max_{k\leq S_{m}}L_{k}(S_{m})$. Since
$S_{\nu_{n}-1}\leq n<S_{\nu_{n}}$, we have, for any $\varepsilon>0$,
$\begin{split}\mathbb{P}\left[b(n)^{-1}\max_{0\leq
k<n}L_{k}(n)>x\right]&\geq\mathbb{P}\left[b(n)^{-1}M(\nu_{n}-1)>x\right]\\\
&\geq\mathbb{P}\left[b(n)^{-1}M(n(1/{\rm
E}\xi-\varepsilon)-1)>x\right]-\mathbb{P}\left[|1/{\rm
E}\xi-\nu_{n}/n|>\varepsilon\right]\\\
&\xrightarrow{n\to\infty}1-\exp(-c(1/{\rm
E}\xi-\varepsilon)x^{-\gamma}),\end{split}$
where we used the fact that
$\frac{b(n(1/{\rm E}\xi-\varepsilon)-1)}{b(n)}\to(1/{\rm
E}\xi-\varepsilon)^{1/\gamma}$
since $b(n)$ is regularly varying. Similarly,
$\begin{split}\mathbb{P}\left[b(n)^{-1}\max_{0\leq
k<n}L_{k}(n)>x\right]&\leq\mathbb{P}\left[b(n)^{-1}M(\nu_{n})>x\right]\\\
&\leq\mathbb{P}\left[b(n)^{-1}M(n(1/{\rm
E}\xi+\varepsilon))>x\right]+\mathbb{P}\left[|1/{\rm
E}\xi-\nu_{n}/n|>\varepsilon\right]\\\
&\xrightarrow{n\to\infty}1-\exp(-c(1/{\rm
E}\xi+\varepsilon)x^{-\gamma}),\end{split}$
which ends the proof since $\varepsilon>0$ is arbitrary. ∎
### 3.2. Estimates of the processes related to the environment
Define
(3.3)
${\bar{R}}_{n}=1+\rho_{n}+\rho_{n}\rho_{n+1}+\dots=\sum_{k=n-1}^{\infty}\Pi_{n,k},$
where $\Pi_{n,k}=\prod_{j=n}^{k}\rho_{j}$ if $n\leq k$ and $\Pi_{n,k}=1$
otherwise. Then the following relation holds:
(3.4) ${\bar{R}}_{n}=1+\rho_{n}{\bar{R}}_{n+1}.$
Moreover, the sequence $({\bar{R}}_{n})_{n\in{\mathbb{Z}}}$ is stationary
under ${\rm P}$. Observe that if ${\rm E}\rho^{\gamma}<1$ for some $\gamma>0$,
then ${\rm E}{\bar{R}}_{1}^{\gamma}<\infty$ (see the proof of Lemma 2.3.1 in
[3]), whereas under $(A)$, the distribution of $\rho$ satisfies the assumptions of the Kesten–Goldie theorem (see [3, Theorem 2.4.4]), thus
${\rm P}[{\bar{R}}_{1}>x]\sim c_{\alpha}x^{-\alpha}$
for some constant $c_{\alpha}$. Therefore
(3.5) ${\rm P}[{\bar{R}}_{1}>x]\leq C_{\gamma}x^{-\gamma}\quad\textnormal{for
some $C_{\gamma}<\infty$ and all $x>0$,}$
whenever either ${\rm E}\rho^{\gamma}<1$, or ${\rm E}\rho^{\gamma}=1$ and the Kesten–Goldie theorem holds for ${\bar{R}}_{1}$. As can be seen in the proofs of Lemma 6 in [13] and Lemma 5.6 in [6], in the case of dominating drift it is ${\bar{R}}_{1}$ from which the total population of the process $Z$ (which corresponds to the first passage times of the walk) inherits its annealed tail behaviour.
Let, for $m\in\mathbb{N}$, the potential $\Psi$ be defined as
(3.6) $\Psi_{m,k}=\Pi_{m,n}\quad\textnormal{for }k\in[S_{n},S_{n+1}).$
As we will see, maxima of the potential determine the limiting behaviour of
maximal generation of $Z$ in the same way as ${\bar{R}}_{1}$ determines the
asymptotics of the total population. Let
(3.7) $M_{\Psi,m}=\max_{k\geq S_{m}-1}(\Psi_{m,k}+\Psi_{m,k+1}).$
Then the sequence $(M_{\Psi,m})_{m\in\mathbb{N}}$ is stationary under ${\rm
P}$; denote by $M_{\Psi}$ its generic element. Observe that
$M_{\Psi,1}\leq 2\max_{k\geq S_{1}-1}\Psi_{1,k}=2\max_{n\geq 0}\Pi_{1,n}\leq
2{\bar{R}}_{1},$
thus
(3.8) ${\rm E}M_{\Psi}^{\gamma}<\infty\quad\textnormal{ whenever }{\rm
E}\rho^{\gamma}<1.$
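The power tail of ${\bar{R}}_{1}$ predicted by the Kesten–Goldie theorem is easy to observe numerically. The sketch below (Python) truncates the series (3.3) at a fixed depth and uses a hypothetical lognormal drift, $\rho=e^{-1/2+\mathcal{N}(0,1)}$, for which ${\rm E}\log\rho<0$ and ${\rm E}\rho^{\alpha}=1$ at $\alpha=1$; the rescaled empirical tail $x^{\alpha}{\rm P}[{\bar{R}}_{1}>x]$ should then be roughly flat in $x$.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_R(n_samples, depth=400):
    """Truncated version of R_1 = 1 + rho_1 + rho_1 rho_2 + ... from (3.3).
    Hypothetical drift law: rho = exp(-1/2 + N(0,1)), so E log rho = -1/2 < 0
    and E rho^alpha = exp(alpha * (alpha - 1) / 2) = 1 at alpha = 1."""
    out = np.empty(n_samples)
    for i in range(n_samples):
        rho = np.exp(-0.5 + rng.standard_normal(depth))
        out[i] = 1.0 + np.cumprod(rho).sum()      # 1 + sum_k Pi_{1,k}
    return out

R = sample_R(20_000)
for x in (10.0, 30.0, 100.0):
    print(f"x * P[R > {x}] ~", x * (R > x).mean())  # roughly constant in x
```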
### 3.3. Auxiliary lemmas
The following lemma, concerning a classic Galton-Watson process, will be used repeatedly to estimate the growth of the BPSRE in the unmarked generations.
###### Lemma 3.3.
Let $(X_{n})_{n\geq 0}$ be a Galton-Watson process with $X_{0}=x_{0}$,
reproduction law $Geo(1/2)$, and no immigrants, and let $(\bar{X}_{n})_{n\geq
0}$ be an analogous process with unit immigration. Then the following hold for
any $N\in\mathbb{N}$:
(3.9) $\mathbb{E}\left[\max_{k\leq N}(X_{k}-x_{0})^{2}\right]\leq 8Nx_{0},$
(3.10) $\mathbb{E}\left[\max_{k\leq N}\bar{X}_{k}^{2}\right]\leq
16(N^{2}+Nx_{0}+x_{0}^{2}).$
###### Proof.
Since the process $(X_{k})_{k\in\mathbb{N}}$ is a martingale with mean
$x_{0}$, Doob’s maximal inequality implies
$\mathbb{E}\left[\max_{k\leq N}(X_{k}-x_{0})^{2}\right]\leq
4\mathbb{E}(X_{N}-x_{0})^{2}=4{\rm Var}X_{N}.$
Now, a standard calculation gives
${\rm Var}X_{N}=2Nx_{0},$
which implies (3.9).
Observe that $\bar{X}_{n}=X^{\prime}_{n}+I_{n}$, where $X^{\prime}$ denotes
the descendants of the initial $x_{0}$ particles, and $I$ denotes the progeny
of immigrants. The processes $I$ and $X^{\prime}$ are independent, and
$X^{\prime}$ has the same distribution as $X$. Moreover, the process
$(\bar{X}_{n})_{n\in\mathbb{N}}$ is a non-negative submartingale, thus by
Doob’s maximal inequality,
$\mathbb{E}\left[\max_{k\leq N}\bar{X}_{k}^{2}\right]\leq
4\mathbb{E}\left[\bar{X}_{N}^{2}\right]=4\left({\rm Var}X^{\prime}_{N}+{\rm
Var}I_{N}+(\mathbb{E}X^{\prime}_{N}+\mathbb{E}I_{N})^{2}\right).$
We have already examined the mean and variance of $X^{\prime}_{N}$. To
calculate moments of $I_{N}$, we may express $I$ as a sum of independent
copies of $X$. Alternatively, we may use the duality of $I$ and a simple
symmetric random walk: $I_{N}$ is equal in distribution to the number of times the walk hits $0$ from the right when crossing the interval $[0,N+1]$ for the first time. By the classic gambler’s ruin problem, the probability that the walk passes from $0$ to $N+1$ without returning to $0$ from the right is $1/(N+1)$. Therefore $I_{N}\sim Geo(1/(N+1))$, from which
it follows that
$\mathbb{E}I_{N}=N+1,\quad{\rm Var}I_{N}=N^{2}+N.$
Hence
$\mathbb{E}\left[\bar{X}_{N}^{2}\right]=2Nx_{0}+N^{2}+N+(x_{0}+N+1)^{2}\leq
4(N^{2}+Nx_{0}+x_{0}^{2}),$
which ends the proof of (3.10).
∎
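The identity ${\rm Var}X_{N}=2Nx_{0}$ used above is easily confirmed by simulation; a small sanity check (Python; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def gw_geo(n_gen, x0):
    """One run of a Galton-Watson process with Geo(1/2) offspring on {0,1,2,...}
    and no immigration (numpy's geometric lives on {1,2,...}, hence the shift)."""
    x = x0
    for _ in range(n_gen):
        x = rng.geometric(0.5, size=x).sum() - x if x > 0 else 0
    return x

N, x0 = 20, 5
samples = np.array([gw_geo(N, x0) for _ in range(100_000)])
print("empirical Var X_N:", round(samples.var(), 1), " vs 2*N*x0 =", 2 * N * x0)
```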
The next two lemmas will be of use to us both under assumptions $(A)$ and
$(B)$. Therefore we shall consider the following set of assumptions:
Assumptions $(\Gamma)$: for some $\gamma\leq 2$,
* •
${\rm E}\rho^{\gamma}\leq 1$ and (3.5) holds,
* •
${\rm E}\xi^{\gamma/2}<\infty$,
* •
${\rm E}\rho^{\gamma}\xi^{\gamma/2}<\infty$.
Let $U_{n}$ be the number of progeny of the first immigrant residing in generation $n$, with the convention $U_{0}=1$, and denote $\mathbb{U}_{n}=U_{S_{n}}$. For
fixed $N\in\mathbb{N}$, let $U^{k}$ for $k=1,\dots,N$ be copies of the process
$U=(U_{n})_{n\in\mathbb{N}}$, evolving in the same environment and independent
under ${\rm P}_{\omega}$. That is,
$(\sum_{k=1}^{N}U_{n}^{k})_{n\in\mathbb{N}}$ is a BPSRE with $N$ initial
particles evolving without immigration. Although the first part of the
following lemma is analogous to results presented in [13, Lemma 3] and [6,
Lemma 5.6], we provide the full proof as it gives some insight into the
properties of the process $U$.
###### Lemma 3.4.
Assume $(\Gamma)$. Then for some constant $C_{1}$,
(3.11) $\mathbb{P}\left[\sum_{k=1}^{N}\sum_{n\geq
0}\mathbb{U}_{n}^{k}>x\right]\leq C_{1}N^{\gamma}x^{-\gamma},$ (3.12)
$\mathbb{P}\left[\sum_{n\geq
0}\left|\sum_{k=1}^{N}\mathbb{U}_{n}^{k}-N\Pi_{1,n}\right|>x\right]\leq
C_{1}N^{\gamma/2}x^{-\gamma}.$
Moreover,
(3.13) $\mathbb{P}\left[\max_{n\geq 1}\sum_{k=1}^{N}U^{k}_{n}>x\right]\leq
C_{1}N^{\gamma}x^{-\gamma},$ (3.14) $\mathbb{P}\left[\sum_{n\geq
1}\sum_{k=1}^{N}\max_{S_{n-1}\leq
j<S_{n}}|U^{k}_{j}-\mathbb{U}^{k}_{n-1}|>x\right]\leq
C_{1}N^{\gamma/2}x^{-\gamma}.$
###### Proof.
For fixed $n\geq 1$, under ${\rm P}_{\omega}$,
$\mathbb{U}_{n}\overset{{\rm d}}{=}\sum_{k=1}^{U_{S_{n}-1}}G^{(n)}_{k},$
where $G^{(n)}_{k}$ are random variables with law $Geo(\lambda_{n})$,
independent of $U_{S_{n}-1}$ and each other. In particular,
${\rm E}_{\omega}G_{k}^{(n)}=\rho_{n},\quad{\rm
Var}_{\omega}G_{k}^{(n)}=\rho_{n}+\rho_{n}^{2}.$
Since in generations $S_{n-1}+1,\dots,S_{n}-1$ the process evolves with offspring distribution $Geo(1/2)$, a standard calculation gives
${\rm E}_{\omega}[U_{S_{n}-1}|\mathbb{U}_{n-1}]=\mathbb{U}_{n-1}\quad{\rm
and}\quad{\rm
Var}_{\omega}(U_{S_{n}-1}|\mathbb{U}_{n-1})=2(\xi_{n}-1)\mathbb{U}_{n-1}.$
This in turn implies
(3.15) $\begin{split}{\rm
E}_{\omega}[\mathbb{U}_{n}|\mathbb{U}_{n-1}]&=\rho_{n}\mathbb{U}_{n-1},\\\
{\rm
E}_{\omega}[(\mathbb{U}_{n}-\rho_{n}\mathbb{U}_{n-1})^{2}|\mathbb{U}_{n-1}]&=(\rho_{n}-\rho_{n}^{2}+2\rho_{n}^{2}\xi_{n})\mathbb{U}_{n-1}.\end{split}$
In particular ${\rm E}_{\omega}\mathbb{U}_{n}=\Pi_{1,n}$.
Observe that the processes $U^{k}$ evolve without immigration and the
extinction time of each $U^{k}$ is stochastically dominated by $\tau_{1}$,
which is finite $\mathbb{P}$-a.s. by Lemma 3.1. In particular, with
probability $1$ the series
$\sum_{k=1}^{N}\sum_{n\geq 0}\mathbb{U}_{n}^{k}$
is indeed a finite sum. Recall the sequence ${\bar{R}}$ defined in (3.3) and
observe that, by (3.4),
$\begin{split}\sum_{k=1}^{N}\sum_{n\geq
0}\mathbb{U}_{n}^{k}&=\sum_{k=1}^{N}\sum_{n\geq
0}\mathbb{U}_{n}^{k}({\bar{R}}_{n+1}-\rho_{n+1}{\bar{R}}_{n+2})\\\
&=\sum_{n\geq
1}\left(\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right){\bar{R}}_{n+1}+N{\bar{R}}_{1}\end{split}$
and thus
$\sum_{n\geq
0}\left(\sum_{k=1}^{N}\mathbb{U}_{n}^{k}-N\Pi_{1,n}\right)=\sum_{n\geq
1}\left(\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right){\bar{R}}_{n+1}.$
Therefore
$\begin{split}\mathbb{P}\left[\sum_{n\geq
0}\left|\sum_{k=1}^{N}\mathbb{U}_{n}^{k}-N\Pi_{1,n}\right|>x\right]&\leq\mathbb{P}\left[\sum_{n\geq
1}\left|\sum_{k=1}^{N}\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1}\right|{\bar{R}}_{n+1}>x\right]\end{split}$
and
$\begin{split}\mathbb{P}\left[\sum_{k=1}^{N}\sum_{n\geq
1}\mathbb{U}_{n}^{k}>x\right]&\leq\mathbb{P}\left[\sum_{n\geq
1}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|{\bar{R}}_{n+1}>x/2\right]+\mathbb{P}[N{\bar{R}}_{1}>x/2].\end{split}$
Observe that for any $n\geq 1$, ${\bar{R}}_{n+1}$ is independent of
$(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})$. Thus for any $x>0$,
$\mathbb{P}\left[\sum_{n\geq
1}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|{\bar{R}}_{n+1}>x\right]\leq\sum_{n\geq
1}\mathbb{P}\left[\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|{\bar{R}}_{n+1}>x/2n^{2}\right]\\\
\begin{split}&=\sum_{n\geq 1}\int_{[0,\infty)}{\rm
P}[{\bar{R}}_{n+1}>x/2tn^{2}]\mathbb{P}\left[\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|\in
dt\right]\\\ &\leq C_{\gamma}\,\sum_{n\geq
1}\int_{[0,\infty)}(x/2tn^{2})^{-\gamma}\mathbb{P}\left[\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|\in
dt\right]\\\ &=2^{\gamma}C_{\gamma}\,x^{-\gamma}\sum_{n\geq
1}n^{2\gamma}\mathbb{E}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|^{\gamma},\end{split}$
where the second inequality follows from (3.5).
The relations (3.15) imply that for any fixed $n$, under ${\rm P}_{\omega}$,
$\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})$ is a sum of
independent centered variables; in particular, using formulae (3.15), we
obtain
$\begin{split}{\rm
E}_{\omega}\left(\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right)^{2}&=N{\rm
E}_{\omega}(\mathbb{U}_{n}-\rho_{n}\mathbb{U}_{n-1})^{2}\\\
&=N(\rho_{n}+2\rho_{n}^{2}\xi_{n}-\rho_{n}^{2}){\rm
E}_{\omega}\mathbb{U}_{n-1}\\\
&=N(\rho_{n}+2\rho_{n}^{2}\xi_{n}-\rho_{n}^{2})\Pi_{1,n-1}.\end{split}$
Therefore, conditional Jensen’s inequality and subadditivity of the function
$x\mapsto x^{\gamma/2}$ (recall $\gamma\leq 2$) give
$\begin{split}\sum_{n\geq
1}n^{2\gamma}\mathbb{E}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|^{\gamma}&\leq\sum_{n\geq
1}n^{2\gamma}{\rm E}\left({\rm
E}_{\omega}\left(\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right)^{2}\right)^{\gamma/2}\\\
&=N^{\gamma/2}\sum_{n\geq 1}n^{2\gamma}{\rm
E}((\rho_{n}+2\rho_{n}^{2}\xi_{n}-\rho_{n}^{2})\Pi_{1,n-1})^{\gamma/2}\\\
&\leq N^{\gamma/2}\sum_{n\geq 1}n^{2\gamma}({\rm E}\rho^{\gamma/2}+2{\rm
E}\rho^{\gamma}\xi^{\gamma/2})({\rm E}\rho^{\gamma/2})^{n-1}.\end{split}$
The assumptions of the lemma guarantee that the series is convergent and thus
for some constant $C>0$,
$\begin{split}\mathbb{P}\left[\sum_{n\geq
1}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|{\bar{R}}_{n+1}>x\right]&\leq
2^{\gamma}C_{\gamma}\,x^{-\gamma}\sum_{n\geq
1}n^{2\gamma}\mathbb{E}\left|\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right|^{\gamma}\\\
&\leq CN^{\gamma/2}x^{-\gamma},\end{split}$
which proves (3.12). Invoking (3.5) once again, we conclude that
$\begin{split}\mathbb{P}\left[\sum_{k=1}^{N}\sum_{n\geq
1}\mathbb{U}_{n}^{k}>x\right]&\leq\mathbb{P}\left[\sum_{n\geq
1}\left(\sum_{k=1}^{N}(\mathbb{U}_{n}^{k}-\rho_{n}\mathbb{U}^{k}_{n-1})\right){\bar{R}}_{n+1}>x/2\right]+\mathbb{P}[N{\bar{R}}_{1}>x/2]\\\
&\leq
CN^{\gamma/2}(x/2)^{-\gamma}+C_{\gamma}N^{\gamma}(x/2)^{-\gamma},\end{split}$
which proves (3.11).
To show (3.13), decompose
$\begin{split}\mathbb{P}\left[\max_{n\geq 0}\sum_{k=1}^{N}U^{k}_{n}>x\right]&=\mathbb{P}\left[\max_{n\geq 0}\max_{S_{n}\leq j<S_{n+1}}\sum_{k=1}^{N}U^{k}_{j}>x\right]\\\
&\leq\mathbb{P}\left[\sum_{n\geq 0}\sum_{k=1}^{N}\max_{S_{n}\leq
j<S_{n+1}}U^{k}_{j}>x\right]\\\ &\leq\mathbb{P}\left[\sum_{n\geq
0}\sum_{k=1}^{N}\left(\mathbb{U}^{k}_{n}+\max_{S_{n}\leq
j<S_{n+1}}|U^{k}_{j}-\mathbb{U}^{k}_{n}|\right)>x\right]\\\
&\leq\mathbb{P}\left[\sum_{k=1}^{N}\sum_{n\geq
0}\mathbb{U}^{k}_{n}>x/2\right]+\mathbb{P}\left[\sum_{n\geq
1}\sum_{k=1}^{N}\max_{S_{n-1}\leq
j<S_{n}}|U^{k}_{j}-\mathbb{U}^{k}_{n-1}|>x/2\right],\end{split}$
which means that (3.13) follows from (3.11) and (3.14). To show (3.14), note
that, by Lemma 3.3,
${\rm E}_{\omega}\left[\max_{S_{n-1}\leq j<S_{n}}|U_{j}-\mathbb{U}_{n-1}|^{2}\right]\leq 8\xi_{n}{\rm E}_{\omega}\mathbb{U}_{n-1}=8\xi_{n}\Pi_{1,n-1}.$
Therefore
$\mathbb{P}\left[\sum_{n\geq 1}\sum_{k=1}^{N}\max_{S_{n-1}\leq
j<S_{n}}|U^{k}_{j}-\mathbb{U}^{k}_{n-1}|>x/2\right]\leq\sum_{n\geq
1}\mathbb{P}\left[\sum_{k=1}^{N}\max_{S_{n-1}\leq
j<S_{n}}|U^{k}_{j}-\mathbb{U}^{k}_{n-1}|>x/4n^{2}\right]\\\
\begin{split}\leq&\sum_{n\geq 1}(x/4n^{2})^{-\gamma}N^{\gamma/2}{\rm
E}\left({\rm E}_{\omega}\max_{S_{n-1}\leq
j<S_{n}}|U_{j}-\mathbb{U}_{n-1}|^{2}\right)^{\gamma/2}\\\
\leq&N^{\gamma/2}x^{-\gamma}\sum_{n\geq 1}(4n)^{2\gamma}8^{\gamma/2}{\rm
E}\xi^{\gamma/2}({\rm E}\rho^{\gamma/2})^{n-1}\\\
=&C^{\prime}N^{\gamma/2}x^{-\gamma},\end{split}$
for some constant $C^{\prime}>0$, which proves (3.13) and (3.14). ∎
Let $Y=(Y_{n})_{n\in\mathbb{N}}$ be a copy of the process
$(Y^{1}_{n})_{n\in\mathbb{N}}$. That is, $Y$ starts with one immigrant in
generation $0$ and for the next $\xi_{1}-1$ generations evolves as a Galton-
Watson process with unit immigration and reproduction law $Geo(1/2)$. The last
immigrant arrives in generation $\xi_{1}-1$; particles there reproduce with
distribution $Geo(\lambda_{1})$, giving birth to the first marked generation
$\mathbb{Y}_{1}=Y_{S_{1}}$. From there on the process evolves without
immigration, with particles in each marked generation
$\mathbb{Y}_{n}=Y_{S_{n}}$ being born with $Geo(\lambda_{n})$ distribution,
and $Geo(1/2)$ in consecutive blocks of lengths given by $\xi_{n}-1$ for
$n\geq 2$.
Figure 3.3. Schematic picture of the process $Y$. Horizontal blue lines
represent marked generations. The immigrants arrive only in the first block.
###### Lemma 3.5.
Assume $(\Gamma)$. Then for some constant $C_{2}$,
(3.16) $\mathbb{P}\left[\max_{n\geq 1}Y_{n}>x\right]\leq
C_{2}x^{-\gamma}\left({\rm E}\left({\rm
E}_{\omega}Y_{\xi_{1}-1}^{2}\right)^{\gamma/2}+\mathbb{E}\mathbb{Y}_{1}^{\gamma}\right).$
If additionally ${\rm E}\xi^{\gamma}<\infty$ and ${\rm
E}\xi^{\gamma}\rho^{\gamma}<\infty$, then for some constant $C_{3}$,
(3.17) $\mathbb{P}\left[\max_{n\geq 1}Y_{n}>x\right]\leq C_{3}x^{-\gamma}.$
###### Proof.
We have
(3.18) $\mathbb{P}\left[\max_{n\geq
1}Y_{n}>x\right]\leq\mathbb{P}\left[\max_{n<S_{1}}Y_{n}>x\right]+\mathbb{P}\left[\max_{n\geq
S_{1}}Y_{n}>x\right].$
For the first $\xi_{1}-1$ generations $Y$ evolves as a Galton-Watson process
with unit immigration and reproduction law $Geo(1/2)$, therefore
$(Y_{n}^{2})_{n<S_{1}}$ is a submartingale with respect to ${\rm P}_{\omega}$.
Using first Markov’s, then Jensen’s, and finally Doob’s maximal inequality, we
obtain
$\mathbb{P}\left[\max_{j<S_{1}}Y_{j}>x\right]\leq
x^{-\gamma}\mathbb{E}\left(\max_{j<S_{1}}Y_{j}\right)^{\gamma}\leq
x^{-\gamma}{\rm E}\left({\rm
E}_{\omega}\max_{n<\xi_{1}}Y_{n}^{2}\right)^{\gamma/2}\leq x^{-\gamma}{\rm
E}\left(4{\rm E}_{\omega}Y_{\xi_{1}-1}^{2}\right)^{\gamma/2}.$
If additionally ${\rm E}\xi^{\gamma}<\infty$, then by Lemma 3.3,
${\rm E}_{\omega}\max_{n<\xi_{1}}Y_{n}^{2}\leq 16\xi_{1}^{2},$
thus
$\mathbb{P}\left[\max_{j<S_{1}}Y_{j}>x\right]\leq 16^{\gamma/2}{\rm
E}\xi^{\gamma}x^{-\gamma}.$
To estimate the second term in (3.18), observe that
$\left(Y_{S_{1}+j}\right)_{j\in\mathbb{N}}\overset{{\rm
d}}{=}\left(\sum_{k=1}^{\mathbb{Y}_{1}}U_{j}^{k}\right)_{j\in\mathbb{N}},$
where $U^{k}$’s are (independent under ${\rm P}_{\omega}$) copies of the
process $U$, independent of $\mathbb{Y}_{1}$ under $\mathbb{P}$. By Lemma 3.4,
$\mathbb{P}\left[\max_{n\geq S_{1}}Y_{n}>x\right]\leq
C_{1}\mathbb{E}\mathbb{Y}_{1}^{\gamma}x^{-\gamma},$
which concludes the proof of the first part of the lemma. If ${\rm
E}\xi^{\gamma}\rho^{\gamma}<\infty$, we may estimate
$\mathbb{E}\mathbb{Y}_{1}^{\gamma}$. Under ${\rm P}_{\omega}$,
$\mathbb{Y}_{1}\overset{{\rm d}}{=}\sum_{k=1}^{Y_{\xi_{1}-1}+1}G_{k},$
where $G_{k}\sim Geo(\lambda_{1})$ are independent of $Y_{\xi_{1}-1}$ and each
other. Moreover, as was explained in the proof of Lemma 3.3,
$Y_{\xi_{1}-1}\sim Geo(1/\xi_{1})$ under ${\rm P}_{\omega}$. Therefore
${\rm E}_{\omega}\mathbb{Y}_{1}^{2}={\rm
E}_{\omega}\left[(Y_{\xi_{1}-1}+1)(2\rho_{1}^{2}+\rho_{1})+(Y_{\xi_{1}-1}^{2}+Y_{\xi_{1}-1})\rho_{1}^{2}\right]=2\xi_{1}^{2}\rho_{1}^{2}+\xi_{1}\rho_{1}.$
Jensen’s inequality and subadditivity of the function $x\mapsto x^{\gamma/2}$ give
$\mathbb{E}\mathbb{Y}_{1}^{\gamma}\leq{\rm E}\left({\rm
E}_{\omega}\mathbb{Y}_{1}^{2}\right)^{\gamma/2}\leq 2^{\gamma/2}{\rm
E}\xi^{\gamma}\rho^{\gamma}+{\rm E}\xi^{\gamma/2}\rho^{\gamma/2}<\infty,$
which proves (3.17).
∎
## 4. Proof of Theorem 2.1
In the proof of Theorem 2.1 we will use the fact that the extinctions divide the process $Z$ into independent epochs. That is, we first determine the tail asymptotics of the maximum up to time $S_{\tau_{1}}$.
For any $A>0$ denote $\sigma=\sigma(A)=\inf\\{n:{\mathbb{Z}}_{n}\geq A\\}$.
The next lemma is an analogue of Lemma 4 in [13] and can be proved in the very same way, that is, by examining ${\rm E}_{\omega}[{\mathbb{Z}}_{k}^{\alpha}|{\mathbb{Z}}_{k-1}]$ with the methods used in the previous proofs.
###### Lemma 4.1.
For any fixed $A>0$,
$0<\mathbb{E}[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}]<\infty$.
The main proof strategy is as follows: we choose a sufficiently large $A$ and argue that neither the particles living before time $S_{\sigma}$ nor the descendants of the immigrants arriving after this time contribute significantly to the examined maximum. Therefore its behaviour is determined by the ${\mathbb{Z}}_{\sigma}$ particles in generation $S_{\sigma}$ and their progeny.
Let us first take care of the particles alive before time $S_{\sigma}$.
###### Lemma 4.2.
For any fixed $A$,
$\mathbb{P}\left[\max_{n<S_{\sigma}\wedge
S_{\tau_{1}}}Z_{n}>x\right]=o(x^{-\alpha}).$
###### Proof.
Fix $A$ and let $x>A$. The only generations before time $S_{\sigma}$ in which
the population size may exceed $x$ are the unmarked ones. However, since
${\mathbb{Z}}_{k}<A$ for $k<\sigma$, the maximum of $Z$ in generations $S_{k-1}+1,\dots,S_{k}-1$ is stochastically dominated by $M_{k}^{A}$, the maximum of a Galton-Watson process with $Geo(1/2)$ offspring distribution, unit immigration and $A$ initial particles, evolving for time $\xi_{k}$. Observe that
$\begin{split}\mathbb{P}\left[\max_{n<S_{\sigma}\wedge
S_{\tau_{1}}}Z_{n}>x\right]&\leq\mathbb{P}\left[\max_{k<x^{\delta/2}}M_{k}^{A}>x\right]+\mathbb{P}\left[\tau_{1}>x^{\delta/2}\right]\\\
&\leq
x^{\delta/2}\mathbb{P}\left[M_{1}^{A}>x\right]+\mathbb{P}\left[\tau_{1}>x^{\delta/2}\right].\end{split}$
Since $\alpha+\delta\leq 2$, by Markov’s and Jensen’s inequalities,
$\mathbb{P}\left[M_{1}^{A}>x\right]\leq x^{-\alpha-\delta}{\rm E}\left({\rm
E}_{\omega}(M_{1}^{A})^{2}\right)^{(\alpha+\delta)/2}.$
Lemma 3.3 implies that
${\rm E}_{\omega}(M_{1}^{A})^{2}\leq 16(\xi_{1}^{2}+A\xi_{1}+A^{2})$
and thus, since $x\mapsto x^{(\alpha+\delta)/2}$ is subadditive,
$x^{\delta/2}\mathbb{P}\left[M_{1}^{A}>x\right]\leq
x^{-\alpha-\delta/2}16^{(\alpha+\delta)/2}\left({\rm
E}\xi^{\alpha+\delta}+A^{(\alpha+\delta)/2}{\rm
E}\xi^{(\alpha+\delta)/2}+A^{\alpha+\delta}\right)=o(x^{-\alpha}).$
The second term may be bounded using Lemma 3.1, that is
$\mathbb{P}\left[\tau_{1}>x^{\delta/2}\right]\leq
e^{-cx^{\delta/2}}\mathbb{E}e^{c\tau_{1}}=o(x^{-\alpha}),$
which ends the proof.
∎
The next lemma assures that the contribution of progeny of immigrants arriving
after $S_{\sigma}$ is negligible. Recall that $Y^{k}$ counts the progeny of
immigrants arriving in $k$’th block, that is in generations
$S_{k-1},S_{k-1}+1,\dots S_{k}-1$.
###### Lemma 4.3.
Fix $\varepsilon>0$. There exists $A_{1}(\varepsilon)$ such that for
$A>A_{1}(\varepsilon)$,
(4.1) $\mathbb{P}\left[\sum_{k=\sigma+1}^{\tau_{1}}\max_{n\geq
1}Y^{k}_{n}>\varepsilon x\right]\leq\varepsilon x^{-\alpha}.$
###### Proof.
We have
$\begin{split}\mathbb{P}\left[\sum_{k=\sigma+1}^{\tau_{1}}\max_{n\geq
1}Y^{k}_{n}>\varepsilon
x\right]&=\mathbb{P}\left[\sum_{k=1}^{\infty}\operatorname{\mathbbm{1}}_{\sigma\leq
k<\tau_{1}}\max_{n\geq 1}Y^{k+1}_{n}>\varepsilon x\right]\\\
&\leq\sum_{k=1}^{\infty}\mathbb{P}\left[\sigma\leq k<\tau_{1},\max_{n\geq
1}Y^{k+1}_{n}>\varepsilon x/2k^{2}\right].\end{split}$
Observe that the event $\\{\sigma\leq k<\tau_{1}\\}$ is defined in terms of
$Z_{1},\dots Z_{S_{k}}$, while the process $Y^{k+1}$ evolves in the
environment given by $(\xi_{j},\rho_{j})$ for $j\geq k+1$, hence is
independent of $Z_{1},\dots Z_{S_{k}}$. Moreover, the second part of Lemma 3.5
applied with $\gamma=\alpha$ gives tail bounds on the maximum of $Y^{k+1}$.
That is,
$\begin{split}\sum_{k=1}^{\infty}\mathbb{P}\left[\sigma\leq
k<\tau_{1},\max_{n\geq 1}Y^{k+1}_{n}>\varepsilon
x/2k^{2}\right]&=\sum_{k=1}^{\infty}\mathbb{P}\left[\sigma\leq
k<\tau_{1}\right]\mathbb{P}\left[\max_{n\geq 1}Y^{k+1}_{n}>\varepsilon
x/2k^{2}\right]\\\ &\leq C_{3}\sum_{k=1}^{\infty}\mathbb{P}\left[\sigma\leq
k<\tau_{1}\right](\varepsilon x/2k^{2})^{-\alpha}\\\
&=C_{3}2^{\alpha}(\varepsilon
x)^{-\alpha}\sum_{k=1}^{\infty}k^{2\alpha}\mathbb{P}\left[\tau_{1}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}>k\right]\\\
&\leq C_{3}2^{\alpha}(2\alpha+1)^{-1}\varepsilon^{-\alpha}x^{-\alpha}\mathbb{E}\left[\tau_{1}^{2\alpha+1}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].\end{split}$
Since $\mathbb{E}\tau_{1}^{2\alpha+1}<\infty$ and
$\sigma(A)\overset{\mathbb{P}}{\longrightarrow}\infty$ as $A\to\infty$, one
may find $A_{1}(\varepsilon)$ such that for $A>A_{1}(\varepsilon)$ (4.1)
holds.
∎
We have already given bounds on the generation sizes of the particles alive before time $S_{\sigma}$ and of those coming from immigrants arriving after that time. What is left is to investigate the behaviour of the particles residing exactly in generation $S_{\sigma}$ and of their progeny.
For $k\geq S_{\sigma}$ let $V_{\sigma,k}$ be the number of progeny of the
particles from generation $S_{\sigma}$ residing in generation $k$ and let
${\mathbb{V}}_{\sigma,n}=V_{\sigma,S_{n}}$; in particular,
${\mathbb{Z}}_{\sigma}={\mathbb{V}}_{\sigma,\sigma}$. Recall the variables
$\Psi_{m,k}$ defined in (3.6).
###### Lemma 4.4.
For any $\varepsilon>0$ there exists $A_{2}(\varepsilon)$ such that for
$A>A_{2}(\varepsilon)$,
$\mathbb{P}\left[\left|\max_{k\geq
S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon
x,\sigma<\tau_{1}\right]\leq\varepsilon
x^{-\alpha}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].$
###### Proof.
We begin by estimating the difference of maxima within one block. Observe that
the potential $\Psi$ is constant within each block, therefore for any
$n\in\mathbb{N}$,
$\begin{split}&\left|\max_{S_{n}\leq
k<S_{n+1}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{S_{n}\leq
k<S_{n+1}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|\\\
&\leq\left|\max_{S_{n}\leq
k<S_{n+1}-1}(V_{\sigma,k}+V_{\sigma,k+1})-2{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|\\\
&+|V_{\sigma,S_{n+1}-1}+V_{\sigma,S_{n+1}}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n+1}|.\end{split}$
Let us estimate the first term. Since
$\max_{S_{n}\leq
k<S_{n+1}-1}(V_{\sigma,k}+V_{\sigma,k+1})=2{\mathbb{V}}_{\sigma,n}+\max_{S_{n}\leq
k<S_{n+1}-1}\left(V_{\sigma,k}+V_{\sigma,k+1}-2{\mathbb{V}}_{\sigma,n}\right),$
we have
$\begin{split}\left|\max_{S_{n}\leq
k<S_{n+1}-1}(V_{\sigma,k}+V_{\sigma,k+1})-2{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|&\leq
2\left(\left|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|+\max_{S_{n}\leq
k<S_{n+1}}|V_{\sigma,k}-{\mathbb{V}}_{\sigma,n}|\right).\end{split}$
The second term may be estimated simply by
$\begin{split}&|V_{\sigma,S_{n+1}-1}+V_{\sigma,S_{n+1}}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n+1}|\\\
&\leq|{\mathbb{V}}_{\sigma,n+1}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n+1}|+|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}|+|V_{\sigma,S_{n+1}-1}-{\mathbb{V}}_{\sigma,n}|,\end{split}$
which gives
$\begin{split}&\left|\max_{S_{n}\leq
k<S_{n+1}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{S_{n}\leq
k<S_{n+1}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|\\\ &\leq
3|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}|+3\max_{S_{n}\leq
k<S_{n+1}}|V_{\sigma,k}-{\mathbb{V}}_{\sigma,n}|+|{\mathbb{V}}_{\sigma,n+1}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n+1}|.\end{split}$
Next, in view of
$\begin{split}&\left|\max_{k\geq
S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|\\\
&=\left|\max_{n\geq\sigma}\max_{S_{n}\leq
k<S_{n+1}}(V_{\sigma,k}+V_{\sigma,k+1})-\max_{n\geq\sigma}{\mathbb{Z}}_{\sigma}\max_{S_{n}\leq
k<S_{n+1}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|\\\
&\leq\sum_{n\geq\sigma}\left|\max_{S_{n}\leq
k<S_{n+1}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{S_{n}\leq
k<S_{n+1}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|,\end{split}$
the above estimates give
$\begin{split}\mathbb{P}&\left[\left|\max_{k\geq
S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon
x,\sigma<\tau_{1}\right]\\\
&\leq\mathbb{P}\left[4\sum_{n\geq\sigma}\left|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|>\varepsilon
x/2,\sigma<\tau_{1}\right]\\\
&+\mathbb{P}\left[3\sum_{n\geq\sigma}\max_{S_{n}\leq
k<S_{n+1}}|V_{\sigma,k}-{\mathbb{V}}_{\sigma,n}|>\varepsilon
x/2,\sigma<\tau_{1}\right].\end{split}$
Both ingredients can be estimated by Lemma 3.4 applied with $\gamma=\alpha$.
Conditioned on $(\sigma,Z_{1},\dots Z_{S_{\sigma}})$, the process
$(V_{\sigma,n})_{n\geq S_{\sigma}}$ is a sum of ${\mathbb{Z}}_{\sigma}$
independent copies of the process $U$. We have, on the set
$\\{\sigma<\tau_{1}\\}$,
$\begin{split}\mathbb{P}&\left[4\sum_{n\geq\sigma}\left|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|>\varepsilon
x/2\,\Bigg{|}\,\sigma,Z_{1},\dots Z_{S_{\sigma}}\right]\leq C_{1}(\varepsilon
x/8)^{-\alpha}{\mathbb{Z}}_{\sigma}^{\alpha/2},\end{split}$
which gives
$\begin{split}\mathbb{P}\left[4\sum_{n\geq\sigma}\left|{\mathbb{V}}_{\sigma,n}-{\mathbb{Z}}_{\sigma}\Pi_{\sigma+1,n}\right|>\varepsilon
x/2,\sigma<\tau_{1}\right]\leq C_{1}8^{\alpha}(\varepsilon
x)^{-\alpha}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha/2}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].\end{split}$
Similarly,
$\mathbb{P}\left[3\sum_{n\geq\sigma}\max_{S_{n}<k<S_{n+1}}|V_{\sigma,k}-{\mathbb{V}}_{\sigma,n}|>\varepsilon
x/2,\sigma<\tau_{1}\right]\leq C_{1}6^{\alpha}(\varepsilon
x)^{-\alpha}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha/2}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].$
Therefore, for some constant $C_{2}$,
$\mathbb{P}\left[\left|\max_{k\geq
S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon
x,\sigma<\tau_{1}\right]\\\ \leq C_{2}(\varepsilon
x)^{-\alpha}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha/2}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].$
Finally, for any fixed $\varepsilon>0$, since ${\mathbb{Z}}_{\sigma}\geq A$,
we have
$\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha/2}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]\leq
A^{-\alpha/2}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]$
and one may choose $A_{2}(\varepsilon)$ large enough for the claim to hold.
∎
###### Lemma 4.5.
There exists $c_{\Psi}\in(0,\infty)$ such that for any fixed $A>0$,
(4.2) $\mathbb{P}\left[{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})>x,\sigma<\tau_{1}\right]\sim
c_{\Psi}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]x^{-\alpha}.$
###### Proof.
Since the sequence $\Psi_{\sigma+1,k}$ is constant on the blocks between
marked points, we have
$\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})=\max_{n\geq\sigma}\left(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1})\right)\Pi_{\sigma+1,n}.$
Observe that
$\log\left(\left(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1})\right)\Pi_{1,n}\right)=\sum_{k=1}^{n}\log(\rho_{k})+\log(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1}))$
is a perturbed random walk. By Theorem 1.3.8 in [12], assumptions $(A)$
guarantee that
$\mathbb{P}\left[\max_{n\geq
0}(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1}))\Pi_{1,n}>x\right]\sim
c_{\Psi}x^{-\alpha}$
for a constant $c_{\Psi}\in(0,\infty)$ given by
$c_{\Psi}={\rm
E}(2^{\alpha}\operatorname{\mathbbm{1}}_{\xi_{1}>1}\vee(1+\rho_{1})^{\alpha}-\max_{n\geq
2}(2^{\alpha}\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1})^{\alpha})\Pi_{1,n}^{\alpha})_{+}.$
Note that the variables
${\mathbb{Z}}_{\sigma}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}$ and
$\max_{n\geq\sigma}(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1}))\Pi_{\sigma+1,n}$
are independent under $\mathbb{P}$. Therefore, by Breiman’s lemma,
$\mathbb{P}\left[{\mathbb{Z}}_{\sigma}\max_{k\geq
S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})>x,\sigma<\tau_{1}\right]\\\
=\mathbb{P}\left[{\mathbb{Z}}_{\sigma}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\cdot\max_{n\geq\sigma}(2\operatorname{\mathbbm{1}}_{\xi_{n+1}>1}\vee(1+\rho_{n+1}))\Pi_{\sigma+1,n}>x\right]\sim\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]c_{\Psi}x^{-\alpha}.$
∎
The rest of the proof is standard. First, all the lemmas proven so far allow
us to determine the asymptotics of the maximum in time $[0,S_{\tau_{1}})$.
Then we use the fact that the extinctions divide our process into independent
pieces.
###### Proposition 4.6.
For some constant $c_{M}>0$,
$\mathbb{P}\left[\max_{0\leq n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\sim
c_{M}x^{-\alpha}.$
###### Proof.
Fix $\varepsilon>0$ and take
$A>A(\varepsilon):=\max\\{A_{1}(\varepsilon),A_{2}(\varepsilon)\\}$. First,
observe that
$\mathbb{P}\left[\max_{S_{\sigma}\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x,\sigma<\tau_{1}\right]\leq\mathbb{P}\left[\max_{0\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\\\
\leq\mathbb{P}\left[\max_{S_{\sigma}\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x,\sigma<\tau_{1}\right]+\mathbb{P}\left[\max_{n<S_{\sigma}\wedge
S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right].$
Lemma 4.2 ensures that for large enough $x$,
$\mathbb{P}\left[\max_{n<S_{\sigma}\wedge
S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\leq\mathbb{P}\left[2\max_{n<S_{\sigma}\wedge
S_{\tau_{1}}}Z_{n}>x\right]\leq\varepsilon x^{-\alpha}.$
Recall that by $Y^{k}=(Y^{k}_{j})_{j\in{\mathbb{Z}}}$ we denoted the process
counting the progeny of immigrants arriving in $k$’th block, with the
convention $Y_{j}^{k}=0$ for $j<0$. For $n\geq S_{\sigma}$,
$Z_{n}=V_{\sigma,n}+\sum_{k=\sigma+1}^{\tau_{1}}Y^{k}_{n-S_{k-1}},$
thus
$\mathbb{P}\left[\max_{S_{\sigma}\leq
n<S_{\tau_{1}}}(V_{\sigma,n}+V_{\sigma,n+1})>x,\sigma<\tau_{1}\right]\leq\mathbb{P}\left[\max_{S_{\sigma}\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x,\sigma<\tau_{1}\right]\\\
\leq\mathbb{P}\left[\max_{S_{\sigma}\leq
n<S_{\tau_{1}}}(V_{\sigma,n}+V_{\sigma,n+1})>(1-\varepsilon)x,\sigma<\tau_{1}\right]+\mathbb{P}\left[2\sum_{k=\sigma+1}^{\tau_{1}}\max_{n\geq
1}Y_{n}^{k}>\varepsilon x\right]$
and (4.1) ensures that
$\mathbb{P}\left[2\sum_{k=\sigma+1}^{\tau_{1}}\max_{n\geq
1}Y_{n}^{k}>\varepsilon x\right]\leq\varepsilon x^{-\alpha}.$
Finally,
$\begin{split}\mathbb{P}&\left[{\mathbb{Z}}_{\sigma}\max_{k\geq S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})>(1+\varepsilon)x,\sigma<\tau_{1}\right]\\\ &-\mathbb{P}\left[\left|\max_{k\geq S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon x,\sigma<\tau_{1}\right]\\\ &\leq\mathbb{P}\left[\max_{S_{\sigma}\leq n<S_{\tau_{1}}}(V_{\sigma,n}+V_{\sigma,n+1})>x,\sigma<\tau_{1}\right]\\\ &\leq\mathbb{P}\left[{\mathbb{Z}}_{\sigma}\max_{k\geq S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})>(1-\varepsilon)x,\sigma<\tau_{1}\right]\\\ &+\mathbb{P}\left[\left|\max_{k\geq S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon x,\sigma<\tau_{1}\right],\end{split}$
and by Lemma 4.4,
$\mathbb{P}\left[\left|\max_{k\geq S_{\sigma}}(V_{\sigma,k}+V_{\sigma,k+1})-{\mathbb{Z}}_{\sigma}\max_{k\geq S_{\sigma}}(\Psi_{\sigma+1,k}+\Psi_{\sigma+1,k+1})\right|>\varepsilon x,\sigma<\tau_{1}\right]\leq\varepsilon x^{-\alpha}\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right].$
Putting things together and invoking Lemma 4.5 we get that for any
$\varepsilon>0$ such that $\varepsilon(1-\varepsilon)^{\alpha}<c_{\Psi}$ and
for any $A>A(\varepsilon)$,
$0<((1+\varepsilon)^{-\alpha}c_{\Psi}-\varepsilon)\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]\\\
\leq\liminf_{x\to\infty}x^{\alpha}\mathbb{P}\left[\max_{0\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\leq\limsup_{x\to\infty}x^{\alpha}\mathbb{P}\left[\max_{0\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\\\
\leq((1-2\varepsilon)^{-\alpha}c_{\Psi}+\varepsilon)\mathbb{E}\left[{\mathbb{Z}}_{\sigma}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma<\tau_{1}}\right]+2\varepsilon<\infty.$
Observe that this relation implies that both the limits
$\lim_{x\to\infty}x^{\alpha}\mathbb{P}\left[\max_{0\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]\quad\textnormal{and}\quad\lim_{A\to\infty}\mathbb{E}\left[{\mathbb{Z}}_{\sigma(A)}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma(A)<\tau_{1}}\right]$
exist, are positive and satisfy
$\lim_{x\to\infty}x^{\alpha}\mathbb{P}\left[\max_{0\leq
n<S_{\tau_{1}}}(Z_{n}+Z_{n+1})>x\right]=c_{\Psi}\lim_{A\to\infty}\mathbb{E}\left[{\mathbb{Z}}_{\sigma(A)}^{\alpha}\operatorname{\mathbbm{1}}_{\sigma(A)<\tau_{1}}\right]=:c_{M}.$
∎
Due to Lemma 3.2 and the relation (3.2), the next result implies Theorem 2.1.
###### Theorem 4.7.
Under assumptions $(A)$,
$\mathbb{P}\left[n^{-1/\alpha}\max_{0\leq
k<S_{n}}(Z_{k}+Z_{k+1})>x\right]\xrightarrow{n\to\infty}1-\exp\left(-\frac{c_{M}}{\mathbb{E}\tau_{1}}x^{-\alpha}\right).$
###### Proof.
Since the extinctions divide the process $Z$ into independent epochs, an
immediate corollary of Proposition 4.6 is that
$\mathbb{P}\left[n^{-1/\alpha}\max_{0\leq
k<S_{\tau_{n}}}(Z_{k}+Z_{k+1})>x\right]\xrightarrow{n\to\infty}1-\exp(-c_{M}x^{-\alpha}).$
Lemma 3.1 implies that $\mathbb{E}\tau_{1}<\infty$. Therefore passing from the
maximum up to time $S_{\tau_{n}}$ to the maximum up to $S_{n}$ may be done
exactly as in the proof of Lemma 3.2. ∎
## 5. Proof of Theorem 2.2
As we have seen in the proof of Theorem 2.1, the limiting behaviour of the maxima in case $(A)$ comes from the tail asymptotics of the variable $M_{\Psi}$ defined in (3.7). The assumption ${\rm E}\xi^{\alpha+\delta}<\infty$ implies that for every $k$, $\max_{j<\xi_{k}}Y^{k}_{j}$ is negligible. In terms of the random walk, this means that the time the walker spends in a block when crossing it for the first time is negligible. As we will see, under assumptions $(B)$ it is not; the maximal local time is attained when the walker crosses a particularly long block for the first time, through its visits to the sites within this block and possibly excursions to the left.
Consider a simple symmetric random walk on ${\mathbb{Z}}$ and denote by
$\bar{L}_{k}(n)$ the number of times the walk visits site $k$ before reaching
$n$. Consider $(\bar{L}_{s}(n))_{s\in[0,n]}$ being a piecewise linear
interpolation of $(\bar{L}_{k}(n))_{0\leq k\leq n}$. The Ray-Knight theorem
(see [9, Theorem 2.15]) states that
$\left(\frac{1}{n}\bar{L}_{n(1-t)}(n)\right)_{t\in[0,1]}\overset{{\rm
d}}{\longrightarrow}\left(B_{t}\right)_{t\in[0,1]}$
in $C[0,1]$ as $n\to\infty$, where $B$ is a squared Bessel process which may
be defined as
(5.1) $B_{t}=||W(t)||^{2},$
for $W(t)=(W_{1}(t),W_{2}(t))$ being a standard two-dimensional Brownian
motion with $W(0)=0$. By the continuous mapping theorem,
(5.2) $\left(\frac{1}{n}\max_{k\leq
n}\bar{L}_{k}(n),\frac{1}{n}\bar{L}_{0}(n)\right)\overset{{\rm
d}}{\longrightarrow}(M_{B},B(1)),$
where $M_{B}=\sup\\{B_{t}:t\in[0,1]\\}$.
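The convergence (5.2) can be observed directly in a simulation. In the sketch below (Python) a down-step of the simple symmetric walk from $0$ is collapsed to an immediate return to $0$; since every excursion to the left of $0$ ends with a step back to $0$, this leaves the joint law of the local times on $[0,n]$ unchanged while keeping the running time manageable. The empirical means of $\max_{k}\bar{L}_{k}(n)/n$ and $\bar{L}_{0}(n)/n$ approximate $\mathbb{E}M_{B}$ and $\mathbb{E}B_{1}=\mathbb{E}||W(1)||^{2}=2$, respectively.

```python
import numpy as np

rng = np.random.default_rng(5)

def local_times_ssrw(n):
    """Local times L_k(n), k = 0..n, of a simple symmetric walk run until it
    first hits n. A down-step from 0 is replaced by an immediate return to 0,
    which does not change the law of the local times on [0, n]."""
    L = np.zeros(n + 1, dtype=np.int64)
    x = 0
    while x < n:
        L[x] += 1
        x = x + 1 if rng.random() < 0.5 else max(x - 1, 0)
    return L

n, runs = 200, 200
profiles = [local_times_ssrw(n) for _ in range(runs)]
print("E[max_k L_k / n] ~", np.mean([L.max() for L in profiles]) / n)  # ~ E[M_B]
print("E[L_0 / n]      ~", np.mean([L[0] for L in profiles]) / n)      # ~ E[B_1] = 2
```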
With this at hand, we may inspect the maximal local time that the RWSRE attains when crossing a (long) block between marked points for the first time. To this end, consider a walk starting at $0$ in an environment that has marked points only on the non-positive half-line, and stop it when it reaches the point $N$. By the Ray-Knight theorem, the maximal local time in the interval $[1,N]$, where the walk is symmetric, scaled by $N$, converges to $M_{B}$. As we have seen in the proof of Theorem 2.1, the number of visits to the negative half-line should be controlled by the number of visits to $1$ and the maxima of the potential $\Psi$.
In the associated branching process, the steps of the walk during its first
crossing of a block between marked points are counted by the process $Y$.
Therefore our goal is to understand the growth of the maximal generation in the
process $Y$ as the size of the first block – in which the immigrants arrive –
tends to infinity. To this end, for any $N\in\mathbb{N}$ let $Y^{(N)}$ be a
BPSRE evolving in an environment with fixed $\xi_{1}=N$ and such that the immigrants arrive only in the generations up to the $(N-1)$’th.
###### Lemma 5.1.
Under assumptions $(B)$,
(5.3) $\frac{1}{N}\max_{k\geq 0}(Y^{(N)}_{k}+Y^{(N)}_{k+1})\overset{{\rm
d}}{\longrightarrow}M_{\infty}\quad\textnormal{as }N\to\infty,$
where $M_{\infty}\overset{{\rm d}}{=}\max(M_{B},B(1)M_{\Psi}/2)$ and
$M_{\Psi}$ is a copy of the variable defined in (3.7) independent of the
Bessel process $B$.
###### Proof.
To simplify the notation we shall write $Y$ instead of $Y^{(N)}$. Observe that
(5.2) and the duality between branching process and random walk imply
$\left(\frac{1}{N}\max_{k\leq
N-2}(Y_{k}+Y_{k+1}),\frac{1}{N}(Y_{N-1}+Y_{N-2})\right)\overset{{\rm
d}}{\longrightarrow}(M_{B},B(1)).$
However, since the particles in generation $N-1$ are children of those from the $(N-2)$’th generation and of an immigrant, all born with distribution $Geo(1/2)$, we have
$\mathbb{E}\left(Y_{N-1}-Y_{N-2}-1\right)^{2}=\mathbb{E}(Y_{N-1}-\mathbb{E}\left[Y_{N-1}\,|\,Y_{N-2}\right])^{2}=2(\mathbb{E}Y_{N-2}+1)=2(N-1),$
which, together with Chebyshev’s inequality, implies that
$(Y_{N-1}-Y_{N-2})/N\overset{\mathbb{P}}{\longrightarrow}0$ and thus
$\left(\frac{1}{N}\max_{k\leq
N-2}(Y_{k}+Y_{k+1}),\frac{Y_{N-1}}{N}\right)\overset{{\rm
d}}{\longrightarrow}(M_{B},B(1)/2).$
Moreover, the variables $Y_{k}$ for $k\leq N-1$ are independent of the
environment, in particular of $\Psi_{1,n},n\geq 0$.
From here on we proceed as in the proof of Lemma 4.4, to show that the maximum
in generations after $N-1$’th is comparable with $Y_{N-1}M_{\Psi}$. That is,
we use Lemma 3.4 applied with $\gamma=\beta$ to obtain, for some constant
$C>0$,
(5.4) $\mathbb{P}\left[\left|\max_{k\geq
N}(Y_{k}+Y_{k+1})-\mathbb{Y}_{1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})\right|>x\right]\leq
Cx^{-\beta}\mathbb{E}\mathbb{Y}_{1}^{\beta/2}$
for any $x>0$. The particles in the first marked generation $S_{1}=N$ are born
with distribution $Geo(\lambda_{1})$ from those counted by $Y_{{N-1}}$ and an
immigrant. Therefore we have ${\rm E}_{\omega}\mathbb{Y}_{1}=N\rho_{1}$, and
by Jensen’s inequality,
$\mathbb{E}\mathbb{Y}_{1}^{\beta/2}\leq N^{\beta/2}{\rm E}\rho^{\beta/2}.$
Moreover, we may calculate quenched moments of $\mathbb{Y}_{1}$ conditioned on
$Y_{N-1}$ to get an analogue of (3.15). We obtain
(5.5)
$\begin{split}\mathbb{E}\left|\mathbb{Y}_{1}-\rho_{1}Y_{N-1}\right|^{\beta}&\leq{\rm E}\left({\rm E}_{\omega}(\mathbb{Y}_{1}-\rho_{1}Y_{N-1})^{2}\right)^{\beta/2}\\\ &={\rm E}\left(({\rm E}_{\omega}Y_{N-1})(\rho_{1}^{2}+\rho_{1})+2\rho_{1}^{2}+\rho_{1}\right)^{\beta/2}\\\ &\leq(N^{\beta/2}+1)(2^{\beta/2}{\rm E}\rho^{\beta}+{\rm E}\rho^{\beta/2}),\end{split}$
where the last inequality follows from subadditivity of $x\mapsto x^{\beta/2}$
and the fact that ${\rm E}_{\omega}Y_{N-1}=N-1$. Observe that $\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})\leq 2+M_{\Psi,2}$ and by (3.8), ${\rm
E}M_{\Psi}^{\beta}<\infty$. Therefore, since
$(Y_{N-1},\mathbb{Y}_{1},\rho_{1})$ is independent of $(\rho_{j})_{j\geq 2}$,
we have
(5.6) $\begin{split}\mathbb{P}&\left[\left|\mathbb{Y}_{1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})-\rho_{1}Y_{N-1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})\right|>x\right]\\\ &\leq x^{-\beta}{\rm
E}(2+M_{\Psi})^{\beta}\mathbb{E}|\mathbb{Y}_{1}-\rho_{1}Y_{N-1}|^{\beta}\leq
C^{\prime}x^{-\beta}(N^{\beta/2}+1)\end{split}$
for some constant $C^{\prime}>0$ and any $x>0$.
Observe that (5.4) and (5.6) imply that for any fixed $\varepsilon>0$,
$\begin{split}\mathbb{P}&\left[\left|\max_{k\geq
N}(Y_{k}+Y_{k+1})-Y_{N-1}\max_{k\geq
N}(\Psi_{1,k}+\Psi_{1,k+1})\right|>\varepsilon N\right]\\\
&\leq\mathbb{P}\left[\left|\max_{k\geq
N}(Y_{k}+Y_{k+1})-\mathbb{Y}_{1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})\right|>\varepsilon N/2\right]\\\
&+\mathbb{P}\left[\left|\mathbb{Y}_{1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})-\rho_{1}Y_{N-1}\max_{k\geq
N}(\Psi_{2,k}+\Psi_{2,k+1})\right|>\varepsilon N/2\right]\\\ &\leq(\varepsilon
N/2)^{-\beta}\left(CN^{\beta/2}{\rm
E}\rho^{\beta/2}+C^{\prime}(N^{\beta/2}+1)\right)=O(N^{-\beta/2}).\end{split}$
Finally, by (5.5), for any $\varepsilon>0$,
$\mathbb{P}\left[|\mathbb{Y}_{1}-\rho_{1}Y_{N-1}|>\varepsilon
N\right]\leq\varepsilon^{-\beta}(N^{-\beta/2}+N^{-\beta})(2^{\beta/2}{\rm
E}\rho^{\beta}+{\rm E}\rho^{\beta/2})=O(N^{-\beta/2}),$
therefore the weak limit of
$\frac{1}{N}\max_{k\geq 0}(Y_{k}+Y_{k+1})=\frac{1}{N}\max\left(\max_{k\leq
N-2}(Y_{k}+Y_{k+1}),Y_{N-1}+\mathbb{Y}_{1},\max_{k\geq
N}(Y_{k}+Y_{k+1})\right)$
is the same as that of
$\frac{1}{N}\max\left(\max_{k\leq N-2}(Y_{k}+Y_{k+1}),Y_{N-1}(1+\rho_{1}),Y_{N-1}\max_{k\geq N}(\Psi_{1,k}+\Psi_{1,k+1})\right)\\\ =\frac{1}{N}\max\left(\max_{k\leq N-2}(Y_{k}+Y_{k+1}),Y_{N-1}M_{\Psi,1}\right)$
which is $\max(M_{B},B(1)M_{\Psi}/2)$ by the continuous mapping theorem. ∎
###### Remark 5.2.
Under assumptions $(B)$, $\mathbb{E}M_{\infty}^{\beta+\delta}<\infty$. Indeed,
by (5.1),
$M_{B}^{2}=\sup\left\\{\left(W_{1}(t)^{2}+W_{2}(t)^{2}\right)^{2}\,:\,t\in[0,1]\right\\},$
where $W_{1},W_{2}$ are independent one-dimensional Brownian motions. Doob’s
maximal inequality applied to $W_{1},W_{2}$ implies that
$\mathbb{E}M_{B}^{2}<\infty$. Since $\beta+\delta\leq 2$, it follows that
$\mathbb{E}M_{B}^{\beta+\delta}<\infty$. Moreover, by (3.8),
$\mathbb{E}M_{\Psi}^{\beta+\delta}<\infty$, and since $M_{\Psi}$ and $B$ are
independent, we have
$\mathbb{E}M_{\infty}^{\beta+\delta}\leq\mathbb{E}M_{B}^{\beta+\delta}\mathbb{E}(1+M_{\Psi}/2)^{\beta+\delta}<\infty.$
Recall that the process $Y^{k}$ counts the progeny of the immigrants arriving in the $k$’th block. Since Lemma 5.1 suggests that the maximum of the process $Y^{k}$ should be comparable with $\xi_{k}M_{\infty}$ when $\xi_{k}$ is large, we begin the proof of Theorem 2.2 by distinguishing the large blocks in the environment. Recall the sequence $(a_{n})_{n\in\mathbb{N}}$ defined in (2.3).
Fix $\varepsilon>0$ and let
$I_{n,\varepsilon}=\\{k\leq n\,:\,\xi_{k}>\varepsilon a_{n}\\},\quad
I_{n,\varepsilon}^{c}=\\{k\leq n\,:\,\xi_{k}\leq\varepsilon a_{n}\\}.$
For fixed $n$ and $k\leq n$, we will say that $k$’th block is large if $k\in
I_{n,\varepsilon}$, and small otherwise.
It follows from the definition of the sequence $(a_{n})_{n\in\mathbb{N}}$ and
regular variation of the tails of $\xi$ that for any $x>0$,
(5.7) $n{\rm P}[\xi>xa_{n}]\to x^{-\beta},\quad n\to\infty.$
Therefore, by Proposition 3.21 in [15],
(5.8) $\sum_{k=1}^{n}\delta_{(\xi_{k}/a_{n},k/n)}\overset{{\rm
d}}{\longrightarrow}P_{\mu},$
where $P_{\mu}$ is a Poisson point process on $(0,\infty]\times[0,\infty)$
with intensity measure $d\mu(x,t)=\beta x^{-\beta-1}dxdt$. In particular, as $n\to\infty$, the sequence of variables $|I_{n,\varepsilon}|$, which count the number of large blocks, converges weakly to a Poisson distribution with parameter $\varepsilon^{-\beta}$.
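The Poisson limit for the number of large blocks is immediate to verify numerically. A sketch (Python) with a continuous Pareto stand-in for $\xi$, for which $a_{n}=n^{1/\beta}$ exactly (cf. (2.3)); the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

beta, eps, n, runs = 1.5, 0.7, 20_000, 500
a_n = n ** (1 / beta)        # (2.3) for the pure Pareto tail P[xi > x] = x^{-beta}

# numpy's pareto samples the Lomax law; adding 1 gives P[xi > x] = x^{-beta}, x >= 1
counts = np.array([(rng.pareto(beta, size=n) + 1 > eps * a_n).sum()
                   for _ in range(runs)])

print("mean |I_{n,eps}|:", counts.mean(), " vs eps^{-beta} =", eps ** -beta)
print("var  |I_{n,eps}|:", counts.var(), " (Poisson limit: variance = mean)")
```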
We begin by showing that the total progeny of the immigrants arriving in small blocks is negligible.
###### Proposition 5.3.
There is a constant $C_{5}$ such that for any $\varepsilon>0$ and
$\bar{\varepsilon}>0$,
$\limsup_{n\to\infty}\mathbb{P}\left[\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}^{c}}Y^{k}_{j-S_{k-1}}>\bar{\varepsilon}a_{n}\right]\leq
C_{5}\bar{\varepsilon}^{-\beta-\delta}\varepsilon^{\delta}.$
###### Proof.
We will use the fact that the extinction times divide our process into i.i.d.
pieces. Let
$\eta_{n}=\inf\\{k>0:\tau_{k}>n\\}.$
Since $\mathbb{E}\tau_{1}<\infty$ by Lemma 3.1, the strong law of large numbers implies $\eta_{n}/n\to\eta:=1/\mathbb{E}\tau_{1}$ as $n\to\infty$, $\mathbb{P}$-a.s. We have
$\begin{split}\mathbb{P}\left[\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}^{c}}Y^{k}_{j-S_{k-1}}>\bar{\varepsilon}a_{n}\right]&\leq\mathbb{P}\left[\max_{j\geq
1}\sum_{k\leq\tau_{2n\eta}}Y^{k}_{j-S_{k-1}}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]\\\
&+\mathbb{P}\left[|\eta-\eta_{n}/n|>\eta\right].\end{split}$
The second term tends to $0$ as $n\to\infty$. Since the extinctions divide our
process into i.i.d. pieces, we have
$\begin{split}\mathbb{P}\left[\max_{j\geq
1}\sum_{k\leq\tau_{2n\eta}}Y^{k}_{j-S_{k-1}}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]&\leq\sum_{m=1}^{2n\eta}\mathbb{P}\left[\max_{j\geq
1}\sum_{k=\tau_{m-1}}^{\tau_{m}}Y^{k}_{j-S_{k-1}}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]\\\ &=2n\eta\,\mathbb{P}\left[\max_{j\geq
1}\sum_{k=0}^{\tau_{1}}Y^{k}_{j-S_{k-1}}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]\\\ &\leq
2n\eta\,\mathbb{P}\left[\sum_{k=0}^{\tau_{1}}\max_{j\geq
1}Y^{k}_{j}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]\\\
&=2n\eta\,\mathbb{P}\left[\sum_{k=1}^{\infty}\operatorname{\mathbbm{1}}_{k\leq\tau_{1}}\max_{j\geq
1}Y^{k}_{j}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}\right]\\\ &\leq
2n\eta\sum_{k=1}^{\infty}\mathbb{P}\left[\tau_{1}\geq
k\right]\mathbb{P}\left[\max_{j\geq
1}Y^{k}_{j}\operatorname{\mathbbm{1}}_{\xi_{k}\leq\varepsilon
a_{n}}>\bar{\varepsilon}a_{n}/2k^{2}\right],\end{split}$
where in the last line we used the union bound (note that $\sum_{k\geq 1}(2k^{2})^{-1}<1$, so on the event in question at least one summand must exceed $\bar{\varepsilon}a_{n}/2k^{2}$) together with the fact that $\\{\tau_{1}\geq k\\}$ and the process $Y^{k}$ are independent.
Since the environment is given by an i.i.d. sequence, it is enough to estimate
the tails of the maximum of the process
$(Y_{j}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}})_{j\geq 1}$.
By Lemma 3.5 applied with $\gamma=\beta+\delta$,
$\mathbb{P}\left[\max_{j\geq
1}Y_{j}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}}>x\right]\leq
C_{2}x^{-\gamma}\left({\rm E}\left({\rm
E}_{\omega}Y_{\xi_{1}-1}^{2}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon
a_{n}}\right)^{\gamma/2}+\mathbb{E}\mathbb{Y}_{1}^{\gamma}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon
a_{n}}\right).$
As we have calculated in the proof of Lemma 3.5,
${\rm E}_{\omega}Y_{\xi_{1}-1}^{2}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}}=\xi_{1}(\xi_{1}-1)\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}},\quad{\rm E}_{\omega}\mathbb{Y}_{1}^{2}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}}=(2\xi_{1}^{2}\rho_{1}^{2}+\xi_{1}\rho_{1})\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}},$
therefore
${\rm E}\left({\rm
E}_{\omega}Y_{\xi_{1}-1}^{2}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon
a_{n}}\right)^{\gamma/2}\leq{\rm
E}\xi^{\gamma}\operatorname{\mathbbm{1}}_{\xi\leq\varepsilon a_{n}}$
and
$\mathbb{E}\mathbb{Y}_{1}^{\gamma}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon
a_{n}}\leq{\rm E}\left({\rm
E}_{\omega}\mathbb{Y}_{1}^{2}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon
a_{n}}\right)^{\gamma/2}\leq\left(2^{\gamma/2}{\rm E}\rho^{\gamma}+{\rm
E}\rho^{\gamma/2}\right){\rm
E}\xi^{\gamma}\operatorname{\mathbbm{1}}_{\xi\leq\varepsilon a_{n}}.$
Putting things together, for some constant $C>0$ and any $x>0$,
$\mathbb{P}\left[\max_{j\geq
1}Y_{j}\operatorname{\mathbbm{1}}_{\xi_{1}\leq\varepsilon a_{n}}>x\right]\leq
Cx^{-\gamma}{\rm E}\xi^{\gamma}\operatorname{\mathbbm{1}}_{\xi\leq\varepsilon
a_{n}}\leq Cx^{-\gamma}\int_{0}^{\varepsilon a_{n}}t^{\gamma-1}{\rm
P}[\xi>t]dt.$
By Karamata’s theorem ([2], Theorem 1.5.11) and (5.7),
$\int_{0}^{\varepsilon a_{n}}t^{\gamma-1}{\rm P}[\xi>t]dt\sim\frac{1}{\gamma-\beta}(\varepsilon a_{n})^{\gamma}{\rm P}[\xi>\varepsilon a_{n}]\sim\frac{1}{\gamma-\beta}\varepsilon^{\gamma-\beta}a_{n}^{\gamma}n^{-1}.$
Using these estimates, we obtain, for some constants $C,C^{\prime}>0$,
$\begin{split}\mathbb{P}\left[\max_{j\geq
1}\sum_{k\leq\tau_{2n\eta}}Y^{k}_{j-S_{k-1}}\operatorname{\mathbbm{1}}_{\\{\xi_{k}\leq\varepsilon
a_{n}\\}}>\bar{\varepsilon}a_{n}\right]&\leq
Cn\sum_{k=1}^{\infty}\mathbb{P}[\tau_{1}\geq
k]\left(\bar{\varepsilon}a_{n}/2k^{2}\right)^{-\gamma}\varepsilon^{\gamma-\beta}a_{n}^{\gamma}n^{-1}\\\
&\leq
C^{\prime}\bar{\varepsilon}^{-\gamma}\varepsilon^{\gamma-\beta}\mathbb{E}\tau_{1}^{2\gamma+1},\end{split}$
which finishes the proof since $\gamma=\beta+\delta$ and
$\mathbb{E}\tau_{1}^{2\gamma+1}<\infty$ by Lemma 3.1. ∎
The next step is to investigate the maximal generations among the progeny of
immigrants from large blocks. Although it may happen that the descendants of
particles from several large blocks coexist in one generation of the process
$Z$, we will show later that it is unlikely, so that we may begin by
investigating the maxima of $|I_{n,\varepsilon}|$ independent processes, each
representing progeny of immigrants from a large block. To this end, assume
that our probability space contains variables
$\left\\{(Y^{j,(N)}_{k})_{k\in\mathbb{N}}\,:\,j,N\in\mathbb{N}\right\\}$ such
that
* •
the processes $(Y^{j,(N)}_{k})_{k\in\mathbb{N}}$ are i.i.d. copies of
$(Y^{(N)}_{k})_{k\in\mathbb{N}}$,
* •
the family
$\left\\{(Y^{j,(N)}_{k})_{k\in\mathbb{N}}\,:\,j,N\in\mathbb{N}\right\\}$ is
independent of the environment
$\\{(\xi_{k},\lambda_{k})\\}_{k\in{\mathbb{Z}}}$.
For any $j,N\in\mathbb{N}$ denote $M_{N}^{j}=\max_{k\geq
0}(Y^{j,(N)}_{k}+Y^{j,(N)}_{k+1})$ and let
$\bar{D}_{j,n}=\\{\mathbb{Y}^{j,(\xi_{j})}_{\sqrt{n}}=0\\}$. Observe that the
event $\bar{D}_{j,n}$ means that the process $Y^{j,(\xi_{j})}$ went extinct no
later than at its $\sqrt{n}$’th marked generation.
###### Proposition 5.4.
Fix $\varepsilon>0$ and let $A_{n}\in\sigma(I_{n,\varepsilon})$ be such that
${\rm P}[A_{n}]\to 1$ as $n\to\infty$. For any $x>0$,
$\lim_{n\to\infty}\mathbb{P}\left[\max_{j\in
I_{n,\varepsilon}}M^{j}_{\xi_{j}}\operatorname{\mathbbm{1}}_{\bar{D}_{j,n}}>xa_{n},A_{n}\right]=1-\exp\left(-x^{-\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}-\varepsilon^{-\beta}\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\right).$
###### Proof.
Observe that due to our assumptions the event $\bar{D}_{j,n}$ depends only on
$\xi_{j}$ and the process $Y^{j,(\xi_{j})}$. Therefore we investigate a
maximum of variables which are i.i.d. under $\mathbb{P}$.
Recall that $|I_{n,\varepsilon}|$ converges in distribution to
$Poiss(\varepsilon^{-\beta})$. Moreover, conditioning on
$|I_{n,\varepsilon}|=k$, the examined maximum is a maximum of $k$ independent
variables with distribution given by
$\mathbb{P}\left[M_{\xi}\operatorname{\mathbbm{1}}_{\bar{D}_{n}}\in\cdot\,\Big{|}\,\xi>\varepsilon
a_{n}\right],$
for $\xi$ independent of $\\{Y^{(N)},M_{N}\,:\,N\in\mathbb{N}\\}$ and
$\bar{D}_{n}=\\{\mathbb{Y}^{(\xi)}_{\sqrt{n}}=0\\}$. In particular,
(5.9) $\begin{split}\mathbb{P}\left[\max_{k\in I_{n,\varepsilon}}M^{k}_{\xi_{k}}\operatorname{\mathbbm{1}}_{\bar{D}_{k,n}}>xa_{n},\,A_{n}\right]&=1-{\rm E}\left[(1-\mathbb{P}\left[M_{\xi}>xa_{n},\,\bar{D}_{n}\,|\,\xi>\varepsilon a_{n}\right])^{|I_{n,\varepsilon}|}\operatorname{\mathbbm{1}}_{A_{n}}\right]+o(1)\\\
&=1-{\rm E}\left[(1-\mathbb{P}\left[M_{\xi}>xa_{n},\,\bar{D}_{n}\,|\,\xi>\varepsilon a_{n}\right])^{|I_{n,\varepsilon}|}\right]+o(1),\end{split}$
where the first equality uses ${\rm P}[A_{n}]\to 1$ and the second follows from
${\rm
E}\left[(1-\mathbb{P}\left[M_{\xi}>xa_{n},\,\bar{D}_{n}\,|\,\xi>\varepsilon
a_{n}\right])^{|I_{n,\varepsilon}|}\operatorname{\mathbbm{1}}_{A_{n}^{c}}\right]\leq{\rm
P}[A_{n}^{c}].$
Note that, since the extinction time of the process $Y^{(\xi)}$ is dominated
by $\tau_{1}$, Lemma 3.1 implies
$\mathbb{P}[\bar{D}_{n}^{c}]\leq\mathbb{P}[\tau_{1}\geq\sqrt{n}]\leq
e^{-c\sqrt{n}}\mathbb{E}e^{c\tau_{1}},$
and by (5.7),
$\mathbb{P}[\bar{D}_{n}^{c}\,|\,\xi>\varepsilon a_{n}]\leq
e^{-c\sqrt{n}}\mathbb{E}e^{c\tau_{1}}{\rm P}[\xi>\varepsilon
a_{n}]^{-1}\sim\mathbb{E}e^{c\tau_{1}}\varepsilon^{\beta}ne^{-c\sqrt{n}}\to 0$
as $n\to\infty$. Therefore, for any fixed $\bar{\varepsilon}>0$, for $n$ large
enough,
(5.10) $\mathbb{P}\left[M_{\xi}>xa_{n},\,\bar{D}_{n}^{c}\,|\,\xi>\varepsilon
a_{n}\right]\leq\bar{\varepsilon}.$
By Lemma 5.1, $M_{N}/N\overset{{\rm d}}{\longrightarrow}M_{\infty}$ as
$N\to\infty$. Observe that the distribution of $M_{\infty}$ is continuous and
thus appropriate cumulative distribution functions converge uniformly; in
particular, for large enough $n$,
(5.11) $\sup_{y>0}\left|\mathbb{P}\left[M_{\xi}>y\,|\,\xi>\varepsilon
a_{n}\right]-\mathbb{P}\left[M_{\infty}>y/\xi\,|\,\xi>\varepsilon
a_{n}\right]\right|<\bar{\varepsilon}$
for $M_{\infty}$ independent of $\xi$. Observe that
$\begin{split}\mathbb{P}&\left[M_{\infty}>xa_{n}/\xi\,|\,\xi>\varepsilon
a_{n}\right]=\frac{\mathbb{P}[\xi M_{\infty}>xa_{n},\xi>\varepsilon
a_{n}]}{{\rm P}[\xi>\varepsilon a_{n}]}\\\ &=\frac{1}{{\rm P}[\xi>\varepsilon
a_{n}]}\left(\int_{[0,x/\varepsilon)}{\rm
P}[\xi>xa_{n}/t]\mathbb{P}[M_{\infty}\in dt]+\int_{[x/\varepsilon,\infty)}{\rm
P}[\xi>\varepsilon a_{n}]\mathbb{P}[M_{\infty}\in dt]\right)\\\
&=\int_{[0,x/\varepsilon)}\frac{{\rm P}[\xi>xa_{n}/t]}{{\rm P}[\xi>\varepsilon
a_{n}]}\mathbb{P}[M_{\infty}\in dt]+\mathbb{P}[M_{\infty}\geq
x/\varepsilon].\end{split}$
By the uniform convergence theorem for regularly varying functions (see (B.1.2) in
[3]), for $n$ large enough,
$\sup_{c\geq 1}\left|\frac{{\rm P}[\xi>c\varepsilon a_{n}]}{{\rm
P}[\xi>\varepsilon a_{n}]}-c^{-\beta}\right|<\bar{\varepsilon},$
which means that
$\left|\int_{[0,x/\varepsilon)}\left(\frac{{\rm P}[\xi>xa_{n}/t]}{{\rm P}[\xi>\varepsilon a_{n}]}-\left(\frac{x}{t\varepsilon}\right)^{-\beta}\right)\mathbb{P}[M_{\infty}\in dt]\right|<\bar{\varepsilon}.$
Hence
$\left|\mathbb{P}[M_{\infty}>xa_{n}/\xi\,|\,\xi>\varepsilon
a_{n}]-\left(x^{-\beta}\varepsilon^{\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}+\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\right)\right|<\bar{\varepsilon},$
which together with (5.11) implies
(5.12) $\left|\mathbb{P}\left[M_{\xi}>xa_{n}\,|\,\xi>\varepsilon
a_{n}\right]-\left(x^{-\beta}\varepsilon^{\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}+\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\right)\right|<2\bar{\varepsilon}.$
Putting the estimates (5.10) and (5.12) into (5.9) and using the fact that
$|I_{n,\varepsilon}|\overset{{\rm
d}}{\longrightarrow}Poiss(\varepsilon^{-\beta})$, we obtain
$\begin{split}1-&\exp\left(-\varepsilon^{-\beta}\left(x^{-\beta}\varepsilon^{\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}+\mathbb{P}[M_{\infty}\geq
x/\varepsilon]-3\bar{\varepsilon}\right)\right)\\\
&\leq\liminf_{n\to\infty}\mathbb{P}\left[\max_{k\leq
n}M^{k}_{\xi_{k}}\operatorname{\mathbbm{1}}_{\xi_{k}>\varepsilon
a_{n}}>xa_{n}\right]\leq\limsup_{n\to\infty}\mathbb{P}\left[\max_{k\leq
n}M^{k}_{\xi_{k}}\operatorname{\mathbbm{1}}_{\xi_{k}>\varepsilon
a_{n}}>xa_{n}\right]\\\ &\leq
1-\exp\left(-\varepsilon^{-\beta}\left(x^{-\beta}\varepsilon^{\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}+\mathbb{P}[M_{\infty}\geq
x/\varepsilon]+3\bar{\varepsilon}\right)\right),\end{split}$
which finishes the proof since $\bar{\varepsilon}$ is arbitrary. ∎
We are now ready to prove Theorem 2.2, rephrased into the setting of the
associated branching process.
###### Theorem 5.5.
Under assumptions $(B)$,
$\mathbb{P}\left[a_{n}^{-1}\max_{0\leq
k<S_{n}}(Z_{k}+Z_{k+1})>x\right]\xrightarrow{n\to\infty}1-\exp\left(-\mathbb{E}M_{\infty}^{\beta}x^{-\beta}\right).$
###### Proof.
Fix $\varepsilon>0$. For any $\bar{\varepsilon}>0$,
(5.13) $\mathbb{P}\left[\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}}(Y^{k}_{j-S_{k-1}}+Y^{k}_{j-S_{k-1}+1})>xa_{n}\right]\leq\mathbb{P}\left[\max_{j<S_{n}}(Z_{j}+Z_{j+1})>xa_{n}\right]\\\
\leq\mathbb{P}\left[2\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}^{c}}Y^{k}_{j-S_{k-1}}>\bar{\varepsilon}a_{n}\right]+\mathbb{P}\left[\max_{j\geq
1}\sum_{k\in
I_{n,\varepsilon}}(Y^{k}_{j-S_{k-1}}+Y^{k}_{j-S_{k-1}+1})>(x-\bar{\varepsilon})a_{n}\right].$
Note that because of (5.8) we expect that for large $n$ the set
$I_{n,\varepsilon}$ should be distributed rather uniformly on $\\{1,\dots
n\\}$, so that the large blocks are far from each other. Indeed, since
$nP[\xi>\varepsilon a_{n}]\to\varepsilon^{-\beta}$, for any sequence $b_{n}$
such that $b_{n}=o(n)$,
${\rm P}\left[\left(\exists k,l\in I_{n,\varepsilon}\right)\,k\neq l,|k-l|\leq
b_{n}\right]\leq n{\rm P}[\xi>\varepsilon a_{n}]\cdot b_{n}{\rm
P}[\xi>\varepsilon a_{n}]\to 0\quad\textnormal{ as }n\to\infty.$
That is, with high probability, large blocks are at distance at least $b_{n}$
from each other. On the other hand, we know that the extinction occurs very
often in our process, which should mean that as the process evolves, no two
bloodlines of immigrants from large blocks coexist at one time. Let
$D_{k,n}=\left\\{\mathbb{Y}^{k}_{\sqrt{n}}=0\right\\}$
be the event that the progeny of immigrants from the $k$’th block does not survive
more than $\sqrt{n}$ blocks. Then, by Lemma 3.1,
$\mathbb{P}\left[\bigcup_{k\leq n}D_{k,n}^{c}\right]\leq
n\mathbb{P}[\tau_{1}>\sqrt{n}]\leq ne^{-c\sqrt{n}}\mathbb{E}e^{c\tau_{1}}\to
0$
as $n\to\infty$. Therefore the probability of the set
$D_{n}=\bigcap_{k\leq n}D_{k,n}$
converges to $1$ as $n\to\infty$ and so does the probability of
$A_{n}=\\{\left(\forall k,l\in I_{n,\varepsilon}\right)k\neq
l\implies|k-l|>2\sqrt{n}\\}.$
Moreover, on the set $A_{n}\cap D_{n}$, the progeny of immigrants from each
large block dies out before the next large block occurs. That is, $\max_{j\geq
1}\sum_{k\in I_{n,\varepsilon}}(Y^{k}_{j-S_{k-1}}+Y^{k}_{j-S_{k-1}+1})$ is
really a maximum of independent maxima of $Y^{k}$ such that $k\in
I_{n,\varepsilon}$. Therefore,
$\mathbb{P}\left[\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}}(Y^{k}_{j-S_{k-1}}+Y^{k}_{j-S_{k-1}+1})>xa_{n},\,A_{n}\cap
D_{n}\right]=\mathbb{P}\left[\max_{k\in
I_{n,\varepsilon}}M^{k}_{\xi_{k}}\operatorname{\mathbbm{1}}_{\bar{D}_{k,n}}>xa_{n},\,A_{n}\right].$
By Proposition 5.4, this quantity converges to
$1-\exp\left(-x^{-\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}-\varepsilon^{-\beta}\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\right)$
as $n\to\infty$. Going back to (5.13), we have
(5.14)
$1-\exp\left(-x^{-\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}-\varepsilon^{-\beta}\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\right)\leq\liminf_{n\to\infty}\mathbb{P}\left[\max_{j<S_{n}}(Z_{j}+Z_{j+1})>xa_{n}\right].$
On the other hand, by Proposition 5.3,
$\limsup_{n\to\infty}\mathbb{P}\left[2\max_{j\geq 1}\sum_{k\in
I_{n,\varepsilon}^{c}}Y^{k}_{j-S_{k-1}}>\bar{\varepsilon}a_{n}\right]\leq
C_{5}(\bar{\varepsilon}/2)^{-\beta-\delta}\varepsilon^{\delta},$
which means that
(5.15)
$\begin{split}\limsup_{n\to\infty}\mathbb{P}&\left[\max_{j<S_{n}}(Z_{j}+Z_{j+1})>xa_{n}\right]\leq
C_{5}(\bar{\varepsilon}/2)^{-\beta-\delta}\varepsilon^{\delta}\\\
&+1-\exp\left(-(x-\bar{\varepsilon})^{-\beta}\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<(x-\bar{\varepsilon})/\varepsilon}-\varepsilon^{-\beta}\mathbb{P}[M_{\infty}\geq(x-\bar{\varepsilon})/\varepsilon]\right).\end{split}$
Observe that, since $\mathbb{E}M_{\infty}^{\beta+\delta}<\infty$ (see Remark
5.2), we have
$\varepsilon^{-\beta}\mathbb{P}[M_{\infty}\geq
x/\varepsilon]\leq\varepsilon^{\delta}x^{-\beta-\delta}\mathbb{E}M_{\infty}^{\beta+\delta}\to
0\quad\textnormal{ as }\varepsilon\to 0,$
while by the monotone convergence theorem,
$\mathbb{E}M_{\infty}^{\beta}\operatorname{\mathbbm{1}}_{M_{\infty}<x/\varepsilon}\to\mathbb{E}M_{\infty}^{\beta}\quad\textnormal{
as }\varepsilon\to 0.$
Therefore passing with $\varepsilon$ to $0$ in (5.14) gives
$1-\exp\left(-x^{-\beta}\mathbb{E}M_{\infty}^{\beta}\right)\leq\liminf_{n\to\infty}\mathbb{P}\left[\max_{j<S_{n}}(Z_{j}+Z_{j+1})>xa_{n}\right],$
and similarly in (5.15),
$\limsup_{n\to\infty}\mathbb{P}\left[\max_{j<S_{n}}(Z_{j}+Z_{j+1})>xa_{n}\right]\leq
1-\exp\left(-(x-\bar{\varepsilon})^{-\beta}\mathbb{E}M_{\infty}^{\beta}\right),$
which ends the proof since $\bar{\varepsilon}>0$ is arbitrary. ∎
## References
* [1] S. Alili, _Asymptotic behaviour for random walks in random environments_ , Journal of Applied Probability 36 (1999), no. 2, 334–349.
* [2] N. H. Bingham, C. M. Goldie, and J. L. Teugels, _Regular Variation_ , Cambridge University Press, 1987.
* [3] D. Buraczewski, E. Damek, and T. Mikosch, _Stochastic models with power-law tails: The equation X = AX + B_ , Springer Series in Operations Research and Financial Engineering, 2016.
* [4] D. Buraczewski and P. Dyszewski, _Precise large deviations for random walk in random environment_ , Electronic Journal of Probability 23 (2018), 1–26.
* [5] D. Buraczewski, P. Dyszewski, A. Iksanov, and A. Marynych, _Random walks in a strongly sparse random environment_ , Stochastic Processes and their Applications 130 (2020), 3990–4027.
* [6] D. Buraczewski, P. Dyszewski, A. Iksanov, A. Marynych, and A. Roitershtein, _Random walks in a moderately sparse random environment_ , Electronic Journal of Probability 24 (2019).
* [7] D. Buraczewski, P. Dyszewski, and A. Kołodziejska, _Weak quenched limit theorems for a random walk in a sparse random environment_ , Electronic Journal of Probability 29 (2024), 1 – 30.
* [8] A. Dembo, Y. Peres, and O. Zeitouni, _Tail estimates for one-dimensional random walk in random environment_ , Communications in Mathematical Physics 181 (1996), no. 3, 667–683.
* [9] D. Dolgopyat, _Random walks in one dimensional environment_.
* [10] D. Dolgopyat and I. Goldsheid, _Quenched limit theorems for nearest neighbour random walks in 1d random environment_ , (2010).
* [11] I. Goldsheid, _Simple transient random walks in one-dimensional random environment: the central limit theorem_ , (2006).
* [12] A. Iksanov, _Renewal theory for perturbed random walks and similar processes_ , Birkhäuser, 2016.
* [13] H. Kesten, M. Kozlov, and F. Spitzer, _A limit law for random walk in a random environment_ , Compositio Mathematica 30 (1975), no. 2, 145–168.
* [14] A. Matzavinos, A. Roitershtein, and Y. Seol, _Random walks in a sparse random environment_ , Electronic Journal of Probability 21 (2016).
* [15] S. I. Resnick, _Extreme Values, Regular Variation and Point Processes_ , Springer New York, 1987.
* [16] F. Solomon, _Random walks in a random environment_ , The Annals of Probability 3 (1975), no. 1, 1–31.
## Acknowledgements
The research was supported by the National Science Center, Poland (Opus, grant
number 2020/39/B/ST1/00209).
# Structure of wavefunction for interacting bosons in mean-field with random
$k$-body interactions
Priyanka Rao and N. D<EMAIL_ADDRESS>
Department of Applied Physics, Faculty of Technology and Engineering, The Maharaja Sayajirao University of Baroda, Vadodara-390001, India
###### Abstract
Wavefunction structure is analyzed for dense interacting many-boson systems
using Hamiltonian $H$, which is a sum of one-body $h(1)$ and an embedded GOE
of $k$-body interaction $V(k)$ with strength $\lambda$. In the first analysis,
a complete analytical description of the variance of the strength function as
a function of $\lambda$ and $k$ is derived and the marker $\lambda_{t}$
defining the thermalization region is obtained. In the strong coupling limit
($\lambda>\lambda_{t}$), the conditional $q$-normal density describes Gaussian
to semi-circle transition in strength functions as body rank $k$ of the
interaction increases. In the second analysis, this interpolating form of the
strength function is utilized to describe the fidelity decay after $k$-body
interaction quench and also to obtain the smooth form for the number of
principal components, a measure of chaos in finite interacting many-particle
systems. The smooth form very well describes embedded ensemble results for all
$k$ values.
## I Introduction
It is now well established that Random Matrix Theory, due to its universality
[1], successfully describes the spectral as well as wavefunction properties of
isolated finite many-particle quantum systems [2]. The spectral statistics
deals only with the energy eigenvalues while the statistical properties
related to the structure of the wavefunctions can reveal different layers of
chaos and hence give profound understanding of various problems in the field
of quantum many-body chaos and thermalization, in isolated finite interacting
particle systems such as atomic nuclei, atoms, mesoscopic systems (quantum
dots, small metallic grains), interacting spin systems modeling quantum
computing core, ultra-cold atoms and quantum black holes with the SYK model and
so on [2, 3, 4, 5, 6, 7, 8, 9]. To analyze the wavefunction properties, it is
very crucial to examine the so-called strength functions (also known as local
density of states) in detail, as they give information about how a particular
basis state spreads onto the eigenstates. The chaos measures like number of
principal components (NPC), information entropy, fidelity decay etc. can also
be determined by examining the general features of the strength functions [2].
The random matrix ensembles generally employed to investigate the statistical
properties of isolated finite many-particle quantum systems are the Gaussian
ensembles (in particular the Gaussian orthogonal ensemble (GOE)) for an
$m$-particle system. These involve interactions up to $m$-body in character
and are dominated by the $m$-body interactions. However, the constituents of
isolated quantum systems interact via few-body interactions. Hence the concept
of the embedded ensemble (EE) of $k$-body interactions, in particular EGOE($k$)
(the GOE version of EE($k$)), was introduced by French and co-workers [10, 11].
These ensembles, for particles in a mean-field and interacting via two-body
interactions ($k=2$), together with their various extended versions, are good
models for understanding various aspects of chaos in interacting particle
systems [2], and they have been investigated in detail both for fermion systems (called EGOE(1+2))
[12, 13, 14, 15, 16, 17] as well as boson systems (called BEGOE(1+2) with ’B’
for bosons) [18, 19, 20, 21, 22, 23]. Here, with $m$ particles distributed in
$N$ single particle (sp) states, two limiting situations exist, one is the
dilute limit (defined as $m\rightarrow\infty$, $N\rightarrow\infty$ and
$m/N\rightarrow 0$) and another is the dense limit (defined by
$m\rightarrow\infty$, $N\rightarrow\infty$ and $m/N\rightarrow\infty$). In the
dilute limit, one can expect similar behavior for both fermion and boson
systems while the dense limit is feasible only for boson systems and therefore
the focus was on the dense limit in BEGOE investigations [18, 19, 20, 21, 22,
23, 24]. For EGOE(1+2) in the dilute limit and for BEGOE(1+2) in the dense
limit, the system exhibits, as a function of the two-body interaction strength
$\lambda$ (measured in units of the average spacing between the one-body
mean-field sp levels), three transition or chaos markers
$(\lambda_{C},\lambda_{F},\lambda_{t})$: (a) as
the two-body interaction is turned on, level fluctuations exhibit a transition
from Poisson to GOE at $\lambda=\lambda_{C}$; (b) with further increase in
$\lambda$, the strength functions make a transition from Breit-Wigner (BW)
form to Gaussian form at $\lambda=\lambda_{F}>\lambda_{C}$; and (c) beyond
$\lambda=\lambda_{F}$, there is a region of thermalization around
$\lambda=\lambda_{t}$ where the basis dependent thermodynamic quantities like
entropy behave alike. It is important to note that the transitions mentioned
above are inferred from a large number of numerical calculations and they are
well verified to be valid in the bulk part of the spectrum. For further
details see [2] and references therein.
Going beyond two-body interactions, it is seen that higher-body
interactions, i.e. $k>2$, play an important role in strongly interacting quantum
systems [25, 26], nuclear physics [27], quantum black holes [7, 28] and
wormholes [29] with SYK model and also in quantum transport in disordered
networks connected by many-body interactions [30, 31, 32]. Therefore, it is
necessary to extend the analysis of EE to higher $k$-body interactions in
order to understand these problems. From the previous studies, it is known
that with EGOE($k$) or (BEGOE($k$)), the eigenvalue density for a system of
$m$ fermions/bosons in $N$ sp states changes from Gaussian form to semi-circle
as $k$ changes from 2 to $m$ [2, 6, 13, 33]. Very recently, $q$-Hermite
polynomials have been employed to study spectral densities of the so-called
SYK model [34, 35] and quantum spin glasses [36], along with studying the
strength functions and fidelity decay (also known as survival or return
probability) in EE, both for fermion as well as boson systems [33]. The smooth
form of eigenvalue density can be given by the so-called $q$-normal
distribution $f_{qN}$ and formulas for parameter $q$ in terms of $m$, $N$ and
$k$ are derived for fermionic and bosonic EE($k$) in [33] which explain the
Gaussian to semi-circle transition in spectral densities, strength functions
and fidelity decay in many-body quantum systems as a function of rank $k$ of
interactions. Recently, the lower-order bivariate reduced moments of the
transition strengths are examined for the action of a transition operator on
the eigenstates generated by EGOE($k$) and it is shown that the ensemble
averaged distribution of transition strengths follows a bivariate $q$-normal
distribution $f_{biv-qN}$ and a formula for NPC in the transition strengths
from a state is obtained [37]. Very recently, analytical formulas for the
lowest four moments of the strength functions for fermion systems modeled by
EGOE(1+$k$) are derived and it is shown that the conditional $q$-normal
density $f_{CqN}$ can be used to represent strength functions in the strong
coupling limit [38]. One can expect similar behavior for isolated finite
interacting boson systems with $k$-body interactions in the dense limit. The
purpose of the present letter is firstly to demonstrate that in strong
coupling domain (in the thermalization region), the strength functions indeed
can be represented by the conditional $q$-normal distribution $f_{CqN}$ in the
dense interacting boson systems interacting via $k$-body interaction.
Secondly, using $f_{CqN}$ form and parameters that enter in this form,
fidelity decay is described in BEGOE(1+$k$) and an analytical formula for NPC
is derived.
The Letter is organized as follows. We briefly introduce BEGOE(1+$k$) and
$q$-Hermite polynomials along with their generating function and conditional
$q$-normal distribution in Section II. The numerical results of the variation
of parameter $q$ as a function of $k$-body interaction strength $\lambda$ in
BEGOE(1+$k$) are presented in Section III. Also the formula of $q$ for
BEGOE($k$) is given for the sake of completeness, even though it is clearly
given in [6, 33]. Further, a complete analytical description of the variance
of the strength function, in terms of the correlation coefficient $\zeta$, for
BEGOE(1+$k$) is given and the ($m$,$N$,$k$) dependence of the marker $\lambda_{t}$ is
derived. In Section IV, the results for the variation of the strength function, in
the strong coupling domain ($\lambda\gg\lambda_{t}$), are presented as a
function of body rank $k$ and ensemble averaged results are compared with
smooth forms given by $f_{CqN}$. In Section V the interpolating form $f_{CqN}$
for the strength function is utilized to describe the fidelity decay after
random $k$-body interaction quench in BEGOE(1+$k$) in the thermalization
region. Further, two parameter ($\zeta$ and $q$) analytical formula for NPC is
derived as a function of energy for $k$-body interaction and tested with
numerical embedded ensemble results in Section VI. Finally, the concluding
remarks are given in section VII.
## II Preliminaries
### II.1 Embedded bosonic ensembles - BEGOE(1+$k$)
Consider $m$ spinless bosons distributed in $N$ degenerate sp states
interacting via $k$-body ($1\leq k\leq m$) interactions. Distributing these
$m$ bosons in all possible ways in $N$ sp states generates many-particle basis
of dimension $d={N+m-1\choose{m}}$. The $k$-body random Hamiltonian $V(k)$ is
defined as,
$V(k)=\displaystyle\sum_{k_{a},k_{b}}V_{k_{a},k_{b}}B^{\dagger}(k_{a})B(k_{b})\;.$
(1)
Here, operators $B^{\dagger}(k_{a})$ and $B(k_{b})$ are $k$-boson creation and
annihilation operators. They obey the boson commutation relations.
$V_{k_{a},k_{b}}$ are the symmetrized matrix elements of $V(k)$ in the
$k$-particle space with the matrix dimension being $d_{k}={N+k-1\choose k}$.
They are chosen to be randomly distributed independent Gaussian variables with
zero mean and unit variance; in other words, the $k$-body Hamiltonian is chosen to
be a GOE. BEGOE($k$) is generated by the action of $V(k)$ on the many-particle
basis states. Due to $k$-body nature of interactions, there will be zero
matrix elements in the many-particle Hamiltonian matrix, unlike a GOE. By
construction, we have a GOE for the case $k=m$. For further details about
these ensembles, their extensions and applications, see [2, 39, 40] and
references therein.
In realistic systems, bosons also experience mean-field generated by presence
of other bosons in the system and hence, it is more appropriate to model these
systems by BEGOE($1+k$) defined by,
$H=h(1)+\lambda V(k)\;.$ (2)
Here, the one-body operator $h(1)=\sum_{i=1}^{N}\epsilon_{i}n_{i}$ is
described by fixed sp energies $\epsilon_{i}$; $n_{i}$ is the number operator
for the $i$th sp state. The parameter $\lambda$ represents the strength of the
$k$-body interaction and it is measured in units of the average mean spacing
of the sp energies defining $h(1)$. In this analysis, we have employed fixed
sp energies $\epsilon_{i}=i+1/i$ in defining the mean-field Hamiltonian
$h(1)$. As the dense limit is more interesting for bosons, for numerical
study, we have chosen $N=5$, $m=10$ with space dimensionality of $d=1001$ and
varied $k$ from 2 to $m$. It is now known that in nuclear reactions and
strongly interacting quantum systems $k=2,3,4$ are of physical importance [7,
25, 26]. However, for the sake of completeness, to study the generic features
of embedded ensembles and the possibility of higher $k$ becoming prominent, we
address $k=2$ to $m$.
### II.2 $q$-Hermite polynomials and conditional $q$-normal distribution
The $q$-Hermite polynomials were first introduced in mathematics by L. J.
Rogers. Consider the $q$ numbers $[n]_{q}$ defined as
$\left[n\right]_{q}=(1-q)^{-1}(1-q^{n})$. Then, $[n]_{q\rightarrow 1}=n$, and
$[n]_{q}!=\Pi^{n}_{j=1}[j]_{q}$ with $[0]_{q}!=1$. Now, $q$-Hermite
polynomials $H_{n}(x|q)$ are defined by the recursion relation [41],
$x\,H_{n}(x|q)=H_{n+1}(x|q)+\left[n\right]_{q}\,H_{n-1}(x|q)$ (3)
with $H_{0}(x|q)=1$ and $H_{-1}(x|q)=0$. Note that for $q=1$, the $q$-Hermite
polynomials reduce to normal Hermite polynomials (related to Gaussian) and for
$q=0$ they will reduce to Chebyshev polynomials (related to semi-circle).
Importantly, $q$-Hermite polynomials are orthogonal within the limits $\pm
2/\sqrt{1-q}$, with the $q$-normal distribution $f_{qN}(x|q)$ as the weight
function defined by [37],
$f_{qN}(x|q)=\displaystyle\frac{\sqrt{1-q}}{2\pi\sqrt{4-(1-q)x^{2}}}\displaystyle\prod_{i=0}^{\infty}(1-q^{i+1})[(1+q^{i})^{2}-(1-q)q^{i}x^{2}].$
(4)
Here, $-2/\sqrt{1-q}\leq x\leq 2/\sqrt{1-q}$ and $q\in[0,1]$. Note that
$\int_{s(q)}f_{qN}(x|q)\;dx=1$ over the range
$s(q)=(-2/\sqrt{1-q},2/\sqrt{1-q})$. It is seen that in the limit
$q\rightarrow 1$, $f_{qN}(x|q)$ takes the Gaussian form, and in the limit $q=0$
the semi-circle form. Now the bivariate $q$-normal distribution $f_{biv-
qN}(x,y|\zeta,q)$ is defined as follows [37, 42],
$\begin{array}[]{rcl}f_{biv-
qN}(x,y|\zeta,q)&=&f_{qN}(x|q)f_{CqN}(y|x;\zeta,q)\\\ \\\
&=&f_{qN}(y|q)f_{CqN}(x|y;\zeta,q)\end{array}$ (5)
where $\zeta$ is the bivariate correlation coefficient and the conditional
$q$-normal densities, $f_{CqN}$ can be given as,
$\begin{array}[]{rcl}f_{CqN}(x|y;\zeta,q)&=&f_{qN}(x|q)\;\displaystyle\prod_{i=0}^{\infty}\frac{(1-\zeta^{2}q^{i})}{h_{i}(x,y|\zeta,q)};\\\
\\\
f_{CqN}(y|x;\zeta,q)&=&f_{qN}(y|q)\;\displaystyle\prod_{i=0}^{\infty}\frac{(1-\zeta^{2}q^{i})}{h_{i}(x,y|\zeta,q)};\\\
\\\ h_{i}(x,y|\zeta,q)&=&(1-\zeta^{2}q^{2i})^{2}-(1-q)\zeta q^{i}(1+\zeta^{2}q^{2i})xy+(1-q)\zeta^{2}(x^{2}+y^{2})q^{2i}.\end{array}$ (6)
The $f_{CqN}$ and $f_{biv-qN}$ are normalized to 1 over the range $s(q)$,
which can be inferred from the following property,
$\int_{s(q)}H_{n}(x|q)f_{CqN}(x|y;\zeta,q)\;dx=\zeta^{n}H_{n}(y|q).$ (7)
The first four moments of $f_{CqN}$ can be given [38] as,
$\begin{array}[]{rcl}\text{Centroid}&=&\zeta y,\\\ \\\
\text{Variance}&=&1-\zeta^{2}\;,\\\ \\\
\text{Skewness,}\;\gamma_{1}&=&-\displaystyle\frac{\zeta(1-q)y}{\sqrt{1-\zeta^{2}}},\\\
\\\
\text{Excess,}\;\gamma_{2}&=&(q-1)+\displaystyle\frac{\zeta^{2}(1-q)^{2}y^{2}+\zeta^{2}(1-q^{2})}{(1-\zeta^{2})}\;.\end{array}$
(8)
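For readers who wish to experiment with these densities, the following minimal Python sketch (our illustration, not code accompanying the cited references) implements the recursion (3) and the densities (4) and (6), truncating the infinite products; the helper names (qnum, qhermite, f_qn, f_cqn) are ours and are reused in the later sketches. The final lines check the first two moments of Eq. (8) by quadrature.

```python
import numpy as np

def qnum(n, q):
    """q-number [n]_q = (1 - q^n)/(1 - q); assumes 0 <= q < 1."""
    return (1.0 - q**n) / (1.0 - q)

def qhermite(n, x, q):
    """q-Hermite polynomial H_n(x|q) from the recursion Eq. (3)."""
    hm, h = 0.0, 1.0                              # H_{-1} and H_0
    for j in range(n):
        hm, h = h, x * h - qnum(j, q) * hm        # Eq. (3)
    return h

def f_qn(x, q, imax=200):
    """q-normal density f_qN(x|q), Eq. (4); product truncated at imax terms."""
    if (1.0 - q) * x * x >= 4.0:
        return 0.0                                # outside the support s(q)
    pref = np.sqrt(1.0 - q) / (2.0 * np.pi * np.sqrt(4.0 - (1.0 - q) * x * x))
    i = np.arange(imax)
    return pref * np.prod((1.0 - q**(i + 1))
                          * ((1.0 + q**i)**2 - (1.0 - q) * q**i * x * x))

def f_cqn(x, y, zeta, q, imax=200):
    """Conditional q-normal density f_CqN(x|y; zeta, q), Eq. (6)."""
    i = np.arange(imax)
    h = ((1.0 - zeta**2 * q**(2 * i))**2
         - (1.0 - q) * zeta * q**i * (1.0 + zeta**2 * q**(2 * i)) * x * y
         + (1.0 - q) * zeta**2 * (x * x + y * y) * q**(2 * i))
    return f_qn(x, q, imax) * np.prod((1.0 - zeta**2 * q**i) / h)

# quick quadrature check of the centroid and variance in Eq. (8):
q, zeta, y = 0.6, 0.4, 1.0
xs = np.linspace(-2 / np.sqrt(1 - q) + 1e-6, 2 / np.sqrt(1 - q) - 1e-6, 4001)
w = np.array([f_cqn(x, y, zeta, q) for x in xs])
dx = xs[1] - xs[0]
m1 = np.sum(xs * w) * dx                          # should be close to zeta*y
var = np.sum((xs - m1)**2 * w) * dx               # should be close to 1 - zeta^2
print(m1, zeta * y, var, 1.0 - zeta**2)
```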
Recently, it was shown that the generating function for $q$-Hermite polynomials
describes the Gaussian to semi-circle transition in the eigenvalue density as $k$
changes from $1$ to $m$, using $k$-body EGOE and their unitary variants EGUE,
both for fermion and boson systems [33]. Very recently, in the strong coupling domain, the lowest four moments of the
recently, in the strong coupling domain the lowest four moments of the
strength function for $k$-body fermionic embedded ensemble are obtained and it
is shown that they are essentially same as that of $f_{CqN}$ [38]. Therefore,
one can use $f_{CqN}$ distribution to represent the smooth forms of the
strength functions and analyze the wavefunction structure in quantum many-body
systems with $k$-body interactions. With this, the width of $f_{CqN}$ (and
also of the strength function) is related to the correlation coefficient
$\zeta$ by Eq. (8). In the next section, we will present our results for the
variation of parameter $q$ and the correlation coefficient $\zeta$ as a
function of $k$-body interaction strength $\lambda$ in BEGOE(1+$k$). Also, a
complete analytical description of $\zeta$, in terms of $N,m$,$k$ and
$\lambda$, for BEGOE(1+$k$) is given.
## III Parameter dependence of $q$ and $\zeta$ : results for BEGOE(1+$k$)
### III.1 Formula of $q$-parameter
It has already been demonstrated that the state density for EE($k$) (and also
EE(1+$k$)) in general exhibits a Gaussian to semi-circle transition as $k$
increases from $1$ to $m$ [17]. This is now well verified in many numerical
calculations and analytical proofs obtained via lower order moments [2, 6, 9,
20, 39, 43]. Figure 1(a) represents ensemble averaged state density obtained
for a 100 member BEGOE(1+$k$) ensemble with $m=10$ bosons distributed in $N=5$
sp states and the body rank of interaction changing from $k$ = 2 to 10. In
these calculations, the eigenvalue spectrum for each member of the ensemble is
first zero centered ($\epsilon_{H}$ is centroid) and scaled to unit width
($\sigma_{H}$ is width) and then the histograms are constructed. The results
clearly display the transition in the spectral density from Gaussian to semi-
circle form as $k$ changes from 2 to $m=10$. With $E$ as zero centered and
using $x=E/\sigma_{H}$, the numerical results are compared with the normalized
state density $\rho(E)=d\;f_{qN}(x|q)$ with
$\epsilon_{H}-\frac{2\sigma_{H}}{\sqrt{1-q}}\leq
E\leq\epsilon_{H}+\frac{2\sigma_{H}}{\sqrt{1-q}}$. Here the parameter $q$ is
computed using the formula, valid for BEGOE($k$)(i.e. $H=V(k)$), given in
[33],
$\begin{array}[]{l}q_{V(k)}\sim\displaystyle\binom{N+m-1}{m}^{-1}\displaystyle\sum_{\nu=0}^{\nu_{max}=\min[k,m-k]}\;\displaystyle\frac{X(N,m,k,\nu)\;d(g_{\nu})}{\left[\Lambda^{0}(N,m,k)\right]^{2}}\;;\\\
\\\ X(N,m,k,\nu)=\Lambda^{\nu}(N,m,m-k)\;\Lambda^{\nu}(N,m,k)\;;\\\ \\\
\Lambda^{\nu}(N,m,r)=\displaystyle\binom{m-\nu}{r}\;\displaystyle\binom{N+m+\nu-1}{r}\;,\\\
\\\
d(g_{\nu})=\displaystyle\binom{N+\nu-1}{\nu}^{2}-\displaystyle\binom{N+\nu-2}{\nu-1}^{2}\;.\end{array}$
(9)
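Eq. (9) is a finite sum of binomials and is easy to evaluate directly; the short sketch below (a straightforward transcription, with our own function names) prints $q_{V(k)}$ for the $N=5$, $m=10$ system used here and shows the drift towards the semi-circle value $q\approx 0$ as the body rank $k$ grows.

```python
from math import comb

def Lam(nu, N, m, r):
    """Lambda^nu(N,m,r) = C(m-nu, r) * C(N+m+nu-1, r), as in Eq. (9)."""
    return comb(m - nu, r) * comb(N + m + nu - 1, r)

def d_g(nu, N):
    """d(g_nu) = C(N+nu-1, nu)^2 - C(N+nu-2, nu-1)^2 (second term absent for nu=0)."""
    return comb(N + nu - 1, nu)**2 - (comb(N + nu - 2, nu - 1)**2 if nu >= 1 else 0)

def q_Vk(N, m, k):
    d = comb(N + m - 1, m)                        # many-particle dimension
    s = sum(Lam(nu, N, m, m - k) * Lam(nu, N, m, k) * d_g(nu, N)
            for nu in range(min(k, m - k) + 1))
    return s / (d * Lam(0, N, m, k)**2)

for k in range(2, 11):
    print(k, round(q_Vk(5, 10, k), 4))            # q -> 0 (semi-circle) as k -> m
```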
In the strong coupling domain, one can also apply Eq.(9) to BEGOE(1+$k$), as
the $k$-body part of the interaction is expected to dominate over the one-body
part. One can see that the ensemble averaged results in Figure 1(a) are in
excellent agreement with the smooth forms obtained using $f_{qN}$. With
$\lambda=0$ in Eq.(2) i.e. one-body part $h(1)$ only, the analytical formula
of $q$ for bosons, based on trace propagation method [44], can be given as,
$\begin{array}[]{lcl}q_{h(1)}&=&{\langle h(1)^{4}\rangle}^{m}-2\\\ \\\
&=&\displaystyle{\\{\frac{3(m-1)N(1+N)(1+m+N)}{m(2+N)(3+N)(m+N)}-2\\}}\\\
&&+\displaystyle{\frac{m^{2}+(N+m)^{2}+(N+2m)^{2}}{m(N+m)}\frac{\sum_{i=1}^{N}\tilde{\epsilon_{i}}^{4}}{(\sum_{i=1}^{N}\tilde{\epsilon_{i}}^{2})^{2}}}.\\\
\end{array}$ (10)
Here, ${\langle h(1)^{4}\rangle}^{m}$ is the reduced fourth moment of one-body
part and $\tilde{\epsilon_{i}}$ are the traceless sp energies of $i$’th state.
With $H=h(1)$ and uniform sp energies $\epsilon_{i}=i$, Eq.(10) gives $q=0.71$
for ($m=5,N=10$) and $q=0.68$ for ($m=10,N=5$). While with sp energies
$\epsilon_{i}=i+1/i$, used in the present study, one obtains $q=0.68$ for
($m=5,N=10$) and $q=0.63$ for ($m=10,N=5$). Figure 1(b) shows variation of
$q_{h(1)}$ as a function of $N$ for various values of $m/N$. Here, sp energies
$\epsilon_{i}=i+1/i$ are used. It can be clearly seen that in the dense limit
($m\rightarrow\infty$, $N\rightarrow\infty$ and $m/N\rightarrow\infty$),
$q_{h(1)}\rightarrow 1$. In the dilute limit ($m\rightarrow\infty$,
$N\rightarrow\infty$ and $m/N\rightarrow 0$), similar variation in $q_{h(1)}$
can be observed due to $m\leftrightarrow N$ symmetry between the dense limit
and the dilute limit as identified in [18, 44]. Furthermore, the variation of
parameter $q$ is also studied as the interaction strength $\lambda$ varies in
BEGOE(1+$k$) for a fixed body rank $k$. Here, the ensemble averaged value of
$q$ is computed for a system of 100 member BEGOE(1+$k$) ensemble with $m=10$
bosons in $N=5$ sp states and results are shown in Figure 1(c). $q$ estimates
are also shown in the figure by horizontal marks for $H=h(1)$ and $H=V(k)$ on
left and right vertical axes respectively. One can see that for very small
values of $\lambda$, ensemble averaged $q$ values are found very close to
$q_{h(1)}$ for all body ranks $k$, while for sufficiently large $\lambda$,
where the $k$-body part dominates over the one-body part, the ensemble averaged $q$
values reach the corresponding $q_{V(k)}$ given by Eq.(9). From the variation of
ensemble averaged $q$ values in Figure 1(c), one can see that the shape of the
state density takes intermediate form between Gaussian to semi-circle as
$\lambda$ changes in BEGOE(1+$k$) for a fixed $k$. Therefore, the $q$-normal
distribution $f_{qN}$ formula can be used to describe the transition in the
state density with any value of $\lambda$ and $k$ in BEGOE(1+$k$).
Figure 1: (a) Histograms represent the state density vs. normalized energy $E$
results of the spectra of a 100 member BEGOE($1+k$) ensemble with $m=10$
bosons in $N=5$ sp states for different $k$ values. The strength of
interaction $\lambda=0.5$ is chosen and in the plots $\int\rho(E)dE=d$.
Ensemble averaged state density histogram is compared with $q$-normal
distribution (continuous black curves) given by $f_{qN}(x|q)$ with the
corresponding $q$ values given by Eq. (9). (b) $q_{h(1)}$ vs. $N$ for various
values of $m/N$. $q_{h(1)}$ is obtained using Eq. (10) with sp energies
$\epsilon_{i}=i+1/i$. Dense limit curve corresponds to the result with
$m/N=1000$. (c) Ensemble averaged $q$ vs. $\lambda$ for a 100 member
BEGOE(1+$k$) ensemble with $m=10$ bosons in $N=5$ sp states for different $k$
values. The horizontal black mark on left $q$-axis indicates $q$ estimate for
$H=h(1)$ given by Eq. (10), while the colored marks on right $q$-axis
represent the $q$ values, given by Eq. (9), for corresponding $k$-body rank
with $H=V(k)$. See text for more details.
### III.2 Formula of $\zeta$
The parameter $\zeta$, which is the correlation coefficient between the full
Hamiltonian $H$ and its diagonal part $H_{\text{dia}}$, is related to the
width $\sigma_{F}$ of the strength functions by
$\zeta=\sqrt{1-\displaystyle\frac{\sigma_{H_{\text{off-
dia}}}^{2}}{\sigma_{H}^{2}}}=\sqrt{1-\sigma_{F}^{2}},\;\;\;\;\sigma_{F}=\displaystyle\frac{\sigma_{H_{\text{off-
dia}}}}{\sigma_{H}}$ (11)
In the above equation, $\sigma_{H}^{2}$ and $\sigma_{H_{\text{off-dia}}}^{2}$
are variances of the eigenvalue distribution using full Hamiltonian and by
taking all diagonal matrix elements as zero, respectively. Since $\zeta$ and
$\sigma_{F}$ are simply related as $\sigma_{F}^{2}=1-\zeta^{2}$, here the
discussion is in terms of $\zeta$. For BEGOE(1+$k$) ensemble, analytical
expression for $\zeta$ based on the method of trace propagation can be derived
as follows. For $H=V(k)$ i.e. with all sp energies as degenerate, it is known
that [20],
$\begin{array}[]{rcl}\sigma_{H=V(k)}^{2}&=&\displaystyle
T(N,m,k)\binom{N+k-1}{k}^{-1}\;\sum_{\alpha,\beta}\overline{w^{2}_{\alpha\beta}}\;,\\\
\\\ T(N,m,k)&=&\displaystyle\Lambda^{0}(N,m,k)/\binom{N+k-1}{k}\;.\end{array}$
(12)
Here, $\alpha$ and $\beta$ denote $k$-particle states. In $k$-particle space,
the $H$ matrix is GOE. Therefore, the $k$-particle matrix elements
$w_{\alpha\beta}$ are Gaussian random variates with zero mean and unit
variance. The variance of diagonal matrix elements is
$\overline{w^{2}_{\alpha\alpha}}=2$ while that of off-diagonal matrix elements
is $\overline{w^{2}_{\alpha\beta}}=1$ for ($\alpha\neq\beta$). With this,
$\sigma_{H=V(k)}^{2}=T(N,m,k)\;\binom{N+k-1}{k}^{-1}\left\\{2\times\text{no-
dia}+2\times\text{no-offdia}\right\\},$ (13)
here the number of independent diagonal $k$-body matrix elements is ’no-
dia’$=\binom{N+k-1}{k}$ and that of off-diagonal is ’no-
offdia’$=\frac{1}{2}\binom{N+k-1}{k}\\{\binom{N+k-1}{k}-1\\}$. Similarly,
$\sigma_{H_{\text{off-dia}}}$ is given by removing the contribution of
diagonal $k$-body matrix elements from the above equation. Then using Eq.(11)
for $H=V(k)$,
$\zeta^{2}=\frac{4}{{N+k-1\choose k}+1}\;.$ (14)
Here, it can be immediately seen that $\zeta^{2}$ is independent of $m$ for
BEGOE($k$). In the dense limit with $N\rightarrow\infty$ and
$m\rightarrow\infty$, $\sigma_{F}\rightarrow 1$ giving $\zeta\rightarrow 0$ as
was suggested in [21]. Also, with $k\ll m$, $\zeta^{2}\propto 1/N^{k}$. Using
$m\leftrightarrow N$ symmetry between the dense limit and the dilute limit
formula [18, 44], we have $\zeta^{2}\propto 1/m^{k}$ in the dilute limit and
this result is in agreement with [38]. Going further, with inclusion of one-
body part defined by the external sp energies ($\epsilon_{i}$), and with
$H=h(1)+\lambda V(k)$, we have
$\begin{array}[]{rcl}\sigma_{H}^{2}&=&\sigma_{h(1)}^{2}+\lambda^{2}\;\sigma_{V(k)}^{2},\\\
\\\
&=&\frac{m(N+m)}{N(N+1)}\;\sum\tilde{\epsilon_{i}}^{2}+\lambda^{2}\;\sigma_{V(k)}^{2}.\end{array}$
(15)
The analytical expression for $\zeta^{2}$ can be given by,
$\zeta^{2}=\frac{\frac{m(N+m)}{N(N+1)}\;\sum\tilde{\epsilon_{i}}^{2}+2\;\lambda^{2}\;T(N,m,k)}{\frac{m(N+m)}{N(N+1)}\;\sum\tilde{\epsilon_{i}}^{2}+\lambda^{2}\;T(N,m,k)\;\\{1+\binom{N+k-1}{k}\\}}\;.$
(16)
In the above equation, the contribution from the diagonal part of $V(k)$ is
also included in the numerator term. The analytical expression for
$\zeta^{2}$ given by Eq.(16) is tested with the numerical ensemble averaged
results obtained using a 100 member BEGOE(1+$k$) ensemble with $(m=10,N=5)$.
The results of $\zeta^{2}$ as a function of $k$-body interaction strength
$\lambda$ for different body rank $k$ are presented in Figure 2. The black
smooth curve in each plot is obtained using Eq.(16) with fixed sp energies
employed in the present study. It can be seen from the results that agreement
between the ensemble averaged values (red solid circles) and the smooth forms
obtained by Eq.(16) is very good for all $k$ values. Small difference with
large $\lambda$, for $k<5$, is due to neglect of induced sp energies. The
contribution of induced sp energies reduces as $\lambda$ and $k$ increases.
One can see from the results shown in Figure 2 that the width of the strength
function is strongly dependent on $\lambda$. For $\lambda\rightarrow 0$,
$\zeta^{2}\rightarrow 1$ for all $k$ and the strength functions are known to
be represented by $\delta$ functions. With increase in $\lambda$
i.e.$\lambda\geq\lambda_{C}$, the strength functions are known to be described
by the Briet-Wigner (Lorentz) form. With further increase in
$\lambda>>\lambda_{F}$, $\zeta^{2}$ goes on decreasing smoothly leading to a
fully chaotic domain giving the Gaussian or semi-circle or intermediate to
Gaussian and semi-circle character of the strength functions depending upon
the values of $\lambda$ and $k$. One can also observe the BW to Gaussian to
semi-circle transition in strength functions by changing both $\lambda$ and
$k$. Therefore, it is possible to have a shape intermediate to BW and semi-
circle for some values of $\lambda$ and $k$ [45].
For two-body interaction, the thermodynamic region $\lambda=\lambda_{t}$ can
be determined using the condition $\zeta^{2}=0.5$ [23, 46]; i.e. the spreading
produced by the one-body part and the two-body part are equal. Similarly, one can
obtain the marker $\lambda_{t}$ for $k$-body interactions in the presence of a mean
field by setting the spreadings produced by the one-body and $k$-body parts
equal in Eq.(16). Solving for $\lambda$, the ($m$, $N$, $k$) dependence of
marker $\lambda_{t}$ is given by,
$\lambda_{t}=\sqrt{\frac{m(N+m)\;\sum\tilde{\epsilon_{i}}^{2}}{N(N+1)\Lambda^{0}(N,m,k)(1-3\;\binom{N+k-1}{k}^{-1})}}\;\;.$
(17)
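Both Eq. (16) and Eq. (17) are closed-form and can be tabulated directly; the following is a minimal sketch (a direct transcription of the two expressions, with the fixed sp energies $\epsilon_{i}=i+1/i$ of the present study; function names are ours):

```python
from math import comb, sqrt

def Lam0(N, m, k):
    """Lambda^0(N,m,k) = C(m,k) * C(N+m-1,k)."""
    return comb(m, k) * comb(N + m - 1, k)

def sum_eps2(N):
    """Sum of squared traceless sp energies for eps_i = i + 1/i."""
    eps = [i + 1.0 / i for i in range(1, N + 1)]
    mean = sum(eps) / N
    return sum((e - mean)**2 for e in eps)

def zeta2(N, m, k, lam):
    """Correlation coefficient squared, Eq. (16)."""
    one_body = m * (N + m) / (N * (N + 1)) * sum_eps2(N)
    T = Lam0(N, m, k) / comb(N + k - 1, k)        # T(N,m,k) from Eq. (12)
    return ((one_body + 2.0 * lam**2 * T)
            / (one_body + lam**2 * T * (1 + comb(N + k - 1, k))))

def lambda_t(N, m, k):
    """Thermalization marker, Eq. (17)."""
    num = m * (N + m) * sum_eps2(N)
    den = N * (N + 1) * Lam0(N, m, k) * (1.0 - 3.0 / comb(N + k - 1, k))
    return sqrt(num / den)

for k in (2, 3, 4):
    print(k, round(lambda_t(5, 10, k), 4), round(zeta2(5, 10, k, 0.5), 4))
```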
Figure 3 shows the variation of marker $\lambda_{t}$ in dense boson systems
with BEGOE(1+$k$) as a function of $N$ for the fixed sp energies used in the
present study. The results are shown for body rank values $k=2,3$ and $4$, and
with $m/N=2$ and $5$. From the results one can clearly see that $\lambda_{t}$
decreases as the rank of the interaction $k$ increases. Hence, the
thermalization sets in faster as the rank of interaction $k$ increases.
Recently, using $k$-body embedded ensembles both for fermions and bosons, it
is demonstrated that in the thermalization region ($\lambda\geq\lambda_{t}$),
shape of the strength functions changes from Gaussian to semi-circle for the
states close to the center of the spectrum as the rank of the interaction $k$
increases and they can be well represented by $f_{qN}$ form for all $k$ values
in $V(k)$ [33]. The strength functions are symmetrical in $E$ near the center
of the spectrum as is the result with $f_{qN}$. However, it is seen in some
calculations with $k=2$ that the strength functions become asymmetrical in $E$
as one moves away from the center [24]. This feature can be incorporated by
representing strength function using $f_{CqN}$ which can not be generated by
$f_{qN}$. This will be verified with a numerical example in the next section
and more importantly, a single interpolating function $f_{CqN}$, in terms of
parameters $q$ and $\zeta$, is considered for describing Gaussian to semi-
circle transition in the strong coupling domain as the body rank $k$ in
BEGOE(1+$k$) is changed.
Figure 2: Ensemble averaged $\zeta^{2}$ (red solid circles) as a function of interaction strength $\lambda$, calculated for the BEGOE(1+$k$) ensemble with the $N=5,m=10$ example, shown for different $k$ values. The smooth black curves are due to Eq.(16) using the fixed sp energies $\epsilon_{i}=i+1/i$ employed in the present study.
Figure 3: Variation of the marker $\lambda_{t}$ as a function of $N$ for dense boson systems with BEGOE(1+$k$). Results are shown for various values of ($k$,$m/N$) using Eq.(17).
## IV Strength function
Given $m$-particle basis state $\left|{\kappa}\right\rangle$, the diagonal
matrix elements of $m$-particle Hamiltonian $H$ are denoted as energy
$\xi_{{\kappa}}$, so that $\xi_{{\kappa}}=\langle{\kappa}|H|{\kappa}\rangle$.
The diagonalization of the full matrix $H$ gives the eigenstates
$\left|E_{i}\right\rangle$ with eigenvalues $E_{i}$, where
$\left|{\kappa}\right\rangle=\sum_{i}C_{{\kappa}}^{i}\left|E_{i}\right\rangle$.
The strength function that corresponds to the state
$\left|{\kappa}\right\rangle$ is defined as
$F_{\xi_{\kappa}}(E)=\sum_{i}{|C_{{\kappa}}^{i}|}^{2}\;\delta(E-E_{i})$. In
the present study, we take the $\left|{\kappa}\right\rangle$ states to be the
eigenstates of $h(1)$. In order to get an ensemble averaged form of the
strength functions, the eigenvalues $E_{i}$ are scaled to have zero centroid
and unit variance for the eigenvalue distribution. The ${\kappa}$-energies,
$\xi_{{\kappa}}$, are also scaled similarly. Now, for each member, all
${|C_{{\kappa}}^{i}|}^{2}$ are summed over the basis states ${\kappa}$ with
energy $\xi$ in the energy window $\xi\pm\Delta$. Then, the ensemble averaged
$F_{\xi}(E)$ vs. $E$ are constructed as histograms by applying the
normalization condition $\int_{s(q)}F_{\xi}(E)\;dE=1$. In Figure 4, histograms
represent ensemble averaged $F_{\xi}(E)$ results for all body rank $k$ values
with $\lambda=0.5$ using a 250 member BEGOE(1+$k$) ensemble with $m=10$ and
$N=5$ system. The strength function plots are obtained for $\xi=0.0,\pm 1.0$
and $\pm 2.0$. The value of $k$-body interaction strength is chosen such that
$\lambda\gg\lambda_{t}$, i.e. the system is in the region of thermalization
[9, 23]. The histograms, representing BEGOE(1+$k$) results of strength
functions, are compared with the conditional $q$-normal density function as
given by,
$F_{\xi}(E)=f_{CqN}(x=E|y=\xi;\zeta,q).$ (18)
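The binning procedure described above is straightforward to prototype. In the sketch below a plain GOE matrix stands in for the BEGOE(1+$k$) members (an illustrative simplification, not the ensemble actually used here); the point is only the bookkeeping of the $F_{\xi}(E)$ construction: normalising $E$ and $\xi$, selecting the $\xi$-window, and histogramming the $|C_{\kappa}^{i}|^{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_members, xi0, delta = 400, 20, 0.0, 0.1

all_E, all_w = [], []
for _ in range(n_members):
    A = rng.normal(size=(d, d))
    H = (A + A.T) / np.sqrt(2.0)         # toy GOE stand-in for h(1) + lam*V(k)
    E, C = np.linalg.eigh(H)             # column C[:, i] is the eigenstate |E_i>
    xi = np.diag(H).copy()               # kappa-energies xi_kappa = <kappa|H|kappa>
    E = (E - E.mean()) / E.std()         # zero centroid, unit width
    xi = (xi - xi.mean()) / xi.std()
    sel = np.abs(xi - xi0) < delta       # basis states with xi in xi0 +/- delta
    all_E.append(np.tile(E, sel.sum()))  # each selected kappa contributes all E_i
    all_w.append((C[sel, :]**2).ravel()) # with weights |C_kappa^i|^2

E_all = np.concatenate(all_E)
w_all = np.concatenate(all_w)
F, edges = np.histogram(E_all, bins=50, weights=w_all, density=True)
print(F[:5])                             # F_xi(E), normalised to unit area
```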
The smooth black curves in Figure 4 for each $k$ are obtained via Eq.(18)
using corresponding ensemble averaged $\zeta$ and $q$ values. With
$\lambda>>\lambda_{t}$, $\zeta^{2}<<1/2$, the $q$ value in Eq.(18) can fairly
be given by Eq.(9) [38]. The results in Figure 4 clearly show very good
agreement between the numerical histograms and continuous black curves for all
body rank $k$. The $F_{\xi}(E)$ results for $\xi=0$ are given in Figure 4(a)
which clearly demonstrate that the strength functions are symmetric and also
exhibit a transition from Gaussian form to semi-circle as $k$ changes from $2$
to $m=10$. The smooth form given by Eq.(18) using the conditional $q$-normal
density function interpolates this transition very well. Going further,
$F_{\xi}(E)$ results for $\xi\neq 0$ are shown in Figures 4(b) and 4(c). One
can see that the $F_{\xi}(E)$ results are asymmetrical in $E$, as demonstrated
earlier [24]. Also, $F_{\xi}(E)$ are skewed more in the positive direction for
$\xi>0$ and skewed more in the negative direction for $\xi<0$ and their
centroids vary linearly with $\xi$. We have also computed the first four
moments (centroid, variance, skewness ($\gamma_{1}$) and excess
($\gamma_{2}$)) of the strength function results shown in Figure 4 for the
body rank $k$ going from $2$ to $m=10$. Figure 5 represents results for
centroid, $\gamma_{1}$ and $\gamma_{2}$ for various values of $\xi$. As
discussed earlier in Section III, the variance of the strength functions is
independent of $\xi$ and simply related to correlation coefficient; for more
details, see results of $\zeta^{2}$ (Figure 2). From the numerical results
obtained for strength functions (Figure 4) along with results of lower order
moments (Figure 5), one can clearly see that in the thermodynamic domain, the
strength functions of dense interacting many-boson systems, with $k$-body
interaction, follow the conditional $q$-normal distribution $f_{CqN}$. The
results are also consistent with the analytical forms derived in [38].
Figure 4: Strength function vs. normalized energy $E$ for a system of $m=10$ bosons in $N=5$ sp states with $\lambda=0.5$ for different $k$ values in the BEGOE(1+$k$) ensemble. An ensemble of 250 members is used for each $k$. Strength function plots are obtained for (a) $\xi=0$ (purple histogram), (b) $\xi=-1.0$ (blue histogram) and $1.0$ (red histogram) and (c) $\xi=-2.0$ (blue histogram) and $2.0$ (red histogram). In the plots $\int F_{\xi}(E)dE=1$. The continuous black curves are due to fitting with $f_{CqN}$ given by Eq. (18) using $q$ and $\zeta$ values obtained by Eq. (9) and Eq. (11), respectively. See text for more details.
Figure 5: Ensemble averaged (a) Centroid, (b) $\gamma_{1}$ and (c)
$\gamma_{2}$ as a function of body rank $k$ for the strength function results
presented in Figure 4. Results are shown for various values of $\xi$.
In the study of thermalization and relaxation dynamics of an isolated finite
quantum system after a random interaction quench, strength functions play an
important role. Having tested that in the thermodynamic region with
$\lambda\gg\lambda_{t}$, ensemble averaged strength functions of dense boson
systems with $k$-body interaction can be represented by smooth forms given by
$f_{CqN}$, we will now utilize these interpolating forms, in the coming
sections, to study fidelity decay and NPC in dense boson systems with $k$-body
interaction.
## V Fidelity decay after an interaction quench
Fidelity decay or return probability of a quantum system after a sudden quench
is an important quantity in the study of relaxation of a complex (chaotic)
system to an equilibrium state. Let’s say the system is prepared in one of the
eigenstates ($\psi(0)=\left|{\kappa}\right\rangle$) of the mean-field
Hamiltonian $H=h(1)$. With the quench at $t=0$ by $\lambda V(k)$, the system
evolves unitarily with respect to $H\rightarrow h(1)+\lambda V(k)$ and the
state changes after time $t$ to
$\psi(t)=\left|{\kappa}(t)\right\rangle=\exp(-iHt)\left|{\kappa}\right\rangle$.
Then, the probability to find the system in its initial unperturbed state
after time $t$, called fidelity decay, is given by,
$\begin{array}[]{lll}W_{0}(t)&=&|\left\langle\psi(t)|\psi(0)\right\rangle|^{2}=\left|\sum_{E}\left[C_{\kappa}^{E}\right]^{2}\exp(-iEt)\right|^{2}\\\ \\\ &=&\left|\int F_{\xi}(E)\exp(-iEt)\;dE\right|^{2}\\\ \\\
&=&\left|\int_{s(q)}f_{CqN}(E|\xi;\zeta,q)\exp(-iEt)\;dE\right|^{2}\;.\end{array}$ (19)
Thus, the fidelity is the squared modulus of the Fourier transform in energy of the strength function;
this is valid for times not very short or very long. In the thermalization
region, the form of $F_{\xi}(E)$ is Gaussian for $k=2$ while it is semi-circle
for $k=m$. These two extreme situations are recently studied, both
analytically [47] as well as numerically [48, 49, 50]. The formula for
$W_{0}(t)$ can be given in terms of the width of $\lambda V(k)$ scaled by
$\sigma_{H}$. Clearly, following the results of the previous section,
$f_{CqN}$ can be used to obtain $W_{0}(t)$ generated by BEGOE(1+$k$). As an
analytical formula for the Fourier transform of $f_{CqN}$ is not available,
we evaluate Eq.(19) numerically. Figure 6 shows results for
$W_{0}(t)$ (red solid circles) for a 100 member BEGOE(1+$k$) ensemble with
$m=10$, $N=5$ and $\lambda=0.5$ for various $k$ values and they are compared
with numerical Fourier transform (black smooth curves) of Eq.(18). Here, we
have used normalized eigenenergies in the computation of $W_{0}$ and therefore
the time $t$ is measured in the units of $1/\sigma_{H}$. It is clear from the
results that the Fourier transform of $f_{CqN}$ describes the short-time
behavior nicely and also captures the positions of the oscillations. The
results generated here are consistent with the reported results in [33],
obtained using $f_{qN}$ form for $F_{\xi}(E)$.
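A minimal sketch of this numerical evaluation of Eq. (19), reusing the f_cqn helper from the Section II sketch (again our illustration, not the authors' code; the parameter values below are arbitrary):

```python
import numpy as np

def fidelity(t, xi, zeta, q, npts=2000):
    """W_0(t) = |int_{s(q)} f_CqN(E|xi; zeta, q) exp(-iEt) dE|^2 by quadrature."""
    L = 2.0 / np.sqrt(1.0 - q)
    E = np.linspace(-L + 1e-6, L - 1e-6, npts)
    F = np.array([f_cqn(e, xi, zeta, q) for e in E])   # strength function, Eq. (18)
    amp = np.sum(F * np.exp(-1j * E * t)) * (E[1] - E[0])
    return float(np.abs(amp)**2)

ts = np.linspace(0.0, 10.0, 101)        # t in units of 1/sigma_H
W0 = [fidelity(t, xi=0.0, zeta=0.3, q=0.5) for t in ts]
print(W0[0])                            # ~1 at t = 0 (normalisation check)
```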
It is known that in the strong interaction domain, the decrease in $W_{0}$
(for $k=2$) is quadratic in time, and this Gaussian decay can last for quite a
long time, after which an exponential decay emerges [51]. The transition time
depends on the ratio of the spectral width and the square of the second moment
of the strength function ($\sigma_{F}^{2}$). As here
$\lambda\gg\lambda_{t}$, $\zeta^{2}\rightarrow 0$ giving $\sigma_{F}^{2}\approx
1$, $t$ is in $1/\sigma_{H}$ units and the spectral width is in $\sigma_{H}$
units. Therefore, the results in Figure 6 describe $W_{0}$ nicely at short
times, while the standard exponential decrease at long times for $k=2$ appears
to be absent. The long-time behavior of the fidelity decay is of great
interest, as $W_{0}$ is expected to demonstrate a power-law behavior, i.e.
$W_{0}(t)\propto t^{-\gamma}$ with $\gamma\geq 2$ implying thermalization
[52], no matter how fast the decay may initially be. As shown in [52], the
power-law behavior appears due to the fact that the energy spectrum is bounded
from both the ends. This condition is essentially satisfied by $f_{CqN}$.
Therefore, it is important to analyze the long-time behavior of fidelity decay
for embedded ensembles, first to establish its universality and second to test
whether it can be explained with the use of $f_{CqN}$. These are open
questions.
Figure 6: Fidelity decay $W_{0}(t)$ as a function of time for a 100 member
BEGOE(1+$k$) ensemble with $N=5$ and $m=10$ represented by the red solid
circles; the $\psi(0)$ here corresponds to middle states of $h(1)$ spectrum.
Here $t$ is measured in units of $\sigma_{H}^{-1}$. The black smooth
curves are obtained by taking numerical Fourier transform of the strength
functions represented by Eq.(18).
In the study of fidelity decay, strength function with $\xi=0$ is involved.
However, the statistical properties, related to wavefunction structure, namely
NPC and $S^{\text{info}}$ can be written as integrals involving strength
functions over all $\xi$ energies. Very recently, an integral formula for NPC
in the transition strengths from a state as a function of energy for fermionic
EGOE($k$) using the bivariate $q$-normal form is presented in [37]. In the
past, the smooth forms, for NPC and $S^{\text{info}}$, were derived in terms
of energy and correlation coefficient $\zeta$ for two-body interaction [53].
In the next section, we present our results for NPC and $S^{\text{info}}$
using $f_{CqN}$ forms for the strength functions and compare with those for
dense interacting boson systems with $k$-body interaction.
## VI NPC and Information entropy
The NPC in wavefunction characterizes various layers of chaos in interacting
particle systems [16, 54, 55] and for a system like atomic nuclei, NPC for
transition strengths is a measure of fluctuations in transition strength sums
[37]. For an eigenstate $|E_{i}\rangle$ spread over the basis states
$|{\kappa}\rangle$, with energies
$\xi_{\kappa}=\langle{\kappa}|H|{\kappa}\rangle$, NPC (also known as inverse
participation ratio) is defined as,
$\mbox{NPC}(E)=\left\{{\displaystyle\sum\limits_{\kappa}{\left|{C_{{\kappa}}^{i}}\right|^{4}}}\right\}^{-1}$
(20)
NPC essentially gives the number of basis states $|{\kappa}\rangle$ that
constitute an eigenstate with energy $E$; the GOE value for NPC is $d/3$,
with $d$ the matrix dimension. NPC can be studied by examining the general
features of the strength functions $F_{\xi}(E)$. The smooth form for
NPC$(E)$ can be written as [53],
$\mbox{NPC}(E)=\displaystyle\frac{d}{3}\left\{\displaystyle\int
d\xi\;\displaystyle\frac{\rho^{H_{\kappa}}(\xi)[F_{\xi}(E)]^{2}}{[\rho^{H}(E)]^{2}}\right\}^{-1}\;,$
(21)
where $\rho^{H_{\kappa}}(\xi)$ and $\rho^{H}(E)$ are normalized eigenvalue
densities generated by diagonal Hamiltonian $H_{\kappa}$ matrix and full
Hamiltonian $H$ matrix, respectively. Taking $E$ and $\xi$ as zero centered
and scaled by corresponding widths, the above equation can be written in terms
of $f_{qN}$ and $f_{CqN}$ [37, 38],
$\mbox{NPC}(E)=\displaystyle\frac{d}{3}\left\{\displaystyle\int_{S(q)}d\xi\;\displaystyle\frac{f_{qN}(\xi|q)[f_{CqN}(E|\xi;\zeta,q)]^{2}}{f_{qN}(E|q)}\right\}^{-1}\;.$
(22)
In general, the $q$'s in the above equation need not be the same [37, 38].
However, in the thermalization region, with $\zeta^{2}\leq 1/2$, one can
approximate $\gamma_{2}\approx(q-1)$ in Eq. (8). Then, the formula for $q$
given by Eq. (9) is valid for $f_{qN}$ as well as for $f_{CqN}$. This is well
verified numerically in Section II, and the results for $\gamma_{2}$ in
Figure 5(c) corroborate this claim. With this, it is possible to simplify
Eq. (22) using Eqs. (6) and (7), and a simple two-parameter formula for NPC,
valid in the chaotic domain, can be written as
$\mbox{NPC}(E)=\displaystyle\frac{d}{3}\displaystyle\left\{\sum_{n=0}^{\infty}\frac{\zeta^{2n}}{[n]_{q}!}\,H_{n}^{2}(E|q)\right\}^{-1}.$
(23)
It is easy to see from the above formula that NPC($E$) approaches the GOE
value $d/3$ as $\zeta\rightarrow 0$. Also, for $q\rightarrow 1$, $f_{qN}$ and
$f_{CqN}$ in Eq. (22) reduce to Gaussians, and Eq. (23) then gives results
similar to those obtained for $k=2$ in [53]. We have tested this formula
against numerical ensemble-averaged BEGOE(1+$k$) results. Figure 7 shows
ensemble-averaged NPC vs. normalized energy for a 100 member BEGOE(1+$k$)
example with $m=10$ and $N=5$, for different values of $\lambda$ and $k$. The
ensemble-averaged NPC values are shown with red solid circles and the
continuous lines are obtained using the theoretical expression given by
Eq. (23). One can see from the results that, with fixed $k$: (i) for small
values of $\lambda$, where the one-body part of the interaction dominates,
the numerical NPC values are very small and the theoretical curve is far from
the numerical results, indicating that the wavefunctions are completely
localized (the bottom panels in Figure 7); (ii) with further increase in
$\lambda$, the theoretical estimate for NPC in the chaotic domain is well
above the ensemble-averaged curve, indicating that chaos has not yet set in;
(iii) however, for sufficiently large $\lambda$, the ensemble-averaged curve
matches the theoretical estimate given by Eq. (23), indicating that the
system is in the chaotic domain corresponding to the thermalization region
given by $\zeta^{2}\sim 1/2$ [23] and the strength functions $F_{\xi}(E)$ are
well represented by the conditional $q$-normal distribution. With a further
increase in $\lambda$ (the top panels in Figure 7), the match between the
theoretical chaotic-domain estimate and the ensemble-averaged values is very
good in the bulk part of the spectrum ($|E|<2$) for all values of $k$, with
deviations near the spectrum tails. Hence, in the chaotic domain, the energy
variation of NPC($E$) via Eq. (23) is essentially given by two parameters,
$\zeta$ and $q$. The results clearly show that thermalization sets in faster
as the body rank $k$ increases.
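A minimal sketch of evaluating Eq. (23) numerically is given below (Python; it assumes the standard three-term recurrence $H_{n+1}(x|q)=xH_{n}(x|q)-[n]_{q}H_{n-1}(x|q)$ with $[n]_{q}=(1-q^{n})/(1-q)$ for the $q$-Hermite polynomials, as used in [33, 37, 38], and a truncation order chosen for illustration):

```python
import numpy as np

def q_int(n, q):
    # q-deformed integer [n]_q = (1 - q^n) / (1 - q); [n]_q -> n as q -> 1
    return float(n) if abs(q - 1.0) < 1e-12 else (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for m in range(1, n + 1):
        out *= q_int(m, q)
    return out

def npc(E, zeta, q, d, n_max=50):
    """Eq. (23): NPC(E) = (d/3) / sum_n zeta^{2n} H_n(E|q)^2 / [n]_q!

    Truncation at n_max is adequate for zeta^2 well below 1, since the
    terms are damped by zeta^{2n}.
    """
    E = np.asarray(E, dtype=float)
    H_prev = np.zeros_like(E)   # H_{-1} = 0
    H = np.ones_like(E)         # H_0 = 1
    total = np.zeros_like(E)
    for n in range(n_max + 1):
        total += zeta**(2 * n) * H**2 / q_factorial(n, q)
        H_prev, H = H, E * H - q_int(n, q) * H_prev   # builds H_{n+1}(E|q)
    return (d / 3.0) / total

# Sanity check: for zeta -> 0 the sum is 1 and NPC -> d/3, the GOE value.
```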
Another statistical quantity normally considered is the information entropy,
defined by $S^{\text{info}}(E)=-\sum_{{\kappa}}p_{\kappa}^{i}\ln
p_{\kappa}^{i}=-\sum_{\kappa}|C_{{\kappa}}^{i}|^{2}\ln|C_{{\kappa}}^{i}|^{2}$,
where $p_{\kappa}^{i}$ is the probability of basis state ${\kappa}$ in the
eigenstate at energy $E_{i}$. The localization length $l_{H}$ is related to
$S^{\text{info}}(E)$ by $l_{H}(E)=\exp\{S^{\text{info}}(E)\}/(0.48d)$. The
corresponding embedded ensemble expression for $l_{H}$ involving
$F_{\xi}(E)$ can be written as [53],
$l_{H}(E)=-\displaystyle\int
d\xi\;\displaystyle\frac{F_{\xi}(E)\;\rho^{H_{\kappa}}(\xi)}{\rho^{H}(E)}\ln\left\{\displaystyle\frac{F_{\xi}(E)}{\rho^{H}(E)}\right\}\;.$
(24)
Replacing $\rho^{H_{\kappa}}(\xi)$ and $\rho^{H}(E)$ by $f_{qN}$, and
$F_{\xi}(E)$ by $f_{CqN}$, the formula for $l_{H}$ valid in the chaotic
domain is given by,
$l_{H}(E)=-\displaystyle\int
d\xi\;\displaystyle\frac{f_{CqN}(E|\xi;\zeta,q)f_{qN}(\xi|q)}{f_{qN}(E|q)}\;\ln\left\{\displaystyle\frac{f_{CqN}(E|\xi;\zeta,q)}{f_{qN}(E|q)}\right\}\;.$
(25)
Simplifying Eq. (25) for $l_{H}$ is an open problem; therefore, it is
evaluated numerically and the results are compared with ensemble-averaged
numerical results of BEGOE(1+$k$). Figure 8 shows results for the
ensemble-averaged $l_{H}$ vs. normalized energy $E$ for a 100 member
BEGOE(1+$k$) with $m=10$ bosons in $N=5$ sp states for different values of
$k$. Here, we choose the $k$-body interaction strength $\lambda=1$ so that
the system is in the thermalization region. Numerical embedded ensemble
results (red solid circles) are compared with theoretical estimates (black
curves) obtained using Eq. (25). The $\zeta$ values are shown in the figure.
Very good agreement between the numerical results and the smooth form is
obtained for all values of $k$ in the bulk of the spectrum, with small
deviations near the spectrum tails. Hence, in the chaotic domain, the energy
variation of $l_{H}(E)$, via Eq. (25), is essentially given by the
conditional $q$-normal forms for the strength functions.
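A sketch of this numerical evaluation of Eq. (25) is given below (Python; the densities $f_{qN}$ and $f_{CqN}$ and the support $S(q)$ are passed in as inputs, since their explicit forms are defined earlier in the text, and the names are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def l_H(E, f_qN, f_CqN, zeta, q, support):
    """Localization length l_H(E) via Eq. (25), by numerical quadrature.

    f_qN(x, q)            : q-normal density (callable)
    f_CqN(E, xi, zeta, q) : conditional q-normal density (callable)
    support               : (xi_min, xi_max), the support S(q)
    """
    fE = f_qN(E, q)

    def integrand(xi):
        fc = f_CqN(E, xi, zeta, q)
        if fc <= 0.0:
            return 0.0
        # integrand of Eq. (25): -(f_CqN f_qN / f_qN(E)) ln(f_CqN / f_qN(E))
        return -(fc * f_qN(xi, q) / fE) * np.log(fc / fE)

    value, _ = quad(integrand, support[0], support[1], limit=200)
    return value
```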
Figure 7: Ensemble averaged NPC as a function of normalized energy $E$ for a
100 member BEGOE(1+$k$) with $m=10$ interacting bosons in $N=5$ sp states for
different values of $k$. Ensemble averaged BEGOE(1+$k$) results are
represented by solid circles while continuous curves correspond to the
theoretical estimates in the chaotic domain obtained using Eq. (23). The
ensemble averaged $\zeta$ and $q$ values are also given in the figure. The
GOE estimate is represented by a dotted line in each graph.

Figure 8: Ensemble averaged localization lengths $l_{H}$ vs. normalized
energy $E$ for a 100 member BEGOE(1+$k$) with $m=10$ interacting bosons in
$N=5$ sp states for different $k$ values. Here, $\lambda=1$ is chosen for all
$k$. Ensemble averaged BEGOE(1+$k$) results (red solid circles) are compared
with the smooth forms obtained via Eq. (25) involving parameters $\zeta$ and
$q$. The ensemble averaged $\zeta$ values are given in the figure and Eq. (9)
is used for the $q$ values. Dotted lines in each graph represent the GOE
estimate.
## VII Conclusions
In the present work, we have analyzed the wavefunction structure of dense
many-body bosonic systems with $k$-body interactions by modeling the
Hamiltonian of these complex systems using BEGOE(1+$k$). We have shown that
for dense boson systems with BEGOE(1+$k$), the $q$-Hermite polynomials can be
used to describe the transition from Gaussian to semi-circle in the state
density as the strength of the $k$-body interaction increases. A complete
analytical description of the correlation coefficient $\zeta$, which is
related to the variance of the strength functions, is obtained in terms of
$N$, $m$, $k$ and $\lambda$, and it is found to describe the embedded
ensemble results very well for all values of the rank of interaction $k$; in
the dense limit, $\zeta\rightarrow 0$. We have also obtained a formula for
$\lambda_{t}$ in terms of ($m$, $N$, $k$). Further, it is shown that in the
strong interaction domain ($\lambda\gg\lambda_{t}$), the strength functions
make a transition from Gaussian to semi-circle as the rank of interaction $k$
increases in BEGOE(1+$k$), and their smooth forms can be represented by the
conditional $q$-normal distribution function $f_{CqN}$ to describe this
crossover. Moreover, the variation of the lowest four moments of the strength
functions computed numerically is in good agreement with the analytical
formulas obtained in [38]. With this, we have first utilized the
interpolating form of the strength function $f_{CqN}$ to describe fidelity
decay in dense boson systems after a $k$-body random interaction quench.
Secondly, using the smooth forms $f_{qN}$ and $f_{CqN}$, we have derived a
two-parameter ($q$ and $\zeta$) formula for NPC valid in the thermalization
region and shown that these smooth forms describe BEGOE(1+$k$)
ensemble-averaged results very well. Therefore, the results of this work,
along with [33, 37, 38], establish that the $q$-Hermite polynomials play a
very significant role in analyzing many-body quantum systems interacting via
$k$-body interactions. The generic features explored in this work are
important for a complete description of many-body quantum systems interacting
via $k$-body interactions, as nuclear interactions are now known to have
small 3-body and 4-body parts, and higher-body interactions may become
prominent in strongly interacting quantum systems [7, 25, 26].
Following the work in [52], it will be interesting to analyze the power-law
behavior of fidelity decay at very long times using embedded ensembles with
$k$-body forces, as the smooth forms of the strength functions can be
represented by $f_{CqN}$. Further, as the smooth forms for the density of
states can be represented by $f_{qN}$, it is possible to study the normal
mode decomposition of the density of states for various $k$ values using
$f_{qN}$ [13, 17, 56], and thereby one can study spectral statistics in
strongly interacting quantum systems. This is left for future work. It is
also known that strength functions and entanglement essentially capture the
same information about eigenvector structure [55, 57], and therefore it is
important to study entanglement properties using embedded ensembles with
$k$-body forces.
## Acknowledgements
Thanks are due to Manan Vyas for collaboration in the initial stages of this
work and to V. K. B. Kota for many useful discussions. The authors
acknowledge support from the Department of Science and Technology (DST),
Government of India [Project No. EMR/2016/001327]. NDC acknowledges support
from the International Centre for Theoretical Sciences (ICTS) during a visit
for participating in the program Thermalization, Many body localization and
Hydrodynamics (Code: ICTS/hydrodynamics2019/11).
## References
* [1] O. Bohigas, M.J. Giannoni, C. Schmit, Phys. Rev. Lett. 52 (1984) 1.
* [2] V. K. B. Kota, Embedded Random Matrix Ensembles in Quantum Physics, Springer-Verlag, Heidelberg, 2014.
* [3] A. Polkovnikov, K. Sengupta, A. Silva, Rev. Mod. Phys. 83 (2011) 863.
* [4] L. D’Alessio, Y. Kafri, A. Polkovnikov, M. Rigol, Adv. Phys. 65 (2016) 239.
* [5] F. Borgonovi, F.M. Izrailev, L.F. Santos, V.G. Zelevinsky, Phys. Rep. 626 (2016) 1.
* [6] V. K. B. Kota, N. D. Chavda, Int. J. Mod. Phys. E 27 (2018) 1830001.
* [7] J. S. Cotler et al., J. High Energy Phys. 05 (2017) 118.
* [8] J. J. M. Verbaarschot, Quantum chromodynamics, in The Oxford Handbook of Random Matrix Theory edited by G. Akemann, J. Baik, and P. Di. Francesco, Oxford University Press, Oxford, 2011.
* [9] V. K. B. Kota, N. D. Chavda, Entropy 20 (2018) 541.
* [10] J. B. French, S. S. M. Wong, Phys. Lett. B 33 (1970) 449.
* [11] O. Bohigas, J. Flores, Phys. Lett. B 34 (1971) 261.
* [12] V. K. B. Kota, Phys. Rep. 347 (2001) 223.
* [13] T.A. Brody, J. Flores, J.B. French, P.A. Mello, A. Pandey, S.S.M. Wong, Rev. Mod. Phys. 53 (1981) 385.
* [14] T. Papenbrock, H. A. Weidenmüller, Rev. Mod. Phys. 79 (2007) 997.
* [15] V.V. Flambaum, G.F. Gribakin, F.M. Izrailev, Phys. Rev. E 53 (1996) 5729.
* [16] V.V. Flambaum, F.M. Izrailev, Phys. Rev. E 56 (1997) 5144.
* [17] K. K. Mon, J. B. French, Ann. Phys. (N.Y.) 95 (1975) 90.
* [18] K. Patel, M.S. Desai, V. Potbhare, V.K.B. Kota, Phys. Lett. A 275 (2000) 329.
* [19] T. Asaga, L. Benet, T. Rupp, H. A. Weidenmüller, Eur. Phys. Lett. 56 (2001) 340.
* [20] T. Asaga, L. Benet, T. Rupp, H.A. Weidenmüller, Ann. Phys. (N.Y.) 298 (2002) 229.
* [21] N. D. Chavda, V. Potbhare, V. K. B. Kota, Phys. Lett. A 311 (2003) 331.
* [22] N. D. Chavda, V. Potbhare, V. K. B. Kota, Phys. Lett. A 326 (2004) 47.
* [23] N.D. Chavda, V.K.B. Kota, V. Potbhare, Phys. Lett. A 376 (2012) 2972.
* [24] N. D. Chavda, V.K.B. Kota, Ann. Phys. (Berlin) 529 (2017) 1600287.
* [25] D. W. E. Blatt, B. H. J. McKellar, Phys. Rev. C 11 (1975) 2040.
* [26] H-W. Hammer, A. Nogga, A. Schwenk, Rev. Mod. Phys. 85 (2013) 197.
* [27] K. D. Launey, T. Dytrych, J. P. Draayer, Phys. Rev. C 85 (2012) 044003.
* [28] A.M. Garcia-Garcia, Y.Jia, J.J.M. Verbaarschot, Phys. Rev. D 97 (2018) 106003.
* [29] A.M. Garcia-Garcia, T. Nosaka, D. Rosa, J.J.M. Verbaarschot, Phys. Rev. D 100 (2019) 026002.
* [30] A. Ortega, M. Vyas, L. Benet, Ann. Phys. (Berlin) 527 (2015) 748.
* [31] A. Ortega, T. Stegmann, L. Benet, Phys. Rev. E 94 (2016) 042102.
* [32] A. Ortega, T. Stegmann, L. Benet, Phys. Rev. E 98 (2018) 012141.
* [33] M. Vyas, V. K. B. Kota, J. Stat. Mech. Theor. Expt. 10 (2019) 103103.
* [34] A.M. Garcia-Garcia, J.J.M. Verbaarschot, Phys. Rev. D 94 (2016) 126010; Phys. Rev. D 96 (2017) 066012.
* [35] Y.Jia, J.J.M. Verbaarschot, J. High Energy Phys. 07 (2020) 193.
* [36] L. Erdos, D. Schroder, Math. Phys. Anal. Geom. 17 (2014) 441.
* [37] V. K. B. Kota, M. Vyas, arXiv:2003.09191v1
* [38] V. K. B. Kota, M. Vyas, arXiv:2011.05799v1
* [39] L. Benet, T. Rupp, H. A. Weidenmüller, Ann. Phys. (N.Y.) 292 (2001) 67.
* [40] M. Vyas, T. H. Seligman, AIP Conf. Proc. 1950 (2018) 030009.
* [41] M. E. H. Ismail, D. Stanton, G. Viennot, Europ. J. Combinatorics 8 (1987) 379.
* [42] P. Szablowski, Electronic Journal of Probability 15 (2010) 1296.
* [43] R. A. Small, S. Müller, Ann. Phys. (N.Y.) 356 (2015) 269.
* [44] V.K.B. Kota, V. Potbhare, Phys. Rev. C 21 (1980) 2637.
* [45] B. Lauritzen, P. F. Bortignon, R. A. Broglia, V. G. Zelevinsky, Phys. Rev. Lett. 74 (1995) 5190.
* [46] D. Angom, S. Ghosh, V.K.B. Kota, Phys. Rev. E 70 (2004) 016209.
* [47] S. K. Haldar, N. D. Chavda, M. Vyas, V. K. B. Kota, J. Stat. Mech. Theor. Expt. 2016 (2016) 043101.
* [48] E. J. Torres-Herrera, M. Vyas, L. F. Santos, New J. Phys. 16 (2014) 063010.
* [49] E.J. Torres-Herrera, J. Karp, M. Tavora, L. F. Santos, Entropy 18 (2016) 359.
* [50] L. F. Santos, E. J. Torres-Herrera, AIP Conf. Proc. 1912 (2017) 020015.
* [51] M.Schiulaz, E. J. Torres-Herrera, and L. F. Santos, Phys. Rev. B 99 (2019) 174313.
* [52] M. Távora, E. J. Torres-Herrera, L. F. Santos, Phys. Rev. A 95 (2017) 013604.
* [53] V.K.B. Kota, R. Sahu, Phys. Rev. E 64 (2001) 016219.
* [54] P.G. Silvestrov, Phys. Rev. E 58 (1998) 5629.
* [55] C. Mejia-Monasterio, J. Richert, T. Rupp, H.A. Weidenmüller, Phys. Rev. Lett. 81 (1998) 5189.
* [56] R. J. Leclair, R. U. Haq, V. K. B. Kota, and N. D. Chavda, Phys. Lett. A 372 (2008) 4373.
* [57] W. G. Brown, L. F. Santos, D. J. Starling, and L. Viola, Phys. Rev. E 77 (2008) 021106.
# Gammatonegram Representation for End-to-End Dysarthric Speech Processing
Tasks: Speech Recognition, Speaker Identification, and Intelligibility
Assessment
###### Abstract
Dysarthria is a disability that disturbs the human speech system and reduces
the quality and intelligibility of a person's speech. Because of this,
normal speech processing systems cannot work correctly on such impaired
speech. This disability is usually associated with physical disabilities.
Therefore, designing a system that can perform tasks in a smart home by
receiving voice commands would be a significant achievement. In this work, we
introduce the Gammatonegram as an effective method to represent audio files
with discriminative details, which can be used as input for convolutional
neural networks. In other words, we convert each speech file into an image
and propose an image recognition system to classify speech in different
scenarios. The proposed convolutional neural networks are based on transfer
learning from the pre-trained Alexnet. This research evaluates the efficiency
of the proposed system for speech recognition, speaker identification, and
intelligibility assessment tasks. According to the results on the UASpeech
dataset, the proposed speech recognition system achieved a 91.29% word
recognition rate in speaker-dependent mode, the speaker identification system
acquired an 87.74% recognition rate in text-dependent mode, and the
intelligibility assessment system achieved a 96.47% recognition rate in
two-class mode. Finally, we propose a multi-network speech recognition system
that works fully automatically. This system is arranged in cascade with the
two-class intelligibility assessment system, whose output activates one of
the speech recognition networks. This architecture achieves a word
recognition rate of 92.3%.
Index Terms— Disordered Speech, Dysarthric Speech, Gammatonegram, CNN, Speech
Recognition, Speaker Identification, Intelligibility Assessment.
## 1 Introduction
Speech is the act of conveying emotions and thoughts through vocal sounds in
order to communicate with others. However, certain factors, such as illness
or physical disability, can render speech unintelligible, thereby hindering
communication. Individuals who suffer from dysarthria cannot produce natural
speech due to limited control over the articulatory system. Furthermore,
these individuals often face physical disabilities that impede their ability
to perform simple daily tasks.
Artificial Intelligence (AI)-based systems have the potential to assist humans
in various ways, and aiding individuals with disabilities has always been a
prominent area of focus. AI systems can provide a consistent and predefined
level of performance, unaffected by environmental or mental factors, when
individuals cannot perform specific tasks for various reasons. For individuals
with speech disorders, having a system that can automatically process their
speech to enhance their quality of life is highly advantageous. For instance,
in smart home scenarios designed for disabled individuals, basic tasks such as
operating the television, controlling lighting fixtures, and interacting with
computers can be made more accessible through Automatic Speech Recognition
(ASR) systems. These ASR systems can receive and recognize voice commands,
allowing disabled individuals to interact with their environment effectively.
However, designing an ASR system that performs correctly on impaired and
highly variable speech poses a significant challenge. Typical ASR systems
developed for normal speech may not perform well when applied to impaired
speech [1]. Therefore, it is necessary to develop ASR systems tailored to
impaired speech, capable of learning the unique characteristics of such
speech and delivering acceptable performance.
In recent years, deep learning has shown remarkable advancements in various
signal processing domains [2, 3]. Two-dimensional Convolutional Neural
Networks (CNNs) have played a crucial role in image processing [4]. However,
researchers have explored the same strategy for one-dimensional CNNs in speech
processing [5]. As an innovation, this study proposes a two-dimensional CNN to
develop the systems for three scenarios: ASR, speaker identification, and
intelligibility assessment. Additionally, we introduce a cascade multi-network
ASR system based on the intelligibility levels of speakers. This system aims
to enhance the ASR system’s overall performance by leveraging speakers’
intelligibility information. We used the UA-Speech dataset of dysarthric
speech [6] and employed transfer learning to train the networks, which is
particularly useful in scenarios with limited data availability [7].
Traditionally, speech processing systems have relied on short-term speech
features, which are inefficient for dysarthric speech [8]. We offer a
different approach that considers the overall view of an audio file: our
system makes decisions based on a general representation of a voice command,
which suits the characteristics of dysarthric speech. Dysarthric speech often
exhibits interruptions in the middle of words, particularly at plosive
phonemes, and syllables repeated in a periodic manner, and the duration of
these events can vary depending on the individual's mental and physical
condition. Therefore, analyzing the speech at the word level, or considering
high-level features, can be beneficial.
To this end, we propose the Gammatonegram representation, a weighted version
of the traditional spectrogram. Human speech has the particular
characteristic that most of its information is concentrated in the
low-frequency range from 50 to 5000 Hz [9]. The Gammatone filter-bank
operates non-linearly across frequency, providing high resolution at low
frequencies and low resolution at high frequencies. This behavior makes the
Gammatonegram an efficient representation of speech. Using the Gammatonegram
image to represent dysarthric speech files is one of our innovations. The
experimental results demonstrate that CNNs perform better in different
speech processing scenarios when Gammatonegrams are used as input.
The remainder of the article is organized as follows: Section 2 analyzes the
related works in dysarthric speech processing. Section 3 explains the
methodology that yields the objective of this research. Section 4 reports the
system parameters and experimental results. Comparison with the previous works
is reported in Section 5, and Section 6 presents the discussion and
conclusions.
## 2 Related Works
This study covers systems for three tasks: ASR, speaker identification, and
intelligibility assessment. This section reports some of the related works in
these categories.
Dysarthric speech recognition is one of the most interesting tasks in impaired
speech processing. Most conventional dysarthric speech recognition systems
used Hidden Markov Models (HMMs) with several states to model the sequential
structure of the speech signal and Gaussian Mixture Models (GMMs) to model the
distribution of the features in each state [10].
In recent years, impaired speech processing performances have grown thanks to
the development of deep neural network (DNN) algorithms. Kim et al. [11]
adopted convolutional long short-term memory recurrent neural networks to
model dysarthric speech in a speaker-independent situation. Authors in [12]
attempted to use a gated neural network to explore the robust integration of
pitch features to improve disordered speech recognition performance. The study
in [13] proposed a denoising autoencoder to enhance dysarthric speech and
improve feature extraction. Shahamiri [14] proposed a speech vision system for
dysarthria speech recognition. It generated synthetic voicegrams for all words
and speakers. This method delivered an average word recognition rate of
64.71%. Some works focused on applying meta-learning to find an end-to-end
model initialization for dysarthric speech recognition [15]. This paper
introduced a base model pre-trained from large-scale normal speech data and
proposed methods to meta-update the base model by incorporating across-
dysarthric speakers’ knowledge into the re-initialized model. Speaker
adaptation results on the UASpeech dataset achieved a 54.2% relative word
recognition rate.
In [16], a set of novel modeling techniques were employed, including neural
architectural search, data augmentation model-based speaker adaptation, and
cross-domain generation of visual features within an audio-visual speech
recognition system framework. Combining these techniques produced a word error
rate of 25.21% on the UA Speech dataset. The multi-stream model introduced in
[17] consists of convolutional and recurrent layers. It allows for fusing the
vocal tract and excitation components. Moreover, they proposed a system with
various features, studied the training dynamics, explored the usefulness of
the data augmentation, and provided interpretation for the learned
convolutional filters. Their best model reaches 40.6% and 11.8% word error
rates for dysarthric and typical speech, respectively. Takashima et al. [18]
proposed an end-to-end ASR framework trained not only on the speech data of a
Japanese person with an articulation disorder but also on the speech data of
a physically unimpaired Japanese person and of a non-Japanese person with an
articulation disorder, to alleviate the lack of training data for a target
speaker.
In [19], a customized deep transformer architecture was proposed. To deal
with the data scarcity problem, a two-phase transfer learning pipeline was
designed to leverage healthy speech, investigate neural freezing
configurations, and utilize audio data augmentation; in the best case, a word
recognition rate of 67% was reported. Almadhor et al. [20] proposed a
spatio-temporal dysarthric ASR system using a spatial CNN and a multi-head
attention transformer to extract the speech features visually. Their system
utilized transfer learning and synthetic visual features, resulting in a
recognition rate of 20.72% for the UA-Speech database.
Yu et al. [21] proposed a Multi-stage Audio Visual-HuBERT framework by fusing
the dysarthric speech’s visual and acoustic information. They offered to use
the AV-HuBERT framework to pre-train the recognition architecture of fusing
audio and visual information of dysarthric speech. The knowledge gained by the
pre-trained model was applied to address the over-fitting problem of the
model. The best word error rate of the proposed method was 13.5% on moderate
dysarthric speech. In [22] a transfer learning approach using the Whisper
model was utilized to develop a dysarthric ASR system. Using the Whisper-based
method, a word recognition average rate of 59.78% was obtained for UA-Speech
Corpus, based on the Bi-LSTM classifier model.
Few studies have been published on dysarthric speaker recognition tasks. One
of our previous works [23] described the performance of the typical ANN-based
system with deep belief network-based features. This system was implemented in
single and multi-network modes. In the single-network and text-independent
mode, the best results on the UA-Speech dataset reached 80.1%
speaker identification accuracy for 16 dysarthric speakers. In another work,
[24] presented a new approach to improve the analysis and classification of
disordered speech. For this purpose, an ear model was introduced. This ear
model provided relevant auditory-based cues combined with the usual Mel-
Frequency Cepstral Coefficients (MFCC) to represent atypical speech
utterances. The experiments were carried out using data from Nemours and Torgo
databases of dysarthric speech. Gaussian mixture models, support vector
machines, and hybrid systems were tested and compared in the context of
dysarthric speaker identification. The experimental results achieved a correct
speaker identification rate of 97.2%. However, the challenge of data scarcity
was not addressed, which is the concern of the proposed system of our work.
Salim et al. [25] evaluated the performance of the automatic speaker
verification system by comparing Constant-Q Cepstral Coefficients (CQCC) and
MFCC features and their combination. The study involved training separate
i-vector and x-vector models using MFCC and CQCC features alone and in
combination and improved the i-vector and x-vector model’s equal error rates
by 15.07% and 22.75%, respectively. In [26], the x-vector models were trained
and compared using individual MFCC, prosodic variables, and combinations. The
proposed system achieved an 87.34% recognition rate.
Some researchers have worked on speech intelligibility assessment or severity
level measurement. In [27], a new technique to detect dysarthric severity
levels was proposed. The authors presented time-domain, frequency-domain, and
Teager energy operator analysis of dysarthric speech to justify spectrogram as
a feature representation particularly capable of capturing unstructured
spectral energy density distributions. Quantifying dysarthria severity with a
residual neural network on short speech segments yielded a reported 98.9%
recognition rate on the UA-Speech dataset.
Al-Qatab et al. [28] examined the acoustic features and feature selection
methods to improve the classification of dysarthric speech. Four acoustic
features, including prosody, spectral, cepstral, and voice quality, were used
for feature extraction. Furthermore, six classification algorithms were
evaluated. The best classification accuracy was 95.80%. A comparative study
on the classification of dysarthria severity levels, using different deep
learning techniques and speech-disorder-specific features computed from
prosody, articulation, phonation, and glottal functioning, was carried out on
DNN models [29]. In the best situation, the proposed system gave an accuracy of 93.97%
under the speaker-dependent scenario and 49.22% under the speaker-independent
scenario for the UA-Speech database. Hall et al. in [30] reported the optimal
setup of deep learning–based dysarthric intelligibility assessment and
explained different evaluation strategies. Results indicate an average of
78.2% classification accuracy for unseen low-intelligibility speakers, 40.6%
for moderate-intelligibility speakers, and 40.4% for high-intelligibility
speakers.
In [31] a few-shot approach using a transformer model was employed. This
whisper-large-v2 transformer model trained on a subset of the UASpeech dataset
containing medium intelligibility level patients achieved an accuracy of 85%.
Moreover, the multiclass model achieved an accuracy of 67%. Venugopalan et
al. [32] developed dysarthric speech intelligibility classifiers on 551,176
disordered speech samples contributed by a diverse set of 468 speakers, with a
range of self-reported speaking disorders and rated for their overall
intelligibility on a five-point scale.
Based on the previous research, it has been observed that the current systems
and algorithms, although highly efficient for normal speech, still face
significant challenges regarding dysarthric speech. These systems need to
undergo further development and refinement. One domain that can enhance the
efficiency of such systems is feature extraction. Particularly, it is
advisable to focus on high-level features due to the substantial variations in
dysarthric speech. Additionally, image processing systems have shown promise
in addressing these challenges. Hence, this study proposes using Gammatonegram
representation as features and a two-dimensional CNN to improve the
performance of dysarthric speech processing. Moreover, we evaluate the
proposed methodology in all three tasks.
Furthermore, we have found that a multi-network scenario can significantly
benefit individuals with dysarthria, because dysarthric speech exhibits a
wide range of severities with a corresponding diversity in speech
characteristics. Consequently, it is more effective to train an individual
network for each intelligibility class. Although some previous works proposed
multi-network ASR systems, they all need a human assistant to activate the
corresponding sub-network based on the user's speech intelligibility level.
To create a fully automated multi-network scenario, it is essential to assign
speech files to their corresponding sub-network automatically. To this end,
we propose a cascade architecture in which an intelligibility assessment
system feeds the multi-network ASR system.
Fig. 1: Diagram of the architecture of Alexnet with feature extraction and
classification parts
## 3 Methodology
This section presents the methods and algorithms utilized in this study,
including the description of transfer learning, introduction of Gammatonegram,
UA dysarthria speech dataset, and presentation of the utilized Voiced Activity
Detector (VAD) algorithm.
### 3.1 Transfer Learning
CNNs are widely used algorithms in image processing. The term “convolutional”
refers to the fact that these networks consist of one or more layers that
utilize the convolution operator. Typically, a CNN is composed of two main
parts. The first part is responsible for feature extraction and processing of
input information through convolutional layers. During the learning process,
this part of the network learns to understand visual patterns by employing
convolutional multilayer processing. The second part of the network is a
classifier that utilizes the features extracted in the first part to construct
a model for each class. The network can associate a given speech file with the
appropriate class based on the extracted features.
Fig. 2: Block diagram of Gammatonegram extraction steps
CNNs typically require a large amount of training data to give optimal
performance. However, pre-trained CNNs can be modified and reused in limited-
data scenarios. These pre-trained models contain information about the input
data’s dimensions and content. The model’s parameters are predetermined in
this situation, including the number and type of layers, architecture, and
layer connections. Transfer learning is a technique that leverages the weights
and parameters of a pre-trained CNN for a new task. Transfer learning
eliminates the need for extensive training data by utilizing the knowledge
gained from previous training. This is particularly advantageous in low-data
conditions as it allows the network to have a pre-existing understanding of
vision.
The Alexnet is a classic CNN with five convolutional layers that extract
increasingly valuable features in deeper layers [4]. The last convolutional
layer connects to three fully connected layers, whose outputs use the ReLU
activation function. The final layers are the softmax and classifier, which
determine the output over the 1000 pre-trained classes. The input of this
network is a color image with dimensions of 227×227×3. The architecture
includes about 60 million parameters and more than 650,000 neurons, and the
network was trained with more than one million images from the Imagenet
dataset [33]. Given its classical structure, we used it as the base network
for transfer learning. The structure and parameters of Alexnet are shown in
Fig. 1. To create a network for our tasks, we keep the feature extraction
part of Alexnet and replace the fully connected, softmax, and classifier
layers in the classification part with new ones to learn the new classes.
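As an illustrative sketch of this modification (in PyTorch, which is an assumption; the original work may have used a different framework), one can keep the Alexnet feature extractor and swap in a new classification head:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 30  # e.g., the 30 isolated command words

# Load Alexnet pre-trained on ImageNet; keep its convolutional
# feature-extraction part intact.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the last fully connected layer (1000 ImageNet classes)
# with a new layer sized for our task.
model.classifier[6] = nn.Linear(4096, num_classes)

# Optionally freeze the feature extractor so only the new head trains.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Training then proceeds on 227x227x3 Gammatonegram images.
x = torch.randn(32, 3, 227, 227)   # a dummy batch of Gammatonegrams
logits = model(x)                   # shape: (32, num_classes)
```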
The study utilizes Gammatonegrams as visual representations of audio signals
for input into the CNN. A Gammatonegram is an image that depicts the amplitude
or energy of speech signals at different frequency bands and their time of
occurrence [34]. This allows the CNN to process the audio information in a
format suitable for image-based analysis.
### 3.2 Gammatonegram
The block diagram presented in Fig. 2 illustrates the steps involved in
Gammatonegram extraction. The algorithm is similar to the spectrogram [35],
but offers a more effective representation. The process begins with
pre-emphasis, using a single-pole filter. This filter compensates for the
inherent characteristic of the human speech production system that high
frequencies have lower amplitudes than low frequencies; by applying it, the
energy in the higher frequencies is increased, improving intelligibility.
Speech signals are non-stationary, meaning they cannot be accurately modeled
as a combination of sine and cosine functions over long durations;
consequently, a conventional Fourier transform of the whole signal is not
suitable. However, within short durations of 20 to 30 milliseconds, speech
behaves approximately stationary. To exploit this, the speech signal is
divided into rectangular frames with a duration of 25 milliseconds.
The Gammatonegram extraction process then applies a Hamming window to the
rectangular frames before performing the Fourier transform. This windowing
technique reduces the unwanted side lobes that can appear in the transform.
To compensate for information loss at the frame edges, a 10-millisecond
overlap is used between frames. The Fourier transform is applied to each
frame and the magnitude is extracted. Finally, the magnitude spectrum is
weighted using a Gammatone filter-bank.
The Gammatone filter-bank, as depicted in Fig. 3, exhibits a high resolution
in low frequencies and a low resolution in high frequencies. Multiplying the
speech signal with each filter in the filter-bank and summing the outputs of
all the filters results in the proposed Gammatonegram representation.
The Gammatonegram is represented as an RGB color image, making it suitable
for input into a CNN. This representation provides higher resolution at low
frequencies than the traditional spectrogram. Fig. 4 shows examples of
Gammatonegram images compared with spectrograms to highlight the differences.
The increased resolution can enhance the discrimination between classes. To
match the input layer of Alexnet, the final Gammatonegram image is saved at a
size of 227×227×3.
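The following is a simplified sketch of the extraction steps of Fig. 2 (Python; the ERB spacing and the fourth-order gammatone magnitude approximation are our assumptions, and the exact filter-bank design of [34] may differ):

```python
import numpy as np
from scipy.signal import stft

def erb(fc):
    # Equivalent rectangular bandwidth (Glasberg & Moore approximation)
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatonegram(x, fs=16000, n_bands=64, fmin=50.0, fmax=8000.0):
    # 1) Pre-emphasis with a single-pole (first-order) high-pass filter
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])

    # 2) 25 ms Hamming-windowed frames with 10 ms overlap, magnitude STFT
    nperseg = int(0.025 * fs)
    noverlap = int(0.010 * fs)
    f, t, Z = stft(x, fs=fs, window='hamming',
                   nperseg=nperseg, noverlap=noverlap)
    S = np.abs(Z)

    # 3) Centre frequencies equally spaced on the ERB-rate scale
    e = np.linspace(21.4 * np.log10(1 + 0.00437 * fmin),
                    21.4 * np.log10(1 + 0.00437 * fmax), n_bands)
    fc = (10 ** (e / 21.4) - 1) / 0.00437

    # 4) Approximate 4th-order gammatone magnitude response per band
    b = 1.019 * erb(fc)
    W = (1.0 + ((f[None, :] - fc[:, None]) / b[:, None]) ** 2) ** -2
    W /= W.sum(axis=1, keepdims=True)   # normalize each band

    # 5) Weight the spectrogram by the filter bank (log-compressed output)
    return 20 * np.log10(W @ S + 1e-10), t, fc
```

The resulting band-by-frame matrix can then be rendered as an RGB image and resized to 227×227×3 for the CNN input.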
Fig. 3: Gammatone filter-bank
Fig. 4: Comparison spectrogram and Gammatonegram representation method in
three different utterances
### 3.3 UA Speech Dataset
Fig. 5: VAD decision and Gammatonegram before and after VAD for a given speech
file
A dataset including 16 dysarthric speakers has been collected and published
by researchers at the University of Illinois [6]. These speakers have
different severities and speak with intelligibility levels varying from 2% to
95%. The speakers' information is reported in Table 1. The dataset includes
255 isolated dysarthric speech words, consisting of uncommon words, the radio
alphabet, digits, computer commands, and common words. It was collected in
three sessions, B1, B2, and B3, with eight microphones, at a sampling
frequency of 16 kHz. Note that the dataset also contains speech files from 12
normal speakers, which were not used in this study.
Table 1: Information on the UA-Speech dataset speakers

| No. | Speaker ID | Gender | Age | Speech Intelligibility |
|---|---|---|---|---|
| 1 | F02 | Female | 30 | 29% |
| 2 | F03 | Female | 51 | 6% |
| 3 | F04 | Female | 18 | 62% |
| 4 | F05 | Female | 22 | 95% |
| 5 | M01 | Male | >18 | 15% |
| 6 | M04 | Male | >18 | 2% |
| 7 | M05 | Male | 21 | 58% |
| 8 | M06 | Male | 18 | 39% |
| 9 | M07 | Male | 58 | 28% |
| 10 | M08 | Male | 28 | 93% |
| 11 | M09 | Male | 18 | 86% |
| 12 | M10 | Male | 21 | 93% |
| 13 | M11 | Male | 48 | 62% |
| 14 | M12 | Male | 19 | 7.4% |
| 15 | M14 | Male | 40 | 90.4% |
| 16 | M16 | Male | >18 | 43% |
In this study, speech files from all 16 dysarthric speakers were used. The
subset includes recordings of 30 isolated words, comprising 9 digits, 19
computer commands, and 2 radio alphabet words. Each speaker's utterances were
saved in eight different files, which were found to be almost identical. To
ensure reliable evaluation, K-fold cross-validation was employed with K=3,
one fold per recording session. One session was kept separate from the other
two to avoid excessive similarity between utterances and to prevent any
unnatural similarity between the training and testing data. In all
experiments, the data from one session was used for training and the other
two for testing.
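A minimal sketch of this session-based split (Python; the file lists are hypothetical placeholders):

```python
def session_folds(sessions):
    """The 3 folds used here: train on one recording session (B1/B2/B3)
    and test on the remaining two, so near-identical repetitions of a
    word never appear on both sides of the split."""
    folds = []
    for held in sessions:
        train = list(sessions[held])
        test = [f for key, files in sessions.items() if key != held
                for f in files]
        folds.append((train, test))
    return folds

# Hypothetical file lists, one per session:
folds = session_folds({'B1': ['F02_B1_D1.wav'],
                       'B2': ['F02_B2_D1.wav'],
                       'B3': ['F02_B3_D1.wav']})
```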
### 3.4 Voiced Activity Detector
Silence can negatively impact speech processing systems, which is why VAD
algorithms are commonly used. In the case of dysarthric individuals, the
inability to pronounce certain syllables, even within a word, often leads to
pauses during speech. Therefore, incorporating VAD can significantly enhance
the performance of speech processing systems for these individuals.
In our study, we apply the GMMVAD algorithm [36] before representing the
speech signal as a Gammatonegram or spectrogram. This pre-processing step
reduces intra-class variability and can improve the overall efficiency of the
system. Fig. 5 provides an example of the GMMVAD process applied to an audio
file, together with the corresponding Gammatonegram representation before and
after applying VAD.
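The exact GMMVAD algorithm of [36] is not reproduced here; the following is a minimal sketch of the general idea: fit a two-component GMM to frame log-energies and keep only the frames assigned to the higher-energy (speech) component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_vad(x, fs=16000, frame_ms=25, hop_ms=10):
    frame = int(frame_ms * fs / 1000)
    hop = int(hop_ms * fs / 1000)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop: i * hop + frame] for i in range(n)])
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10).reshape(-1, 1)

    # Two clusters over frame log-energies: silence vs. speech
    gmm = GaussianMixture(n_components=2, random_state=0).fit(log_e)
    speech_comp = int(np.argmax(gmm.means_.ravel()))   # higher mean = speech
    is_speech = gmm.predict(log_e) == speech_comp

    kept = [x[i * hop: i * hop + frame] for i in range(n) if is_speech[i]]
    return np.concatenate(kept) if kept else x
```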
### 3.5 Evaluation Criteria
In evaluating the performance of speech recognition systems, various criteria
are used. In this study, the Word Recognition Rate (WRR) criterion is
employed. WRR calculates the number of isolated words that are correctly
recognized compared to the total number of test data.
For the speaker identification systems proposed in this work, the network’s
decision is made based on each audio expression of an isolated word.
Therefore, the evaluation involves calculating the number of correct decisions
made by the system in comparison to the total number of audio files.
In the intelligibility assessment section of the proposed system, each audio
file is classified into predetermined categories. The classification is
independent of the speaker’s identity or speech content. This system’s
decision is also based on each expression, ensuring that each audio file is
evaluated individually.
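As a small illustration, all three criteria reduce to a per-file accuracy; a sketch (names illustrative):

```python
def recognition_rate(predictions, references):
    """Recognition rate over per-file decisions: the percentage of test
    files (isolated words, speaker labels, or intelligibility classes)
    whose predicted label matches the reference label."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# e.g. recognition_rate(['yes', 'no', 'up'], ['yes', 'no', 'down']) -> ~66.7
```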
## 4 Experimental Results
In the experiments, we evaluated the performance of the proposed system,
based on the Gammatonegram representation and the pre-trained CNN, in the
following scenarios: automatic speech recognition for 30 dysarthric isolated
words, dysarthric speaker identification for 16 speakers, speech
intelligibility assessment in two- and three-class modes, and finally a fully
automated multi-network speech recognition system in a cascade architecture.
Convolutional neural networks are data-hungry, meaning a great deal of data
is needed to train a CNN; transfer learning is a technique to compensate for
data shortage in various scenarios. In this work, we first re-train the basic
Alexnet with about 40 hours of speech data to recognize dysarthric isolated
words in 255 classes. The goal here is not to achieve high performance;
rather, we want to expose the network to a large amount of data so that its
feature extraction part is trained appropriately on Gammatonegram and
spectrogram images. This new CNN was used as the pre-trained network to build
the systems for all the proposed tasks.
Before evaluating our systems, we answer two questions about the proposed
method: 1) How efficient is this system compared to a traditional HMM-based
system? 2) Does the proposed Gammatonegram perform better than the classical
spectrogram? These two questions make up the initial experiments.
Table 2: Overall comparison of the results of the preliminary tests

| System | WRR (%) |
|---|---|
| HMM-GMM | 66.23 |
| CNN + Spectrogram | 86.59 |
| CNN + Gammatonegram | 91.29 |
### 4.1 Initial Experiments
Before the era of deep neural networks, the HMM was one of the most popular
methods for speech recognition [37, 38]. Therefore, we initially evaluated
the performance of a traditional HMM-GMM-based ASR system with MFCC features
on dysarthric speech and compared it with the proposed end-to-end systems to
validate the proposed concept. In this comparison, the training and test data
were identical, providing a benchmark for measuring performance.
In addition to the classification method, we need to assess the efficiency of
the proposed representation. Therefore, the proposed Gammatonegram is
compared with the conventional spectrogram. To this end, two systems were
built separately under the same conditions, with identical numbers of
classes, amounts of training and test data, network structure, and learning
parameters.
All three systems were trained on 30 dysarthric isolated words. The HMM-GMM
system has three states and four Gaussians per state. MFCC features, energy,
and first- and second-order derivatives were extracted from the audio signal,
totaling 39 features per frame. These parameters were chosen based on
extensive experimentation. The HMM system was implemented using the Murphy
Toolbox [39], while the proposed CNNs were trained from the introduced
pre-trained network for Gammatonegram and spectrogram separately.
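For reference, a rough Python analogue of this HMM-GMM baseline is sketched below (using hmmlearn and librosa, which is our choice; the original used the Murphy Toolbox in MATLAB), with one 3-state, 4-mixture model per word and 39-dimensional MFCC plus first- and second-order deltas:

```python
import numpy as np
import librosa
from hmmlearn.hmm import GMMHMM

def mfcc39(y, sr):
    # 13 MFCCs plus first- and second-order deltas -> 39 features/frame
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    d1 = librosa.feature.delta(m)
    d2 = librosa.feature.delta(m, order=2)
    return np.vstack([m, d1, d2]).T          # shape: (frames, 39)

def train_word_models(train_set):
    """train_set: dict word -> list of (y, sr) utterances (hypothetical)."""
    models = {}
    for word, utts in train_set.items():
        feats = [mfcc39(y, sr) for y, sr in utts]
        X = np.concatenate(feats)
        lengths = [len(f) for f in feats]
        hmm = GMMHMM(n_components=3, n_mix=4, covariance_type='diag',
                     n_iter=20, random_state=0)
        models[word] = hmm.fit(X, lengths)
    return models

def recognize(models, y, sr):
    # Pick the word model with the highest log-likelihood
    f = mfcc39(y, sr)
    return max(models, key=lambda w: models[w].score(f))
```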
Based on the results in Table 2, the HMM-based system achieved 66.23% overall
WRR, a poor performance compared to the other two systems. The CNN-based
systems show acceptable performance despite the limited training data.
Moreover, the Gammatonegram-based system shows better results, reaching
91.29% WRR. These results confirm that the proposed Gammatonegram
representation and end-to-end CNN classification are appropriate choices for
dysarthric speech processing.
### 4.2 Automatic Speech Recognition
For disabled people, a smart home system based on artificial intelligence can
be helpful, and one of the best ways to interact with such a system is
through speech. By checking the contents of the speech file, the ASR system
tries to identify the command word; here the information related to speech
content matters, not the speaker's identity. Such a system generally operates
in Speaker-Dependent (SD) or Speaker-Independent (SI) mode. In the SD mode,
the speakers in the training and test phases are the same, and the network
adapts to these speakers; the system is more efficient because it is familiar
with the speakers' parameters. In the SI mode, there is no information about
the test speakers in the training phase, and the performance of ASR systems
usually decreases as a result.
In this section, the proposed dysarthric ASR systems are evaluated in both
modes. A single CNN was trained for all speakers in the SD mode, while in the
SI mode a specific ASR system was built for each test speaker. To evaluate
the proposed ASR systems, 51 models were trained across all modes and folds.
To create the SI systems, each test speaker's speech files were left out and
the system was trained using the speech of the other speakers; this
simulation was repeated for all 16 speakers, training a specific SI network
for each. The results of the proposed ASR systems are reported in Table 3.
Table 3: Results of the automatic speech recognition systems in SD and SI
scenarios

| Spkr | WRR in SD (%) | WRR in SI (%) |
|---|---|---|
| F02 | 98.19 | 86.63 |
| F03 | 80.18 | 63.82 |
| F04 | 95.59 | 93.18 |
| F05 | 97.93 | 95.28 |
| M01 | 88.28 | 83.62 |
| M04 | 68.06 | 51.67 |
| M05 | 92.63 | 90.95 |
| M06 | 94.16 | 78.81 |
| M07 | 85.71 | 85.70 |
| M08 | 98.85 | 95.71 |
| M09 | 98.62 | 97.57 |
| M10 | 98.85 | 97.14 |
| M11 | 93.01 | 88.33 |
| M12 | 78.49 | 61.87 |
| M14 | 96.43 | 89.93 |
| M16 | 95.70 | 91.83 |
| Mean | 91.29 | 84.50 |
In these experiments, the CNNs were trained with batch size 32, the best
choice given our computational resources; based on several experiments with
different epoch numbers, we found 20 to be the best choice. The ASR system in
the SD mode achieved an average WRR of 91.29%, about 6.5% better than the SI
mode with 84.50% WRR. Analyzing the results per speaker shows that the system
performs worst for speakers with high severity; in particular, performance
for M04 and F03 was poor. This is because the very low intelligibility of
their speech strongly degrades its characteristic features, a consequence of
reduced control of the muscles participating in speech production. In
contrast, the proposed system learned near-normal speech features properly
and performed well on high-intelligibility speech, such as the files of F05,
M08, and M09. The results show that the proposed Gammatonegram, together with
the end-to-end ASR system, performs acceptably on dysarthric speech thanks to
its high potential to represent speech content.
### 4.3 Automatic Speaker Identification
In scenarios like smart homes, a voice key is beneficial for disabled
individuals: in cases such as locking the door or granting access control,
speaker identification can allow the disabled person to gain access.
Therefore, designing an efficient speaker identification system is useful.
The proposed systems were evaluated in Text-Dependent (TD) and
Text-Independent (TI) modes. We trained a CNN for each scenario, with about 5
minutes of speech per speaker. The UA-Speech dataset contains 16 dysarthric
speakers, so the output layer has 16 classes, each representing one speaker.
In the TD mode, the texts uttered in the test and training phases are the
same; in other words, the dysarthric person has to repeat a specific password
in both stages. The system was tested with two sessions of the UA-Speech
data. In the TI mode, the speech contents used for training and testing
differ; a person can use any word as a voice password, and the system
recognizes the person's identity from speech content outside the training
data. For the TI test, the CW1 to CW50 words of the UA-Speech dataset, which
had not been used in the training phase, were used. The systems were trained
with batch size 32 and 30 iterations, based on several evaluations to find
the best parameters.
The results for both modes are reported in Table 4. The systems reached
87.74% accuracy in TD mode and 80.70% in TI mode. As in the ASR systems,
speakers with low speech intelligibility, such as F03 and M12, reduce the
recognition rate. This performance was obtained under low training data
conditions and shows that the Gammatonegram contains speaker-specific
features.
Table 4: Results of the speaker identification systems in text-dependent and
text-independent modes

| Spkr | Text-Dependent (%) | Text-Independent (%) |
|---|---|---|
| F02 | 95.10 | 81.50 |
| F03 | 89.89 | 76.50 |
| F04 | 95.34 | 91.75 |
| F05 | 98.38 | 88.03 |
| M01 | 94.56 | 90.90 |
| M04 | 84.19 | 79.47 |
| M05 | 75.34 | 58.39 |
| M06 | 89.71 | 66.76 |
| M07 | 88.47 | 88.20 |
| M08 | 64.51 | 65.47 |
| M09 | 91.24 | 79.57 |
| M10 | 80.41 | 64.09 |
| M11 | 86.82 | 80.99 |
| M14 | 80.95 | 86.71 |
| M16 | 90.05 | 93.34 |
| Mean | 87.74 | 80.70 |
### 4.4 Cascade System For Multi-Network ASR
In previous dysarthric speech processing studies, multi-network architectures
have been utilized [23, 40]. However, none of these studies automated the
process of assigning audio files to the appropriate network; instead,
individuals with dysarthria were required to determine manually which network
or category their speech belonged to. In our proposed multi-network cascade
architecture, an intelligibility assessment system automatically activates
one of the sub-networks for ASR. This architecture, depicted in Fig. 6,
consists of two main steps: in the first step, the intelligibility assessment
system classifies incoming speech into two categories, high intelligibility
and low intelligibility; in the second step, a dedicated ASR network, trained
for each intelligibility category, recognizes the command.
Fig. 6: Structure of multi-network speech recognition system in cascade
architecture with two-class automatic intelligibility assessment
Automatically processing disabled people's speech to determine its
intelligibility level is useful for many purposes, for instance automatically
diagnosing disease severity and tracking the progression of the disability by
periodically checking their speech. Moreover, automatic intelligibility
assessment can improve the efficiency of ASR and speaker identification
systems in multi-network scenarios. In this scenario, we trained several
parallel networks for ASR. The dysarthric speakers uttered speech commands
without knowledge of the multi-network structure or even the severity level
of their disability; the automatic intelligibility assessment examines the
person's speech and assigns it to the corresponding network according to the
intelligibility level.
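A minimal sketch of this routing step (PyTorch, consistent with the earlier sketch; the network handles and the class-label order are assumptions):

```python
import torch

def cascade_asr(gammatonegram, intell_net, asr_high, asr_low):
    """Two-stage cascade (Fig. 6): the intelligibility network first
    classifies the utterance, then routes it to the matching ASR network.
    All networks are assumed to take a 1x3x227x227 Gammatonegram image."""
    with torch.no_grad():
        intell = intell_net(gammatonegram).argmax(dim=1).item()
        # class 0: low intelligibility, class 1: high (label order assumed)
        asr_net = asr_low if intell == 0 else asr_high
        return asr_net(gammatonegram).argmax(dim=1).item()
```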
For this purpose, different categories were formed according to the
intelligibility percentage. In this study, according to the efficiency of the
system and the amount of available data, the speakers are divided into
three-class and two-class groupings based on intelligibility level, and two
separate networks were trained to recognize intelligibility. An interesting
point in this scenario is that the speech of dysarthric individuals is
sometimes accompanied by unusual silences, especially around plosive phonemes
in the middle of a word. This phenomenon can play an essential role in
determining the intelligibility level of a dysarthric person's speech; for
this reason, the intelligibility assessment systems were trained and
evaluated without VAD. The CNNs were trained using batch size 32 and 20
iterations.
Table 5: Results of the two automatic intelligibility assessment systems and
of the two proposed cascade speech recognition architectures (SD mode)

| Spkr | Severity (intelligibility range) | Intell. 3-Class (%) | Intell. 2-Class (%) | Cascade ASR 3-Class (%) | Cascade ASR 2-Class (%) |
|---|---|---|---|---|---|
| F02 | High (2%-37%) | 97.06 | 98.21 | 92.99 | 94.13 |
| F03 | High (2%-37%) | 100 | 100 | 71.03 | 81.89 |
| M01 | High (2%-37%) | 87.23 | 93.31 | 75.99 | 83.28 |
| M04 | High (2%-37%) | 94.89 | 99.33 | 71.11 | 81.56 |
| M07 | High (2%-37%) | 89.05 | 99.76 | 88.10 | 93.57 |
| M12 | High (2%-37%) | 98.33 | 100 | 73.70 | 86.11 |
| F04 | Mid (35%-62%) | 79.80 | 92.72 | 92.72 | 95.36 |
| M05 | Mid (35%-62%) | 89.37 | 93.81 | 92.06 | 94.13 |
| M06 | Mid (35%-62%) | 97.04 | 94.78 | 89.91 | 95.13 |
| M11 | Mid (35%-62%) | 74.07 | 89.94 | 88.52 | 93.33 |
| M16 | Mid (35%-62%) | 86.30 | 92.22 | 91.85 | 94.07 |
| F05 | Low (63%-95%) | 97.62 | 98.10 | 97.94 | 98.10 |
| M08 | Low (63%-95%) | 98.73 | 98.41 | 96.83 | 96.83 |
| M09 | Low (63%-95%) | 98.41 | 97.94 | 95.56 | 95.56 |
| M10 | Low (63%-95%) | 98.09 | 97.77 | 98.41 | 98.73 |
| M14 | Low (63%-95%) | 97.89 | 97.24 | 95.62 | 95.13 |
| Mean | | 92.74 | 96.47 | 88.27 | 92.30 |
Table 5 reports the results of the three- and two-class networks. In the
three-class mode, speakers are classified into three categories, high, mid,
and low, whose intelligibility ranges are shown in Table 5. In the two-class
mode, the high and mid categories were merged because of the high correlation
between their data, while the low-severity category remained unchanged. These
two systems were trained in SD mode: one session of the dataset was used for
training and the other two for testing. According to the results, performance
improved in the two-class mode; the average intelligibility recognition
accuracy using CNN and Gammatonegram reached 96.47% and 92.74% in the two-
and three-class modes, respectively.
Part 2 of Table 5 provides the results of the multi-network ASR in a cascade
structure with the intelligibility assessment system, reported for the two-
and three-class modes. According to these results, the performance of the
speech recognition system in the dual-network configuration improved compared
to the single-network mode and reached 92.3% WRR in SD mode. This improvement
arises because each network focuses on a narrow range of speech
intelligibility, i.e., on speech with less intra-class variation.
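As a rough sketch of this two-stage flow, the routing logic can be summarized
as follows (the function and model interfaces below are illustrative
assumptions, not the authors' code):

```python
def cascade_asr(features, intell_classifier, asr_networks):
    """Two-stage cascade: first predict the intelligibility class of the
    utterance, then forward the same features to the ASR network trained
    for that class. All names here are illustrative assumptions."""
    severity_class = intell_classifier(features)
    return asr_networks[severity_class](features)

# Toy usage with stand-in callables (real systems would be trained CNNs):
networks = {"high": lambda x: "word_from_high_severity_net",
            "low": lambda x: "word_from_low_severity_net"}
classify = lambda x: "high"
print(cascade_asr([0.1, 0.2], classify, networks))
```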
## 5 Comparative Analysis of Proposed Systems
The performance of the proposed ASR systems in different modes is shown in
Fig. 7, so that it can be analyzed for each speaker. In this chart,
the speakers are sorted based on dysarthric severity from the highest to the
lowest, as reported in the dataset. In the single network, both in the SD and
SI modes, the performance was consistently lower than the average for the
first five speakers who had the highest severity of dysarthria. This can be
attributed to the variability and instability of the dysarthric speech signal
in individuals with high severity, leading to system errors. Conversely, the
recognition rate for the low-severity group was higher than the average, as
their speech parameters closely resembled normal speech with a predictable
form and minimal diversity between the test and training data.
The proposed multi-network ASR system, particularly in the two-class mode,
demonstrated a significant improvement in performance for the high-severity
group. This improvement was achieved by designing a network that specifically
focused on the parameters of the high-severity group, which differed
significantly from the other two groups. Consequently, this network
efficiently learned the parameters of the high-severity group’s speech.
Figure 8 illustrates the performance of the speaker identification and
intelligibility assessment systems. Based on the results, there seems to be a
low correlation between the speaker identification system’s performance and
the severity of dysarthria in comparison with the ASR system. However,
Gammatonegram performed well in the intelligibility assessment task,
validating our hypothesis that using Gammatonegram without VAD is effective,
as the system’s efficiency was deemed acceptable. Leveraging the achievements
and performance of Gammatonegram, we subsequently designed our multi-network
fully automated ASR system based on the intelligibility assessment approach.
Fig. 7: Comparison of the performance of dysarthric speech recognition systems
in single and multi-network scenarios of different speakers
Fig. 8: Comparison of the performance of speaker recognition and
intelligibility assessment systems of different speakers
The performance of Gammatonegram in the ASR task reached a WRR of 84.50% in
the SI mode and 91.29% in the SD mode. In the speaker identification task, our
proposed system achieved recognition rates of 80.70% and 87.74% in the TI and
TD modes, respectively. Moreover, Gammatonegram performed well in the
intelligibility assessment task, with average recognition rates of 92.74% for
the three-class mode and 96.47% for the two-class mode. Finally, the proposed
cascade ASR system achieved 92.3% WRR. A detailed comparison with previous
works based on their respective tasks is provided in Table 6 to better
understand our achievements.
Table 6: Comparison of the proposed systems with previous works on their respective tasks

| Task | Reference | Result (%) | Method |
|---|---|---|---|
| ASR | [22] | 59.78 | Bi-LSTM |
| ASR | [14] | 64.71 | Voicegram |
| ASR | [19] | 67.00 | Deep Transformers |
| ASR | [16] | 74.79 | Visual Features |
| ASR | [20] | 79.28 | E2E |
| ASR | [21] | 86.50 | AV-HuBERT |
| ASR | Proposed ASR | 92.30 | Cascade system |
| Spkr Ident. | [25] | 84.93 | MFCC+ivector |
| Spkr Ident. | [26] | 87.34 | xvector |
| Spkr Ident. | Proposed System | 87.74 | E2E+Gammatonegram |
| Intell. A. | [31] | 85.00 | Transformer |
| Intell. A. | [29] | 93.97 | DNN+Prosody Feature |
| Intell. A. | [28] | 95.80 | Acoustic Feature |
| Intell. A. | Proposed System | 96.47 | E2E+Gammatonegram |
Based on the results and the comparison with previous studies, it is evident
that the Gammatonegram representation effectively captures the speech
characteristics of individuals with dysarthria. Additionally, the utilization
of a two-dimensional convolutional network demonstrates strong performance.
Notably, the proposed Cascade network introduces a novel approach to speech
recognition for dysarthric individuals, allowing for the seamless integration
of multi-network ASR in a fully automated manner.
## 6 Conclusion
In this work, we introduced the Gammatonegram as an effective representation
method and utilized transfer learning to build end-to-end dysarthric speech
processing systems based on CNNs. The introduced systems were evaluated on
three tasks: speech command recognition, speaker identification, and
intelligibility assessment. Before evaluating the proposed methods, we
compared the performance of a traditional HMM-GMM ASR system with our proposed
end-to-end system based on the Gammatonegram representation. The results
showed that the proposed system outperformed the traditional one in the ASR
scenario by a significant margin. A further comparison was made between the
proposed Gammatonegram and the traditional spectrogram, a popular image
representation of speech signals, under identical conditions; the results
justified using the proposed representation in all subsequent simulations.
The proposed systems utilized the UA dysarthric speech dataset and employed
the GMMVAD algorithm for silence removal. The widely recognized AlexNet was
chosen as the initial network and retrained using 255 audio commands. This
retraining focused on the first part of the network, which is responsible for
feature extraction, using a substantial number of Gammatonegram images. The
pre-trained network was then employed to model all scenarios using the
transfer learning technique. In each fold of the evaluation, only one session
was used for system training, while the other two were used for evaluation.
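The head-replacement step of such a transfer-learning recipe could look as
follows in PyTorch (an assumed framework; the paper does not specify its
tooling, and the layer index follows torchvision's AlexNet definition):

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet with pretrained weights and swap the final fully connected
# layer so the output matches the task (e.g., 255 audio commands).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 255)

# Optionally keep the Gammatonegram-adapted feature extractor fixed and
# retrain only the classifier head for a new scenario.
for param in model.features.parameters():
    param.requires_grad = False
```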
In the first task, speech recognition systems were designed and evaluated in
speaker-dependent and speaker-independent modes based on the Gammatonegram
representation. The results demonstrated that the proposed system achieved
acceptable performance. It was also observed that the severity of the disease
was inversely related to the efficiency of the speech recognition system; in
other words, the system was less efficient for speech from individuals with
more severe impairment.
Moving on to the second task, the objective was to recognize the identity from
the speech signal. Two scenarios, namely text-independent and text-dependent,
were evaluated. The efficiency of the systems in this task revealed that the
Gammatonegram representation contains valuable information about the speaker,
which enables the system to recognize their identity.
The third task focused on intelligibility assessment, conducted in two- and
three-class scenarios. Since silence within each word also plays a crucial
role in speech intelligibility, the VAD was not employed in this task. The
results indicated that speech intelligibility assessment performs better in
the two-class mode and can be used as a complementary tool for new tasks, such
as multi-network speech recognition.
Lastly, we developed an automatic multi-network system for ASR. This system
automatically assigned input speech utterances to corresponding speech
recognition networks based on the intelligibility percentage. Using a cascade
architecture and a two-class speech recognition approach, the system achieved
a WRR of 92.3%, indicating an improvement compared to the single-network mode.
Future studies could further improve the results by implementing a cascade
approach for speaker identification tasks. In addition, incorporating data
augmentation techniques could be beneficial. By adding different types of
noises and music to the speech files, the system can be trained to be more
robust and adaptable to real-world scenarios. The source code of this paper is
available at
https://github.com/areffarhadi/Gammatonegram_CNN_Dysarthric_speech.
Declarations
Ethical Approval: This paper reflects the authors’ own research and analysis
truthfully and completely and is not currently being considered for
publication elsewhere.
Competing interests: The authors declare that they have no known competing
financial interests or personal relationships that could have appeared to
influence the work reported in this paper.
Authors’ contributions: In preparing this paper, all the authors’ shares of
contributions were equal.
Funding: The authors did not receive support from any organizations or
sources for the submitted work.
Availability of data and materials: The source code of this paper is
available. Moreover, the UASpeech dataset is freely available.
## References
* [1] Kinfe Tadesse Mengistu and Frank Rudzicz, “Comparing humans and automatic speech recognition systems in recognizing dysarthric speech,” in Advances in Artificial Intelligence: 24th Canadian Conference on Artificial Intelligence, Canadian AI 2011, St. John’s, Canada, May 25-27, 2011. Proceedings 24. Springer, 2011, pp. 291–300.
* [2] Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu J Han, Shinji Watanabe, and Shrikanth Narayanan, “A review of speaker diarization: Recent advances with deep learning,” Computer Speech & Language, vol. 72, pp. 101317, 2022.
* [3] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal processing magazine, vol. 29, no. 6, pp. 82–97, 2012.
* [4] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, 2012.
* [5] Chao-Han Huck Yang, Jun Qi, Samuel Yen-Chi Chen, Pin-Yu Chen, Sabato Marco Siniscalchi, Xiaoli Ma, and Chin-Hui Lee, “Decentralizing feature extraction with quantum convolutional neural network for automatic speech recognition,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 6523–6527.
* [6] Heejin Kim, Mark Hasegawa-Johnson, Adrienne Perlman, Jon Gunderson, Thomas S Huang, Kenneth Watkin, and Simone Frame, “Dysarthric speech database for universal access research,” in Ninth Annual Conference of the International Speech Communication Association, 2008.
* [7] Qiang Zhang, Qifan Yang, Xujuan Zhang, Qiang Bao, Jinqi Su, and Xueyan Liu, “Waste image classification based on transfer learning and convolutional neural network,” Waste Management, vol. 135, pp. 150–157, 2021.
* [8] Zhaopeng Qian, Kejing Xiao, and Chongchong Yu, “A survey of technologies for automatic dysarthric speech recognition,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2023, no. 1, pp. 48, 2023.
* [9] Ray D Kent and Yunjung Kim, “Acoustic analysis of speech,” The handbook of clinical linguistics, pp. 360–380, 2008.
* [10] Steve Young, Gunnar Evermann, Mark Gales, Thomas Hain, Dan Kershaw, Xunying Liu, Gareth Moore, Julian Odell, Dave Ollason, Dan Povey, et al., “The htk book,” Cambridge university engineering department, vol. 3, no. 175, pp. 12, 2002.
* [11] Myung Jong Kim, Beiming Cao, Kwanghoon An, and Jun Wang, “Dysarthric speech recognition using convolutional lstm neural network.,” in INTERSPEECH, 2018, pp. 2948–2952.
* [12] Shansong Liu, Shoukang Hu, Xunying Liu, and Helen Meng, “On the use of pitch features for disordered speech recognition.,” in Interspeech, 2019, pp. 4130–4134.
* [13] Chitralekha Bhat, Biswajit Das, Bhavik Vachhani, and Sunil Kumar Kopparapu, “Dysarthric speech recognition using time-delay neural network based denoising autoencoder.,” in INTERSPEECH, 2018, pp. 451–455.
* [14] Seyed Reza Shahamiri, “Speech vision: An end-to-end deep learning-based dysarthric automatic speech recognition system,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 29, pp. 852–861, 2021.
* [15] Disong Wang, Jianwei Yu, Xixin Wu, Lifa Sun, Xunying Liu, and Helen Meng, “Improved end-to-end dysarthric speech recognition via meta-learning based model re-initialization,” in 2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2021, pp. 1–5.
* [16] Shansong Liu, Mengzhe Geng, Shoukang Hu, Xurong Xie, Mingyu Cui, Jianwei Yu, Xunying Liu, and Helen Meng, “Recent progress in the cuhk dysarthric speech recognition system,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2267–2281, 2021.
* [17] Zhengjun Yue, Erfan Loweimi, and Zoran Cvetkovic, “Raw source and filter modelling for dysarthric speech recognition,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 7377–7381.
* [18] Yuki Takashima, Tetsuya Takiguchi, and Yasuo Ariki, “End-to-end dysarthric speech recognition using multiple databases,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6395–6399.
* [19] Seyed Reza Shahamiri, Vanshika Lal, and Dhvani Shah, “Dysarthric speech transformer: A sequence-to-sequence dysarthric speech recognition system,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023.
* [20] Ahmad Almadhor, Rizwana Irfan, Jiechao Gao, Nasir Saleem, Hafiz Tayyab Rauf, and Seifedine Kadry, “E2e-dasr: End-to-end deep learning-based dysarthric automatic speech recognition,” Expert Systems with Applications, vol. 222, pp. 119797, 2023.
* [21] Chongchong Yu, Xiaosu Su, and Zhaopeng Qian, “Multi-stage audio-visual fusion for dysarthric speech recognition with pre-trained models,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 1912–1921, 2023.
* [22] Siddharth Rathod, Monil Charola, and Hemant A Patil, “Transfer learning using whisper for dysarthric automatic speech recognition,” in International Conference on Speech and Computer. Springer, 2023, pp. 579–589.
* [23] Aref Farhadipour, Hadi Veisi, Mohammad Asgari, and Mohammad Ali Keyvanrad, “Dysarthric speaker identification with different degrees of dysarthria severity using deep belief networks,” Etri Journal, vol. 40, no. 5, pp. 643–652, 2018.
* [24] Kamil Lahcene Kadi, Sid Ahmed Selouani, Bachir Boudraa, and Malika Boudraa, “Fully automated speaker identification and intelligibility assessment in dysarthria disease using auditory knowledge,” Biocybernetics and Biomedical Engineering, vol. 36, no. 1, pp. 233–247, 2016.
* [25] Shinimol Salim and Waquar Ahmad, “Constant q cepstral coefficients for automatic speaker verification system for dysarthria patients,” Circuits, Systems, and Signal Processing, pp. 1–18, 2023.
* [26] Shinimol Salim, Syed Shahnawazuddin, and Waquar Ahmad, “Automatic speaker verification system for dysarthria patients.,” in INTERSPEECH, 2022, pp. 5070–5074.
* [27] Siddhant Gupta, Ankur T Patil, Mirali Purohit, Mihir Parmar, Maitreya Patel, Hemant A Patil, and Rodrigo Capobianco Guido, “Residual neural network precisely quantifies dysarthria severity-level based on short-duration speech segments,” Neural Networks, vol. 139, pp. 105–117, 2021.
* [28] Bassam Ali Al-Qatab and Mumtaz Begum Mustafa, “Classification of dysarthric speech according to the severity of impairment: an analysis of acoustic features,” IEEE Access, vol. 9, pp. 18183–18194, 2021.
* [29] Amlu Anna Joshy and Rajeev Rajan, “Automated dysarthria severity classification: A study on acoustic features and deep learning techniques,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 1147–1157, 2022.
* [30] Kyle Hall, Andy Huang, and Seyed Reza Shahamiri, “An investigation to identify optimal setup for automated assessment of dysarthric intelligibility using deep learning technologies,” Cognitive Computation, vol. 15, no. 1, pp. 146–158, 2023.
* [31] Paleti Nikhil Chowdary, Vadlapudi Sai Aravind, Gorantla VNSL Vishnu Vardhan, Menta Sai Akshay, Menta Sai Aashish, and Jyothish Lal G, “A few-shot approach to dysarthric speech intelligibility level classification using transformers,” arXiv e-prints, pp. arXiv–2309, 2023.
* [32] Subhashini Venugopalan, Jimmy Tobin, Samuel J Yang, Katie Seaver, Richard JN Cave, Pan-Pan Jiang, Neil Zeghidour, Rus Heywood, Jordan Green, and Michael P Brenner, “Speech intelligibility classifiers from 550k disordered speech samples,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
* [33] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009, pp. 248–255.
* [34] Aref Farhadi Pour, Mohammad Asgari, and Mohammad Reza Hasanabadi, “Gammatonegram based speaker identification,” in 2014 4th International Conference on Computer and Knowledge Engineering (ICCKE). IEEE, 2014, pp. 52–55.
* [35] Lawrence Rabiner and Ronald Schafer, Theory and applications of digital speech processing, Prentice Hall Press, 2010.
* [36] Alexey Sholokhov, Md Sahidullah, and Tomi Kinnunen, “Semi-supervised speech activity detection with an application to automatic speaker verification,” Computer Speech & Language, vol. 47, pp. 132–156, 2018.
* [37] Hossein Sameti, Hadi Veisi, Mohammad Bahrani, Bagher Babaali, and Khosro Hosseinzadeh, “Nevisa, a persian continuous speech recognition system,” in Advances in Computer Science and Engineering: 13th International CSI Computer Conference, CSICC 2008 Kish Island, Iran, March 9-11, 2008 Revised Selected Papers. Springer, 2009, pp. 485–492.
* [38] Rupali S Chavan and Ganesh S Sable, “An overview of speech recognition using hmm,” International Journal of Computer Science and Mobile Computing, vol. 2, no. 6, pp. 233–238, 2013.
* [39] Kevin Murphy, “Hidden markov model (hmm) toolbox for matlab,” ”https://www.cs.ubc.ca/ murphyk/Software/HMM/hmm.html”, 1998.
* [40] Seyed Reza Shahamiri and Siti Salwah Binti Salim, “Real-time frequency-based noise-robust automatic speech recognition using multi-nets artificial neural networks: A multi-views multi-learners approach,” Neurocomputing, vol. 129, pp. 199–207, 2014.
# Is GPT-4 Alone Sufficient for Automated Essay Scoring?: A Comparative
Judgment Approach Based on Rater Cognition
Seungju Kim
KNUE, South Korea
<EMAIL_ADDRESS>
Meounggun Jo
Hoseo University, South Korea
<EMAIL_ADDRESS>
###### Abstract
Large Language Models (LLMs) have shown promise in Automated Essay Scoring
(AES), but their zero-shot and few-shot performance often falls short compared
to state-of-the-art models and human raters. However, fine-tuning LLMs for
each specific task is impractical due to the variety of essay prompts and
rubrics used in real-world educational contexts. This study proposes a novel
approach combining LLMs and Comparative Judgment (CJ) for AES, using zero-shot
prompting to choose between two essays. We demonstrate that a CJ method
surpasses traditional rubric-based scoring in essay scoring using LLMs.
Figure 1: Comparative overview of scoring strategies: traditional rubric-based
scoring vs. two-step scoring employing comparative judgment (CJ) method
## 1 Introduction
Essay scores are more than just numbers; they provide students with clear
benchmarks for improving their writing skills and help them understand what
high-quality writing looks like. Recent advancements in Large Language Models
(LLMs) have shown promise in Automated Essay Scoring (AES), but their
performance in zero-shot and few-shot settings often falls short compared to
state-of-the-art models and human raters. While fine-tuning LLMs for specific
essay scoring tasks yields better results, this approach is limited in
scalability and adaptability. In particular, in real-world educational settings,
diverse essay prompts and rubrics are used across various subjects, grade
levels, and educational institutions. Fine-tuning LLMs for each specific task
is time-consuming, resource-intensive, and impractical. Therefore, exploring
zero-shot and few-shot approaches is crucial for developing AES systems that
can be easily adapted to various educational settings without extensive fine-
tuning.
From a psychological perspective, multi-trait essay scoring using rubrics is a
cognitively demanding task for human raters (Bejar, 2012; Hamp-Lyons and
Henning, 1991; Zhang, 2013). Meanwhile, recent writing assessment research
proposes Comparative Judgment (CJ) as an alternative method. CJ involves
repeatedly comparing pairs of essays to produce results, offering a more
cognitively intuitive approach for humans (Laming, 2003) and highly reliable
scoring results (Verhavert et al., 2019). This study starts with the question:
Could the task that is natural for humans also be natural for LLMs? The
combination of LLMs and CJ presents a novel approach to AES. This study
investigates using zero-shot prompting to enable LLMs to choose between two
essays, emulating the comparative judgment process used by human raters.
## 2 Related Work
### 2.1 Automated Essay Scoring
Automated Essay Scoring (AES) is a field of research that focuses on
developing computer systems to evaluate and score written essays. The goal of
AES is to provide a reliable, efficient, and consistent method for assessing
writing quality, which can be particularly useful in educational settings.
#### 2.1.1 Performance of LLMs in AES
Recent studies have explored the application of decoder-only Transformer-based
language models, such as GPT-3.5 and GPT-4, in AES. Despite the impressive
generalizability demonstrated by these models across various tasks, their
potential has not been fully leveraged in the AES domain.
While fine-tuned models have shown promising results in capturing essay
quality (Xiao et al., 2024; Do et al., 2024), their zero-shot and few-shot
performances often fall short compared to previous state-of-the-art models
(Han et al., 2023; Mansour et al., 2024). Han et al. (2023) reported that the
BERT model achieved an average QWK score of 0.421, while the GPT-3.5 model
with zero-shot or few-shot learning only achieved a QWK score of 0.336–0.385
on the DREsS dataset. Similarly, Mansour et al. (2024) found that on the ASAP
dataset, the existing SOTA model achieved QWK scores between 0.544 and 0.771,
whereas the GPT-3.5-turbo model and Llama2 model resulted in QWK scores
ranging from 0.023 to 0.327. Xiao et al. (2024) also observed that GPT-4 with
few-shot learning showed lower performance (0.257–0.784) compared to Fine-
tuned GPT-3.5 (0.613–0.859) in all essay sets of the ASAP dataset.
#### 2.1.2 Limitations of Fine-tuning-based Methods
However, fine-tuning-based methods require a large amount of data in advance,
which limits their applicability in contexts where a wide variety of essay
prompts and rubrics are used, except for tasks in assessment situations that
are conducted in a batch manner. Furthermore, considering that essay scores
are generally provided analytically rather than holistically, as mentioned by
Do et al. (2024), creating separate models or fine-tuning for each trait would
require substantial resources. This suggests that in addition to fine-tuning
language models, new approaches are needed to overcome the limitations of
language models in extremely limited resource environments.
#### 2.1.3 Effects of Prompt Engineering
Recent studies have investigated the effect of prompt engineering on the
performance of LLMs in AES. Han et al. (2023) found that providing more
context to GPT-3.5, particularly by requesting it to generate feedback related
to the scoring rubrics, can enhance its essay scoring performance. Yancey et
al. (2023) observed that GPT-4, when provided with calibration examples, can
achieve a QWK close to a strong baseline AES system but lower than human
raters. Mansour et al. (2024) designed four prompts with incremental
improvements and found that different types of prompts yielded higher
performance for different essay tasks and LLM models, with no consistent
results. While prompt engineering can enhance the performance of LLMs in AES
to some extent, these mixed results highlight the need for further research
and development in this area.
### 2.2 Rater Cognition in Essay Scoring
#### 2.2.1 Rubric-based Scoring
Rubric-based scoring, which underlies AES, is a cognitively demanding task for
human raters. The process of scoring written texts involves a complex
interplay between the scorer’s internal standards and external scoring
rubrics, resulting in the formation of mental representations (Freedman and
Calfee, 1983; Lumley, 2002; Wolfe and Feltovich, 1994). However, raters often
struggle to internalize the externally provided scoring criteria (Lumley,
2002), which can further complicate the scoring process. While analytical
scoring requires raters to assess multiple aspects of writing based on
detailed criteria, this process is cognitively demanding (Bejar, 2012) and can
lead to inconsistencies in scoring outcomes due to various cognitive biases
(Tavares and Eva, 2013; Zhang, 2013). Therefore, obtaining reliable scores
through rubric-based scoring requires a significant investment of resources,
including the development of assessment criteria and extensive training of
human raters (McCaffrey et al., 2022; North, 2003).
#### 2.2.2 Comparative Judgment
Comparative Judgment (CJ) has been proposed as an alternative method to
address the limitations of rubric-based scoring (Pollitt, 2012). In CJ, raters
select which of two different objects (i.e., essays) is better, and by
repeating this process multiple times, the rank and strength of each essay can
be calculated. The concept of CJ was first introduced by Thurstone (1927), and
the Bradley-Terry model (Bradley and Terry, 1952) is commonly used to analyze
the data. CJ offers a more intuitive decision-making process for raters
(Laming, 2003) and has been shown to produce highly reliable scoring results
(Verhavert et al., 2019). As a result, it is considered a promising
alternative to rubric-based scoring.
However, the efficiency of CJ becomes limited as the number of essays
increases due to the rapidly growing number of pairs that need to be compared
by human raters (Bouwer, 2024; Goossens and De Maeyer, 2018). This scalability
issue poses a significant challenge for the widespread adoption of CJ in
large-scale assessment contexts. Therefore, there is a need for innovative
approaches that can maintain the benefits of CJ while addressing its
limitations in terms of efficiency and scalability.
## 3 Research Questions
Building upon the existing knowledge in the field of AES and rater cognition,
this study explores a novel approach to utilizing LLMs for AES by employing
CJ. Instead of relying on rubric-based scoring, the proposed method prompts
LLMs to choose the better essay between two given essays without any
additional training, using only zero-shot prompting. The study aims to address
the following research questions:
RQ1. When using a rubric-based scoring strategy, will the GPT-4 model be able
to better imitate human-rater scores compared to the GPT-3.5 model?
RQ2. When using a rubric-based scoring strategy, will GPT models be able to
better imitate human rater’s scores if an elaborated scoring rubric with
descriptors is used?
RQ3. When using a CJ-based scoring strategy, will the GPT model be able to
better imitate human rater scores compared to the rubric-based scoring
strategy?
RQ4. When using a CJ-based scoring strategy and utilizing fine-grained scores,
will GPT models be able to better imitate human rater scores?
## 4 Methods
### 4.1 Dataset
We utilized essay sets 7 and 8 from the ASAP
dataset (https://www.kaggle.com/c/asap-aes), which include multiple raters’
scores and analytical scoring based on 4 and 6 traits, respectively. These two
prompt sets are the only ones in the ASAP dataset that provide rubric-based
scores instead of a single holistic score. Prompt set 7 consists of 1,569
essays written by 7th-grade students, with an average length of 250 words.
The essays are scored on a scale of 0-3 across four traits (ideas,
organization, style, and conventions). Prompt set 8, on the other hand,
comprises 723 essays written by 10th-grade students, with an average length of
650 words. These essays are scored on a scale of 1-6 across six traits (ideas
and content, organization, voice, word choice, sentence fluency, and
conventions). To minimize the variance arising from the ambiguity of the
rubric itself (or the diversity of rubric interpretations) and to more
dramatically reveal the effects of differences between scoring strategies,
such as the rubric-based method and the CJ-based method, we focused on these
analytically scored essay sets.
### 4.2 Models
The LLM models used for inference in this study are the GPT-3.5 model
(gpt-3.5-turbo-0123) and the GPT-4 model (gpt-4-0125-preview), both developed
by OpenAI. The models were accessed through API calls, and the temperature
parameter was set to 0 for all experiments.
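A deterministic scoring call of this kind might look as follows with the
openai Python client (a sketch assuming the v1 chat-completions interface; the
paper only states that the models were accessed via API):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_model(prompt_text: str, model: str = "gpt-4-0125-preview") -> str:
    """One deterministic request (temperature=0), as in the experiments.
    `prompt_text` would hold one of the prompts listed in Appendix B."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": prompt_text}],
    )
    return response.choices[0].message.content
```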
### 4.3 Rubric-based Scoring Strategy
#### 4.3.1 Basic-type Rubric
The first condition, the Basic rubric, involves the LLM scoring essays using
the rubrics that were used for grading Essay Set 7 and Set 8 in the ASAP
dataset. The basic rubric consists of 4 traits for Set 7 and 6 traits for Set
8, with each score having a corresponding descriptor. The descriptors for the
rubric used in Set 7 are relatively simple, while those in Set 8 are very
specific. The average word count per trait for the descriptors is M=66.2
(SD=15.2) for Set 7 and M=543.2 (SD=62.4) for Set 8.
#### 4.3.2 Elaborated-type Rubric
To examine the influence of rubric type on LLM performance in automated essay
scoring, the rubric descriptors for Essay Set 7 were elaborated using two main
methods: by adding general descriptions (Elaborated with General Description;
EGD) or by including explanations of the logic between the scores awarded on
the example essays and the rubric (Elaborated with Specific Examples; ESE).
The gpt-4-0125-preview model was employed to generate these elaborated
rubrics. The prompts used for this purpose and examples of the EGD rubric and
ESE rubric are provided in Appendix A and C, respectively. The rubrics
generated by the GPT-4 model were used without any modifications.
The example essays used for elaborating the rubrics were randomly sampled
(seed=1, 2) from the remaining data after excluding the evaluation dataset.
For each score level, a maximum of three essays that received identical scores
from both raters 1 and 2 were selected. The same seed values used for
extracting the evaluation dataset were employed for random sampling. However,
in some cases, there were insufficient essays for certain score levels. In
such instances, the rubric was elaborated by inferring from the examples of
other score levels and the existing descriptions. When using either the EGD-
type or ESE-type rubrics, all the available example essays were included in
the prompts.
### 4.4 CJ-based Scoring Strategy
The CJ-based scoring strategy involves choosing the better of two essays. In
each pairwise comparison, the essay judged to be better written is assigned 1
(a win), while the other essay is assigned 0 (a loss). A value representing
the relative quality of each essay is then estimated through the Bradley-Terry
model (Bradley and Terry, 1952), as shown in the equation below:

$\mathrm{prob}(A\text{ beats }B\mid\lambda_{a},\lambda_{b})=\frac{\exp(\lambda_{a}-\lambda_{b})}{1+\exp(\lambda_{a}-\lambda_{b})}$ (1)
In the equation above, $\lambda$ represents the quality parameter of each
essay. Rasch (1960) demonstrated that the optimal solution can be obtained
through Maximum Likelihood (ML) estimation. For this purpose, the btm module
implemented in the sirt package (v.4.1-15) in R was used (ignore.ties=TRUE,
maxiter=200).
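For a self-contained illustration, the Bradley-Terry quality parameters can
also be fitted with a few lines of gradient ascent on the log-likelihood (a
minimal sketch; the study itself used the btm routine of R's sirt package):

```python
import numpy as np

def fit_bradley_terry(wins, n_items, n_iter=200, lr=0.1):
    """Estimate quality parameters lambda from (winner, loser) pairs by
    maximizing the Bradley-Terry log-likelihood with gradient ascent."""
    lam = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for w, l in wins:
            p = 1.0 / (1.0 + np.exp(-(lam[w] - lam[l])))  # P(w beats l)
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        lam += lr * grad
        lam -= lam.mean()  # center for identifiability
    return lam

# Toy usage: essay 0 beats 1 twice, 1 beats 2 once, 0 beats 2 once.
print(fit_bradley_terry([(0, 1), (0, 1), (1, 2), (0, 2)], n_items=3))
```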
### 4.5 Evaluation
#### 4.5.1 Test Data
The number of essays used for testing varied by trait, with 31-35 essays for
Set 7 and 27-31 essays for Set 8. These essays were obtained through
stratified sampling from each dataset, using the average scores assigned by
raters as labels. Due to insufficient data for some labels, there were
differences in the number of essays tested for each trait.
For Set 7, approximately 5 essays were randomly sampled for each label, while
for Set 8, around 2 essays were sampled per label. The random seeds used for
sampling were fixed at 1 and 2. The number of essays used for testing was
limited to manage API call costs during the comparative judgment process. The
number of essays sampled for each essay set, trait, and score level is
provided in Appendix D.
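The per-label draw described above can be expressed compactly, for example
with pandas (a hypothetical illustration; the column names and toy data are
our assumptions):

```python
import pandas as pd

# Toy stand-in for the essay table: up to five essays are sampled per
# average-score label with a fixed seed, mirroring the Set 7 procedure.
essays = pd.DataFrame({"essay_id": range(20), "avg_score": [0, 1, 2, 3] * 5})
test_set = (essays.groupby("avg_score", group_keys=False)
                  .apply(lambda g: g.sample(n=min(5, len(g)), random_state=1)))
print(test_set)
```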
#### 4.5.2 Evaluation Method
To evaluate the AES performance, we used Quadratic Weighted Kappa (QWK)
(Cohen, 1968), the most widely used metric in the AES task. In the rubric-
based scoring condition, scores were predicted on the same scale as the essay
set’s rubric, compared with each rater’s scores using QWK, and then averaged.
In the CJ-based scoring condition, the relatively estimated scores were
converted to absolute scores for comparison with raters’ scores. The prompts
used for rubric-based scoring and comparative judgment are provided in
Appendix B.
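The per-rater agreement and its average can be computed directly, for example
with scikit-learn (an assumption about tooling; the paper does not name a
library, and the score arrays below are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

model_scores = [2, 1, 3, 0, 2]  # hypothetical predicted scores
rater1 = [2, 2, 3, 0, 1]
rater2 = [1, 1, 3, 0, 2]

# QWK against each rater, then averaged, as described above.
qwks = [cohen_kappa_score(model_scores, r, weights="quadratic")
        for r in (rater1, rater2)]
print(sum(qwks) / len(qwks))
```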
$T(p)=R\left(p\times\left(\max(S_{cj})-\min(S_{cj})\right)+\min(S_{cj})\right)$ (2)

$R(x)=\arg\min_{s\in S_{cj}}|s-x|$ (3)
The CJ results were linearly transformed to the scoring scale range of each
essay set using the transformation equation (2), where equation (3) is the
rounding function. The scale range $S_{cj}$ is {0, 1, 2, 3} for Essay Set 7
and {1, 2, 3, 4, 5, 6} for Essay Set 8. The finer-grained scale $S_{cjf}$
represents scores obtained by averaging raters’ scores and rounding to the
second decimal place, with ranges {0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0} for
Essay Set 7 and {1.0, 1.5, 2.0, 2.3, 2.5, 2.7, 3.0, 3.3, 3.5, 3.7, 4.0, 4.3,
4.5, 4.7, 5.0, 5.5, 6.0} for Essay Set 8.
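A direct transcription of equations (2) and (3), under the assumption that the
CJ estimates have first been normalized to [0, 1]:

```python
import numpy as np

def to_scale(p, scale):
    """Stretch a normalized CJ estimate p onto the essay set's score
    range and round to the nearest admissible score."""
    scale = np.asarray(scale, dtype=float)
    x = p * (scale.max() - scale.min()) + scale.min()
    return scale[np.abs(scale - x).argmin()]

print(to_scale(0.55, [0, 1, 2, 3]))        # Essay Set 7 -> 2.0
print(to_scale(0.55, [1, 2, 3, 4, 5, 6]))  # Essay Set 8 -> 4.0
```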
## 5 Results
### 5.1 RQ1: Rubric-based Scoring with Basic-type Rubric
As shown in Table 1, the GPT-4 model demonstrated substantially better
performance compared to GPT-3.5, except for traits 5 and 6 of Essay Set 8,
where performance decreased. A Wilcoxon signed-rank test revealed that the
differences between the two models were statistically significant
(p-value$<$.000, statistic=145).
However, despite the overall superiority of GPT-4, the traits in Essay Set 7
exhibited lower average performance compared to those in Set 8, as evident in
Table 1. Specifically, for GPT-4, the QWK values ranged from 0.267 to 0.557 in
Essay Set 7, while they were higher in Essay Set 8, ranging from 0.722 to
0.802.
Table 1: QWK Performance Comparison: Rubric-based vs CJ-based Scoring (R = rubric-based scoring; CJ = comparative judgment; CJ_F = CJ with fine-grained scores; B = basic-type rubric)

| Strategy | Rubric | Model | Total | Set 7 T1 | Set 7 T2 | Set 7 T3 | Set 7 T4 | Set 8 T1 | Set 8 T2 | Set 8 T3 | Set 8 T4 | Set 8 T5 | Set 8 T6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R | B | Human | 0.734 (±0.073) | 0.763 (±0.063) | 0.775 (±0.014) | 0.682 (±0.082) | 0.746 (±0.055) | 0.75 (±0.098) | 0.779 (±0.048) | 0.721 (±0.105) | 0.683 (±0.124) | 0.661 (±0.084) | 0.761 (±0.101) |
| R | B | GPT-3.5 | 0.438 (±0.100) | 0.399 (±0.131) | 0.191 (±0.033) | 0.271 (±0.102) | 0.19 (±0.172) | 0.532 (±0.090) | 0.47 (±0.070) | 0.633 (±0.146) | 0.608 (±0.097) | 0.734 (±0.065) | 0.704 (±0.073) |
| R | B | GPT-4 | 0.567 (±0.102) | 0.566 (±0.120) | 0.454 (±0.054) | 0.322 (±0.133) | 0.269 (±0.083) | 0.763 (±0.118) | 0.741 (±0.075) | 0.743 (±0.082) | 0.749 (±0.072) | 0.704 (±0.143) | 0.686 (±0.152) |
| CJ | B | GPT-3.5 | 0.573 (±0.086) | 0.545 (±0.100) | 0.437 (±0.073) | 0.366 (±0.029) | 0.506 (±0.092) | 0.632 (±0.121) | 0.738 (±0.092) | 0.67 (±0.091) | 0.739 (±0.099) | 0.671 (±0.090) | 0.648 (±0.100) |
| CJ | B | GPT-4 | 0.674 (±0.087) | 0.635 (±0.095) | 0.606 (±0.104) | 0.595 (±0.054) | 0.59 (±0.059) | 0.724 (±0.146) | 0.784 (±0.051) | 0.731 (±0.093) | 0.786 (±0.068) | 0.751 (±0.106) | 0.672 (±0.114) |
| CJ_F | B | GPT-3.5 | 0.641 (±0.064) | 0.577 (±0.097) | 0.44 (±0.018) | 0.455 (±0.069) | 0.562 (±0.048) | 0.751 (±0.075) | 0.754 (±0.121) | 0.771 (±0.029) | 0.822 (±0.101) | 0.797 (±0.062) | 0.747 (±0.037) |
| CJ_F | B | GPT-4 | 0.776 (±0.071) | 0.75 (±0.148) | 0.68 (±0.092) | 0.733 (±0.045) | 0.679 (±0.060) | 0.847 (±0.074) | 0.819 (±0.088) | 0.847 (±0.046) | 0.869 (±0.046) | 0.86 (±0.044) | 0.813 (±0.038) |
### 5.2 RQ2: Rubric-based Scoring with Elaborated-type Rubric
In this section, we examined the impact of using elaborated rubrics with
descriptors on the performance of GPT models in imitating human raters’ scores
for Essay Set 7. As shown in Table 2, when using the GPT-3.5 model, we
observed an increase in the average QWK values across traits compared to the
Basic-type (B) rubric.
However, under the GPT-4 model condition, some traits exhibited either no
difference or even a decrease in QWK values. A Wilcoxon signed-rank test
revealed that the only statistically significant difference was found when
using the ESE-type rubric compared to the B-type rubric with the GPT-3.5 model
(p-value $<$.000, statistic=3).
Table 2: Performance comparison of GPT models using basic and elaborated type rubrics

| Model | Rubric Type | Total | Trait1 | Trait2 | Trait3 | Trait4 |
|---|---|---|---|---|---|---|
| Human | B | 0.741 (±0.054) | 0.763 (±0.063) | 0.775 (±0.014) | 0.682 (±0.082) | 0.746 (±0.055) |
| GPT-3.5 | B | 0.263 (±0.109) | 0.399 (±0.131) | 0.191 (±0.033) | 0.271 (±0.102) | 0.19 (±0.172) |
| GPT-3.5 | EGD | 0.449 (±0.119) | 0.637 (±0.107) | 0.375 (±0.077) | 0.464 (±0.049) | 0.318 (±0.240) |
| GPT-3.5 | ESE | 0.446 (±0.053) | 0.642 (±0.082) | 0.419 (±0.035) | 0.452 (±0.048) | 0.273 (±0.047) |
| GPT-4 | B | 0.403 (±0.098) | 0.566 (±0.120) | 0.454 (±0.054) | 0.322 (±0.133) | 0.269 (±0.083) |
| GPT-4 | EGD | 0.4 (±0.067) | 0.562 (±0.092) | 0.554 (±0.038) | 0.217 (±0.023) | 0.267 (±0.115) |
| GPT-4 | ESE | 0.435 (±0.059) | 0.566 (±0.012) | 0.509 (±0.019) | 0.367 (±0.071) | 0.296 (±0.133) |
### 5.3 RQ3: CJ-based Scoring
We examined the effectiveness of the CJ-based scoring strategy compared to the
rubric-based scoring strategy in enabling GPT models to better imitate human
rater scores. As presented in Table 1, under the CJ-based scoring condition,
the average QWK values were 0.573 for GPT-3.5 and 0.674 for GPT-4,
representing performance improvements of approximately 30.8% and 18.9%,
respectively, compared to the Basic-type rubric-based scoring condition.
Furthermore, as shown in Figure 2, a Wilcoxon signed-rank test revealed that
the performance enhancements due to CJ were statistically significant,
regardless of the model employed (GPT-3.5: p-value$<$.000, statistic=1092;
GPT-4: p-value$<$.000, statistic=371).
Figure 2: Performance Improvements with CJ-based Scoring Across Models
### 5.4 RQ4: CJ-based Scoring with Fine-grained Scores
As shown in Table 1, under the fine-grained score condition (CJ_F), both GPT
models demonstrated additional performance improvements compared to the CJ
condition. A Mann-Whitney U test revealed that these differences were
statistically significant for the GPT-4 model (p-value$<$.000,
statistic=1430). These findings suggest that incorporating fine-grained scores
when using the CJ-based scoring strategy can enhance the performance of GPT
models, particularly GPT-4, in imitating human rater scores.
## 6 Further Analysis
### 6.1 CJ with Elaborated Rubrics
We further investigated the impact of using elaborated scoring rubrics in
conjunction with CJ on model performance, particularly for Essay Set 7, where
the initial scoring rubric was less detailed. While the overall performance
was lower in the CJ condition with the basic rubric for this essay set, we
aimed to determine if employing an elaborated rubric would lead to performance
improvements.
As presented in Table 3, our findings suggest that using an elaborated rubric
in the CJ condition resulted in some observable improvements in average
scores. However, these differences were not statistically significant.
Table 3: Performance improvements of CJ and CJ_F across rubric types

| Model | Evaluation Strategy | Rubric Type | Total |
|---|---|---|---|
| Human | R | B | 0.741 (±0.059) |
| GPT-3.5 | CJ | B | 0.464 (±0.099) |
| GPT-3.5 | CJ | EGD | 0.446 (±0.114) |
| GPT-3.5 | CJ | ESE | 0.449 (±0.094) |
| GPT-3.5 | CJ_F | B | 0.508 (±0.082) |
| GPT-3.5 | CJ_F | EGD | 0.502 (±0.107) |
| GPT-3.5 | CJ_F | ESE | 0.519 (±0.103) |
| GPT-4 | CJ | B | 0.607 (±0.075) |
| GPT-4 | CJ | EGD | 0.602 (±0.073) |
| GPT-4 | CJ | ESE | 0.624 (±0.064) |
| GPT-4 | CJ_F | B | 0.710 (±0.079) |
| GPT-4 | CJ_F | EGD | 0.712 (±0.063) |
| GPT-4 | CJ_F | ESE | 0.726 (±0.088) |
### 6.2 Effectiveness of the CJ-based approach across rubric types
To further examine whether the effects of the CJ and CJ_F conditions were
statistically significant across different rubric types, we conducted a
Wilcoxon signed-rank test. As illustrated in Figure 3, the results showed that
the performance improvements from the R condition to the CJ and CJ_F
conditions were statistically significant, regardless of the rubric type.
Figure 3: Performance Improvements of CJ and CJ_F Across Rubric Types
## 7 Discussion
This research illustrates the potential use of Large Language Models (LLMs)
with Comparative Judgment (CJ) for Automated Essay Scoring (AES). The results
provide valuable insights into how LLMs can be effectively utilized in this
area. In the following discussion, we will closely examine these findings and
analyze their significance for the field of AES.
### 7.1 Impact of Essay Set Characteristics on LLM Performance
The present study highlights the substantial impact of essay set
characteristics on the performance of LLMs, even for the advanced GPT-4 model.
A marked disparity was observed between essay sets 7 and 8, suggesting that
factors beyond the model’s inherent capabilities, such as the level of detail
in scoring rubrics, play a pivotal role in determining AES performance.
The lack of specificity in the rubrics for essay set 7, which contained
approximately nine times fewer words per sub-trait compared to set 8, likely
led the LLM to evaluate set 7 based on logic and evidence that diverged from
human raters. These findings underscore the importance of providing
comprehensive and well-defined scoring criteria to guide the judgment of LLMs
in AES tasks.
Trait 4 of essay set 7 and trait 6 of essay set 8, both related to the evaluation
criteria for conventions, exemplify this divergence. LLMs demonstrated a
capacity for rigorous analysis of error characteristics and nuances, focusing
intently on detailed aspects of the text. In contrast, human raters may apply
these evaluation criteria from a more qualitative perspective, such as whether
the level of errors interferes with their understanding of the text content
(Cumming et al., 2002). Further research is needed to better understand and
address these discrepancies between LLM and human rater judgments.
### 7.2 Influence of Elaborated Scoring Rubrics on GPT Models
The study reveals the varying impact of elaborated scoring rubrics on the
performance of different GPT models. While GPT-3.5 generally benefited from
more detailed rubrics, GPT-4 exhibited mixed results, with some traits even
showing a decrease in performance. This suggests that the rubrics may have
been overfitted to the essay dataset used in the elaboration process.
It is noteworthy that for essay set 8, using GPT-4 with basic-type rubrics
alone yielded QWK scores ranging from 0.686 to 0.763. These values are similar
to the performance level achieved when using GPT-4 with basic-type rubrics in
the CJ condition, highlighting the importance of elaborated rubrics.
Furthermore, changes in rubrics influence performance improvements or
deterioration even under conditions utilizing the CJ strategy, demonstrating
that rubrics remain an important factor in essay evaluation.
However, considering that the general rubric development process is iterative
and resource-demanding (Janssen et al., 2015; McCaffrey et al., 2022), further
research is needed on how LLMs can effectively assist this process.
Investigating methods for utilizing LLMs to create comprehensive and well-
defined scoring criteria should be a priority to enhance the accuracy and
efficiency of AES systems.
### 7.3 Effectiveness of CJ-based Scoring Strategy
The CJ-based scoring strategy proved to be more effective than the traditional
rubric-based method in enabling GPT models to emulate human rater scores, with
significant performance improvements observed for both GPT-3.5 and GPT-4.
However, it is important to consider that when scoring, human raters not only
compare essays but also clearly connect rubric descriptors with essay features
(Cumming et al., 2002). This approach may sometimes be more economical than
comparing multiple pairs of essays.
For future research, adopting a two-way approach that reflects these human
cognitive processes by utilizing both methods appears promising in terms of
both efficiency and reliability.
### 7.4 Utilizing Fine-grained Scores in CJ-based Scoring
Incorporating fine-grained scores in the CJ-based scoring strategy further
augmented the performance of GPT models, particularly GPT-4. This finding
underscores the value of utilizing granular scoring information to improve the
accuracy of AES systems powered by advanced language models.
Generally, scores and scoring results are referred to as ”score bands,” which
represent categories of ability levels that exist on a continuous scale.
However, these scores are given as discrete values, which means that machines
have no choice but to understand these values discretely, and the scores can
be distorted depending on how we assign tasks to LLMs. It is important to
consider that human perception of writing quality is more nuanced and
granular. As such, the development of datasets constructed using the CJ
approach from the outset could enable a more rigorous validation of LLM
judgments and align more closely with human intuition.
## 8 Future Work
This study has demonstrated the potential of combining Large Language Models
(LLMs) with Comparative Judgment (CJ) for Automated Essay Scoring (AES).
However, there are several avenues for future research that can further
enhance the generalizability, robustness, and practical applicability of this
approach.
### 8.1 Validation on Diverse Datasets
While CJ proved to be the most effective strategy in this study, the
performance varied considerably depending on the trait and essay set. As the
writing tasks in Essay Sets 7 and 8 were narrative in nature, it is necessary
to verify whether this approach can effectively work on data from other types
of writing tasks, such as persuasive writing. Currently, there is a lack of
publicly available rubric-based evaluation datasets to test this. Although
some datasets, such as ASAP++ (Mathias and Bhattacharyya, 2018), provide
scores based on specific traits, it is unclear which rubrics were used to
score these traits. However, with the recent release of publicly accessible
rubric-based evaluation datasets like DREsS (Yoo et al., 2024), further
validation on various datasets is necessary.
### 8.2 Assigning Absolute Scores
This study assumed a uniform distribution and employed Bradley-Terry modeling
and linear transformation of CJ estimates. However, methods for assigning
absolute scores to essays require further development. In scenarios with
imbalanced data, the use of the Bradley-Terry model may lead to bias in
parameter estimation. Additionally, the Maximum Likelihood Estimation (MLE)
process utilized in this research could potentially face nonconvergence
issues.
While mathematical and statistical methods and alternatives exist to address
these challenges, data augmentation methods show particular promise. It is
established that certain textual factors can complicate essay evaluation
(Wolfe et al., 2016), suggesting that human raters may find it especially
difficult to judge texts with specific characteristics. Hypothetically, if
this principle extends to LLMs, incorporating generated data points with
features conducive to easier evaluation into the essay set might yield
additional performance improvements.
Furthermore, by having human raters pre-judge the absolute grades of some
generated data that possess features conducive to easier evaluation, these
could serve as model essays for LLM evaluation. If LLMs then assess the
remaining essays through comparison with these model essays, the resulting
scores may transcend mere relative rankings and carry absolute meaning.
### 8.3 Human-AI Interaction
In educational settings where resources are limited and fine-tuning is not
feasible, it is crucial to investigate how these technologies can effectively
support assessment while collaborating with teachers. Human raters are
susceptible to cognitive biases, and even the evaluation data used in this
study, despite its extensive use in previous research, may not be entirely
error-free or of the highest quality. Had LLMs assisted human raters in
creating the evaluation data from the outset, this study’s results might have
differed slightly.
Although this study successfully enabled LLMs to perform evaluations similar
to humans by modifying their operational method without separate fine-tuning,
the full automation approach has limitations in supporting human evaluators’
reading or assessment processes. In scenarios requiring human-AI interaction,
it is crucial for LLMs to be finely adjustable (controllable) and sufficiently
interpretable. This aspect warrants further research to enhance the synergy
between human expertise and AI capabilities in educational assessment.
### 8.4 Optimization of Comparison Pairs
Due to the cost limitations of using the GPT-4 model, this study could not
validate the approach on larger datasets. This is partly due to the problem of
the number of pairs to be compared increasing exponentially when applying CJ
(Goossens and De Maeyer, 2018). Future research should integrate methods such
as Adaptive Comparative Judgment (ACJ) (Pollitt, 2012) to optimize the number
of comparison pairs and verify the effectiveness of such approaches.
## 9 Conclusion
This study contributes to the growing body of research on the application of
Large Language Models (LLMs) in Automated Essay Scoring (AES) by investigating
the effectiveness of combining LLMs with Comparative Judgment (CJ). The
findings demonstrate that the CJ-based scoring strategy, particularly when
combined with elaborated rubrics and fine-grained scores using GPT-4, is more
effective than the traditional rubric-based scoring in enabling LLMs to
imitate human rater scores. This study shows that while GPT-4 is a powerful
tool for AES, it is not sufficient on its own, as many factors influence both
human raters and LLMs in essay scoring.
The results have significant implications not just for the advancement and
utilization of LLMs in AES, but also for several research domains that entail
generating multi-trait scoring data with a hierarchy. The insight gained from
this study can guide the development of automated scoring systems in various
fields, emphasizing the significance of taking into account elements such as
scoring criteria, scoring methods, and the specific language model used. This
work highlights the significance of interdisciplinary collaboration among
specialists in the areas of natural language processing, educational
assessment, and cognitive psychology to further enhance the progress and
implementation of LLMs in intricate educational problems.
## References
* Bejar (2012) Issac I. Bejar. Rater cognition: Implications for validity. _Educational Measurement: Issues and Practice_ , 31(3):2–9, 2012. doi: 10.1111/j.1745-3992.2012.00238.x.
* Bradley and Terry (1952) Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_ , 39(3/4):324–345, 1952. doi: 10.2307/2334029.
* Cohen (1968) Jacob Cohen. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. _Psychological bulletin_ , 70(4):213, 1968.
* Cumming et al. (2002) Alister Cumming, Robert Kantor, and Donald E Powers. Decision making while rating esl/efl writing tasks: A descriptive framework. _The Modern Language Journal_ , 86(1):67–96, 2002.
* Do et al. (2024) Heejin Do, Yunsu Kim, and Gary Geunbae Lee. Autoregressive score generation for multi-trait essay scoring. arXiv preprint arXiv:2403.08332, 2024.
* Freedman and Calfee (1983) S. W. Freedman and R. C. Calfee. Holistic assessment of writing: Experimental design and cognitive theory. In P. Mosenthal, L. Tamor, and S. A. Walmsley, editors, _Research in writing: Principles and methods_ , pages 75–98. Longman, New York, 1983.
* Goossens and De Maeyer (2018) Maarten Goossens and Sven De Maeyer. How to obtain efficient high reliabilities in assessing texts: Rubrics vs comparative judgement. In _Technology Enhanced Assessment: 20th International Conference, TEA 2017, Barcelona, Spain, October 5–6, 2017, Revised Selected Papers 20_ , pages 13–25. Springer, 2018.
* Hamp-Lyons and Henning (1991) Liz Hamp-Lyons and Grant Henning. Communicative writing profiles: An investigation of the transferability of a multiple-trait scoring instrument across esl writing assessment contexts. _Language Learning_ , 41(3):337–373, 1991.
## Appendix A Prompts for Rubric Elaboration
Below are representative essay examples for each score on the "{criteria_name}" aspect of the essay grading scale. Use the essay examples to elaborate on existing descriptors. Create specific descriptors for each score, but write them as generalised statements.
//Grading scale: {scale_to_elaborate}
//Example essay: - Score 3: //Essay1: {essay#1_content} //Essay2: {essay#2_content} //Essay3: {essay#3_content} …
//Writing task: {essay_prompt}
---
Prompt for elaboration (EGD-type)

Below are representative essay examples for each score on the "{criteria_name}" aspect of the essay grading scale. Use the essay examples to elaborate on existing descriptors. Elaborate descriptors for each score, with specific examples.
//Grading scale: {scale_to_elaborate}
//Example essay: - Score 3: //Essay1: {essay#1_content} //Essay2: {essay#2_content} //Essay3: {essay#3_content} …
//Writing task: {essay_prompt}
---
Prompt for elaboration (ESE-type)
## Appendix B Prompts for Evaluation
### B.1 Prompt for Rubric-based Scoring
Q. Please score student writing according to the criteria given in the ’{criteria_name}’ aspect.
//Criteria: {criteria}
//Answer format: {’score_explanation’: [content], ’score’: [number]} score = [0, 1, 2, 3]
Please answer only in the above dictionary format.
//Prompt: {essay_prompt}
//Essay: {essay_content}
---
Prompt for Scoring (B-type rubric)

Q. Please score student writing according to the scoring examples and criteria given in the ’{criteria_name}’ aspect.
//Scoring examples: {examples}
//Criteria: {elaborated criteria with general description}
//Answer format: {’score_explanation’: [content], ’score’: [number]} score = [0, 1, 2, 3]
Please answer only in the above dictionary format.
//Prompt: {essay_prompt}
//Essay: {essay_content}
---
Prompt for Scoring (EGD-type or ESE-type rubric)
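As an illustration of how these templates might be used programmatically, below is a minimal Python sketch that fills in the B-type scoring prompt and parses the dictionary-formatted reply. The model call itself is omitted (any chat-completion client would do), and the helper names are ours, not from the paper:

```python
import ast

def build_scoring_prompt(criteria_name: str, criteria: str,
                         essay_prompt: str, essay_content: str) -> str:
    """Assemble the B-type scoring prompt from its placeholders."""
    return (
        f"Q. Please score student writing according to the criteria given in the "
        f"'{criteria_name}' aspect. //Criteria: {criteria} "
        "//Answer format: {'score_explanation': [content], 'score': [number]} "
        "score = [0, 1, 2, 3] Please answer only in the above dictionary format. "
        f"//Prompt: {essay_prompt} //Essay: {essay_content}"
    )

def parse_score(reply: str) -> int:
    """Parse the dictionary-formatted reply and validate the score."""
    answer = ast.literal_eval(reply.strip())  # reply uses Python-dict syntax, not JSON
    score = int(answer["score"])
    if score not in (0, 1, 2, 3):
        raise ValueError(f"score out of range: {score}")
    return score
```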
### B.2 Prompt for Comparative Judgment
Q. You’re a writing assessment expert. Compare two essays (Essay A, Essay B) based on the criteria below and choose which one did better. Please answer without explanation. (e.g., Essay A or Essay B)
//Criteria: {criteria_name} {criteria}
//Prompt: {essay_prompt}
Essay A: {essayA_content}
//Essay B: {essayB_content}
---
Prompt for Comparative Judgment
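The pairwise preferences elicited by this prompt are typically aggregated into a latent quality scale with a comparative-judgment model in the Thurstone/Rasch tradition. A minimal Bradley–Terry-style sketch, with hypothetical function and variable names, assuming a matrix of win counts has been collected (and that every essay wins at least one comparison):

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """MM algorithm for Bradley-Terry strengths; wins[i, j] = times essay i beat essay j."""
    n = wins.shape[0]
    p = np.ones(n)                                    # initial strengths
    for _ in range(iters):
        for i in range(n):
            w_i = wins[i].sum()                       # total wins of essay i
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)  # comparisons, weighted by strengths
            p[i] = w_i / denom
        p /= p.sum()                                  # fix the overall scale
    return np.log(p)                                  # log-strengths as a quality scale
```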
## Appendix C Example of Rubric
* Basic-type Rubric
Ideas
Score 3: Tells a story with ideas that are clearly focused on the topic and are thoroughly developed with specific, relevant details.
Score 2: Tells a story with ideas that are somewhat focused on the topic and are developed with a mix of specific and/or general details.
Score 1: Tells a story with ideas that are minimally focused on the topic and developed with limited and/or general details.
Score 0: Ideas are not focused on the task and/or are undeveloped.
---
* EGD-type Rubric
**Ideas**

**Score 3:**
- The essay presents a narrative that is directly aligned with the prompt, showcasing a deep understanding of the concept of patience.
- Ideas are not only relevant but are also enriched with vivid, specific details that enhance the story, making it engaging and illustrative of the theme.
- The narrative structure is coherent, with a clear beginning, development, and conclusion that collectively underscore the significance of patience.
- The writer effectively uses descriptive language and personal reflections to convey emotions and insights, making the theme of patience resonate with the reader.

**Score 2:**
- The essay addresses the prompt, but the connection to the theme of patience may be less direct or slightly obscured by less relevant details.
- Ideas are generally focused on the topic of patience, but the narrative may include a mix of specific and general details, diluting the overall impact.
- The narrative demonstrates an attempt to explore the theme of patience, but the development of ideas may lack depth or fail to fully captivate the reader.
- While the essay aims to convey a message about patience, the execution may be uneven, with some parts more effectively illustrated than others.

**Score 1:**
- The essay makes a minimal effort to address the prompt, with the theme of patience only loosely connected to the narrative.
- Ideas related to patience are underdeveloped and rely heavily on generalities, lacking the specific details necessary to bring the story to life.
- The narrative may struggle to maintain focus, with tangential or irrelevant details detracting from the theme of patience.
- The exploration of patience is superficial, offering little insight or reflection, and fails to engage the reader meaningfully.

**Score 0:**
- The essay fails to address the prompt, with the narrative either not focused on patience or completely lacking in development of the theme.
- Ideas are scattered or absent, with no discernible attempt to explore the concept of patience in any meaningful way.
- The narrative lacks coherence, making it difficult to identify a central theme or purpose related to patience.
- The essay does not meet the basic requirements of the task, offering no insight, reflection, or narrative related to patience.
---
* ESE-type Rubric
**Ideas**
**Score 3 Descriptors and Examples:**
- Ideas are clearly focused on the topic of patience, demonstrating a deep understanding and engagement with the concept.
- The narrative is thoroughly developed with specific, relevant details that enhance the story and provide insight into the theme of patience.
- Examples:
  - Essay1 vividly describes the frustration of waiting in line for food, the anticipation, and the eventual reward of patience, culminating in receiving a warm cookie. This story effectively uses specific details like the empty food trays and the aroma of pizza to illustrate the experience of patience.
  - Essay2 captures the excitement and patience involved in fishing, detailing the struggle with a large fish and the eventual loss, highlighting patience in the face of disappointment.
  - Essay3 focuses on the anticipation and long wait for a ride at an amusement park, using specific details like the length of the car ride and the queue for the ride to illustrate the theme of patience and the eventual payoff of an enjoyable experience.
**Score 2 Descriptors and Examples:**
- Ideas are somewhat focused on the topic of patience but may include some irrelevant details or slightly off-topic content.
- The narrative is developed with a mix of specific and general details, which sometimes dilutes the focus or clarity of the theme of patience.
- Examples:
  - Essay1 discusses the concept of patience in the context of waiting for a grade improvement, but the narrative includes a mix of specific scenarios and more general statements about patience, making the focus less clear.
  - Essay2 describes the experience of shopping in a crowded store, which is relevant to patience, but the story includes some general complaints and lacks the depth of specific details that would more effectively illustrate patience.
  - Essay3 recounts waiting in a long line at customs, a situation that requires patience. However, the narrative is more of a straightforward account with fewer vivid, specific details that would enrich the theme.

**Score 1 Descriptors and Examples:**
- Ideas are minimally focused on the topic of patience, with the narrative often veering off-topic or lacking a clear connection to the theme.
- The narrative is developed with limited and/or general details, which fails to provide a meaningful insight into the concept of patience or to engage the reader effectively.
- Examples:
  - Essay1 briefly mentions hunting and fishing as activities requiring patience but offers very little detail or development, making the connection to patience weak and the narrative underdeveloped.
  - Essay2 confuses the concept of being a patient in a medical sense with the theme of patience, resulting in a narrative that is off-topic and lacks focus.
  - Essay3 mentions waiting at a volleyball tournament but provides minimal detail about the experience, resulting in a narrative that barely touches on the theme of patience.

**Score 0 Descriptors and Examples:**
- Ideas are not focused on the task of discussing patience, with narratives that are either completely off-topic or so underdeveloped that they fail to address the theme meaningfully.
- The narrative lacks development, with no clear storyline or details related to patience, making it difficult to discern any meaningful engagement with the topic.
- Examples:
  - Essay1 rambles about various situations where one might need to be patient but lacks a coherent narrative or specific details related to personal experiences of patience, making it off-topic and undeveloped.
  - Essay2 makes general statements about patience without providing any narrative or examples, resulting in a piece that is undeveloped and fails to meet the task.
  - Essay3 expresses a personal disinterest in patience without offering a narrative or examples, making it off-topic and not focused on the task of writing about patience.
---
## Appendix D Number of Essays Sampled for Testing
Essay set #7

Label | Trait1 | Trait2 | Trait3 | Trait4
---|---|---|---|---
0.0 | 5 | 2 | 1 | 2
0.5 | 5 | 4 | 5 | 5
1.0 | 5 | 5 | 5 | 5
1.5 | 5 | 5 | 5 | 5
2.0 | 5 | 5 | 5 | 5
2.5 | 5 | 5 | 5 | 5
3.0 | 5 | 5 | 5 | 5
Total | 35 | 31 | 31 | 32
Essay set #8

Label | Trait1 | Trait2 | Trait3 | Trait4 | Trait5 | Trait6
---|---|---|---|---|---|---
1.0 | 1 | 1 | 1 | 1 | 1 | 1
1.5 | 1 | 1 | 0 | 1 | 1 | 1
2.0 | 2 | 2 | 2 | 1 | 2 | 2
2.3 | 1 | 2 | 0 | 0 | 1 | 1
2.5 | 2 | 2 | 2 | 2 | 2 | 2
2.7 | 1 | 2 | 1 | 1 | 2 | 2
3.0 | 2 | 2 | 2 | 2 | 2 | 2
3.3 | 2 | 2 | 2 | 2 | 2 | 2
3.5 | 2 | 2 | 2 | 2 | 2 | 2
3.7 | 2 | 2 | 2 | 2 | 2 | 2
4.0 | 2 | 2 | 2 | 2 | 2 | 2
4.3 | 2 | 2 | 2 | 2 | 2 | 2
4.5 | 2 | 2 | 2 | 2 | 2 | 2
4.7 | 2 | 2 | 2 | 2 | 2 | 2
5.0 | 2 | 2 | 2 | 2 | 2 | 2
5.3 | 1 | 1 | 2 | 1 | 0 | 0
5.5 | 2 | 1 | 2 | 1 | 1 | 1
5.7 | 1 | 0 | 1 | 0 | 0 | 0
6.0 | 1 | 1 | 1 | 1 | 1 | 1
Total | 31 | 31 | 30 | 27 | 29 | 29
engineers), especially during Covid lucas2021mindful, we are also interested in working with the self-connection scale by barrett2015validation.
## 7 Conclusion
In this article, we presented the results of an intervention with live group breathing practice, framed by a weekly self-development topic, to deepen the participants’ connection to themselves. Awareness raising happens on a neurological/unconscious level through breathwork, and on a mental/rational level through the topic presentations, through reflecting on them in group conversation, and through personal practice with the proposed tools. The quantitative and qualitative results indicate that this intervention may be helpful in improving participants’ mindful attention awareness, well-being, and self-efficacy.
There is a wide selection of wellness classes available outside of work for those who look for them, while at work there may be a few generic offerings that work on a content level, but often not on a neurophysiological or embodied level.
Software engineers have a strong background in rational thinking and work with empirical evidence, so there is a need for programs in a language that attracts software engineers who feel overwhelmed: science-based, held in a safe space, and brought to them by someone who can relate to their specific work experiences. This may help sway hesitant software engineers to try out a relaxation and recovery technique, benefitting their personal resilience and well-being and, in turn, their work performance and job satisfaction (important for retention). Consequently, we see three ways our study can have impact: 1) informing and raising awareness in the research community as well as in practice, 2) training further cohorts of software engineers and software engineering researchers and educators in restorative practices, and 3) developing tailored programs for companies and higher education that teach these techniques and frame them scientifically while still focusing on the embodiment component to increase self-connection.
The main challenge that remains is that the pace of work life is artificially high because of a perceived need for constant competition (e.g., time to market, offering better service, increasing our skills), as remarked by several participants in our study, to the point where they felt they didn’t “have time” for restorative practices. The speeding-up of life we have been witnessing over the past decades has consequences for health. In the right pattern, physical stress is healthy and makes sure that we get things done, but phases of stress need to alternate with phases of recovery (beyond sleeping 6 hours per night). When recovery is insufficient, stress wears on our physical health (adrenal fatigue), mental health (burn-out), and emotional health (depression and anxiety). Restorative practices can help us recover more quickly and become more resilient, although they do not change the underlying systemic misalignments.
Our vision is that restorative and contemplative practices can support us in recovering a stronger connection to self, such that we have the mental and emotional capacity to reflect on our values and how we live into them. We get to decide every day how we want to continue, and the constraints can be shifted, some immediately, some over time. There are systems with unhealthy dynamics in place, yes, and we can change them, because we humans are the ones who created them. If we don’t like the constant stress and time pressure, let’s change the systems and societal structures that create them. Part of that is acknowledging the tendency of the human mind to always want more (and we see how that plays out in our economy), and developing our own practice to stay present with that dass1971here. The first step towards that, from the perspective of our research, is: let’s normalise taking care of our nervous systems as much as brushing our teeth, and thereby improve our physical, mental, and emotional health. Every meeting could start with a deep breath to become present; someone could teach peers an emergency breathing technique to relax and focus before a presentation; a company could offer a well-being course that teaches breathing practices (or other restorative techniques) twice a year; a weekly meditation group could provide community support in addition to daily personal practice (and when it comes to personal practice, 5 minutes is always better than nothing). The options are many; the prioritization is an individual choice.
We leave you with a quote from a journal entry that sums up the results reported by a number of participants and seems worth acknowledging:
> Today was the last day of the 12 weeks. I took away a whole new world, that
> I am still trying to reconcile with. (…) Anyway, learnings: be conscious
> where you put your attention, and hence your energy, what the wonder
> precious moment is, that I am not different—I am unique, to put intentions
> to things, how meditation with a intention/visualization can change your
> day, that breathing can “make you float” and have psychedelic experiences,
> the forgotten joy of dancing, the power of gratefulness and that I am
> grateful for the bad stuff that happened to me (!), how important it is to
> love and be kind to oneself, to surrender to feelings rather than pushing
> them away, the power of small routines (as well as the difficulty of keeping
> them), that I am not my thoughts or my emotions (what the f*+@?!?!), (…)
> What else can I say, really? THANK YOU!!! \- participant 75, run 1, journal,
> Dec 10 2020
## 8 Data Availability
To support open science, the replication package including the raw quantitative data is available on Zenodo (https://zenodo.org/record/5082388), which links to a GitHub repository (https://github.com/torkar/rise2flow).
The qualitative responses are not available as many of them reveal very
personal experiences, deep emotions, and individual life circumstances that
might involuntarily disclose identifiable information.
###### Acknowledgements.
We thank the participants of Rise 2 Flow 1 and 2 for their trust in us to
support them in cultivating a personal practice for increased well-being, for
their dedication, and for their generous feedback. The first author thanks
Robert Feldt for a helpful discussion of available survey instruments during
the design phase of this study, and Sabine and Fritz Penzenstadler for helpful
input in conversation and action. We thank Francisco Gomes de Oliveira Neto
and Leticia Duboc for thoughtful feedback on earlier versions of this
manuscript. We thank the anonymous reviewers who gave very thorough and
thoughtful feedback on an earlier version (a shout-out especially to Reviewer
1). We appreciate you. The computations were enabled by resources provided by
the Swedish National Infrastructure for Computing (SNIC), partially funded by
the Swedish Research Council through grant agreement no. 2018-05973. Part of
this research is financed by the Area of Advance ICT at Chalmers University of
Technology under no. C-2019-0299.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* (1) Akula, B., Cusick, J.: Impact of overtime and stress on software quality. In: 4th International Symposium on Management, Engineering, and Informatics (MEI 2008), Orlando, Florida, USA (2008)
* (2) Amin, A., Basri, S., Hassan, M.F., Rehman, M.: Software engineering occupational stress and knowledge sharing in the context of global software development. In: 2011 National Postgraduate Conference, pp. 1–4. IEEE (2011)
* (3) Baer, R.A., Smith, G.T., Hopkins, J., Krietemeyer, J., Toney, L.: Using self-report assessment methods to explore facets of mindfulness. Assessment 13(1), 27–45 (2006)
* (4) Baltes, S., Ralph, P.: Sampling in software engineering research: A critical review and guidelines. CoRR abs/2002.07764 (2020). URL https://arxiv.org/abs/2002.07764
* (5) Bandura, A., Freeman, W., Lightsey, R.: Self-efficacy: The exercise of control (1999)
* (6) Bandura, A., Wessels, S.: Self-efficacy (1994)
* (7) Barajas, S., Garra, L.: Mindfulness and psychopathology: Adaptation of the mindful attention awareness scale (maas) in a spanish sample. Clínica y Salud 25(1), 49–56 (2014)
* (8) Barrett, F.S., Johnson, M.W., Griffiths, R.R.: Validation of the revised mystical experience questionnaire in experimental sessions with psilocybin. Journal of Psychopharmacology 29(11), 1182–1190 (2015)
* (9) Bech, P.: Health-related quality of life measurements in the assessment of pain clinic results. Acta Anaesthesiologica Scandinavica 43(9), 893–896 (1999)
* (10) Bernárdez, B., Durán, A., Parejo, J.A., Ruiz-Cortés, A.: A controlled experiment to evaluate the effects of mindfulness in software engineering. In: Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 1–10 (2014)
* (11) Bernárdez, B., Durán, A., Parejo, J.A., Ruiz-Cortés, A.: An experimental replication on the effect of the practice of mindfulness in conceptual modeling performance. Journal of Systems and Software 136, 153–172 (2018)
* (12) Braun, V., Clarke, V.: Thematic analysis. (2012)
* (13) Braun, V., Clarke, V.: To saturate or not to saturate? questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative research in sport, exercise and health 13(2), 201–216 (2021)
* (14) Broome, B.J.: Building shared meaning: Implications of a relational approach to empathy for teaching intercultural communication. Communication education 40(3), 235–249 (1991)
* (15) Brown, K.W., Ryan, R.M.: The benefits of being present: mindfulness and its role in psychological well-being. Journal of personality and social psychology 84(4), 822 (2003)
* (16) Brown, R.P., Gerbarg, P.L.: Sudarshan kriya yogic breathing in the treatment of stress, anxiety, and depression: part i—neurophysiologic model. Journal of Alternative & Complementary Medicine 11(1), 189–201 (2005)
* (17) Brown, S.: Speed: facing our addiction to fast and faster–and overcoming our fear of slowing down. Berkley (2014)
* (18) Brulé, D.: Just Breathe: Mastering Breathwork. Simon and Schuster (2017)
* (19) Busseri, M.A.: Examining the structure of subjective well-being through meta-analysis of the associations among positive affect, negative affect, and life satisfaction. Personality and Individual Differences 122, 68–71 (2018)
* (20) Buxton, O.M., Cain, S.W., O’Connor, S.P., Porter, J.H., Duffy, J.F., Wang, W., Czeisler, C.A., Shea, S.A.: Adverse metabolic consequences in humans of prolonged sleep restriction combined with circadian disruption. Science translational medicine 4(129), 129ra43–129ra43 (2012)
* (21) Capretz, L.F.: Personality types in software engineering. International Journal of Human-Computer Studies 58(2), 207–214 (2003)
* (22) Carmody, J., Reed, G., Kristeller, J., Merriam, P.: Mindfulness, spirituality, and health-related symptoms. Journal of psychosomatic research 64(4), 393–403 (2008)
* (23) Chalmers, D.J.: The conscious mind: In search of a fundamental theory. Oxford Paperbacks (1996)
* (24) Cheng, L., Ramchandran, S., Vatanen, T., Lietzén, N., Lahesmaa, R., Vehtari, A., Lähdesmäki, H.: An additive Gaussian process regression model for interpretable non-parametric analysis of longitudinal data. Nature Communications 10(1), 1798 (2019). DOI 10.1038/s41467-019-09785-8
* (25) Chlebak, C.M., James, S., Westwood, M.J., Gockel, A., Zumbo, B.D., Shapiro, S.L.: Mindfulness meditation & gratitude journalling. Counseling et spiritualité/Counselling and Spirituality 32(2), 79–103 (2013)
* (26) Cockburn, A.: Agile software development: the cooperative game. Pearson Education (2006)
* (27) Creswell, J.D., Pacilio, L.E., Lindsay, E.K., Brown, K.W.: Brief mindfulness meditation training alters psychological and neuroendocrine responses to social evaluative stress. Psychoneuroendocrinology 44, 1–12 (2014)
* (28) Crisp, R.: Well-being. Stanford Encyclopedia of Philosophy (2001)
* (29) Dagenais-Desmarais, V., Savoie, A.: What is psychological well-being, really? a grassroots approach from the organizational sciences. Journal of Happiness Studies 13(4), 659–684 (2012)
* (30) Dass, R.: Be here now. Three Rivers Press (CA) (1971)
* (31) Deng, Y.Q., Li, S., Tang, Y.Y., Zhu, L.H., Ryan, R., Brown, K.: Psychometric properties of the chinese translation of the mindful attention awareness scale (maas). Mindfulness 3(1), 10–14 (2012)
* (32) Derksen, F., Bensing, J., Lagro-Janssen, A.: Effectiveness of empathy in general practice: a systematic review. British Journal of General Practice 63(606), e76–e84 (2013)
* (33) Diener, E., Wirtz, D., Biswas-Diener, R., Tov, W., Kim-Prieto, C., Choi, D.w., Oishi, S.: New measures of well-being. In: Assessing well-being, pp. 247–266. Springer (2009)
* (34) Dillman, D.A., Smyth, J.D., Christian, L.M.: Internet, phone, mail, and mixed-mode surveys: the tailored design method. John Wiley & Sons (2014)
* (35) Donnelly, N., Proctor-Thomson, S.B.: Disrupted work: home-based teleworking (hbtw) in the aftermath of a natural disaster. New Technology, Work and Employment 30(1), 47–61 (2015)
* (36) Easterbrook, S.: From computational thinking to systems thinking. In: The 2nd international conference ICT for Sustainability (ICT4S), Stockholm (2014)
* (37) Elliot, D.: The Reluctant Healer. Hawk Press (2005)
* (38) Evans, S., Ferrando, S., Findler, M., Stowell, C., Smart, C., Haglin, D.: Mindfulness-based cognitive therapy for generalized anxiety disorder. Journal of anxiety disorders 22(4), 716–721 (2008)
* (39) Evans, S., Tsao, J.C., Sternlieb, B., Zeltzer, L.K.: Using the biopsychosocial model to understand the health benefits of yoga. Journal of complementary and integrative medicine 6(1) (2009)
* (40) Feldmann-Barrett, L.: Seven and a Half Lessons About the Brain. Picador, UK (2020)
* (41) Feldt, R., Torkar, R., Angelis, L., Samuelsson, M.: Towards individualized software engineering: empirical studies should collect psychometrics. In: Proceedings of the 2008 international workshop on Cooperative and human aspects of software engineering, pp. 49–52 (2008)
* (42) Fisher, M.: A theory of public well-being. BMC public health 19 (2019)
* (43) Fletcher, C., Bailey, C.: Assessing self-awareness: some issues and methods. Journal of managerial psychology (2003)
* (44) Fucci, D., Scanniello, G., Romano, S., Juristo, N.: Need for sleep: the impact of a night of sleep deprivation on novice developers’ performance. IEEE Transactions on Software Engineering (2018)
* (45) Graziotin, D., Fagerholm, F., Wang, X., Abrahamsson, P.: What happens when software developers are (un) happy. Journal of Systems and Software 140, 32–47 (2018)
* (46) Gren, L.: Standards of validity and the validity of standards in behavioral software engineering research: the perspective of psychological test theory. In: Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 1–4 (2018)
* (47) Hafner, M., Stepanek, M., Taylor, J., Troxel, W.M., Van Stolk, C.: Why sleep matters—the economic costs of insufficient sleep: a cross-country comparative analysis. Rand health quarterly 6(4) (2017)
* (48) Haus, E.L., Smolensky, M.H.: Shift work and cancer risk: potential mechanistic roles of circadian disruption, light at night, and sleep deprivation. Sleep medicine reviews 17(4), 273–284 (2013)
* (49) den Heijer, P., Koole, W., Stettina, C.J.: Don’t forget to breathe: a controlled trial of mindfulness practices in agile project teams. In: International Conference on Agile Software Development, pp. 103–118. Springer (2017)
* (50) Herbsleb, J.D., Moitra, D.: Global software development. IEEE software 18(2), 16–20 (2001)
* (51) Herrman, H., Stewart, D.E., Diaz-Granados, N., Berger, E.L., Jackson, B., Yuen, T.: What is resilience? The Canadian Journal of Psychiatry 56(5), 258–265 (2011)
* (52) Höfling, V., Moosbrugger, H., Schermelleh-Engel, K., Heidenreich, T.: Mindfulness or mindlessness? European Journal of Psychological Assessment (2011)
* (53) Hofmann, S.G., Asmundson, G.J., Beck, A.T.: The science of cognitive therapy. Behavior therapy 44(2), 199–212 (2013)
* (54) Houben, M., Van Den Noortgate, W., Kuppens, P.: The relation between short-term emotion dynamics and psychological well-being: A meta-analysis. Psychological bulletin 141(4), 901 (2015)
* (55) Jahoda, M.: Current concepts of positive mental health. New York, NY, US: Basic Books (1958). URL https://doi.org/10.1037/11258-000
* (56) James, W.: Memories and studies. New York: Longmans (1911; republished 1924)
* (57) James, W.: The principles of psychology, vol. 1. Cosimo, Inc. (2007)
* (58) Jerusalem, M., Schwarzer, R.: Skala zur allgemeinen selbstwirksamkeitserwartung. Skalen zur Erfassung von Lehrer-und Schülermerkmalen. Dokumentation der psychometrischen Verfahren im Rahmen der Wissenschaftlichen Begleitung des Modellversuchs Selbstwirksame Schulen. Berlin: Freie Universität Berlin (1999)
* (59) Jovanović, V.: Beyond the panas: Incremental validity of the scale of positive and negative experience (spane) in relation to well-being. Personality and Individual Differences 86, 487–491 (2015)
* (60) Kabat-Zinn, J.: Mindfulness-based interventions in context: past, present, and future. Clinical psychology: Science and practice 10(2), 144–156 (2003)
* (61) Kabat-Zinn, J.: Meditation is not what you think. Mindfulness 12(3), 784–787 (2021)
* (62) Kahneman, D.: Thinking, fast and slow. Macmillan (2011)
* (63) Kessler, R.C., Barber, C., Beck, A., Berglund, P., Cleary, P.D., McKenas, D., Pronk, N., Simon, G., Stang, P., Ustun, T.B., et al.: The world health organization health and work performance questionnaire (hpq). Journal of Occupational and Environmental Medicine 45(2), 156–174 (2003)
* (64) Konrath, S.: The empathy paradox: Increasing disconnection in the age of increasing connection. In: Handbook of research on technoself: Identity in a technological society, pp. 204–228. IGI Global (2013)
* (65) Kotler, S.: The Art of Impossible. Harper Wave (2021)
* (66) Krishnan, P.: A review of the non-equivalent control group post-test-only design. Nurse researcher 29(2) (2021)
* (67) Lavallée, M., Robillard, P.N.: Why good developers write bad code: An observational case study of the impacts of organizational factors on software quality. In: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, vol. 1, pp. 677–687. IEEE (2015)
* (68) Lenberg, P., Feldt, R., Wallgren, L.G.: Behavioral software engineering: A definition and systematic literature review. Journal of Systems and software 107, 15–37 (2015)
* (69) Lucas, J.J.: Mindful energy and information flow: A reflective account of self connection during covid-19. Qualitative Social Work 20(1-2), 214–221 (2021)
* (70) MacKillop, J., Anderson, E.J.: Further psychometric validation of the mindful attention awareness scale (maas). Journal of Psychopathology and Behavioral Assessment 29(4), 289–293 (2007)
* (71) Marek, T., Schaufeli, W.B., Maslach, C.: Professional burnout: Recent developments in theory and research. Routledge (2017)
* (72) Maudgalya, T., Wallace, S., Daraiseh, N., Salem, S.: Workplace stress factors and ‘burnout’ among information technology professionals: A systematic review. Theoretical Issues in Ergonomics Science 7(3), 285–297 (2006)
* (73) McElreath, R.: Statistical rethinking: A Bayesian course with examples in R and Stan. Chapman and Hall/CRC (2018)
* (74) Medvedyk, T., Antoniuk, I., Lebid, S.: Influence of stress factors on cognitive tasks performance. In: 2019 IEEE 20th International Conference on Computational Problems of Electrical Engineering (CPEE), pp. 1–4. IEEE (2019)
* (75) Meyer, A.N.: Fostering software developer productivity through awareness increase and goal-setting. Ph.D. thesis, University of Zurich (2019)
* (76) Nestor, J.: Breath: The new science of a lost art. Penguin UK (2020)
* (77) Ostberg, J.P., Graziotin, D., Wagner, S., Derntl, B.: A methodology for psycho-biological assessment of stress in software engineering. PeerJ Computer Science 6, e286 (2020)
* (78) Paananen, T., Piironen, J., Andersen, M.R., Vehtari, A.: Variable selection for Gaussian processes via sensitivity analysis of the posterior predictive distribution. In: K. Chaudhuri, M. Sugiyama (eds.) Proceedings of Machine Learning Research, _Proceedings of Machine Learning Research_ , vol. 89, pp. 1743–1752. PMLR (2019). URL http://proceedings.mlr.press/v89/paananen19a.html
* (79) Panikkar, R.: The Vedic experience: Mantramañjarī: an anthology of the Vedas for modern man and contemporary celebration. Motilal Banarsidass Publ. (1994)
* (80) Pavot, W., Diener, E.: The subjective evaluation of well-being in adulthood: Findings and implications. Ageing International 29(2), 113–135 (2004)
* (81) Penzenstadler, B.: What is your remedy to cognitive overload? IEEE Software Blog (2020). http://blog.ieeesoftware.org/2020/03/what-is-your-remedy-to-cognitive.html?m=1
* (82) Penzenstadler, B.: Rise 2 flow replication package. Zenodo (2021). URL https://zenodo.org/record/5082388
* (83) Ralph, P., Baltes, S., Adisaputri, G., Torkar, R., Kovalenko, V., Kalinowski, M., Novielli, N., Yoo, S., Devroey, X., Tan, X., et al.: Pandemic programming: How COVID-19 affects software developers and how their organizations can help. Empirical Software Engineering (2020). DOI 10.1007/s10664-020-09875-y
* (84) Rieken, B., Shapiro, S., Gilmartin, S., Sheppard, S.: How mindfulness can help engineers solve problems. Harvard Business Review (2019). https://hbr.org/2019/01/how-mindfulness-can-help-engineers-solve-problems
* (85) Riess, H.: The science of empathy. Journal of patient experience 4(2), 74–77 (2017)
* (86) Samuelson, M., Carmody, J., Kabat-Zinn, J., Bratt, M.A.: Mindfulness-based stress reduction in massachusetts correctional facilities. The Prison Journal 87(2), 254–268 (2007)
* (87) Scuffham, P.A., Vecchio, N., Whiteford, H.A.: Exploring the validity of hpq-based presenteeism measures to estimate productivity losses in the health and education sectors. Medical Decision Making 34(1), 127–137 (2014)
* (88) Selye, H.: What is stress. Metabolism 5(5), 525–530 (1956)
* (89) Seppälä, E.M., Bradley, C., Moeller, J., Harouni, L., Nandamudi, D., Brackett, M.A.: Promoting mental health and psychological thriving in university students: a randomized controlled trial of three well-being interventions. Frontiers in psychiatry 11, 590 (2020)
* (90) Shapiro, S.L., Oman, D., Thoresen, C.E., Plante, T.G., Flinders, T.: Cultivating mindfulness: effects on well-being. Journal of clinical psychology 64(7), 840–862 (2008)
* (91) Sharma, P., Thapliyal, A., Chandra, T., Singh, S., Baduni, H., Waheed, S.M.: Rhythmic breathing: immunological, biochemical, and physiological effects on health. Adv Mind Body Med 29(1), 18–25 (2015)
* (92) Sheppard, S., Gilmartin, S., Chen, H.L., Donaldson, K., Lichtenstein, G., Eris, O., Lande, M., Toye, G.: Exploring the engineering student experience: Findings from the academic pathways of people learning engineering survey (apples). tr-10-01. Center for the Advancement of Engineering Education (NJ1) (2010)
* (93) Smith, J.M.: Is computational thinking critical thinking? In: Expanding Global Horizons Through Technology Enhanced Language Learning, pp. 191–201. Springer (2021)
* (94) Squire, L., Berg, D., Bloom, F.E., Du Lac, S., Ghosh, A., Spitzer, N.C.: Fundamental neuroscience. Academic press (2012)
* (95) Sutton, A.: Measuring the effects of self-awareness: Construction of the self-awareness outcomes questionnaire. Europe’s journal of psychology 12(4), 645 (2016)
* (96) Tan, C.M., Goleman, D., Kabat-Zinn, J.: Search Inside Yourself: The Unexpected Path to Achieving Success, Happiness (and World Peace). HarperCollins (2012)
* (97) Topp, C.W., Østergaard, S.D., Søndergaard, S., Bech, P.: The who-5 well-being index: a systematic review of the literature. Psychotherapy and psychosomatics 84(3), 167–176 (2015)
* (98) Van Dam, N.T., Earleywine, M., Borders, A.: Measuring mindfulness? an item response theory analysis of the mindful attention awareness scale. Personality and Individual Differences 49(7), 805–810 (2010). DOI https://doi.org/10.1016/j.paid.2010.07.020. URL https://www.sciencedirect.com/science/article/pii/S0191886910003727
* (99) Vanhatalo, J., Riihimäki, J., Hartikainen, J., Jylänki, P., Tolvanen, V., Vehtari, A.: GPstuff: Bayesian modeling with Gaussian Processes. Journal of Machine Learning Research 14, 1175–1179 (2013)
* (100) Wagner, S., Mendez, D., Felderer, M., Graziotin, D., Kalinowski, M.: Challenges in survey research. In: Contemporary Empirical Methods in Software Engineering, pp. 93–125. Springer (2020)
* (101) Walker III, J., Pacik, D.: Controlled rhythmic yogic breathing as complementary treatment for post-traumatic stress disorder in military veterans: a case series. Medical acupuncture 29(4), 232–238 (2017)
* (102) Wasserstein, R.L., Lazar, N.A.: The asa statement on p-values: context, process, and purpose (2016)
* (103) Watson, D., Clark, L.A., Tellegen, A.: Development and validation of brief measures of positive and negative affect: the panas scales. Journal of personality and social psychology 54(6), 1063 (1988)
* (104) Westen, D.: Psychology: Mind, brain, & culture. John Wiley & Sons (1996)
## Appendix A Survey Instruments
### A.1 The MAAS instrument
The Mindful Attention Awareness Scale (MAAS) is replicated from brown2003benefits.
Instruction MAAS:
> Below is a collection of statements about your everyday experience. Using
> the 1 (almost never) - 6 (almost always) scale below, please indicate how
> frequently or infrequently you currently have each experience. Please answer
> according to what *really reflects* your experience rather than what you
> think your experience should be. Please treat each item separately from
> every other item.
| 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
I could be experiencing some emotion and not be conscious of it until some time later. | | | | | |
I break or spill things because of carelessness, not paying attention, or thinking of something else. | | | | | |
I find it difficult to stay focused on what’s happening in the present. | | | | | |
I tend to walk quickly to get where I’m going without paying attention to what I experience along the way. | | | | | |
I tend not to notice feelings of physical tension or discomfort until they really grab my attention. | | | | | |
I forget a person’s name almost as soon as I’ve been told it for the first time. | | | | | |
It seems I am “running on automatic,” without much awareness of what I’m doing. | | | | | |
I rush through activities without being really attentive to them. | | | | | |
I get so focused on the goal I want to achieve that I lose touch with what I’m doing right now to get there. | | | | | |
I do jobs or tasks automatically, without being aware of what I’m doing. | | | | | |
I find myself listening to someone with one ear, doing something else at the same time. | | | | | |
I drive places on ‘automatic pilot’ and then wonder why I went there. | | | | | |
I find myself preoccupied with the future or the past. | | | | | |
I find myself doing things without paying attention. | | | | | |
I snack without being aware that I’m eating. | | | | | |
Table 6: The Mindful Attention Awareness Scale (MAAS) brown2003benefits
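The MAAS is conventionally scored, following brown2003benefits, as the mean of the 15 items, with higher values indicating more mindful attention. A minimal sketch (the function name is ours):

```python
def maas_score(responses: list[int]) -> float:
    """Mean of the 15 MAAS items (each rated 1-6); higher = more mindful attention."""
    assert len(responses) == 15 and all(1 <= r <= 6 for r in responses)
    return sum(responses) / len(responses)
```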
### A.2 The instruments SPANE, PWB, and PTS
Diener et al. diener2009new proposed a set of related instruments in ‘New measures of well-being’ that includes the Scale of Positive And Negative Experience (SPANE), the Psychological Well-Being scale (PWB), and the Positive Thinking Scale (PTS).
Instruction SPANE:
> Please think about what you have been doing and experiencing during the past
> four weeks. Then report how much you experienced each of the following
> feelings, using the scale below. For each item, select a number from 1 (Very
> rarely or never) to 5 (Very often or always).
| 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
Positive | | | | |
Negative | | | | |
Good | | | | |
Bad | | | | |
Pleasant | | | | |
Unpleasant | | | | |
Happy | | | | |
Sad | | | | |
Afraid | | | | |
Joyful | | | | |
Angry | | | | |
Contented | | | | |
Table 7: The Scale of Positive and Negative Experiences (SPANE) diener2009new
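SPANE is conventionally scored by summing the six positive items into SPANE-P and the six negative items into SPANE-N (each ranging 6–30), with the affect balance SPANE-B = SPANE-P minus SPANE-N diener2009new. A minimal sketch (the function name is ours):

```python
POSITIVE = ["Positive", "Good", "Pleasant", "Happy", "Joyful", "Contented"]
NEGATIVE = ["Negative", "Bad", "Unpleasant", "Sad", "Afraid", "Angry"]

def spane_scores(responses: dict) -> tuple:
    """Return (SPANE-P, SPANE-N, SPANE-B) from item ratings on the 1-5 scale."""
    p = sum(responses[item] for item in POSITIVE)  # positive feelings, range 6-30
    n = sum(responses[item] for item in NEGATIVE)  # negative feelings, range 6-30
    return p, n, p - n                             # balance ranges from -24 to +24
```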
Instruction PWB:
> Below are 8 statements with which you may agree or disagree. Using the 1
> (Strongly disagree) – 7 (Strongly agree) scale below, indicate your
> agreement with each item by indicating that response for each statement.
| 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---
I lead a purposeful and meaningful life. | | | | | | |
My social relationships are supportive and rewarding. | | | | | | |
I am engaged and interested in my daily activities | | | | | | |
I actively contribute to the happiness and well-being of others | | | | | | |
I am competent and capable in the activities that are important to me | | | | | | |
I am a good person and live a good life | | | | | | |
I am optimistic about my future | | | | | | |
People respect me | | | | | | |
Table 8: The Psychological Well-Being scale (PWB) diener2009new
Instruction PTS:
> The following items are to be answered “Yes” or “No.” Write an answer next
> to each item to indicate your response.
| Yes | No
---|---|---
I see my community as a place full of problems. | |
I see much beauty around me. | |
I see the good in most people. | |
When I think of myself, I think of many shortcomings. | |
I think of myself as a person with many strengths. | |
I am optimistic about my future. | |
When somebody does something for me, I usually wonder if they have an ulterior motive. | |
When something bad happens, I often see a “silver lining,” something good in the bad event. | |
I sometimes think about how fortunate I have been in life. | |
When good things happen, I wonder if they might have been even better. | |
I frequently compare myself to others. | |
I think frequently about opportunities that I missed. | |
When I think of the past, the happy times are most salient to me. | |
I savor memories of pleasant past times. | |
I regret many things from my past. | |
When I see others prosper, even strangers, I am happy for them. | |
When I think of the past, for some reason the bad things stand out. | |
I know the world has problems, but it seems like a wonderful place anyway. | |
When something bad happens, I ruminate on it for a long time. | |
When good things happen, I wonder if they will soon turn sour. | |
When I see others prosper, it makes me feel bad about myself. | |
I believe in the good qualities of other people. | |
Table 9: The Positive Thinking Scale (PTS) diener2009new
### A.3 Self Efficacy
The instrument was developed by Jerusalem and Schwarzer jerusalem1999skala based on Bandura et al.’s bandura1999self self-efficacy model. It is used to assess the individual stress resilience of the participants and encompasses ten items, each offering a positively phrased statement on change, challenges, or unexpected circumstances, which the participant rates as “Not true” (1), “Hardly true” (2), “Rather true” (3), or “Exactly true” (4).
Instruction:
> Please rate the following statements on the basis of the given scale and
> tick as appropriate:
| 1 | 2 | 3 | 4
---|---|---|---|---
When problems arise, I find ways to carry through. | | | |
I always succeed in solving difficult problems, if I try. | | | |
It does not give me any difficulty to realize my intentions and goals. | | | |
In unexpected situations I always know how to behave. | | | |
Even with surprising events, I believe that I can handle them well. | | | |
I can easily face difficulties because I can always trust my abilities. | | | |
Whatever happens, I’ll be fine. | | | |
For every problem I can find a solution. | | | |
When a new thing comes to me, I know how to handle it. | | | |
If a problem arises, I can do it on my own. | | | |
Table 10: Self-efficacy instrument by Jerusalem and Schwarzer jerusalem1999skala
### A.4 Perceived Productivity
The HPQ (http://www.hcp.med.harvard.edu/hpq) measures perceived productivity in two ways: first, with an eight-item scale (summative, with multiple reversed indicators) that assesses overall and relative performance, and second, with eleven-point (0–10) general ratings of participants’ own performance as well as the typical performance of similar workers.
Instructions PP:
> The next questions are about the time you spent during your hours at work in
> the past 4 weeks (28 days). Select the one response for each question that
> comes closest to your experience from “None of the time” (1) to “All of the
> time” (5).
| 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
How often was your performance higher than most workers on your job? | | | | |
How often was your performance lower than most workers on your job? | | | | |
How often did you do no work at times when you were supposed to be working? | | | | |
How often did you find yourself not working as carefully as you should? | | | | |
How often was the quality of your work lower than it should have been? | | | | |
How often did you not concentrate enough on your work? | | | | |
How often did health problems limit the kind or amount of work you could do? | | | | |
Table 11: Perceived Productivity from the HPQ
* On a scale from 0 to 10, where 0 is the worst job performance anyone could have at your job and 10 is the performance of a top worker, how would you rate the usual performance of most workers in a job similar to yours?
* Using the same 0-to-10 scale, how would you rate your usual job performance over the past year or two?
* Using the same 0-to-10 scale, how would you rate your overall job performance on the days you worked during the past 4 weeks (28 days)?
* How would you compare your overall job performance on the days you worked during the past 4 weeks (28 days) with the performance of most other workers who have a similar type of job?
  - You were a lot better than other workers
  - You were somewhat better than other workers
  - You were a little better than other workers
  - You were about average
  - You were a little worse than other workers
  - You were somewhat worse than other workers
  - You were a lot worse than other workers
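From these ratings, HPQ-based presenteeism measures can be derived (cf. Scuffham et al., entry 87 in the references). The sketch below follows common HPQ scoring conventions as we understand them, not a procedure spelled out in this paper: absolute presenteeism rescales the 0–10 self-rating to 0–100, and relative presenteeism is the ratio of one’s own rating to that of comparable workers, conventionally restricted to [0.25, 2.0]:

```python
def absolute_presenteeism(own_rating: int) -> int:
    """Absolute presenteeism: the 0-10 self-rating rescaled to 0-100."""
    return own_rating * 10

def relative_presenteeism(own_rating: int, others_rating: int) -> float:
    """Relative presenteeism: own performance relative to comparable workers."""
    ratio = own_rating / max(others_rating, 1)  # guard against division by zero
    return min(max(ratio, 0.25), 2.0)           # conventional clamping range
```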
### A.5 The WHO-5 instrument
The 5-item World Health Organization Well-Being Index (WHO-5, see Tab. 12) is a short, generic global rating scale measuring subjective well-being. Because the WHO considers positive well-being to be another term for mental health jahoda, the WHO-5 contains only positively phrased items; its use is recommended by bech1999health.
Instruction:
> Please indicate for each of the five statements which is closest to how you
> have been feeling over the last week from “At no time” (1) to “All of the
> time” (6). Over the last week:
| 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
I have felt cheerful and in good spirits. | | | | | |
I have felt calm and relaxed. | | | | | |
I have felt active and vigorous. | | | | | |
I woke up feeling fresh and rested. | | | | | |
My daily life has been filled with things that interest me. | | | | | |
Table 12: WHO-5
## Appendix B Model designs
### B.1 Gaussian Process model
Below is the model specification for modeling the weekly or daily trends using
a Gaussian Process.
$$
\begin{aligned}
\begin{bmatrix}\mathrm{Q}1_{i}\\ \vdots\\ \mathrm{Q}5_{i}\end{bmatrix} &\sim \mathrm{Cumulative}\!\left(\begin{bmatrix}\phi_{\mathrm{Q}1,i}\\ \vdots\\ \phi_{\mathrm{Q}5,i}\end{bmatrix},\ \mathbf{S}\right) &&\text{[likelihood]}\\
\operatorname{logit}(\phi_{\mathrm{Q}\{1,\ldots,5\},i}) &= \gamma_{\mathrm{TIME}[i]}+\alpha_{\mathrm{ID}[i]} &&\text{[linear model]}\\
\begin{bmatrix}\gamma_{1}\\ \vdots\\ \gamma_{n}\end{bmatrix} &\sim \mathrm{MVNormal}\!\left(\begin{bmatrix}0\\ \vdots\\ 0\end{bmatrix},\ \mathbf{K}\right) &&\text{[prior Gaussian process]}\\
\mathbf{K}_{ij} &= \tau^{2}\exp\!\left(-T_{ij}^{2}/2\rho^{2}\right) &&\text{[covariance matrix $\mathcal{GP}$]}\\
\tau &\sim \mathrm{Weibull}(2,1) &&\text{[prior std dev $\mathcal{GP}$]}\\
\rho &\sim \text{Inv-Gamma}(4,1) &&\text{[prior length-scale $\mathcal{GP}$]}\\
\mathbf{S} &= \mathrm{diag}(\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}5})\,\mathbf{R}\,\mathrm{diag}(\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}5}) &&\text{[covariance matrix]}\\
\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}5} &\sim \mathrm{Weibull}(2,1) &&\text{[prior std dev among questions]}\\
\mathbf{R} &\sim \mathrm{LKJ}(2) &&\text{[prior correlation matrix]}\\
\alpha_{\mathrm{ID}[i]} &\sim \mathrm{Normal}(\bar{\alpha},\sigma_{\mathrm{ID}}) &&\text{[adaptive prior]}\\
\bar{\alpha} &\sim \mathrm{Normal}(0,2) &&\text{[hyperprior avg ID]}\\
\sigma_{\mathrm{ID}} &\sim \mathrm{Weibull}(2,1) &&\text{[hyperprior std dev of IDs]}
\end{aligned}
$$
For the weekly trend, on Line $1$ we assume a Cumulative likelihood where we model all questions’ covariance using a covariance matrix $\mathbf{S}$. The linear model on the next line uses a $\operatorname{logit}$ link function, as is the default, and then models time, $\gamma$, with a Gaussian Process ($\mathcal{GP}$), with a varying intercept $\alpha$ for subjects.
Line $3$ places a multivariate normal prior on the $\mathcal{GP}$, while Lines $4$–$6$ declare its covariance matrix, a prior for the standard deviation, and a prior for the length-scale of the $\mathcal{GP}$.
On Line $7$ the covariance matrix $\mathbf{S}$ is declared. Then priors for the standard deviations among questions and for the correlation matrix $\mathbf{R}$ are declared (Lines $8$–$9$).
Finally, Lines $10$–$12$ declare an adaptive prior for the varying intercept among subjects, with hyperpriors for the average subject (Line $11$) and the standard deviation across subjects (final line).
The same model can be used for the daily trend. However, only one question was asked daily, so the covariance between questions does not need to be modeled and, hence, Lines $7$–$9$ can be removed. Additionally, a suitable length-scale prior for the daily data is $\text{Inv-Gamma}(1.6,0.1)$.
As is evident from the reproducibility package, prior predictive checks were conducted and the combination of priors was uniform on the outcome scale.
### B.2 Dummy variable regression model
Recall, that for the dummy variable regression models (DVRMs) each instrument
(MAAS, SPANE, etc.) was modeled separately with the time ($t_{0}$ vs. $t_{1}$)
used as an indicator (predictor). Four population-level effects (age, gender,
occupation, and living conditions) and one group-level effect (subject) were
used as predictors.
$$
\begin{aligned}
\begin{bmatrix}\mathrm{Q}1_{i}\\ \vdots\\ \mathrm{Q}n_{i}\end{bmatrix} &\sim \mathrm{Cumulative}\!\left(\begin{bmatrix}\phi_{\mathrm{Q}1,i}\\ \vdots\\ \phi_{\mathrm{Q}n,i}\end{bmatrix},\ \mathbf{S}\right) &&\text{[likelihood]}\\
\mathbf{S} &= \mathrm{diag}(\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}n})\,\mathbf{R}\,\mathrm{diag}(\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}n}) &&\text{[covariance matrix]}\\
\sigma_{\mathrm{Q}1},\ldots,\sigma_{\mathrm{Q}n} &\sim \mathrm{Weibull}(2,1) &&\text{[prior std dev among questions]}\\
\mathbf{R} &\sim \mathrm{LKJ}(2) &&\text{[prior correlation matrix]}\\
\operatorname{logit}(\phi_{\mathrm{Q}\{1,\ldots,n\},i}) &= \alpha\cdot\mathrm{AGE}+\gamma\cdot\mathrm{GENDER}+\omega\cdot\mathrm{OCCUPATION} &&\\
&\quad\ +\lambda\cdot\mathrm{LIVING}+\tau\cdot\mathrm{TIME}+\iota_{\mathrm{ID}[i]} &&\text{[linear model]}\\
\alpha,\gamma,\omega,\lambda,\tau &\sim \mathrm{Normal}(0,3) &&\text{[priors population-level effects]}\\
\iota_{\mathrm{ID}[i]} &\sim \mathrm{Normal}(\bar{\alpha},\sigma_{\mathrm{ID}}) &&\text{[adaptive prior]}\\
\bar{\alpha} &\sim \mathrm{Normal}(0,2) &&\text{[hyperprior avg ID]}\\
\sigma_{\mathrm{ID}} &\sim \mathrm{Weibull}(2,1) &&\text{[hyperprior std dev of IDs]}
\end{aligned}
$$
For each instrument we assumed a Cumulative likelihood where all questions’ covariance was modeled by a covariance matrix $\mathbf{S}$. On Line $2$ the covariance matrix $\mathbf{S}$ is declared, and priors for the standard deviations among questions and for the correlation matrix $\mathbf{R}$ are declared on Lines $3$–$4$.
The linear model on the next two lines uses a $\operatorname{logit}$ link function, as is the default, and declares five population-level parameters and a varying intercept $\iota$ for subjects. On Line $7$ priors for the population-level parameters are declared.
Finally, on Lines $8$–$10$ an adaptive prior with hyperpriors is declared for the varying intercept $\iota$.
The only thing that differs between the instruments is the number of questions asked, which implies that the covariance matrix $\mathbf{S}$ differs in size accordingly.
Additionally, for one instrument, SE, two questions were modeled with a $\mathrm{Bernoulli}$ likelihood because they were answered on two levels.
As is evident from the reproducibility package, prior predictive checks were conducted and the combination of priors was uniform on the outcome scale.
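To make the Cumulative likelihood used in both models concrete: with a $\operatorname{logit}$ link, $K$ ordered cutpoints divide the latent scale into $K+1$ response categories, and the category probabilities are differences of logistic CDF values. A minimal sketch (the cutpoint values below are illustrative, not estimates from our data):

```python
import numpy as np

def cumulative_logit_probs(eta: float, cutpoints: np.ndarray) -> np.ndarray:
    """Category probabilities for an ordinal outcome given linear predictor eta."""
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    cdf = np.concatenate(([0.0], expit(cutpoints - eta), [1.0]))  # P(Y <= k)
    return np.diff(cdf)                                           # P(Y = k), sums to 1

# e.g., a 1-7 Likert item needs six cutpoints:
probs = cumulative_logit_probs(eta=0.5, cutpoints=np.linspace(-2.5, 2.5, 6))
```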
## Appendix C Detailed Findings: Significant Effects of Other Predictors
To show that the experiments of run 1 and run 2 confirm the same general tendencies, we compare the underlying latent scales in Fig. 11. The similar curves, with similar centers of the peak, show that the two different lengths of the experiment pose no threat to validity. In addition, combining the two runs gives the model more certainty, which makes the results more reliable. Had we analysed the two runs separately, there would be more uncertainty in both individual models, but this was not necessary given the agreement of the latent scales.
Figure 11: Underlying latent scale for outcome, given experimental session X
### C.1 Mindfulness Attention Awareness Scale
The MAAS instrument (App. A.1) consisted of $15$ statements to agree or disagree with. Eleven of the ratings indicated a significant difference between $t_{0}$ and $t_{1}$: Q$1$–$8$, Q$11$–$12$, and Q$14$. In all these cases the effect was negative, i.e., the responses were higher at $t_{0}$ than at $t_{1}$ (please see Fig. 6). If we look at the other predictors, age and gender did not have a significant effect, while occupation was significant (negative) for Q$2$, i.e., “I break or spill things because of carelessness, not paying attention, or thinking of something else.”
Figure 12: MAAS density plots computed from posterior draws. The densities are cut off at 95% and the shaded area is the 50% uncertainty interval. A number of questions do not cross zero (crossing zero would indicate no observed effect).
Additionally, the predictor living condition was significant (negative) in
Q$1$–$3$, $8$, and $12$ (items listed in App. A.1).
### C.2 Scale of Positive And Negative Experiences
For the SPANE items, see App. A.2. The results for the predictor time are in
Fig. 9.
Below we summarize the significant effects of the other predictors. In all the following predictor tables, a $+$ means that the item was rated higher for that variable, and a $-$ means that the item was rated lower for that variable.
For gender, a $-$ means that females rated themselves more negatively than males, and a $+$ means that females rated themselves more positively. This is not directly visible from the table below; it requires knowing how the data were coded inside the model. For this reason we moved these tables to the appendix: they are not needed to follow the narrative of the article, but they can be considered interesting observations.
Question | Age | Gender | Occupation | Living conditions
---|---|---|---|---
Q$3$ | | $-$ | |
Q$6$ | | $-$ | |
Q$7$ | | $-$ | |
Q$9$ | $+$ | | |
In summary, the higher the age, the higher the response in Q$9$. Concerning gender, males answered with higher values in Q$3$, Q$6$, and Q$7$.
### C.3 Psychological Well-Being
Figure 13: The effects of $t$ for the PWB instrument. The temporal variable
$t$ clearly has an effect (positive) in all questions except Q3.
Figure 13 shows the effects for the predictor time. The temporal variable $t$
clearly has an effect (positive) in all questions except Q3.
Below we summarize the significant effects of the other predictors for PWB (for the items, see App. A.2). The same logic applies here as in the previous table; however, one new effect is present, i.e., occupation. In Q$3$ (“I am engaged and interested in my daily activities.”), participants whose occupation was student replied with higher responses than others.
Question | Age | Gender | Occupation | Living conditions
---|---|---|---|---
Q$1$ | | | | $+$
Q$2$ | $-$ | $-$ | |
Q$3$ | $+$ | | $-$ |
Q$4$ | | $-$ | |
Q$7$ | | | | $-$
### C.4 Positive Thinking Scale
For the PTS items, see App. A.2. The results for the predictor time are given
below in Fig. 14.
Figure 14: The PTS results for the predictor time.
Below we summarize the significant effects of the other predictors. Please refer to App. A.2 for the respective survey items.
Question | Age | Gender | Occupation | Living conditions
---|---|---|---|---
Q$1$ | | | $-$ | $-$
Q$3$ | | | $+$ |
Q$11$ | $-$ | $-$ | |
Q$16$ | | | | $+$
Q$17$ | | | $-$ | $-$
Q$19$ | | | | $-$
### C.5 Self Efficacy
The SE instrument (App. A.3) consisted of ten questions (Likert $1$–$4$).
Questions $6$, $7$, and $9$ showed a significant effect (positive), i.e.,
higher responses at $t_{1}$, see Fig. 15.
Figure 15: SE effects for predictor time.
* Q$6$
I can easily face difficulties because I can always trust my abilities.
* Q$7$
Whatever happens, I’ll be fine.
* Q$9$
When a new thing comes to me, I know how to handle it.
Concerning the other predictors, no significant effects were present, so it
is not clear which predictors drove the significant difference between $t_{0}$
and $t_{1}$.
### C.6 Perceived Productivity
The HPQ part consisted of eleven questions (with Likert scales varying by
question, going up to $5$, $7$, or $10$; see App. A.4). The results
for the predictor time are given in Fig. 16. Only Q$1$ (How often was your
performance higher than most workers on your job?) shows a significant
difference when moving from $t_{0}$ to $t_{1}$ (lower responses at $t_{1}$).
Figure 16: The PP results for the predictor time.
Below we summarize the significant effects of the other predictors: Q$3$
(How often did you do no work at times when you were supposed to be working?)
showed a higher score for female gender, and Q$5$ (How often was the quality
of your work lower than it should have been?) showed a lower score when the
living condition was shared with a partner or family, as opposed to living
alone.
Question | Age | Gender | Occupation | Living conditions
---|---|---|---|---
Q$3$ | | $+$ | |
Q$5$ | | | | $-$
### C.7 Predictor Number of Sessions
Table 13 below shows an overview of all significant effects for the total
number of sessions as predictor. The first column is an ID; the rowname
indicates the variable of the instrument, e.g. MAASQ116_total_sessions refers
to MAAS question 1 (Likert scale 1–6) for total sessions attended. The next
two columns give the estimate and the estimation error. Note that for SPANE
the results seem to alternate in sign, but looking back at the instrument
(see Sec. A.2), half of the items are reverse-scored in exactly the pattern
reflected here.
| rowname | Estimate | Est.Error | Q2.5 | Q97.5
---|---|---|---|---|---
1 | MAASQ116_total_sessions | -0.3155496 | 0.1216486 | -0.5569970 | -0.08184895
2 | MAASQ216_total_sessions | -0.3804576 | 0.1273390 | -0.6411205 | -0.13837388
3 | MAASQ316_total_sessions | -0.2634123 | 0.1231561 | -0.5100075 | -0.02042572
5 | MAASQ516_total_sessions | -0.3689709 | 0.1167109 | -0.6023460 | -0.14523413
6 | MAASQ616_total_sessions | -0.2894895 | 0.1305140 | -0.5477286 | -0.03658702
7 | MAASQ716_total_sessions | -0.3647491 | 0.1231923 | -0.6058283 | -0.12399760
8 | MAASQ816_total_sessions | -0.2611191 | 0.1214209 | -0.5011597 | -0.02438610
10 | MAASQ1016_total_sessions | -0.2886498 | 0.1174016 | -0.5226733 | -0.05928175
11 | MAASQ1116_total_sessions | -0.4540885 | 0.1211362 | -0.6957715 | -0.21564968
12 | MAASQ1216_total_sessions | -0.2509503 | 0.1246984 | -0.4957514 | -0.01287479
14 | MAASQ1416_total_sessions | -0.4358311 | 0.1179180 | -0.6693166 | -0.20832100
1 | SPANEQ115_total_sessions | 0.4662730 | 0.1511756 | 0.1756023 | 0.77767102
2 | SPANEQ215_total_sessions | -0.5187067 | 0.1341723 | -0.7911272 | -0.25860345
3 | SPANEQ315_total_sessions | 0.4918396 | 0.1524530 | 0.2054288 | 0.80508272
4 | SPANEQ415_total_sessions | -0.4509748 | 0.1308125 | -0.7134680 | -0.20059290
5 | SPANEQ515_total_sessions | 0.3955807 | 0.1311677 | 0.1416188 | 0.65872865
6 | SPANEQ615_total_sessions | -0.2643148 | 0.1243299 | -0.5096883 | -0.01981721
7 | SPANEQ715_total_sessions | 0.5689896 | 0.1411704 | 0.3003263 | 0.84980113
8 | SPANEQ815_total_sessions | -0.3191885 | 0.1221512 | -0.5628583 | -0.08297879
9 | SPANEQ915_total_sessions | -0.4594716 | 0.1445001 | -0.7530686 | -0.18126877
10 | SPANEQ1015_total_sessions | 0.3753050 | 0.1239305 | 0.1374020 | 0.61918997
11 | SPANEQ1115_total_sessions | -0.2855759 | 0.1255116 | -0.5319962 | -0.04022932
1 | PWBQ117_total_sessions | 0.3232594 | 0.1505071 | 0.03253444 | 0.6231016
2 | PWBQ217_total_sessions | 0.2971393 | 0.1408516 | 0.02316987 | 0.5816784
4 | PWBQ417_total_sessions | 0.3391010 | 0.1257622 | 0.09843479 | 0.5881914
5 | PWBQ517_total_sessions | 0.2659871 | 0.1345689 | 0.01074883 | 0.5332659
6 | PWBQ617_total_sessions | 0.3150326 | 0.1478417 | 0.02922867 | 0.6087226
7 | PWBQ717_total_sessions | 0.3061679 | 0.1298780 | 0.05548536 | 0.5639220
8 | PWBQ817_total_sessions | 0.3378056 | 0.1378315 | 0.07209965 | 0.6095512
9 | PSTQ901_total_sessions | 1.9809234 | 0.9627086 | 0.470679200 | 4.20554675
12 | PSTQ1201_total_sessions | -0.5350738 | 0.2776467 | -1.103002250 | -0.01894271
17 | PSTQ1701_total_sessions | -0.9643101 | 0.3736942 | -1.751491250 | -0.28321947
18 | PSTQ1801_total_sessions | 0.6554668 | 0.3499538 | 0.009657123 | 1.38352275
7 | SEQ714_total_sessions | 0.4327188 | 0.1503527 | 0.1492953 | 0.736231
6 | PPHQ615_total_sessions | -0.2617817 | 0.1271908 | -0.5070381 | -0.01792041
Table 13: Significant effects for total number of sessions as predictor
Table 14 below shows an overview of all significant effects for the number of
live and recorded sessions as predictor.
| rowname | Estimate | Est.Error | Q2.5 | Q97.5
---|---|---|---|---|---
1 | MAASQ116_live_sessions | -0.2939231 | 0.1254551 | -0.5422444 | -0.04608940
3 | MAASQ216_live_sessions | -0.2663390 | 0.1292418 | -0.5273328 | -0.01924983
9 | MAASQ516_live_sessions | -0.3714334 | 0.1238394 | -0.6162243 | -0.13673993
13 | MAASQ716_live_sessions | -0.3463737 | 0.1278613 | -0.5996803 | -0.10126535
19 | MAASQ1016_live_sessions | -0.2683671 | 0.1265510 | -0.5189259 | -0.02028169
21 | MAASQ1116_live_sessions | -0.3519825 | 0.1277744 | -0.6044776 | -0.10675305
22 | MAASQ1116_recorded_sessions | -0.2729695 | 0.1309911 | -0.5282038 | -0.01542422
27 | MAASQ1416_live_sessions | -0.4692584 | 0.1242340 | -0.7120738 | -0.22517830
1 | SPANEQ115_live_sessions | 0.3714696 | 0.1564535 | 0.07600304 | 0.68959905
3 | SPANEQ215_live_sessions | -0.4590504 | 0.1376351 | -0.73415957 | -0.19549265
5 | SPANEQ315_live_sessions | 0.3926880 | 0.1523264 | 0.10560065 | 0.70054970
7 | SPANEQ415_live_sessions | -0.3858643 | 0.1326585 | -0.65485220 | -0.13025333
9 | SPANEQ515_live_sessions | 0.4090131 | 0.1388661 | 0.14721700 | 0.69743993
13 | SPANEQ715_live_sessions | 0.6621751 | 0.1569447 | 0.36908735 | 0.98411335
15 | SPANEQ815_live_sessions | -0.2616480 | 0.1268337 | -0.51337150 | -0.01369651
17 | SPANEQ915_live_sessions | -0.3424188 | 0.1480671 | -0.63512853 | -0.05596512
19 | SPANEQ1015_live_sessions | 0.4365976 | 0.1324461 | 0.17729748 | 0.69863018
7 | PWBQ417_live_sessions | 0.3380077 | 0.1354551 | 0.07777354 | 0.6092814
9 | PWBQ517_live_sessions | 0.2920369 | 0.1449304 | 0.01599527 | 0.5898144
16 | PWBQ817_recorded_sessions | 0.3250622 | 0.1521206 | 0.03104731 | 0.6273903
2 | PSTQ101_recorded_sessions | -0.8114198 | 0.4127656 | -1.70676300 | -0.08103662
8 | PSTQ401_recorded_sessions | -0.6589501 | 0.3618867 | -1.41424650 | -0.01133453
17 | PSTQ901_live_sessions | 3.1336475 | 1.5388280 | 0.76178103 | 6.67567475
22 | PSTQ1101_recorded_sessions | -1.2048545 | 0.4597776 | -2.20838500 | -0.40500393
28 | PSTQ1401_recorded_sessions | 1.5137920 | 0.9497095 | 0.02391954 | 3.70106175
33 | PSTQ1701_live_sessions | -0.7760812 | 0.3848592 | -1.59662025 | -0.07400772
36 | PSTQ1801_recorded_sessions | 1.0777616 | 0.6296321 | 0.02372501 | 2.47252725
44 | PSTQ2201_recorded_sessions | 2.0895201 | 1.2682835 | 0.10201318 | 4.99020650
13 | SEQ714_live_sessions | 0.4491137 | 0.158366 | 0.1545877 | 0.7680788
22 | PPOQ117_recorded_sessions | -0.2526645 | 0.1271695 | -0.5066494 | -0.003687787
Table 14: Significant effects for number of live and recorded sessions as predictor.
# Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language
Prompt
Yongqi Wang*, Ruofan Hu*, Rongjie Huang, Zhiqing Hong, Ruiqi Li,
Wenrui Liu, Fuming You, Tao Jin, Zhou Zhao
Zhejiang University
{cyanbox, 3200102312, rongjiehuang}<EMAIL_ADDRESS>
*Equal contribution.
###### Abstract
Recent singing-voice-synthesis (SVS) methods have achieved remarkable audio
quality and naturalness, yet they lack the capability to control the style
attributes of the synthesized singing explicitly. We propose Prompt-Singer,
the first SVS method that enables attribute controlling on singer gender,
vocal range and volume with natural language. We adopt a model architecture
based on a decoder-only transformer with a multi-scale hierarchy, and design a
range-melody decoupled pitch representation that enables text-conditioned
vocal range control while keeping melodic accuracy. Furthermore, we explore
various experiment settings, including different types of text
representations, text encoder fine-tuning, and introducing speech data to
alleviate data scarcity, aiming to facilitate further research. Experiments
show that our model achieves favorable controlling ability and audio quality.
Audio samples are available at http://prompt-singer.github.io.
## 1 Introduction
Singing-voice-synthesis (SVS) systems (Chen et al., 2020; Huang et al., 2021;
Liu et al., 2022; Zhang et al., 2022b, c, 2023b; Hong et al., 2023), which aim
to generate high-fidelity singing voices given lyrics and pitch notes, have
made significant advancements in improving audio quality and naturalness in
recent years, facilitating music composition and development of entertainment
industries. However, explicit control over the style attributes of
synthesized singing, such as speaker timbre, vocal range and energy, has not
been fully studied. Although some works use fixed speaker IDs Huang et al.
(2021); Zhang et al. (2022c) or reference speech/singing segments Shen et al.
(2023); Huang et al. ; Huang et al. (2023d) to provide information on singer
identity or other style attributes, these mechanisms are not user-friendly
and lack the ability to control specific acoustic attributes explicitly.
An ideal approach to controlling the style of generated singing voices is to
use natural language instructions as style prompts: this not only achieves
precise control over specific attributes through explicit descriptions, but
also simplifies user interaction, which is convenient for non-professional
users such as musicians and video creators. However, applying natural language
style prompts in singing-voice-synthesis faces several challenges:
* •
Decoupling Melody and Vocal Range. In real-life situations, different speakers
(e.g. an elderly man and a little girl) may sing the same song within
different vocal ranges. However, pitch annotations in SVS data are each tied
to a specific singer in a certain vocal range. This coupling makes it
challenging to generate singing voices whose vocal range and timbre are
consistent with the prompt while keeping the melody accurately aligned with
the given pitch notes.
* •
Textual Representation. Although some works have explored connecting text
representations with music, speech and general audio concepts Elizalde et al.
(2023a, b); Wu et al. (2023), there is no text representation tailored to
singing style descriptions, and the optimal choice of prompt representation
for this task remains unknown.
* •
Data Scarcity. Due to the requirement of fine-grained annotations, existing
SVS datasets Liu et al. (2022); Wang et al. (2022); Huang et al. (2021); Zhang
et al. (2022a) are small in scale, typically consisting of only a few hours or
tens of hours of singing data. This not only causes limited data diversity but
also poses more challenges to learning the correlation between natural
language descriptions and data distribution.
In this paper, we propose Prompt-Singer, the first controllable SVS model with
natural language prompts to control the singer gender, vocal range and volume.
Considering the outstanding performance of recent spoken LLMs Borsos et al.
(2023); Wang et al. (2023); Huang et al. (2023d); Yang et al. (2023b) in terms
of generation and in-context learning capabilities, we adopt a decoder-only
transformer with a multi-scale hierarchy for conditional generation of
discrete codec units of the singing, together with a unit vocoder for waveform
reconstruction. To address the challenges mentioned above, we 1) design a
decoupled pitch representation with a vocal range factor and a speaker-
independent melody sequence, enabling voice range controlling while
maintaining melodic accuracy; 2) investigate various text encoders for prompt
encoding, as well as fine-tuning the encoders to seek the optimal textual
representation for this task; 3) introduce speech data to alleviate data
scarcity, and evaluate the model performance under different levels of low-
resource singing data combined with speech data. Experiments show that our
method achieves favorable style controlling accuracy on the three attributes,
while keeping good audio quality and melodic accuracy. Our contributions are
summarized as follows:
* •
We propose the first controllable SVS model with natural language prompts to
control the singer gender, vocal range, and volume of the generated singing
voice.
* •
We design a pitch representation for SVS that decouples voice range and
melody, which enables prompt-conditioned voice range manipulation while
keeping melodic accuracy.
* •
We investigate different text representations and fine-tune the text encoders
to seek optimal text representation for the prompt in this task.
* •
We alleviate data scarcity by introducing speech data, which boosts prompt-SVS
performances in low-resource scenarios.
## 2 Related Works
### 2.1 Singing Voice Synthesis
Singing-voice-synthesis aims to generate human-like singing voices from lyrics
and pitch notes, and recent deep-learning-based models have achieved
remarkable progress in synthesized voice quality. Several works Chen et al.
(2020); Zhang et al. (2022c, 2023b); Huang et al. (2022) adopt generative
adversarial networks for high-fidelity SVS. Diffsinger Liu et al. (2022)
adopts a shallow diffusion mechanism to enhance the quality of the generated
mel-spectrogram. VISinger Zhang et al. (2022b) proposes an end-to-end
architecture based on a variational autoencoder. UniSinger Hong et al. (2023)
proposes a unified framework for multiple singing-voice-related tasks based on
representation disentanglement and cross-modality information matching.
However, style control of the generated singing has not been fully studied.
Previous multi-singer systems Huang et al. (2021); Zhang et al.
(2022c) use a fixed group of IDs to indicate singer identities. NaturalSpeech
2 Shen et al. (2023) and Make-A-Voice Huang et al. (2023d) use a reference
singing or speech clip to provide holistic style information. Currently, there
is a lack of fine-grained controllable methods for SVS.
### 2.2 Instruct-guided Voice Generation
Inspired by the success in text, image and audio generation guided with
natural language instructions Brown et al. (2020); Ramesh et al. (2021); Kreuk
et al. (2022); Huang et al. (2023a, b, c), some recent works have explored
using text prompts to govern the stylistic attributes in voice synthesis.
PromptTTS Guo et al. (2023) incorporates style features from a fine-tuned BERT
into a TTS backbone with attention. InstructTTS Yang et al. (2023a) achieves a
text-controlled expressive TTS system with cross-modal representation
learning. PromptTTS 2 Leng et al. (2023) employs a variational network to
generate reference acoustic features conditioned on text features. PromptVC
Yao et al. (2023) and PromptSpeaker Zhang et al. (2023a) investigate text-
prompted voice conversion and speaker-embedding generation separately.
However, due to the data scarcity and the demand for precise pitch
controlling, research on natural-language-instructed SVS is currently lacking.
Figure 1: The pipeline of generating and fetching prompt sentence for training
data.
## 3 Prompt Generation and Fetching
Our goal is to control the singer gender, vocal range and volume in singing-
voice-synthesis with natural language prompts. Since there is no available
dataset for this task, we utilize normal SVS datasets and design a method for
generating a prompt sentence for each data item. We introduce this process in
this section.
Considering the high cost of manual annotation, we utilize a large language
model (GPT 3.5 Turbo) to generate prompt sentences. The prompt generation
mainly consists of 3 stages: 1) attribute categorization; 2) keyword and
sentence template generation and 3) prompt sentence assembling.
Figure 1(a) and (b) demonstrate the process of the first two stages.
Initially, we categorize the audio based on different attributes. The two
gender categories, male and female, are pre-annotated in the datasets. For
volume, we build three categories of “low”, “medium”, and “high”, indicating
the amplitude root mean square (RMS) ranges of $[0.02,0.04]$, $[0.07,0.10]$
and $[0.16,0.20]$, respectively. Additionally, we can rescale audio into
different ranges dynamically during training. For vocal range, we set two
categories of “high” and “low”, and use the average F0 of the voiced part as
the criterion for classification, with the threshold being 125 Hz for male
singers and 305 Hz for female singers.
After categorization, we use the LLM to generate a set of 4-7 synonyms for
each category as the keywords. We further utilize the LLM to generate prompt
sentence templates for each single attribute, where each template contains a
placeholder to be replaced with the keywords (such as Generate a song by a
[gender] singer). We also generate a small number of prompt sentences
targeting specific categories (such as Could you synthesize a song that’s as
powerful as a thunderstorm? for large volume). We obtain approximately 50
sentence templates for each attribute after manual selection. These single-
attribute templates can be further combined to create multi-attribute
templates by prompting the LLM. We provide sample sentence templates and
keywords in Appendix A.
The prompt sentence assembling stage takes place dynamically during training.
Figure 1(c) illustrates the pipeline of fetching a prompt sentence. We first
obtain the pre-annotated labels for the data item, and in order to make the
model adaptable to prompts with varying numbers of attributes, one or two
labels are randomly dropped with probabilities $p_{1}$ and $p_{2}$. We then
randomly fetch a keyword and a sentence template from the pre-generated sets,
and replace the placeholder with the keyword to get the final prompt sentence.
Note that we do not control vocal range independently in the absence of
gender, as its boundaries differ between male and female singers. For fair
comparison, we use pre-generated, fixed prompts for each sample in the
evaluation.
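A minimal sketch of this dynamic assembling step is given below; `keywords` and `templates` stand for the pre-generated keyword sets and sentence templates, and all names and data layouts are illustrative rather than taken from a released implementation.

```python
import random

def fetch_prompt(labels, keywords, templates, p1=0.05, p2=0.05):
    """Assemble one prompt sentence for a training item.

    labels:    e.g. {"gender": "female", "volume": "high", "range": "low"}
    keywords:  keywords[attribute][category] -> list of synonym keywords
    templates: templates[frozenset_of_attributes] -> list of templates whose
               placeholders are written as "[gender]", "[volume]", "[range]"
    """
    attrs = dict(labels)
    # Drop two labels with probability p2, one with probability p1.
    r = random.random()
    n_drop = 2 if r < p2 else 1 if r < p1 + p2 else 0
    for _ in range(min(n_drop, len(attrs) - 1)):
        attrs.pop(random.choice(sorted(attrs)))
    # Vocal range is never controlled without gender (its boundary differs).
    if "range" in attrs and "gender" not in attrs:
        attrs.pop("range")
    # Fetch a matching template and fill each placeholder with a keyword.
    template = random.choice(templates[frozenset(attrs)])
    for attr, category in attrs.items():
        keyword = random.choice(keywords[attr][category])
        template = template.replace(f"[{attr}]", keyword)
    return template
```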
Figure 2: Model architecture of Prompt-Singer and the multi-scale transformer.
## 4 Prompt-Singer
In this section, we introduce the model design of Prompt-Singer. The overall
architecture of our model is illustrated in Figure 2(a). It is primarily
composed of two sub-modules: 1) the multi-scale transformer, which generates
discrete acoustic units conditioned on inputs of natural language prompt,
lyrics with duration, and pitch information; and 2) the unit vocoder, which
maps the generated acoustic units to an audio waveform.
In the following subsections, we introduce the input and output
representations of the model in Section 4.1 to 4.3, model architecture in
detail in Section 4.5 and 4.6, together with our method for data scarcity
alleviation in Section 4.4.
### 4.1 Voice Representation
The acoustic units used as the prediction targets of the transformer are
generated by SoundStream Zeghidour et al. (2021), a neural codec with an
encoder-decoder architecture and a residual vector quantizer (RVQ). Such a
codec model can produce discrete compressed representations of audio by
employing a convolutional encoder followed by the RVQ, and these
representations can be used to reconstruct waveforms with the decoder. An
acoustic unit sequence can be represented as
$\mathbf{a}=[a_{1}^{1},a_{1}^{2},\dots,a_{1}^{C},a_{2}^{1},\dots,a_{T}^{C}]$, with
$a_{i}^{j}\in\{0,1,\dots,K_{a}-1\}$ for all $1\leq i\leq T$, $1\leq j\leq C$,
where $T$, $C$ and $K_{a}$ are the number of frames, the number of residual
codebooks and the codebook size, respectively.
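As a small illustration of this frame-major flattening (with toy sizes; `codes` is a stand-in for real codec output):

```python
import numpy as np

T, C, K_a = 4, 3, 1024                          # frames, codebooks, codebook size
codes = np.random.randint(0, K_a, size=(T, C))  # stand-in for RVQ codec output

# Frame-major flattening: a_1^1, ..., a_1^C, a_2^1, ..., a_T^C
flat = codes.reshape(-1)
assert flat.shape == (T * C,)
assert flat[C] == codes[1, 0]                   # first unit of the second frame
```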
### 4.2 Textual Representation
The textual input for our model comprises two components: 1) lyrics, which
correspond to the content of the generated song, and 2) natural language
prompt, which controls the style of the singing. We introduce their
representations in this subsection.
For lyrics, we first phonemize the text and obtain corresponding phoneme-level
duration in seconds from dataset annotations or a forced-alignment tool
McAuliffe et al. (2017). We then convert the duration to frame level based on
a preset frame rate, and regulate the length of the phoneme sequence with this
duration by duplicating phonemes. We set the frame rate of phonemes to be the
same as acoustic units, making it easier for the model to learn the length
alignment. The regulated phoneme sequence is then embedded by a look-up table
(LUT) and fed to the transformer.
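A small sketch of this length regulation follows; a 50 Hz frame rate matches 24 kHz audio with the codec's downsampling rate of 480 (Section 5.2), and the function name is illustrative.

```python
def regulate_phonemes(phonemes, durations_sec, frame_rate=50):
    """Duplicate each phoneme to cover its duration in frames."""
    frames = []
    for ph, dur in zip(phonemes, durations_sec):
        frames.extend([ph] * round(dur * frame_rate))
    return frames

# regulate_phonemes(["n", "i"], [0.12, 0.20]) -> 6 "n" frames + 10 "i" frames
```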
For the natural language prompt, we utilize a parameter-frozen text encoder to
extract a semantic representation, followed by a linear layer for mapping its
dimension to fit the transformer. To explore the impact of different text
representations on style controlling, we attempt three types of encoders in
our experiments: 1) BERT Devlin et al. (2018), a widely-used self-supervised
text encoder trained with masked language modeling and next sentence
prediction; 2) FLAN-T5 Chung et al. (2022), the encoder of a unified text-to-
text transformer fine-tuned with instructions; and 3) CLAP Wu et al. (2023), a
text encoder through contrastive pretraining on natural language and audio. We
compare BERT and FLAN-T5 of different sizes, as well as CLAP pretrained on two
different datasets. We also fine-tune BERT-large and FLAN-T5 large using
prompts and corresponding labels. We fine-tune BERT with multi-label
prediction and have FLAN-T5 predict the label sequence corresponding to the
prompt in a text-to-text manner. Note that the prompts used in the evaluation
are not included in fine-tuning.
### 4.3 Decoupled Pitch Representation
According to the equal temperament theory Von Helmholtz (1912), humans’
perception of musical intervals corresponds to the logarithmic distance of
frequencies. This means if we multiply the fundamental frequency (F0) of the
voiced part of singing by a factor (equivalent to adding an offset in the
logarithmic domain), we can adjust the vocal range without changing the
melody. Based on this principle, we decompose F0 into two components: 1)
$\bar{f_{0}}$, which is the average value of the voiced part of F0, indicating
the vocal range; and 2)
$\mathbf{\tilde{f_{0}}}=[\tilde{f_{0}^{1}},\tilde{f_{0}^{2}},...,\tilde{f_{0}^{T}}]$,
where we rescale the voiced part of the original F0 sequence to have a
specific mean value (230 Hz in our practice), indicating vocal-range-invariant
melody information. This simple yet effective representation creates an
information bottleneck, forcing the model to extract melodic and vocal range
information from the rescaled F0 sequence and average F0 factor, respectively.
In our practice, we round $\mathbf{\tilde{f_{0}}}$ and $\bar{f_{0}}$ into
integers, and use an LUT to embed them before feeding them to the transformer
backbone. Both $\mathbf{\tilde{f_{0}}}$ and $\bar{f_{0}}$ share the same
embedding space.
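The decomposition amounts to a few lines; below is a minimal sketch, assuming `f0` is a per-frame contour in Hz with 0 marking unvoiced frames.

```python
import numpy as np

def decouple_f0(f0: np.ndarray, target_mean: float = 230.0):
    """Split F0 into a vocal-range factor and a range-invariant melody."""
    voiced = f0 > 0
    f0_bar = f0[voiced].mean()                  # vocal-range factor
    f0_tilde = f0.astype(float).copy()
    # Multiplying by a constant factor shifts log-F0 by a constant offset,
    # so the melodic intervals are preserved (equal temperament).
    f0_tilde[voiced] *= target_mean / f0_bar
    # Both parts are rounded to integers before LUT embedding.
    return int(round(f0_bar)), np.rint(f0_tilde).astype(int)
```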
### 4.4 Alleviating Data Scarcity
Considering that both speech and singing are human voices in different forms,
it is intuitive that they share some commonalities in style characteristics
and distributions. Based on this point, we incorporate text-to-speech (TTS)
data into the training of the prompt SVS task to alleviate data scarcity.
Specifically, we employ the same methods as for singing to phonemize the text
and generate prompts, and use an off-the-shelf tool to extract pitch from the
speech, finally obtaining data items in the same format as SVS data.
Furthermore, we explore the feasibility of substituting speech data for
singing data in low-resource scenarios. We evaluate the model performance
under compositions of varying amounts of low-resourced SVS data with abundant
TTS data, with experiment results presented in Section 5.5.
### 4.5 Multi-Scale Transformer Architecture
The end-to-end differentiable multi-scale transformer architecture Yu et al.
(2024); Yang et al. (2023b) has exhibited remarkable capabilities in audio
synthesis and modeling intrinsic relationships between acoustic and other
modalities, as well as high efficiency of generating long sequences based on
sub-quadratic self-attention. In this work, we utilize a multi-scale
transformer derived from UniAudio Yang et al. (2023b) to serve as the backbone
of our model. It is a decoder-only transformer with a hierarchical structure
to facilitate the modeling of long sequences. This module aims to generate
discrete acoustic units of singing voices conditioned on natural language
prompts, lyrics phonemes, phoneme durations and vocal-range agnostic melody
representation, together with the vocal-range factor as intermediate output.
During training, the conditional inputs and target outputs are concatenated
into a single sequence and fed to the transformer, which models the
correlation using next-token-prediction with cross-entropy loss calculated on
the target output part. During inference, the model predicts the range factor
and acoustic units conditioned on the prefix input sequence autoregressively,
which can be formulated as:
$P_{cond}\left(\mathbf{a}\right)=P_{cond}\left(\bar{f_{0}}\right)\cdot\prod_{t=1}^{T}\prod_{c=1}^{C}P_{AR}\left(\mathbf{a}_{t}^{c}\right)$ (1)
$P_{cond}\left(*\right)=p\left(*\mid\mathbf{E}_{\mathcal{P}}(\mathcal{P}),L,\mathbf{d},\mathbf{\tilde{f_{0}}};\theta_{AR}\right)$ (2)
$P_{AR}\left(\mathbf{a}_{t}^{c}\right)=p\left(\mathbf{a}_{t}^{c}\mid\mathbf{a}_{<t},\mathbf{a}_{t}^{<c},\mathbf{E}_{\mathcal{P}}(\mathcal{P}),L,\mathbf{d},\mathbf{\tilde{f_{0}}},\bar{f_{0}};\theta_{AR}\right)$ (3)
where $\mathbf{a}$, $\mathbf{E}_{\mathcal{P}}$, $\mathcal{P}$, $L$,
$\mathbf{d}$, $\mathbf{\tilde{f_{0}}}$, $\bar{f_{0}}$ and $\theta_{AR}$ indicate
acoustic units, prompt encoder, prompt, lyrics, durations, melody
representation, vocal-range factor and model parameters, respectively, and
$t$, $c$ indicate temporal and codebook indices of the acoustic unit. Consider
the process of the transformer predicting the vocal range factor, which is
formulated by
$P_{cond}\left(\bar{f_{0}}\right)=p\left(\bar{f_{0}}\mid\mathbf{E}_{\mathcal{P}}(\mathcal{P}),L,\mathbf{d},\mathbf{\tilde{f_{0}}};\theta_{AR}\right).$ (4)
Since we assume that the average F0 value is independent of the lyrics, duration
and melody, this formula indicates our model’s capability to control the vocal
range through natural language prompts. The predicted vocal range information
is further taken as a condition for singing acoustic unit generation.
The hierarchical structure of the multi-scale transformer is illustrated in
Figure 2(b). This structure is formed by a global and a local transformer,
both of which are decoder-only transformers. For a temporal position $t$,
embeddings $z^{1:n_{q}}_{t}$ of acoustic units from different codebooks are
concatenated and fed to the global transformer for inter-frame correlation
modeling. The output hidden feature $h_{t}$ is generated autoregressively
conditioned on $h_{1:t-1}$. This hidden feature is then split according to the
original shape of the embeddings, projected by a linear layer, and added to
the input embeddings of the local transformer as a frame-level context. The
local transformer predicts acoustic units of different codebooks inside a
frame autoregressively. For non-acoustic modalities, each item is repeated
$n_{q}$ times to fit this modeling mechanism, with $n_{q}$ being the number of
codebooks.
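To make the hierarchy concrete, below is a toy PyTorch sketch of the two-level mechanism; the layer counts and hidden size are deliberately reduced (the actual model uses 20 global and 6 local layers at hidden dimension 1152, see Appendix C), and all module names and projection choices are illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class MultiScaleSketch(nn.Module):
    def __init__(self, n_q=3, vocab=1024, d=768, layers=2):
        super().__init__()
        self.n_q, self.d_code = n_q, d // n_q
        self.embed = nn.Embedding(vocab + 1, self.d_code)  # index `vocab` = BOS
        def stack():
            layer = nn.TransformerEncoderLayer(
                d, nhead=8, dim_feedforward=4 * d, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=layers)
        self.global_tf, self.local_tf = stack(), stack()
        self.ctx_proj = nn.Linear(self.d_code, d)  # split h_t -> frame context
        self.in_proj = nn.Linear(self.d_code, d)   # local input embedding
        self.head = nn.Linear(d, vocab)

    def forward(self, codes):  # codes: (B, T, n_q) target acoustic units
        B, T, _ = codes.shape
        # Concatenate the n_q codebook embeddings of each frame, then shift
        # by one frame so that h_t only depends on frames < t.
        z = self.embed(codes).reshape(B, T, -1)
        z = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.global_tf(z, mask=mask)                    # inter-frame AR
        # Split h_t back into n_q chunks and project: frame-level context.
        ctx = self.ctx_proj(h.reshape(B, T, self.n_q, self.d_code))
        # Local input: embedding of the previous unit in the frame (BOS first).
        bos = torch.full((B, T, 1), self.embed.num_embeddings - 1,
                         dtype=torch.long, device=codes.device)
        prev = torch.cat([bos, codes[..., :-1]], dim=-1)
        x = (self.in_proj(self.embed(prev)) + ctx).reshape(B * T, self.n_q, -1)
        lmask = nn.Transformer.generate_square_subsequent_mask(self.n_q)
        out = self.local_tf(x, mask=lmask)                  # intra-frame AR
        return self.head(out).reshape(B, T, self.n_q, -1)   # unit logits

logits = MultiScaleSketch()(torch.randint(0, 1024, (2, 10, 3)))
print(logits.shape)  # torch.Size([2, 10, 3, 1024])
```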
### 4.6 Unit Vocoder
When the acoustic unit generation finishes, the generated units need to be
mapped to a high-fidelity audio waveform. Due to the compressive nature of the
codec, reconstructing audio from acoustic units of limited codebooks with the
decoder may result in degraded perceptual quality. Instead of using the codec
decoder directly, we adopt a GAN-based unit vocoder for singing voice
reconstruction, aiming to generate audio of higher quality and richer details.
Specifically, our vocoder is derived from BigVGAN Lee et al. (2022), with a
generator built from a set of LUTs that embed the discrete units, and a series
of blocks composed of transposed convolution and a residual block with dilated
layers. Multi-period and multi-resolution discriminators (MPD, MRD) are used
for adversarial training.
## 5 Experiments
### 5.1 Datasets
We combine 4 SVS datasets for our task, including M4Singer, Opencpop,
Opensinger and PopCS, forming a multi-singer singing dataset of 127 hours. For
speech data, we utilize 4 Mandarin TTS corpora, including AISHELL-3, Biaobei,
THCHS-30 and a subset of DidiSpeech, totaling approximately 179 hours. We
provide details of these datasets in Appendix B.
We phonemize the lyrics with PyPinyin
(https://github.com/mozillazg/python-pinyin), and extract F0 from the raw
audio with Harvest Morise et al. (2017). We randomly select a separate 2% of
the singing data for validation and for testing, with the remainder used for
training.
### 5.2 Model Configurations
The global transformer has 20 layers with 320M parameters, while the local
transformer has 6 layers with 100M parameters. Both of them share the same
hidden dimension of 1152. For acoustic units, we train a SoundStream model on
24 kHz audio, with 12 quantization levels, a codebook size of 1024 and a
downsampling rate of 480. We use the first 3 quantization levels as the
acoustic units, and the unit vocoder is trained to reconstruct 24 kHz audio
from acoustic units of these 3 codebooks. The label dropping probabilities
$p_{1}$ and $p_{2}$ are both set to 0.05. The detailed structure and
hyper-parameters of the model are given in Appendix C.
### 5.3 Experiment Settings
ID | Model | Gender (F/M) | Volume | Range | R-FFE | MOS | RMOS
---|---|---|---|---|---|---|---
Prompt-Singer with Pre-trained Text Encoders | |
1 | FLAN-T5 small | 76.7 / 78.1 | 92.0 | 79.1 | 0.11 | 3.75 $\pm$ 0.08 | 3.27 $\pm$ 0.09
2 | FLAN-T5 base | 82.2 / 79.5 | 92.4 | 80.8 | 0.12 | 3.79 $\pm$ 0.07 | 3.39 $\pm$ 0.07
3 | FLAN-T5 large | 83.1 / 80.8 | 92.7 | 82.6 | 0.12 | 3.83 $\pm$ 0.08 | 3.43 $\pm$ 0.08
4 | FLAN-T5 XL | 83.4 / 80.4 | 92.6 | 82.9 | 0.11 | 3.84 $\pm$ 0.06 | 3.46 $\pm$ 0.08
5 | BERT-base | 80.8 / 80.1 | 93.9 | 80.1 | 0.10 | 3.81 $\pm$ 0.06 | 3.42 $\pm$ 0.07
6 | BERT-large | 84.9 / 80.9 | 94.3 | 78.9 | 0.09 | 3.78 $\pm$ 0.08 | 3.44 $\pm$ 0.08
7 | CLAP-general | 82.2 / 79.5 | 94.1 | 80.3 | 0.12 | 3.83 $\pm$ 0.07 | 3.43 $\pm$ 0.06
8 | CLAP-speech/music | 82.2 / 78.1 | 94.2 | 80.8 | 0.11 | 3.85 $\pm$ 0.09 | 3.38 $\pm$ 0.08
Prompt-Singer with Fine-tuned Text Encoders | |
9 | FLAN-T5 large finetuned | 87.7 / 86.3 | 94.4 | 84.7 | 0.12 | 3.89 $\pm$ 0.07 | 3.62 $\pm$ 0.08
10 | BERT-large finetuned | 86.3 / 83.6 | 94.9 | 79.8 | 0.10 | 3.90 $\pm$ 0.07 | 3.60 $\pm$ 0.08
Non-controllable SVS models and Ground Truth | |
11 | FFT-Singer | / | / | / | 0.17 | 3.67 $\pm$ 0.08 | /
12 | Diffsinger | / | / | / | 0.09 | 3.86 $\pm$ 0.07 | /
13 | Ground Truth | 98.0 / 97.0 | / | / | / | 4.09 $\pm$ 0.06 | /
Table 1: Results on different text representations, including percentage accuracies of the three attributes, rescaled F0-frame error (R-FFE) and mean opinion scores of audio quality (MOS) and relevance to the prompt (RMOS).
ID | SVS Data | TTS Data | Gender (F/M) | Volume | Range | R-FFE | MOS | RMOS
---|---|---|---|---|---|---|---|---
1 | ✓ | ✗ | 75.3 / 65.8 | 87.6 | 78.7 | 0.11 | 3.68 $\pm$ 0.08 | 3.37 $\pm$ 0.08
2 | ✓ | ✓ | 87.7 / 86.3 | 94.4 | 84.7 | 0.12 | 3.89 $\pm$ 0.07 | 3.62 $\pm$ 0.08
3 | 10min | 100h | 65.8 / 65.6 | 78.3 | 80.9 | 0.29 | 3.06 $\pm$ 0.09 | 2.89 $\pm$ 0.09
4 | 1h | 100h | 71.2 / 64.4 | 84.8 | 81.2 | 0.25 | 3.34 $\pm$ 0.08 | 3.03 $\pm$ 0.09
5 | 10h | 100h | 76.7 / 68.5 | 88.6 | 81.6 | 0.23 | 3.28 $\pm$ 0.08 | 3.17 $\pm$ 0.09
6 | 100h | 100h | 86.2 / 80.5 | 92.5 | 82.3 | 0.12 | 3.75 $\pm$ 0.08 | 3.45 $\pm$ 0.08
Table 2: Experiment results on data scarcity alleviation in low resource
scenarios.
As we are investigating a new task with no previous work to compare with, our
experiments mainly focus on exploring different settings within our framework,
including different text representations and different training data
compositions, together with ablation studies. The settings of various text
representations are presented in table 1. As described in Section 4.2, we
experimented with encoders of different types, parameter sizes, and pre-
training data as well as fine-tuning the encoders. We also provide the results
of ground truth and two non-controllable SVS models in table 1 as baselines of
singing quality: 1) FFT-Singer, which generates mel-spectrograms through
stacked feed-forward transformer blocks; and 2) Diffsinger Liu et al. (2022),
an SVS model based on the diffusion probabilistic model.
In table 2, we compare the results of incorporating speech data for training
or not, together with a series of low-resource data configurations with SVS
data varying from 10 minutes to 100 hours paired with speech data of a fixed
quantity of 100 hours. The ablation studies are described in a dedicated
subsection.
### 5.4 Metrics
We employ both subjective and objective metrics to measure the controlling
ability and singing voice quality of the models. For objective metrics, we
calculate the percentage accuracy for each attribute: we train a gender
classifier for gender, and use the amplitude RMS and the average F0 of the
voiced part for volume and range evaluation, respectively. We mainly use
single-attribute prompts for evaluation
with an additional gender attribute for vocal range, and multi-attribute
evaluation is conducted in ablation studies. We also calculate R-FFE for
melodic accuracy between the synthesized and reference singing, which is
F0-frame-error (FFE) with the voiced part of F0 rescaled to have an average of
230 Hz to eliminate the impact of vocal range. For subjective metrics, we use
crowd-sourced human evaluation via Amazon Mechanical Turk, where raters are
asked to rate scores on 1-5 Likert scales on singing voice quality and the
relevance between synthesized singing and the prompt. We report the mean-
opinion-scores of quality (MOS) and relevance (RMOS) with 95% confidence
intervals (CI) in the tables. Details of evaluation metrics are provided in
Appendix D.
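For concreteness, a sketch of the R-FFE computation is given below. We assume the conventional FFE definition, in which a frame counts as an error if its voicing decision disagrees with the reference or its voiced pitch deviates by more than 20%; the 20% tolerance is the standard FFE choice and an assumption here, not a value stated in the paper.

```python
import numpy as np

def rescale_f0(f0, target_mean=230.0):
    """Rescale the voiced part of an F0 contour to a fixed mean (Hz)."""
    out = f0.astype(float).copy()
    voiced = out > 0
    out[voiced] *= target_mean / out[voiced].mean()
    return out

def r_ffe(f0_pred, f0_ref, tol=0.20):
    """Rescaled F0-frame error between two frame-aligned contours."""
    p, r = rescale_f0(f0_pred), rescale_f0(f0_ref)
    voicing_err = (p > 0) != (r > 0)
    both_voiced = (p > 0) & (r > 0)
    pitch_err = both_voiced & (np.abs(p - r) > tol * r)
    return float(np.mean(voicing_err | pitch_err))
```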
### 5.5 Results and Analysis
We can draw two basic conclusions from the results in table 1: 1) Generally,
our models (1-10) exhibit favorable attribute controlling accuracies, with the
best values being 87.7 / 86.3, 94.9 and 84.7 for the three attributes,
together with audio quality and melodic accuracy competitive with non-
controllable baselines (1-10 vs. 11-13), with the best R-FFE and MOS being
0.09 and 3.90. This indicates the effectiveness of our model design on the
task of controllable SVS. 2) The accuracies on volume are higher than gender
and vocal range by a salient margin, with the values varying between 7.4 and
15.4 across different models. We speculate that this is because the random
amplitude scaling in training allows the data with different volumes to be
expanded to a large scale (somewhat similar to data augmentation), while the
quantities and diversities of gender and range are limited by the training
datasets. This, from one perspective, confirms that data scarcity makes
learning the correlation between prompt and style attributes difficult.
ID | Model | Gender (F/M) | Volume | Range | R-FFE | RMOS
---|---|---|---|---|---|---
Ablation on Decoupled Pitch Representation
1 | Factor: ✓ Rescale: ✓ | 87.7 / 86.3 | 94.4 | 84.7 | 0.12 | 3.62 $\pm$ 0.08
2 | Factor: ✗ Rescale: ✓ | 78.1 / 63.0 | 91.3 | 76.1 | 0.11 | 3.34 $\pm$ 0.09
3 | Factor: ✗ Rescale: ✗ | 64.4 / 58.9 | 91.6 | 72.3 | 0.08 | 2.75 $\pm$ 0.09
Ablation on Different Prompted Attribute Numbers
4 | Attribute Num: 1 | 87.7 / 86.3 | 94.4 | / | 0.12 | 3.67 $\pm$ 0.08
5 | Attribute Num: 2 | 84.3 / 82.9 | 93.4 | 84.7 | 0.11 | 3.58 $\pm$ 0.08
6 | Attribute Num: 3 | 81.2 / 80.7 | 93.0 | 82.4 | 0.11 | 3.52 $\pm$ 0.07
Table 3: Results of ablation studies.
#### 5.5.1 Evaluation on Text Representations
We have the following further observations from the results in table 1: 1)
Fine-tuning the text encoders leads to a considerable improvement in
controlling accuracy (3 vs. 9 and 6 vs. 10), with the improvements being 4.6 /
5.5, 1.7 and 2.1 for FLAN-T5 large, and 1.4 / 2.7, 0.6 and 0.9 for BERT-large.
This indicates that aligning the text representations with the labels, which
have a much simpler distribution, helps the model learn their correlation with
singing style. Nevertheless, using only the pre-trained text encoders already
yields quite good results. 2) Generally, larger model sizes bring better
results (1-4 & 5-6). However, such a tendency between 3 and 4 is less
significant compared to 1-2 and 2-3, suggesting that text encoder parameters
beyond a certain size are no longer a bottleneck for model performance. 3)
Different types of text encoders exhibit varying controlling capabilities over
different attributes. For instance (1-4 vs. 5-8), the FLAN-T5 family shows
weaker control over volume compared to CLAP and BERT, with an accuracy gap of
1.2-2.3. However, the FLAN-T5 large and XL models outperform CLAP and BERT in
vocal-range controlling accuracy by 1.8-4.0. This may be related to differences in
the models’ pretraining methods and data. We choose the fine-tuned FLAN-T5
large model for subsequent experiments.
#### 5.5.2 Evaluation on Data Scarcity Alleviation
From the results of different data compositions in table 2, we have the
following observations: 1) Introducing speech data leads to a comprehensive
improvement in controlling accuracies and generation quality, with the cost
being a slight increase in R-FFE of 0.01 (1 vs. 2). This is because the
additional speech data increases the quantity and diversity of the training
data, aiding the network in modeling the correlation between prompt and
acoustic style. However, due to the difference in the distributions of singing
melody and speech prosody, both of which are manifested in pitch variation,
the speech data may have a negative impact on modeling singing melody, causing
the slight increase in R-FFE. 2) In the low resource scenarios (3-6), we find
that there is a drastic decline in the singing audio quality, melody accuracy
as well as the accuracy on gender with the decrease in the quantity of SVS
data. In contrast, the changes in volume and vocal range are relatively
gradual, yielding acceptable results of 88.6 and 81.6 even with 10 hours of
singing data. This suggests that, while speech data helps improve controlling
accuracy and audio quality, it still cannot substitute for singing data in
modeling certain vocal characteristics. In conclusion, introducing speech data
effectively enhances the performance of controllable SVS, but it is still
necessary to have a sufficient amount of singing data to ensure synthesis
quality and melody accuracy.
### 5.6 Ablation Studies
We mainly focus on validating the effectiveness of our decoupled pitch
representation and multi-attribute prompting mechanism in the ablation
studies, and the results are presented in table 3.
For pitch representation (1-3), we first remove the vocal range factor from
the sequence, and then eliminate the rescaling on the input F0. We can see
that when removing the range factor, there is a drastic drop of 9.6 / 23.3,
3.1 and 8.6 in accuracies, accompanied by an RMOS decrease of 0.28. This
indicates that explicitly predicting the vocal range factor facilitates vocal
range and gender control greatly. When we continue to eliminate the input F0
rescaling, the accuracies on gender and range as well as RMOS further decline
by 13.7 / 4.1, 3.8 and 0.59, respectively, which indicates that the vocal
range information contained in the original F0 interferes with the model’s
modeling of the correlation between prompt and singing style. We also observe
that removing the range factor and input F0 rescaling leads to an improvement
in melodic accuracy. This suggests that the decoupling mechanism may cause
some loss of pitch information. Despite this, our model keeps a satisfactory
melodic accuracy with the decoupled pitch representation.
We further examine the model’s controlling effectiveness under multi-attribute
prompts. The results of 4-6 in table 3 show that there is a slight decrease in
accuracies and RMOS as the attribute number increases, with the drop being 3.4
/ 3.4, 1.0, 0.09 from 1 to 2 attributes, and 3.1 / 2.2, 0.4, 2.3, 0.06 from 2
to 3. We suggest that this is because the conditional distribution of acoustic
style given controlling signals for multiple attributes is more complicated
to model. Nevertheless, our model shows favorable performance
on prompts with both single and multiple attributes.
## 6 Conclusion
In this paper, we propose Prompt-Singer, the first singing-voice-synthesis
method with the ability of style control using natural language prompts. We
adopt a multi-scale decoder-only transformer for generating acoustic units of
singing, followed by a unit-vocoder for audio reconstruction. We design a
decoupled pitch representation for vocal range modification with an accurate
melody kept. Furthermore, we investigate various experiment settings,
including different text representations, fine-tuning the text encoders, and
using speech data to boost performance in low-resource scenarios.
In future works, we plan to introduce more style attributes in controllable
SVS, such as emotion, rhythm and more detailed singer information. We hope our
work will facilitate the development of the SVS community.
## 7 Limitations and Potential Risks
Although our model achieves remarkable controlling capability and audio
quality on prompt-based singing-voice-synthesis, it still has two major limitations:
1) Due to the simplicity and inflexibility of our existing prompt generation
pipeline, the generated prompt texts may suffer from distributional bias,
manifested mainly as grammatical errors, unnatural expressions, and
restrictions in expressive capacity and diversity. We suggest that a potential
solution is to pass the assembled prompt sentences through the LLM once more
for refinement and synonymous sentence generation to improve accuracy and
expressiveness. 2) Due to the utilization of large-scale models (including the
text encoders and the transformer backbone) along with an autoregressive
generation paradigm, our model entails relatively high computational overhead,
resulting in considerable inference latency. We discuss the relationship
between inference latency and the length of the generated audio in Appendix E.
Besides, misuse of our model for singing voice generation may lead to
copyright issues. We will add constraints to ensure that users of our code or
pre-trained model do not use it for illegal purposes.
## Acknowledgements
This work is supported by National Key R&D Program of China under Grant
No.2022ZD0162000, National Natural Science Foundation of China under Grant No.
62222211 and No.62072397.
## References
* Borsos et al. (2023) Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. 2023. Audiolm: a language modeling approach to audio generation. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901.
* Chen et al. (2020) Jiawei Chen, Xu Tan, Jian Luan, Tao Qin, and Tie-Yan Liu. 2020. Hifisinger: Towards high-fidelity neural singing voice synthesis. _arXiv preprint arXiv:2009.01776_.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Dong Wang (2015) Zhiyong Zhang Dong Wang, Xuewei Zhang. 2015. Thchs-30 : A free chinese speech corpus.
* Elizalde et al. (2023a) Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. 2023a. Clap learning audio concepts from natural language supervision. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 1–5. IEEE.
* Elizalde et al. (2023b) Benjamin Elizalde, Soham Deshmukh, and Huaming Wang. 2023b. Natural language supervision for general-purpose audio representations.
* Guo et al. (2021) Tingwei Guo, Cheng Wen, Dongwei Jiang, Ne Luo, Ruixiong Zhang, Shuaijiang Zhao, Wubo Li, Cheng Gong, Wei Zou, Kun Han, et al. 2021. Didispeech: A large scale mandarin speech corpus. In _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 6968–6972. IEEE.
* Guo et al. (2023) Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, and Xu Tan. 2023. Prompttts: Controllable text-to-speech with text descriptions. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 1–5. IEEE.
* Hong et al. (2023) Zhiqing Hong, Chenye Cui, Rongjie Huang, Lichao Zhang, Jinglin Liu, Jinzheng He, and Zhou Zhao. 2023. Unisinger: Unified end-to-end singing voice synthesis with cross-modality information matching. In _Proceedings of the 31st ACM International Conference on Multimedia_ , pages 7569–7579.
* Huang et al. (2023a) Jiawei Huang, Yi Ren, Rongjie Huang, Dongchao Yang, Zhenhui Ye, Chen Zhang, Jinglin Liu, Xiang Yin, Zejun Ma, and Zhou Zhao. 2023a. Make-an-audio 2: Temporal-enhanced text-to-audio generation. _arXiv preprint arXiv:2305.18474_.
* Huang et al. (2021) Rongjie Huang, Feiyang Chen, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2021. Multi-singer: Fast multi-singer singing voice vocoder with a large-scale corpus. In _Proceedings of the 29th ACM International Conference on Multimedia_ , pages 3945–3954.
* Huang et al. (2022) Rongjie Huang, Chenye Cui, Feiyang Chen, Yi Ren, Jinglin Liu, Zhou Zhao, Baoxing Huai, and Zhefeng Wang. 2022. Singgan: Generative adversarial network for high-fidelity singing voice generation. In _Proceedings of the 30th ACM International Conference on Multimedia_ , pages 2525–2535.
* Huang et al. (2023b) Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023b. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. In _International Conference on Machine Learning_ , pages 13916–13932. PMLR.
* Huang et al. (2023c) Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. 2023c. Audiogpt: Understanding and generating speech, music, sound, and talking head. _arXiv preprint arXiv:2304.12995_.
* (17) Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech. In _Advances in Neural Information Processing Systems_.
* Huang et al. (2023d) Rongjie Huang, Chunlei Zhang, Yongqi Wang, Dongchao Yang, Luping Liu, Zhenhui Ye, Ziyue Jiang, Chao Weng, Zhou Zhao, and Dong Yu. 2023d. Make-a-voice: Unified voice synthesis with discrete representation. _arXiv preprint arXiv:2305.19269_.
* Kreuk et al. (2022) Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. 2022. Audiogen: Textually guided audio generation. _arXiv preprint arXiv:2209.15352_.
* Lee et al. (2022) Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, and Sungroh Yoon. 2022. Bigvgan: A universal neural vocoder with large-scale training. _arXiv preprint arXiv:2206.04658_.
* Leng et al. (2023) Yichong Leng, Zhifang Guo, Kai Shen, Xu Tan, Zeqian Ju, Yanqing Liu, Yufei Liu, Dongchao Yang, Leying Zhang, Kaitao Song, et al. 2023. Prompttts 2: Describing and generating voices with text prompt. _arXiv preprint arXiv:2309.02285_.
* Liu et al. (2022) Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, and Zhou Zhao. 2022. Diffsinger: Singing voice synthesis via shallow diffusion mechanism. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 36, pages 11020–11028.
* McAuliffe et al. (2017) Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal forced aligner: Trainable text-speech alignment using kaldi. In _Interspeech_ , volume 2017, pages 498–502.
* Morise et al. (2017) Masanori Morise et al. 2017. Harvest: A high-performance fundamental frequency estimator from speech signals. In _INTERSPEECH_ , pages 2321–2325.
* Ramesh et al. (2021) Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In _International Conference on Machine Learning_ , pages 8821–8831. PMLR.
* Shen et al. (2023) Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. 2023. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. _arXiv preprint arXiv:2304.09116_.
* Shi et al. (2020) Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, and Ming Li. 2020. Aishell-3: A multi-speaker mandarin tts corpus and the baselines. _arXiv preprint arXiv:2010.11567_.
* Von Helmholtz (1912) Hermann Von Helmholtz. 1912. _On the Sensations of Tone as a Physiological Basis for the Theory of Music_. Longmans, Green.
* Wang et al. (2023) Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al. 2023. Neural codec language models are zero-shot text to speech synthesizers. _arXiv preprint arXiv:2301.02111_.
* Wang et al. (2022) Yu Wang, Xinsheng Wang, Pengcheng Zhu, Jie Wu, Hanzhao Li, Heyang Xue, Yongmao Zhang, Lei Xie, and Mengxiao Bi. 2022. Opencpop: A high-quality open source chinese popular song corpus for singing voice synthesis. _arXiv preprint arXiv:2201.07429_.
* Wu et al. (2023) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. 2023. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 1–5. IEEE.
* Yang et al. (2023a) Dongchao Yang, Songxiang Liu, Rongjie Huang, Guangzhi Lei, Chao Weng, Helen Meng, and Dong Yu. 2023a. Instructtts: Modelling expressive tts in discrete latent space with natural language style prompt. _arXiv preprint arXiv:2301.13662_.
* Yang et al. (2023b) Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, et al. 2023b. Uniaudio: An audio foundation model toward universal audio generation. _arXiv preprint arXiv:2310.00704_.
* Yao et al. (2023) Jixun Yao, Yuguang Yang, Yi Lei, Ziqian Ning, Yanni Hu, Yu Pan, Jingjing Yin, Hongbin Zhou, Heng Lu, and Lei Xie. 2023. Promptvc: Flexible stylistic voice conversion in latent space driven by natural language prompts. _arXiv preprint arXiv:2309.09262_.
* Yu et al. (2024) Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. 2024. Megabyte: Predicting million-byte sequences with multiscale transformers. _Advances in Neural Information Processing Systems_ , 36.
* Zeghidour et al. (2021) Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. 2021. Soundstream: An end-to-end neural audio codec. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 30:495–507.
* Zhang et al. (2022a) Lichao Zhang, Ruiqi Li, Shoutong Wang, Liqun Deng, Jinglin Liu, Yi Ren, Jinzheng He, Rongjie Huang, Jieming Zhu, Xiao Chen, et al. 2022a. M4singer: A multi-style, multi-singer and musical score provided mandarin singing corpus. _Advances in Neural Information Processing Systems_ , 35:6914–6926.
* Zhang et al. (2022b) Yongmao Zhang, Jian Cong, Heyang Xue, Lei Xie, Pengcheng Zhu, and Mengxiao Bi. 2022b. Visinger: Variational inference with adversarial learning for end-to-end singing voice synthesis. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 7237–7241. IEEE.
* Zhang et al. (2023a) Yongmao Zhang, Guanghou Liu, Yi Lei, Yunlin Chen, Hao Yin, Lei Xie, and Zhifei Li. 2023a. Promptspeaker: Speaker generation based on text descriptions. _arXiv preprint arXiv:2310.05001_.
* Zhang et al. (2022c) Zewang Zhang, Yibin Zheng, Xinhui Li, and Li Lu. 2022c. Wesinger: Data-augmented singing voice synthesis with auxiliary losses. _arXiv preprint arXiv:2203.10750_.
* Zhang et al. (2023b) Zewang Zhang, Yibin Zheng, Xinhui Li, and Li Lu. 2023b. Wesinger 2: fully parallel singing voice synthesis via multi-singer conditional adversarial training. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 1–5. IEEE.
## Appendix A Sample Prompt Keywords and Sentence Templates
We list the keywords for each category in table 4, and provide some samples of
prompt sentence templates in table 6.
Category | Keywords
---|---
Gender
female | woman, lady, girl, female, lass, miss, madam
male | man, boy, guy, gentleman, male, sir
Volume
high | loud, ringing, booming, thunderous,
deafening, roaring
medium | moderate, average, intermediate,
middle-range
low | quiet, slight, twittering, hushed, whispering
Vocal Range
high | sharp, treble, shrill, whistling,
shrieking, high-pitched
low | deep, low, bass, thick, low-pitched
Table 4: Prompt keywords for each category.
## Appendix B Dataset Statistics
In table 5, we list the statistics of the datasets used. F and M in the
Speakers column indicate the numbers of female and male speakers or singers.
Dataset | Hours | Speakers
---|---|---
SVS datasets
M4Singer Zhang et al. (2022a) | 29.8 | F:10 M:10
Opencpop Wang et al. (2022) | 5.3 | F:1
Opensinger Huang et al. (2021) | 86.5 | F:49 M:28
PopCS Liu et al. (2022) | 5.9 | F:1
TTS datasets
AISHELL-3 Shi et al. (2020) | 86.4 | F:176 M:42
Biaobei (https://www.data-baker.com/open_source.html) | 11.8 | F:1
THCHS-30 Dong Wang (2015) | 34.2 | F:31 M:9
Didispeech Guo et al. (2021) | 47.0 | F:198 M:202
Table 5: Statistics of training datasets.
Single-Attribute Templates
---
Do you have any songs with a [gender] lead singer?
Can you create a song sung by a [gender] vocalist?
I’m searching for a song featuring a [gender] singer.
I need a song with a [volume] voice that resonates.
Play me a song with a [volume] voice.
I’d like to listen to a song with a [volume] voice.
I need a song where every note is gentle and delicate. (for low volume)
Kindly provide me with a song that features a voice of balanced volume, pleasing to the ears. (for medium volume)
Give me a song with a voice that shakes the ground with its thunderous vocals! (for high volume)
Double-Attribute Templates
Can you find me a song with a [gender] singer and a [volume] voice?
I would like to hear a song with a [volume] voice and if possible, a [gender] voice.
Synthesize a new song with a [volume] voice and a [gender] lead singer.
Need a [pitch] pitch song sung by a [gender] vocalist.
Generate a song featuring a [gender] vocalist with a unique use of [pitch] pitch.
A [gender] voice with a [pitch] pitch is what I’m looking for.
Create an enchanting song sung by a [gender] vocalist in the [pitch] pitch.
Create a [gender] artist’s song with a [volume] voice, softly mesmerizing with its gentle tone. (for low volume + any gender)
Generate a [gender] artist singing at just the right volume. (for medium volume + any gender)
Can you generate a [gender]-sung song with a [volume] voice that balances softness and loudness? (for medium volume + any gender)
I’m looking for a song with a [gender] singer and a voice that’s as powerful as a thunderstorm. (for high volume + any gender)
Triple-Attribute Templates
Explore [gender] [volume] songs with emotive [pitch] pitch.
Synthesize a song with a [pitch] pitch and a [volume] voice, preferably [gender].
Design a [gender] singer’s song with a [volume] voice and [pitch] pitch.
Showcasing superb [pitch] pitch, create a [volume] song by a [gender] artist.
Generate a song with stunning [pitch] harmonies and a [gender] singer with a [volume] voice.
Can you compose a song with a [gender] vocalist and [volume] volume, while incorporating the singer’s unique use of [pitch] pitch?
Generate a song featuring [gender] vocals, delicately whispered with [volume] voice and [pitch] harmony. (for low volume + any gender / vocal range)
Compose a [pitch]-keyed song with a [volume] voice that balances softness and loudness, sung by a [gender] singer. (for medium volume + any gender / vocal range)
Craving a [gender] artist’s song with a [volume] voice that exudes energy and power and a [pitch] note that creates a memorable hook! (for high volume + any gender / vocal range)
Table 6: Sample prompt sentence templates.
## Appendix C Model Settings
We illustrate the architecture of the global transformer in Figure 3. The
local transformer shares the same structure as the global one with two
differences: 1) the local transformer has no positional embedding, and 2)
there is a linear lm-head appended to the top of it for token prediction. We
also list the model hyper-parameters of Prompt-Singer in Table 7. The multi-
scale transformer is trained with 6 NVIDIA V100 GPUs for about 4-5 days, and
the vocoder is trained with 4 NVIDIA V100 GPUs for a week.
Figure 3: Structure of the Global Transformer.

 | Hyperparameter | Prompt-Singer
---|---|---
Global Transformer | Layers | 20
 | Hidden Dim | 1,152
 | Attention Heads | 16
 | FFN Dim | 4,608
 | Number of Parameters | 320.07M
Local Transformer | Layers | 6
 | Hidden Dim | 1,152
 | Attention Heads | 8
 | FFN Dim | 4,608
 | Number of Parameters | 100.13M
Unit Vocoder | Upsample Rates | [6,5,2,2,2,2]
 | Hop Size | 480
 | Upsample Kernel Sizes | [12,9,4,4,4,4]
 | Number of Parameters | 125.43M

Table 7: Hyperparameters of Prompt-Singer.
## Appendix D Evaluation Metrics
### D.1 Objective Evaluation
Figure 4: Soft-margin accuracy curve for the high vocal range of male singers.
Figure 5: Soft-margin accuracy curve for medium volume.
For gender controlling accuracy, we train an open-source gender classifier
(https://github.com/x4nth055/gender-recognition-by-voice/tree/master) with our
singing and speech data. The performance of the classifier on the test set is
provided as the ground-truth accuracy in row 13 of Table 1.
For the controlling accuracies on volume and vocal range, considering that the
values of the generated singing may slightly deviate from the boundaries used
for categorization, we adopt a soft-margin mechanism for accuracy calculation.
Specifically, we take the accuracy of data falling within the correct range as
100, and calculate the accuracy as $100\cdot\exp(-k\epsilon)$ for data outside
the correct range, where $\epsilon$ is the error between the data value and
the boundary, and $k$ is a hyper-parameter controlling the decay rate of
accuracy at the margins, with larger $k$ corresponding to faster decay. We
take the accuracy curves for the high vocal range of male singers and for
medium volume as examples and illustrate them in Figures 4 and 5,
respectively. We set $k$ to 120, 150 and 180 for high, medium and low volume,
respectively, and to 0.2 for vocal-range accuracy.
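The scoring rule above can be made concrete with a short sketch. This is a
minimal illustration of the soft-margin rule as described; the range
boundaries below are chosen purely for demonstration, and only $k$ follows the
values quoted above.

```python
import math

def soft_margin_accuracy(value: float, lo: float, hi: float, k: float) -> float:
    """Soft-margin accuracy: 100 inside the target range [lo, hi];
    outside it, 100 * exp(-k * eps), where eps is the distance from
    the value to the nearest range boundary."""
    if lo <= value <= hi:
        return 100.0
    eps = (lo - value) if value < lo else (value - hi)
    return 100.0 * math.exp(-k * eps)

# Example: a "medium volume" sample slightly below the lower boundary.
# The boundaries (0.50, 0.70) are illustrative, not the paper's values.
print(soft_margin_accuracy(value=0.48, lo=0.50, hi=0.70, k=150))  # ~4.98
```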
### D.2 Subjective Evaluation
For each evaluated model, we mix all generated results together and randomly
select 220 items with their corresponding prompts for subjective evaluation.
Our subjective evaluation tests are crowd-sourced and conducted via Amazon
Mechanical Turk. For audio quality evaluation, we ask the testers to examine
the audio quality and naturalness and ignore the content. For prompt-style
relevance, we instruct the testers to evaluate the relevance between the
natural language prompt and the singing style while ignoring the content. The
testers rate scores on 1-5 Likert scales. We provide screenshots of the
testing interfaces in Figures 6 and 7. Each data item is rated by 4 testers,
and the testers are paid $8 hourly.
Figure 6: Screenshot of MOS testing. Figure 7: Screenshot of RMOS testing.
## Appendix E Inference Efficiency
To give an intuitive impression of our model’s inference efficiency, we
visualize the relationship between model inference latency and the length of
the generated audio in Figure 8, including the acoustic unit generation stage
with two types of text encoder, together with the wave reconstruction stage.
The inference is conducted on a single NVIDIA-V100 GPU. It can be observed
that the major latency comes from the transformer backbone, and it increases
with the length of the sequence; on the other hand, the latency of the non-
autoregressive vocoder is minimal and not significantly affected by the
sequence length.
(a) Latency of acoustic unit generation
(b) Latency of wave reconstruction
Figure 8: Inference latency at varying lengths of generated audio.
# ParrotTTS: Text-to-Speech synthesis by exploiting
self-supervised representations
Saiteja Kosgi 1 Neil Kumar Shah1,2 Vishal Tambrahalli 1
Neha Sherin1 Vineet Gandhi1
1Kohli Centre on Intelligent Systems, IIIT Hyderabad
2TCS Research, Pune
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Text-to-speech (TTS) systems have been modelled as mel-synthesizers followed
by speech vocoders since the era of statistical TTS, a design carried forward
into neural architectures. We propose an alternative approach to TTS
modelling, referred to as ParrotTTS, that borrows from self-supervised
learning (SSL) methods.
ParrotTTS takes a two-step approach by initially training a speech-to-speech
model on unlabelled data that is abundantly available, followed by a text-to-
embedding model that leverages speech with aligned transcriptions to extend it
to TTS. ParrotTTS achieves competitive mean opinion scores on naturalness
compared to traditional TTS models but significantly improves over the
latter’s data efficiency of transcribed pairs and speaker adaptation without
transcriptions. This further paves the path to training TTS models on
generically trained SSL speech models. Speech samples from ParrotTTS can be
found at https://parrottts.github.io/tts/
## 1 Introduction
Vocal learning forms the first phase of infants starting to talk Locke (1996,
1994). In this phase, the learning happens by simply listening to
sounds/speech. Studies show that vocal learning begins in the final trimester
of pregnancy; the normally developing fetus can hear its mother’s voice within
the womb Kolata (1984). Several studies show that the best way to promote
language development for babies is to talk to them. It is hypothesized Kuhl
and Meltzoff (1996) that infants listening to ambient language store
perceptually derived representations of the speech sounds they hear, which in
turn serve as targets for the production of speech utterances. Interestingly,
in this phase, the infant has no conception of text or linguistic rules, and
speech is considered sufficient to influence speech production Kuhl and
Meltzoff (1996). After all, if parrots can talk without understanding
language, there is no reason human infants should need to possess grammatical
capability either to comprehend and produce speech Locke (1994).
Figure 1: (a) Traditional TTS and (b) Proposed TTS model
We propose a novel design for text-to-speech synthesis called ParrotTTS that
follows a similar learning process. Our idea mimics this two-step approach:
the first step learns to produce sounds capturing the whole gamut of phonetic
variations. This is attained by learning quantized representations of sound
units in a self-supervised manner. The second phase builds on top of the first
by learning a mapping from text to the quantized representations (embeddings).
This step uses paired text-speech data. The two phases are analogous to first
learning to talk followed by learning to read.
Our proposed ParrotTTS is illustrated in Figure 1(b) distinguishing it from
traditional design in Figure 1(a). The self-supervised module learns discrete
speech representations using raw audio data from multiple speakers without
aligned transcriptions similar to Wav2Vec 2.0 Baevski et al. (2020) or Hubert
Hsu et al. (2021). The SSL module includes a speech-to-embedding (STE) encoder
trained on masked prediction task to generate the intermediate representation
of audio input. An embedding-to-speech (ETS) decoder is independently trained
to invert embeddings to synthesize audio waveforms and is additionally
conditioned on speaker identity. This learning to talk is the first step of
the two-step training pipeline.
In the subsequent learning to read step, a separate text-to-embedding (TTE)
encoder is trained to generate embeddings from text (or equivalent phonetic)
inputs. This step requires labeled speech with aligned transcriptions.
However, the data requirement in this step is very low in terms of volume and
number of speakers. We show that transcribed samples from even a single
speaker suffice to learn the phonetic mapping (TTE) sufficiently well for
generalization on a large number of speakers. Further, the decoder ETS can be
conditioned on speaker identity to change the voice of rendered speech. In our
model, the speech embeddings can be obtained either from the text (using TTE)
or directly from audio (using STE), providing a unified model for speech
synthesis, of which we limit the scope of this work to only text-to-speech.
Overall, the restructuring of learning components has effectively changed the
data dependence equation in our favor, cutting down the amount of transcribed
data needed by leveraging abundant raw audio to achieve similar speech
quality. This further makes it easy to extend the model to de novo voices
unseen in initial training by independently fine-tuning the ETS decoder module
on untranscribed audio from the corresponding speakers. Also, the ParrotTTS
components are functionally different from those of the traditional
synthesizer-vocoder design. This offers several other benefits:
1. Our speech embedding has lower variance than Mel frames, reducing the complexity of training TTE and increasing the capacity of the downstream ETS. We observe, for example, that our embeddings are speaker agnostic, so ETS must be conditioned on speaker identity for speaker adaptation.
2. Speaker-agnostic speech embeddings paired with the independently trained STE disentangle speaker handling from content. This enables adaptation to novel voices with untranscribed speech alone. The data requirement sits between that of zero-shot methods, which use speaker embeddings but render poorer quality, and that of traditional TTS, which requires fully transcribed speech, while the output quality matches the latter.
3. The segregation of functions pushes acoustic handling into the ETS module at the end, which directly infers the speech signal without going through Mel frames. This bypasses potential vocoder generalization issues Kim et al. (2021), similar to FastSpeech2s Ren et al. (2020).
4. The reduced complexity helps stabilize training of the TTE encoder for either the autoregressive or the non-autoregressive choice. For example, we observe at least eight-fold faster convergence in training iterations of our TTE module compared to those of Ren et al. (2020) and Wang et al. (2017).
The main contribution of this work is the novel ParrotTTS architecture
detailed in Section 3. It redesigns the standard synthesizer-vocoder neural
TTS to leverage self-supervised learning from which the various benefits
listed above flow. We train multiple models of the proposed ParrotTTS approach
with different choices and study their effects like the quality of rendered
speech, data efficiency, word-error rates upon transcription of speech output,
etc., see Section 4. Experimental results reported in Section 5 consistently
point to the competitive or superior performance of ParrotTTS relative to the
current state-of-the-art for TTS. While these observations are of significant
value to practitioners in evaluating the adoption of ParrotTTS approach for
speech synthesis, numerous questions need further investigation. We conclude
in Section 6 with a discussion of these questions and the related topics that
need further exploration to better understand the proposed approach.
## 2 Related work
TTS systems have been studied for decades now, with the concatenative
statistical models from earlier attempts (Hunt and Black, 1996; Cohn and
Zellou, 2020) being increasingly replaced by neural variants in recent years
Oord et al. (2016). We specifically review the popular and better-performing
supervised models in Section 2.1 and their unsupervised counterparts in
Section 2.2. These references help understand data challenges for TTS training
and how their quality is observed to vary with the degree of supervision.
Towards the end of this section, we review the self-supervised learning
approach that ParrotTTS leverages with pointers to its application in other
domains.
### 2.1 Supervised TTS
A typical neural TTS model has an acoustic synthesizer that generates
frequency-domain Mel-spectrogram frames. The synthesizer has an encoder that
maps text or phonetic inputs to hidden states, followed by a decoder that
generates Mels from the hidden states. Predicted Mel frames contain all the
necessary information to reconstruct speech (Griffin and Lim, 1984) and an
independently trained vocoder (Oord et al., 2016; Kong et al., 2020)
transforms them into time-domain waves. Mel predicting decoders could be
autoregressive models (Wang et al., 2017; Valle et al., 2020; Shen et al.,
2018) that generate the Mel frames in sequential order. Such a decoder conditions the
generation of a Mel frame at any time instant on all preceding predictions and
the encoder output using attention modules Graves (2013). In contrast, non-
autoregressive or parallel models (Ren et al., 2019, 2020; Łańcucki, 2021)
predict intermediate features like duration, pitch, and energy for each
phoneme. Mel frames of all time instants are then generated simultaneously
from these predicted intermediate features. Non-autoregressive models are
quicker at inference and robust to word skipping or repetition errors Ren et
al. (2020).
The quality and quantity of transcribed audio used in TTS training are known
to impact the quality of speech rendered. Public data with about $24$ hours of
studio recorded audio is known to train reasonable quality single-speaker
models (Ito and Johnson, 2017). This becomes more demanding in a multi-speaker
setting requiring sufficient per-speaker audio to learn all voices well Veaux
et al. (2017). Speaker conditioning of the decoder is commonly achieved by
one-hot-encoding of those seen at train time. Alternatively, speaker
embeddings (Jia et al., 2018) could be used for decoder conditioning which in
theory could render speech for de novo voices not part of the training set.
However, speech rendered through this method is known to be of poorer quality
and naturalness, especially for speakers not sufficiently represented in the
train set (Tan et al., 2021).
### 2.2 Raw-audio for TTS
Unsupervised speech synthesis Ni et al. (2022) does not require transcribed
text-audio pairs for TTS acoustic modeling. Such methods typically employ an
unsupervised automatic speech recognition (ASR) model (Baevski et al., 2021;
Liu et al., 2022a) to transcribe raw speech and generate pseudo labels.
However, their performance tends to be bounded by the performance of the
unsupervised ASR model, which still has to close a significant gap compared to
supervised counterparts Baevski et al. (2021). Furthermore, switching to a
multi-speaker setup worsens quality relative to fully supervised models Liu et
al. (2022b).
Some prior works have looked at adapting TTS to novel speakers using
untranscribed audio Yan et al. (2021); Luong and Yamagishi (2019); Taigman et
al. (2017). Unlike ours, these methods require a large amount of paired data
from multiple speakers during initial training. Some of these Luong and
Yamagishi (2019); Taigman et al. (2017) jointly train the TTS pipeline and the
modules for speaker adaptation but the model convergence gets tricky. In
contrast, ParrotTTS benefits from the disentanglement of linguistic content
from speaker information, making adaptation easier.
Figure 2: Schematic diagram of the proposed model.
### 2.3 Self-supervised learning
Self-supervised learning (SSL) methods have become increasingly popular in
numerous applications owing to their ability to leverage copious amounts of
unlabeled data to learn large models that can be fine-tuned for multiple tasks
later. They are reported to achieve results better than supervised models
trained on fewer labeled samples and have found applications in computer
vision He et al. (2022), natural language processing Devlin et al. (2018);
Vaswani et al. (2017) and audio processing Schneider et al. (2019). Mask
prediction, temporally contrastive learning, next-step prediction, etc., are
some common techniques to train SSL models. Wav2vec2 Baevski et al. (2020),
Hubert Hsu et al. (2021) are popular SSL models for speech processing and ASR
Baevski et al. (2020), phoneme segmentation Kreuk et al. (2020), and spoken
language modeling Lakhotia et al. (2021), speech resynthesis Polyak et al.
(2021) are tasks that gained from leveraging them. In the same spirit, our
work explores SSL, specifically pre-trained Hubert Hsu et al. (2021), for TTS.
To the best of our knowledge, there are no known TTS models trained on SSL,
and our efforts fill this gap.
## 3 ParrotTTS architecture
As mentioned earlier, ParrotTTS has three modules; two encoders, STE and TTE
that map audio and text respectively to embedding, and a decoder ETS that maps
the embedding to the speech signal. Our speech encoder-decoder choices are
borrowed from Polyak et al. (2021). The speech encoder STE is HuBERT Hsu et
al. (2021) that maps input audio clip to discrete vectors with entries called
HuBERT units. Our speech decoder ETS is a modified version of HiFiGan Kong et
al. (2020). Text encoder TTE is an encoder-decoder architecture, and we
experiment with both autoregressive (AR) and non-autoregressive (NAR) choices
for the same. We give architectural details of these three modules below.
### 3.1 Speech encoder STE
The self-supervised HuBERT model we use for our STE is pre-trained on large
raw audio data on masked prediction task very similar to the BERT model for
text Devlin et al. (2018) to learn “combined acoustic and language model over
the continuous inputs” of speech. It borrows the base architecture from
Wav2vec 2.0 Baevski et al. (2020) with convolutions on raw inputs followed by
a few transformer layers, however, replaces its contrastive loss with a BERT-
like classification. The “noisy” classes are derived by clustering MFCC
features of short speech signals. The encoder input is an audio signal
$X=(x_{1},\dots,x_{T})$ sampled at a rate of $16$ kHz. Let $E_{r}$ denote the
raw-audio encoder, and let its output be
$\mathbf{h}_{r}=(h_{1},\dots,h_{\widehat{T}})\coloneqq E_{r}(X),$
where $\widehat{T}=T/320$ reflects the downsampling, and each
$h_{i}\in\\{1,\dots,K\\}$ with $K$ being the number of clusters in HuBERT’s
clustering step, set to $100$ in our experiments.
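To make the discretization step concrete, the following is a minimal sketch.
The `feats` array here is a hypothetical stand-in for layer-6 HuBERT
activations; the actual pipeline extracts features with a pre-trained HuBERT
and fits the $k$-means codebook on LibriSpeech-clean rather than on a single
utterance.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in: feats is a (T_hat, 768) array of layer-6 HuBERT
# activations for one utterance (T_hat = T / 320 frames at 16 kHz).
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 768)).astype(np.float32)

# Fit the codebook (the paper uses k = 100 clusters fit on
# LibriSpeech-clean); here we fit on the single utterance only to keep
# the sketch self-contained.
codebook = KMeans(n_clusters=100, n_init=10, random_state=0).fit(feats)

# Discretization: each frame becomes the index of its nearest centroid,
# giving the unit sequence h_r = (h_1, ..., h_That) with h_i in {1..K}.
units = codebook.predict(feats)
print(units[:10])
```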
### 3.2 Speech decoder ETS
We use a modified version of HiFiGAN Kong et al. (2020) vocoder for our ETS to
decode from $\mathbf{h}=(\mathbf{h}_{r},\mathbf{h}_{s})$ to speech, where
$\mathbf{h}_{s}$ is the one-hot speaker embedding. It has a generator $G$ and
a discriminator $D$. $G$ runs $\mathbf{h}$ through transposed convolutions for
upsampling to recover the original sampling rate followed by residual block
with dilations to increase the receptive field to synthesize the signal,
$\widehat{X}\coloneqq G(\mathbf{h})$.
The discriminator distinguishes synthesized $\widehat{X}$ from the original
signal $X$ and is evaluated by two sets of discriminator networks. Multi-
period discriminators operate on equally spaced samples, and multi-scale
discriminators operate at different scales of the input signal. Overall, the
model attempts to minimize $D(X,\widehat{X})$ over all its parameters to train
ETS.
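As a rough illustration of the generator's upsampling trunk, the sketch below
stacks transposed 1-D convolutions whose strides multiply to HuBERT's
$320\times$ downsampling factor. It omits HiFi-GAN's multi-receptive-field
residual blocks and the discriminators, and all layer sizes are ours, not the
paper's.

```python
import torch
import torch.nn as nn

class TinyUpsampler(nn.Module):
    """Minimal sketch of a HiFi-GAN-style upsampling trunk: transposed
    1-D convolutions whose strides multiply to the total upsampling
    factor (4*4*4*5 = 320, matching HuBERT's 320x downsampling at
    16 kHz). The real generator interleaves residual dilated blocks
    between these stages to widen the receptive field."""
    def __init__(self, channels: int = 64, rates=(4, 4, 4, 5)):
        super().__init__()
        layers, ch = [], channels
        for r in rates:
            layers += [nn.ConvTranspose1d(ch, ch // 2, 2 * r, stride=r, padding=r // 2),
                       nn.LeakyReLU(0.1)]
            ch //= 2
        layers += [nn.Conv1d(ch, 1, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, h):      # h: (B, channels, T_hat) unit embeddings
        return self.net(h)     # -> (B, 1, ~320 * T_hat) waveform

g = TinyUpsampler()
print(g(torch.randn(1, 64, 10)).shape)  # roughly 320x longer in time
```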
### 3.3 Text encoder TTE
The third module we train, TTE is a text encoder that maps phoneme sequence
$P=(p_{1},\dots,p_{N})$ to embedding sequence
$\mathbf{h}_{p}=(h_{1},\dots,h_{\widehat{N}})$. We train a sequence-to-
sequence architecture to achieve this $\mathbf{h}_{p}\coloneqq E_{p}(P)$.
$E_{p}$ initially encodes $P$ into a sequence of fixed dimensional vectors
(phoneme embeddings), conditioned upon which its sequence generator produces
variable dimensional $\mathbf{h}_{p}$. Embedding $\mathbf{h}_{p}$ is intended
to mimic $\mathbf{h}_{r}\coloneqq E_{r}(X)$ extracted from the audio $X$
corresponding to the text $P$. Hence, the requirement of transcribed data
$(X,P)$ to derive the target $\mathbf{h}_{r}$ for training TTE by optimizing
over the parameters of $E_{p}$.
One could model $E_{p}$ to generate $\mathbf{h}_{p}$ autoregressively one step
at a time, which we refer to as AR-TTE model. See Figure 2(b) for an
illustration. Input phoneme sequence is encoded through a feed-forward
transformer block that stacks self-attention layers Vaswani et al. (2017) and
1D convolutions similar to FastSpeech2 Ren et al. (2019). Decoding for
$\mathbf{h}_{p}$ uses a transformer module with self-attention and cross-
attention. Future-masked self-attention attends to the ground truth at training time and
to previous decoder predictions at inference. Cross-attention attends to
phoneme encoding in both cases.
Alternatively, for a non-autoregressive choice of $E_{p}$, the model NAR-TTE
determines the output length $\widehat{N}$ followed by a step to
simultaneously predict all $\widehat{N}$ entries of $\mathbf{h}_{p}$. Figure
2(c) depicts NAR-TTE where the input phoneme sequence encoding is similar to
that of AR-TTE. The duration predictor and length regulator modules are
responsible for determining $\widehat{N}$ followed by the decoding step to
generate $\mathbf{h}_{p}$.
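The length-regulation step of NAR-TTE can be illustrated with a small sketch.
This follows the standard FastSpeech-style length regulator; the function name
and toy shapes are ours, not the paper's.

```python
import torch

def length_regulate(phoneme_enc: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Expand each phoneme encoding by its predicted duration so the
    output length N_hat = durations.sum() matches the unit sequence.
    phoneme_enc: (N, D) encodings; durations: (N,) non-negative ints."""
    return torch.repeat_interleave(phoneme_enc, durations, dim=0)

enc = torch.randn(4, 8)           # 4 phonemes, 8-dim encodings
dur = torch.tensor([3, 1, 2, 4])  # predicted frames per phoneme
out = length_regulate(enc, dur)
print(out.shape)                  # torch.Size([10, 8])
```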
 | Model | MOS $\uparrow$ | WER $\downarrow$
---|---|---|---
Traditional TTS | SS-FastSpeech2 | 3.87 | 4.52
 | SS-Tacotron2 | 3.90 | 4.59
 | FastSpeech2-SupASR | 3.78 | 4.72
 | Tacotron2-UnsupASR | 3.50 | 11.3
ParrotTTS | AR-TTE${}_{\text{LJS}}$+SS-ETS | 3.85 | 4.80
 | NAR-TTE${}_{\text{LJS}}$+SS-ETS | 3.86 | 4.58
 | NAR-TTE${}_{\frac{1}{2}\text{LJS}}$+SS-ETS | 3.81 | 6.14

Table 1: Subjective and objective comparison of the studied TTS models in the
single-speaker setting.
## 4 Experiments
We train multiple models of the ParrotTTS under different settings and
benchmark them against comparable models in the literature. Specifically, we
train single-speaker and multi-speaker models to evaluate naturalness,
intelligibility, and speaker adaptability. Naturalness is measured by mean-
opinion scores (MOS) from human judgments. Intelligibility is measured by
word-error rates from an ASR model on the rendered speech output. Speaker
adaptability is measured using Equal-Error-Rate from a pre-trained speaker
verification system. We perform these experiments with both autoregressive and
non-autoregressive choices of TTE.
### 4.1 ParrotTTS training
We use two public data sets for our experiments. LJSpeech Ito and Johnson
(2017) provides about 13k high-quality English transcribed audio clips
totaling about 24 hours from a single speaker. Data are split into two, with
512 samples set aside for validation and the remaining available for model
training. VCTK Veaux et al. (2017) with about 44 hours of transcribed speech
from 108 different speakers is used for the multi-speaker setup. It has a
minimum, average, and maximum of $7$, $22.8$, and $31$ minutes per speaker
speech length, respectively. All audio samples are resampled to $16$kHz before
use.
STE training. We use a $12$-layer transformer model for HuBERT, trained for
two epochs on the $960$-hour LibriSpeech corpus Panayotov et al. (2015), as
our STE module to extract $\mathbf{h}_{r}$ embeddings. The model splits each
audio clip of $T$ samples into $T/320$ units and maps each of the obtained
units to a $768$-dimensional vector. The vectors are drawn from the network’s
activation units on the sixth layer similar to that of Lakhotia et al. (2021).
Continuous vectors are then discretized to $\mathbf{h}_{r}$ embeddings using a
codebook made from applying $k$-means (with $k$ set to $100$) to $100$ hour
subset of the data called LibriSpeech-clean Panayotov et al. (2015).
TTE training. We use LJSpeech to train two different TTE encoder modules;
TTE${}_{\textsc{LJS}}$ that uses all the data from our LJSpeech train set and
a second, TTE${}_{\frac{1}{2}\textsc{LJS}}$ with only half the data. This is
used to understand the effect of training data size on our metrics. All
variants of TTE we experiment with are trained only on samples from the single
speaker in LJSpeech data.
Text converted to phoneme sequences as described by Sun et al. (2019) serves
as the input, with $\mathbf{h}_{r}$ targets extracted from the STE for training.
Additionally, NAR-TTE requires phonetic alignment to train the duration
predictor. We use Montreal forced-aligner McAuliffe et al. (2017) to generate
them for its training. Unlike standard TTS systems that predict Mel
spectrograms, TTE generates discrete units. Hence, we replace the mean-square
error loss used in Mels with cross-entropy with as many classes as clusters in
the discretization codebook.
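A minimal sketch of this loss substitution is shown below, with hypothetical
shapes: the TTE outputs logits over the $K=100$ unit classes and is scored
against STE-extracted target units with cross-entropy instead of the MSE used
for Mel regression.

```python
import torch
import torch.nn.functional as F

K = 100                # codebook size (number of k-means clusters)
batch, N_hat = 2, 50   # toy batch of unit sequences

# Hypothetical TTE outputs: unnormalized logits over the K unit classes
# at each output step, and STE-extracted target unit indices.
logits = torch.randn(batch, N_hat, K)
targets = torch.randint(0, K, (batch, N_hat))

# Cross-entropy over discrete units replaces the MSE used for Mel frames.
loss = F.cross_entropy(logits.reshape(-1, K), targets.reshape(-1))
print(loss.item())
```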
ETS training. We train a single-speaker ETS, SS-ETS using only speech clips
from LJSpeech since its training does not require transcriptions. Similarly,
our multi-speaker ETS, MS-ETS decoder model uses only raw audio of all
speakers from VCTK data Veaux et al. (2017). So only embeddings
$\mathbf{h}_{r}$ extracted from VCTK audio clips are used along with one-hot
speaker vector $\mathbf{h}_{s}$. We emphasize that VCTK data were used only in
training the multi-speaker-ETS module, and the TTE has not seen any from this
set.
### 4.2 Comparison to prior art
Single Speaker TTS. We compare against state-of-the-art TTS models from the
literature of both kinds; Tacotron2 Wang et al. (2017) from among
autoregressive models and FastSpeech2 Ren et al. (2020) from the non-
autoregressive models. Both models are trained using the ground truth
transcripts of LJspeech and referred to as SS-Tacotron2 and SS-FastSpeech2.
We additionally trained an unsupervised version of FastSpeech2 by replacing
the ground truth transcripts with transcriptions obtained from the ASR model.
FastSpeech2-SupASR uses the supervised ASR model of Radford et al. (2022) to
generate the transcripts, while Tacotron2-UnsupASR Ni et al. (2022) alternatively uses the
unsupervised ASR Wav2vec-U 2.0 Liu et al. (2022a). We compare against three
variants of ParrotTTS:
1. AR-TTE${}_{\text{LJS}}$+SS-ETS, an autoregressive TTE trained on full LJSpeech with the single-speaker ETS;
2. NAR-TTE${}_{\text{LJS}}$+SS-ETS, which pairs TTE with non-autoregressive decoding trained on full LJSpeech with the single-speaker ETS; and
3. NAR-TTE${}_{\frac{1}{2}\text{LJS}}$+SS-ETS, which uses TTE with non-autoregressive decoding trained on half of LJSpeech with the single-speaker ETS.
Multi-speaker TTS. In the multi-speaker setting, we compare against a fully
supervised Fastspeech2 baseline trained on VCTK with all its speakers using
the entire paired audio-transcript data that we refer to as MS-FastSpeech2. We
borrow the TTE module trained on LJSpeech and use the raw audio of VCTK to
train the multi-speaker ETS module. We refer to this multi-speaker variant of
our ParrotTTS model as NAR-TTE${}_{\text{LJS}}$+MS-ETS that uses non-
autoregressive decoding for TTE similar to the FastSpeech2 baseline trained on
LJSpeech alone and multi-speaker ETS trained on VCTK alone.
For a fair comparison, we also curate a multi-speaker TTS baseline using a
combination of single-speaker TTS and a voice cloning model. We use
FastSpeech2 trained on LJspeech with state-of-the-art voice cloning model
Polyak et al. (2021) in our experiments and refer to this model as VC-
FastSpeech2. We also compare against multi-speaker TTS trained by obtaining
pseudo labels from a supervised ASR called MS-FastSpeech2-SupASR. In all
multi-speaker experiments, we use one-hot encoding for speaker identity.
Additionally, we also report numbers from GT-Mel+Vocoder that converts ground
truth Mels from actual audio clip back to speech using a vocoder Kong et al.
(2020) for a perspective of best achievable with ideal Mel frames.
Model | VCTK Transcripts | MOS $\uparrow$ | WER $\downarrow$ | EER $\downarrow$
---|---|---|---|---
GT-Mel+Vocoder | Yes | 4.12 | 2.25 | 2.12
MS-FastSpeech2 | Yes | 3.62 | 5.32 | 3.21
MS-FastSpeech2-SupASR | No | 3.58 | 6.65 | 3.85
VC-FastSpeech2 | No | 3.41 | 7.44 | 8.18
NAR-TTE${}_{\text{LJS}}$+MS-ETS | No | 3.78 | 6.53 | 4.38
Table 2: Comparison of the studied multi-speaker TTS models on the VCTK
dataset. The second column indicates whether the corresponding method uses the
ground-truth VCTK transcripts during training.
### 4.3 Evaluation metrics
Naturalness is measured by mean opinion scores (MOS) from subjective listening
tests on a five-point Likert scale, with $1$ being “completely unnatural”
speech to $5$ indicating “completely natural” output. We randomly sample five
clips per model from the validation set for each of our forty subjects who are
proficient English speakers. They are asked to make quality judgments by
rating the naturalness of the synthesized speech samples. The average rating
of MOS is calculated and reported. Intelligibility is measured by the word
error rate of ASR transcriptions of the rendered speech. We use the
pre-trained Whisper small model Radford et al. (2022) for this.
We validate the speaker adaptability by reporting Equal Error Rate (EER) from
a pre-trained speaker verification network. Specifically, we use the
verification model proposed in Desplanques et al. (2020) trained on VoxCeleb2
Chung et al. (2018) with a $0.8$% EER on the test split of VoxCeleb1 Chung et
al. (2018).
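As an aside, EER can be computed from a list of trial scores as sketched
below; this is a generic implementation on toy scores, not the evaluation code
used in the paper.

```python
import numpy as np

def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """Equal Error Rate: the operating point where the false-acceptance
    rate (impostor trials accepted) equals the false-rejection rate
    (target trials rejected). labels: 1 for same-speaker trials, else 0."""
    thresholds = np.sort(np.unique(scores))
    fars = np.array([np.mean(scores[labels == 0] >= t) for t in thresholds])
    frrs = np.array([np.mean(scores[labels == 1] < t) for t in thresholds])
    i = np.argmin(np.abs(fars - frrs))
    return float((fars[i] + frrs[i]) / 2)

# Toy trial list: similarity scores and ground-truth labels.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.7, 0.1, 100),    # target trials
                         rng.normal(0.3, 0.1, 100)])   # impostor trials
labels = np.concatenate([np.ones(100), np.zeros(100)]).astype(int)
print(equal_error_rate(scores, labels))
```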
## 5 Results
Quantitative and qualitative results evaluating the proposed ParrotTTS system
are shown in Tables 1 and 2 for single-speaker and multi-speaker models,
respectively.
### 5.1 Single-speaker TTS
Naturalness and intelligibility. As shown in Table 1, ParrotTTS is competitive
with the state of the art in the single-speaker setting. In the autoregressive case,
our AR-TTE${}_{\textsc{LJS}}$+SS-ETS has a statistically insignificant drop
(of about $0.05$ units) on the MOS scale relative to the Tacotron2 baseline.
The non-autoregressive case has a similar observation (with a $0.01$ drop) on
MOS in our NAR-TTE${}_{\textsc{LJS}}$+SS-ETS model relative to FastSpeech2.
This empirically establishes that the naturalness of the speech rendered by
ParrotTTS is on par with the currently established methods. The WER scores
show a similar trend with a statistically insignificant drop (of under
$0.2$pp111Percentage points abbreviated as pp.) among the autoregressive and
non-autoregressive model classes.
Supervision and data efficiency. In the study to understand how the degree of
supervision affects TTS speech quality, we see a clear drop by $0.28$ MOS
units in moving from the FastSpeech2-SupASR model that employs supervised ASR
for transcriptions to Tacotron2-UnsupASR model using unsupervised ASR. Despite
some modeling variations, this is generally indicative of the importance of
clean transcriptions on TTS output quality, given that all other models are
within $0.05$ MOS units of each other.
The data requirement for TTS supervision needs to be understood in light of
this impact on output quality, and we show how ParrotTTS helps cut down on
this dependence. TTE is the only module that needs transcriptions to train,
and we show that by reducing the size of the train set by half in NAR-
TTE${}_{\frac{1}{2}\textsc{LJS}}$+SS-ETS the MOS is still comparable to that
of the model trained on all data NAR-TTE${}_{\textsc{LJS}}$+SS-ETS (with only
about $0.04$ units MOS drop). Finally, the MOS numbers of FastSpeech2-SupASR
need to be read with some caution, since the supervised ASR model used,
Whisper, is itself trained with plenty of transcriptions (spanning over $600$k
hours) from the web, including human- and machine-transcribed data, achieving
very low WERs on various public test sets. So, the machine transcriptions
used in FastSpeech2-SupASR are indeed very close to the ground truth.
Figure 3: Visualization of attention between output units and phonemes. (a)
Evolution of attention matrix with training steps. (b) Attention loss plotted
against training steps.
### 5.2 Multi-speaker TTS
Naturalness and intelligibility. Table 2 summarizes results from our multi-
speaker experiments. Among all methods listed in it, NAR-
TTE${}_{\textsc{LJS}}$+MS-ETS clearly outperforms all other models, ranking
only below re-synthesis from ground-truth Mels, GT-Mel+Vocoder. Interestingly,
ParrotTTS fares even better than MS-FastSpeech2, which is, in turn, better
than other models that ignore transcripts at train time, namely, MS-
FastSpeech2-SupASR and VC-FastSpeech2. On the WER metric for intelligibility,
ParrotTTS is about $1$pp behind supervised MS-FastSpeech2 but fares better
than the other two models that discard VCTK transcripts for training.
Speaker adaptability. VC-FastSpeech2 is the closest to ours in experimental
setup, since it is likewise limited to transcriptions from LJSpeech for
training, with VCTK used only for adaptation. In this case, the EER of NAR-
TTE${}_{\textsc{LJS}}$+MS-ETS is about twice as good as that of VC-
FastSpeech2. However, improvements are visible when VCTK transcripts are part
of the training data, though they remain under $1$ pp relative to ParrotTTS,
while GT-Mel+Vocoder continues to dominate the scoreboard, leaving room for
further improvement.
### 5.3 Stabler training and faster inference
In Figure 3, we compare training profiles of Tacotron2 and AR-TTE keeping
batch size the same. As visualized in Figure 3(a), the attention matrix in
Tacotron2 takes about $20$k iterations to stabilize with an anti-diagonal
structure and predict a phoneme-aligned Mel sequence. AR-TTE, in contrast, is
about ten times faster at predicting a discrete HuBERT unit sequence that
aligns with input phonemes taking only about $2$k iterations to arrive at a
similar-looking attention plot. While the snapshots are illustrative, we use
the guided-attention loss described by Tachibana et al. (2018) as a metric to
quantify the evolution of the attention matrix through training steps. As
shown in Figure 3(b), the loss dives down a lot sooner for ParrotTTS relative
to its Tacotron2 counterpart. In a similar comparison, we observe that NAR-TTE
converges ($20$k steps) about eight times faster than FastSpeech2 ($160$k
steps).
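For reference, the guided-attention metric of Tachibana et al. (2018) used
above can be sketched as follows; the matrix shape and the width parameter
$g$ here are illustrative choices, not values from the paper.

```python
import torch

def guided_attention_loss(attn: torch.Tensor, g: float = 0.2) -> torch.Tensor:
    """Guided-attention loss (Tachibana et al., 2018): attn is an (N, T)
    attention matrix over N decoder steps and T encoder steps. Weights
    W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)) grow with distance
    from the diagonal, so off-diagonal attention mass is penalized."""
    N, T = attn.shape
    n = torch.arange(N).unsqueeze(1) / N
    t = torch.arange(T).unsqueeze(0) / T
    W = 1.0 - torch.exp(-((n - t) ** 2) / (2 * g ** 2))
    return (attn * W).mean()

attn = torch.softmax(torch.randn(120, 40), dim=1)  # toy attention matrix
print(guided_attention_loss(attn).item())
```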
We suppose that the faster convergence derives from the lower variance of the
discrete embeddings in ParrotTTS, as opposed to the richness of Mels, which
carry all acoustic variations, including speaker identity, prosody, etc. The
output speech is independent of the inputs given the Mel-spectrogram, unlike
ParrotTTS embeddings, which still need cues like speaker identity in the later
ETS module. We hypothesize that segregating content mapping away from learning
acoustics like speaker identity helps improve training stability, convergence,
and data efficiency for the TTE encoder.
The proposed NAR-TTE system also improves inference latency and memory
footprint, which are crucial factors for real-world deployment. On an NVIDIA
RTX $2080$ Ti GPU, we observe that ParrotTTS serves about 15% faster than
FastSpeech2, reducing the average per-utterance inference time from 13 ms to
11 ms.
Furthermore, the TTE module uses $17$M parameters in contrast to $35$M
parameters of the Mel synthesizer module in Fastspeech2.
## 6 Conclusion, limitations and future work
In this work, we proposed ParrotTTS, a fast, high quality, and efficient to
train TTS system. The two-stage learning process of ParrotTTS is designed to
leverage untranscribed speech data and the corresponding self-supervised
embeddings. We show that even when trained using transcribed data of a single
speaker from the LJSpeech dataset, ParrotTTS can synthesize speech in 108
different voices of the VCTK corpus. In terms of naturalness of speech,
ParrotTTS outperforms the established prior art and alternative baselines by a
noticeable margin in the multi-speaker setup. On single speaker benchmarks,
ParrotTTS provides competitive performance compared to the prior art. Overall,
our work paves the way for further explorations towards exploiting SSL in TTS
models.
Our experiments are limited to a single language. A deeper study exploring
multiple languages, effects of background noise, accents, and other
demographic variations is left for future work. The current pre-trained HuBERT
model skips prosody information Kharitonov et al. (2021), so the model has no
levers to control these prosodic variations. We want to study ways to bring
prosodic controllability into ParrotTTS. Further, it would be essential to
improve TTE training to handle noisy samples, with which the current model
does not work well, so as to leverage weak supervision at scale.
## 7 Ethical Considerations
Our research is founded on ethical considerations. We are excited about the
potential of text-to-speech synthesis to push the frontier of technology, such
as in accessibility (giving voice to the voiceless), human-computer
interaction, telecommunications, and education. However, there is the
potential for misuse. Notably, multi-speaker text-to-speech systems have
raised concerns about unethical cloning. Our experiments are limited to publicly
available datasets, and our method is not intended for synthesizing someone’s
voice without their permission. Another potential misuse is creating an audio
file of someone supposedly speaking words they never actually uttered. We are
keenly aware of these negative consequences. While the benefits outweigh the
concerns at this point, we firmly believe that the research community should
proactively continue to identify methods for detecting and preventing misuse.
Our approach is trained on western speech data and has yet to be validated on
different languages or people with speech impediments. As such, the dataset
and results are not representative of the population. A deeper understanding
of this issue requires future studies in tandem with linguistic and socio-
cultural insights.
## References
* Baevski et al. (2021) Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised speech recognition. _Advances in Neural Information Processing Systems_ , 34:27826–27839.
* Baevski et al. (2020) Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. _Advances in Neural Information Processing Systems_ , 33:12449–12460.
* Chung et al. (2018) Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. 2018. Voxceleb2: Deep speaker recognition. _arXiv preprint arXiv:1806.05622_.
* Cohn and Zellou (2020) Michelle Cohn and Georgia Zellou. 2020. Perception of concatenative vs. neural text-to-speech (tts): Differences in intelligibility in noise and language attitudes. In _Proceedings of Interspeech_.
* Desplanques et al. (2020) Brecht Desplanques, Jenthe Thienpondt, and Kris Demuynck. 2020. Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification. _arXiv preprint arXiv:2005.07143_.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Graves (2013) Alex Graves. 2013. Generating sequences with recurrent neural networks. _arXiv preprint arXiv:1308.0850_.
* Griffin and Lim (1984) Daniel Griffin and Jae Lim. 1984. Signal estimation from modified short-time fourier transform. _IEEE Transactions on acoustics, speech, and signal processing_ , 32(2):236–243.
* He et al. (2022) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 16000–16009.
* Hsu et al. (2021) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 29:3451–3460.
* Hunt and Black (1996) Andrew J Hunt and Alan W Black. 1996. Unit selection in a concatenative speech synthesis system using a large speech database. In _1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings_ , volume 1, pages 373–376. IEEE.
* Ito and Johnson (2017) Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/LJ-Speech-Dataset/.
* Jia et al. (2018) Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu, et al. 2018. Transfer learning from speaker verification to multispeaker text-to-speech synthesis. _Advances in neural information processing systems_ , 31.
* Kharitonov et al. (2021) Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu-Anh Nguyen, Morgane Rivière, Abdelrahman Mohamed, Emmanuel Dupoux, et al. 2021. Text-free prosody-aware generative spoken language modeling. _arXiv preprint arXiv:2109.03264_.
* Kim et al. (2021) Jaehyeon Kim, Jungil Kong, and Juhee Son. 2021. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. In _International Conference on Machine Learning_ , pages 5530–5540. PMLR.
* Kolata (1984) Gina Kolata. 1984. Studying learning in the womb: behavioral scientists are using established experimental methods to show that fetuses can and do learn. _Science_ , 225(4659):302–303.
* Kong et al. (2020) Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. _Advances in Neural Information Processing Systems_ , 33:17022–17033.
* Kreuk et al. (2020) Felix Kreuk, Joseph Keshet, and Yossi Adi. 2020. Self-supervised contrastive learning for unsupervised phoneme segmentation. _Interspeech_.
* Kuhl and Meltzoff (1996) Patricia K Kuhl and Andrew N Meltzoff. 1996. Infant vocalizations in response to speech: Vocal imitation and developmental change. _The journal of the Acoustical Society of America_ , 100(4):2425–2438.
* Lakhotia et al. (2021) Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. On generative spoken language modeling from raw audio. _Transactions of the Association for Computational Linguistics_ , 9:1336–1354.
* Łańcucki (2021) Adrian Łańcucki. 2021. Fastpitch: Parallel text-to-speech with pitch prediction. In _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 6588–6592. IEEE.
* Liu et al. (2022a) Alexander H Liu, Wei-Ning Hsu, Michael Auli, and Alexei Baevski. 2022a. Towards end-to-end unsupervised speech recognition. _arXiv preprint arXiv:2204.02492_.
* Liu et al. (2022b) Alexander H Liu, Cheng-I Jeff Lai, Wei-Ning Hsu, Michael Auli, Alexei Baevskiv, and James Glass. 2022b. Simple and effective unsupervised speech synthesis. _Interspeech_.
* Locke (1994) John L Locke. 1994. Phases in the child’s development of language. _American Scientist_ , 82(5):436–445.
* Locke (1996) John L Locke. 1996. Why do infants begin to talk? language as an unintended consequence. _Journal of child language_ , 23(2):251–268.
* Luong and Yamagishi (2019) Hieu-Thi Luong and Junichi Yamagishi. 2019. A unified speaker adaptation method for speech synthesis using transcribed and untranscribed speech with backpropagation. _arXiv preprint arXiv:1906.07414_.
* McAuliffe et al. (2017) Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal forced aligner: Trainable text-speech alignment using kaldi. In _Interspeech_ , volume 2017, pages 498–502.
* Ni et al. (2022) Junrui Ni, Liming Wang, Heting Gao, Kaizhi Qian, Yang Zhang, Shiyu Chang, and Mark Hasegawa-Johnson. 2022. Unsupervised text-to-speech synthesis by unsupervised automatic speech recognition. _arXiv preprint arXiv:2203.15796_.
* Oord et al. (2016) Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. _arXiv preprint arXiv:1609.03499_.
* Panayotov et al. (2015) Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In _2015 IEEE international conference on acoustics, speech and signal processing (ICASSP)_ , pages 5206–5210. IEEE.
* Polyak et al. (2021) Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled self-supervised representations. _Interspeech_.
* Radford et al. (2022) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. _OpenAI Blog_.
* Ren et al. (2020) Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020\. Fastspeech 2: Fast and high-quality end-to-end text to speech. _arXiv preprint arXiv:2006.04558_.
* Ren et al. (2019) Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019\. Fastspeech: Fast, robust and controllable text to speech. _Advances in Neural Information Processing Systems_ , 32.
* Schneider et al. (2019) Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. _Interspeech_.
* Shen et al. (2018) Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. 2018\. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In _2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)_ , pages 4779–4783. IEEE.
* Sun et al. (2019) Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. 2019. Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion. In _Proc. Interspeech 2019_ , pages 2115–2119.
* Tachibana et al. (2018) Hideyuki Tachibana, Katsuya Uenoyama, and Shunsuke Aihara. 2018. Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention. In _2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 4784–4788. IEEE.
* Taigman et al. (2017) Yaniv Taigman, Lior Wolf, Adam Polyak, and Eliya Nachmani. 2017. Voiceloop: Voice fitting and synthesis via a phonological loop. _arXiv preprint arXiv:1707.06588_.
* Tan et al. (2021) Xu Tan, Tao Qin, Frank Soong, and Tie-Yan Liu. 2021. A survey on neural speech synthesis. _arXiv preprint arXiv:2106.15561_.
* Valle et al. (2020) Rafael Valle, Kevin Shih, Ryan Prenger, and Bryan Catanzaro. 2020. Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis. _arXiv preprint arXiv:2005.05957_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30.
* Veaux et al. (2017) Christophe Veaux, Junichi Yamagishi, and Kirsten MacDonald. 2017. Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit.
* Wang et al. (2017) Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. 2017. Tacotron: Towards end-to-end speech synthesis. _arXiv preprint arXiv:1703.10135_.
* Yan et al. (2021) Yuzi Yan, Xu Tan, Bohan Li, Tao Qin, Sheng Zhao, Yuan Shen, and Tie-Yan Liu. 2021\. Adaspeech 2: Adaptive text to speech with untranscribed data. In _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 6613–6617. IEEE.
# Insensitizing controls for a fourth order semi-linear parabolic equation
Bo You (Email address: <EMAIL_ADDRESS>)
School of Mathematics and Statistics, Xi’an Jiaotong University
Xi’an, 710049, P. R. China
Fang Li (Email address: <EMAIL_ADDRESS>)
School of Mathematics and Statistics, Xidian University
Xi’an, 710071, P. R. China
###### Abstract
This paper is concerned with the existence of insensitizing controls for a
fourth order semilinear parabolic equation. Here, the initial data is
partially unknown, and we would like to find controls such that a specific
functional is insensitive to small perturbations of the initial data. In
general, this kind of problems can be recast as a null controllability problem
for a nonlinear cascade system. We will first prove a null controllability
result for a linear problem by global Carleman estimates and dual arguments.
Then, by virtue of Leray-Schauder’s fixed points theorem, we conclude the null
controllability for the cascade system in the semi-linear case.
Keywords: Carleman estimates; Insensitizing controls; Null controllability;
Leray-Schauder’s fixed point theorem.
Mathematics Subject Classification (2010) : 35Q93; 49J20; 90C31; 93B05; 93C20;
93C41.
## 1 Introduction
Let $D\subset\mathbb{R}^{n}$ $(n\geq 2)$ be a nonempty bounded connected open
set with smooth boundary $\partial D,$ let $T>0,$ and let $\omega\subset D$ be
a small nonempty open subset, usually referred to as the control domain. Denote
by $Q=D\times(0,T),$ $\Sigma=\partial D\times(0,T),$
$Q_{\omega}=\omega\times(0,T).$ Let $\mathcal{O}\subset D$ be another open set
which is the so-called observation set.
In this paper, we mainly consider the following semilinear fourth order
parabolic equation with incomplete data:
$\begin{cases}\frac{\partial y}{\partial
t}+\Delta^{2}y+a_{0}y+B_{0}\cdot\nabla y+B:\nabla^{2}y+a_{1}\Delta
y=F(y,\nabla y,\nabla^{2}y)+v\chi_{\omega}+f,\,\,\,\,\forall\,\,\,(x,t)\in
Q,\\\ y=\Delta y=0,\,\,\,\,\,\forall\,\,\,\,(x,t)\in\Sigma,\\\
y(x,0)=y_{0}(x)+\tau\hat{y}_{0}(x),\,\,\,\,\forall\,\,\,\,x\in D.\end{cases}$
(1.1)
Here, the functions $a_{0},$ $a_{1}\in L^{\infty}(Q;\mathbb{R}),$
$B_{0}=(B_{01},B_{02},\cdots,B_{0n})\in L^{\infty}(Q;\mathbb{R}^{n}),$
$B=(B_{ij})_{n\times n}\in L^{\infty}(Q;\mathbb{R}^{n^{2}}),$ $f\in L^{2}(Q)$
is a given externally applied force, the function
$F:\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}}\rightarrow\mathbb{R}$
is locally Lipschitz continuous, $\chi_{\omega}$ is the characteristic
function of the set $\omega,$ $v\in L^{2}(Q_{\omega})$ is a control function
to be determined and the initial data $y(x,0)$ is partially unknown in the
following sense:
(a) $y_{0}\in L^{2}(D)$ is known.
(b) $\hat{y}_{0}\in L^{2}(D)$ is unknown with $\|\hat{y}_{0}\|_{L^{2}(D)}=1.$
(c) $\tau$ is a small unknown real number.
Let $y$ be the solution of problem (1.1) associated with $\tau$ and $v.$ We
observe the solution of problem (1.1) via some functional $\Phi(y),$ which is
called the sentinel. Here, the sentinel is defined via the square of the local
$L^{2}$-norm of the state variable:
$\displaystyle\Phi(y)=\frac{1}{2}\int_{0}^{T}\int_{\mathcal{O}}|y(x,t)|^{2}\,dxdt.$
(1.2)
A control function $v$ is said to insensitize the functional $\Phi,$ if
$\displaystyle\frac{\partial\Phi(y)}{\partial\tau}|_{\tau=0}=0,\,\,\,\,\forall\,\,\,\hat{y}_{0}\in
L^{2}(D)\,\,\,\textit{with}\,\,\,\|\hat{y}_{0}\|_{L^{2}(D)}=1.$ (1.3)
Thus, the insensitizing control problem is to seek a control $v$ such that the
uncertainty in the initial data does not affect the measurement $\Phi,$ at
least at first order.
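To make the reduction below transparent, note that formally differentiating
the sentinel (1.2) under the integral sign gives (cf. [2, 31])

$\displaystyle\frac{\partial\Phi(y)}{\partial\tau}\Big|_{\tau=0}=\int_{0}^{T}\int_{\mathcal{O}}y\big|_{\tau=0}\,y_{\tau}\,dxdt,\qquad y_{\tau}:=\frac{\partial y}{\partial\tau}\Big|_{\tau=0},$

where $y_{\tau}$ solves the linearization of problem (1.1) with initial datum
$\hat{y}_{0}.$ Condition (1.3) requires this pairing to vanish for every
unit-norm $\hat{y}_{0},$ and a duality argument converts this requirement into
the extra condition (1.5) on the adjoint state $q$ of the cascade system (1.4)
introduced below.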
To the best of our knowledge, this kind of insensitizing control problem was
first considered by J. L. Lions in [31]. Later, in [2, 30], the authors
reformulated the insensitization problem with this kind of the sentinel $\Phi$
as a null controllability problem for a cascade system. Inspired by these
works, there have been many results concerning the existence of insensitizing
controls in all kinds of different contexts. Initially, the existence of
approximate insensitizing controls (i.e., such that
$\left|\partial_{\tau}\Phi(y)|_{\tau=0}\right|\leq\epsilon$) was proved in [2]
for a semilinear heat system with $\mathcal{C}^{1}$ and globally Lipschitz
nonlinearities. In [16], the author proved for the linear heat equation that
we cannot expect insensitivity to hold for all initial data, except when the
control acts everywhere in $\Omega.$ Regarding the class of initial data that
can be insensitized, the results in [18] also give different results of
positive and negative nature. Later, the results in [2] were generalized in
[3, 4, 38] to superlinear heat equations with nonlinear terms depending on the
state and/or its gradient. In particular, there are some results about the
existence of insensitizing controls for the parabolic equation with different
boundary conditions. For example, the authors in [41] proved the existence of
insensitizing controls for the parabolic equations with dynamic boundary
conditions. The existence of a local insensitizing control for the semilinear
parabolic equations with nonlinear Fourier boundary conditions was established
in [5]. Moreover, the author in [32] proved the existence of insensitizing
controls for the quasilinear parabolic equations. The existence of
insensitizing controls for a phase field system was proved in [6].
Additionally, the existence of insensitizing controls has been studied for the
Navier-Stokes equations and the Boussinesq system (see [7, 10, 11, 26]) and
for semilinear wave equations (see [1, 37]). It is worth mentioning that a
different type of sentinel, consisting of the gradient of the solution of a
parabolic equation, was treated in [24, 36], and the case of the curl for the
Stokes system in [23].
Adapting the computations in [2] to problem (1.1)-(1.3), we conclude that the
existence of a control $v$ such that (1.3) holds is equivalent to the
existence of a control $v$ such that the solution $(y,q)$ of problem
$\begin{cases}\frac{\partial y}{\partial
t}+\Delta^{2}y+a_{0}y+B_{0}\cdot\nabla y+B:\nabla^{2}y+a_{1}\Delta
y=F(y,\nabla y,\nabla^{2}y)+\chi_{\omega}v+f,\,\,\,\,\forall\,\,\,(x,t)\in
Q,\\\ -\frac{\partial q}{\partial
t}+\Delta^{2}q+a_{0}q-\nabla\cdot(B_{0}q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}q)}{\partial
x_{i}\partial x_{j}}+\Delta(a_{1}q)=F_{y}(y,\nabla y,\nabla^{2}y)q\\\
-\nabla\cdot(\nabla_{p}F(y,\nabla
y,\nabla^{2}y)q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(F_{r_{ij}}(y,\nabla
y,\nabla^{2}y)q)}{\partial x_{i}\partial
x_{j}}+\chi_{\mathcal{O}}y,\,\,\,\,\forall\,\,\,(x,t)\in Q,\\\ y=\Delta
y=0,\,\,\,q=\Delta q=0,\,\,\forall\,\,\,\,(x,t)\in\Sigma,\\\
y(x,0)=y_{0}(x),\,\,q(x,T)=0,\,\,\forall\,\,\,\,x\in D\end{cases}$ (1.4)
verifying
$\displaystyle q(x,0)=0,\,\,\,\,\forall\,\,\,x\in D,$ (1.5)
where $p=\nabla y$ and $r_{ij}=\frac{\partial^{2}y}{\partial x_{i}\partial
x_{j}}.$
In recent years, there have been some results on the controllability of
fourth order parabolic equations, in both one dimension (see [8, 9, 12, 13,
14, 22, 27, 33]) and higher dimensions (see [19, 25, 28, 34, 40]). In
particular, the approximate controllability and non-approximate
controllability of higher order parabolic equations were studied in [19]. The
author in [40] proved the null controllability of fourth order parabolic
equations by using the ideas of [29]. It is worth mentioning that the Carleman
inequality for a fourth order parabolic equation with $n\geq 2$ was first
established in [25]. Later, the author in [28] proved the null controllability
and the exact controllability to the trajectories at any time $T>0$ for fourth
order semi-linear parabolic equations with a control function acting in the
interior. The null controllability for fourth order stochastic parabolic
equations was proved by duality arguments and new global Carleman estimates in
[34]. A unified weighted inequality for fourth-order partial differential
operators was given in [15]; moreover, the authors applied it to obtain a
log-type stabilization result for the plate equation. Recently, in [39] we
established global Carleman estimates for fourth order parabolic equations
with low regularity terms, subject to homogeneous Dirichlet boundary
conditions on $y$ as well as $\Delta y,$ and applied them to null
controllability. However, there are no results concerning the existence of
insensitizing controls for fourth order semilinear parabolic equations. Since
the insensitizing control problem describes a kind of stability of system
(1.1) with respect to the initial data, it is very meaningful to investigate
the existence of insensitizing controls for problem (1.1).
The main objective of this paper is to study the insensitizing control
problem (1.1)-(1.3). Inspired by the work in [2], we conclude that the
insensitizing control problem (1.1)-(1.3) is equivalent to the partial null
controllability of problem (1.4). Thus, we need to establish an observability
inequality for the adjoint problem (3.1) of the linearized system of problem
(1.4), based on duality arguments. Since problem (1.4) is coupled, we choose a
suitable cut-off function and combine it with the global Carleman estimates to
obtain an inequality of the form
$\displaystyle\int_{0}^{T}\int_{\mathcal{O}}|\varphi|^{2}e^{2s\alpha}\,dxdt\leq
C\int_{Q_{\omega_{1}}}|\psi|^{2}\,dxdt$
for a suitable subset $\omega_{1}$ of $\omega\cap\mathcal{O}$ and weight
function $\alpha,$ which entails the desired observability inequality for
problem (3.1).
Throughout this paper, we will always suppose that
$\omega\cap\mathcal{O}\neq\emptyset$, a condition that has always been imposed
in the study of insensitizing controls. We note, however, that in [17] it has
been proved that this is not a necessary condition for
$\epsilon$-insensitizing controls for some linear parabolic equations (see
also [35]). We shall also assume that $y_{0}\equiv 0,$ which is a classical
hypothesis in insensitization problems.
The rest of this paper is organized as follows: in Section 2, we recall some
global Carleman estimates and prove a technical lemma. In Section 3, we prove
an observability inequality, which implies the existence of insensitizing
controls for fourth order linear parabolic equations. Section 4 is devoted to
the existence of insensitizing controls in the semilinear case.
## 2 Preliminaries
In this section, we recall the Carleman inequalities for fourth order
parabolic equations and some lemmas used in the sequel. To this end, we first
introduce the following weight functions.
###### Lemma 2.1.
([21]) Let $\omega_{0}\subset\subset D$ be an arbitrary fixed subdomain of $D$
such that $\overline{\omega_{0}}\subset\omega.$ Then there exists a function
$\eta\in\mathcal{C}^{4}(\overline{D})$ such that
$\displaystyle\eta(x)>0,\,\,\,\,\forall\,\,\,x\in
D;\,\,\,\,\eta(x)=0,\,\,\,\,\forall\,\,\,x\in\partial
D;\,\,\,\,|\nabla\eta(x)|>0,\,\,\,\,\forall\,\,\,x\in\overline{D\backslash\omega_{0}}.$
In order to state the global Carleman inequality, we define some weight
functions:
$\displaystyle\alpha_{0}(x)=e^{\lambda(2\|\eta\|_{L^{\infty}(D)}+\eta(x))}-e^{4\lambda\|\eta\|_{L^{\infty}(D)}},\,\,\,\,\xi_{0}(x)=e^{\lambda(2\|\eta\|_{L^{\infty}(D)}+\eta(x))},$
(2.1)
$\displaystyle\alpha(x,t)=\frac{\alpha_{0}(x)}{\sqrt{t(T-t)}},\,\,\,\xi(x,t)=\frac{e^{\lambda(2\|\eta\|_{L^{\infty}(D)}+\eta(x))}}{\sqrt{t(T-t)}}.$
(2.2)
Moreover, they possess the following properties:
$\displaystyle\nabla\alpha=\nabla\xi=\lambda\xi\nabla\eta,\,\,\xi^{-1}\leq\frac{T}{2},\,\,|\alpha_{t}|+|\xi_{t}|\leq\frac{T}{2}\xi^{3},\,\,\forall\,\,(x,t)\in
Q.$ (2.3)
###### Lemma 2.2.
(see [25, 39]) Assume that $z_{0}\in L^{2}(D),$ $g\in L^{2}(Q)$ and the
functions $\alpha,$ $\xi$ are defined by (2.2). Then there exists
$\hat{\lambda}>0$ such that for an arbitrary $\lambda\geq\hat{\lambda},$ we
can choose $s_{0}=s_{0}(\lambda)>0$ satisfying: there exists a constant
$C=C(\lambda)>0$ independent of $s,$ such that the solution $z\in L^{2}(Q)$ to
problem
$\begin{cases}L^{*}z=-\frac{\partial z}{\partial
t}+\Delta^{2}z=g,\,\,\,\,\textit{in}\,\,\,Q,\\\ z=\Delta
z=0,\,\,\,\,\,\textit{on}\,\,\,\,\Sigma,\\\
z(x,T)=z_{0}(x),\,\,\,\,\textit{in}\,\,\,\,D,\end{cases}$ (2.4)
satisfies the following inequality:
1. (i)
If $g\in L^{2}(Q),$ for any $\lambda\geq\hat{\lambda}$ and any $s\geq
s_{0}(\lambda)(\sqrt{T}+T),$ one has
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|z|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla
z|^{2}+s^{3}\lambda^{4}\xi^{3}|\Delta
z|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}z|^{2}+s\lambda^{2}\xi|\nabla\Delta
z|^{2}\right)\,dxdt$
$\displaystyle+\int_{Q}e^{2s\alpha}\left(\frac{1}{s\xi}(|z_{t}|^{2}+|\Delta^{2}z|^{2})\right)\,dxdt$
$\displaystyle\leq
C\left(\int_{Q_{\omega}}s^{7}\lambda^{8}\xi^{7}|z|^{2}e^{2s\alpha}\,dxdt+\int_{Q}|g|^{2}e^{2s\alpha}\,dxdt\right)$
2. (ii)
If $g=g_{0}+\sum_{i=1}^{n}\frac{\partial g_{i}}{\partial
x_{i}}-\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}z)}{\partial x_{i}\partial
x_{j}}-\Delta(a_{1}z)$ with $g_{i}\in L^{2}(Q)$ for any $0\leq i\leq n,$ and
$a_{1}\in L^{\infty}(Q;\mathbb{R}),$ $B=(B_{ij})_{n\times n}\in
L^{\infty}(Q;\mathbb{R}^{n^{2}}),$ then
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|z|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla
z|^{2}+s^{2}\lambda^{4}\xi^{2}|\Delta
z|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}z|^{2}\right)\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\int_{Q}\left(|g_{0}|^{2}+\sum_{i=1}^{n}(s\lambda\xi)^{2}|g_{i}|^{2}\right)e^{2s\alpha}\,dxdt+C\int_{Q_{\omega}}s^{7}\lambda^{8}\xi^{7}|z|^{2}e^{2s\alpha}\,dxdt$
for any
$\lambda\geq\hat{\lambda}(1+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(\sqrt{T}+T).$
In what follows, we also prove the following technical lemma, which will be
used to establish an observability inequality.
###### Lemma 2.3.
Let the functions $\alpha_{0},$ $\xi_{0},$ $\alpha,$ $\xi$ be defined by (2.1)
and (2.2), denote by $m_{0}=\min\limits_{x\in D}\alpha_{0}(x),$
$M_{0}=\max\limits_{x\in D}\alpha_{0}(x)<0,$ $n_{0}=\min\limits_{x\in
D}\xi_{0}(x)>0$ and $N_{0}=\max\limits_{x\in D}\xi_{0}(x).$ Then the following
conclusions hold:
1. (1)
For any $s\geq\frac{4T}{|M_{0}|},$ we have
$\displaystyle s^{16}\xi^{16}e^{2s\alpha}\leq
2^{48}\left(\frac{N_{0}}{M_{0}e}\right)^{16}$
for any $(x,t)\in Q.$
2. (2)
For any $s\geq 0,$ we have
$\displaystyle s^{6}\xi^{6}e^{2s\alpha}\geq A_{s}e^{-\frac{M_{s}}{\sqrt{t}}}$
for any $(x,t)\in D\times(0,\frac{T}{2}),$ where
$\displaystyle
A_{s}=\frac{(2sn_{0})^{6}}{T^{6}}e^{\frac{-2|m_{0}|s}{T}},\,\,\,\,M_{s}=\frac{2|m_{0}|s}{\sqrt{T}}.$
3. (3)
For any $s\geq 0,$ we have
$\displaystyle\xi^{-6}e^{-2s\alpha}\leq(2n_{0})^{-6}T^{6}e^{\frac{8|m_{0}|s}{\sqrt{3}T}}$
for any $(x,t)\in D\times(\frac{T}{4},\frac{3T}{4}).$
###### Proof.
1. (i)
Let $\alpha_{0},$ $\alpha,$ $N_{0}$ and $M_{0}$ be as in the statement. Then
we have
$\displaystyle
s^{16}\xi^{16}e^{2s\alpha}\leq(sN_{0})^{16}e^{-\frac{2|M_{0}|s}{\sqrt{t(T-t)}}}t^{-8}(T-t)^{-8}=f_{s}(t)=\frac{1}{g_{s}(t)}$
for any $s>0$ and any $t\in(0,T).$ In what follows, we will give a lower bound
of $g_{s}(t)$ on $(0,T).$ Since
$\displaystyle
g^{\prime}_{s}(t)=\frac{1}{(sN_{0})^{16}}e^{\frac{2|M_{0}|s}{\sqrt{t(T-t)}}}t^{\frac{13}{2}}(T-t)^{\frac{13}{2}}(T-2t)\left\{8\sqrt{t(T-t)}-|M_{0}|s\right\},$
we deduce that for any $s\geq\frac{4T}{|M_{0}|},$ the function $g_{s}$ is
strictly decreasing on $(0,\frac{T}{2})$ and strictly increasing on
$(\frac{T}{2},T).$ Thus, we have
$\displaystyle f_{s}(t)\leq f_{s}(\frac{T}{2})=2^{16}T^{-16}N_{0}^{16}G(s)$
for any $t\in(0,T),$ with $G(s)=s^{16}e^{-\frac{4|M_{0}|s}{T}}.$ Since
$\displaystyle
G^{\prime}(s)=4s^{15}e^{-\frac{4|M_{0}|s}{T}}(4-\frac{|M_{0}|s}{T}),$
the function $G$ is strictly decreasing on $(\frac{4T}{|M_{0}|},+\infty).$
Thus, for every $s\geq\frac{4T}{|M_{0}|},$ we have
$\displaystyle s^{16}\xi^{16}e^{2s\alpha}\leq
2^{16}T^{-16}N_{0}^{16}G(\frac{4T}{|M_{0}|})=2^{48}e^{-16}\left(\frac{N_{0}}{M_{0}}\right)^{16}$
for any $(x,t)\in Q.$
2. (ii)
First of all, notice that
$\displaystyle
s^{6}\xi^{6}e^{2s\alpha}\geq(sn_{0})^{6}e^{-\frac{2|m_{0}|s}{\sqrt{Tt}}}t^{-3}(T-t)^{-3}e^{-\frac{2|m_{0}|s}{\sqrt{t(T-t)}}+\frac{2|m_{0}|s}{\sqrt{Tt}}}$
and for any $s\geq 0,$
$\displaystyle-\frac{2|m_{0}|s}{\sqrt{t(T-t)}}+\frac{2|m_{0}|s}{\sqrt{Tt}}=$
$\displaystyle-\frac{2|m_{0}|s\sqrt{t}}{\sqrt{T(T-t)}(\sqrt{T}+\sqrt{T-t})}$
$\displaystyle\geq$ $\displaystyle-\frac{2|m_{0}|s}{T}$
for any $t\in(0,\frac{T}{2}).$ Therefore, for any $s\geq 0,$ we obtain
$\displaystyle s^{6}\xi^{6}e^{2s\alpha}\geq$
$\displaystyle(sn_{0})^{6}e^{-\frac{2|m_{0}|s}{\sqrt{Tt}}}t^{-3}(T-t)^{-3}e^{-\frac{2|m_{0}|s}{T}}$
$\displaystyle\geq$
$\displaystyle\frac{(2sn_{0})^{6}}{T^{6}}e^{-\frac{2|m_{0}|s}{T}}e^{-\frac{2|m_{0}|s}{\sqrt{Tt}}}$
for any $t\in(0,\frac{T}{2}).$
3. (iii)
Thanks to
$\displaystyle\frac{1}{\sqrt{t(T-t)}}\leq\frac{4}{\sqrt{3}T}$
for any $t\in(\frac{T}{4},\frac{3T}{4}),$ we obtain
$\displaystyle\xi^{-6}e^{-2s\alpha}\leq
n_{0}^{-6}e^{\frac{2|m_{0}|s}{\sqrt{t(T-t)}}}2^{-6}T^{6}\leq(2n_{0})^{-6}T^{6}e^{\frac{8|m_{0}|s}{\sqrt{3}T}}$
for any $(x,t)\in D\times(\frac{T}{4},\frac{3T}{4}).$
∎
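The bounds of Lemma 2.3 are elementary and can be sanity-checked numerically.
The sketch below does so for a hypothetical one-dimensional configuration
($D=(0,1)$, $\eta(x)=x(1-x)$, $\lambda=T=1$; these choices are ours and are
not required by the lemma), and plays no role in the proofs.

```python
import numpy as np

# Numerical sanity check of Lemma 2.3 for a hypothetical 1-D configuration.
lam, T = 1.0, 1.0
x = np.linspace(1e-3, 1 - 1e-3, 401)
t = np.linspace(1e-4, T - 1e-4, 401)
X, Tt = np.meshgrid(x, t, indexing="ij")

eta = X * (1 - X)
eta_max = 0.25                                   # sup of eta on D
alpha0 = np.exp(lam * (2 * eta_max + eta)) - np.exp(4 * lam * eta_max)
xi0 = np.exp(lam * (2 * eta_max + eta))
root = np.sqrt(Tt * (T - Tt))
alpha, xi = alpha0 / root, xi0 / root

m0, M0 = alpha0.min(), alpha0.max()              # both negative here
n0, N0 = xi0.min(), xi0.max()
s = 4 * T / abs(M0)                              # smallest admissible s in (1)

# (1): s^16 xi^16 e^{2 s alpha} <= 2^48 (N0/(M0 e))^16 on Q.
lhs = (s * xi) ** 16 * np.exp(2 * s * alpha)
print("(1):", lhs.max() <= 2.0**48 * (N0 / (abs(M0) * np.e)) ** 16)

# (2): s^6 xi^6 e^{2 s alpha} >= A_s e^{-M_s/sqrt(t)} on D x (0, T/2).
A_s = (2 * s * n0 / T) ** 6 * np.exp(-2 * abs(m0) * s / T)
M_s = 2 * abs(m0) * s / np.sqrt(T)
half = Tt < T / 2
lhs = (s * xi) ** 6 * np.exp(2 * s * alpha)
print("(2):", (lhs[half] >= (A_s * np.exp(-M_s / np.sqrt(Tt)))[half]).all())

# (3): xi^{-6} e^{-2 s alpha} <= (2 n0)^{-6} T^6 e^{8|m0|s/(sqrt(3) T)}
#      on D x (T/4, 3T/4).
mid = (Tt > T / 4) & (Tt < 3 * T / 4)
lhs = xi ** -6.0 * np.exp(-2 * s * alpha)
rhs = (2 * n0) ** -6.0 * T**6 * np.exp(8 * abs(m0) * s / (np.sqrt(3) * T))
print("(3):", (lhs[mid] <= rhs).all())
```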
## 3 The linear case
In this section, we will always assume that $F\equiv 0$ and prove the
existence of an insensitizing control of problem (1.1) such that (1.3) holds.
To start with, we introduce the adjoint problem of the linearized system of
problem (1.4):
$\begin{cases}-\frac{\partial\psi}{\partial
t}+\Delta^{2}\psi+a_{0}\psi-\nabla\cdot(B_{0}\psi)+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}\psi)}{\partial
x_{i}\partial
x_{j}}+\Delta(a_{1}\psi)=\chi_{\mathcal{O}}\varphi,\,\,\,\,\forall\,\,\,(x,t)\in
Q,\\\ \frac{\partial\varphi}{\partial
t}+\Delta^{2}\varphi+a_{0}\varphi+B_{0}\cdot\nabla\varphi+B:\nabla^{2}\varphi+a_{1}\Delta\varphi=0,\,\,\,\,\forall\,\,\,(x,t)\in
Q,\\\
\psi=\Delta\psi=0,\,\,\,\varphi=\Delta\varphi=0,\,\,\,\,\,\forall\,\,\,\,(x,t)\in\Sigma,\\\
\psi(x,T)=0,\,\,\,\varphi(x,0)=\varphi_{0}(x),\,\,\,\,\forall\,\,\,\,x\in
D.\end{cases}$ (3.1)
From the regularity of fourth order parabolic equations, we conclude that for
any $\varphi_{0}\in L^{2}(D),$ there exists a unique solution of problem (3.1)
satisfying
$\displaystyle\varphi,\psi\in X=L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))\cap
H^{1}(0,T;(H^{2}(D))^{*}).$
In what follows, we will establish an observability inequality for problem
(3.1), which will be used to obtain the existence of an insensitizing control
such that the solution of problem (1.1) verifies (1.3) in the linear case.
###### Theorem 3.1.
Assume that $\omega\cap\mathcal{O}\neq\emptyset.$ Then there exist two
positive constants $M$ and $H,$ such that for any $\varphi_{0}\in L^{2}(D),$
the corresponding solution $(\psi,\varphi)$ of problem (3.1) with initial data
$(0,\varphi_{0})$ satisfies
$\displaystyle\int_{Q}e^{-\frac{M}{\sqrt{t}}}|\psi|^{2}\,dxdt\leq
H\int_{Q_{\omega}}|\psi|^{2}\,dxdt.$ (3.2)
More precisely, $M=\frac{2|m_{0}|s}{\sqrt{T}}$ and
$\displaystyle
H=C2^{48}\left(\frac{N_{0}}{M_{0}e}\right)^{16}+C2^{42}\left(\frac{N_{0}}{M_{0}e}\right)^{16}e^{2\beta
T+\frac{8|m_{0}|s}{\sqrt{3}T}}n_{0}^{-6}T^{6}$
for any $s\geq\frac{4T}{|M_{0}|},$ where $C=C(D,\omega,\mathcal{O}).$
###### Proof.
Let $\omega_{1}$ and $\omega_{2}$ be two open subsets such that
$\omega_{1}\subset\subset\omega_{2}\subset\subset\omega\cap\mathcal{O}.$
Applying Lemma 2.2 to the second equation of problem (3.1) with
$g=-a_{0}\varphi-B_{0}\cdot\nabla\varphi-B:\nabla^{2}\varphi-
a_{1}\Delta\varphi$ and $\omega=\omega_{1},$ we conclude that there exists a
positive constant $\hat{\lambda},$ such that for any
$\lambda\geq\hat{\lambda},$ we can choose $s_{0}=s_{0}(\lambda)$ satisfying:
there exists a positive constant $C_{1}=C_{1}(D,\omega_{1}),$ such that
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|\varphi|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla\varphi|^{2}+s^{3}\lambda^{4}\xi^{3}|\Delta\varphi|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}\varphi|^{2}+s\lambda^{2}\xi|\nabla\Delta\varphi|^{2}\right)\,dxdt$
$\displaystyle\leq$ $\displaystyle
C_{1}\int_{Q}\left(|a_{0}|^{2}|\varphi|^{2}+|B_{0}|^{2}|\nabla\varphi|^{2}+|B|^{2}|\nabla^{2}\varphi|^{2}+|a_{1}|^{2}|\Delta\varphi|^{2}\right)e^{2s\alpha}\,dxdt$
$\displaystyle+C_{1}\int_{Q_{\omega_{1}}}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}e^{2s\alpha}\,dxdt$
for any $s\geq s_{0}(\lambda)(T+\sqrt{T}),$ which implies that
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|\varphi|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla\varphi|^{2}+s^{3}\lambda^{4}\xi^{3}|\Delta\varphi|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}\varphi|^{2}+s\lambda^{2}\xi|\nabla\Delta\varphi|^{2}\right)\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\int_{Q_{\omega_{1}}}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}e^{2s\alpha}\,dxdt$
(3.3)
for any
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(\lambda)(T+\sqrt{T}).$
Employing again Lemma 2.2 to the first equation of problem (3.1) with
$g=-a_{0}\psi+\nabla\cdot(B_{0}\psi)-\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}\psi)}{\partial
x_{i}\partial x_{j}}-\Delta(a_{1}\psi)$ and $\omega=\omega_{2},$ we deduce
that there exists a positive constant $\hat{\lambda},$ such that for any
$\lambda\geq\hat{\lambda},$ we can choose $s_{0}=s_{0}(\lambda)$ satisfying:
there exists a positive constant $C_{2}=C_{2}(D,\omega_{2}),$ such that
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|\psi|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla\psi|^{2}+s^{2}\lambda^{4}\xi^{2}|\Delta\psi|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}\psi|^{2}\right)\,dxdt$
$\displaystyle\leq
C_{2}\left(\int_{Q_{\omega_{2}}}s^{7}\lambda^{8}\xi^{7}|\psi|^{2}e^{2s\alpha}\,dxdt+\int_{Q}\left(|a_{0}|^{2}|\psi|^{2}+(s\lambda\xi)^{2}|B_{0}|^{2}|\psi|^{2}+\chi_{\mathcal{O}}|\varphi|^{2}\right)e^{2s\alpha}\,dxdt\right),$
for any
$\lambda\geq\hat{\lambda}(1+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(\lambda)(\sqrt{T}+T),$ which entails that
$\displaystyle\int_{Q}e^{2s\alpha}\left(s^{6}\lambda^{8}\xi^{6}|\psi|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla\psi|^{2}+s^{2}\lambda^{4}\xi^{2}|\Delta\psi|^{2}+s^{2}\lambda^{4}\xi^{2}|\nabla^{2}\psi|^{2}\right)\,dxdt$
$\displaystyle\leq
C\left(\int_{Q_{\omega_{2}}}s^{7}\lambda^{8}\xi^{7}|\psi|^{2}e^{2s\alpha}\,dxdt+\int_{0}^{T}\int_{\mathcal{O}}|\varphi|^{2}e^{2s\alpha}\,dxdt\right)$
(3.4)
for any
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(T+\sqrt{T}).$ In what follows, we will prove an
inequality which bounds $\varphi$ with respect to $\psi.$ Let
$\theta_{1}\in\mathcal{C}_{0}^{\infty}(\omega_{2})$ be a cut-off function such
that
$\displaystyle 0\leq\theta_{1}\leq
1,\,\,\,\textit{in}\,\,\,\omega_{2};\,\,\,\theta_{1}\equiv
1,\,\,\,\forall\,\,\,x\in\omega_{1}.$ (3.5)
Define
$\displaystyle u=s^{7}\lambda^{8}\xi^{7}e^{2s\alpha}$
for any
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(\lambda)(T+\sqrt{T}).$ Multiplying the first equation of
problem (3.1) by $u\varphi\theta_{1}$ and integrating by parts, we obtain
$\displaystyle\int_{0}^{T}\int_{\mathcal{O}}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}e^{2s\alpha}\theta_{1}\,dxdt=\int_{Q}\psi
u_{t}\theta_{1}\varphi+4\nabla\Delta\varphi\cdot\nabla(u\theta_{1})\psi+2\Delta\varphi\Delta(u\theta_{1})\psi\,dxdt$
$\displaystyle+\int_{Q}4\nabla^{2}\varphi:\nabla^{2}(u\theta_{1})\psi+4\nabla\varphi\cdot\nabla\Delta(u\theta_{1})\psi+\Delta^{2}(u\theta_{1})\varphi\psi+B_{0}\cdot\nabla(u\theta_{1})\varphi\psi\,dxdt$
$\displaystyle+\int_{Q}\sum_{i,j=1}^{n}\left(B_{ij}\frac{\partial\varphi}{\partial
x_{i}}\frac{\partial(u\theta_{1})}{\partial
x_{j}}\psi+B_{ij}\frac{\partial\varphi}{\partial
x_{j}}\frac{\partial(u\theta_{1})}{\partial
x_{i}}\psi+B_{ij}\frac{\partial^{2}(u\theta_{1})}{\partial x_{i}\partial
x_{j}}\psi\varphi\right)\,dxdt$
$\displaystyle+\int_{Q}\left(2a_{1}\nabla\varphi\cdot\nabla(u\theta_{1})\psi+a_{1}\varphi\psi\Delta(u\theta_{1})\right)\,dxdt=:\sum_{i=1}^{12}I_{i}.$
(3.6)
In what follows, let $C$ be a positive constant depending only on $D,$
$\omega_{1}$ and $\omega_{2},$ which may change from one line to another. We
estimate each $I_{i},$ $1\leq i\leq 12,$ in identity (3.6) by Hölder's
inequality and Young's inequality, along with inequality (3.3).
To begin with, we conclude from the properties of weight functions (2.3) that
$\displaystyle|u_{t}|\leq$ $\displaystyle
Cs^{10}\lambda^{8}\xi^{10}e^{2s\alpha},$
$\displaystyle|\nabla^{k}(u\theta_{1})|\leq$ $\displaystyle
C(s^{7+k}\lambda^{8+k}\xi^{7+k})e^{2s\alpha}\chi_{\omega_{2}},\,\,\,\forall\,\,k\in\mathbb{Z}_{+}$
for any $\lambda\geq\hat{\lambda}$ and any $s\geq s_{0}(1+\sqrt{T}+T).$ Thus,
we obtain
$\displaystyle|I_{1}|\leq$ $\displaystyle
C\int_{Q}s^{10}\lambda^{8}\xi^{10}|\varphi||\psi|\theta_{1}e^{2s\alpha}\,dxdt$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\int_{Q_{\omega_{2}}}s^{13}\lambda^{8}\xi^{13}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.7)
$\displaystyle|I_{2}|+|I_{3}|+|I_{4}|\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}\left(s^{8}\lambda^{9}\xi^{8}|\nabla\Delta\varphi||\psi|+s^{9}\lambda^{10}\xi^{9}|\Delta\varphi||\psi|\right)e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\left(\int_{Q}\left(s\lambda^{2}\xi|\nabla\Delta\varphi|^{2}+s^{3}\lambda^{4}\xi^{3}|\Delta\varphi|^{2}\right)e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega_{2}}}s^{15}\lambda^{16}\xi^{15}|\psi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\int_{Q_{\omega_{2}}}s^{15}\lambda^{16}\xi^{15}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.8)
$\displaystyle|I_{5}|+|I_{6}|\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}\left(s^{10}\lambda^{11}\xi^{10}|\nabla\varphi||\psi|+s^{11}\lambda^{12}\xi^{11}|\varphi||\psi|\right)e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\left(\int_{Q}\left(s^{4}\lambda^{6}\xi^{4}|\nabla\varphi|^{2}+s^{6}\lambda^{8}\xi^{6}|\varphi|^{2}\right)e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega_{2}}}s^{16}\lambda^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\int_{Q_{\omega_{2}}}s^{16}\lambda^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.9)
$\displaystyle|I_{7}|\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}|B_{0}|s^{8}\lambda^{9}\xi^{8}|\varphi||\psi|e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\|B_{0}\|_{L^{\infty}(Q)}\left(\int_{Q}s^{6}\lambda^{8}\xi^{6}|\varphi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega_{2}}}s^{10}\lambda^{10}\xi^{10}|\psi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\|B_{0}\|_{L^{\infty}(Q)}^{2}\int_{Q_{\omega_{2}}}s^{10}\lambda^{10}\xi^{10}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.10)
$\displaystyle|I_{8}|+|I_{9}|\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}|B|s^{8}\lambda^{9}\xi^{8}|\nabla\varphi||\psi|e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\|B\|_{L^{\infty}(Q)}\left(\int_{Q}s^{4}\lambda^{6}\xi^{4}|\nabla\varphi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega_{2}}}s^{12}\lambda^{12}\xi^{12}|\psi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\|B\|_{L^{\infty}(Q)}^{2}\int_{Q_{\omega_{2}}}s^{12}\lambda^{12}\xi^{12}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.11)
$\displaystyle|I_{10}|\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}|B|s^{9}\lambda^{10}\xi^{9}|\psi||\varphi|e^{2s\alpha}\,dxdt$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\|B\|_{L^{\infty}(Q)}^{2}\int_{Q_{\omega_{2}}}s^{12}\lambda^{12}\xi^{12}|\psi|^{2}e^{2s\alpha}\,dxdt,$
(3.12)
$\displaystyle|I_{11}|+|I_{12}|\leq
C\int_{Q_{\omega_{2}}}|a_{1}|(s^{8}\lambda^{9}\xi^{8}|\nabla\varphi||\psi|+s^{9}\lambda^{10}\xi^{9}|\varphi||\psi|)e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\|a_{1}\|_{L^{\infty}(Q)}\left(\int_{Q}\left(s^{6}\lambda^{8}\xi^{6}|\varphi|^{2}+s^{4}\lambda^{6}\xi^{4}|\nabla\varphi|^{2}\right)e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega_{2}}}s^{12}\lambda^{12}\xi^{12}|\psi|^{2}e^{2s\alpha}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{14}\int_{Q}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}\theta_{1}e^{2s\alpha}\,dxdt+C\|a_{1}\|_{L^{\infty}(Q)}^{2}\int_{Q_{\omega_{2}}}s^{12}\lambda^{12}\xi^{12}|\psi|^{2}e^{2s\alpha}\,dxdt.$
(3.13)
Therefore, we deduce from inequalities (3.7)-(3.13) that
$\displaystyle\int_{0}^{T}\int_{\mathcal{O}}s^{7}\lambda^{8}\xi^{7}|\varphi|^{2}e^{2s\alpha}\theta_{1}\,dxdt\leq
C\int_{Q_{\omega_{2}}}s^{16}\lambda^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt$
(3.14)
for any
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(1+\sqrt{T}+T).$
Thus, in view of inequalities (3.3) and (3.14), we obtain
$\displaystyle\int_{Q}e^{2s\alpha}s^{6}\xi^{6}|\varphi|^{2}\,dxdt\leq$
$\displaystyle
C_{1}\int_{Q_{\omega_{1}}}s^{7}\xi^{7}|\varphi|^{2}e^{2s\alpha}\,dxdt$
$\displaystyle\leq$
$\displaystyle
C_{1}\int_{0}^{T}\int_{\mathcal{O}}s^{7}\xi^{7}|\varphi|^{2}e^{2s\alpha}\theta_{1}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C\int_{Q_{\omega_{2}}}s^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt$ (3.15)
for any fixed
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(1+\sqrt{T}+T).$
Combining inequality (3.4) with inequality (3.15), we obtain
$\displaystyle\int_{Q}s^{6}\xi^{6}|\psi|^{2}e^{2s\alpha}\,dxdt\leq
C\int_{Q_{\omega_{2}}}s^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt$ (3.16)
for any fixed
$\lambda\geq\hat{\lambda}(1+\|a_{0}\|_{L^{\infty}(Q)}^{\frac{1}{4}}+\|B_{0}\|_{L^{\infty}(Q)}^{\frac{1}{3}}+\|B\|_{L^{\infty}(Q)}^{\frac{1}{2}}+\|a_{1}\|_{L^{\infty}(Q)}^{\frac{1}{2}})$
and any $s\geq s_{0}(1+\sqrt{T}+T).$
Finally, we combine energy estimates with inequalities (3.15)-(3.16) to
obtain the desired observability inequality. Applying classical energy
estimates for fourth order parabolic equations to system (3.1), we obtain, for
any $t_{1},$ $t_{2}\in[0,T]$ with $t_{1}<t_{2}$ and any $t\in[0,T],$
$\displaystyle\|\varphi(t_{2})\|_{L^{2}(D)}^{2}\leq
e^{2\beta(t_{2}-t_{1})}\|\varphi(t_{1})\|_{L^{2}(D)}^{2}$ (3.17)
and
$\displaystyle\|\psi(t)\|_{L^{2}(D)}^{2}\leq\int_{t}^{T}e^{2\beta(s-t)}\|\varphi(s)\|_{L^{2}(\mathcal{O})}^{2}\,ds,$
(3.18)
where
$\displaystyle\beta=2+\|a_{0}\|_{L^{\infty}(Q)}^{2}+\|B_{0}\|_{L^{\infty}(Q)}^{2}+\|B\|_{L^{\infty}(Q)}^{2}+\|a_{1}\|_{L^{\infty}(Q)}^{2}.$
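For completeness, estimate (3.17) follows from a standard energy argument; the
sketch below is indicative rather than sharp (the domain constant $C_{D}$
relating $\|\nabla^{2}\varphi\|$ and $\|\Delta\varphi\|$ on a smooth domain is
absorbed into $\beta$).

```latex
% Multiply the \varphi-equation in (3.1) by \varphi, integrate over D, and use
% \varphi = \Delta\varphi = 0 on \partial D:
\frac{1}{2}\frac{d}{dt}\|\varphi\|_{L^{2}(D)}^{2}+\|\Delta\varphi\|_{L^{2}(D)}^{2}
  =-\int_{D}\left(a_{0}\varphi^{2}+(B_{0}\cdot\nabla\varphi)\varphi
  +(B:\nabla^{2}\varphi)\varphi+a_{1}(\Delta\varphi)\varphi\right)dx .
% The interpolation \|\nabla\varphi\|^{2}\leq\|\varphi\|\,\|\Delta\varphi\|, the
% elliptic bound \|\nabla^{2}\varphi\|\leq C_{D}\|\Delta\varphi\|, and Young's
% inequality absorb all cross terms into \|\Delta\varphi\|_{L^{2}(D)}^{2}, leaving
\frac{d}{dt}\|\varphi(t)\|_{L^{2}(D)}^{2}\leq 2\beta\,\|\varphi(t)\|_{L^{2}(D)}^{2},
% and Gronwall's lemma between t_{1} and t_{2} gives (3.17); estimate (3.18) for
% \psi follows similarly, with the source \chi_{\mathcal{O}}\varphi handled by
% Duhamel's formula.
```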
In particular, we have
$\displaystyle\|\varphi(t+\frac{T}{4})\|_{L^{2}(D)}^{2}\leq e^{\frac{\beta
T}{2}}\|\varphi(t)\|_{L^{2}(D)}^{2}$
for any $t\in[\frac{T}{4},\frac{3T}{4}],$ which implies that
$\displaystyle\int_{\frac{T}{2}}^{T}\|\varphi(t)\|_{L^{2}(D)}^{2}\,dt\leq
e^{\frac{\beta
T}{2}}\int_{\frac{T}{4}}^{\frac{3T}{4}}\|\varphi(t)\|_{L^{2}(D)}^{2}\,dt.$ (3.19)
On the other hand, we deduce from inequality (3.18) that
$\displaystyle\int_{t}^{T}\|\psi(s)\|_{L^{2}(D)}^{2}\,ds\leq(T-t)e^{\beta
T}\int_{t}^{T}\|\varphi(s)\|_{L^{2}(\mathcal{O})}^{2}\,ds$
for any $t\in[\frac{T}{2},T],$ which entails that
$\displaystyle\int_{\frac{T}{2}}^{T}\|\psi(s)\|_{L^{2}(D)}^{2}\,ds\leq
e^{(1+\beta)T}\int_{\frac{T}{2}}^{T}\|\varphi(s)\|_{L^{2}(\mathcal{O})}^{2}\,ds.$
(3.20)
Recalling that $m_{0}=\min\limits_{x\in\overline{D}}\alpha_{0}(x)$ and
$M_{0}=\max\limits_{x\in\overline{D}}\alpha_{0}(x),$ we deduce from Lemma 2.3
that, for any $s\geq 0,$
$\displaystyle\int_{Q}s^{6}\xi^{6}|\psi|^{2}e^{2s\alpha}\,dxdt\geq
A_{s}\int_{0}^{\frac{T}{2}}\int_{D}e^{-\frac{M_{s}}{\sqrt{t}}}|\psi|^{2}\,dxdt$
(3.21)
with $A_{s}$ and $M_{s}$ given in Lemma 2.3.
In what follows, we bound the right-hand side of inequality (3.16) by using
Lemma 2.3, and obtain
$\displaystyle\int_{Q}s^{6}\xi^{6}|\psi|^{2}e^{2s\alpha}\,dxdt\leq$
$\displaystyle
C\int_{Q_{\omega_{2}}}s^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C2^{48}\left(\frac{N_{0}}{M_{0}e}\right)^{16}\int_{Q_{\omega_{2}}}|\psi|^{2}\,dxdt$
(3.22)
for any $s\geq\frac{4T}{|M_{0}|}.$ Thus, we obtain
$\displaystyle\int_{0}^{\frac{T}{2}}\int_{D}e^{-\frac{M_{s}}{\sqrt{t}}}|\psi|^{2}\,dxdt\leq
C\frac{2^{30}}{e^{16}}e^{\frac{2|m_{0}|s}{T}}\left(\frac{N_{0}^{16}}{M_{0}^{10}n_{0}^{6}}\right)\int_{Q_{\omega_{2}}}|\psi|^{2}\,dxdt.$
(3.23)
Combining inequalities (3.20) and (3.23) yields
$\displaystyle\int_{Q}e^{-\frac{M_{s}}{\sqrt{t}}}|\psi|^{2}\,dxdt\leq$
$\displaystyle\int_{0}^{\frac{T}{2}}\int_{D}e^{-\frac{M_{s}}{\sqrt{t}}}|\psi|^{2}\,dxdt+\int_{\frac{T}{2}}^{T}\int_{D}|\psi|^{2}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C2^{48}\left(\frac{N_{0}}{M_{0}e}\right)^{16}\int_{Q_{\omega_{2}}}|\psi|^{2}\,dxdt$
$\displaystyle+e^{(1+\beta)T}\int_{\frac{T}{2}}^{T}\|\varphi(s)\|_{L^{2}(\mathcal{O})}^{2}\,ds.$
(3.24)
In view of inequalities (3.15), (3.19) and Lemma 2.3, we obtain
$\displaystyle\int_{\frac{T}{2}}^{T}\|\varphi(s)\|_{L^{2}(D)}^{2}\,ds\leq$
$\displaystyle e^{\frac{\beta
T}{2}}\int_{\frac{T}{4}}^{\frac{3T}{4}}\|\varphi(t)\|_{L^{2}(D)}^{2}\,dt$
$\displaystyle\leq$ $\displaystyle e^{\frac{\beta
T}{2}+\frac{8|m_{0}|s}{\sqrt{3}T}}(2n_{0})^{-6}T^{6}\int_{\frac{T}{4}}^{\frac{3T}{4}}\int_{D}\xi^{6}e^{2s\alpha}|\varphi(t)|^{2}\,dxdt$
$\displaystyle\leq$ $\displaystyle Ce^{\frac{\beta
T}{2}+\frac{8|m_{0}|s}{\sqrt{3}T}}(2n_{0})^{-6}T^{6}\int_{Q_{\omega_{2}}}s^{16}\xi^{16}|\psi|^{2}e^{2s\alpha}\,dxdt$
$\displaystyle\leq$ $\displaystyle
C2^{42}\left(\frac{N_{0}}{M_{0}}\right)^{16}e^{\frac{\beta
T}{2}+\frac{8|m_{0}|s}{\sqrt{3}T}-16}n_{0}^{-6}T^{6}\int_{Q_{\omega_{2}}}|\psi|^{2}\,dxdt$
(3.25)
for any $s\geq\frac{4T}{|M_{0}|}.$
Combining inequality (3.24) with inequality (3.25), we obtain
$\displaystyle\int_{Q}e^{-\frac{M_{s}}{\sqrt{t}}}|\psi|^{2}\,dxdt\leq
H\int_{Q_{\omega}}|\psi|^{2}\,dxdt$ (3.26)
for any $s\geq\frac{4T}{|M_{0}|},$ where
$\displaystyle
H=C2^{48}\left(\frac{N_{0}}{M_{0}e}\right)^{16}+C2^{42}\left(\frac{N_{0}}{M_{0}e}\right)^{16}e^{2\beta
T+\frac{8|m_{0}|s}{\sqrt{3}T}}n_{0}^{-6}T^{6}.$
∎
In the following, we will prove the existence of an insensitizing control such
that the solution of problem (1.1) verifies condition (1.3), i.e., we will
prove the null-controllability of problem (1.4).
###### Theorem 3.2.
Assume that $\omega\cap\mathcal{O}\neq\emptyset,$ $y_{0}=0$ and the positive
constants $M$ and $H$ are defined as in Theorem 3.1. If $f\in L^{2}(Q)$
satisfies
$\displaystyle\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt<+\infty,$
then there exists a control $v\in L^{2}(Q_{\omega}),$ such that the solution
$(y,q)$ of problem (1.4) satisfies
$\displaystyle q(x,0)\equiv 0,\,\,\forall\,\,x\in D.$ (3.27)
Moreover, we also have
$\displaystyle\|v\|_{L^{2}(Q_{\omega})}\leq
2\sqrt{H}\left(\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt\right)^{\frac{1}{2}}.$
###### Proof.
In what follows, we prove the partial null controllability of problem (1.4)
by a method similar to that of [20]. To this purpose, for any $\epsilon>0,$ we
introduce the following functional defined on $L^{2}(D):$
$\displaystyle\mathcal{J}(\varphi_{0})=\frac{1}{2}\int_{Q_{\omega}}|\psi|^{2}\,dxdt+\epsilon\|\varphi_{0}\|_{L^{2}(D)}+\int_{Q}f\psi\,dxdt,$
where $(\psi,\varphi)$ is the solution of problem (3.1) with data
$\psi(T)=0$ and $\varphi(0)=\varphi_{0}\in L^{2}(D).$
In view of Theorem 3.1, we conclude that the functional
$\mathcal{J}(\varphi_{0})$ is continuous, strictly convex and coercive on
$L^{2}(D).$ Therefore, for any $\epsilon>0,$ there exists a unique minimum
point $\varphi_{0\epsilon}\in L^{2}(D)$ of $\mathcal{J},$ which implies that
$\displaystyle
0=\mathcal{J}(0)\geq\mathcal{J}(\varphi_{0\epsilon})=\frac{1}{2}\int_{Q_{\omega}}|\psi_{\epsilon}|^{2}\,dxdt+\epsilon\|\varphi_{0\epsilon}\|_{L^{2}(D)}+\int_{Q}f\psi_{\epsilon}\,dxdt,$
(3.28)
where $(\psi_{\epsilon},\varphi_{\epsilon})$ solves problem (3.1) with initial
data $(0,\varphi_{0\epsilon}).$
Therefore, we deduce from inequalities (3.2), (3.28) and Hölder’s inequality
that
$\displaystyle\frac{1}{2}\int_{Q_{\omega}}|\psi_{\epsilon}|^{2}\,dxdt+\epsilon\|\varphi_{0\epsilon}\|_{L^{2}(D)}\leq$
$\displaystyle-\int_{Q}f\psi_{\epsilon}\,dxdt$ $\displaystyle\leq$
$\displaystyle\left(\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q}e^{-\frac{M}{\sqrt{t}}}|\psi_{\epsilon}|^{2}\,dxdt\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\sqrt{H}\left(\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\omega}}|\psi_{\epsilon}|^{2}\,dxdt\right)^{\frac{1}{2}}$
for any $s\geq\frac{4T}{|M_{0}|}.$
Employing Young’s inequality, yields
$\displaystyle\int_{Q_{\omega}}|\psi_{\epsilon}|^{2}\,dxdt+4\epsilon\|\varphi_{0\epsilon}\|_{L^{2}(D)}\leq
4H\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt$
for any $s\geq\frac{4T}{|M_{0}|}.$
If $\varphi_{0\epsilon}\neq 0,$ then $\mathcal{J}$ satisfies the optimality
condition
$\displaystyle\int_{Q_{\omega}}\psi_{\epsilon}\psi\,dxdt+\int_{Q}f\psi\,dxdt+\frac{\epsilon}{\|\varphi_{0\epsilon}\|_{L^{2}(D)}}\int_{D}\varphi_{0\epsilon}\varphi_{0}\,dx=0$
(3.29)
for any $\varphi_{0}\in L^{2}(D),$ where $(\psi,\varphi)$ is the solution of
problem (3.1) with initial data $(0,\varphi_{0}).$
Now, let $v_{\epsilon}=\psi_{\epsilon}$ and let $(y_{\epsilon},q_{\epsilon})$
be the solution of problem (1.4), then we infer from problem (1.4) and problem
(3.1) that
$\displaystyle\int_{Q}\chi_{\mathcal{O}}\varphi
y_{\epsilon}\,dxdt=\int_{Q}(\chi_{\omega}v_{\epsilon}+f)\psi\,dxdt$ (3.30)
and
$\displaystyle\int_{D}q_{\epsilon}(x,0)\varphi_{0}\,dx=\int_{Q}\chi_{\mathcal{O}}\varphi
y_{\epsilon}\,dxdt.$ (3.31)
Thus, along with inequalities (3.29)-(3.31) and the fact that
$v_{\epsilon}=\psi_{\epsilon},$ we obtain
$\displaystyle\int_{D}q_{\epsilon}(x,0)\varphi_{0}\,dx=-\frac{\epsilon}{\|\varphi_{0\epsilon}\|_{L^{2}(D)}}\int_{D}\varphi_{0\epsilon}\varphi_{0}\,dx$
(3.32)
for any $\varphi_{0}\in L^{2}(D),$ which implies that
$\displaystyle\|q_{\epsilon}(0)\|_{L^{2}(D)}\leq\epsilon.$ (3.33)
If $\varphi_{0\epsilon}=0,$ then
$\displaystyle\lim_{t\rightarrow 0}\frac{\mathcal{J}(t\varphi_{0})}{t}\geq 0$
for any $\varphi_{0}\in L^{2}(D),$ i.e.,
$\displaystyle\epsilon\|\varphi_{0}\|_{L^{2}(D)}+\int_{Q}f\psi\,dxdt\geq 0,$
(3.34)
where $(\psi,\varphi)$ solves problem (3.1) with initial data
$(0,\varphi_{0}).$ Consequently, we can also conclude from inequalities
(3.30)-(3.31), (3.34) and the fact that $v_{\epsilon}=\psi_{\epsilon}=0$ that
$\displaystyle\epsilon\|\varphi_{0}\|_{L^{2}(D)}+\int_{D}q_{\epsilon}(x,0)\varphi_{0}\,dx\geq
0$
for any $\varphi_{0}\in L^{2}(D),$ which also implies that
$\displaystyle\|q_{\epsilon}(0)\|_{L^{2}(D)}\leq\epsilon.$
Therefore, the solution $(y_{\epsilon},q_{\epsilon})$ of problem (1.4)
associated with $v_{\epsilon}$ satisfies inequality
$\displaystyle\|q_{\epsilon}(0)\|_{L^{2}(D)}\leq\epsilon.$ (3.35)
Moreover, we obtain
$\displaystyle\int_{Q_{\omega}}|v_{\epsilon}|^{2}\,dxdt\leq
4H\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt$
for any $s\geq\frac{4T}{|M_{0}|},$ which entails that the controls
$\{v_{\epsilon}\}_{\epsilon>0}$ are uniformly bounded in
$L^{2}(Q_{\omega}).$ Without loss of generality, we can assume that
$v_{\epsilon}\rightharpoonup v$ weakly in $L^{2}(Q_{\omega})$ and
$\displaystyle(y_{\epsilon},q_{\epsilon})\rightharpoonup(y,q),\,\,\,\,\,\textit{weakly\,\,\,in}\,\,\,X\times
X,$
where $(y,q)$ is the solution of problem (1.4) with $v.$ In particular, we
have the weak convergence of $q_{\epsilon}(0)$ in $L^{2}(D).$ Thus, we
conclude from inequality (3.35) that $q(0)\equiv 0,$ i.e., $v$ is the desired
control. Moreover, by the weak lower semicontinuity of the $L^{2}$-norm,
$\displaystyle\|v\|_{L^{2}(Q_{\omega})}\leq\liminf_{\epsilon\rightarrow
0}\|v_{\epsilon}\|_{L^{2}(Q_{\omega})}\leq
2\sqrt{H}\left(\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt\right)^{\frac{1}{2}}.$ ∎
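The variational scheme above can be illustrated numerically on a drastically
simplified toy problem: steering a single one-dimensional fourth order heat
equation $y_{t}+y_{xxxx}=\chi_{\omega}v$ (with $y=y_{xx}=0$ on the boundary)
approximately to zero, using the quadratic-penalty variant of the duality
method, i.e., with the non-smooth term $\epsilon\|\varphi_{0}\|$ replaced by
$\frac{\epsilon}{2}\|\varphi_{0}\|^{2}$ so that the minimizer solves a linear
system. This is not the coupled cascade (1.4), and all discretization choices
below are ours.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy penalized-duality sketch: null control of y_t + y_xxxx = chi_omega * v
# on (0,1) with y = y_xx = 0 at the boundary (a single equation, not (1.4)).
N, M, T = 49, 200, 0.01
h, dt = 1.0 / (N + 1), T / M
x = np.linspace(h, 1 - h, N)

D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2          # Dirichlet Laplacian
A = D2 @ D2                                          # y = y_xx = 0: Delta^2 = D2^2
lu = lu_factor(np.eye(N) + dt * A)                   # backward-Euler propagator

chi = ((x > 0.3) & (x < 0.7)).astype(float)          # control region omega
y0 = np.sin(np.pi * x)                               # datum to steer towards 0

def backward(zT):
    """Adjoint trajectory z_m, m = 0,...,M-1, from z_M = zT (dual scheme)."""
    traj, z = np.empty((M, N)), zT.copy()
    for m in range(M - 1, -1, -1):
        z = lu_solve(lu, z)
        traj[m] = z
    return traj

# Exact discrete duality: with the control v_{m+1} = chi * z_m one gets
# y(T) = b + Lam @ zT, so the penalized minimizer solves (Lam + eps*I) zT = -b.
trajs = [backward(e) for e in np.eye(N)]
Zs = np.stack([(chi * tr).ravel() for tr in trajs])
Lam = dt * (Zs @ Zs.T)                               # discrete Gramian
b = np.array([tr[0] @ y0 for tr in trajs])

for eps in (1e-2, 1e-4, 1e-6):
    zT = np.linalg.solve(Lam + eps * np.eye(N), -b)
    z = backward(zT)
    y = y0.copy()
    for m in range(M):                               # forward sweep with control
        y = lu_solve(lu, y + dt * chi * z[m])
    print(f"eps={eps:.0e}: |y(T)|/|y(0)| ="
          f" {np.linalg.norm(y) / np.linalg.norm(y0):.2e}")
```

Running the sketch, the terminal norm decreases as $\epsilon$ decreases (here
$y(T)=-\epsilon z_{T}$ holds exactly, up to round-off), mirroring the bound
$\|q_{\epsilon}(0)\|_{L^{2}(D)}\leq\epsilon$ of (3.35).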
## 4 The semi-linear case
In this section, under the assumptions that $F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}};\mathbb{R})$
and $y_{0}=0,$ we will prove the existence of an insensitizing control of
problem
$\begin{cases}\frac{\partial y}{\partial
t}+\Delta^{2}y+a_{0}y+B_{0}\cdot\nabla y+B:\nabla^{2}y+a_{1}\Delta
y=F(y,\nabla y,\nabla^{2}y)+v\chi_{\omega}+f,\,\,\,\,\forall\,\,\,(x,t)\in
Q,\\\ -\frac{\partial q}{\partial
t}+\Delta^{2}q+a_{0}q-\nabla\cdot(B_{0}q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}q)}{\partial
x_{i}\partial x_{j}}+\Delta(a_{1}q)=F_{y}(y,\nabla y,\nabla^{2}y)q\\\
-\nabla\cdot(\nabla_{p}F(y,\nabla
y,\nabla^{2}y)q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(F_{r_{ij}}(y,\nabla
y,\nabla^{2}y)q)}{\partial x_{i}\partial
x_{j}}+\chi_{\mathcal{O}}y,\,\,\,\,\forall\,\,\,(x,t)\in Q,\\\ y=\Delta
y=0,\,\,\,q=\Delta q=0,\,\,\forall\,\,\,\,(x,t)\in\Sigma,\\\
y(x,0)=0,\,\,q(x,T)=0,\,\,\forall\,\,\,\,x\in D\end{cases}$ (4.1)
such that
$\displaystyle q(x,0)\equiv 0,\,\,\,\forall x\in D.$ (4.2)
From the regularity of fourth order parabolic equations, we conclude that
there exists a unique solution of problem (4.1) satisfying
$\displaystyle y\in Y=L^{2}(0,T;H_{0}^{1}(D)\cap H^{4}(D))\cap
H^{1}(0,T;L^{2}(D)),$ $\displaystyle q\in X=L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D))\cap H^{1}(0,T;(H^{2}(D))^{*}).$
In what follows, we establish the existence of an insensitizing control such
that the solution of problem (4.1) verifies (4.2) in the semilinear case.
###### Theorem 4.1.
Assume that $\omega\cap\mathcal{O}\neq\emptyset,$ $y_{0}=0,$ $F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}};\mathbb{R}),$
and $f$ satisfies the assumption of Theorem 3.2. Then there exists a control
$v\in L^{2}(Q_{\omega}),$ such that the solution $(y,q)$ of problem (4.1)
satisfies (4.2).
###### Proof.
Let $z\in L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))$ be given, and consider the
following problem
$\begin{cases}\frac{\partial y}{\partial
t}+\Delta^{2}y+a_{0}y+B_{0}\cdot\nabla y+B:\nabla^{2}y+a_{1}\Delta
y=G_{1}(z,\nabla z,\nabla^{2}z)y+G_{2}(z,\nabla z,\nabla^{2}z)\cdot\nabla y\\\
+G_{3}(z,\nabla
z,\nabla^{2}z):\nabla^{2}y+F(0,0,0)+v\chi_{\omega}+f,\,\,\,\,(x,t)\in Q,\\\
-\frac{\partial q}{\partial
t}+\Delta^{2}q+a_{0}q-\nabla\cdot(B_{0}q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}q)}{\partial
x_{i}\partial x_{j}}+\Delta(a_{1}q)=F_{y}(z,\nabla z,\nabla^{2}z)q\\\
-\nabla\cdot(\nabla_{p}F(z,\nabla
z,\nabla^{2}z)q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(F_{r_{ij}}(z,\nabla
z,\nabla^{2}z)q)}{\partial x_{i}\partial
x_{j}}+y\chi_{\mathcal{O}},\,\,\,\,(x,t)\in Q,\\\ y=\Delta y=0,\,\,\,q=\Delta
q=0,\,\,(x,t)\in\Sigma,\\\ y(x,0)=0,\,\,q(x,T)=0,\,\,\,x\in D,\end{cases}$
(4.3)
where
$\displaystyle G_{1}(w,\nabla w,\nabla^{2}w)=\int_{0}^{1}\frac{\partial
F}{\partial y}(\tau w,\tau\nabla w,\tau\nabla^{2}w)\,d\tau,$ $\displaystyle
G_{2}(w,\nabla w,\nabla^{2}w)=\int_{0}^{1}\nabla_{p}F(\tau w,\tau\nabla
w,\tau\nabla^{2}w)\,d\tau,$ $\displaystyle G_{3}^{ij}(w,\nabla
w,\nabla^{2}w)=\int_{0}^{1}\frac{\partial F}{\partial r_{ij}}(\tau
w,\tau\nabla w,\tau\nabla^{2}w)\,d\tau.$
Since $F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}},\mathbb{R}),$
there exists a positive constant $M,$ such that
$\displaystyle|G_{1}(u,p,r)|+|G_{2}(u,p,r)|+|G_{3}(u,p,r)|\leq
M,\,\,\,\,\,\forall\,\,\,\,(u,p,r)\in\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}}$
and
$\displaystyle|F_{y}(u,p,r)|+|\nabla_{p}F(u,p,r)|+\sum_{i,j=1}^{n}\left|\frac{\partial
F}{\partial r_{ij}}(u,p,r)\right|\leq
M,\,\,\,\,\,\forall\,\,\,\,(u,p,r)\in\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}}.$
From Theorem 3.2, we conclude that there exists at least one control
$v^{z}\in L^{2}(Q_{\omega}),$ such that the solution $(y^{z},q^{z})$ of
problem (4.3) satisfies
$\displaystyle q^{z}(x,0)\equiv 0,\,\,\forall\,\,x\in D.$ (4.4)
Moreover, we also have
$\displaystyle\|v^{z}\|_{L^{2}(Q_{\omega})}\leq
2\sqrt{H}\left(\int_{Q}e^{\frac{M}{\sqrt{t}}}|f|^{2}\,dxdt\right)^{\frac{1}{2}}.$
(4.5)
In what follows, we denote by $v^{z}$ the control with the minimal
$L^{2}(Q_{\omega})$-norm in the set of the controls such that the solution
$(y^{z},q^{z})$ of problem (4.3) corresponding to $z$ satisfies (4.4).
From the regularity theory of parabolic equations, we conclude that there
exists a unique weak solution $(y^{z},q^{z})\in Y\times X.$ Moreover, since
$F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}};\mathbb{R}),$
there exists a positive constant $C$ independent of $z,$ such that
$\displaystyle\|y^{z}\|_{Y}+\|q^{z}\|_{X}\leq$ $\displaystyle
C(\|F(0,0,0)+v^{z}\chi_{\omega}+f\|_{L^{2}(Q)})$ $\displaystyle\leq$
$\displaystyle C(1+\|v^{z}\|_{L^{2}(Q_{\omega})}+\|f\|_{L^{2}(Q)}).$ (4.6)
Thus, along with inequalities (4.5)-(4.6), we deduce that there exists a
positive constant $\mathcal{L}_{1}$ independent of $z,$ such that
$\displaystyle\|y^{z}\|_{Y}+\|q^{z}\|_{X}\leq\mathcal{L}_{1}\left(1+\|e^{\frac{M}{2\sqrt{t}}}f\|_{L^{2}(Q)}\right).$
(4.7)
Define $\Lambda:L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))\rightarrow
L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))$ by
$\displaystyle\Lambda(z)=y^{z},$
then the mapping $\Lambda$ is well-defined. In what follows, we will prove
the existence of a fixed point of the operator $\Lambda$ by the Leray-Schauder
fixed point theorem. To this purpose, we first prove that $\Lambda$ is
continuous, i.e., if $z_{k}\rightarrow z$ in $L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D)),$ then $\Lambda(z_{k})\rightarrow\Lambda(z).$
Denote by $y^{k}=\Lambda(z_{k}),$ where $(y^{k},q^{k})$ is the solution of
problem
$\begin{cases}\frac{\partial y^{k}}{\partial
t}+\Delta^{2}y^{k}+a_{0}y^{k}+B_{0}\cdot\nabla
y^{k}+B:\nabla^{2}y^{k}+a_{1}\Delta y^{k}=G_{1}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})y^{k}\\\ +G_{2}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\cdot\nabla y^{k}+G_{3}(z_{k},\nabla
z_{k},\nabla^{2}z_{k}):\nabla^{2}y^{k}+F(0,0,0)+v^{z_{k}}\chi_{\omega}+f,\,\,\,\,(x,t)\in
Q,\\\ -\frac{\partial q^{k}}{\partial
t}+\Delta^{2}q^{k}+a_{0}q^{k}-\nabla\cdot(B_{0}q^{k})+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}q^{k})}{\partial
x_{i}\partial x_{j}}+\Delta(a_{1}q^{k})=F_{y}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})q^{k}\\\ -\nabla\cdot(\nabla_{p}F(z_{k},\nabla
z_{k},\nabla^{2}z_{k})q^{k})+\sum_{i,j=1}^{n}\frac{\partial^{2}(F_{r_{ij}}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})q^{k})}{\partial x_{i}\partial
x_{j}}+y^{k}\chi_{\mathcal{O}},\,\,\,\,(x,t)\in Q,\\\ y^{k}=\Delta
y^{k}=0,\,\,\,q^{k}=\Delta q^{k}=0,\,\,(x,t)\in\Sigma,\\\
y^{k}(x,0)=0,\,\,q^{k}(x,T)=0,\,\,\,x\in D.\end{cases}$ (4.8)
It follows from inequality (4.7) and the fact that $z_{k}\rightarrow z$ in
$L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))$ that
$\displaystyle\{(y^{k},q^{k})\}_{k=1}^{\infty}\,\,\,\textit{is\,\,\,uniformly\,\,\,bounded\,\,in}\,\,Y\times
X,$
$\displaystyle\{v^{z_{k}}\}_{k=1}^{\infty}\,\,\,\textit{is\,\,\,uniformly\,\,\,bounded\,\,in}\,\,L^{2}(Q_{\omega}),$
which entails that there exist subsequences of $\{y^{k}\}_{k=1}^{\infty},$
$\{q^{k}\}_{k=1}^{\infty},$ $\{v^{z_{k}}\}_{k=1}^{\infty}$ (still denoted by
themselves) and $y\in Y,$ $q\in X,$ $v\in L^{2}(Q_{\omega}),$ such that
$\displaystyle y^{k}\rightharpoonup
y\,\,\,\textit{in}\,\,Y\,\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle y^{k}\rightarrow
y\,\,\,\textit{in}\,\,L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D))\,\,\,\textit{as}\,\,k\rightarrow+\infty,$ $\displaystyle
q^{k}\rightharpoonup
q\,\,\,\textit{in}\,\,X\,\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle v^{z_{k}}\rightharpoonup
v\,\,\,\textit{in}\,\,L^{2}(Q_{\omega})\,\,\,\textit{as}\,\,k\rightarrow+\infty.$
Since $F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}},\mathbb{R}),$
we conclude that there exist subsequences of $\{G_{1}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\}_{k=1}^{\infty},$ $\{G_{2}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\}_{k=1}^{\infty},$ $\{G_{3}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\}_{k=1}^{\infty},$ $\{F_{y}(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\}_{k=1}^{\infty},$ $\{\nabla_{p}F(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\}_{k=1}^{\infty},$ $\{(F_{r_{ij}}(z_{k},\nabla
z_{k},\nabla^{2}z_{k}))_{1\leq i,j\leq n}\}_{k=1}^{\infty}$ (still denoted by
themselves), such that
$\displaystyle G_{1}(z_{k},\nabla z_{k},\nabla^{2}z_{k})\rightarrow
G_{1}(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle G_{2}(z_{k},\nabla z_{k},\nabla^{2}z_{k})\rightarrow
G_{2}(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle G_{3}(z_{k},\nabla z_{k},\nabla^{2}z_{k})\rightarrow
G_{3}(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle F_{y}(z_{k},\nabla z_{k},\nabla^{2}z_{k})\rightarrow
F_{y}(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle\nabla_{p}F(z_{k},\nabla
z_{k},\nabla^{2}z_{k})\rightarrow\nabla_{p}F(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty,$
$\displaystyle F_{r_{ij}}(z_{k},\nabla z_{k},\nabla^{2}z_{k})\rightarrow
F_{r_{ij}}(z,\nabla
z,\nabla^{2}z)\,\,\,\textit{weakly\,\,star\,\,in}\,\,L^{\infty}(Q),\,\,\textit{as}\,\,k\rightarrow+\infty.$
Letting $k\rightarrow+\infty$ in problem (4.8), we obtain
$\begin{cases}\frac{\partial y}{\partial
t}+\Delta^{2}y+a_{0}y+B_{0}\cdot\nabla y+B:\nabla^{2}y+a_{1}\Delta
y=G_{1}(z,\nabla z,\nabla^{2}z)y+G_{2}(z,\nabla z,\nabla^{2}z)\cdot\nabla y\\\
+G_{3}(z,\nabla
z,\nabla^{2}z):\nabla^{2}y+F(0,0,0)+v\chi_{\omega}+f,\,\,\,\,(x,t)\in Q,\\\
-\frac{\partial q}{\partial
t}+\Delta^{2}q+a_{0}q-\nabla\cdot(B_{0}q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(B_{ij}q)}{\partial
x_{i}\partial x_{j}}+\Delta(a_{1}q)=F_{y}(z,\nabla z,\nabla^{2}z)q\\\
-\nabla\cdot(\nabla_{p}F(z,\nabla
z,\nabla^{2}z)q)+\sum_{i,j=1}^{n}\frac{\partial^{2}(F_{r_{ij}}(z,\nabla
z,\nabla^{2}z)q)}{\partial x_{i}\partial
x_{j}}+y\chi_{\mathcal{O}},\,\,\,\,(x,t)\in Q,\\\ y=\Delta y=0,\,\,\,q=\Delta
q=0,\,\,(x,t)\in\Sigma,\\\ y(x,0)=0,\,\,q(x,T)=0,\,\,\,x\in
D\end{cases}$ (4.9)
and
$\displaystyle q(x,0)\equiv 0,\,\,\forall\,\,x\in D,$ (4.10)
which entails that $y=\Lambda(z).$ Thus, we have proved that
$\Lambda(z_{k})\rightarrow\Lambda(z)$ in $L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D)),$ i.e., the mapping $\Lambda:L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D))\rightarrow L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))$ is continuous.
Thanks to the compact embedding $Y\subset L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D))$ and inequality (4.7), we conclude that the mapping
$\Lambda:L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))\rightarrow
L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))$ is compact. Denote by
$\displaystyle\mathcal{R}_{1}=\mathcal{L}_{1}\left(1+\|e^{\frac{M}{2\sqrt{t}}}f\|_{L^{2}(Q)}\right)$
and
$\displaystyle B=\{u\in L^{2}(0,T;H_{0}^{1}(D)\cap
H^{2}(D)):\|u\|_{L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D))}\leq\mathcal{R}_{1}\},$
then $\Lambda:B\rightarrow B.$ Thus, we can employ the Leray-Schauder fixed
point theorem to conclude that the operator $\Lambda$ possesses at least one
fixed point $y\in L^{2}(0,T;H_{0}^{1}(D)\cap H^{2}(D)).$ That is, there exists
at least one control $v\in L^{2}(Q_{\omega}),$ such that the corresponding
solution of problem (4.1) satisfies $q(x,0)\equiv 0$ for any $x\in D.$ ∎
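The frozen-coefficient construction behind $\Lambda$ can likewise be
illustrated on a toy scalar problem: for $y_{t}+y_{xxxx}=F(y)+f$ with
$F(y)=\sin(y)$ (so $F\in W^{1,\infty}$ and $F(0)=0$), the map $z\mapsto y^{z}$
solves a linear equation with the coefficient
$G_{1}(z)=\int_{0}^{1}\cos(\tau z)\,d\tau$ frozen, and a simple Picard
iteration on $\Lambda$ already converges. The control and adjoint components
are omitted, and all numerical choices below are ours.

```python
import numpy as np

# Toy frozen-coefficient map: y_t + y_xxxx = sin(y) + f on (0,1), y = y_xx = 0
# at the boundary, y(0) = 0; sin(y) - sin(0) = G1(y)*y with
# G1(w) = int_0^1 cos(tau*w) dtau = sinc(w/pi), so Lambda(z) solves the
# LINEAR problem y_t + y_xxxx = G1(z)*y + f, and we iterate z <- Lambda(z).
N, M, T = 49, 100, 0.05
h, dt = 1.0 / (N + 1), T / M
x = np.linspace(h, 1 - h, N)
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
A = D2 @ D2
f = 50.0 * np.sin(2 * np.pi * x)                 # fixed source term
G1 = lambda w: np.sinc(w / np.pi)                # = int_0^1 cos(tau*w) dtau

def Lam(z):
    """One application of the frozen-coefficient solution map z -> y^z."""
    y = np.zeros((M + 1, N))
    for m in range(M):                           # implicit Euler in time
        B = np.eye(N) + dt * A - dt * np.diag(G1(z[m + 1]))
        y[m + 1] = np.linalg.solve(B, y[m] + dt * f)
    return y

z = np.zeros((M + 1, N))
for k in range(30):                              # Picard iteration on Lambda
    y = Lam(z)
    err = np.abs(y - z).max()
    z = y
    if err < 1e-10:
        break
print(f"fixed point after {k + 1} iterations, residual {err:.1e}")
```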
###### Corollary 4.2.
Assume that $\omega\cap\mathcal{O}\neq\emptyset,$ $y_{0}=0,$ $F\in
W^{1,\infty}(\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n^{2}};\mathbb{R}),$
and $f$ satisfies the assumption of Theorem 3.2. Then there exists a control
$v\in L^{2}(Q_{\omega})$ insensitizing the functional $\Phi$ defined by (1.2)
for problem (1.1).
###### Remark 4.3.
Under the same assumptions as in Corollary 4.2, if problem (1.1) is subject
to the homogeneous Dirichlet boundary conditions $y|_{\Sigma}=\frac{\partial
y}{\partial\vec{n}}|_{\Sigma}=0,$ then the same conclusion as in Corollary 4.2
remains true.
## Acknowledgement
This work was supported by the National Science Foundation of China Grant
(11801427, 11871389) and the Fundamental Research Funds for the Central
Universities (xzy012022008, JB210714).
## References
* [1] F. Alabau-Boussouira. Insensitizing exact controls for the scalar wave equation and exact controllability of 2-coupled cascade systems of PDE’s by a single control. Math. Control Signals Syst., 26:1–46, 2014.
* [2] O. Bodart and C. Fabre. Controls insensitizing the norm of the solution of a semilinear heat equation. J. Math. Anal. Appl., 195(3):658–683, 1995.
* [3] O. Bodart, M. González-Burgos, and R. Pérez-García. Existence of insensitizing controls for a semilinear heat equation with a superlinear nonlinearity. Commun. Partial Differ. Equ., 29(7-9):1017–1050, 2004.
* [4] O. Bodart, M. González-Burgos, and R. Pérez-García. Insensitizing controls for a heat equation with a nonlinear term involving the state and the gradient. Nonlinear Anal., 5(5-6):687–711, 2004.
* [5] O. Bodart, M. González-Burgos, and R. Pérez-García. A local result on insensitizing controls for a semilinear heat equation with nonlinear boundary Fourier conditions. SIAM J. Control Optim., 43(3):955–969, 2004.
* [6] B. M. R. Calsavara, N. Carreño, and E. Cerpa. Insensitizing controls for a phase field system. Nonlinear Anal., 143:120–137, 2016.
* [7] N. Carreño. Insensitizing controls for the Boussinesq system with no control on the temperature equation. Adv. Differ. Equ., 22(3-4):235–258, 2017.
* [8] N. Carreño and E. Cerpa. Local controllability of the stabilized Kuramoto-Sivashinsky system by a single control acting on the heat equation. J. Math. Pures Appl., 106(4):670–694, 2016.
* [9] N. Carreño, E. Cerpa, and A. Mercado. Boundary controllability of a cascade system coupling fourth- and second-order parabolic equations. Systems Control Lett., 133:104542, 2019.
* [10] N. Carreño, S. Guerrero, and M. Gueye. Insensitizing controls with two vanishing components for the three-dimensional Boussinesq system. ESAIM Control Optim. Calc. Var., 21(1):73–100, 2015.
* [11] N. Carreño and M. Gueye. Insensitizing controls with one vanishing component for the Navier-Stokes system. J. Math. Pures Appl., 101:27–53, 2014.
* [12] N. Carreño and P. Guzmán. On the cost of null controllability of a fourth-order parabolic equation. J. Differential Equations, 261(11):6485–6520, 2016.
* [13] E. Cerpa, R. Lecaros, T. N. T. Nguyen, and A. Pérez. Carleman estimates and controllability for a semi-discrete fourth-order parabolic equation. J. Math. Pures Appl., 164:93–130, 2022.
* [14] E. Cerpa and A. Mercado. Local exact controllability to the trajectories of the 1-D Kuramoto-Sivashinsky equation. J. Differential Equations, 250(4):2024–2044, 2011.
* [15] Y. Cui, X. Y. Fu, and J. X. Tian. A unified weighted inequality for fourth-order partial differential operators and applications. arXiv preprint, 2022.
* [16] L. de Teresa. Insensitizing controls for a semilinear heat equation. Commun. Partial Differ. Equations, 25(1-2):39–72, 2000.
* [17] L. de Teresa and O. Kavian. Unique continuation principle for systems of parabolic equations. ESAIM Control Optim. Calc. Var., 16(2):247–274, 2010.
* [18] L. de Teresa and E. Zuazua. Identification of the class of initial data for the insensitizing control of the heat equation. Commun. Pure Appl. Anal., 8(1):457–471, 2009.
* [19] J. I. Díaz and A.M. Ramos. On the approximate controllability for higher order parabolic nonlinear equations of Cahn-Hilliard type. in: Control and Estimation of Distributed Parameter Systems, Vorau, 1996, in: Int. Ser. Numer. Math., vol. 126, Birkhäuser, Basel, pp. 111-127, 1998.
* [20] C. Fabre, J. P. Puel, and E. Zuazua. Approximate controllability of the semilinear heat equation. Proc. Roy. Soc. Edinburgh Sect. A, 125:31–61, 1995.
* [21] A. V. Fursikov and O. Y. Imanuvilov. Controllability of Evolution Equations, Lecture Notes Series, vol. 34. Seoul National University, Research Institute of Mathematics, Global Analysis Research Center, Seoul, 1996.
* [22] P. Gao. Insensitizing controls for the Cahn-Hilliard type equation. Electron. J. Qual. Theory Differ. Equ., 35:22, 2014.
* [23] S. Guerrero. Controllability of systems of Stokes equations with one control force: existence of insensitizing controls. Ann. Inst. H. Poincaré Anal. Non Linéaire, 24(6):1029–1054, 2007.
* [24] S. Guerrero. Null controllability of some systems of two parabolic equations with one control force. SIAM J. Control Optim., 46(2):379–394, 2007.
* [25] S. Guerrero and K. Kassab. Carleman estimate and null controllability of a fourth order parabolic equation in dimension $n\geq 2$. J. Math. Pures Appl., 121:135–161, 2019.
* [26] M. Gueye. Insensitizing controls for the Navier-Stokes equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 30(5):825–844, 2013.
* [27] V. Hernández-Santamaría and L. Peralta. Controllability results for stochastic coupled systems of fourth- and second-order parabolic equations. J. Evol. Equ., 22:Paper No. 23, 2022.
* [28] K. Kassab. Null controllability of semi-linear fourth order parabolic equations. J. Math. Pures Appl., 136:279–312, 2020.
* [29] G. Lebeau and L. Robbiano. Contrôle exact de l’équation de la chaleur. Commun. Partial Differ. Equ., 20(1-2):335–356, 1995.
* [30] J. L. Lions. Quelques notions dans l’analyse et le contrôle de systèmes à données incomplètes (Some notions in the analysis and control of systems with incomplete data), in: Proceedings of the XIth Congress on Differential Equations and Applications/First Congress on Applied Mathematics, Málaga, 1989, pp. 43-54. Univ. Málaga, 1990.
* [31] J. L. Lions. Sentinelles pour les systèmes distribués à données incomplètes (Sentinelles for Distributed Systems with Incomplete Data). Rech. Math. Appl., vol. 21, Masson, Paris, 1992.
* [32] X. Liu. Insensitizing controls for a class of quasilinear parabolic equations. J. Differential Equation, 253:1287–1316, 2012.
* [33] M. López-García and A. Mercado. Uniform null controllability of a fourth-order parabolic equation with a transport term. J. Math. Anal. Appl., 498:124979, 2021.
* [34] Q. Lu and Y. Wang. Null controllability for fourth order stochastic parabolic equations. SIAM J. Control Optim., 60(3):1563–1590, 2022.
* [35] S. Micu, J. H. Ortega, and L. de Teresa. An example of $\epsilon$-insensitizing controls for the heat equation with no intersecting observation and control regions. Appl. Math. Lett., 17(8):927–932, 2004.
* [36] M. C. Santos and T. Y. Tanaka. An insensitizing control problem for the Ginzburg-Landau equation. J. Optim. Theory Appl., 183:440–470, 2019.
* [37] L. Tebou. Some results on the controllability of coupled semilinear wave equations: The desensitizing control case. SIAM J. Control Optim., 49(3):1221–1238, 2011.
* [38] Y. J. Xu, X. G. Zhang, and D. Y. Liu. Insensitizing controls for a nonlinear parabolic equation with a nonlinear term involving the state and the gradient in unbounded domains. Nonlinear Anal., 71:5885–5894, 2009.
* [39] B. You and F. Li. Global Carleman estimates for the fourth order parabolic equations and application to null controllability. arXiv preprint, arXiv:4576347, 2022.
* [40] H. Yu. Null controllability for a fourth order parabolic equation. Sci. China Ser. F, 52(11):2127–2132, 2009.
* [41] M. M. Zhang, J. X. Yin, and H. Gao. Insensitizing controls for the parabolic equations with dynamic boundary conditions. J. Math. Anal. Appl., 475:861–873, 2019.
# Channel-Adaptive Wireless Image Transmission with OFDM
Haotian Wu, Yulin Shao, Krystian Mikolajczyk, and Deniz Gündüz The authors are
with the Department of Electrical and Electronic Engineering, Imperial College
London, London SW7 2AZ, U.K. (e-mail: haotian.wu17@imperial.ac.uk). D. Gündüz
is also with the Department of Engineering ‘Enzo Ferrari’, University of
Modena and Reggio Emilia (UNIMORE), Italy. This work was supported by the
European Research Council (ERC) under Grant 677854, and by the UK EPSRC
(EP/W035960/1 and EP/S032398/1) under the CHIST-ERA program (CHIST-
ERA-20-SICT-004).
###### Abstract
We present a learning-based channel-adaptive joint source and channel coding
(CA-JSCC) scheme for wireless image transmission over multipath fading
channels. The proposed method is an end-to-end autoencoder architecture with a
dual-attention mechanism employing orthogonal frequency division multiplexing
(OFDM) transmission. Unlike previous works, our approach is adaptive to
channel-gain and noise-power variations by exploiting the estimated channel
state information (CSI). Specifically, with the proposed dual-attention
mechanism, our model can learn to map the features and allocate transmission-
power resources judiciously to the available subchannels based on the
estimated CSI. Extensive numerical experiments verify that CA-JSCC achieves
state-of-the-art performance among existing JSCC schemes. In addition, CA-JSCC
is robust to varying channel conditions and can better exploit the limited
channel resources by transmitting critical features over better subchannels.
###### Index Terms:
Joint source channel coding, deep neural networks, OFDM, image communications.
## I Introduction
Shannon’s separation theorem states that it is optimal to design source and
channel codes separately for an infinite block-length [1]. However, an
increasing number of wireless applications, such as Internet-of-things and
edge intelligence [2, 3, 4], require the efficient transmission of large
volumes of data under strict delay constraints, resulting in an increasing
interest in joint source channel coding (JSCC) in recent years. Recently,
inspired by the success of deep learning techniques, researchers have started
to exploit deep neural networks to design novel and competitive JSCC schemes
to transmit high information content signals, such as images or videos, over
wireless channels [5, 6, 7, 8, 9, 10, 11, 12, 13]. This approach has been
pioneered in [5], where an autoencoder-based JSCC architecture is proposed for
wireless image transmission, which outperformed conventional compression and
channel coding schemes over additive white Gaussian noise (AWGN) and Rayleigh
fading channels. This was later extended to feedback channels in [6] and to
bandwidth-adaptive transmission in [7]. In [8], authors consider JSCC over
orthogonal frequency division multiplexing (OFDM) channels. An alternative
generative architecture is considered in [9, 10, 11].
Figure 1: Our proposed channel-adaptive CA-JSCC scheme.
Figure 2: Basic blocks of our dual-attention encoder and decoder
architectures.
However, adaptability to various channel conditions is still a challenge for
deep-learning-based JSCC. Methods in [5, 6, 7] are either trained for a
specific signal-to-noise ratio (SNR), or for a range of channel SNRs. The
former requires significant storage memory to store different network
parameters for different channel conditions, while the latter sacrifices
performance and does not exploit the channel state information (CSI). In
conventional digital communication systems, CSI at the transmitter can allow
power allocation to boost the communication rate. In [12, 8, 13], CSI is used
in a similar manner in the context of learning-aided design, mainly to adjust
the feature weights according to the CSI; however, in the case of OFDM, CSI
will be instrumental not only for power control, but also to decide the
mapping of the features to different subcarriers according to their relative
qualities. For example, more critical features of the input image can be
mapped to more reliable subcarriers.
We introduce a channel-adaptive JSCC (CA-JSCC) scheme, which employs a dual-
attention mechanism to adjust its features in the multi-scale intermediate
layers according to the estimated CSI at the encoder and decoder. Our dual-
attention mechanism employs both channel-wise attention and spatial attention,
and jointly learns to map the features to the subcarriers and to allocate
power judiciously. Our method achieves state-of-the-art performance and can
adapt to different channel conditions.
Our main contributions can be summarized as:
* •
To the best of our knowledge, channel adaptability for JSCC with OFDM has not
been studied before. All previous methods require the training and testing
SNRs to match without fully exploiting the CSI.
* •
We present a CA-JSCC scheme with state-of-the-art performance in various SNR
and bandwidth scenarios. We propose a dual-attention mechanism to
simultaneously exploit the estimated CSI to aid the allocation of features and
power resources to adapt to time-varying channel conditions.
## II System Model
We consider OFDM-based JSCC of images over a multipath fading channel with
$L_{t}$ paths. We transmit each input image using $N_{s}$ OFDM symbols
accompanied with $N_{p}$ pilots for channel estimation of $L_{f}$ OFDM
subcarriers. As shown in Fig. 1, an encoding function $E_{\bm{\theta}}$ first
maps the input image $\bm{x}\in\mathbb{R}^{c\times h\times w}$ into a complex
matrix $\bm{Y}\in\mathbb{C}^{N_{s}\times L_{f}}$, where $c,h$ and $w$ denote
the color, height and width dimensions. The generated channel input is
$\bm{Y}=E_{\bm{\theta}}(\bm{x},\bm{\hat{h}})$, where $\bm{\hat{h}}$ is the
estimated CSI vector available at the transmitter.
Channel input $\bm{Y}$ is subject to an average power constraint $P_{s}$:
$\frac{1}{N_{s}L_{f}}\mathbb{E}\big[\|\bm{Y}\|^{2}_{\text{F}}\big]\leq
P_{s}$, where the expectation is taken over the input images, and
$\|\cdot\|_{\text{F}}$ denotes the Frobenius norm. Without loss of generality,
we set $P_{s}=1$.
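As an illustration, a minimal power-normalization step could be implemented as follows; enforcing the constraint sample-wise rather than in expectation is an assumption, since the text does not specify how the normalization module operates.

```python
import torch

def normalize_power(Y: torch.Tensor, P_s: float = 1.0) -> torch.Tensor:
    """Scale a complex channel input of shape (batch, N_s, L_f) so that the
    average per-symbol power equals P_s (here enforced per sample)."""
    n_symbols = Y.shape[-2] * Y.shape[-1]                      # N_s * L_f
    power = Y.abs().pow(2).sum(dim=(-2, -1), keepdim=True) / n_symbols
    return Y * torch.sqrt(P_s / power)
```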
Each OFDM symbol is passed through the inverse fast Fourier transform
(IFFT) module, appended with the cyclic prefix (CP), and transmitted to the
receiver over the multipath channel. The transfer function of the multipath
fading channel with $L_{t}$ paths is defined as
$\bm{\hat{y}}=h_{c}(\bm{y})=\bm{h_{t}}\ast\bm{y}+\bm{w}$, where $\bm{y}$ and
$\bm{\hat{y}}$ denote the input and output vectors, respectively; $\ast$ is
the linear convolution operation, $\bm{h_{t}}\in\mathbb{C}^{L_{t}}$ is the channel
impulse response, and $\bm{w}$ is the AWGN term.
The receiver first demodulates $\bm{\hat{y}}$ by removing the CP and applying
fast Fourier transform (FFT). The equivalent frequency-domain transfer
function of $h_{c}$ can be written as:
$\hat{Y}[i,k]=H[k]\bar{Y}[i,k]+W[i,k],$ (1)
where the frequency-domain channel matrix
$\bm{H}\in\mathbb{C}^{(L_{f},L_{f})}$ is a diagonal matrix with the $k$-th
diagonal element being $H[k]$ (frequency-domain channel response of the $k$-th
subcarrier). Let $\bar{\bm{Y}}\in\mathbb{C}^{(N_{s},L_{f})}$ denote the output
of the power normalization module at the transmitter (i.e., the inputs to the
IFFT module), where $\bar{Y}[i,k]$ denotes the symbol at the $k$-th subcarrier
of the $i$-th OFDM symbol. $\bm{W}\in\mathbb{C}^{(N_{s},L_{f})}$ is the
frequency-domain noise matrix, whose entries $W[i,k]\sim\mathcal{CN}(0,\sigma^{2})$
are mutually independent.
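To make Eq. (1) concrete, the following NumPy sketch (with assumed dimensions and the noise term omitted) verifies that the cyclic prefix turns the linear convolution with $\bm{h_{t}}$ into a per-subcarrier multiplication after the FFT:

```python
import numpy as np

L_f, L_t = 64, 8                     # subcarriers, channel taps (assumed values)
rng = np.random.default_rng(0)

Y_bar = (rng.normal(size=L_f) + 1j * rng.normal(size=L_f)) / np.sqrt(2)  # one OFDM symbol
h_t = (rng.normal(size=L_t) + 1j * rng.normal(size=L_t)) / np.sqrt(2 * L_t)

# Transmitter: IFFT, then prepend a cyclic prefix of length >= L_t - 1.
y = np.fft.ifft(Y_bar)
y_cp = np.concatenate([y[-(L_t - 1):], y])

# Channel: linear convolution with the impulse response (noise omitted here).
y_rx = np.convolve(y_cp, h_t)

# Receiver: drop the CP (and the convolution tail), then FFT.
Y_hat = np.fft.fft(y_rx[L_t - 1 : L_t - 1 + L_f])

# Eq. (1) without noise: Y_hat[k] == H[k] * Y_bar[k], with H the FFT of h_t.
H = np.fft.fft(h_t, n=L_f)
assert np.allclose(Y_hat, H * Y_bar)
```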
Given the FFT output of the pilots at the receiver, we use a minimum mean square
error (MMSE) or least squares (LS) channel estimator to estimate the CSI
($H[k]$) in the frequency domain (Eqn. (1)). The estimated CSI vector
$\bm{\hat{h}}$ is then used to equalize the data (MMSE equalizer). We have
$\bm{\hat{h}}\triangleq\left[|\hat{h}_{1}|,|\hat{h}_{2}|,\cdots,|\hat{h}_{L_{f}}|,\mu\right]^{\top},$
(2)
where $\hat{h}_{i}$, $i=1,...,L_{f}$, is the estimated channel gain of the
$i$-th subcarrier; while $\mu$ is the average SNR defined as
$\mu=10\log_{10}\frac{P_{s}}{\sigma^{2}}$ dB for transmit power $P_{s}$ and
noise power $\sigma^{2}$.
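A minimal sketch of the LS variant of this estimator and of assembling the CSI vector of Eq. (2) is given below; averaging over the $N_{p}$ pilot symbols is an assumption.

```python
import numpy as np

def ls_channel_estimate(Y_rx_pilots: np.ndarray, Y_pilots: np.ndarray) -> np.ndarray:
    """LS estimate per subcarrier, H_ls[k] = Y_rx[k] / Y_tx[k],
    averaged over the N_p pilot rows (arrays of shape (N_p, L_f))."""
    return (Y_rx_pilots / Y_pilots).mean(axis=0)

def build_csi_vector(h_hat: np.ndarray, P_s: float, sigma2: float) -> np.ndarray:
    """CSI vector of Eq. (2): per-subcarrier gain magnitudes plus the
    average SNR mu in dB."""
    mu = 10 * np.log10(P_s / sigma2)
    return np.concatenate([np.abs(h_hat), [mu]])
```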
Based on the equalized data $\bm{\hat{Y}_{e}}$ and the estimated CSI vector
$\bm{\hat{h}}$, the decoder $D_{\bm{\phi}}$ reconstructs the transmitted image
as $\bm{\hat{x}}$, i.e.,
$\bm{\hat{x}}=D_{\bm{\phi}}(\bm{\hat{Y}_{e}},\bm{\hat{h}})$. The performance
indicator is the peak signal-to-noise ratio (PSNR), defined as
$\text{PSNR}=10\log_{10}\frac{(\max{\bm{x}})^{2}}{\text{MSE}(\bm{x},\bm{\hat{x}})}$ (dB),
where $\max{\bm{x}}$ denotes the maximum possible value of the input signal
$\bm{x}$, $\text{MSE}(\bm{x},\bm{\hat{x}})\triangleq
\mathbb{E}[\|\bm{x}-\bm{\hat{x}}\|^{2}_{2}]$, and the expectation is taken over all
pixels.
We then jointly train the encoder and decoder to maximize the objective
$\mathcal{L}(\bm{\theta},\bm{\phi})=\mathbb{E}\big[\text{PSNR}(\bm{x},\bm{\hat{x}})\big]$
(equivalently, to minimize the reconstruction MSE), where the expectation is
taken over the randomness in both the source and channel distributions.
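A sketch of the PSNR computation and the corresponding training step is shown below, assuming images normalized to $[0,1]$; since maximizing PSNR is equivalent to minimizing MSE, the MSE serves as the differentiable loss.

```python
import torch
import torch.nn.functional as F

def psnr_db(x: torch.Tensor, x_hat: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Batch-averaged PSNR in dB for images of shape (B, c, h, w)."""
    mse = torch.mean((x - x_hat) ** 2, dim=(1, 2, 3))   # per-image MSE
    return (10 * torch.log10(max_val ** 2 / mse)).mean()

# One (hypothetical) training step with encoder/decoder/channel modules:
# x_hat = decoder(channel(encoder(x, h_hat)), h_hat)
# loss = F.mse_loss(x_hat, x)     # minimizing MSE maximizes PSNR
# loss.backward(); optimizer.step()
```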
Figure 3: (a) Comparison of our CA-JSCC scheme with Exp-JSCC when the train and test SNRs are the same. (b) Ablation experiments for the attention strategy. (c) Visualization of the power allocation for CA-JSCC and Exp-JSCC.
## III Dual-attention mechanism
In OFDM systems, different subcarriers face different channel gains, and a
judicious transmission scheme should be able to allocate power and features
across appropriate subcarriers to adapt to channel variations. To fully
exploit the estimated CSI, we propose a dual-attention based CA-JSCC scheme.
The architecture of our method is shown in Fig. 2: both the encoder and the
decoder have five feature learning (FL) and four channel learning (CL) modules
in a comb structure, allowing each feature to be modulated at a different scale.
The FL module, consisting of 2D convolution/deconvolution, batch normalization,
and PReLU layers, is designed to learn an intermediate representation of the
input. The dual-attention based CL module is designed to learn an attention
mask to map the intermediate features to appropriate subcarriers based on the
estimated CSI and input features. The CL module consists of a channel-wise
attention module and a spatial attention module. Its operation is presented in
Algorithm 1 in detail.
Algorithm 1 Dual-attention based CL
Input: $\bm{F_{in}}\in\mathbb{R}^{c\times h\times w}$, $\bm{\hat{h}}\in\mathbb{R}^{L_{f}+1}$
Output: $\bm{F_{out}}\in\mathbb{R}^{c\times h\times w}$
Stage 1: Channel-wise attention
1: $\bm{F_{ca}}=Ave_{c}(\bm{F_{in}})\in\mathbb{R}^{c}$
2: $\bm{i_{c}}=\mathrm{concatenate}(\bm{F_{ca}},\bm{\hat{h}})\in\mathbb{R}^{c+L_{f}+1}$
3: $\bm{S_{c}}=f_{c}(\bm{i_{c}})\in\mathbb{R}^{(c,1,1)}$
4: for $i=1,\dots,c$ do
5:  $\bm{F_{cout}}[i,:,:]=\bm{S_{c}}[i]\odot\bm{F_{in}}[i,:,:]$
6: end for
Stage 2: Spatial attention
1: $\bm{F_{sa}}=Ave_{s}(\bm{F_{cout}})\in\mathbb{R}^{(1,h,w)}\Rightarrow\mathbb{R}^{hw}$
2: $\bm{i_{s}}=\mathrm{concatenate}(\bm{F_{sa}},\bm{\hat{h}})\in\mathbb{R}^{hw+L_{f}+1}$
3: $\bm{S_{s}}=f_{s}(\bm{i_{s}})\in\mathbb{R}^{hw}\Rightarrow\mathbb{R}^{(h,w)}$
4: for $j=1,\dots,h$ do
5:  for $k=1,\dots,w$ do
6:   $\bm{F_{out}}[:,j,k]=\bm{S_{s}}[j,k]\odot\bm{F_{cout}}[:,j,k]$
7:  end for
8: end for
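A PyTorch sketch of Algorithm 1 follows; the widths of the FC stacks $f_{c}$, $f_{s}$ and the sigmoid output activation are illustrative assumptions, not details given in the text.

```python
import torch
import torch.nn as nn

class DualAttentionCL(nn.Module):
    """Sketch of the dual-attention CL module (Algorithm 1).
    Layer widths and output activations are assumptions."""
    def __init__(self, c: int, h: int, w: int, csi_dim: int):
        super().__init__()
        self.f_c = nn.Sequential(                        # channel-wise mask f_c
            nn.Linear(c + csi_dim, c), nn.PReLU(),
            nn.Linear(c, c), nn.Sigmoid())
        self.f_s = nn.Sequential(                        # spatial mask f_s
            nn.Linear(h * w + csi_dim, h * w), nn.PReLU(),
            nn.Linear(h * w, h * w), nn.Sigmoid())
        self.h, self.w = h, w

    def forward(self, F_in: torch.Tensor, h_hat: torch.Tensor) -> torch.Tensor:
        # Stage 1: channel-wise attention.
        F_ca = F_in.mean(dim=(2, 3))                     # (B, c) spatial average
        S_c = self.f_c(torch.cat([F_ca, h_hat], dim=1))  # (B, c)
        F_cout = S_c[:, :, None, None] * F_in            # scale each channel
        # Stage 2: spatial attention.
        F_sa = F_cout.mean(dim=1).flatten(1)             # (B, h*w) channel average
        S_s = self.f_s(torch.cat([F_sa, h_hat], dim=1))  # (B, h*w)
        return S_s.view(-1, 1, self.h, self.w) * F_cout
```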
#### III-1 Channel-wise attention module
Our channel-wise attention module is inspired by [12], which adapts to a single
SNR value over an AWGN channel. Instead, CA-JSCC learns an attention mask
to allocate and map features based on the estimated CSI of all $L_{f}$
subcarriers.
We first apply average pooling $Ave_{c}(\bm{F_{in}})$ on the input features
$\bm{F_{in}}$ along the spatial dimensions to get the vector $\bm{F_{ca}}$, where
$Ave_{c}(\bm{F_{in}})[i]\triangleq\frac{1}{hw}\sum_{j=1}^{h}\sum_{k=1}^{w}F_{in}[i,j,k]$.
$\bm{F_{ca}}$ is then concatenated with $\bm{\hat{h}}$ to get the intermediate
vector $\bm{i_{c}}$ to compute the channel-wise attention mask $\bm{S_{c}}$ by
several fully connected (FC) layers: $S_{c}=f_{c}(\bm{i_{c}})$, where $f_{c}$
represents the FC layers followed by PReLU functions. Finally, we get the
output of our channel-wise attention module as
$\bm{F_{cout}}=\bm{S_{c}}\odot\bm{F_{in}}$.
Our channel-wise attention module learns to map features from the input to the
subcarriers based on the estimated CSI, allowing JSCC to dynamically adjust to
different channel SNRs. However, the average pooling operation discards spatial
information when computing the channel attention mask, so we design a spatial
attention module to compensate for it.
#### III-2 Spatial attention module
Our spatial attention module learns to match the more critical spatial
features along the $h$ and $w$ dimensions with better channel conditions
depending on the estimated CSI.
We first apply average pooling $Ave_{s}(\bm{F_{cout}})$ on
$\bm{F_{cout}}$ along the channel dimension to get $\bm{F_{sa}}$, where
$Ave_{s}(\bm{F_{cout}})[j,k]\triangleq\frac{1}{c}\sum_{i=1}^{c}F_{cout}[i,j,k]$.
Then, $\bm{\hat{h}}$ and $\bm{F_{sa}}$ are concatenated to get the
intermediate feature $\bm{i_{s}}$, which is used to compute the spatial mask
$\bm{S_{s}}$ by several FC layers. We compute the final output feature vector
as $\bm{F_{out}}=\bm{S_{s}}\odot\bm{F_{cout}}$.
The spatial attention module further improves the PSNR performance by
exploiting spatial information and helping the JSCC encoder perform more
adaptive power allocation, matching critical features with better channels.
Figure 4: Performance of our CA-JSCC model compared with the Exp-JSCC model at different bandwidth ratios: (a) $R=1/12$, (b) $R=1/6$, (c) $R=1/3$.
## IV Training and evaluation
This section presents numerical experiment results to evaluate the performance
of our CA-JSCC scheme. The Exp-JSCC scheme introduced in [8] is the most
related work to the current study in the literature. We use Exp-JSCC trained
on different channel conditions as a benchmark to compare with the proposed
CA-JSCC scheme.
### IV-A Experimental setup
If not specified otherwise, all experiments were performed on the CIFAR-10
dataset [14] with PyTorch. Models were trained until the performance on a
validation set (held out from the training dataset) stopped improving. The
Adam optimizer was used for backpropagation.
We set the number of subcarriers to $L_{f}=64$. The Zadoff-Chu (ZC) sequence
[15], denoted by $\bm{Y_{p}}\in\mathbb{C}^{(2,64)}$, is used as the pilot. The
values of channel gains $\\{H[k]:k=1,2,...,L_{f}\\}$ are sampled from a
complex Gaussian distribution $\mathcal{CN}(0,1)$. Unless specified otherwise,
the frequency-domain channel responses in the experiments are estimated by an
MMSE estimator. We also sort the channels based on their estimated CSI to make
the training process easier; that is, we have
$|H[1]|^{2}\geq\cdots\geq|H[L_{f}]|^{2}$. Following [6], we define the bandwidth
ratio (i.e., bandwidth usage to source symbol ratio) as
$R\triangleq\frac{N_{s}L_{f}}{c\times h\times w}$, where $N_{s}L_{f}$ is the
number of symbols transmitted per image.
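As a quick worked check of this definition under the CIFAR-10 setup used below:

```python
# CIFAR-10 images: (c, h, w) = (3, 32, 32), i.e. 3072 source symbols.
c, h, w = 3, 32, 32
L_f, N_s = 64, 8                  # values used in Fig. 3a
R = (N_s * L_f) / (c * h * w)     # 512 / 3072
print(R)                          # 0.1666... = 1/6
```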
### IV-B Channel-gain adaptability
We first verify the adaptability of the CA-JSCC scheme to channel-gain
variations. Specifically, under a fixed bandwidth ratio and a given SNR, we
want to see if our dual-attention mechanism can instruct the transmitter to
exploit better channels and allocate power to different subcarriers
judiciously.
The experimental results are shown in Fig. 3a, where we set the number of OFDM
symbols to $N_{s}=8$ and the bandwidth ratio to $R=1/6$. The Exp-JSCC and CA-
JSCC models are both trained at a fixed SNR, and tested at the same SNR. As
can be seen, by feeding the estimated CSI to the transmitter, CA-JSCC can
exploit the channel-gain information and adaptively allocate power to
different subcarriers. We can see a significant gain compared to Exp-JSCC at
all SNRs. This can be attributed to the advantage of our method in better
exploiting the channel gains and allocating power to different subcarriers.
### IV-C SNR adaptability
Next, we evaluate the SNR adaptability of our scheme. If not specified
otherwise, we train the CA-JSCC model with a random SNR value for each training
episode, chosen uniformly from $[0,20]$ dB, and test the trained model at
different SNRs.
Compared with the CA-JSCC scheme trained at specific SNRs in Fig. 3a, we
observe a slight performance degradation when it is trained at random SNR
values. We conclude that, while CA-JSCC can learn to adapt to different
channel SNRs, this flexibility comes at the expense of some loss in PSNR (up
to $1$ dB). However, the CA-JSCC model trained at random SNR values still
outperforms the Exp-JSCC models trained at specific SNR values.
We also compare the performance of the dual-attention architecture CA-JSCC
with an alternative using only channel-wise attention, called CA-JSCC-CH, as
an ablation study. As shown in Fig. 3b, for three different bandwidth ratios,
CA-JSCC architecture outperforms CA-JSCC-CH at all SNR values, which shows
that the spatial attention mechanism is essential to achieve the improved
performance provided by CA-JSCC. We also observe larger gains by our dual-
attention method at higher bandwidth ratios and $\text{SNR}_{\text{test}}$
values, where more spatial information and better CSI adaptability benefit
both feature mapping and power allocation. To visualize the power allocation
executed by CA-JSCC and Exp-JSCC, we plot the average channel gain and the
average power allocated to each subcarrier in Fig. 3c, where we set SNR $=1$ dB
and $R=1/6$. The channel gains are sorted in increasing order in the plot.
Compared with Exp-JSCC, Fig. 3c shows that CA-JSCC generally allocates
more power to subcarriers with better channel conditions, as one would
desire.
It is worth noting that the Exp-JSCC scheme is not SNR-adaptive, which means
the training and test SNRs of Exp-JSCC must match to achieve good
performance, as shown in Fig. 3a. Fig. 4 presents the PSNR versus test SNR
results for bandwidth ratios of $R=1/12,1/6,1/3$ (we set $L_{f}=64$ and vary
$N_{s}$ to attain different $R$ values). As stated above, our CA-JSCC scheme
can be trained with random SNRs, yielding a single model per bandwidth
ratio that is evaluated over a range of test SNRs. The Exp-JSCC scheme, on the
other hand, is trained at five different SNRs, yielding five different models
under each bandwidth ratio. The Exp-JSCC scheme performs the best when the
training and test SNRs match. However, our CA-JSCC scheme is SNR-adaptive and
consistently outperforms Exp-JSCC at all SNRs and bandwidth ratios by a
considerable margin.
Additional experiments training on the ImageNet dataset are shown in Fig.
5a. We train the models with randomly cropped $64\times 64$ patches from
ImageNet, and evaluate them on the Kodak dataset. The results show that
training on a sufficiently large dataset (ImageNet) can allow our CA-JSCC
model to perform well on a never-seen dataset (Kodak). CA-JSCC can still
achieve state-of-the-art performance with the additional capability of channel
adaptability.
Figure 5: (a) Additional experiments with training on the ImageNet dataset and testing on the Kodak dataset. (b) Comparison of CA-JSCC with different channel estimation methods.
### IV-D Impact of CSI estimation errors
In the above experiments, we have assumed MMSE estimated CSI. In this
subsection, we look into the effect of channel estimation errors on the
performance of CA-JSCC. We repeat the experiment in Fig. 4b with three types of
CSI: i) perfect CSI, $H_{per}$; ii) MMSE estimated CSI, $H_{mmse}$; and iii)
LS estimated CSI, $H_{ls}$. We remark that $H_{mmse}$ provides a more accurate
estimate than $H_{ls}$.
The experimental results are shown in Fig. 5b, where we train our CA-JSCC
models with $H_{mmse}$ and $H_{per}$, respectively, and evaluate each of these
two models with $H_{per}$, $H_{mmse}$ and $H_{ls}$. As expected, the
other hand, the model trained with perfect CSI is not robust to CSI errors
during test time. Its performance gets worse as the quality of channel
estimation degrades. Instead, we see that models trained with $H_{mmse}$
perform better, since during training they learn to compensate for CSI
estimation errors. We conclude from these results that a more accurate CSI
during testing is generally beneficial, and the performance improves if
training is done with the same type of CSI.
## V Conclusion
We presented the CA-JSCC scheme for wireless image transmission over OFDM
channels. CA-JSCC employs a dual-attention mechanism to exploit the available
CSI to simultaneously aid the allocation of features and power resources
adaptively. The dual-attention mechanism comprises a channel-wise attention
module, which learns to map features to subcarriers according to the CSI, and
a spatial attention module, which learns high-level spatial features, allowing
the JSCC encoder to match the most important features with the most reliable
subcarriers. Numerical experiments show that our method achieves state-of-the-
art performance in various SNR and bandwidth scenarios. In addition, our
method exploits the estimated CSI to adapt to time-varying channel
conditions.
## References
* [1] C. E. Shannon, “A mathematical theory of communication,” _The Bell System Technical Journal_ , vol. 27, no. 3, pp. 379–423, 1948.
* [2] D. Gündüz, D. B. Kurka, M. Jankowski, M. M. Amiri, E. Ozfatura, and S. Sreekumar, “Communicate to learn at the edge,” _IEEE Communications Magazine_ , vol. 58, no. 12, pp. 14–19, 2020.
* [3] M. Jankowski, D. Gündüz, and K. Mikolajczyk, “Wireless image retrieval at the edge,” _IEEE Journal on Selected Areas in Communications_ , vol. 39, no. 1, pp. 89–100, 2021.
* [4] Y. Shao, D. Gunduz, and S. C. Liew, “Federated edge learning with misaligned over-the-air computation,” _IEEE Transactions on Wireless Communications_ , vol. 21, no. 6, pp. 3951–3964, 2022.
* [5] E. Bourtsoulatze, D. B. Kurka, and D. Gündüz, “Deep joint source-channel coding for wireless image transmission,” _IEEE Transactions on Cognitive Communications and Networking_ , vol. 5, no. 3, pp. 567–579, 2019.
* [6] D. B. Kurka and D. Gündüz, “Deepjscc-f: Deep joint source-channel coding of images with feedback,” _IEEE Journal on Selected Areas in Information Theory_ , vol. 1, no. 1, pp. 178–193, 2020.
* [7] D. B. Kurka and D. Gündüz, “Bandwidth-agile image transmission with deep joint source-channel coding,” _IEEE Transactions on Wireless Communications_ , vol. 20, no. 12, pp. 8081–8095, 2021.
* [8] M. Yang, C. Bian, and H.-S. Kim, “OFDM-guided deep joint source channel coding for wireless multipath fading channels,” _IEEE Transactions on Cognitive Communications and Networking_ , pp. 1–1, 2022.
* [9] K. Choi, K. Tatwawadi, A. Grover, T. Weissman, and S. Ermon, “Neural joint source-channel coding,” in _International Conference on Machine Learning_. PMLR, 2019, pp. 1182–1192.
* [10] Y. M. Saidutta, A. Abdi, and F. Fekri, “Joint source-channel coding over additive noise analog channels using mixture of variational autoencoders,” _IEEE Journal on Selected Areas in Communications_ , vol. 39, no. 7, pp. 2000–2013, 2021.
* [11] E. Erdemir, P. L. Dragotti, and D. Gunduz, “Privacy-aware communication over the wiretap channel with generative networks,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2022.
* [12] J. Xu, B. Ai, W. Chen, A. Yang, P. Sun, and M. Rodrigues, “Wireless image transmission using deep source channel coding with attention modules,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 32, no. 4, pp. 2315–2328, 2022.
* [13] T.-Y. Tung and D. Gündüz, “Deepwive: Deep-learning-aided wireless video transmission,” _IEEE Journal on Selected Areas in Communications_ , vol. 40, no. 9, pp. 2570–2583, 2022.
* [14] A. Krizhevsky, G. Hinton _et al._ , “Learning multiple layers of features from tiny images,” University of Toronto, Tech. Rep., 2009.
* [15] D. Chu, “Polyphase codes with good periodic correlation properties (corresp.),” _IEEE Transactions on Information Theory_ , vol. 18, no. 4, pp. 531–532, 1972.
# Towards a Fully Unsupervised Framework for Intent Induction in Customer Support Dialogues
Rita Costa (INOV/Instituto Superior Técnico), Bruno Martins (INESC-ID/Instituto Superior Técnico), Sérgio Viana (Xpand-it), Luisa Coheur (INESC-ID/Instituto Superior Técnico)
###### Abstract
State of the art models in intent induction require annotated datasets.
However, annotating dialogues is time-consuming, laborious and expensive. In
this work, we propose a completely unsupervised framework for intent induction
within a dialogue. In addition, we show how pre-processing the dialogue
corpora can improve results. Finally, we show how to extract the dialogue
flows of intentions by investigating the most common sequences. Although we
test our work on the MultiWOZ dataset, the fact that this framework requires
no prior knowledge makes it applicable to any possible use case, making it very
relevant to real-world customer support applications across industry.
## 1 Introduction
The evolution of technology has allowed the automation of several processes
across diversified engineering industry fields, such as customer support
services, which have drastically evolved with the advances in Natural Language
Processing and Machine Learning. One of the major challenges of these systems
is to identify users intentions, a complex Natural Language Understanding
task, that vary across domains. With the evolution of Deep Learning
architectures, recent works focused on modelling intentions and creating a
taxonomy of intents, so they can be fed to powerful supervised clustering
algorithms (Haponchyk et al.,, 2020; Chatterjee and Sengupta,, 2021).
However, these systems have the bottleneck of requiring the existence of
labelled data to be trained and deployed, and, thus, they can not be easily
transferred to real world customer support services, where the available data
for a commercial chatbot usually consists in no more than a dataset of
interactions between clients and operators. As labeling hundreds of utterances
with intent labels can be time-consuming, laborious, expensive and, sometimes,
even requires someone with expertise, it is not straightforward to apply
current state of the art supervised models to new domains (Chatterjee and
Sengupta,, 2020).
In this work, we propose a totally unsupervised intent induction framework and
apply it to the MultiWOZ dataset (Budzianowski et al., 2018). Previous
unsupervised intent induction works have used methods that cluster user query
utterances in human-human conversations (Perkins and Yang, 2019; Haponchyk et
al., 2020; Chatterjee and Sengupta, 2020). Popular clustering algorithms for
practical applications include centroid-based algorithms, such as the K-Means
algorithm (Lloyd, 1982), and density-based algorithms, namely DBSCAN
(Daszykowski and Walczak, 2009) and HDBSCAN (McInnes and Healy, 2017). An
advantage of the density-based algorithms is that they do not require the
number of clusters to be defined a priori (Ghaemi and Farnaghi, 2019) and are
more efficient at detecting clusters with arbitrary shapes in a noisy dataset,
which suits a setting where the number of dialogue intentions is not known
a priori. By using HDBSCAN, we also do not require the prior definition of
the density threshold used to create the clusters (contrary to DBSCAN), which
is more suitable for this application. Moreover, we show that text
pre-processing techniques, such as performing named entity recognition, can
improve the clustering of dialogue utterances. Finally, we complement this
experiment with an analysis of the most common dialogue flows, based on the
detected intents.
In summary, the main contributions of this work are:
* •
the application of a fully unsupervised method for extracting intents within a
dialogue, requiring no prior information about its content, and hence avoiding
the time-consuming task of manually analysing user questions and identifying
the intents (both intra- and inter-domain studies are conducted);
* •
an exploratory analysis of the dataset, motivating the usage of general text
processing techniques to optimize the intent extraction process, that can be
applied to any corpora;
* •
an informal analysis of the most common flows of discovered intentions.
As our approach requires no prior knowledge of dataset specificities or
deployment details, it is applicable to any type of data and use case, making
it relevant for a wide variety of applications, such as customer support.
This paper is organized as follows: in Section 2 we present related work, in
Section 3 we present a data analysis, and in Section 4 the experimental
results. Then, in Section 5 we present the main conclusions and some future
work.
## 2 Related Work
This section gives an overview of the tools used in the development of this
work. In Section 2.1, we present the MultiWOZ dataset, a task-oriented
collection of dialogues whose utterances are used for the experiments in
Section 4. Before feeding these sentences into an algorithm, it is required to
transform them in a space representation, for which an overview is given in
Section 2.2. In Section 2.3, we present HDBSCAN and motivate the choice of
this clustering algorithm. Finally, the method for analysis of dialogue flows
is presented in Section 2.4.
### 2.1 MultiWOZ Dataset
The MultiWOZ dataset (Budzianowski et al., 2018) is a labelled human-human
collection of goal-oriented dialogues, simulating natural conversations
between a tourist and an assistant from an information center in a touristic
city. The corpus has conversations spanning over 7 domains — attraction,
hospital, police, hotel, restaurant, taxi, train — with diverse complexity of
tasks, going from a simple information query about an attraction, to booking a
night at a hotel, a restaurant reservation and a taxi to connect both places.
The dataset is composed of 10438 dialogues, which can be either single domain
or multi-domain. The average number of turns per dialogue is 8.93 and 15.39,
for single and multi-domain, respectively.
One particularity of this dataset is its rich annotations at two levels for
each utterance: domain and intent. This information allows us to conduct the
experiments with a ground-truth reference, helping to validate the chosen
approach. In Figure 1, it is possible to see an example of
a part of a dialogue, with the corresponding domains and intents for each
utterance. Besides the possible conversation domains, an utterance can also
belong to two broader domains: the booking domain — if it refers to the act of
booking an entity — or to the general domain — if it is a greeting, an
acknowledgement, etc. In addition to the dialogues and their annotations, the
dataset is also composed of 7 database files, one for each possible domain of
conversation.
A further exploratory analysis of this dataset can be found in Section 3.
Figure 1: A dialogue example with domains and intents.
### 2.2 Text Representation
An important part of Natural Language Processing is how to represent sentences
such that it is possible to build algorithms on them. Initially, the focus was
in representing words independently. The most basic approach was to represent
text through a one-hot vector, with value 1 assigned to a word that is
present, and 0 corresponding to not present. The impossibility to transmit the
similarity between words gave rise to what are now called word embeddings,
which represent a word in a low dimensional vector space, where similar words
take similar parts of the modelling space. Popular word embeddings techniques
include Word2Vec (Mikolov et al.,, 2013) and GloVe (Pennington et al.,, 2014).
The need to solve ambiguities around words meanings and represent them with
respect to the sentence they are inserted led to the evolution of contextual
word embeddings, such as ELMo (Peters et al.,, 2018). The representations move
beyond word-level semantics, in that each word has a representation which is a
function of the entire input sequence, being able to capture syntax and
semantics. The evolution of text representation techniques opened the door to
more complex language models, with transformer architectures that use
attention to learn embeddings, such as GPT (Radford and Salimans,, 2018) and
BERT Devlin et al., (2019). In tasks such as clustering and semantic search, a
common method is to map each sentence such that semantically similar sentences
are close, as proposed in Sections 2.2.1 and 2.2.2.
#### 2.2.1 Sentence-BERT
BERT-related models have state-of-the-art performance on sentence-pair
regression tasks like semantic textual similarity. However, computing the
similarity between two sentences requires feeding both into the model, which
makes this approach too expensive for pair regression tasks such as semantic
similarity search and clustering, due to the large number of possible
combinations. To make this task more efficient, Sentence-BERT (SBERT) (Reimers
and Gurevych, 2020) uses siamese and triplet network structures to derive
semantically meaningful sentence embeddings. These techniques represent entire
sentences and their semantic information as vectors, placing semantically
similar sentences close together in the vector space. This helps the machine
understand the context, intention, and other nuances of the entire text.
Then, by using a similarity measure like cosine similarity or Euclidean
distance, it is possible to find semantically similar sentences. SBERT is
available in the Sentence-Transformers framework (https://www.sbert.net/),
with pre-trained sentence-embedding models tuned for various tasks, in more
than 100 languages.
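As a minimal sketch of this pipeline (using the model named in Section 3.1):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-distilroberta-base-v1")
sentences = ["I need a cheap hotel in the centre.",
             "Can you book a table for two at 7 pm?"]
embeddings = model.encode(sentences)                 # shape: (2, 768)
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))
```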
#### 2.2.2 Dimensionality Reduction
After using SBERT for utterance representation, we obtain embeddings of
dimension 768. Since high-dimensional embeddings lead to a loss of robustness
in clustering algorithms, we trade some loss of information for more robust
clustering by reducing the dimensionality of the embeddings before feeding
them to the clustering algorithm.
There are several methods for dimensionality reduction, such as
t-Distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten and
Hinton, 2008) and Uniform Manifold Approximation and Projection (UMAP)
(McInnes et al., 2018). Both were designed to predominantly preserve local
structure, grouping neighbouring data points together, which provides a very
informative visualization of the heterogeneity present in the data. UMAP is
more adequate in this context, since t-SNE produces unstable embeddings,
making the experiments non-reproducible.
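A minimal sketch with the `umap-learn` package; fixing `random_state` makes the reduction reproducible, and `n_components=2` matches the visualizations shown later (the dimension kept for clustering is a tunable choice):

```python
import umap

# embeddings: (n_utterances, 768) array of SBERT vectors
reducer = umap.UMAP(n_components=2, random_state=42)
embeddings_2d = reducer.fit_transform(embeddings)    # (n_utterances, 2)
```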
### 2.3 HDBSCAN for unsupervised clustering
Clustering is an unsupervised Machine Learning technique that groups data
points such that points with similar features are assigned to the same group,
while points in different groups have more dissimilar properties. Depending on
the notion of what defines a cluster, there is a variety of clustering
algorithms: some are centroid-based, such as K-Means (Lloyd, 1982), where
clustering is based on randomly initialized points and the minimum distance
from a point to the others; others are density-based, such as DBSCAN
(Daszykowski and Walczak, 2009), where points are clustered based on their
density in a particular region. Density-based clustering is particularly
relevant for problems where little is known about the dataset, since it does
not require the a priori definition of the number of clusters.
In most density-based clustering algorithms, such as DBSCAN, it is necessary
to define a density threshold to form a cluster. This parameter is especially
difficult to adjust for higher-dimensional data, posing a problem for
obtaining clusters with varying densities. To solve this, Hierarchical
Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)
(Campello et al., 2015) was developed, which does not require the prior
definition of this density threshold. The algorithm first builds a hierarchy
to figure out which peaks end up merging together and in what order. Then, for
each cluster, it evaluates whether it is more beneficial to keep that cluster
or split it into subclusters, considering the volume of each peak. HDBSCAN
uses soft clustering: unlike most clustering algorithms, data points are not
assigned hard cluster labels, but rather a vector of probabilities over the
clusters, identified by $c\in\{0,\ldots,n_{\text{clusters}}-1\}$, allowing
each point to potentially be a mix of clusters. It is also noise-aware,
meaning that data samples not assigned to any cluster receive the label $-1$.
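A sketch of running HDBSCAN with soft clustering, assuming the `hdbscan` package (parameter values here are illustrative):

```python
import hdbscan

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=50,          # smallest grouping considered a cluster
    min_samples=10,               # how conservative the clustering is
    gen_min_span_tree=True,       # enables the relative validity index
    prediction_data=True)         # enables soft-cluster membership vectors
labels = clusterer.fit_predict(embeddings_2d)        # -1 marks noise points
validity = clusterer.relative_validity_
membership = hdbscan.all_points_membership_vectors(clusterer)
soft_labels = membership.argmax(axis=1)              # most probable cluster
```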
### 2.4 Sequential Pattern Mining for the analysis of dialogue flows
In the context of dialogue interactions, besides identifying utterance
intentions, it is relevant to find the most common interactions, allowing the
flow of the dialogue to be discovered. For this, one can use the sequential
pattern mining algorithm Prefix-projected Sequential pattern mining
(PrefixSpan) (Pei et al., 2001), which discovers frequent subsequences as
patterns in a sequence database.
The PrefixSpan implementation used here
(https://pypi.org/project/prefixspan/) outputs traditional single-item
sequential patterns. The library also includes the frequent closed sequential
pattern mining algorithm BIDE (Wang and Han, 2004) and the frequent generator
sequential pattern mining algorithm FEAT (Gao et al., 2008). To use the
algorithm via its API, we refer to the PrefixSpan class in prefixspan/api.py.
In this implementation, two types of queries are available: `.frequent(n)`
returns the sequences that appear at least n times, and `.topk(k)` gives the k
most frequent sequences in the dataset. These methods also support further
options, which can be found in the documentation, and a usage sketch is given
below.
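A small usage sketch, with a toy sequence database of intent-cluster labels (the example sequences are hypothetical):

```python
from prefixspan import PrefixSpan

# Each dialogue becomes a sequence of discovered intent-cluster labels.
db = [
    [0, 1, 3, 7],
    [0, 1, 7],
    [0, 3, 7],
]
ps = PrefixSpan(db)
print(ps.frequent(2))   # patterns occurring at least 2 times
print(ps.topk(5))       # the 5 most frequent patterns
```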
## 3 Data Analysis
To have a better understanding of the task we have at hand, it is relevant to
perform an analysis of the dialogue utterances. In Section 3.1, the
similarities between embeddings of different utterances are investigated,
motivating the use of an open-source tool for entity identification. In
Section 3.2, we provide an overview of the distribution of the dataset over
domain and intent.
### 3.1 Embeddings representation
As proposed in Section 2.2, the dataset utterances are represented using a
Sentence-BERT model. These embeddings are obtained with the
`paraphrase-distilroberta-base-v1` model from the sentence-transformers
package, which outputs embeddings of dimension 768 for each utterance.
Figure 2: Similarity between pairs of embeddings: (a) raw utterances; (b) after applying the spaCy NER tool; (c) after adding dataset entities to the spaCy NER tool.
Following Laban et al. (2021), to evaluate the feasibility of this experiment,
we measured the similarity between the embeddings of 1000 pairs of random
utterances belonging to the following categories:
* •
utterances belonging to the same domain, or group of domains (label `domain`);
* •
utterances belonging to the same domain and the same intent, or groups of
domains and intents (label `domain_intent`);
* •
subsequent utterances in the dialogue (label `followed`);
* •
and utterances randomly obtained from the dataset (label `random`).
The plot can be seen in Figure 2a. As the goal is to discover broader intents
for the dialogue utterances, it can be useful to make them more homogeneous,
in order to avoid clusters forming around entities. For this purpose, we use
the spaCy Named Entity Recognition tool
(https://spacy.io/usage/linguistic-features), which replaces the recognized
entities with broader tags, covering, for instance, numbers (e.g. one,
thirty-three, two hundred) and places (e.g. Cambridge). The similarity plot
for these processed utterances is shown in Figure 2b. In addition, as
information about specific entities present in the dataset, such as hotel and
restaurant names, is available, we can also remove these from the dialogues;
Figure 2c shows the similarity between pairs of utterances with the dataset
entities removed.
From the plots, it is clear that the similarity between pairs of embeddings is
higher for utterances with common domains and intentions, suggesting that a
clustering experiment based on this measure may be successful. Moreover, the
difference is larger for utterances where entities were removed, motivating
the use of this tool to improve the clustering; a sketch of this
pre-processing step is shown below.
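A sketch of the entity-replacement step with spaCy; the `en_core_web_sm` model is an assumed choice.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def replace_entities(text: str) -> str:
    """Replace recognized entities with their broader labels,
    e.g. 'Cambridge' -> 'GPE', 'two' -> 'CARDINAL'."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        out.append(text[last:ent.start_char])
        out.append(ent.label_)
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(replace_entities("I need a train to Cambridge for two people."))
```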
### 3.2 Domain and intent annotations
As seen in Figure 1, one utterance from the MultiWOZ dataset can belong to
more than one domain. In Figure 3, we present the frequency of each possible
combination of domains (combinations present fewer than 10 times were left out
of the plot for the sake of visibility). The highest frequencies are for
single-domain utterances: the highest value is for the general domain,
followed by train, restaurant, hotel, attraction, taxi and booking. The police
and hospital domains have fewer utterances assigned, as they also have fewer
dialogues.
Figure 3: The possible combinations of domains.
For most domains, the possible intent classifications are inform, request,
recommend, no-offer and select. The booking domain has additional intents
regarding the booking outcome, book or no-book. The general domain has its own
particular intents: greet, welcome, reqmore, thank and bye. Naturally, it is
possible for an utterance to hold more than one intent. As there are many more
possible combinations of intents than of domains, we do not present plots for
all domains, but rather exemplify with the representations of utterances
belonging to the hotel domain. In Figure 4, 2-D representations of utterances
from this domain are shown, obtained with the UMAP algorithm with dimension 2.
Although these are just 2-D representations of much higher-dimensional
embeddings, it is still possible to identify some groups of sentences
belonging to the same domain or intent. This suggests that density-based
clustering of these data points may be feasible.
Figure 4: 2-D representations of utterances embeddings per intent in the hotel
domain.
## 4 Experimental Results
This section includes an analysis of the experimental results. An introduction
to the evaluation methods is given in Section 4.1. In Section 4.2, an
inter-domain clustering experiment is conducted. In Section 4.3, we present
and analyse the results of an intra-domain clustering experiment for the
hotel domain.
### 4.1 Evaluation Metrics
To evaluate the results of a clustering experiment, one can use intrinsic
methods (based on properties of the algorithm itself), such as the relative
validity index. This metric measures how close elements from one cluster are
to each other, and how distant they are from elements in other clusters. It is
important to note that the topic of clustering validation is considered one of
the most challenging topics in the clustering literature: since these are
unsupervised algorithms, it is required to resort to internal validity
criteria, calculated solely based on information intrinsic to the data.
In these particular experiments, since we have annotation references from the
dataset, it is also possible to resort to extrinsic methods that compare the
clusters with a pre-existing structure — a ground-truth solution. In this
context, BCubed precision and BCubed recall (Bagga and Baldwin, 1998) were
found to be the only metrics satisfying all the proposed
properties/constraints for clustering evaluation metrics (Amigó et al., 2009).
The BCubed precision of an item is the proportion of items in its cluster
which share the item's category (including itself), relative to the number of
items in its cluster. The BCubed recall is analogous, but relative to the
number of items within its category. The overall BCubed precision and recall
are the averaged precision and recall over all items.
Naturally, extrinsic methods are not usable when there are no ground-truth
references, leaving intrinsic methods as the most relevant for clustering
experiments, since they are the only ones applicable in real-world scenarios.
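A direct implementation of these definitions for the single-label case (a sketch; MultiWOZ annotations can be multi-label, which would require set-valued comparisons):

```python
import numpy as np

def bcubed(labels_true, labels_pred):
    """BCubed precision, recall and their harmonic mean, single-label case."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    precision, recall = [], []
    for i in range(len(labels_true)):
        same_cluster = labels_pred == labels_pred[i]   # items sharing i's cluster
        same_class = labels_true == labels_true[i]     # items sharing i's category
        correct = np.sum(same_cluster & same_class)
        precision.append(correct / np.sum(same_cluster))
        recall.append(correct / np.sum(same_class))
    p, r = np.mean(precision), np.mean(recall)
    return p, r, 2 * p * r / (p + r)
```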
### 4.2 Inter-domain Clustering
Firstly, we present the clustering results for an experiment with all the
utterances from the MultiWOZ dataset. In this inter-domain clustering
experiment, we have two types of possible labels: domain and intent. To
simplify, we present the possible domains for the utterances, whose 2-D
representations are plotted in Figure 5. As evident in Figure 5, there are
many possible combinations of domain labels for the data points. Hence, we
will refrain from plotting the possible combinations of intents, as there are
even more possibilities than those in Figure 5, and their analysis would be
too exhaustive.
Figure 5: 2-D representations of utterances embeddings per domain.
For these experiments, we opted to remove the utterances from the general
domain. As seen in the plot in Figure 3, these are the most frequent in the
dataset. The fact that these utterances are very repetitive, with very low
variability, makes the dataset very imbalanced. As the exact same
general-domain utterance occurs very often, this can harm the clustering by
generating clusters of identical utterances only, which is not aligned with
the goals of this task.
When running the HDBSCAN algorithm, there are two important parameters to set:
`min_cluster_size`, defining the smallest grouping that should be considered a
cluster (the bigger its value, the fewer clusters are obtained); and
`min_samples`, which provides a measure of how conservative the clustering
should be (the larger its value, the more conservative the clustering and the
more points are considered noise, with clusters progressively restricted to
denser areas). By default, it is set to the value of `min_cluster_size`. We
tune these values by varying `min_samples` from 10 to 100 with a step size of
10 and `min_cluster_size` from 25 to 300 with a step size of 25, measuring the
relative validity index, as reported in Table 1. There is no clear direct
relationship between this index and either of the variables. The best result
occurs for $\verb|min_samples|=100$ and $\verb|min_cluster_size|=300$; a
sketch of this grid search is given below.
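A sketch of the grid search over the two parameters, scored by the relative validity index:

```python
import hdbscan

best_params, best_score = None, -1.0
for min_samples in range(10, 101, 10):
    for min_cluster_size in range(25, 301, 25):
        clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                    min_samples=min_samples,
                                    gen_min_span_tree=True).fit(embeddings_2d)
        if clusterer.relative_validity_ > best_score:
            best_params = (min_samples, min_cluster_size)
            best_score = clusterer.relative_validity_
print(best_params, best_score)   # here: (100, 300), as in Table 1
```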
Table 1: Grid search of relative validity index over min_cluster_size and
min_samples for all utterances of the MultiWOZ dataset.
| 25 | 50 | 75 | 100 | 125 | 150 | 175 | 200 | 225 | 250 | 275 | 300
---|---|---|---|---|---|---|---|---|---|---|---|---
10 | $3.32\text{\times}{10}^{-2}$ | $4.31\text{\times}{10}^{-2}$ | $3.70\text{\times}{10}^{-5}$ | $2.47\text{\times}{10}^{-5}$ | $2.47\text{\times}{10}^{-5}$ | $2.47\text{\times}{10}^{-5}$ | $3.09\text{\times}{10}^{-6}$ | $3.09\text{\times}{10}^{-6}$ | $3.09\text{\times}{10}^{-6}$ | $3.09\text{\times}{10}^{-6}$ | $3.09\text{\times}{10}^{-6}$ | $3.09\text{\times}{10}^{-6}$
20 | $4.01\text{\times}{10}^{-2}$ | $3.76\text{\times}{10}^{-2}$ | $2.17\text{\times}{10}^{-3}$ | $1.50\text{\times}{10}^{-3}$ | $1.50\text{\times}{10}^{-3}$ | $4.95\text{\times}{10}^{-6}$ | $4.95\text{\times}{10}^{-6}$ | $2.02\text{\times}{10}^{-3}$ | $1.25\text{\times}{10}^{-4}$ | $1.23\text{\times}{10}^{-4}$ | $2.60\text{\times}{10}^{-3}$ | $2.60\text{\times}{10}^{-3}$
30 | $3.30\text{\times}{10}^{-2}$ | $2.67\text{\times}{10}^{-2}$ | $1.01\text{\times}{10}^{-4}$ | $1.38\text{\times}{10}^{-3}$ | $1.38\text{\times}{10}^{-3}$ | $1.38\text{\times}{10}^{-3}$ | $1.15\text{\times}{10}^{-5}$ | $5.46\text{\times}{10}^{-6}$ | $5.46\text{\times}{10}^{-6}$ | $4.38\text{\times}{10}^{-2}$ | $4.38\text{\times}{10}^{-2}$ | $4.15\text{\times}{10}^{-2}$
40 | $1.62\text{\times}{10}^{-2}$ | $1.62\text{\times}{10}^{-2}$ | $5.72\text{\times}{10}^{-5}$ | $8.88\text{\times}{10}^{-4}$ | $4.17\text{\times}{10}^{-6}$ | $1.26\text{\times}{10}^{-2}$ | $1.11\text{\times}{10}^{-2}$ | $9.46\text{\times}{10}^{-3}$ | $4.23\text{\times}{10}^{-2}$ | $4.03\text{\times}{10}^{-2}$ | $4.03\text{\times}{10}^{-2}$ | $4.03\text{\times}{10}^{-2}$
50 | $1.62\text{\times}{10}^{-2}$ | $2.48\text{\times}{10}^{-2}$ | $1.16\text{\times}{10}^{-3}$ | $5.89\text{\times}{10}^{-6}$ | $5.89\text{\times}{10}^{-6}$ | $6.99\text{\times}{10}^{-3}$ | $5.62\text{\times}{10}^{-3}$ | $1.44\text{\times}{10}^{-2}$ | $1.44\text{\times}{10}^{-2}$ | $5.10\text{\times}{10}^{-3}$ | $5.10\text{\times}{10}^{-3}$ | $5.09\text{\times}{10}^{-3}$
60 | $1.82\text{\times}{10}^{-2}$ | $2.06\text{\times}{10}^{-2}$ | $6.17\text{\times}{10}^{-4}$ | $2.42\text{\times}{10}^{-5}$ | $2.42\text{\times}{10}^{-5}$ | $8.76\text{\times}{10}^{-3}$ | $7.50\text{\times}{10}^{-3}$ | $3.88\text{\times}{10}^{-3}$ | $3.88\text{\times}{10}^{-3}$ | $3.88\text{\times}{10}^{-3}$ | $3.88\text{\times}{10}^{-3}$ | $3.87\text{\times}{10}^{-3}$
70 | $2.52\text{\times}{10}^{-5}$ | $2.52\text{\times}{10}^{-5}$ | $1.03\text{\times}{10}^{-3}$ | $1.03\text{\times}{10}^{-3}$ | $1.03\text{\times}{10}^{-3}$ | $2.02\text{\times}{10}^{-2}$ | $1.89\text{\times}{10}^{-2}$ | $1.66\text{\times}{10}^{-2}$ | $1.34\text{\times}{10}^{-2}$ | $1.14\text{\times}{10}^{-2}$ | $1.14\text{\times}{10}^{-2}$ | $3.09\text{\times}{10}^{-2}$
80 | $5.03\text{\times}{10}^{-4}$ | $5.03\text{\times}{10}^{-4}$ | $1.42\text{\times}{10}^{-5}$ | $1.42\text{\times}{10}^{-5}$ | $1.42\text{\times}{10}^{-5}$ | $1.87\text{\times}{10}^{-2}$ | $7.15\text{\times}{10}^{-2}$ | $7.15\text{\times}{10}^{-2}$ | $7.15\text{\times}{10}^{-2}$ | $7.15\text{\times}{10}^{-2}$ | $8.43\text{\times}{10}^{-2}$ | $1.04\text{\times}{10}^{-1}$
90 | $8.46\text{\times}{10}^{-7}$ | $5.81\text{\times}{10}^{-4}$ | $5.81\text{\times}{10}^{-4}$ | $1.14\text{\times}{10}^{-5}$ | $1.14\text{\times}{10}^{-5}$ | $8.93\text{\times}{10}^{-2}$ | $8.81\text{\times}{10}^{-2}$ | $8.57\text{\times}{10}^{-2}$ | $8.57\text{\times}{10}^{-2}$ | $9.97\text{\times}{10}^{-2}$ | $9.77\text{\times}{10}^{-2}$ | $9.80\text{\times}{10}^{-2}$
100 | $3.04\text{\times}{10}^{-6}$ | $3.82\text{\times}{10}^{-6}$ | $3.82\text{\times}{10}^{-6}$ | $3.82\text{\times}{10}^{-6}$ | $3.82\text{\times}{10}^{-6}$ | $1.37\text{\times}{10}^{-1}$ | $1.36\text{\times}{10}^{-1}$ | $1.36\text{\times}{10}^{-1}$ | $1.12\text{\times}{10}^{-1}$ | $1.10\text{\times}{10}^{-1}$ | $1.10\text{\times}{10}^{-1}$ | $1.50\text{\times}{10}^{-1}$
Table 2: Optimal clustering results for each value of min_samples, with BCubed
metrics computed using the domain labels.
| | | | Clusters | | Soft Clusters | |
---|---|---|---|---|---|---|---|---
min_samples | min_cluster_size | validity | | P | R | M | | P | R | M | | n clusters
10 | 50 | $4.31\text{\times}{10}^{-2}$ | | 0.2559 | 0.6323 | 0.3643 | | 0.5167 | 0.0915 | 0.1555 | | 105
20 | 25 | $4.01\text{\times}{10}^{-2}$ | | 0.2302 | 0.7171 | 0.3486 | | 0.5236 | 0.0905 | 0.1544 | | 159
30 | 250 | $4.38\text{\times}{10}^{-2}$ | | 0.2301 | 0.6409 | 0.3386 | | 0.3884 | 0.3509 | 0.3687 | | 16
40 | 225 | $4.23\text{\times}{10}^{-2}$ | | 0.2300 | 0.6583 | 0.3409 | | 0.3877 | 0.3837 | 0.3857 | | 15
50 | 50 | $2.48\text{\times}{10}^{-2}$ | | 0.2096 | 0.7328 | 0.3260 | | 0.4733 | 0.1999 | 0.2811 | | 55
60 | 50 | $2.06\text{\times}{10}^{-2}$ | | 0.2068 | 0.7232 | 0.3217 | | 0.418 | 0.2506 | 0.3133 | | 43
70 | 300 | $3.09\text{\times}{10}^{-2}$ | | 0.2066 | 0.6967 | 0.3187 | | 0.3665 | 0.3625 | 0.3645 | | 11
80 | 300 | $1.04\text{\times}{10}^{-1}$ | | 0.2202 | 0.6869 | 0.3335 | | 0.3635 | 0.4073 | 0.3841 | | 10
90 | 250 | $9.97\text{\times}{10}^{-2}$ | | 0.2293 | 0.6770 | 0.3426 | | 0.3833 | 0.3987 | 0.3909 | | 12
100 | 300 | $1.50\text{\times}{10}^{-1}$ | | 0.2205 | 0.6689 | 0.3317 | | 0.3473 | 0.4139 | 0.3777 | | 9
Table 3: Optimal clustering results for each value of min_samples, with BCubed
metrics computed using the intent labels.
| | | | Clusters | | Soft Clusters | |
---|---|---|---|---|---|---|---|---
min_samples | min_cluster_size | validity | | P | R | M | | P | R | M | | n clusters
10 | 50 | $4.31\text{\times}{10}^{-2}$ | | 0.1739 | 0.6529 | 0.2746 | | 0.3231 | 0.1633 | 0.2170 | | 105
20 | 25 | $4.01\text{\times}{10}^{-2}$ | | 0.1521 | 0.7269 | 0.2516 | | 0.3324 | 0.1574 | 0.2136 | | 159
30 | 250 | $4.38\text{\times}{10}^{-2}$ | | 0.1314 | 0.6662 | 0.2196 | | 0.2008 | 0.4272 | 0.2732 | | 16
40 | 225 | $4.23\text{\times}{10}^{-2}$ | | 0.1294 | 0.6825 | 0.2176 | | 0.1960 | 0.4609 | 0.2751 | | 15
50 | 50 | $2.48\text{\times}{10}^{-2}$ | | 0.1242 | 0.7437 | 0.2128 | | 0.2655 | 0.2634 | 0.2644 | | 55
60 | 50 | $2.06\text{\times}{10}^{-2}$ | | 0.1191 | 0.7339 | 0.2050 | | 0.2345 | 0.3192 | 0.2704 | | 43
70 | 300 | $3.09\text{\times}{10}^{-2}$ | | 0.1130 | 0.7161 | 0.1953 | | 0.1819 | 0.4402 | 0.2574 | | 11
80 | 300 | $1.04\text{\times}{10}^{-1}$ | | 0.1049 | 0.7034 | 0.1825 | | 0.1678 | 0.4697 | 0.2473 | | 10
90 | 250 | $9.97\text{\times}{10}^{-2}$ | | 0.1090 | 0.6937 | 0.1885 | | 0.1756 | 0.4608 | 0.2543 | | 12
100 | 300 | $1.50\text{\times}{10}^{-1}$ | | 0.1043 | 0.6910 | 0.1812 | | 0.1605 | 0.4755 | 0.2400 | | 9
For a deeper analysis of the possible clustering results, we present the
values of BCubed precision (P), BCubed recall (R) and their harmonic mean
(M) for each value of `min_samples` and the corresponding optimal value of
`min_cluster_size`. The values of the BCubed metrics using domains or intents
as labels are presented in Tables 2 and 3, respectively. Moreover, we can
evaluate both the clusters and the soft clusters, where the latter are
obtained by choosing the cluster with the maximum membership probability. The
number of clusters obtained in each experiment is also presented, giving an
idea of how granular the clusters are.
We can draw a few conclusions from the results. Firstly, an increase in the
value of P is usually accompanied by a decrease in the value of R, supporting
the need to analyse their harmonic mean (M). We can also confirm that we
need to increase both `min_samples` and `min_cluster_size` for the clustering
to become more conservative: for the same value of `min_cluster_size`, an
increase in `min_samples` leads to a lower number of obtained clusters (which
happens for $\verb|min_cluster_size|=50$, for example).
The BCubed metric results are generally better when using the domain
annotations as labels. In Figure 6, we present the results for the
inter-domain experiment with the optimal relative validity index, where the
quality of the clusters can be grasped. In Table 4, we present details about
each cluster: its length, its persistence in the spanning tree, and the
dataset reference label, which corresponds to the dataset label with the most
data points in that cluster. For a better analysis of each clustering
experiment, we can also extract the most frequent words in each cluster of
utterances. Here, we use the TF-IDF algorithm, treating each cluster of
utterances as a single document (following
https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6); a
sketch is given below.
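A sketch of this per-cluster TF-IDF extraction, treating each cluster's utterances as one document (scikit-learn assumed):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words_per_cluster(utterances, labels, n_top=5):
    """Rank words by TF-IDF, with one 'document' per cluster (noise -1 skipped)."""
    clusters = sorted(set(labels) - {-1})
    docs = [" ".join(u for u, l in zip(utterances, labels) if l == c)
            for c in clusters]
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)
    vocab = np.array(vec.get_feature_names_out())
    return {c: list(vocab[np.argsort(tfidf[i].toarray().ravel())[::-1][:n_top]])
            for i, c in enumerate(clusters)}
```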
Figure 6: Results of clustering domains in MultiWOZ using the optimal min_samples and min_cluster_size by intrinsic measures: (a) clusters for min_samples=100 and min_cluster_size=300; (b) soft clusters for min_samples=100 and min_cluster_size=300.
Table 4: Details of the clusters obtained for all the domains.
cluster | length | persistence | top words by TF-IDF | label
---|---|---|---|---
0 | 300 | 0.0829 | postcode, phone, address, number, code | attraction
1 | 300 | 0.0197 | price, range, preference, options, area | hotel
2 | 674 | 0.0578 | train, time, cambridge, leave, leaving | train
3 | 451 | 0.0464 | taxi, need, time, cardinal, contact | taxi
4 | 321 | 0.0586 | guesthouse, hotel, free, parking, star | hotel
5 | 300 | 0.0365 | restaurant, food, centre, town, restaurants | restaurant
6 | 314 | 0.0445 | people, date, cardinal, book, yes | hotel
7 | 300 | 0.0549 | reference, number, booking, successful, booked | booking general
8 | 300 | 0.1402 | fee, gbp, total, station, payable | general train
It is possible to say that the algorithm is successfully identifying different
clusters of domains, as most of the obtained clusters are clearly from the
domain assigned as their label. A few others seem to be more general (clusters
0, 1 and 6); we understand that these types of utterances must have a strong
presence in the dataset and possibly appear in dialogues of different domains.
We should underline that, as the amount and variability of dialogue
utterances increase, it is more likely that similar utterances belonging to
different domains appear, leading to utterances with different labels being
clustered together.
### 4.3 Intra-domain Clustering
For this experiment, we consider utterances from the MultiWOZ dataset
belonging to the hotel domain. Intra-domain data is the most likely to be
found in a real-world scenario, where dialogues that are jointly analyzed
belong to the same broader domain.
In Table 5, the values for the relative validity index are presented when
varying `min_samples` from 5 to 50 with a step size of 5, and
`min_cluster_size` from 10 to 100 with a step size of 10 — as we are in the
presence of a smaller amount of data, the ranges of the variables have also
been reduced. The best relative validity score is for the combination of
$\verb|min_samples|=50$ and $\verb|min_cluster_size|=80$.
Table 5: Grid search over min_cluster_size and min_samples for the hotel
domain.
| 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
---|---|---|---|---|---|---|---|---|---|---
5 | $5.91\text{\times}{10}^{-2}$ | $5.45\text{\times}{10}^{-2}$ | $5.77\text{\times}{10}^{-5}$ | $2.89\text{\times}{10}^{-5}$ | $2.89\text{\times}{10}^{-5}$ | $1.35\text{\times}{10}^{-5}$ | $1.35\text{\times}{10}^{-5}$ | $1.35\text{\times}{10}^{-5}$ | $1.35\text{\times}{10}^{-5}$ | $1.35\text{\times}{10}^{-5}$
10 | $4.29\text{\times}{10}^{-2}$ | $5.36\text{\times}{10}^{-2}$ | $1.23\text{\times}{10}^{-3}$ | $2.92\text{\times}{10}^{-5}$ | $2.96\text{\times}{10}^{-5}$ | $2.96\text{\times}{10}^{-5}$ | $2.96\text{\times}{10}^{-5}$ | $2.96\text{\times}{10}^{-5}$ | $2.96\text{\times}{10}^{-5}$ | $5.88\text{\times}{10}^{-5}$
15 | $3.14\text{\times}{10}^{-2}$ | $2.78\text{\times}{10}^{-2}$ | $2.86\text{\times}{10}^{-2}$ | $2.27\text{\times}{10}^{-2}$ | $2.25\text{\times}{10}^{-2}$ | $4.92\text{\times}{10}^{-5}$ | $3.21\text{\times}{10}^{-5}$ | $3.21\text{\times}{10}^{-5}$ | $3.21\text{\times}{10}^{-5}$ | $3.21\text{\times}{10}^{-5}$
20 | $3.36\text{\times}{10}^{-2}$ | $2.69\text{\times}{10}^{-2}$ | $1.02\text{\times}{10}^{-5}$ | $1.02\text{\times}{10}^{-5}$ | $1.02\text{\times}{10}^{-5}$ | $4.21\text{\times}{10}^{-3}$ | $4.21\text{\times}{10}^{-3}$ | $1.41\text{\times}{10}^{-6}$ | $1.41\text{\times}{10}^{-6}$ | $1.41\text{\times}{10}^{-6}$
25 | $2.69\text{\times}{10}^{-2}$ | $2.99\text{\times}{10}^{-2}$ | $6.98\text{\times}{10}^{-7}$ | $6.98\text{\times}{10}^{-7}$ | $6.66\text{\times}{10}^{-3}$ | $6.66\text{\times}{10}^{-3}$ | $6.66\text{\times}{10}^{-3}$ | $5.54\text{\times}{10}^{-8}$ | $5.54\text{\times}{10}^{-8}$ | $5.54\text{\times}{10}^{-8}$
30 | $5.41\text{\times}{10}^{-4}$ | $3.52\text{\times}{10}^{-6}$ | $3.92\text{\times}{10}^{-6}$ | $3.92\text{\times}{10}^{-6}$ | $1.35\text{\times}{10}^{-2}$ | $1.35\text{\times}{10}^{-2}$ | $1.35\text{\times}{10}^{-2}$ | $1.09\text{\times}{10}^{-2}$ | $7.53\text{\times}{10}^{-3}$ | $1.79\text{\times}{10}^{-5}$
35 | $5.97\text{\times}{10}^{-4}$ | $1.89\text{\times}{10}^{-6}$ | $6.47\text{\times}{10}^{-3}$ | $6.47\text{\times}{10}^{-3}$ | $6.47\text{\times}{10}^{-3}$ | $6.47\text{\times}{10}^{-3}$ | $6.47\text{\times}{10}^{-3}$ | $1.86\text{\times}{10}^{-7}$ | $1.86\text{\times}{10}^{-7}$ | $1.86\text{\times}{10}^{-7}$
40 | $8.81\text{\times}{10}^{-4}$ | $2.95\text{\times}{10}^{-5}$ | $5.67\text{\times}{10}^{-3}$ | $5.67\text{\times}{10}^{-3}$ | $5.67\text{\times}{10}^{-3}$ | $5.67\text{\times}{10}^{-3}$ | $5.67\text{\times}{10}^{-3}$ | $5.67\text{\times}{10}^{-3}$ | $2.92\text{\times}{10}^{-6}$ | $2.92\text{\times}{10}^{-6}$
45 | $6.33\text{\times}{10}^{-4}$ | $7.52\text{\times}{10}^{-3}$ | $7.52\text{\times}{10}^{-3}$ | $7.52\text{\times}{10}^{-3}$ | $7.52\text{\times}{10}^{-3}$ | $7.53\text{\times}{10}^{-3}$ | $5.17\text{\times}{10}^{-3}$ | $2.70\text{\times}{10}^{-3}$ | $4.46\text{\times}{10}^{-6}$ | $3.48\text{\times}{10}^{-6}$
50 | $2.09\text{\times}{10}^{-5}$ | $2.09\text{\times}{10}^{-5}$ | $2.09\text{\times}{10}^{-5}$ | $2.09\text{\times}{10}^{-5}$ | $2.09\text{\times}{10}^{-5}$ | $2.09\text{\times}{10}^{-5}$ | $1.10\text{\times}{10}^{-2}$ | $3.11\text{\times}{10}^{-1}$ | $3.11\text{\times}{10}^{-1}$ | $4.71\text{\times}{10}^{-6}$
Table 6: Optimal clustering results for each value of min_samples.
| | | | Clusters | | Soft Clusters | |
---|---|---|---|---|---|---|---|---
min_samples | min_cluster_size | validity | | P | R | M | | P | R | M | | n clusters
5 | 10 | $5.91\text{\times}{10}^{-2}$ | | 0.5509 | 0.6357 | 0.5903 | | 0.6527 | 0.0381 | 0.0721 | | 161
10 | 20 | $5.36\text{\times}{10}^{-2}$ | | 0.5282 | 0.7183 | 0.6088 | | 0.6164 | 0.0703 | 0.1262 | | 59
15 | 10 | $3.14\text{\times}{10}^{-2}$ | | 0.5200 | 0.7790 | 0.6237 | | 0.6344 | 0.0525 | 0.0970 | | 96
20 | 10 | $2.69\text{\times}{10}^{-2}$ | | 0.5155 | 0.7694 | 0.6173 | | 0.6038 | 0.0870 | 0.1520 | | 50
25 | 20 | $2.99\text{\times}{10}^{-2}$ | | 0.5158 | 0.8127 | 0.6311 | | 0.5725 | 0.1055 | 0.1781 | | 28
30 | 50 | $1.35\text{\times}{10}^{-2}$ | | 0.4994 | 0.5578 | 0.5270 | | 0.5129 | 0.7407 | 0.6061 | | 6
35 | 30 | $6.47\text{\times}{10}^{-3}$ | | 0.4971 | 0.5491 | 0.5218 | | 0.5117 | 0.7548 | 0.6100 | | 6
40 | 30 | $5.67\text{\times}{10}^{-3}$ | | 0.4956 | 0.5368 | 0.5154 | | 0.5125 | 0.7593 | 0.6120 | | 6
45 | 20 | $7.52\text{\times}{10}^{-3}$ | | 0.4964 | 0.5272 | 0.5113 | | 0.5128 | 0.7743 | 0.6170 | | 6
50 | 80 | $3.11\text{\times}{10}^{-1}$ | | 0.4885 | 0.5328 | 0.5097 | | 0.4921 | 0.8209 | 0.6153 | | 3
(a) Clusters for min_samples=50 and min_cluster_size=80.
(b) Soft clusters for min_samples=50 and min_cluster_size=80.
(c) Clusters for min_samples=45 and min_cluster_size=20.
(d) Soft clusters for min_samples=45 and min_cluster_size=20.
(e) Clusters for min_samples=25 and min_cluster_size=20.
(f) Soft clusters for min_samples=25 and min_cluster_size=20.
Figure 7: Results of clustering intents in the hotel domain.
Similarly to before, we present the values P, R, and M in Table 6, for each
value of `min_samples` and the corresponding optimal value of
`min_cluster_size`. Regarding the best performance in the BCubed metrics,
there is a mismatch between the results from the clusters and the soft
clusters: the former occurs for $\verb|min_samples|=25$ and
$\verb|min_cluster_size|=20$, while the latter occurs for
$\verb|min_samples|=45$ and $\verb|min_cluster_size|=20$. Neither agrees with
the optimal relative validity index, which is attained for
$\verb|min_samples|=50$ and $\verb|min_cluster_size|=80$. Among these
candidate combinations, the one yielding 28 clusters is the closest to the
number of original labels in the dataset.
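As a rough illustration, the grid search above can be reproduced with the `hdbscan` package along the lines of the following sketch on synthetic data; `embeddings` stands in for the utterance vectors used in the experiments, and the `relative_validity_` score requires fitting with `gen_min_span_tree=True`.

```python
# A minimal sketch of the hyperparameter grid search (synthetic data).
import hdbscan
import numpy as np

rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(0, 1, (200, 8)),
                        rng.normal(6, 1, (200, 8))])

best = None
for min_samples in range(5, 55, 5):
    for min_cluster_size in range(10, 110, 10):
        clusterer = hdbscan.HDBSCAN(min_samples=min_samples,
                                    min_cluster_size=min_cluster_size,
                                    gen_min_span_tree=True)  # required below
        clusterer.fit(embeddings)
        score = clusterer.relative_validity_  # DBCV-based relative validity
        if best is None or score > best[0]:
            best = (score, min_samples, min_cluster_size)

print("best validity %.2e at min_samples=%d, min_cluster_size=%d" % best)
```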
In Figure 7, we present the results of these three clustering experiments in a
2-D representation, ordered from fewer to more obtained clusters. The color
gradient on the right side of each graph indicates the number of clusters in
the plot, where the top of the scale corresponds to the maximum number of
clusters plus one. For experiments that yield fewer clusters, there is
generally one broad cluster to which most of the data points belong, together
with a few more specific ones. Although this can be supported by the nature of
the dialogues, where many utterances are related to searching for a hotel,
such results are not very useful once we want to analyse the flow of
intentions in a dialogue. This fact underlines the importance of adapting the
hyperparameters to the experiment and the results we are looking for,
regardless of any computed metric. In Tables 7, 8 and 9, we present details
about each cluster, for each clustering experiment of Figure 7.
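The "top words by TF-IDF" columns of the following tables can be produced along the lines of this sketch, where `texts` and `cluster_ids` are hypothetical parallel lists of utterances and assigned clusters:

```python
# A sketch of extracting per-cluster top words with TF-IDF.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words(texts, cluster_ids, k=5):
    docs = {}
    for text, c in zip(texts, cluster_ids):
        docs.setdefault(c, []).append(text)
    ids = sorted(docs)
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(" ".join(docs[c]) for c in ids)  # one doc per cluster
    vocab = np.array(vec.get_feature_names_out())
    return {c: vocab[np.argsort(X[i].toarray()[0])[::-1][:k]].tolist()
            for i, c in enumerate(ids)}

texts = ["i need free parking", "does it have free wifi",
         "what is the price range", "a cheap guesthouse, please"]
print(top_words(texts, [0, 0, 1, 1], k=3))
```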
Table 7: Details of the 3 clusters obtained for the hotel domain.
cluster | length | persistence | top words | label
---|---|---|---|---
0 | 471 | 0.0526 | hotel, guesthouse, date, cardinal, free | hotel-inform
1 | 80 | 0.0213 | range, price, moderate, cheap, don | hotel-inform
2 | 82 | 0.0112 | price, range, options, cardinal, preference | hotel-request
Table 8: Details of the 6 clusters obtained for the hotel domain.
cluster | length | persistence | top words by TF-IDF | label
---|---|---|---|---
0 | 20 | 0.0674 | reference, number, yes, need, book | hotel-request
1 | 20 | 0.0181 | postcode, phone, address, number, code | hotel-request
2 | 20 | 0.0848 | restaurant, taxi, hotel, time, need | hotel-inform
3 | 309 | 0.0397 | cardinal, guesthouse, hotel, date, free | hotel-inform
4 | 20 | 0.0421 | range, price, moderate, cheap, priced | hotel-inform
5 | 24 | 0.0465 | price, range, options, mind, area | hotel-request
Table 9: Details of the 28 clusters obtained for the hotel domain.
cluster | length | persistence | top words by TF-IDF | label
---|---|---|---|---
0 | 20 | $2.00\text{\times}{10}^{-9}$ | yes, does, fine, sounds, matter | hotel-inform
1 | 20 | $1.06\text{\times}{10}^{-7}$ | date, time, try, starting, instead | hotel-inform
2 | 21 | $3.39\text{\times}{10}^{-8}$ | phone, number, postcode, date, help | hotel-inform
3 | 20 | $1.01\text{\times}{10}^{-7}$ | postcode, phone, number, just, address | hotel-request
4 | 20 | $3.28\text{\times}{10}^{-8}$ | address, road, phone, number, town | hotel-request
5 | 21 | $3.81\text{\times}{10}^{-7}$ | restaurant, taxi, hotel, time, need | hotel-inform
6 | 21 | $1.48\text{\times}{10}^{-7}$ | book, reference, number, yes, sounds | hotel-request
7 | 21 | $2.39\text{\times}{10}^{-7}$ | reference, number, yes, need, thank | hotel-request
8 | 20 | $1.57\text{\times}{10}^{-7}$ | range, price, moderate, cheap, priced | hotel-inform
9 | 20 | $1.70\text{\times}{10}^{-7}$ | price, range, options, mind, area | hotel-request
10 | 20 | $2.47\text{\times}{10}^{-8}$ | hotels, hotel, sorry, area, criteria | hotel-nooffer hotel-request
11 | 20 | $2.85\text{\times}{10}^{-8}$ | date, people, starting, room, cardinal | hotel-inform
12 | 46 | $2.28\text{\times}{10}^{-7}$ | date, people, starting, book, cardinal | hotel-inform
13 | 20 | $1.97\text{\times}{10}^{-7}$ | date, people, starting, cardinal, yes | hotel-inform
14 | 26 | $3.99\text{\times}{10}^{-1}$ | wifi, does, free, internet, include | hotel-inform
15 | 22 | $8.29\text{\times}{10}^{-7}$ | parking, free, does, offer, yes | hotel-inform
16 | 21 | $6.88\text{\times}{10}^{-7}$ | area, stay, town, like, prefer | hotel-request
17 | 20 | $8.89\text{\times}{10}^{-8}$ | hotel, prefer, preference, guesthouse, hotels | hotel-inform hotel-request
18 | 22 | $5.30\text{\times}{10}^{-7}$ | place, stay, looking, need, north | hotel-inform
19 | 20 | $8.83\text{\times}{10}^{-8}$ | guesthouse, cardinal, star, like, stars | hotel-inform
20 | 33 | $7.69\text{\times}{10}^{-7}$ | guesthouse, lovely, does, tell, house | hotel-recommend
21 | 22 | $6.40\text{\times}{10}^{-7}$ | called, hotel, looking, guesthouse, information | hotel-inform
22 | 20 | $2.87\text{\times}{10}^{-7}$ | guesthouse, suggest, recommend, prefer, like | hotel-recommend
23 | 20 | $5.23\text{\times}{10}^{-8}$ | guesthouse, book, like, room, recommend | hotel-recommend
24 | 21 | $3.90\text{\times}{10}^{-9}$ | parking, place, stay, free, cheap | hotel-inform
25 | 20 | $3.30\text{\times}{10}^{-7}$ | parking, guesthouse, free, looking, cheap | hotel-inform
26 | 21 | $1.60\text{\times}{10}^{-7}$ | star, cardinal, hotel, free, rating | hotel-inform
27 | 40 | $1.07\text{\times}{10}^{-7}$ | wifi, free, parking, need, hotel | hotel-inform
For the experiment with only 3 obtained clusters (Table 7), it is easy to see
that the two specific clusters are related to the hotel price range: cluster 1
(yellow) is probably mostly composed of utterances from the user, given the
high presence of restrictive words (‘moderate’ and ‘cheap’); cluster 2
(purple) should be mostly composed of utterances from the assistant, where a
‘preference’ is recurrently being asked. The remaining utterances belong to
cluster 0 (magenta), whose most frequent words are most likely obtained
directly from the most frequent utterances of the dataset.
In the next experiment (Table 8), there are further, more specific clusters,
regarding booking (cluster 0, magenta), hotel details such as postcode, phone,
and address (cluster 1, orange), and requesting a taxi from the hotel to the
restaurant (cluster 2, dark yellow).
The last experiment results in a higher number of clusters, spanning more
versatile types of intents: a confirmation (cluster 0), a suggestion of
another time or date (cluster 1), a recognition of the non-existence of hotels
matching the given criteria (cluster 10), an inquiry about the wifi (cluster
14), etc. The fact that the clusters are more granular also means that the
algorithm may split clusters that could be broader, such as clusters 11 and
12, which both seem to be about a hotel room booking request. One possible
explanation is that one cluster contains more utterances belonging to user
inquiries, and the other more assistant replies.
In the three clustering experiments, most of the clusters are labelled with
either ‘hotel-inform’ or ‘hotel-request’, which are the most frequent labels
of utterances in the hotel domain, as seen in Figure 4. We can conclude that,
despite being able to obtain reasonable clusters, it is difficult for the
algorithm to match the level of granularity of the dataset annotations, which
explains the low BCubed scores.
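For reference, the BCubed metrics (Amigó et al., 2009) behind the P, R, and M columns can be computed over hard assignments as in the following sketch, where we take M to be the harmonic mean (F1) of BCubed precision and recall:

```python
# A sketch of the BCubed metrics over hard cluster assignments.
def bcubed(clusters, labels):
    n = len(clusters)
    precision = recall = 0.0
    for i in range(n):
        same_cluster = [j for j in range(n) if clusters[j] == clusters[i]]
        same_label = [j for j in range(n) if labels[j] == labels[i]]
        correct = sum(1 for j in same_cluster if labels[j] == labels[i])
        precision += correct / len(same_cluster)
        recall += correct / len(same_label)
    p, r = precision / n, recall / n
    return p, r, 2 * p * r / (p + r)  # M: harmonic mean (F1)

print(bcubed([0, 0, 1, 1], ["a", "a", "a", "b"]))  # ~(0.75, 0.67, 0.71)
```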
### 4.4 Analysis of the dialogue flow
For this part of the experiments, we feed the results from the intra-domain
clustering of the hotel domain to the tool for the analysis of sequences. In
Table 10, the most frequent flows between these 28 clusters are presented,
which can be informally analysed by resorting to the most relevant utterances
in each cluster.
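A dedicated sequence miner can be used for this step; as an illustrative stand-in, counting contiguous length-$n$ cluster subsequences already yields tables like Table 10 (`dialogues` is a hypothetical list of per-utterance cluster ids):

```python
# A sketch of the flow analysis as a count of contiguous length-n sequences.
from collections import Counter

def frequent_flows(dialogues, n, top=10):
    counts = Counter()
    for seq in dialogues:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return counts.most_common(top)

dialogues = [[26, 19, 12, 11], [26, 10, 19, 12], [27, 19, 23]]
print(frequent_flows(dialogues, 2))  # e.g. (19, 12) occurs twice
```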
Table 10: The most frequent sequences of the identified 28 clusters for the
hotel domain.
n | sequence | frequency
---|---|---
2 | $26\rightarrow 19$ | 767
2 | $19\rightarrow 12$ | 625
2 | $10\rightarrow 19$ | 621
2 | $26\rightarrow 10$ | 574
2 | $27\rightarrow 19$ | 559
2 | $26\rightarrow 12$ | 492
2 | $26\rightarrow 23$ | 492
2 | $19\rightarrow 11$ | 451
2 | $19\rightarrow 23$ | 435
2 | $21\rightarrow 19$ | 420
3 | $26\rightarrow 10\rightarrow 19$ | 249
3 | $26\rightarrow 19\rightarrow 12$ | 204
3 | $10\rightarrow 19\rightarrow 12$ | 166
3 | $27\rightarrow 10\rightarrow 19$ | 162
3 | $27\rightarrow 19\rightarrow 12$ | 161
3 | $26\rightarrow 10\rightarrow 12$ | 156
3 | $27\rightarrow 26\rightarrow 19$ | 155
3 | $10\rightarrow 26\rightarrow 19$ | 151
3 | $26\rightarrow 10\rightarrow 23$ | 141
3 | $26\rightarrow 19\rightarrow 23$ | 141
Figure 8: A dialogue example with the assigned clusters.
Clusters 26 and 27, which appear frequently, are composed of utterances where
the user asks for a hotel with specific restrictions: the former with the
intent for a particular star rating, and the latter with parking and/or wifi
restrictions. Afterwards, the most common clusters are 10 and 19: cluster 10
identifies the lack of domain entities matching the given specifications, and
cluster 19 suggests a hotel or guesthouse. Cluster 12 is also frequent,
usually assigned to utterances where the user is starting the booking process.
Although it is possible to make this correspondence, some cases do not follow
these labels, such as the transition $10\rightarrow 19$, which apparently
matches two subsequent assistant utterances. As the utterances from the user
and the assistant are all clustered at the same time, semantically similar
utterances from both parties can be assigned to the same cluster. However,
these experiments were not focused on separating user and system utterances,
as this separation also does not exist in the dataset reference labels: as an
example, there are many subsequent ‘hotel-inform’ utterances.
We provide a dialogue example with the assigned clusters in Figure 8. The
dialogue starts with the transition $26\rightarrow 19$, which is the most
common transition in the dataset. Afterwards, two subsequent utterances are
classified with cluster 10, which can be justified by their semantic closeness
(both contain negative sentences). The user then returns to providing hotel
restrictions, which is aligned with what we have seen about cluster 26. The
following suggestion from the assistant (the $6^{th}$ utterance) is also
assigned to cluster 26, which is not aligned with what we discovered about the
clusters; it should probably be assigned cluster 19. One justification for
such errors is that, since we force the algorithm to assign one cluster to
each utterance (we use the results from soft clustering), very weak
classifications are also taken into account. Besides, the most frequent
clusters tend to be the less specific ones, which the algorithm has more
difficulty classifying. When it comes to the booking itself, the algorithm
assigns two different clusters to asking for and providing the requirements,
15 and 2, which are in accordance with the main topics extracted from the
clusters: the former confirms that the hotel has free parking, and the latter
provides the required hotel details.
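The forced per-utterance assignment mentioned above can be derived from HDBSCAN's soft clustering as in this sketch, assuming the clusterer is fit with `prediction_data=True`; `embeddings` is again a synthetic stand-in for the real utterance vectors:

```python
# A sketch of forcing one cluster per utterance from soft clustering.
import hdbscan
import numpy as np

rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(0, 1, (200, 8)),
                        rng.normal(6, 1, (200, 8))])

clusterer = hdbscan.HDBSCAN(min_samples=25, min_cluster_size=20,
                            prediction_data=True).fit(embeddings)
soft = hdbscan.all_points_membership_vectors(clusterer)
forced = soft.argmax(axis=1)  # even very weak memberships get a cluster
```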
## 5 Conclusion and Future Work
In this work, we successfully built a framework that is able to identify
dialogue intentions in an unsupervised manner. To do so, we developed a
clustering tool for dialogue utterances, which groups them according to their
similarity and intention. As seen in the experiments, we were able to obtain
reasonable clusters with different levels of granularity, supporting the idea
that the algorithm parameters should be adapted to each use case and to the
nature of the data, regardless of how general the algorithm is meant to be.
Besides, the sequence analysis tool proved able to find relevant flows of
intentions in a dialogue, which can be helpful for dialogue management
applications. In future work, it would make sense to perform two separate
clustering experiments, for user and assistant utterances, to ensure they are
not mixed in the same clusters. Depending on the application, this information
may even be available, and an analysis of the sequence of user requests
without the assistant (and vice versa) could be valuable.
Besides, the problem of identifying dialogue flows can be further investigated
by modifying the sequence analysis tool to return sequences satisfying
different specifications, such as a longer length or the exclusion of a
certain cluster. Regardless, these results already show that it is possible to
identify relevant clusters in a dialogue application and to analyse their most
common flows in an unsupervised scenario. Other opportunities for future work
include the creation of a taxonomy of intents and its comparison with the one
provided in the datasets.
## Acknowledgements
This work was conducted within the IAG (Intelligent Agents Generator) project
with the universal code LISBOA-01-0247-FEDER-045385, co-funded by Lisboa 2020,
Portugal 2020, and the European Union, through the European Regional
Development Fund.
# An exact characterization of saturation for permutation matrices
Benjamin Aram Berendsohn
Institut für Informatik, Freie Universität Berlin, <EMAIL_ADDRESS>
Work supported by DFG grant KO 6140/1-1.
###### Abstract
A 0-1 matrix $M$ _contains_ a 0-1 matrix _pattern_ $P$ if we can obtain $P$
from $M$ by deleting rows and/or columns and turning arbitrary 1-entries into
0s. The saturation function $\mathrm{sat}(P,n)$ for a 0-1 matrix pattern $P$
indicates the minimum number of 1s in an $n\times n$ 0-1 matrix that does not
contain $P$, but changing any 0-entry into a 1-entry creates an occurrence of
$P$. Fulek and Keszegh recently showed that each pattern has a saturation
function either in $\mathcal{O}(1)$ or in $\Theta(n)$. We fully classify the
saturation functions of _permutation matrices_.
## 1 Introduction
In this paper, all matrices are 0-1 matrices. For cleaner presentation, we
write matrices with dots ($\begin{smallmatrix}\bullet\end{smallmatrix}$)
instead of 1s and spaces instead of 0s, for example:
$\displaystyle\left(\begin{smallmatrix}0&1&0\\\ 0&0&1\\\
1&0&0\end{smallmatrix}\right)=\left(\begin{smallmatrix}&\bullet&\\\
&&\bullet\\\ \bullet&&\end{smallmatrix}\right)$
In line with this notation, we call a row or column _empty_ if it only
contains 0s. Furthermore, we refer to changing an entry from 0 to 1 as
_adding_ a 1-entry, and to the reverse as _removing_ a 1-entry.
A _pattern_ is a matrix that is not all-zero. A matrix $M$ _contains_ a
pattern $P$ if we can obtain $P$ from $M$ by deleting rows and/or columns, and
removing arbitrary 1-entries. If $M$ does not contain $P$, we say $M$ _avoids_
$P$. Matrix pattern avoidance can be seen as a generalization of two other
well-known areas in extremal combinatorics. Pattern avoidance in permutations
(see, e.g., Vatter’s survey [Vat14]) corresponds to the case where both $M$
and $P$ are permutation matrices; and forbidden subgraphs in bipartite graphs
correspond to avoiding a pattern $P$ and all other patterns obtained from $P$
by permutation of rows and/or columns (for this, we interpret $M$ and $P$ as
adjacency matrices of bipartite graphs). There are also close
connections to the extremal theory of ordered graphs [PT06] and posets
[GNPV21].
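To make the definition concrete, the following brute-force sketch checks containment exactly as stated: $M$ contains $P$ if some ordered choice of rows and columns of $M$ dominates the 1-entries of $P$ (exponential, for illustration only).

```python
# A brute-force check of the containment definition above.
from itertools import combinations

def contains(M, P):
    m, n, p, q = len(M), len(M[0]), len(P), len(P[0])
    return any(all(M[rows[i]][cols[j]] >= P[i][j]
                   for i in range(p) for j in range(q))
               for rows in combinations(range(m), p)
               for cols in combinations(range(n), q))

P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
M = [[0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [1, 0, 0, 0]]
print(contains(M, P))  # True
```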
A classical question in extremal graph theory is to determine the maximum
number of edges in an $n$-vertex graph avoiding a fixed pattern graph $H$. The
corresponding problem in forbidden submatrix theory is determining the maximum
_weight_ (number of 1s) of an $m\times n$ matrix avoiding the pattern $P$,
denoted by $\mathrm{ex}(P,m,n)$. We call $\mathrm{ex}(P,n)=\mathrm{ex}(P,n,n)$
the _extremal function_ of the pattern $P$. The study of the extremal function
originates in its applications to (computational) geometry [Mit87, Für90,
BG91]. A systematic study initiated by Füredi and Hajnal [FH92] has produced
numerous results (e.g. [Kla00, Kla01, MT04, Tar05, Kesz09, Ful09, Gen09,
Pet11a, Pet11b]), and further applications in the analysis of algorithms have
been discovered [Pet10, CGK+15].
Clearly, for non-trivial patterns, $\mathrm{ex}(P,n)$ is at least linear and
at most quadratic. Large classes of patterns with linear and quasi-linear
extremal functions have been identified [Kesz09, Pet11a]. On the other hand,
there are patterns with nearly quadratic extremal functions [ARSz99].
A natural counterpart to the extremal problem is the _saturation problem_. A
matrix $M$ is _saturating_ for a pattern $P$, or _$P$ -saturating_ if it
avoids $P$ and is maximal in this respect, i.e., turning any 0-entry of $M$
into a 1 creates an occurrence of $P$. Clearly, $\mathrm{ex}(P,m,n)$ can also
be defined as the maximum weight of an $m\times n$ matrix that is
$P$-saturating. The function $\mathrm{sat}(P,m,n)$ indicates the _minimum_
weight of an $m\times n$ matrix that is $P$-saturating. We focus on square
matrices and the _saturation function_
$\mathrm{sat}(P,n)=\mathrm{sat}(P,n,n)$.
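Building on the `contains` sketch above, saturation can be checked directly from the definition: $M$ is $P$-saturating if it avoids $P$ and every 0-to-1 flip creates an occurrence.

```python
# Checking saturation from the definition; contains() as defined earlier.
def is_saturating(M, P):
    if contains(M, P):
        return False
    for i in range(len(M)):
        for j in range(len(M[0])):
            if M[i][j] == 0:
                M[i][j] = 1              # tentatively flip the 0-entry
                created = contains(M, P)
                M[i][j] = 0              # restore
                if not created:
                    return False
    return True
```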
The saturation problem for matrix patterns was first considered by Brualdi and
Cao [BC20] as a counterpart of saturation problems in graph theory (we refer
to [FK20] for references to graph saturation results). Fulek and Keszegh [FK20]
started a systematic study. They proved that, perhaps surprisingly, every
pattern $P$ satisfies $\mathrm{sat}(P,n)\in\mathcal{O}(1)$ or
$\mathrm{sat}(P,n)\in\Theta(n)$, where the hidden constants depend on $P$.
This is in stark contrast to the extremal problem, where a wide range of
different orders of magnitude is attained by various patterns. Fulek and
Keszegh also present large classes of patterns with linear saturation
functions. For our purposes, their most important result is that every
_decomposable_ pattern has linear saturation function. We call a pattern $P$
decomposable if it has the form
$\displaystyle\begin{pmatrix}A&\mathbf{0}\\\ \mathbf{0}&B\end{pmatrix}\text{
or }\begin{pmatrix}\mathbf{0}&A\\\ B&\mathbf{0}\end{pmatrix}$
for two matrices $A,B\neq\mathbf{0}$, where $\mathbf{0}$ denotes an all-0
matrix of the appropriate size. Otherwise, we call $P$ _indecomposable_. Also,
patterns of the first form $\left(\begin{smallmatrix}A&\mathbf{0}\\\
\mathbf{0}&B\end{smallmatrix}\right)$ are called _sum decomposable_ , and
patterns not of that form are called _sum indecomposable_ (these terms are
derived from the theory of permutation patterns, see, e.g., Vatter [Vat14];
we are not aware of a standard term for this property in the context of 0-1
matrices).
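Decomposability is easy to test for permutation matrices: writing the matrix as a permutation $\pi$, it is decomposable exactly if some prefix of the rows occupies a prefix or a suffix of the columns. A small sketch:

```python
# Testing decomposability of a permutation matrix, given as the permutation
# pi (pi[i] is the column of the 1-entry in row i, 0-indexed).
def is_decomposable(pi):
    k = len(pi)
    for s in range(1, k):
        prefix = set(pi[:s])
        if prefix == set(range(s)):           # (A 0 / 0 B) form
            return True
        if prefix == set(range(k - s, k)):    # (0 A / B 0) form
            return True
    return False

print(is_decomposable([1, 4, 2, 0, 3]))  # the pattern Q of Figure 1: False
```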
$\displaystyle Q=\left(\begin{smallmatrix}&\bullet&&&\\\ &&&&\bullet\\\
&&\bullet&&\\\ \bullet&&&&\\\ &&&\bullet&\end{smallmatrix}\right)$ Figure 1:
The matrix with saturation function $\mathcal{O}(1)$ found by Fulek and
Keszegh [FK20].
Fulek and Keszegh also found a single non-trivial pattern with bounded
saturation function ($Q$, pictured in Figure 1), and conjectured that there
are many more. Geneson [Gen20] recently confirmed this by proving that almost
all _permutation matrices_ have bounded saturation function. A permutation
matrix is a matrix with exactly one 1-entry in each row and each column. A
different class of matrices with bounded saturation function, containing both
permutation and non-permutation matrices, was found recently by the author
[Ber20] (these results have been incorporated into this paper in Sections 1.1
and 2).
In this paper, we show that, in fact, _all_ indecomposable permutation
matrices have bounded saturation function. This completes the characterization
of permutation matrices in terms of their saturation function.
###### Theorem 1.1.
A permutation matrix has linear saturation function if and only if it is
decomposable.
A simple generalization of the technique that Fulek and Keszegh used to prove
that $\mathrm{sat}(Q,n)\in\mathcal{O}(1)$ implies the following: To prove
Theorem 1.1, it is sufficient to find a _vertical witness_ for every
indecomposable permutation matrix $P$, where we define a vertical witness for
$P$ to be a matrix $M$ (of arbitrary size) that avoids $P$, has an empty row,
and adding a 1-entry in that empty row creates an occurrence of $P$ in $M$.
We therefore construct vertical witnesses for all permutation matrices. Our
constructions are based on the fact that indecomposable permutation matrices
contain a certain substructure which we call _spanning oscillation_.
We also generalize a partial result to a class that contains non-permutation
patterns:
###### Theorem 2.1.
Let $P$ be a pattern that contains four 1-entries
$x_{1},x_{2},x_{3},x_{4}$ such that for each $i\in[4]$, there are no other
1-entries in the same row or column as $x_{i}$, and $x_{i}$ is in the first or
last row or column, and $x_{1},x_{2},x_{3},x_{4}$ form one of the two patterns
$\displaystyle\left(\begin{smallmatrix}&\bullet&&\\\ &&&\bullet\\\
\bullet&&&\\\
&&\bullet&\end{smallmatrix}\right),\left(\begin{smallmatrix}&&\bullet&\\\
\bullet&&&\\\ &&&\bullet\\\ &\bullet&&\end{smallmatrix}\right).$
Then $\mathrm{sat}(P,n)\in\mathcal{O}(1)$.
In Section 1.1 we define (vertical) witnesses, and in Section 1.2, we define
spanning oscillations. In Section 1.4 we introduce an alternative
characterization of pattern containment that simplifies our proofs. In
Sections 2, 3, and 4, we construct vertical witnesses for all permutation
matrices, based on different types of spanning oscillations, which proves
Theorem 1.1. We also prove Theorem 2.1 in Section 2.
We now introduce conventions and notation used throughout the paper. Some more
definitions that are only needed for Sections 2, 3, and 4 will be introduced
in Section 1.4.
We identify 1-entries in an $m\times n$ matrix $M$ as their positions
$(i,j)\in[m]\times[n]$, where $i$ is the row of the 1-entry (from top to
bottom), and $j$ is its column (from left to right). $E(M)$ denotes the set of
1-entries in $M$. For two 1-entries $x=(i,j)\in E(M)$ and
$x^{\prime}=(i^{\prime},j^{\prime})\in E(M)$, we write
$x<_{\mathrm{v}}x^{\prime}$ if $i<i^{\prime}$ and $x<_{\mathrm{h}}x^{\prime}$
if $j<j^{\prime}$. Define $x\leq_{\mathrm{v}}x^{\prime}$ and
$x\leq_{\mathrm{h}}x^{\prime}$ analogously. We also say $x$ is _above_
$x^{\prime}$ if $x<_{\mathrm{v}}x^{\prime}$, and use _below_ , _to the right_
, and _to the left_ similarly.
In a permutation matrix $P$, we denote the leftmost (rightmost, topmost,
bottommost) 1-entry of $P$ by $\ell_{P}$ ($r_{P}$, $t_{P}$, $b_{P}$). Note
that in an indecomposable $k\times k$ permutation matrix with $k\geq 2$, these
four 1-entries are pairwise distinct.
Let $M$ be an arbitrary matrix. Denote by $\mathrm{rot}(M)$ the matrix
obtained by 90-degree clockwise rotation of $M$, denote by $\mathrm{rev}(M)$
the matrix obtained by reversing all rows of $M$, and denote by
$\mathrm{trans}(M)$ the transpose of $M$, i.e., the matrix obtained by
swapping the roles of rows and columns (we do not use the common superscript
T, as it will later be used with the meaning “top”).
### 1.1 Witnesses
Let $P$ be a matrix pattern without empty rows or columns. An _explicit
witness_ (what Fulek and Keszegh [FK20] call simply a _witness_) for $P$ is a
matrix $M$ that is $P$-saturating and contains at
least one empty row and at least one empty column. If
$\mathrm{sat}(P,n)\in\mathcal{O}(1)$, then $P$ has an explicit witness: assume
$\mathrm{sat}(P,n)\leq c_{P}$, then there exists a $(c_{P}+1)\times(c_{P}+1)$
$P$-saturating matrix $M$ with at most $c_{P}$ 1-entries. Clearly, $M$ has an
empty row and an empty column.
Fulek and Keszegh note that the reverse is also true: We can replace an empty
row (column) in a $P$-saturating matrix by an arbitrary number of empty rows
(columns), and the resulting arbitrarily large matrix will still be
$P$-saturating. As such, an $m_{0}\times n_{0}$ explicit witness for $P$ of
weight $w$ implies that $\mathrm{sat}(P,m,n)\leq w$ for each $m\geq m_{0}$ and
$n\geq n_{0}$. Note that it is critical here that $P$ has no empty rows or
columns. Otherwise, inserting empty rows or columns into $M$ might create an
occurrence of $P$.
We call a row (column) of a matrix $M$ _$P$ -expandable_ if the row (column)
is empty and adding a single 1-entry anywhere in that row (column) creates a
new occurrence of $P$ in $M$. An explicit witness for $P$ is thus a saturating
matrix with at least one $P$-expandable row and a $P$-expandable column. We
define a _witness_ for $P$ (used implicitly by Fulek and Keszegh) as a matrix
that avoids $P$ and has at least one $P$-expandable row and at least one
$P$-expandable column. Clearly, an explicit witness is a witness. The
following lemma shows that finding a witness is sufficient to show that
$\mathrm{sat}(P,n)\in\mathcal{O}(1)$.
###### Lemma 1.2.
If a pattern $P$ without empty rows or columns has an $m_{0}\times n_{0}$
witness, then $P$ has an $m_{0}\times n_{0}$ explicit witness.
###### Proof.
Let $M$ be an $m_{0}\times n_{0}$ witness for $P$. If $M$ is $P$-saturating,
then we are done. Otherwise, there must be a 0-entry $(i,j)$ in $M$ that can
be changed to 1 without creating an occurrence of $P$. Choose one such 0-entry
and turn it into 1. Note that $(i,j)$ cannot be contained in an expandable row
or column of $M$, so the resulting matrix is still a witness. Thus, we obtain
an explicit witness after repeating this step at most $m_{0}\cdot n_{0}$
times. ∎
#### 1.1.1 Vertical and horizontal witnesses
Fulek and Keszegh also considered the asymptotic behavior of the functions
$\mathrm{sat}(P,m_{0},n)$ and $\mathrm{sat}(P,m,n_{0})$, where $m_{0}$ and
$n_{0}$ are fixed. The dichotomy of $\mathrm{sat}(P,n)$ also holds in this
setting:
###### Theorem 1.3 ([FK20, Parts of Theorem 1.3]).
For every pattern $P$, and constants $m_{0},n_{0}$,
1. (i)
either $\mathrm{sat}(P,m_{0},n)\in\mathcal{O}(1)$ or
$\mathrm{sat}(P,m_{0},n)\in\Theta(n)$;
2. (ii)
either $\mathrm{sat}(P,m,n_{0})\in\mathcal{O}(1)$ or
$\mathrm{sat}(P,m,n_{0})\in\Theta(m)$.
We can adapt the notion of witnesses in order to classify
$\mathrm{sat}(P,m_{0},n)$ and $\mathrm{sat}(P,m,n_{0})$. Let $P$ be a matrix
pattern without empty rows or columns. A _horizontal (vertical) witness_ for
$P$ is a matrix $M$ that avoids $P$ and contains an expandable column
(row) (a horizontal witness can be expanded horizontally, a vertical witness
vertically). Clearly, $P$ has a horizontal witness with $m_{0}$
rows if and only if $\mathrm{sat}(P,m_{0},n)$ is bounded; and $P$ has a
vertical witness with $n_{0}$ columns if and only if $\mathrm{sat}(P,m,n_{0})$
is bounded. Further note that $M$ is a witness for $P$ if and only if $M$ is
both a horizontal witness and a vertical witness.
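The defining property of a vertical witness can likewise be verified mechanically, reusing `contains` from the earlier sketch; the following is an illustrative check, not an efficient algorithm.

```python
# Checking the vertical-witness property from the definition.
def is_vertical_witness(M, P):
    if contains(M, P):
        return False
    for i, row in enumerate(M):
        if any(row):
            continue  # an expandable row must be empty
        if all(creates_occurrence(M, P, i, j) for j in range(len(row))):
            return True
    return False

def creates_occurrence(M, P, i, j):
    M[i][j] = 1
    found = contains(M, P)
    M[i][j] = 0
    return found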
We now prove that we can essentially restrict our attention to the
classification of $\mathrm{sat}(P,m_{0},n)$ and $\mathrm{sat}(P,m,n_{0})$. The
following two lemmas are a generalization of the technique used by Fulek and
Keszegh to prove that $\mathrm{sat}(Q,n)\in\mathcal{O}(1)$ for the pattern $Q$
depicted in Figure 1.
###### Lemma 1.4.
Let $P$ be a matrix pattern without empty rows or columns, and only one
1-entry in the last row (column). Let $W$ be a horizontal (vertical) witness
for $P$. Then, appending an empty row (column) to $W$ again yields a
horizontal (vertical) witness.
###### Proof.
We prove the lemma for horizontal witnesses, and appending a row. The other
case follows by symmetry. Let $W$ be an $m_{0}\times n_{0}$ horizontal witness
for $P$, where the $j$-th column of $W$ is expandable. Let $W^{\prime}$ be the
matrix obtained by appending a row to $W$. Clearly, $W^{\prime}$ still does
not contain $P$. Moreover, adding an entry in $W^{\prime}$ at $(i,j)$ for any
$i\neq m_{0}+1$ creates a new occurrence of $P$. It remains to show that
adding an entry at $(m_{0}+1,j)$ creates an occurrence of $P$.
We know that adding an entry at $(m_{0},j)$ in $W^{\prime}$ creates an
occurrence of $P$. Let $I$ be the set of positions of 1-entries in
$W^{\prime}$ that form this occurrence of $P$. Since $P$ has only one entry in
the last row, all positions $(i^{\prime},j^{\prime})\in
I\setminus\\{(m_{0},j)\\}$ satisfy $i^{\prime}<m_{0}$. Thus, adding a 1-entry
at $(m_{0}+1,j)$ instead of $(m_{0},j)$ creates an occurrence of $P$ at
positions $(I\setminus\\{(m_{0},j)\\})\cup\\{(m_{0}+1,j)\\}$, which implies
that $W^{\prime}$ is a horizontal witness. ∎
###### Lemma 1.5.
Let $P$ be an indecomposable pattern without empty rows or columns, and with
only one 1-entry in the last row and one 1-entry in the last column. Then
$\mathrm{sat}(P,n)\in\mathcal{O}(1)$ if and only if there exist constants
$m_{0},n_{0}$ such that $\mathrm{sat}(P,m_{0},n)\in\mathcal{O}(1)$ and
$\mathrm{sat}(P,m,n_{0})\in\mathcal{O}(1)$.
###### Proof.
Suppose that $\mathrm{sat}(P,n)\in\mathcal{O}(1)$. Then $P$ has an
$m_{0}\times n_{0}$ witness $M$, and thus $\mathrm{sat}(P,m_{0},n)$ is at most
the weight of $M$, for every $n\geq n_{0}$. Similarly,
$\mathrm{sat}(P,m,n_{0})\in\mathcal{O}(1)$.
Now suppose that $\mathrm{sat}(P,m_{0},n)\in\mathcal{O}(1)$ and
$\mathrm{sat}(P,m,n_{0})\in\mathcal{O}(1)$. Then, for some $m_{1},n_{1}$,
there exists an $m_{0}\times n_{1}$ horizontal witness $W_{\mathrm{H}}$ and an
$m_{1}\times n_{0}$ vertical witness $W_{\mathrm{V}}$. Consider the following
$(m_{0}+m_{1})\times(n_{0}+n_{1})$ matrix, where $\mathbf{0}_{m\times n}$
denotes the all-0 $m\times n$ matrix:
$\displaystyle W=\begin{pmatrix}\mathbf{0}_{m_{0}\times
n_{0}}&W_{\mathrm{H}}\\\ W_{\mathrm{V}}&\mathbf{0}_{m_{1}\times
n_{1}}\end{pmatrix}$
We first show that $W$ does not contain $P$. Suppose it does. Since $P$ is
contained neither in $W_{\mathrm{H}}$ nor in $W_{\mathrm{V}}$, an occurrence
of $P$ in $W$ must contain 1-entries in both the bottom left and top right
quadrant. But then $P$ is decomposable, a contradiction.
By Lemma 1.4, $W_{\mathrm{V}}^{\prime}=(W_{\mathrm{V}},\mathbf{0}_{m_{1}\times
n_{1}})$ is a vertical witness, and
$W_{\mathrm{H}}^{\prime}=\binom{W_{\mathrm{H}}}{\mathbf{0}_{m_{1}\times
n_{1}}}$ is a horizontal witness. The expandable row in
$W_{\mathrm{V}}^{\prime}$ and the expandable column in
$W_{\mathrm{H}}^{\prime}$ are both also present in $W$. This implies that $W$
is a witness for $P$, so $\mathrm{sat}(P,n)\in\mathcal{O}(1)$. ∎
Figure 2 shows an example of a witness, constructed with Lemma 1.5, using
vertical/horizontal witnesses presented later in Section 2, and an explicit
witness constructed using Lemma 1.2.
$\displaystyle\left(\begin{smallmatrix}&&\bullet&\\\ \bullet&&&\\\
&&&\bullet\\\ &\bullet&&\end{smallmatrix}\right)\hskip
28.45274pt\left(\setcounter{MaxMatrixCols}{11}\begin{smallmatrix}&&&&&&&&\cdot&\bullet&\\\
&&&&&&&\bullet&\cdot&&\\\ &&&&&&&&\cdot&&\bullet\\\ &&&&&&\bullet&&\cdot&&\\\
&&&&&&&&\cdot&\bullet&\\\ &&&&&&&\bullet&\cdot&&\\\ &&\bullet&&&&&&\cdot&&\\\
\bullet&&&&\bullet&&&&\cdot&&\\\
\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\\
&\bullet&&&&\bullet&&&\cdot&&\\\
&&&\bullet&&&&&\cdot&&\end{smallmatrix}\right)\hskip
28.45274pt\left(\setcounter{MaxMatrixCols}{11}\begin{smallmatrix}\bullet&\bullet&\bullet&\bullet&&\bullet&\bullet&\bullet&\cdot&\bullet&\bullet\\\
&&&&&&&\bullet&\cdot&\bullet&\bullet\\\
&&&&&\bullet&\bullet&\bullet&\cdot&\bullet&\bullet\\\
&&&&&\bullet&\bullet&\bullet&\cdot&\bullet&\\\
&&&&&\bullet&&\bullet&\cdot&\bullet&\\\
&\bullet&\bullet&\bullet&&\bullet&&\bullet&\cdot&\bullet&\bullet\\\
&\bullet&\bullet&\bullet&&\bullet&&&\cdot&&\\\
\bullet&\bullet&&\bullet&\bullet&\bullet&&&\cdot&&\\\
\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\\
\bullet&\bullet&&\bullet&\bullet&\bullet&&&\cdot&&\\\
\bullet&&&\bullet&\bullet&\bullet&&&\cdot&\bullet&\bullet\end{smallmatrix}\right)$
Figure 2: A pattern (left), a witness (middle) and an explicit witness (right)
for the pattern. The small dots indicate the expandable row/column.
Observe that the transformations $\mathrm{rev}$, $\mathrm{rot}$, and
$\mathrm{trans}$ all preserve witnesses. However, the latter two change
vertical witnesses to horizontal witnesses, and vice versa. Formally:
###### Observation 1.6.
Let $P$ be a matrix with a vertical witness $W$. Then $\mathrm{rev}(W)$ is a
vertical witness of $\mathrm{rev}(P)$, $\mathrm{rot}(W)$ is a horizontal
witness of $\mathrm{rot}(P)$, and $\mathrm{trans}(W)$ is a horizontal witness
of $\mathrm{trans}(P)$.∎
Recall that our goal is to show that every indecomposable permutation matrix
has a witness. Since indecomposable permutation matrices are closed under
transposition, Lemmas 1.5 and 1.6 imply that it suffices to find a _vertical_
witness for each indecomposable permutation matrix. The same is true for every
class of permutation matrices satisfying the conditions of Lemma 1.5 that is
closed under transposition or 90-degree clockwise rotation. This is useful to
prove Theorem 1.1.
###### Lemma 1.7.
Let $\mathcal{P}$ be a class of indecomposable patterns without empty rows or
columns, and with only one 1-entry in the last row and one 1-entry in the last
column. If $\mathcal{P}$ is closed under transposition or 90-degree clockwise
rotation and each pattern in $\mathcal{P}$ has a vertical witness, then
$\mathrm{sat}(P,n)\in\mathcal{O}(1)$ for each $P\in\mathcal{P}$.
###### Proof.
Suppose that $\mathcal{P}$ is closed under transposition and each
$P\in\mathcal{P}$ has a vertical witness. By Lemma 1.5, it suffices to show
that each pattern in $\mathcal{P}$ has a horizontal witness. Let
$P\in\mathcal{P}$. Then $\mathrm{trans}(P)\in\mathcal{P}$ has a vertical
witness $W$. By 1.6, $\mathrm{trans}(W)$ is a horizontal witness for
$\mathrm{trans}(\mathrm{trans}(P))=P$.
The case that $\mathcal{P}$ is closed under 90-degree rotation can be handled
analogously. ∎
### 1.2 Spanning oscillations
We now introduce _spanning oscillations_ , a class of substructures that
characterizes indecomposable permutation matrices.
For a permutation matrix $P$, the _permutation graph_ $G_{P}$ of the
underlying permutation can be defined as follows: The vertex set is $E(P)$,
and two 1-entries $x,y\in E(P)$ have an edge between them if $x$ is below and
to the left of $y$ (or vice versa).
An _oscillation_ in a permutation matrix $P$ is a sequence
$X=(x_{1},x_{2},\dots,x_{m})$ of distinct 1-entries in $P$ such that $X$ forms
an induced path in $G_{P}$, i.e., there is an edge between $x_{i}$ and
$x_{i+1}$ for each $i\in[m-1]$, and no other edges between 1-entries in $X$.
Oscillations have been studied before in several contexts [Pra73, BRV08,
Vat11]. Vatter showed that a permutation matrix $P$ is sum indecomposable if
and only if it has an oscillation that starts with $\ell_{P}$ and ends with
$r_{P}$ [Vat11, Propositions 1.4, 1.7]. Our characterization of indecomposable
permutations is very similar. Call an oscillation
$X=(x_{1},x_{2},\dots,x_{m})$ _spanning_ if
$\\{x_{1},x_{2}\\}=\\{\ell_{P},t_{P}\\}$ and
$\\{x_{m-1},x_{m}\\}=\\{b_{P},r_{P}\\}$.
###### Lemma 1.8.
Let $P$ be a sum indecomposable permutation matrix such that $t_{P}$ is to the
left of $b_{P}$ or $\ell_{P}$ is above $r_{P}$. Then $P$ has a spanning
oscillation.
###### Proof.
We write $\ell,t,b,r$ for $\ell_{P},t_{P},b_{P},r_{P}$. By symmetry, we can
assume that $t$ is to the left of $b$ (otherwise, replace $P$ by
$\mathrm{trans}(P)$, noting that $G_{P}=G_{\mathrm{trans}(P)}$). Recall that
$\ell,t,b,r$ are pairwise distinct, as $P$ is indecomposable and not $1\times
1$.
Since $P$ is sum indecomposable, it has an oscillation
$X^{\prime}=(x_{1}^{\prime},x_{2}^{\prime},\dots,x_{m}^{\prime})$ with
$x_{1}^{\prime}=\ell$, $x_{m}^{\prime}=r$. Suppose first that $t$ occurs in
$X^{\prime}$. Since $G_{P}$ has an edge between $\ell$ and $t$, and
$X^{\prime}$ is an _induced_ path in $G_{P}$, this means that
$x_{2}^{\prime}=t$. Otherwise, note that $t$ is connected in $G_{P}$ to
precisely those 1-entries that are to the left of $t$. Let $i$ be maximal such
that $x_{i}^{\prime}$ is to the left of $t$. If
$i=1$, then $(t,\ell,x_{2}^{\prime},\dots,x_{m}^{\prime})$ is an induced path
in $G_{P}$. Otherwise, $\ell,t,x_{i}^{\prime},\dots,x_{m}^{\prime}$ is an
induced path in $G_{P}$. In either case, we have an oscillation
$X^{\prime\prime}=(x_{1}^{\prime\prime},x_{2}^{\prime\prime},\dots,x_{m}^{\prime\prime})$
that starts with $\\{\ell,t\\}$ and ends with $r$.
It remains to make sure that $b$ is among the last two 1-entries in the
oscillation. If $b$ occurs in $X^{\prime\prime}$, then
$x_{m-1}^{\prime\prime}=b$, as with $t$. Otherwise, let $j$ be minimal such
that $x_{j}^{\prime\prime}$ is to the right of $b$. If $j=m$, then
$X=(x_{1}^{\prime\prime},x_{2}^{\prime\prime},\dots,x_{m-1}^{\prime\prime},r,b)$
is an induced path in $G_{P}$. Otherwise,
$X=(x_{1}^{\prime\prime},x_{2}^{\prime\prime},\dots,x_{j}^{\prime\prime},b,r)$
is an induced path in $G_{P}$. Since $\ell,t$ are both to the left of $b$, we
have $j\geq 2$, so $X$ is a spanning oscillation. ∎
We obtain the following characterization of indecomposable permutation
matrices.
###### Corollary 1.9.
A permutation matrix $P$ is indecomposable if and only if $P$ or
$\mathrm{rev}(P)$ has a spanning oscillation or $P$ is the $1\times 1$
permutation matrix.
###### Proof.
First, assume $P$ is indecomposable. If $t_{P}$ is to the left of $b_{P}$,
then Lemma 1.8 implies that $P$ has a spanning oscillation. If $t_{P}$ is to
the right of $b_{P}$, then Lemma 1.8 implies that $\mathrm{rev}(P)$ has a
spanning oscillation. If $t_{P}=b_{P}$, then $P$ is $1\times 1$.
Second, assume $P$ has a spanning oscillation. Then $P$ is sum indecomposable.
Suppose $P$ is decomposable, then $P$ has the form
$\left(\begin{smallmatrix}\mathbf{0}&B\\\
A&\mathbf{0}\end{smallmatrix}\right)$, so $t$ is to the right of $b$ and
$\ell$ is below $r$. But then $\ell,b,t,r$ form the complete bipartite graph
$K_{2,2}$ in $G_{P}$, implying that $P$ has no spanning oscillation, a
contradiction. A symmetric argument shows that $P$ is indecomposable if
$\mathrm{rev}(P)$ has a spanning oscillation. ∎
(Spanning) oscillations have a very rigid structure, which we now describe
more concretely, in terms of relative positions of 1-entries. Let $P$ be a
permutation matrix and $X=(x_{1},x_{2},\dots,x_{m})$ be a spanning oscillation
of $P$. For $2\leq i\leq m-1$, call $x_{i}$ an _upper_ 1-entry if $x_{i}$ is
above and to the right of $x_{i-1}$ and $x_{i+1}$, and call $x_{i}$ a _lower_
1-entry if $x_{i}$ is below and to the left of $x_{i-1}$ and $x_{i+1}$. Since
$G_{P}$ contains the edges $\\{x_{i-1},x_{i}\\}$ and $\\{x_{i},x_{i+1}\\}$,
but not the edge $\\{x_{i-1},x_{i+1}\\}$, every 1-entry (except $x_{1},x_{m}$)
is either upper or lower. Clearly, upper and lower 1-entries alternate, i.e.,
$x_{i}$ is upper if and only if $x_{i+1}$ is lower, for $2\leq i<m-1$. It is
convenient to also call $\ell_{P},b_{P}$ lower 1-entries and $t_{P},r_{P}$
upper 1-entries. We then have:
###### Observation 1.10.
Let $P$ be a permutation matrix and $X=(x_{1},x_{2},\dots,x_{m})$ be a
spanning oscillation of $P$. If $x_{1}=\ell_{P}$, then all $x_{i}$ with odd
$i$ are lower 1-entries, and all $x_{i}$ with even $i$ are upper 1-entries. If
$x_{1}=t_{P}$, then all $x_{i}$ with odd $i$ are upper 1-entries, and all
$x_{i}$ with even $i$ are lower 1-entries.∎
It is easy to see that, if $x_{1}=\ell_{P}$, then $x_{3}$, $x_{4}$ must be
below and to the right of $x_{1}$. By induction, and by considering symmetric
cases, we can prove:
###### Observation 1.11.
Let $P$ be a permutation matrix and $X=(x_{1},x_{2},\dots,x_{m})$ be a
spanning oscillation of $P$. Then $x_{i}$ is above and to the left of $x_{j}$
for each $i\in[m-2]$ and $i+2\leq j\leq m$.
This leaves us with only two possible spanning oscillations for each length
$m$, see Figure 3. Observe that spanning oscillations are preserved by
transposition and 180-degree rotation, in the following sense. Let $P$ be a
permutation matrix and $X$ be a spanning oscillation of $P$. Let
$P^{\prime}=\mathrm{trans}(P)$ (resp.,
$P^{\prime}=\mathrm{rot}^{2}(P)=\mathrm{rot}(\mathrm{rot}(P))$). Then
$P^{\prime}$ has a spanning oscillation $X^{\prime}$ that corresponds to the
transpose (resp., the 180-degree rotation) of $X$. With slight abuse of
notation we write $X^{\prime}=\mathrm{trans}(X)$ (resp.,
$X^{\prime}=\mathrm{rot}^{2}(X)$).
Figure 3: The spanning oscillations of length $m$, for $m=4,5,6,7$. The dashed
line segments indicate the edges of the permutation graph. The borders
indicate the possible positions for other 1-entries if the spanning
oscillation is tall (top row) or wide (bottom row).
A spanning oscillation $X=(x_{1},x_{2},\dots,x_{m})$ is _tall_ if the
following two properties are satisfied for each $2\leq i\leq m-2$ where
$x_{i}$ is an upper 1-entry.
1. (i)
$P$ has no 1-entry that is below $x_{i+1}$ and to the left of $x_{i}$.
2. (ii)
$P$ has no 1-entry that is above $x_{i}$ and to the right of $x_{i+1}$.
A spanning oscillation $X$ is _wide_ if $\mathrm{trans}(X)$ is tall. We now
show that we can always assume that a minimum-length spanning oscillation is
tall (or wide).
###### Lemma 1.12.
Let $P$ be a permutation matrix and $X=(x_{1},x_{2},\dots,x_{m})$ be a
spanning oscillation of $P$ of minimum length $m$. Then $P$ has a tall
spanning oscillation of length $m$ that starts with $x_{1},x_{2}$ and ends
with $x_{m-1},x_{m}$.
###### Proof.
Suppose $X$ is not tall, so it violates (i) or (ii) at some index $i$ with
$2\leq i\leq m-2$. We now show how to construct a spanning oscillation
$X^{\prime}$ of length $m$ that starts with $x_{1},x_{2}$, ends with
$x_{m-1},x_{m}$, and violates (i) or (ii) less often than $X$. Repeating this,
we eventually obtain a tall spanning oscillation.
Suppose first that $X$ violates (i) at index $i$. Then $x_{i}$ is an upper
1-entry, and there is a $y\in E(P)$ such that $y$ is below $x_{i+1}$ and to
the left of $x_{i}$. Assume $y$ is the bottommost such 1-entry. Note
$y\notin\\{\ell_{P},b_{P}\\}$, and that $x_{i+2}$ is above $x_{i+1}$ by 1.10.
Let $j$ be minimal such that $x_{j}$ is to the right of $y$. Since
$\ell_{P}<_{\mathrm{h}}y<_{\mathrm{h}}x_{i}$, we have $2\leq j\leq i$. Let $k$
be maximal such that $x_{k}$ is above $y$. Since
$x_{i+2}<_{\mathrm{v}}y<_{\mathrm{v}}b_{P}$, we have $i+2\leq k\leq m-1$.
Consider the sequence
$X^{\prime}=(x_{1},x_{2},\dots,x_{j},y,x_{k},x_{k+1},\dots,x_{m})$. We want to
show that $X^{\prime}$ is a spanning oscillation of $P$. Let $j^{\prime}<j$.
By definition of $j$, we know that $x_{j^{\prime}}$ is to the left of $y$. By
1.11, $x_{j^{\prime}}$ is above $x_{i+1}$, implying that $x_{j^{\prime}}$ is
above $y$. Thus, $G_{P}$ has no edge between $x_{j^{\prime}}$ and $y$.
Similarly, we can prove that there is no edge between $y$ and $x_{k^{\prime}}$
for each $k^{\prime}>k$. This means that $X^{\prime}$ is an oscillation. Since
$2\leq j$ and $k\leq m-1$, we know that $X^{\prime}$ starts with $x_{1},x_{2}$
and ends with $x_{m-1},x_{m}$, implying that $X^{\prime}$ is a spanning
oscillation.
By assumption, $P$ has no spanning oscillation shorter than $X$, so
$X^{\prime}$ must have length $m$, implying that $j=i$ and $k=i+2$.
not violate (i) at index $i$, since, by choice of $y$, there are no 1-entries
below $y$ and to the left of $x_{j}=x_{i}$. Thus, $X^{\prime}$ has strictly
less overall violations of (i) or (ii) than $X$.
The second case, where $X$ violates (ii), can be proven symmetrically. ∎
Clearly, the statement of Lemma 1.12 is also true when replacing “tall” with
“wide”, using the same proof on $\mathrm{trans}(P)$.
### 1.3 Structure of the main proof
We divide the proof of Theorem 1.1 into three cases, proven in Sections 2, 3,
and 4. In Section 2, we handle the special case of length-4 spanning
oscillations:
###### Lemma 1.13.
Each permutation matrix with a spanning oscillation of length 4 has a vertical
witness.
In Section 3, we prove:
###### Lemma 3.1.
Each permutation matrix $P$ with a wide spanning oscillation of length $m\geq
5$ that starts with $t_{P}$ has a vertical witness.
The final and most involved case is treated in Section 4:
###### Lemma 4.1.
Each permutation matrix $P$ with a tall spanning oscillation of even length
$m\geq 6$ that starts with $\ell_{P}$ has a vertical witness.
It is not immediately obvious that Lemmas 1.13, 3.1, and 4.1 cover all
indecomposable permutation matrices. We now show that this is the case.
###### Corollary 1.14.
Every indecomposable permutation matrix has a vertical witness.
###### Proof.
Let $P$ be an indecomposable permutation matrix. If $P$ is $1\times 1$, any
all-zero matrix is a witness of $P$. Otherwise, one of $P$ and
$\mathrm{rev}(P)$ has a spanning oscillation $X$ by Corollary 1.9. By 1.6, it
suffices to find a vertical witness for either $P$ or $\mathrm{rev}(P)$, so
without loss of generality, assume that $X$ is a spanning oscillation of $P$,
and that $X$ has minimum length $m$. If $m=4$, we can apply Lemma 1.13. If
$m\geq 5$ and $X$ starts with $t_{P}$, then Lemma 1.12 implies that $P$ also
has a wide spanning oscillation of length $m$ that starts with $t_{P}$, so we
can apply Lemma 3.1.
Now assume $m\geq 5$ and $X$ starts with $\ell_{P}$. If $m$ is even, we can
apply Lemma 4.1, since by Lemma 1.12 we can assume that $X$ is tall.
Otherwise, if $m$ is odd, 1.10 implies that $X$ ends with $b_{P}$. This means
that the spanning oscillation $\mathrm{rot}^{2}(X)$ of $\mathrm{rot}^{2}(P)$
starts with $t_{\mathrm{rot}^{2}(P)}$, so we can apply Lemma 3.1 to obtain a
vertical witness $W^{\prime}$ of $\mathrm{rot}^{2}(P)$. 1.6 implies that
$\mathrm{rot}^{2}(W^{\prime})$ is a vertical witness of $P$. ∎
### 1.4 Embeddings
In the following sections, we use an alternative definition of pattern
containment based on sets of 1-entries. Let $P$ be a pattern and $M$ be a
matrix. We say a function $\phi\colon E(P)\rightarrow E(M)$ is an _embedding_
of $P$ into $M$ if for $x,y\in E(P)$ we have
$x<_{\mathrm{h}}y\Leftrightarrow\phi(x)<_{\mathrm{h}}\phi(y)$ and
$x<_{\mathrm{v}}y\Leftrightarrow\phi(x)<_{\mathrm{v}}\phi(y)$.
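The order-preservation condition can be checked pairwise by comparing signs of coordinate differences; a small sketch, with $\phi$ given as a dict from (row, column) positions of 1-entries in $P$ to positions in $M$:

```python
# A pairwise check of the embedding condition.
def sgn(a, b):
    return (a > b) - (a < b)

def is_embedding(phi):
    pairs = list(phi.items())
    return all(sgn(x[0], y[0]) == sgn(px[0], py[0]) and
               sgn(x[1], y[1]) == sgn(px[1], py[1])
               for i, (x, px) in enumerate(pairs)
               for y, py in pairs[i + 1:])

print(is_embedding({(0, 1): (0, 2), (1, 0): (2, 0)}))  # True
```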
Note that if we allow empty rows or columns in $P$, then $E(P)$ does not
determine $P$, since appending an empty row or column to $P$ does not change
$E(P)$. This means that the existence of an embedding of $P$ into $M$ does not
necessarily imply that $P$ is contained in $M$. However, we only consider
patterns without empty rows or columns in this paper, and in that case,
equivalence holds. lemmarestateEquivContainment Let $P$, $M$ be matrices, and
let $P$ have no empty rows or columns. Then $P$ is contained in $M$ if and
only if there is an embedding of $P$ into $M$.
A proof of Section 1.4 is provided in Appendix A. We now introduce some
notation used in the following sections.
Let $x=(i,j)$, $y=(i^{\prime},j^{\prime})$ be two 1-entries. The _horizontal
distance_ between $x$ and $y$ is
$\mathrm{d}^{\mathrm{h}}((i,j),(i^{\prime},j^{\prime}))=|j-j^{\prime}|$, and
the _vertical distance_ between $x$ and $y$ is
$\mathrm{d}^{\mathrm{v}}((i,j),(i^{\prime},j^{\prime}))=|i-i^{\prime}|$. The
_width_ $\mathrm{w}(A)$ (resp. _height_ $\mathrm{h}(A)$) of a set $A\subseteq
E(M)$ is the maximum horizontal (resp. vertical) distance between two
1-entries in $A$.
Let $\phi$ be an embedding of $P$ into $M$, and let $x,y\in E(M)$. We define
variants of the above notions that only “count” 1-entries of $M$ that are hit
by $\phi$. This will be useful if we have some, but not full information about
$\phi$. Let $\mathrm{d}^{\mathrm{h}}_{\phi}(x,y)$ be the number of 1-entries
$z\in E(P)$ such that $x<_{\mathrm{h}}\phi(z)\leq_{\mathrm{h}}y$, and let
$\mathrm{d}^{\mathrm{v}}_{\phi}(x,y)$ be the number of 1-entries $z\in E(P)$
such that $x<_{\mathrm{v}}\phi(z)\leq_{\mathrm{v}}y$. For $A\subseteq E(M)$,
let $\mathrm{w}_{\phi}(A)=\max_{x,y\in A}\mathrm{d}^{\mathrm{h}}_{\phi}(x,y)$,
and $\mathrm{h}_{\phi}(A)=\max_{x,y\in A}\mathrm{d}^{\mathrm{v}}_{\phi}(x,y)$.
###### Observation 1.15.
Let $\phi$ be an embedding of $P$ into $M$, let $x,y\in E(P)$, and let
$\phi(x),\phi(y)\in A\subseteq E(M)$. Then
$\displaystyle\mathrm{d}^{\mathrm{h}}(x,y)=\mathrm{d}^{\mathrm{h}}_{\phi}(\phi(x),\phi(y))\leq\mathrm{d}^{\mathrm{h}}(\phi(x),\phi(y))\leq\mathrm{w}(A);\text{ and}$
$\displaystyle\mathrm{d}^{\mathrm{v}}(x,y)=\mathrm{d}^{\mathrm{v}}_{\phi}(\phi(x),\phi(y))\leq\mathrm{d}^{\mathrm{v}}(\phi(x),\phi(y))\leq\mathrm{h}(A).$ ∎
## 2 Spanning oscillations of length 4
In this section, we show Theorem 2.1, which immediately implies Lemma 1.13.
Let $\mathcal{P}$ denote the class of patterns defined in Theorem 2.1. Note
that $\mathcal{P}$ is closed under transposition. Thus, by Lemma 1.7, it is
sufficient to prove that each $P\in\mathcal{P}$ has a vertical witness.
Let $\mathcal{P}^{\prime}$ be the subset of patterns $P\in\mathcal{P}$ where
the unique leftmost 1-entry $\ell$ of $P$ is above the unique rightmost
1-entry $r$ of $P$. It is easy to see that $P$ has the following form, where
the boxes contain arbitrarily many 1-entries:
(figure omitted: the relative positions of $\ell$, $t$, $b$, and $r$, with boxes for the remaining 1-entries)
Since for each $P\in\mathcal{P}\setminus\mathcal{P}^{\prime}$, we have
$\mathrm{rev}(P)\in\mathcal{P}^{\prime}$, 1.6 implies that it is sufficient to
prove that each $P\in\mathcal{P}^{\prime}$ has a vertical witness.
$\displaystyle\left(\begin{smallmatrix}&&\bullet&\\\ \bullet&&&\\\
&&&\bullet\\\
&\bullet&&\end{smallmatrix}\right)\rightarrow\left(\begin{smallmatrix}&&\bullet\\\
\bullet&&\\\ &&\\\
&\bullet&\end{smallmatrix}\right),\left(\begin{smallmatrix}&\bullet&\\\ &&\\\
&&\bullet\\\
\bullet&&\end{smallmatrix}\right)\rightarrow\left(\begin{smallmatrix}&&\bullet&&&\\\
\bullet&&&&\bullet&\\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\\
&\bullet&&&&\bullet\\\ &&&\bullet&&\end{smallmatrix}\right)$ Figure 4:
Construction of $S(Q_{1})$ from $Q_{1}$. The small dots indicate the
expandable row.
###### Lemma 2.1.
Each $P\in\mathcal{P}^{\prime}$ has a vertical witness.
###### Proof.
Let $P\in\mathcal{P}^{\prime}$ be a $k_{1}\times k_{2}$ pattern, let
$\ell=(i,j)$ be the unique leftmost 1-entry in $P$, and let
$r=(i^{\prime},j^{\prime})$ be the unique rightmost 1-entry in $P$. Note that
$i<i^{\prime}$, since $\ell$ is above $r$.
Let $P_{\mathrm{L}}$ and $P_{\mathrm{R}}$ be the submatrices of $P$ obtained
by removing the rightmost, resp. leftmost, column. Note that in
$P_{\mathrm{L}}$, the $i^{\prime}$-th row is empty, and in $P_{\mathrm{R}}$,
the $i$-th row is empty. We place a copy of $P_{\mathrm{L}}$ to the left of
$P_{\mathrm{R}}$, so that the two empty rows coincide. Formally, obtain $L$
from $P_{\mathrm{L}}$ by appending $i^{\prime}-i>0$ rows (at the bottom),
obtain $R$ from $P_{\mathrm{R}}$ by prepending $i^{\prime}-i>0$ rows (at the
top), and define $S(P)$ as the concatenation $(L,R)$. Note that $S(P)$ is a
$(k_{1}+i^{\prime}-i)\times(2k_{2}-2)$ matrix, and that the $i^{\prime}$-th
row of $S(P)$ is empty. Figure 4 shows an example of the construction. In the
following, we will use $L$ and $R$ interchangeably with the corresponding
subsets of $E(S(P))$.
We claim that the $i^{\prime}$-th row is $P$-expandable. Indeed, adding a
1-entry in the $i^{\prime}$-th row in the first $k_{2}-1$ columns (to the left of
$R$) completes an occurrence of $P$ with $R$, and adding a 1-entry in the last
$k_{2}-1$ columns (to the right of $L$) completes an occurrence of $P$ with $L$.
It remains to show that $S(P)$ avoids $P$. Suppose $S(P)$ contains $P$, so
there is an embedding $\phi$ of $P$ into $S(P)$. Let $t,b\in E(P)$ be the
unique topmost, respectively bottommost, 1-entry in $P$.
Suppose first that $\phi(b)\in L$. Since
$\mathrm{height}(L)=\mathrm{d}^{\mathrm{v}}(t,b)=k_{1}-1$, and the $i^{\prime}$-th row of $S(P)$
is empty, we have $\mathrm{height}_{\phi}(L)<\mathrm{d}^{\mathrm{v}}(t,b)$. This implies
that $\phi(t)$ is above $L$. But $S(P)$ has no 1-entries above $L$, a
contradiction.
Otherwise, $\phi(b)\in R$. Since $t$ is to the right of $b$, this implies that
$\phi(t)\in R$. But a similar argument as above shows that
$\mathrm{height}_{\phi}(R)<\mathrm{d}^{\mathrm{v}}(t,b)$, a contradiction.
Thus, $S(P)$ avoids $P$ and has a $P$-expandable row, implying that $S(P)$ is
a vertical witness of $P$. ∎
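To make the construction concrete, here is a hedged Python sketch of $S(P)$, an illustration under the `(row, col)` representation above. It assumes, as for the class $\mathcal{P}^{\prime}$, that columns $1$ and $k_{2}$ each contain a single 1-entry and that $\ell$ is above $r$:

```python
def construct_S(P):
    """Sketch of the Section 2 witness construction S(P)."""
    k2 = max(j for _, j in P)
    i_l = next(i for i, j in P if j == 1)    # row of the leftmost 1-entry l
    i_r = next(i for i, j in P if j == k2)   # row of the rightmost 1-entry r
    assert i_l < i_r                          # P lies in the class P'
    shift = i_r - i_l
    # L: P without its rightmost column; its row i_r is empty.
    L = {(i, j) for i, j in P if j < k2}
    # R: P without its leftmost column, shifted down so that its empty
    # row i_l lands on row i_r, and placed to the right of L.
    R = {(i + shift, (j - 1) + (k2 - 1)) for i, j in P if j > 1}
    return L | R   # row i_r of the result is empty (the expandable row)
```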
## 3 Spanning oscillations starting with $t$
In this section, we prove that each permutation matrix that has a wide spanning oscillation starting with $t_{P}$ admits a vertical witness.
In Section 3.1 we present a construction of (possible) witnesses, which we
first use for the case $m=5$ in Section 3.2, and then for the case $m\geq 5$
in Section 3.3.
### 3.1 Witness construction
Let $P$ be a $k\times k$ permutation matrix such that $\ell=\ell_{P}$ is
above $r=r_{P}$, and let $q=(i_{q},j_{q})\in E(P)$, such that $q$ is above
$\ell$. We first construct a matrix $S^{\prime}(P,q)$ with a $P$-expandable
row, and then modify $S^{\prime}(P,q)$ to obtain the matrix $S(P,q)$, which
retains the expandable row and will be shown to avoid $P$ if $P$ has a wide
spanning oscillation $(t_{P},\ell_{P},x_{3},x_{4},\dots,x_{m})$ with $m\geq 5$
and we choose $q=x_{3}$.
Let $P_{\mathrm{R}}$ ($P_{\mathrm{L}}$) be the submatrix of $P$ obtained by
removing the leftmost (rightmost) column. Both $P_{\mathrm{R}}$ and
$P_{\mathrm{L}}$ have an empty row. To start the construction of
$S^{\prime}(P,q)$, we place a copy of $P_{\mathrm{R}}$ to the _left_ of a copy
of $P_{\mathrm{L}}$, such that the two copies do not intersect, and the empty
rows are aligned. We denote the copy of $P_{\mathrm{R}}$ in the construction
with $R$ and the copy of $P_{\mathrm{L}}$ with $L$. Note that, compared to the
construction in Section 2, $L$ and $R$ switch places.
Let $P_{\mathrm{L}}^{\prime}$ consist of all columns to the left of $q$, and
$P_{\mathrm{R}}^{\prime}$ consist of all columns to the right of $q$. To
finish the construction of $S^{\prime}(P,q)$, we place a copy of
$P_{\mathrm{L}}^{\prime}$ to the left of $R$ and a copy of
$P_{\mathrm{R}}^{\prime}$ to the right of $L$, such that the empty $i_{q}$-th
rows of $P_{\mathrm{L}}^{\prime}$ and $P_{\mathrm{R}}^{\prime}$ are aligned
with the empty row in $R$ and $L$. Denote the copies of
$P_{\mathrm{L}}^{\prime}$ and $P_{\mathrm{R}}^{\prime}$ as $L^{\prime}$ and
$R^{\prime}$ and let $P^{\prime}=L^{\prime}\cup R^{\prime}$.
Clearly, the empty row in $S^{\prime}(P,q)$ is expandable: Adding a 1-entry to
the left of $R$ will complete the partial occurrence $R$ of $P$, adding a
1-entry to the right of $L$ will complete $L$, and adding a 1-entry inside $R$
or $L$ will complete $P^{\prime}$.
We modify $S^{\prime}(P,q)$ to obtain $S(P,q)$ as follows. (This modification
resembles the principle of Geneson’s construction [Gen20].) Let $B$ be the set
of entries in $P^{\prime}=L^{\prime}\cup R^{\prime}$ that are below the
leftmost 1-entry in $P^{\prime}$ (the copy of $\ell$ in $P^{\prime}$). Move
$B$ down by a fixed number of rows, such that each 1-entry in $B$ is lower
than all 1-entries in $R\cup L$. Clearly, the expandable row stays expandable
after this change.
Figure 5 sketches the constructions. In the following sections, we denote the
1-entries in $S(P,q)$ as follows. If $x$ is a 1-entry in $P$, then let
$x^{\mathrm{R}}$ be the copy of $x$ in $R$, let $x^{\mathrm{L}}$ be the copy
of $x$ in $L$, and let $x^{\prime}$ be the copy of $x$ in $P^{\prime}$. For
subsets $X\subseteq E(P)$, we use $X^{\mathrm{R}}$, $X^{\mathrm{L}}$ and
$X^{\prime}$ similarly.
Figure 5: $P$ and the two witness constructions $S^{\prime}(P,q)$ and
$S(P,q)$. The expandable row and the distance between $R$, $L$ and
$P^{\prime}$ are exaggerated.
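The following Python sketch (again an illustration, with one concrete choice of row offsets realizing the required alignment) assembles $S^{\prime}(P,q)$ and then performs the modification yielding $S(P,q)$; $P$ is a $k\times k$ permutation matrix given as `(row, col)` pairs and $q=(i_{q},j_{q})$ is a 1-entry above $\ell$:

```python
def construct_S_q(P, q):
    """Sketch of the Section 3.1 construction S(P, q)."""
    k = max(i for i, _ in P)
    i_l = next(i for i, j in P if j == 1)   # row of the leftmost 1-entry l
    i_r = next(i for i, j in P if j == k)   # row of the rightmost 1-entry r
    i_q, j_q = q
    e = i_r                                  # the common empty (expandable) row
    # Column blocks from left to right: L', R, L, R'
    # (R and L switch places compared to Section 2).
    Lp = {(i + e - i_q, j) for i, j in P if j < j_q}
    R = {(i + e - i_l, (j - 1) + (j_q - 1)) for i, j in P if j > 1}
    L = {(i, j + (j_q - 1) + (k - 1)) for i, j in P if j < k}
    Rp = {(i + e - i_q, (j - j_q) + (j_q - 1) + 2 * (k - 1)) for i, j in P if j > j_q}
    # Modification: push the part B of P' = L' u R' that lies below the copy
    # of l down, so that it ends up below every 1-entry of R u L.
    row_l = i_l + e - i_q                    # row of the copy of l in L'
    depth = max(i for i, _ in R | L)         # lowest row used by R and L
    S = R | L
    for i, j in Lp | Rp:
        S.add((i + depth, j) if i > row_l else (i, j))
    return S
```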
We now show a property of $S(P,q)$ that is useful in both of the following
subsections.
###### Lemma 3.1.
Let $P$ be a $k\times k$ permutation matrix and $q\in E(P)$ such that
$q<_{\mathrm{v}}\ell<_{\mathrm{v}}r$ and $t$ is to the left of $b$. If $\phi$
is an embedding of $P$ in $S(P,q)$, then $\phi(t)\notin L^{\prime}$ and
$\phi(b)\in R^{\prime}$.
###### Proof.
Let $L^{\prime}_{2}$ denote the portion of $L^{\prime}$ below $\ell^{\prime}$,
and let $L^{\prime}_{1}=L^{\prime}\setminus L^{\prime}_{2}$.
We first show that $\phi(t)\notin L^{\prime}$. Suppose $\phi(t)\in
L^{\prime}$. Then also $\phi(\ell)\in L^{\prime}$. Since
$\mathrm{height}(L^{\prime}_{2})<\mathrm{d}^{\mathrm{v}}(\ell,b)$, and there are no
nonempty rows below $L^{\prime}_{2}$, we know that $\phi(\ell)\notin
L^{\prime}_{2}$, and thus $\phi(t),\phi(\ell)\in L^{\prime}_{1}$. But
$\mathrm{height}_{\phi}(L^{\prime}_{1})\leq\mathrm{d}^{\mathrm{v}}(t,\ell)-1$, a
contradiction.
$\phi(t)\notin L^{\prime}$ already shows that $\phi(b)\notin L^{\prime}$,
since $b$ is to the right of $t$. It remains to show that $\phi(b)\notin R\cup
L$. First, suppose that $\phi(b)\in L$. Then there are at most $k-2$ nonempty
rows above $\phi(b)$, but $\mathrm{d}^{\mathrm{v}}(t,b)=k-1$, a contradiction.
Second, suppose that $\phi(b)\in R$. Then $\phi(t)\in L^{\prime}\cup R$,
because $t$ is to the left of $b$. Since $q$ is above $\ell$, we have
$\mathrm{d}^{\mathrm{v}}(t,q)<\mathrm{d}^{\mathrm{v}}(t,\ell)$, so $t^{\mathrm{R}}$ is
above $t^{\prime}$, and thus $t^{\mathrm{R}}$ is the highest 1-entry in $L^{\prime}\cup
R$. But then
$\mathrm{d}^{\mathrm{v}}_{\phi}(\phi(t),\phi(b))\leq\mathrm{d}^{\mathrm{v}}_{\phi}(t^{\mathrm{R}},b^{\mathrm{R}})\leq\mathrm{d}^{\mathrm{v}}(t,b)-1$,
a contradiction. ∎
### 3.2 Length-5 spanning oscillations
Figure 6: $P$ and $S(P,q)$ in the case of Lemma 3.2.
###### Lemma 3.2.
Let $P$ be a permutation matrix with a spanning oscillation
$X=(t_{P},x_{2},x_{3},x_{4},x_{5})$. Then $S(P,x_{3})$ avoids $P$.
###### Proof.
Let $q=x_{3}$, and write $\ell,t,b,r$ for $\ell_{P},t_{P},b_{P},r_{P}$. Note
that $x_{2}=\ell$ and $x_{4}=b$, so $q$ is above $\ell$ and to the right of
$b$. Figure 6 sketches $P$ and $S(P,q)$. Suppose $\phi$ is an embedding of $P$
into $S(P,q)$. By Lemma 3.1, $\phi(b)\in R^{\prime}$. But
$\mathrm{width}(R^{\prime})=\mathrm{d}^{\mathrm{h}}(q,r)-1<\mathrm{d}^{\mathrm{h}}(b,r)$,
a contradiction. ∎
### 3.3 Longer spanning oscillations
We now consider the case where $P$ has a wide spanning oscillation
$(t_{P},x_{2},\dots,x_{m})$ of length greater than five. We first prove a
general statement on spanning oscillations starting with $t_{P}$.
###### Lemma 3.3.
Let $P$ be a permutation matrix and $X=(t_{P},x_{2},\dots,x_{m})$ be a
spanning oscillation of $P$ with $m\geq 6$. Then, removing $t=t_{P}$, the
columns to the left of $t$, and the rows above $x_{3}$ (as well as all newly
created empty rows and columns) does not make $P$ decomposable.
###### Proof.
Suppose it does, and let $P_{0}$ be the resulting decomposable pattern. Since
$x_{3}$ is the highest 1-entry in $P_{0}$ (slightly abusing notation), and
$x_{3}$ is above $r=r_{P}$ and to the left of $b=b_{P}$, we know that $P_{0}$
has the form $\left(\begin{smallmatrix}A&\mathbf{0}\\ \mathbf{0}&B\end{smallmatrix}\right)$, where $x_{3}$ lies in $A$ and $r$, $b$
lie in $B$. This means that $x_{4}$ lies in $A$, since
$t<_{\mathrm{h}}x_{4}<_{\mathrm{h}}x_{3}$. Let $P_{1}$ be the matrix obtained
from $P_{0}$ by further removing all columns to the right of $x_{4}$. Clearly,
$P_{1}$ is decomposable, but $(x_{3},x_{4},\dots,x_{m})$ is a spanning
oscillation of $P_{1}$, a contradiction. ∎
We are now ready to prove the main result of this subsection.
Figure 7: $P$ and $S(P,q)$ in the case of Lemma 3.4.
###### Lemma 3.4.
Let $X=(t_{P},x_{2},\dots,x_{m})$ be a wide spanning oscillation of $P$ with
$m\geq 6$. Then $P$ has a vertical witness.
###### Proof.
We write $\ell,t,b,r$ for $\ell_{P},t_{P},b_{P},r_{P}$ in the following. Let
$q=x_{3}$, and let $P_{0}$ be the set of 1-entries of $P$ that are to the
right of $t$ and not above $q$. By Lemma 3.3, $P_{0}$ does not correspond to a
decomposable pattern. Let $A$ denote the set of 1-entries to the right of $q$.
Note that $b,r\in A$, and, by wideness of $X$, all 1-entries in $A$ are below
$\ell$. Let $x$ be the highest 1-entry in $A$, and let $B$ be the set of
1-entries below $x$, to the left of $q$ and to the right of $t$. Then
$B\neq\emptyset$, otherwise $P_{0}$ would be decomposable. Finally,
$C=P_{0}\setminus(A\cup B)$ consists of the 1-entries to the right of $t$, not
above $q$, and above $x$. Figure 7 shows a sketch of $P$ and $S(P,q)$. Note
that $A^{\prime}=R^{\prime}$.
Suppose $\phi$ is an embedding of $P$ into $S(P,q)$. By Lemma 3.1, $\phi(b)\in
R^{\prime}$ and $\phi(t)\notin L^{\prime}$. Since all 1-entries in $B$ are to
the right of $t$, this implies $\phi(y)\notin L^{\prime}$ for each $y\in B$.
Moreover,
$\mathrm{width}(R^{\prime})=\mathrm{d}^{\mathrm{h}}(q,r)-1<\mathrm{d}^{\mathrm{h}}(y,r)$
for each $y\in B$, so we have $\phi(B)\subseteq L\cup R$.
Let $L^{\prime}_{2}$ denote the portion of $L^{\prime}$ below $\ell^{\prime}$
and let $L^{\prime}_{1}=L^{\prime}\setminus L^{\prime}_{2}$. Note that
$L^{\prime}_{2}$ is below all 1-entries in $L\cup R$. Since all 1-entries in
$C$ are above all 1-entries in $B$, and all 1-entries in $A$ are to the right
of all 1-entries in $B$, we have $\phi(P_{0})=\phi(A\cup B\cup C)\subseteq
L^{\prime}_{1}\cup L\cup R\cup R^{\prime}$. Since $R^{\prime}=A^{\prime}$, all
1-entries in $R^{\prime}$ are to the right and below all 1-entries in
$L^{\prime}_{1}\cup L\cup R$, so $L^{\prime}_{1}\cup L\cup R\cup R^{\prime}$
can be decomposed into the two blocks $L^{\prime}_{1}\cup L\cup R$ and
$R^{\prime}$. Further, $\phi(b)\in R^{\prime}$ by Lemma 3.1, and since
$\mathrm{height}(R^{\prime})<\mathrm{d}^{\mathrm{v}}(q,b)$, we have $\phi(q)\notin
R^{\prime}$. This means that $P_{0}$ is decomposable, a contradiction. ∎
## 4 Even-length spanning oscillations starting with $\ell$
In this section, we prove that each permutation matrix that has a tall spanning oscillation of even length starting with $\ell_{P}$ admits a vertical witness.
For our witness construction to work, we need to define a substructure that
generalizes (tall) spanning oscillations of even length that start with
$\ell$. We call that substructure a _traversal_. Defining our witness
construction for traversals instead of spanning oscillations will allow us to
make a maximality assumption that is required for the proof that the witness
avoids $P$.
### 4.1 Traversals
Let $P$ be a permutation matrix and let $m\geq 4$. A _traversal_ of $P$ is a
sequence $X$ of distinct 1-entries $x_{1},x_{2},\dots,x_{m}$ such that
1. (i)
$x_{1}=\ell_{P}$, $x_{2}=t_{P}$, $x_{m-1}=b_{P}$, $x_{m}=r_{P}$;
2. (ii)
$x_{1}<_{\mathrm{h}}x_{3}<_{\mathrm{h}}x_{2}<_{\mathrm{h}}x_{5}<_{\mathrm{h}}x_{4}<_{\mathrm{h}}\dots<_{\mathrm{h}}x_{m-1}<_{\mathrm{h}}x_{m-2}<_{\mathrm{h}}x_{m}$;
3. (iii)
$\ell_{P}<_{\mathrm{v}}x_{4}<_{\mathrm{v}}x_{6}<_{\mathrm{v}}\dots<_{\mathrm{v}}x_{m}$;
4. (iv)
$x_{3}<_{\mathrm{v}}x_{5}<_{\mathrm{v}}\dots<_{\mathrm{v}}x_{m-3}<_{\mathrm{v}}r_{P}$;
and
5. (v)
$x_{s}$ is below $x_{s+1}$ for each odd $s\in[m-1]$.
Figure 8: A tall traversal. The solid lines indicate possible positions for other 1-entries.
Intuitively, property (ii) keeps the horizontal order of its 1-entries fixed,
in the same way it is fixed in an even-length spanning oscillation starting
with $\ell_{P}$. Vertically, however, we allow to arrange the 1-entries more
freely. There are still _upper_ (even) and _lower_ (odd) 1-entries as in 1.10
(this is implied by (iii), (iv), (v)), and we keep the order within the upper,
resp. lower, 1-entries with (iii), (iv). But we drop the condition that
$x_{i}$ is above $x_{j}$ for each odd $i\leq m-3$ and even $j\geq i+3$. This
means that we are allowed to “move” some upper 1-entries upwards, and some
lower 1-entries downwards, as long as the vertical order among upper (lower)
1-entries is kept intact. (iii), (iv) additionally ensure that we cannot move
any 1-entries above $\ell_{P}$ or below $r_{P}$. Figure 8 shows the shortest
traversal that is not an oscillation.
We say a traversal $(x_{1},x_{2},\dots,x_{m})$ is _tall_ if it satisfies the
following two properties for each even $2\leq i\leq m-2$.
1. (vi)
$P$ has no 1-entry that is below $x_{i+1}$ and to the left of $x_{i}$.
2. (vii)
$P$ has no 1-entry that is above $x_{i}$ and to the right of $x_{i+1}$.
It is easy to see that each tall spanning oscillation of even length that
starts with $\ell$ is a tall traversal.
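A hedged Python sketch of these conditions (ours; same `(row, col)` conventions as before, with `X` the candidate sequence $x_{1},\dots,x_{m}$ and $P$ a permutation matrix, so the extremal 1-entries are unique):

```python
def is_traversal(P, X):
    m = len(X)
    x = [None] + list(X)                       # 1-indexed access to x_1, ..., x_m
    l = min(P, key=lambda p: p[1]); r = max(P, key=lambda p: p[1])
    t = min(P, key=lambda p: p[0]); b = max(P, key=lambda p: p[0])
    if m < 4 or (x[1], x[2], x[m - 1], x[m]) != (l, t, b, r):
        return False                           # property (i)
    chain = [x[1], x[3], x[2]]                 # property (ii): fixed horizontal order
    for s in range(5, m, 2):
        chain += [x[s], x[s - 1]]
    chain.append(x[m])
    if any(u[1] >= v[1] for u, v in zip(chain, chain[1:])):
        return False
    evens = [x[1]] + [x[s] for s in range(4, m + 1, 2)]   # property (iii)
    odds = [x[s] for s in range(3, m - 2, 2)] + [x[m]]    # property (iv)
    for seq in (evens, odds):
        if any(u[0] >= v[0] for u, v in zip(seq, seq[1:])):
            return False
    return all(x[s][0] > x[s + 1][0] for s in range(1, m, 2))  # property (v)

def is_tall(P, X):
    x = [None] + list(X)
    for i in range(2, len(X) - 1, 2):          # even i with 2 <= i <= m-2
        if any(p[0] > x[i + 1][0] and p[1] < x[i][1] for p in P):   # (vi)
            return False
        if any(p[0] < x[i][0] and p[1] > x[i + 1][1] for p in P):   # (vii)
            return False
    return is_traversal(P, X)
```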
### 4.2 Maximality assumption
Let $P$ be a permutation matrix with a tall traversal $X$. We can assume that
$X$ is maximal in the sense that no tall traversal of $P$ has $X$ as a proper
subsequence. We now show that such a _maximally tall_ traversal also cannot be
extended to a larger non-tall traversal in the following sense. Call a
traversal $(x_{1},x_{2},\dots,x_{m})$ _extendable_ if there is an odd $s$ with
$5\leq s\leq m-5$, and two 1-entries $y_{1},y_{2}$ in $P$ such that
$(x_{1},x_{2},\dots,x_{s},y_{1},y_{2},x_{s+1},\dots,x_{m})$ is a traversal of
$P$.
###### Lemma 4.1.
Let $X=(x_{1},x_{2},\dots,x_{m})$ be a maximally tall traversal of the
permutation matrix $P$. Then $X$ is non-extendable.
###### Proof.
Suppose $X$ is extendable. Then there exists an odd $s$ with $5\leq s\leq m-5$
and 1-entries $y_{1},y_{2}\in E(P)$ such that
$Y=(x_{1},x_{2},\dots,x_{s},y_{1},y_{2},x_{s+1},\dots,x_{m})$ is a traversal
of $P$. We show that then $P$ has a tall traversal of length $m+2$ with $X$
as a subsequence. This contradicts our assumption that $X$ is maximally tall.
Note that property (v) of $X$ implies that $x_{s+1}$ is above $x_{s}$. Further
using properties (ii), (iii), (iv) of $Y$, it follows that the relative
positions of $x_{s-1}$, $x_{s}$, $y_{1}$, $y_{2}$, $x_{s+1}$, and $x_{s+2}$
are fixed as shown in Figure 9.
Figure 9: Arrangement of $x_{s-1},x_{s},y_{1},y_{2},x_{s+1},x_{s+2}$ in Lemma 4.1. The shaded areas must be empty, since $X$ is tall.
Let $y_{1}^{\prime}$ and $y_{2}^{\prime}$ be 1-entries in $P$ such that
1. (a)
$y_{2}^{\prime}$ is to the left of $y_{1}^{\prime}$;
2. (b)
$y_{1}^{\prime}$ is above or equal to $y_{1}$ and $y_{2}^{\prime}$ is below or
equal to $y_{2}$; and
3. (c)
$\mathrm{d}^{\mathrm{v}}(y_{1}^{\prime},y_{2}^{\prime})$ is maximal under the
previous two conditions.
and let
$Y^{\prime}=(x_{1},x_{2},\dots,x_{s},y_{1}^{\prime},y_{2}^{\prime},x_{s+1},\dots,x_{m})$.
We first show that $Y^{\prime}$ is a traversal. $Y^{\prime}$ clearly satisfies
(i). Since $y_{1}^{\prime}$ is not below $y_{1}$, it is above $x_{s+1}$, so
tallness of $X$ implies that $y_{1}^{\prime}$ is to the left of $x_{s+2}$.
Symmetrically, $y_{2}^{\prime}$ is to the right of $x_{s-1}$, so (a) implies
$x_{s-1}<_{\mathrm{h}}y_{2}^{\prime}<_{\mathrm{h}}y_{1}^{\prime}<_{\mathrm{h}}x_{s+2}$,
and thus $Y^{\prime}$ satisfies (ii).
Since $x_{s-1}$ is to the left of $y_{1}^{\prime}$, tallness of $X$ implies
that $y_{1}^{\prime}$ is below $x_{s-1}$. We already observed that
$y_{1}^{\prime}$ is above $x_{s+1}$, so we have
$x_{s-1}<_{\mathrm{v}}y_{1}^{\prime}<_{\mathrm{v}}x_{s+1}$. Similarly, we have
$x_{s}<_{\mathrm{v}}y_{2}^{\prime}<_{\mathrm{v}}x_{s+2}$. Together with
$x_{s+1}<_{\mathrm{v}}x_{s}$, this implies the remaining traversal properties
(iii), (iv), (v).
It remains to show that $Y^{\prime}$ is tall. Suppose $Y^{\prime}$ violates tallness
property (vi). Since $X$ is tall, the only way this can happen is if there is
a 1-entry $z$ below $y_{2}^{\prime}$ and to the left of $y_{1}^{\prime}$. Then
$z$ is also below $y_{2}$, but
$\mathrm{d}^{\mathrm{v}}(y_{1}^{\prime},z)>\mathrm{d}^{\mathrm{v}}(y_{1}^{\prime},y_{2}^{\prime})$,
violating our assumption (c). A symmetric argument shows that $Y^{\prime}$ satisfies
(vii). ∎
### 4.3 Construction
Fix a $k\times k$ permutation matrix $P$. Throughout this subsection, we write
$\ell,b,t,r$ for $\ell_{P},b_{P},t_{P},r_{P}$. For a 1-entry $x=(i,j)\in
E(P)$, denote by $P^{\mathrm{L}}_{x}$ the submatrix of $P$ consisting of all
columns to the left of $x$ (i.e., the leftmost $j-1$ columns), and denote by
$P^{\mathrm{R}}_{x}$ the submatrix of $P$ consisting of all columns to the
right of $x$ (i.e., the rightmost $k-j$ columns). Note that in both
$P^{\mathrm{L}}_{x}$ and $P^{\mathrm{R}}_{x}$, the $i$-th row is empty. Also
note that the constructions in Sections 2 and 3 implicitly used
$P^{\mathrm{L}}_{x},P^{\mathrm{R}}_{x}$, with $x\in\{\ell,r,q\}$.
Let $X=(x_{1},x_{2},\dots,x_{m})$ be a traversal of $P$ with $m\geq 6$, and
write $(i_{s},j_{s})=x_{s}$ for $s\in[m]$. Then the $(2k-1)\times(m-2)k$
matrix $S^{\prime}(P,X)$ is constructed as follows. Let $L_{s}^{\prime}$ be the
$(2k-1)\times(j_{s}-1)$ matrix consisting of a copy of
$P^{\mathrm{L}}_{x_{s}}$ that is shifted down by $k-i_{s}$ rows (i.e., we
prepend $k-i_{s}$ rows and append $i_{s}-1$ rows to $P^{\mathrm{L}}_{x_{s}}$).
Similarly, let $R_{s}^{\prime}$ be the $(2k-1)\times(k-j_{s})$ matrix
consisting of a copy of $P^{\mathrm{R}}_{x_{s}}$ that is shifted down by
$k-i_{s}$ rows. Note that the empty $i_{s}$-th row of $P^{\mathrm{L}}_{x_{s}}$
(resp. $P^{\mathrm{R}}_{x_{s}}$) corresponds to the $k$-th row of
$L_{s}^{\prime}$ (resp. $R_{s}^{\prime}$). Finally, we define
$S^{\prime}(P,X)$ as the following horizontal concatenation of matrices:
$\displaystyle
S^{\prime}(P,X)=(L_{3}^{\prime},R_{1}^{\prime},L_{4}^{\prime},R_{3}^{\prime},L_{5}^{\prime},R_{4}^{\prime},\dots,L_{m-3}^{\prime},R_{m-4}^{\prime},L_{m-2}^{\prime},R_{m-3}^{\prime},L_{m}^{\prime},R_{m-2}^{\prime}).$
Figure 10: A matrix $P$ consisting of a 6-traversal $X$, and the corresponding
construction $S^{\prime}(P,X)$. Some empty columns in $S^{\prime}(P,X)$ have
been omitted. The dotted line indicates the expandable row.
Note the irregularities at the beginning and the end. Notably,
$L_{2}^{\prime},R_{2}^{\prime},L_{m-1}^{\prime},R_{m-1}^{\prime}$ are not used
in the construction. $L_{1}^{\prime}$ and $R_{m}^{\prime}$ are not used,
either, but they are empty anyway, since $x_{1}=\ell$ and $x_{m}=r$. See
Figure 10 for an example.
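For concreteness, here is a hedged Python sketch of $S^{\prime}(P,X)$ (an illustration only; it packs the blocks tightly, omitting the empty spacer columns, as in Figure 10):

```python
def construct_S_prime(P, X):
    """Sketch of S'(P, X): horizontal concatenation of the blocks
    L'_3, R'_1, L'_4, R'_3, L'_5, R'_4, ..., L'_{m-2}, R'_{m-3}, L'_m, R'_{m-2}."""
    k, m = max(i for i, _ in P), len(X)
    x = [None] + list(X)                       # 1-indexed access
    pairs = [(3, 1)] + [(s + 1, s) for s in range(3, m - 2)] + [(m, m - 2)]
    S, cols = set(), 0
    for sL, sR in pairs:
        iL, jL = x[sL]
        for i, j in P:                          # block L'_{sL}: columns left of x_{sL}
            if j < jL:
                S.add((i + (k - iL), j + cols))
        cols += jL - 1                          # L'_{sL} has j_{sL} - 1 columns
        iR, jR = x[sR]
        for i, j in P:                          # block R'_{sR}: columns right of x_{sR}
            if j > jR:
                S.add((i + (k - iR), (j - jR) + cols))
        cols += k - jR                          # R'_{sR} has k - j_{sR} columns
    return S                                    # row k is the expandable row
```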
We claim that the $k$-th row of $S^{\prime}(P,X)$ is expandable. Indeed, for
each $i$ with $3\leq i\leq m-2$, adding a 1-entry in the $k$-th row between
$L_{i}^{\prime}$ and $R_{i}^{\prime}$ will complete a copy of $P$ with
$L_{i}^{\prime}$ and $R_{i}^{\prime}$. Moreover, adding a 1-entry in the
$k$-th row to the left of $R_{1}^{\prime}$ or to the right of $L_{m}^{\prime}$
will complete a copy of $P$.
As in the previous section, we will not directly use $S^{\prime}(P,X)$, but
rather a modified construction that preserves the expandable row. In the
following, we will slightly abuse notation by writing $L_{s}^{\prime}$
($R_{s}^{\prime}$) for the subsets of $E(S^{\prime}(P,X))$ that correspond to
$L_{s}^{\prime}$ ($R_{s}^{\prime}$).
Let $S(P,X)$ be a $((2m-6)k+1)\times(m-2)k$ matrix, constructed as follows.
Start with a copy of $S^{\prime}(P,X)$, shifted down by $(m-4)k$ rows, so
that the expandable $k$-th row of $S^{\prime}(P,X)$ corresponds to the $(m-3)k$-th row
of $S(P,X)$. Now, for each $s\in\{5,6,\dots,m-3,m-2,m\}$, take all
1-entries in $L_{s}^{\prime}\cup R_{s}^{\prime}$ that are above the
$((m-3)k-1)$-th row (i.e., at least two rows above the expandable row), and
move them up by $(s-4)k$ rows. Similarly, for each
$s\in\{1,3,4,\dots,m-4\}$, take all 1-entries in $L_{s}^{\prime}\cup
R_{s}^{\prime}$ that are below the $((m-3)k+1)$-th row (i.e., at least two
rows below the expandable row), and move them down by $(m-s-3)k$ rows. Figure
11 shows the rough structure of $S(P,X)$ when $m=12$ and $X$ is tall.
Let $L_{s}$ ($R_{s}$) denote the modified set of entries in $S(P,X)$
corresponding to $L_{s}^{\prime}$ ($R_{s}^{\prime}$). Clearly, $L_{s}$ and
$R_{s}$ still form a partial occurrence of $P$ with a single 1-entry missing
between them in the $(m-3)k$-th row, and $R_{1}$, $L_{m}$ similarly form
occurrences when adding a 1-entry in the left- or rightmost part of that row.
Thus:
###### Lemma 4.2.
If $X$ is a traversal of $P$, then $S(P,X)$ has an expandable row.
Note that the construction used in Section 2 can be seen as a special case of
both $S(P,X)$ and $S^{\prime}(P,X)$ when $m=4$.
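The shifting step can be sketched as follows (again an informal illustration; `blocks` is a hypothetical bookkeeping structure mapping each used index $s$ to the `(row, col)` entries of $L_{s}^{\prime}\cup R_{s}^{\prime}$ inside $S^{\prime}(P,X)$, as a variant of `construct_S_prime` above could collect):

```python
def build_S_from_blocks(blocks, k, m):
    """Sketch of S(P, X): shift S'(P, X) down globally, then move the T-parts
    of late blocks up and the B-parts of early blocks down.
    blocks: dict over s in {1, 3, 4, ..., m-3, m-2, m}."""
    e = (m - 3) * k                        # the expandable row of S(P, X)
    S = set()
    for s, entries in blocks.items():
        for i, j in entries:
            i += (m - 4) * k               # global downward shift
            if s >= 5 and i < e - 1:       # at least two rows above the expandable row
                i -= (s - 4) * k
            elif s <= m - 4 and i > e + 1:  # at least two rows below it
                i += (m - s - 3) * k
            S.add((i, j))
    return S
```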
The rest of this section is dedicated to the proof that if $X$ is a non-
extendable tall traversal of a permutation matrix $P$, then $S(P,X)$ avoids
$P$, implying that $S(P,X)$ is a vertical witness of $P$. We first fix some
notation and make a few observations about $S(P,X)$. Let $T$ denote the set of
1-entries that are above row $(m-3)k-1$ (at least two rows above the
expandable row). Similarly, let $B$ denote the set of 1-entries that are below
row $(m-3)k+1$, and let $M$ denote the remaining 1-entries. For a subset
$A\subseteq E(S(P,X))$, let $A^{\mathrm{T}}=A\cap T$, let
$A^{\mathrm{B}}=A\cap B$ and let $A^{\mathrm{M}}=A\cap M$. For a 1-entry
$p\neq x_{s}$, let $p^{s}$ denote the copy of $p$ in $L_{s}\cup R_{s}$.
###### Observation 4.3.
Let $s,u\in\{1,3,4,\dots,m-3,m-2,m\}$ with $s<u$. If $u\geq 5$, then every
1-entry in $L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$ is below every 1-entry
in $L_{u}^{\mathrm{T}}\cup R_{u}^{\mathrm{T}}$. Moreover, if $s\leq m-4$, then
every 1-entry in $L_{s}^{\mathrm{B}}\cup R_{s}^{\mathrm{B}}$ is below every
1-entry in $L_{u}^{\mathrm{B}}\cup R_{u}^{\mathrm{B}}$.∎
Since $X$ is tall, there are no 1-entries below and to the left of $x_{s}$ if
$s$ is odd, or above and to the right of $x_{s}$ if $s$ is even. This implies:
###### Observation 4.4.
$L_{s}^{\mathrm{B}}=\emptyset$ and $R_{s+1}^{\mathrm{T}}=\emptyset$ for each
odd $s$ with $3\leq s\leq m-3$.∎
We now consider the width and height of relevant parts of $S(P,X)$.
###### Observation 4.5.
For each $s\in\{1,3,4,\dots,m-3,m-2,m\}$,
* •
$\mathrm{width}(L_{s})=\mathrm{d}^{\mathrm{h}}(\ell,x_{s})-1$;
* •
$\mathrm{width}(R_{s})=\mathrm{d}^{\mathrm{h}}(x_{s},r)-1$;
* •
$\mathrm{height}(L_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{T}})=\mathrm{d}^{\mathrm{v}}(t,x_{s})-2$, if
$L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}\neq\emptyset$;
* •
$\mathrm{height}_{\phi}(L_{s}^{\mathrm{M}}\cup R_{s}^{\mathrm{M}})\leq 1$; and
* •
$\mathrm{height}(L_{s}^{\mathrm{B}}\cup
R_{s}^{\mathrm{B}})=\mathrm{d}^{\mathrm{v}}(x_{s},b)-2$, if
$L_{s}^{\mathrm{B}}\cup R_{s}^{\mathrm{B}}\neq\emptyset$.∎
Let $3\leq s\leq m-3$ be odd. Since $X$ is tall, there are no 1-entries in $P$
above $x_{s-1}$ and to the right of $x_{s}$. Thus, $x_{s-1}^{s}$ is the
topmost 1-entry in $R_{s}$. Similarly, $x_{s+2}^{s+1}$ is the bottommost
1-entry in $L_{s+1}$. This implies the following improved bounds:
###### Observation 4.6.
For each odd $s\in\{3,4,\dots,m-2\}$:
* •
$\mathrm{height}(R_{s}^{\mathrm{T}})\leq\mathrm{d}^{\mathrm{v}}(x_{s-1},x_{s})-2$, if
$R_{s}^{\mathrm{T}}\neq\emptyset$; and
* •
$\mathrm{height}(L_{s+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},x_{s+2})-2$, if
$L_{s+1}^{\mathrm{B}}\neq\emptyset$.∎
Figure 11: A sketch of the block structure of $S(P,X)$ with $|X|=12$.
### 4.4 $S(P,X)$ avoids $P$
In this section, we show:
###### Lemma 4.7.
Let $P$ be a permutation matrix, $m\geq 6$ be even and let
$X=(x_{1},x_{2},\dots,x_{m})$ be a non-extendable tall traversal of $P$. Then
$S(P,X)$ avoids $P$.
Together with Lemmas 4.1 and 4.2, this implies Lemma 1.13. For the remainder
of this section, fix $P$ and $X$ as in Lemma 4.7, and write $\ell,b,t,r$ for
$\ell_{P},b_{P},t_{P},r_{P}$. We use the same notation for parts of $S(P,X)$
as defined in Section 4.3. Suppose $\phi$ is an embedding of $P$ into
$S(P,X)$. Our overall strategy is to distinguish cases based on the location
of $\phi(t)$, and derive a contradiction in each case.
Note that we make no further assumptions on $P,X,\phi$, so each lemma or
corollary in this section holds on its own for every choice of $P,X,\phi$ (we
only fix $P,X,\phi$ for brevity). This allows us to make use of the following
_symmetry_ argument. Note that $S(P,X)$ is usually not symmetric, in the sense
that its 180-degree rotation $\mathrm{rot}^{2}(S(P,X))$ is in general not equal to $S(P,X)$.
However, it is easy to see that $\mathrm{rot}^{2}(S(P,X))$ is equal to
$S(\mathrm{rot}^{2}(P),\mathrm{rot}^{2}(X))$. Now, in Lemma 4.8, for example,
we show that $\phi(t)\notin L_{3}$ for each choice of $P,X,\phi$, in
particular for $\mathrm{rot}^{2}(P),\mathrm{rot}^{2}(X)$ and every embedding
$\phi^{\prime}$ of $\mathrm{rot}^{2}(P)$ into $\mathrm{rot}^{2}(S(P,X))$.
Since $L_{3}$ in
$S(\mathrm{rot}^{2}(P),\mathrm{rot}^{2}(X))=\mathrm{rot}^{2}(S(P,X))$
corresponds to $R_{m-2}$ in $S(P,X)$, $t$ in $\mathrm{rot}^{2}(P)$ corresponds
to $b$ in $P$, and $\phi$ corresponds to some embedding $\phi^{\prime}$, we
also have $\phi(b)\notin R_{m-2}$.
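As a small illustration (ours, not the paper's notation), the rotation acts on the 1-entries of an $n_{1}\times n_{2}$ matrix as follows:

```python
def rot2(entries, n1, n2):
    """180-degree rotation of an n1 x n2 0-1 matrix given by its 1-entries."""
    return {(n1 + 1 - i, n2 + 1 - j) for i, j in entries}
```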
#### 4.4.1 $\phi(t)$ in the front or the back
We first consider some special cases, showing that $\phi(t)$ cannot lie in the
leftmost few “blocks” of $S(P,X)$, and symmetric statements for $\phi(b)$. The
proofs in this section also serve as a warm-up for what follows.
###### Lemma 4.8.
$\phi(t)\notin L_{3}$ and $\phi(b)\notin R_{m-2}$.
###### Proof.
By symmetry, it suffices to show $\phi(t)\notin L_{3}$. Suppose $\phi(t)\in
L_{3}$. Then also $\phi(\ell)\in L_{3}$, since $S(P,X)$ contains no 1-entries
to the left of $L_{3}$. But
$\mathrm{width}(L_{3})=\mathrm{d}^{\mathrm{h}}(\ell,x_{3})-1<\mathrm{d}^{\mathrm{h}}(\ell,t)-1$,
thus we cannot have both $\phi(\ell)$ and $\phi(t)$ in $L_{3}$, a
contradiction. ∎
###### Lemma 4.9.
$\phi(t)\notin R_{1}$ and $\phi(b)\notin L_{m}$.
###### Proof.
By symmetry, it suffices to show $\phi(t)\notin R_{1}$. Suppose $\phi(t)\in
R_{1}$. Note that $\mathrm{height}_{\phi}(R_{1}^{\mathrm{T}}\cup
R_{1}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(t,\ell)<\mathrm{d}^{\mathrm{v}}(t,x_{3})$,
so $\phi(x_{3})\in B$. Since $x_{3}$ is to the left of $t$ and
$L_{3}^{\mathrm{B}}=\emptyset$, we have $\phi(x_{3})\in R_{1}^{\mathrm{B}}$.
Since $r$ is below $x_{3}$, and all 1-entries in $S(P,X)$ that are to the
right of $R_{1}$ are above $R_{1}^{\mathrm{B}}$, we have $\phi(r)\in
R_{1}^{\mathrm{B}}$. Since $\mathrm{width}(R_{1})<\mathrm{d}^{\mathrm{h}}(\ell,r)$, this
implies that $\phi(\ell)$ is to the left of $R_{1}$. But since $\phi(t)\in R_{1}$
and the highest 1-entry $t^{1}$ of $R_{1}$ is at most
$\mathrm{d}^{\mathrm{v}}(t,\ell)$ rows above the expandable row, $\phi(\ell)$
must be below the expandable row. This is a contradiction, since there are no
1-entries to the left of $R_{1}$ and below the expandable row. ∎
If $m=6$ (see Figure 12), then the only remaining possibility is
$\phi(t),\phi(b)\in L_{4}\cup R_{3}$, which implies $\phi(t)\in L_{4}$ or
$\phi(b)\in R_{3}$ (since $t$ is to the left of $b$). Thus, the following
lemma concludes the case $m=6$.
Figure 12: A sketch of the block structure of $S(P,X)$ with $|X|=6$.
###### Lemma 4.10.
If $m=6$, then $\phi(t)\notin L_{4}$ and $\phi(b)\notin R_{3}$.
###### Proof.
By symmetry, it suffices to show $\phi(t)\notin L_{4}$. This can be done with
essentially the same argument as in the proof of Lemma 2.1. Suppose
$\phi(t)\in L_{4}$. Then $\phi(t)$ is not above $t^{4}\in L_{4}$. By Lemmas
4.8 and 4.9, $\phi(b)\in L_{4}\cup R_{3}$. The lowest 1-entry in $L_{4}\cup
R_{3}$ is $b^{4}$, so $\phi(b)$ is not below $b^{4}$. But
$\mathrm{d}^{\mathrm{v}}_{\phi}(t^{4},b^{4})<\mathrm{d}^{\mathrm{v}}(t,b)$
(note the empty expandable row), a contradiction. ∎
We now continue with the case $m\geq 8$.
###### Lemma 4.11.
If $m\geq 8$, then $\phi(t)\notin L_{4}$ and $\phi(b)\notin R_{m-3}$.
###### Proof.
By symmetry, it suffices to show $\phi(t)\notin L_{4}$. Suppose $\phi(t)\in
L_{4}$. We have $\mathrm{height}_{\phi}(L_{4}^{\mathrm{T}}\cup
L_{4}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(t,x_{4})<\mathrm{d}^{\mathrm{v}}(t,x_{3})$,
implying $\phi(x_{3})\in B$. Since $x_{3}$ is to the left of $t$, this means that
$\phi(x_{3})\in R_{1}^{\mathrm{B}}\cup L_{4}^{\mathrm{B}}$. If $\phi(x_{3})\in
R_{1}^{\mathrm{B}}$, then also $\phi(r)\in R_{1}^{\mathrm{B}}$, but then
$\phi(r)$ is to the left of $\phi(t)$, which is impossible. Thus, we have
$\phi(x_{3})\in L_{4}^{\mathrm{B}}$.
This implies that $\phi(r)\in L_{4}^{\mathrm{B}}\cup R_{3}^{\mathrm{B}}\cup
R_{4}^{\mathrm{B}}$. As such, $\phi$ maps no 1-entry of $P$ to the right of
$R_{4}$, and thus $\phi$ maps no 1-entry into the rows below $M$ and above
$L_{4}^{\mathrm{B}}\cup R_{4}^{\mathrm{B}}$. If now $\phi(x_{3})$ is above or
equal to $x_{3}^{4}$, then
$\mathrm{d}^{\mathrm{v}}_{\phi}(\phi(t),\phi(x_{3}))\leq\mathrm{d}^{\mathrm{v}}_{\phi}(t^{4},x_{3}^{4})<\mathrm{d}^{\mathrm{v}}(t,x_{3})$
(note the empty expandable row), a contradiction. Thus, $\phi(x_{3})$ must be
below $x_{3}^{4}$. By tallness of $X$, this also implies that $\phi(x_{3})$ is to
the right of $x_{3}^{4}$, and $\phi(t)$ is to the right of $t^{4}$, since $t$
is to the right of $x_{3}$ and
$\mathrm{d}^{\mathrm{h}}(t^{4},x_{3}^{4})=\mathrm{d}^{\mathrm{h}}(t,x_{3})$.
Similarly, $\phi(x_{4})$ is to the right of $L_{4}$.
Consider now $\phi(x_{5})$. Since $\phi(x_{3})$ is below $x_{3}^{4}$ and
$x_{5}^{4}$ is the bottommost 1-entry in $L_{4}^{\mathrm{B}}$, we know that
$\phi(x_{5})$ is below $L_{4}^{\mathrm{B}}$. This implies $\phi(x_{5})\in
R_{3}^{\mathrm{B}}\cup R_{4}^{\mathrm{B}}$. Suppose $\phi(x_{5})\in
R_{4}^{\mathrm{B}}$. Then also $\phi(r)\in R_{4}^{\mathrm{B}}$. But
$\mathrm{width}(R_{4})=\mathrm{d}^{\mathrm{h}}(x_{4},r)-1<\mathrm{d}^{\mathrm{h}}(x_{5},r)$,
a contradiction. Thus, $\phi(x_{5})\in R_{3}^{\mathrm{B}}$. This implies
$\phi(b),\phi(r)\in R_{3}^{\mathrm{B}}$, and $\phi(x_{4})\in R_{3}$.
Recall that $\phi(t)\in L_{4}\setminus\{t^{4}\}$, so $\phi(t)$ is below $t^{4}$.
As $\mathrm{height}_{\phi}(L_{4}^{\mathrm{T}}\cup M)\leq\mathrm{d}^{\mathrm{v}}(t,x_{4})$,
this means that $\phi(x_{4})\in R_{3}^{\mathrm{B}}$. But
$\mathrm{height}(R_{3}^{\mathrm{B}})=\mathrm{d}^{\mathrm{v}}(x_{3},b)-2<\mathrm{d}^{\mathrm{v}}(x_{4},b)$,
a contradiction. ∎
###### Lemma 4.12.
Let $m\geq 8$. If $\phi(t)\in R_{3}$, then $\phi(b)$ is to the right of
$R_{4}$. Moreover, if $\phi(b)\in L_{m-2}$, then $\phi(t)$ is to the left of
$L_{m-3}$.
###### Proof.
By symmetry, proving the first statement suffices. Let $\phi(t)\in R_{3}$ and
suppose $\phi(b)$ is not to the right of $R_{4}$. The portion of $R_{3}$ above
the expandable row has height at most $\mathrm{d}^{\mathrm{v}}(t,x_{3})-1$, so
$\phi(x_{3})$ must lie below the expandable row. Let $q_{3}$ be the 1-entry
directly below $x_{3}$ in $P$. Clearly, $\phi(q_{3}),\phi(b),\phi(r)\in B$,
and since $\phi(b)$ is not to the right of $R_{4}$, we have $\phi(b)\in
R_{3}^{\mathrm{B}}\cup R_{4}^{\mathrm{B}}$. We separately consider three
cases.
1. _Case_ 1:
$\phi(r)\in R_{3}^{\mathrm{B}}$. Since $X$ is tall, $q_{3}$ is to the right of
$t$, so $\phi(q_{3})\in R_{3}^{\mathrm{B}}$. But
$\mathrm{height}(R_{3}^{\mathrm{B}})=\mathrm{d}^{\mathrm{v}}(x_{3},b)-2=\mathrm{d}^{\mathrm{v}}(q_{3},b)-1$,
a contradiction.
2. _Case_ 2:
$\phi(r)\in R_{4}^{\mathrm{B}}$. Consider $x_{5}$. Since $x_{5}$ is below
$x_{3}$, we have $\phi(x_{5})\in B$. Since $x_{5}$ is to the right of $t$, and
above and to the left of $r$, we have $\phi(x_{5})\in R_{4}^{\mathrm{B}}$. But
$\mathrm{width}(R_{4})=\mathrm{d}^{\mathrm{h}}(x_{4},r)-1<\mathrm{d}^{\mathrm{h}}(x_{5},r)$,
a contradiction.
3. _Case_ 3:
$\phi(r)$ is to the right of $R_{4}$. Then $\phi(r)$ is also above
$L_{4}^{\mathrm{B}}\cup R_{4}^{\mathrm{B}}$. Consider again $x_{5}$. We know
that $\phi(x_{5})$ is below $M$ and above $L_{4}^{\mathrm{B}}\cup
R_{4}^{\mathrm{B}}$. Since $x_{5}$ is to the left of $b$, we also know that
$\phi(x_{5})$ is not to the right of $R_{4}$. But there are no such 1-entries
in $S(P,X)$.∎
We proceed with some more special cases, showing that $\phi(t)$ also cannot
lie in the rightmost few blocks of $S(P,X)$.
###### Lemma 4.13.
Let $m\geq 8$. Then, $\phi(t)$ lies to the left of $L_{m-2}$, and $\phi(b)$
lies to the right of $R_{3}$.
###### Proof.
By symmetry, it suffices to prove that $\phi(t)$ lies to the left of
$L_{m-2}$. If $\phi(b)$ lies to the left of $L_{m-2}$, then $\phi(t)$ does,
too. $\phi(b)\notin R_{m-3}\cup L_{m}\cup R_{m-2}$ by Lemmas 4.8, 4.9, and
4.11. The only remaining possibility is that $\phi(b)\in L_{m-2}$, where Lemma
4.12 implies that $\phi(t)$ lies to the left of $L_{m-3}$, and thus to the
left of $L_{m-2}$. ∎
To show that $\phi(t)\notin L_{m-3}\cup R_{m-4}$, we use the following more
general lemma, to be used in later sections. Figures 11 and 13 are useful to
visualize the proof.
###### Lemma 4.14.
Let $s$ be odd with $5\leq s\leq m-3$. If $\phi(t)\in L_{s}\cup R_{s-1}$, then
$\phi(b)$ lies to the right of $R_{s-1}$.
###### Proof.
Suppose not. Then, $\phi(b)\in L_{s}\cup R_{s-1}$.
1. _Case_ 1:
$\phi(\ell)\notin L_{s}\cup R_{s-1}$. Since $\ell$ is to the left of $t$, this
means that $\phi(\ell)$ is to the left of $L_{s}$. This implies that
$\phi(\ell)$ is also below $L_{s}^{\mathrm{T}}$, and thus $\phi(x_{4})$ is
below $L_{s}^{\mathrm{T}}$. Since $x_{4}$ is to the right of $t$, we have
$\phi(x_{4})\in M\cup B$, which implies $\phi(x_{5})\in B$, as
$\mathrm{height}_{\phi}(M)\leq 1<\mathrm{d}^{\mathrm{v}}(x_{4},x_{5})$. Since $x_{5}$ is
to the right of $t$ and to the left of $b$, we further know $\phi(x_{5})\in
R_{s-1}^{\mathrm{B}}$. Since $\mathrm{width}(R_{s-1})<\mathrm{d}^{\mathrm{h}}(x_{5},r)$,
this implies that $\phi(r)$ is to the right of $R_{s-1}$. But then $\phi(r)$
is above $\phi(x_{5})$, a contradiction.
2. _Case_ 2:
$\phi(\ell)\in L_{s}\cup R_{s-1}$. Then $\phi$ maps no 1-entry to the left of
$L_{s}$. It is easy to see that there must be some $y\in E(P)$ such that
$\phi(y)\in R_{s}^{\mathrm{T}}$, otherwise $P$ is decomposable (more
precisely, $P=\left(\begin{smallmatrix}A&\mathbf{0}\\ \mathbf{0}&B\end{smallmatrix}\right)$, where $\phi(E(A))\subseteq L_{s}$).
Since $\phi(b)\in L_{s}\cup R_{s-1}$, we know that $b$ is to the left of $y$.
Tallness of $X$ implies that $y$ is not above $x_{m-2}$. Now consider
$x_{s-1}$. We know $x_{s-1}\leq_{\mathrm{h}}x_{m-4}<_{\mathrm{h}}b$ and
$x_{s-1}\leq_{\mathrm{v}}x_{m-4}<_{\mathrm{v}}x_{m-2}\leq_{\mathrm{v}}y$.
Thus, $\phi(x_{s-1})\in L_{s}^{\mathrm{T}}$. But
$\mathrm{width}(L_{s}^{\mathrm{T}})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s-1})$,
a contradiction.∎
###### Corollary 4.15.
If $m\geq 8$, then $\phi(t)\notin L_{m-3}\cup R_{m-4}$ and $\phi(b)\notin
L_{5}\cup R_{4}$.
###### Proof.
By symmetry, it suffices to prove that $\phi(t)\notin L_{m-3}$. Suppose
$\phi(t)\in L_{m-3}$. By Lemmas 4.8, 4.9, 4.11, and 4.12, $\phi(b)$ cannot lie
in $L_{m-2}$ or further left. This contradicts Lemma 4.14. ∎
We now consolidate and reformulate the above results. For the more involved
proofs in Sections 4.4.2 and 4.4.3, it will be convenient to organize the
“middle” blocks $L_{i},R_{i}$ of $S(P,X)$ into two sets of _groups_ , as
follows. For each odd $s$ with $5\leq s\leq m-5$, let $G_{s}=L_{s}\cup
R_{s-1}\cup L_{s+1}\cup R_{s}$, and let $H_{s}=L_{s+1}\cup R_{s}\cup
L_{s+2}\cup R_{s+1}$. A sketch of $G_{s}$ and $H_{s}$ can be found in Figure
13. Combining Lemmas 4.8, 4.9, 4.11, 4.13, and 4.15 yields:
###### Corollary 4.16.
If $m\geq 8$, then:
* •
$\phi(t)$ lies to the right of $L_{4}$ and to the left of $L_{m-3}$. In other
words, $\phi(t)\in R_{3}$ or $\phi(t)\in G_{s}$ for some odd $s$ with $5\leq
s\leq m-5$; and
* •
$\phi(b)$ lies to the right of $R_{4}$ and to the left of $R_{m-3}$. In other
words, $\phi(b)\in L_{m-2}$ or $\phi(b)\in H_{s}$ for some odd $s$ with $5\leq
s\leq m-5$.
At this stage, we cannot easily show that both $\phi(t)\notin R_{3}$ and
$\phi(b)\notin L_{m-2}$, but we can show that at least one of the two must be
true.
###### Lemma 4.17.
If $m\geq 8$, then $\phi(t)\notin R_{3}$ or $\phi(b)\notin L_{m-2}$.
###### Proof.
Suppose $\phi(t)\in R_{3}$ and $\phi(b)\in L_{m-2}$. Since
$\mathrm{height}(R_{3}^{\mathrm{T}})<\mathrm{d}^{\mathrm{v}}(t,x_{3})$, we have
$\phi(x_{3})\in M$, implying $\phi(x_{5})\in B$. More precisely, since $\phi(b)\in
L_{m-2}$, we have $\phi(x_{5})\in L_{m-2}^{\mathrm{B}}\cup
R_{m-3}^{\mathrm{B}}\cup L_{m}^{\mathrm{B}}\cup R_{m-2}^{\mathrm{B}}$.
Similarly, $\phi(x_{m-2})\in T\cup M$, implying $\phi(x_{m-4})\in
L_{3}^{\mathrm{T}}\cup R_{1}^{\mathrm{T}}\cup L_{4}^{\mathrm{T}}\cup
R_{3}^{\mathrm{T}}$. In particular, $\phi(x_{5})$ is to the right of
$\phi(x_{m-4})$. But $x_{5}<_{\mathrm{h}}x_{4}\leq_{\mathrm{h}}x_{m-4}$, a
contradiction. ∎
Note that Corollary 4.16 and Lemma 4.17 completely resolve the case $m=8$.
In the following two subsections, we show that the remaining possibilities
also lead to a contradiction. In Section 4.4.2 we treat the easier case, where
$\phi(t)\in G_{s}$ for some odd $s$ with $5\leq s\leq m-5$, and $\phi(b)$ is
to the right of $R_{s+1}$ (i.e., to the right of $H_{s}$). This also handles
the symmetric case where $\phi(b)\in H_{s}$ and $\phi(t)$ is to the left of
$G_{s}$. In Section 4.4.3, we consider the case where $\phi(t)\in G_{s}$ and
$\phi(b)\in H_{s}$.
Figure 13: _(left)_ A sketch of parts of $S(P,X)$. Here, $s$ is odd and $5\leq
s\leq m-5$. The dashed lines and open rectangles indicate $M$ and the rest of
$S(P,X)$. _(right)_ Sketches of three not necessarily disjoint parts of $P$.
The solid lines illustrate tallness.
#### 4.4.2 $\phi(t)$, $\phi(b)$ in the middle and far from each other
The following lemma is central to this subsection.
###### Lemma 4.18.
For each odd $s$ with $5\leq s\leq m-5$, if $\phi(t)\in G_{s}$ and $\phi(b)$
is to the right of $R_{s+1}$, then $\phi(x_{s})$ is below the expandable row.
###### Proof.
We consider the following cases:
1. _Case_ 1:
$\phi(t)\notin L_{s}\cup L_{s+1}\cup R_{s}$. Then $\phi(t)\in R_{s-1}$, so
$\phi(t)$ is below the expandable row, implying the same for $\phi(x_{s})$.
2. _Case_ 2:
$\phi(\ell)\notin L_{s}\cup L_{s+1}\cup R_{s}$. Then $\phi(\ell)$ is below
$L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$. Since $x_{4}$ is to the right of
$t$ and below $\ell$, this implies that $\phi(x_{4})\in M\cup B$, and thus
$\phi(x_{s})$ is below the expandable row.
3. _Case_ 3:
$\phi(t),\phi(\ell)\in L_{s}\cup R_{s}$. Since $\phi$ does not map any 1-entry
to a position below $L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$ and above $M$,
we have $\mathrm{height}_{\phi}(L_{s}^{\mathrm{T}}\cup L_{s}^{\mathrm{M}}\cup
R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(t,x_{s})$. Thus, $\phi(x_{s})$
is either below $M$ or in the bottom row of $M$, so $\phi(x_{s})$ is below the
expandable row.
4. _Case_ 4:
$\phi(t)\in L_{s+1}$ and $\phi(\ell)\in L_{s}$. Since $x_{4}$ is below $\ell$
and to the right of $t$, we have $\phi(x_{4})\in M\cup B$ or $\phi(x_{4})\in
R_{s}^{\mathrm{T}}$. In the former case, we are done, as above. Otherwise,
note that $\phi$ does not map any 1-entry of $P$ into a row below
$R_{s}^{\mathrm{T}}$ and above $M$, thus $\mathrm{height}_{\phi}(R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}})=\mathrm{d}^{\mathrm{v}}(x_{s-1},x_{s})\leq\mathrm{d}^{\mathrm{v}}(x_{4},x_{s})$.
This implies that $\phi(x_{s})$ is below the expandable row.
5. _Case_ 5:
$\phi(t),\phi(\ell)\in L_{s+1}$ and $\phi(x_{s})\notin L_{s+1}$. Since $x_{s}$
is to the right of $t$, this means that $\phi(x_{s})$ is to the right of
$L_{s+1}$. Suppose $\phi(x_{s})$ is above the expandable row. Tallness of $X$
implies that there are no 1-entries in $P$ that are below and to the left of
$x_{s}$, so $\phi$ maps no 1-entry to $L_{s+1}^{\mathrm{M}}\cup
L_{s+1}^{\mathrm{B}}$. But then $\phi$ maps every 1-entry of $P$ either to
$L_{s+1}^{\mathrm{T}}$ or to the left and below $L_{s+1}^{\mathrm{T}}$, and
both possibilities occur (e.g., with $t$ resp. $b$). This means that $P$ is
decomposable, a contradiction.
6. _Case_ 6:
$\phi(t),\phi(\ell),\phi(x_{s})\in L_{s+1}$. Suppose $\phi(x_{s})$ is above
the expandable row. Since
$\mathrm{height}(L_{s+1}^{\mathrm{T}})<\mathrm{d}^{\mathrm{v}}(t,x_{s})$, we know that
$\phi(x_{s})$ is below $L_{s+1}^{\mathrm{T}}$, so $\phi(x_{s})\in
L_{s+1}^{\mathrm{M}}$.
$x_{s+1}$ is above and to the right of $x_{s}$, implying that
$\phi(x_{s+1})\in L_{s+1}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$. Further,
$x_{s+2}$ is below $x_{s}$, so $\phi(x_{s+2})\in M\cup B$, and
$x_{s}<_{\mathrm{h}}x_{s+2}<_{\mathrm{h}}x_{s+1}$, so $\phi(x_{s+2})\in
L_{s+1}\cup R_{s}$. Since $\phi(b)$ is to the right of $R_{s+1}$, we know that
$\phi$ maps no 1-entry to $L_{s+1}^{\mathrm{B}}\cup R_{s}^{\mathrm{B}}$. Thus,
$\phi(x_{s+2})\in L_{s+1}^{\mathrm{M}}\cup R_{s}^{\mathrm{M}}$.
But now $\phi(x_{s}),\phi(x_{s+2})\in M$, so $\phi$ maps no further 1-entries
to $M$. Therefore, $\phi$ maps every 1-entry either to
$A=L_{s+1}^{\mathrm{T}}\cup L_{s+1}^{\mathrm{M}}\cup R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}}$, or below and to the right of $A$ (and $\phi(t)\in A$,
$\phi(b)\notin A$). This means $P$ is decomposable, a contradiction.∎
We first consider a simple special case.
###### Lemma 4.19.
If $\phi(t)\in G_{s}$ for some odd $s$ with $5\leq s\leq m-5$, then
$\phi(b)\notin L_{m-2}$.
Moreover, if $\phi(b)\in H_{s}$ for some odd $s$ with $5\leq s\leq m-5$, then
$\phi(t)\notin R_{3}$.
###### Proof.
By symmetry, it suffices to show the first statement. Suppose $\phi(b)\in
L_{m-2}$. Then, $\phi(b)$ is to the right of $R_{m-4}$, and thus to the right
of $R_{s+1}$, so Lemma 4.18 implies that $\phi(x_{s})$ is below the expandable
row. Since $x_{s}$ is to the left and above $b$, we have $\phi(x_{s})\in
L_{m-2}$. Since $x_{s}<_{\mathrm{h}}x_{s+1}<_{\mathrm{h}}b$, and $x_{s+1}$ is
below $t$, we have $\phi(x_{s+1})\in L_{m-2}^{\mathrm{M}}\cup
L_{m-2}^{\mathrm{B}}$. But $\mathrm{height}_{\phi}(L_{m-2}^{\mathrm{M}}\cup
L_{m-2}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{m-2},b)<\mathrm{d}^{\mathrm{v}}(x_{m-4},b)\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},b)$,
a contradiction. ∎
We proceed with the case that $\phi(t)\in G_{s}$ and $\phi(b)\in H_{u}$ for
some $5\leq s<u\leq m-5$.
###### Lemma 4.20.
Let $s,u$ be odd such that $5\leq s<u\leq m-5$. If $\phi(t)\in G_{s}$, then
$\phi(b)\notin H_{u}$.
###### Proof.
Suppose $\phi(b)\in H_{u}$. Note that this means that $\phi$ maps no 1-entry
to $G_{s}\cap B$ or $H_{u}\cap T$. We start by establishing a few facts about
$\phi(x_{s})$, $\phi(r)$, and $\phi(x_{u+1})$.
Lemma 4.18 implies that $\phi(x_{s})$ is below the expandable row. Since
$x_{u}$ is below $x_{s}$, we have $\phi(x_{u})\in B$.
We claim that $\phi(r)\in H_{u}$. Suppose not, then $\phi(r)$ must be to the
right of $H_{u}$, and thus above $L_{u+1}^{\mathrm{B}}\cup
R_{u+1}^{\mathrm{B}}$. Since $x_{u}$ is above $r$ and to the left of $b$, we
have that $\phi(x_{u})$ is above $L_{u+1}^{\mathrm{B}}\cup
R_{u+1}^{\mathrm{B}}$ and not to the right of $H_{u}$. But then
$\phi(x_{u})\in T\cup M$, contradicting our previous observation.
Further, $\phi(x_{u})\in B$ and $\phi(b),\phi(r)\in H_{u}$ imply
$\phi(x_{u})\in H_{u}$, and thus $\phi(x_{u-1})\in H_{u}$. This means that
$\phi(x_{u-1})\in M\cup B$ (as $\phi$ maps nothing to $H_{u}\cap T$), and thus
$\phi(x_{u+1})$ is below the expandable row.
We distinguish between the following cases:
1. _Case_ 1:
$\phi(b),\phi(r)\in L_{u+1}\cup R_{u+1}$. Note that $\phi$ does not map any
1-entries to the rows between $M$ and $L_{u+1}^{\mathrm{B}}\cup
R_{u+1}^{\mathrm{B}}$. Thus, $\mathrm{height}_{\phi}(L_{u+1}^{\mathrm{M}}\cup
L_{u+1}^{\mathrm{B}}\cup R_{u+1}^{\mathrm{M}}\cup
R_{u+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{u+1},b)$, contradicting
the fact that $\phi(x_{u+1})$ is below the expandable row.
2. _Case_ 2:
$\phi(b),\phi(r)\in R_{u}$. Since
$\mathrm{height}(R_{u}^{\mathrm{B}})<\mathrm{d}^{\mathrm{v}}(x_{u},b)<\mathrm{d}^{\mathrm{v}}(x_{u+1},b)$,
we have $\phi(x_{u+1})$ above $R_{u}^{\mathrm{B}}$.
Suppose first that $\phi(x_{u+1})\in L_{u+1}^{\mathrm{B}}\cup
L_{u+1}^{\mathrm{M}}$. Note that no 1-entry in $M$ is below $\phi(x_{u+1})$.
Thus, tallness of $X$ implies that $\phi$ maps no 1-entry to
$R_{u}^{\mathrm{M}}$. But then $\phi$ maps all 1-entries to
$R_{u}^{\mathrm{B}}$ or above and to the left of $R_{u}^{\mathrm{B}}$ (and
$\phi(b)\in R_{u}^{\mathrm{B}}$, $\phi(t)\notin R_{u}^{\mathrm{B}}$), so $P$
is decomposable, a contradiction.
Second, suppose that $\phi(x_{u+1})\in R_{u}^{\mathrm{M}}$. Since $x_{u-1}$ is
above $x_{u+1}$, this also implies that $\phi(x_{u-1})\in
L_{u+1}^{\mathrm{M}}\cup R_{u}^{\mathrm{M}}$. Note that $\phi$ maps no further
1-entries to $M$, and $\phi(x_{u-1}),\phi(x_{u+1})\in L_{u+1}\cup R_{u}$. But
this means that $\phi$ maps all 1-entries either to $L_{u+1}^{\mathrm{B}}\cup
L_{u+1}^{\mathrm{M}}\cup R_{u}^{\mathrm{B}}\cup R_{u}^{\mathrm{M}}$ or above
and to the left of that entry set, again contradicting that $P$ is
indecomposable.
3. _Case_ 3:
$\phi(b)\in R_{u}$ and $\phi(r)\in R_{u+1}$. Since $x_{u+2}$ is above $r$ and
to the left of $b$, we know that $\phi(x_{u+2})$ is above and not to the right
of $R_{u}^{\mathrm{B}}$. Since $\phi(x_{s})$ is below the expandable row and
$x_{u+2}$ is below $x_{s}$, we have $\phi(x_{u+2})\in B$, implying
$\phi(x_{u+2})\in L_{u+1}^{\mathrm{B}}$. Since $\phi(r)\in R_{u+1}$, we know
that $\phi$ maps no 1-entries into the rows between $M$ and
$L_{u+1}^{\mathrm{B}}\cup R_{u+1}^{\mathrm{B}}$. This implies that
$\mathrm{height}_{\phi}(L_{u+1}^{\mathrm{M}}\cup
L_{u+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{u+1},x_{u+2})$. But
$\phi(x_{u+1})$ is below the expandable row, a contradiction.∎
Lemmas 4.19 and 4.20 together with Corollary 4.16 and Lemma 4.17 imply:
###### Corollary 4.21.
There is some odd $s$ with $5\leq s\leq m-5$ such that $\phi(t)\in G_{s}$ and
$\phi(b)\in H_{s}$.
###### Proof.
Suppose first that $\phi(t)\in R_{3}$. Then Corollary 4.16 and Lemma 4.17 imply
that $\phi(b)\in H_{s}$ for some odd $s$ with $5\leq s\leq m-5$. But then
Lemma 4.19 implies that $\phi(t)\notin R_{3}$, a contradiction. A similar
argument shows that $\phi(b)\notin L_{m-2}$.
As such, there are odd $s,u$ with $5\leq s,u\leq m-5$ such that $\phi(t)\in
G_{s}$ and $\phi(b)\in H_{u}$. Clearly, $s\leq u$, and Lemma 4.20 implies that
$s\geq u$, so $s=u$. ∎
#### 4.4.3 $\phi(t)$, $\phi(b)$ in the middle and close to each other
In this subsection, we show that Corollary 4.21 also leads to a contradiction,
which shows that our assumption that $S(P,X)$ contains $P$ must have been
false. Figure 13 will be useful throughout this subsection. The next two
lemmas treat the case that $\phi(t)\in L_{s}$ (or, symmetrically, $\phi(b)\in
R_{s+1}$).
###### Lemma 4.22.
Let $s$ be odd and $5\leq s\leq m-5$. If $\phi(t)\in L_{s}$, then
$\phi(\ell)\in L_{s}^{\mathrm{T}}$. Moreover, if $\phi(b)\in R_{s+1}$, then
$\phi(r)\in R_{s+1}$.
###### Proof.
By symmetry, it suffices to prove the first statement. Suppose $\phi(t)\in
L_{s}$ and $\phi(\ell)\notin L_{s}^{\mathrm{T}}$. Then $\phi(\ell)$ lies below
$L_{s}^{\mathrm{T}}$, and not to the right of $L_{s}$. Since $x_{4}$ is to the
right of $t$ and below $\ell$, we have $\phi(x_{4})\in M\cup B$. This directly
implies that $\phi(x_{3})\in M\cup B$, since $x_{3}$ is below $x_{4}$.
Further, $x_{3}$ is to the left of $t$, and $\phi(b)$ is not to the left of
$\phi(t)\in L_{s}$. This implies $\phi(x_{3})\in M$, and thus $\phi(x_{4})\in M$.
Note that, since $\mathrm{height}_{\phi}(M)\leq 1$, no other 1-entries are mapped to $M$.
Clearly, $\phi(b)\in B$, and by Corollary 4.21, $\phi(b)\in H_{s}$. We now
consider the possible locations of $\phi(b)$.
1. _Case_ 1:
$\phi(b)\in L_{s+1}^{\mathrm{B}}\cup R_{s+1}^{\mathrm{B}}$. Then
$\phi(x_{s+1})$ is above $R_{s+1}^{\mathrm{B}}$ (because
$\mathrm{height}(L_{s+1}^{\mathrm{B}}\cup
R_{s+1}^{\mathrm{B}})=\mathrm{d}^{\mathrm{v}}(x_{s+1},b)-2$) and not to the
right of $R_{s+1}$ (since $x_{s+1}$ is to the left of $b$). Thus,
$\phi(x_{s+1})\in T\cup M$. But $x_{s+1}$ is below $x_{4}$, so $\phi(x_{s+1})\in B$,
a contradiction.
2. _Case_ 2:
$\phi(b)\in R_{s}^{\mathrm{B}}$. First, suppose that $\phi(x_{4})$ is to the
left of $R_{s}$. Then tallness of $X$ implies that $\phi$ maps no 1-entry to
$R_{s}^{\mathrm{T}}$. But then $\phi$ maps all 1-entries either to
$R_{s}^{\mathrm{B}}$ or to the left and above $R_{s}^{\mathrm{B}}$, and
$\phi(b)\in R_{s}^{\mathrm{B}}$, $\phi(t)\notin R_{s}^{\mathrm{B}}$,
contradicting the fact that $P$ is indecomposable.
Second, suppose that $\phi(x_{4})$ is not to the left of $R_{s}$. Since
$\phi(b)\in R_{s}$, and $x_{4}$ is to the left of $b$, we have $\phi(x_{4})\in
R_{s}$. Moreover, since $x_{4}<_{\mathrm{h}}x_{6}<_{\mathrm{h}}b$ and
$x_{4}<_{\mathrm{v}}x_{6}<_{\mathrm{v}}b$, we have $\phi(x_{6})\in
R_{s}^{\mathrm{B}}$. But
$\mathrm{height}(R_{s}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s},b)\leq\mathrm{d}^{\mathrm{v}}(x_{5},b)<\mathrm{d}^{\mathrm{v}}(x_{6},b)$,
a contradiction.∎
###### Lemma 4.23.
$\phi(t)\notin L_{s}$ and $\phi(b)\notin R_{s+1}$ for each odd $s$ with $5\leq
s\leq m-5$.
###### Proof.
By symmetry, it suffices to prove the first statement. Suppose $\phi(t)\in
L_{s}$. Then Lemma 4.22 implies that $\phi(\ell)\in L_{s}^{\mathrm{T}}$. This
means that $\phi$ does not map any 1-entry below $L_{s}^{\mathrm{T}}$ and
above $M$, so $\mathrm{height}_{\phi}(L_{s}^{\mathrm{T}}\cup
L_{s}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(t,x_{s})$, implying that
$\phi(x_{s})$ is below the expandable row and thus to the right of $L_{s}$.
We consider several possibilities for the location of $\phi(b)$ and $\phi(r)$.
Corollary 4.21 implies that $\phi(b)\in H_{s}$. Since $\phi(x_{s})$ is below
the expandable row, $\phi(x_{s+2}),\phi(b)\in B$, and thus
$\phi(x_{s+2}),\phi(b)\in H_{s}\cup B=L_{s+1}^{\mathrm{B}}\cup
R_{s}^{\mathrm{B}}\cup R_{s+1}^{\mathrm{B}}$. Since
$x_{s+2}\leq_{\mathrm{v}}x_{m-3}<_{\mathrm{v}}r$, this also implies
$\phi(r)\in H_{s}\cup B$. This means that $\phi$ does not map any 1-entry to
the rows between $M$ and $L_{s+1}^{\mathrm{B}}\cup R_{s+1}^{\mathrm{B}}$, so
$\mathrm{height}_{\phi}(L_{s+1}^{\mathrm{M}}\cup
L_{s+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},x_{s+2})$ and
$\mathrm{height}_{\phi}(R_{s+1}^{\mathrm{M}}\cup
R_{s+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},b)$.
1. _Case_ 1:
$\phi(b)\in L_{s+1}^{\mathrm{B}}$. Then $\phi(r)\in L_{s+1}^{\mathrm{B}}\cup
R_{s+1}^{\mathrm{B}}$. Since $\mathrm{height}_{\phi}(L_{s+1}^{\mathrm{M}}\cup
L_{s+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},x_{s+2})<\mathrm{d}^{\mathrm{v}}(x_{s+1},b)$,
we have $\phi(x_{s+1})\in T$. Since $x_{s+1}$ is to the left of $b$, we have
$\phi(x_{s+1})\in L_{s}^{\mathrm{T}}$. But
$\mathrm{width}(L_{s}^{\mathrm{T}})=\mathrm{d}^{\mathrm{h}}(\ell,x_{s})-1<\mathrm{d}^{\mathrm{h}}(\ell,x_{s+1})$,
a contradiction.
2. _Case_ 2:
$\phi(b)\in R_{s+1}^{\mathrm{B}}$. Then $\phi(r)\in R_{s+1}^{\mathrm{B}}$.
Since $\mathrm{height}_{\phi}(R_{s+1}^{\mathrm{M}}\cup
R_{s+1}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},b)$, we know that
$\phi(x_{s+1})$ is above the expandable row, and therefore to the left of
$R_{s+1}$.
Since $x_{s-1}$ is above $x_{s+1}$, we have $\phi(x_{s-1})\in T$, implying
$\phi(x_{s-1})\in L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$. Further,
$\mathrm{width}(L_{s}^{\mathrm{T}})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s-1})$, so
$\phi(x_{s-1})\in R_{s}^{\mathrm{T}}$.
Finally, since $\phi(x_{s+2})\in B$ and $x_{s+2}$ is to the left of $x_{s+1}$,
we have $\phi(x_{s+2})\in L_{s+1}^{\mathrm{B}}$. But then $\phi(x_{s+2})$ is
to the left of $\phi(x_{s-1})\in R_{s}^{\mathrm{T}}$, while $x_{s+2}$ is to
the right of $x_{s-1}$, a contradiction.
3. _Case_ 3:
$\phi(b),\phi(r)\in R_{s}^{\mathrm{B}}$. We consider the location of
$\phi(x_{s-1})$.
First suppose that $\phi(x_{s-1})\in R_{s}$. Let $q_{s}$ be the 1-entry of $P$
in the row below $x_{s}$. We have $\phi(q_{s})\in B$. Since $X$ is tall,
$q_{s}$ is to the right of $x_{s-1}$, thus $\phi(q_{s})\in
R_{s}^{\mathrm{B}}$. But
$\mathrm{height}(R_{s}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s},b)-2=\mathrm{d}^{\mathrm{v}}(q_{s},b)-1$,
a contradiction.
Second, suppose $\phi(x_{s-1})\in L_{s+1}$, and $\phi(x_{s-1})$ is below the
expandable row. By tallness of $X$, there are no 1-entries in $P$ that are
above and to the right of $x_{s-1}$, so $\phi$ does not map any 1-entry to
$R_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{M}}$. But then $\phi$ maps all
1-entries to $R_{s}^{\mathrm{B}}$ or above and to the left of
$R_{s}^{\mathrm{B}}$ (and $\phi(b)\in R_{s}^{\mathrm{B}}$, $\phi(t)\notin
R_{s}^{\mathrm{B}}$). Thus, $P$ is decomposable, a contradiction.
Third, suppose $\phi(x_{s-1})\in L_{s+1}$, and $\phi(x_{s-1})$ is above the
expandable row. Since $\phi(t)$ is below $L_{s+1}^{\mathrm{T}}$, we know that
$\phi(x_{s-1})$ must be mapped to the row directly above the expandable row.
Since $X$ is tall, $\phi$ does not map any 1-entry to $R_{s}^{\mathrm{T}}$.
Note that $L_{s}^{\mathrm{M}}$ consists of only one row, which is already
occupied by $\phi(x_{s-1})$, so $\phi$ also maps no 1-entry to
$L_{s}^{\mathrm{M}}$. But then $\phi$ maps all 1-entries either to
$L_{s}^{\mathrm{T}}$ or below and to the right of $L_{s}^{\mathrm{T}}$ (and
$\phi(t)\in L_{s}^{\mathrm{T}}$, $\phi(b)\notin L_{s}^{\mathrm{T}}$), so $P$
is decomposable, a contradiction.
Fourth, suppose $\phi(x_{s-1})\in R_{s-1}^{\mathrm{M}}$. Then $\phi(x_{s})\in
B$, because $x_{s}$ is below $x_{s-1}$. But $\phi(x_{s})$ also lies to the
left of $\phi(x_{s-1})$ and above $\phi(b)\in R_{s}$, a contradiction.
Finally, suppose $\phi(x_{s-1})\in L_{s}$. Since
$\mathrm{width}(L_{s})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s-1})$, this is impossible.
4. _Case_ 4:
$\phi(b)\in R_{s}^{\mathrm{B}}$ and $\phi(r)\in R_{s+1}^{\mathrm{B}}$. Then
$\phi(x_{s+2})$ is above and not to the right of $R_{s}^{\mathrm{B}}$.
Together with the fact that $\phi(x_{s+2})\in B$, this implies
$\phi(x_{s+2})\in L_{s+1}^{\mathrm{B}}$. Since
$\mathrm{height}_{\phi}(L_{s+1}^{\mathrm{B}}\cup
L_{s+1}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(x_{s+1},x_{s+2})$, we know
that $\phi(x_{s+1})$ is above the expandable row, and thus $\phi(x_{s-1})\in
T$, implying $\phi(x_{s-1})\in L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$.
Since $\mathrm{width}(L_{s}^{\mathrm{T}})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s-1})$, we have
$\phi(x_{s-1})\in R_{s}^{\mathrm{T}}$. Now $\phi(x_{s-1})\in
R_{s}^{\mathrm{T}}$ is to the right of $\phi(x_{s+2})\in
L_{s+1}^{\mathrm{B}}$, but $x_{s-1}$ is to the left of $x_{s+2}$, a
contradiction.∎
The next three lemmas deal with the case that $\phi(t)\in L_{s+1}$.
###### Lemma 4.24.
Let $5\leq s\leq m-5$. If $\phi(t)\in L_{s+1}$, then $\phi(b)\notin L_{s+1}$.
###### Proof.
Suppose $\phi(t),\phi(b)\in L_{s+1}$. First note that, since
$t<_{\mathrm{h}}x_{s+1}<_{\mathrm{h}}b$, we have $\phi(x_{s+1})\in L_{s+1}$.
Further, $\mathrm{height}(L_{s+1}^{\mathrm{T}})<\mathrm{d}^{\mathrm{v}}(t,x_{s+1})$ and
$\mathrm{height}(L_{s+1}^{\mathrm{B}})<\mathrm{d}^{\mathrm{v}}(x_{s+1},b)$ imply that
$\phi(x_{s+1})\in L_{s+1}^{\mathrm{M}}$.
Let $y\in E(P)$ be the 1-entry in the column directly left of $x_{s+1}$. Note
that $y^{s+1}$ is the rightmost 1-entry in $L_{s+1}$, and that
$\mathrm{d}^{\mathrm{h}}(t^{s+1},y^{s+1})=\mathrm{d}^{\mathrm{h}}(t,y)<\mathrm{d}^{\mathrm{h}}(t,b)$.
Since $\phi(b)$ is not to the right of $y^{s+1}$, this implies that $\phi(t)$
is to the left of $t^{s+1}$. By a similar argument, $\phi(x_{3})$ is to the
left of $x_{3}^{s+1}$.
Since $t^{s+1}$ is the topmost 1-entry in $L_{s+1}$, this also means that
$\phi(t)$ is below $t^{s+1}$. Further,
$\mathrm{d}^{\mathrm{v}}(t^{s+1},x_{3}^{s+1})=\mathrm{d}^{\mathrm{v}}(t,x_{3})\leq\mathrm{d}^{\mathrm{v}}(\phi(t),\phi(x_{3}))$,
so $\phi(x_{3})$ is below $x_{3}^{s+1}$.
Tallness of $X$ implies that $L_{s+1}$ contains no 1-entries to the left and
below $x_{3}^{s+1}$, so $\phi(x_{3})$ is to the left of $L_{s+1}$. Since
$\phi(b)\in L_{s+1}$, this means that $\phi(x_{3})\in T\cup M$.
Since $\phi(x_{3})$ is to the left of $L_{s+1}$, we also know that
$\phi(\ell)$ is to the left of $L_{s+1}$, implying that $\phi(\ell)$ is below
$L_{s+1}^{\mathrm{T}}$. Since $t<_{\mathrm{h}}x_{4}<_{\mathrm{h}}b$ and
$\ell<_{\mathrm{v}}x_{4}<_{\mathrm{v}}x_{3}$, we have $\phi(x_{4})\in
L_{s+1}^{\mathrm{M}}$. Since $x_{3}$ is below $x_{4}$, this also means that
$\phi(x_{3})\in M$. But now $\phi(x_{3}),\phi(x_{4}),\phi(x_{s+1})\in M$,
while $M$ consists of only two nonempty rows, a contradiction. ∎
###### Lemma 4.25.
Let $s$ be odd with $5\leq s\leq m-5$. If $\phi(t)\in L_{s+1}$ and $\phi(b)\in
R_{s}$, then $\phi(x_{s-1})\in L_{s+1}$ and $\phi(x_{s+2})\in R_{s}$.
###### Proof.
By symmetry, it suffices to show that $\phi(x_{s-1})\in L_{s+1}$. Suppose not.
Since $t<_{\mathrm{h}}x_{s-1}<_{\mathrm{h}}b$, this means that
$\phi(x_{s-1})\in R_{s}$. Let $q_{s}\in E(P)$ be the 1-entry of $P$ in the row
directly below $x_{s}$.
We claim that $\phi(x_{s})$ is below the expandable row, and thus
$\phi(q_{s})\in B$. If $\phi(x_{s-1})\in M\cup B$, then $\phi(x_{s})$ is below
the expandable row, since $x_{s}$ is below $x_{s-1}$. Otherwise,
$\phi(x_{s-1})\in R_{s}^{\mathrm{T}}$, which implies that $\phi(\ell)\in
L_{s}\cup L_{s+1}$, so $\phi$ maps no 1-entry into the rows between
$L_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{T}}$ and $M$. Thus,
$\mathrm{height}_{\phi}(R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}})\leq\mathrm{d}^{\mathrm{v}}(x_{s-1},x_{s})$, implying that
$\phi(x_{s})$ is below the expandable row. This proves the claim.
$\mathrm{height}(R_{s}^{\mathrm{B}})\leq\mathrm{d}^{\mathrm{v}}(x_{s},b)-2=\mathrm{d}^{\mathrm{v}}(q_{s},b)-1$
implies that $\phi(q_{s})\notin R_{s}^{\mathrm{B}}$. Since $X$ is tall,
$q_{s}$ is to the right of $x_{s-1}$ and thus $\phi(q_{s})$ is not to the left
of $R_{s}$. Since $\phi(q_{s})\in B\setminus R_{s}^{\mathrm{B}}$, this implies
that $\phi(q_{s})$ is to the right of $R_{s}$, so $\phi(r)$ is to the right of
$R_{s}$, and thus above $R_{s}^{\mathrm{B}}$.
Consider now $x_{s+2}$. First, $x_{s-1}<_{\mathrm{h}}x_{s+2}<_{\mathrm{h}}b$
implies that $\phi(x_{s+2})\in R_{s}$. Since $x_{s}$ is below the expandable
row, $\phi(x_{s+2})\in R_{s}^{\mathrm{B}}$. But then $\phi(x_{s+2})$ is below
$\phi(r)$, a contradiction. ∎
###### Lemma 4.26.
Let $s$ be odd with $5\leq s\leq m-5$. If $\phi(t)\in L_{s+1}$, then
$\phi(b)\in R_{s}$ and $\phi(E(P))\subseteq L_{s+1}\cup R_{s}$.
###### Proof.
Assume $\phi(t)\in L_{s+1}$. By Corollary 4.21, we have $\phi(b)\in H_{s}$.
Lemmas 4.23 and 4.24 imply that $\phi(b)\notin L_{s+1}\cup R_{s+1}$. If
$\phi(b)\in L_{s+2}$, then $\phi(b)$ is above the expandable row. But then
$\phi(r)$ is to the right of $R_{s}$, below $L_{s+2}^{\mathrm{T}}$ and in $T$,
which is impossible. The only remaining possibility is that $\phi(b)\in
R_{s}$.
To show $\phi(E(P))\subseteq L_{s+1}\cup R_{s}$, it is enough to prove that
$\phi(\ell)\in L_{s+1}$ and $\phi(r)\in R_{s}$, and by symmetry, we only have
to prove $\phi(\ell)\in L_{s+1}$. Suppose $\phi(\ell)\notin L_{s+1}$. Then
$\phi(\ell)$ is below $L_{s+1}^{\mathrm{T}}$. Since $x_{s-1}$ is below $\ell$
and $\phi(x_{s-1})\in L_{s+1}$ by Lemma 4.25, we have $\phi(x_{s-1})\in
L_{s+1}^{\mathrm{M}}\cup L_{s+1}^{\mathrm{B}}$.
Since
$\mathrm{height}(L_{s-1}^{\mathrm{B}})<\mathrm{d}^{\mathrm{v}}(x_{s+1},x_{s+2})<\mathrm{d}^{\mathrm{v}}(x_{s-1},r)$,
we know that $\phi(r)$ is below $L_{s+1}^{\mathrm{B}}$, so $\phi(r)\in
R_{s}^{\mathrm{B}}$. Moreover, tallness of $X$ implies that $\phi$ maps no
1-entry to $R_{s}^{\mathrm{T}}\cup R_{s}^{\mathrm{M}}$. But then $\phi$ maps
all 1-entries either to $R_{s}^{\mathrm{B}}$ or above and to the left of
$R_{s}^{\mathrm{B}}$ (and $\phi(b)\in R_{s}^{\mathrm{B}}$, $\phi(t)\notin
R_{s}^{\mathrm{B}}$), so $P$ is decomposable, a contradiction. ∎
###### Lemma 4.27.
$\phi(t)\notin L_{s+1}$ and $\phi(b)\notin R_{s}$ for each odd $s$ with $5\leq
s\leq m-5$.
###### Proof.
By symmetry, it suffices to show the first statement. Suppose $\phi(t)\in
L_{s+1}$. By Lemmas 4.25 and 4.26, we have $\phi(x_{s-1})\in L_{s+1}$ as well
as $\phi(x_{s+2}),\phi(b)\in R_{s}$ and $\phi(P)\subseteq L_{s+1}\cup R_{s}$.
Since $t<_{\mathrm{h}}x_{s}<_{\mathrm{h}}x_{s-1}$, we have $\phi(x_{s})\in
L_{s+1}$. Symmetrically, $\phi(x_{s+1})\in R_{s}$. Let $p_{s+1}\in E(P)$ be
the 1-entry of $P$ in the row directly above $x_{s+1}$, and let $q_{s}\in
E(P)$ be the 1-entry in the row directly below $x_{s}$. Since
$\mathrm{height}(L_{s+1}^{\mathrm{T}})=\mathrm{d}^{\mathrm{v}}(t,p_{s+1})-1$, we know that
$\phi(p_{s+1}),\phi(x_{s+1})$ are below $L_{s+1}^{\mathrm{T}}$, and,
symmetrically, $\phi(q_{s}),\phi(x_{s})$ are above $R_{s}^{\mathrm{B}}$. All
in all, we have $\phi(x_{s}),\phi(x_{s+1}),\phi(q_{s}),\phi(p_{s+1})\in
L_{s+1}^{\mathrm{M}}\cup L_{s+1}^{\mathrm{B}}\cup R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}}$.
Our strategy for the remainder of the proof is to find two 1-entries
$y_{1},y_{2}\in E(P)$ such that the sequence
$Y=x_{1},x_{2},\dots,x_{s},y_{1},y_{2},x_{s+1},\dots,x_{m}$ is a traversal of
$P$. For this, we have to show that
$x_{s-1}<_{\mathrm{h}}y_{2}<_{\mathrm{h}}y_{1}<_{\mathrm{h}}x_{s+2}$, as well
as $y_{1}<_{\mathrm{v}}x_{s+1}$, and $x_{s}<_{\mathrm{v}}y_{2}$ (note that
$x_{s-1}<_{\mathrm{v}}y_{1}$ and $y_{2}<_{\mathrm{v}}x_{s+2}$ then follow from
tallness of $X$). The existence of such a traversal implies that $X$ is
extendable, contradicting Lemma 4.1.
We consider two cases. First, assume that $q_{s}$ is to the left of $p_{s+1}$.
Then we simply choose $y_{1}=p_{s+1}$ and $y_{2}=q_{s}$. By definition,
$p_{s+1}$ is above $x_{s+1}$ and $q_{s}$ is below $x_{s}$. By assumption,
$q_{s}<_{\mathrm{h}}p_{s+1}$, and tallness implies that
$x_{s-1}<_{\mathrm{h}}q_{s}$ and $p_{s+1}<_{\mathrm{h}}x_{s+2}$.
Otherwise, $p_{s+1}$ is to the left of $q_{s}$. Then either $\phi(p_{s+1})\in
L_{s+1}$ or $\phi(q_{s})\in R_{s}$. By symmetry, we can assume the former,
which implies $\phi(p_{s+1})\in L_{s+1}^{\mathrm{M}}\cup
L_{s+1}^{\mathrm{B}}$. Since $\phi(x_{s+1})\in R_{s}^{\mathrm{T}}\cup
R_{s}^{\mathrm{M}}$ and $x_{s+1}$ is below $p_{s+1}$, we have
$\phi(p_{s+1}),\phi(x_{s+1})\in M$. More precisely,
$\phi(p_{s+1})=p_{s+1}^{s+1}\in L_{s+1}^{\mathrm{M}}$ and
$\phi(x_{s+1})=q_{s}^{s}\in R_{s}^{\mathrm{M}}$.
Since $\phi(\ell)\in L_{s+1}$, we know that $\phi$ maps no 1-entry into the
rows between $M$ and $L_{s+1}^{\mathrm{B}}$. Since $\phi$ also maps no 1-entry
into the expandable row, we have
$\mathrm{d}^{\mathrm{v}}_{\phi}(p_{s+1}^{s+1},x_{s}^{s+1})\leq\mathrm{d}^{\mathrm{v}}(p_{s+1},x_{s})-1$.
This implies that $\phi(x_{s})$ is below $x_{s}^{s+1}$.
Since $X$ is tall, all 1-entries in $L_{s+1}$ below $x_{s}^{s+1}$, including
$\phi(x_{s})$, must be to the right of $x_{s-1}^{s+1}$ (note that
$x_{s-1}^{s+1}\in L_{s+1}$). Since $x_{s-1}$ is above $x_{s+1}$ and
$\phi(x_{s+1})=q_{s}^{s}$, we have $\phi(x_{s-1})\in
L^{\mathrm{T}}_{s+1}\cup\{p_{s+1}^{s+1}\}$ (note that $x_{s-1}=p_{s+1}$ is
possible). Tallness of $X$ implies that
$\mathrm{width}(L_{s+1}^{\mathrm{T}}\cup\{p_{s+1}^{s+1}\})<\mathrm{d}^{\mathrm{h}}(\ell,x_{s+2})$,
so $\phi(x_{s-1})$ is to the left of $x_{s+2}^{s+1}$.
Putting everything together, we have
$x_{s-1}^{s+1}<_{\mathrm{h}}\phi(x_{s})<_{\mathrm{h}}\phi(x_{s-1})<_{\mathrm{h}}x_{s+2}^{s+1}$,
and $\phi(x_{s-1})\in L_{s+1}$ is above the expandable row, and $\phi(x_{s})$
is below $x_{s}^{s+1}$. Now choose $y_{1},y_{2}\in E(P)$ such that
$y_{1}^{s+1}=\phi(x_{s-1})$ and $y_{2}^{s+1}=\phi(x_{s})$. Then
$x_{s-1}<_{\mathrm{h}}y_{2}<_{\mathrm{h}}y_{1}<_{\mathrm{h}}x_{s+2}$, as well
as $y_{1}<_{\mathrm{v}}x_{s+1}$, and $x_{s}<_{\mathrm{v}}y_{2}$. This implies
that $Y$ is a traversal of $P$. ∎
The last remaining cases are now easy:
###### Lemma 4.28.
$\phi(t)\notin R_{s-1}\cup R_{s}$ and $\phi(b)\notin L_{s+1}\cup L_{s+2}$ for
each odd $s$ with $5\leq s\leq m-5$.
###### Proof.
By symmetry, it suffices to show the first statement. Suppose $\phi(t)\in
R_{s-1}\cup R_{s}$. By Corollary 4.21 and Lemmas 4.23 and 4.27, we have
$\phi(b)\in L_{s+1}\cup L_{s+2}$.
Suppose first that $\phi(t)\in R_{s-1}$. Then $\phi(t)$ is below the
expandable row, meaning that $\phi(\ell)$ is below $M$, but not to the right
of $R_{s-1}$. But $\phi(b)$ is above $R_{s-1}^{\mathrm{B}}$, implying that
$\phi(\ell)$ is also above $R_{s-1}^{\mathrm{B}}$, a contradiction.
Second, if $\phi(t)\in R_{s}$, then $\phi(b)\in L_{s+2}$, since $b$ is to the
right of $t$. A symmetric argument shows that $\phi(r)$ is above $M$ and below
$L_{s+2}^{\mathrm{T}}$, a contradiction. ∎
Lemmas 4.23, 4.27, and 4.28 imply that $\phi(t)\notin G_{s}$, contradicting
Corollary 4.21. As such, our assumption that $\phi$ is an embedding of $P$
into $S(P,X)$ must be false. This concludes the proof of Lemma 4.7.
## 5 Conclusion and open problems
We showed that each decomposable permutation matrix has bounded saturation
function, thereby completing the classification of saturation functions of
permutation matrices. Our proofs imply the upper bound $\mathrm{sat}(P,n)\leq
9k^{4}$ for an indecomposable $k\times k$ permutation matrix $P$ (note that
the largest witness $S(P,X)$ is not larger than $2k^{2}\times k^{2}$, and
Lemma 1.5 combines it with its 90-degree rotation, resulting in a
$3k^{2}\times 3k^{2}$ matrix). It would be interesting to improve this bound,
especially if a simpler construction for patterns satisfying the conditions of
Lemma 4.7 can be found. Note that for general patterns with bounded saturation
functions, no upper bound for $\mathrm{sat}(P,n)$ in terms of $P$ is known, as
noted by Fulek and Keszegh [FK20].
We also characterized a large class of non-permutation patterns with bounded
saturation function, including very dense matrices (Theorem 1.1). Still, a
full characterization of the saturation functions of all matrices remains out
of reach. Note that there are indecomposable patterns without spanning
oscillations, see, e.g., Figure 14. Thus, new techniques are likely required
to fully resolve this problem.
$\displaystyle\left(\begin{smallmatrix}\bullet&&&&\\ \bullet&&&&\\ \bullet&&&&\\ \bullet&&&&\\ \bullet&\bullet&\bullet&\bullet&\bullet\end{smallmatrix}\right)$

Figure 14: An indecomposable non-permutation matrix without a spanning oscillation.
Our results trivially imply that every permutation matrix with a vertical
witness also has a horizontal witness. It would be interesting to determine
whether this is true for arbitrary patterns.
It is also possible to consider the saturation functions of _sets_ of patterns.
If $\mathcal{P}$ is a set of patterns, let a matrix $M$ be
$\mathcal{P}$-saturating if $M$ avoids each $P\in\mathcal{P}$, and adding a
single 1-entry in $M$ creates an occurrence of some $P\in\mathcal{P}$ in $M$.
Let $\mathrm{sat}(\mathcal{P},n)$ be the minimum weight of
$\mathcal{P}$-saturating matrices. Our witnesses for $k\times k$
permutation matrices have size at most $3k^{2}\times 3k^{2}$, and thus avoid
all patterns with one side of length more than $3k^{2}$. Hence, if
$\mathcal{P}$ contains one permutation matrix and arbitrarily many much
larger patterns, our results imply that
$\mathrm{sat}(\mathcal{P},n)\in\mathcal{O}(1)$.
It would be interesting to determine the saturation functions for, say, all
pairs of permutation matrices of the same size. Gerbner, Nagy, Patkós, and
Vizer [GNPV21] observed that certain saturation problems for two-dimensional
posets can be reduced to saturation problems for sets of matrix patterns.
However, these sets usually contain both permutation matrices and non-
permutation matrices (of similar size).
##### Acknowledgments.
The author would like to thank Jesse Geneson and László Kozma for helpful
discussions and feedback.
## References
* [ARSz99] Noga Alon, Lajos Rónyai, and Tibor Szabó. Norm-graphs: Variations and applications. Journal of Combinatorial Theory, Series B, 76(2):280–290, 1999. 10.1006/jctb.1999.1906.
* [BC20] Richard A. Brualdi and Lei Cao. Pattern-avoiding (0,1)-matrices. arXiv e-prints, 2020, 2005.00379v2.
* [Ber20] Benjamin Aram Berendsohn. Matrix patterns with bounded saturation function. arXiv e-prints, 2020, 2012.14717v1.
* [BG91] Dan Bienstock and Ervin Györi. An extremal problem on sparse 0-1 matrices. SIAM Journal on Discrete Mathematics, 4(1):17–27, 1991. 10.1137/0404002.
* [BRV08] Robert Brignall, Nik Ruškuc, and Vincent Vatter. Simple permutations: Decidability and unavoidable substructures. Theoretical Computer Science, 391(1):150–163, 2008. 10.1016/j.tcs.2007.10.037.
* [CGK+15] Parinya Chalermsook, Mayank Goswami, Laszlo Kozma, Kurt Mehlhorn, and Thatchaphol Saranurak. Pattern-avoiding access in binary search trees. 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 410–423, 2015. 10.1109/FOCS.2015.32.
* [FH92] Zoltán Füredi and Péter Hajnal. Davenport–Schinzel theory of matrices. Discrete Mathematics, 103(3):233–251, 1992. 10.1016/0012-365X(92)90316-8.
* [FK20] Radoslav Fulek and Balázs Keszegh. Saturation problems about forbidden $0$-$1$ submatrices. arXiv e-prints, 2020, 2010.08256.
* [Ful09] Radoslav Fulek. Linear bound on extremal functions of some forbidden patterns in 0–1 matrices. Discrete Mathematics, 309(6):1736–1739, 2009. 10.1016/j.disc.2008.02.040.
* [Für90] Zoltán Füredi. The maximum number of unit distances in a convex n-gon. J. Comb. Theory, Ser. A, 55(2):316–320, 1990. 10.1016/0097-3165(90)90074-7.
* [Gen09] Jesse T. Geneson. Extremal functions of forbidden double permutation matrices. Journal of Combinatorial Theory, Series A, 116(7):1235–1244, 2009. 10.1016/j.jcta.2009.03.004.
* [Gen20] Jesse T. Geneson. Almost all permutation matrices have bounded saturation functions. arXiv e-prints, December 2020, 2012.14150.
* [GNPV21] Dániel Gerbner, Dániel T. Nagy, Balázs Patkós, and Máté Vizer. Forbidden subposet problems in the grid. arXiv e-prints, February 2021, 2102.08297.
* [Kesz09] Balázs Keszegh. On linear forbidden submatrices. Journal of Combinatorial Theory, Series A, 116(1):232–241, 2009. 10.1016/j.jcta.2008.05.006.
* [Kla00] Martin Klazar. The Füredi–Hajnal conjecture implies the Stanley–Wilf conjecture. In Formal power series and algebraic combinatorics, pages 250–255. Springer, 2000. 10.1007/978-3-662-04166-6_22.
* [Kla01] Martin Klazar. Enumerative and extremal combinatorics of a containment relation of partitions and hypergraphs. Habilitation thesis, 2001.
* [Mit87] Joseph S. B. Mitchell. Shortest rectilinear paths among obstacles. Technical report, Cornell University Operations Research and Industrial Engineering, 1987.
* [MT04] Adam Marcus and Gábor Tardos. Excluded permutation matrices and the Stanley–Wilf conjecture. Journal of Combinatorial Theory, Series A, 107(1):153–160, 2004. 10.1016/j.jcta.2004.04.002.
* [Pet10] Seth Pettie. Applications of forbidden 0–1 matrices to search tree and path compression-based data structures. In Proceedings of the 2010 Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1457–1467, 2010. 10.1137/1.9781611973075.118.
* [Pet11a] Seth Pettie. Degrees of nonlinearity in forbidden 0–1 matrix problems. Discrete Mathematics, 311(21):2396–2410, 2011. 10.1016/j.disc.2011.06.020.
* [Pet11b] Seth Pettie. Generalized Davenport–Schinzel sequences and their 0–1 matrix counterparts. J. Comb. Theory Ser. A, 118(6):1863–1895, 2011.
* [Pra73] Vaughan R. Pratt. Computing permutations with double-ended queues, parallel stacks and parallel queues. In Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, pages 268–277, New York, NY, USA, 1973. ACM. 10.1145/800125.804058.
* [PT06] János Pach and Gábor Tardos. Forbidden paths and cycles in ordered graphs and matrices. Israel Journal of Mathematics, 155(1):359–380, 2006.
* [Tar05] Gábor Tardos. On 0–1 matrices and small excluded submatrices. Journal of Combinatorial Theory, Series A, 111(2):266–288, 2005. 10.1016/j.jcta.2004.11.015.
* [Vat11] Vincent Vatter. Small permutation classes. Proceedings of the London Mathematical Society, 103(5):879–921, 2011. 10.1112/plms/pdr017.
* [Vat14] Vincent Vatter. Permutation classes. arXiv e-prints, 2014, 1409.5159v3.
## Appendix A Proof of Section 1.4
###### Proof.
Say $P$ is $q\times s$ and $M$ is $m\times n$. Suppose $P=(p_{i,j})_{i,j}$ is
contained in $M$. Then there are rows $r_{1}<r_{2}<\dots<r_{q}$ and columns
$c_{1}<c_{2}<\dots<c_{s}$ such that $p_{i,j}\leq m_{r_{i},c_{j}}$ for each
$i\in[q],j\in[s]$. Now simply define $\phi(i,j)=(r_{i},c_{j})$. Clearly,
$\phi(E(P))\subseteq E(M)$. Moreover, consider
$(i,j),(i^{\prime},j^{\prime})\in E(P)$. We have $i<i^{\prime}$ if and only if
$r_{i}<r_{i^{\prime}}$, and $j<j^{\prime}$ if and only if
$c_{j}<c_{j^{\prime}}$. Thus $\phi$ is an embedding of $P$ into $M$.
Now suppose $\phi\colon E(P)\rightarrow E(M)$ is an embedding of $P$ into $M$.
Note that $x,y\in E(P)$ are in the same row (resp. column) if and only if
$\phi(x),\phi(y)$ are in the same row (resp. column). Thus, $\phi(E(P))$
intersects exactly $q$ rows and $s$ columns. Let $r_{1}<r_{2}<\dots<r_{q}$ be
those rows and $c_{1}<c_{2}<\dots<c_{s}$ be those columns. We show that
$\phi(i,j)=(r_{i},c_{j})$ for each $(i,j)\in E(P)$. Let
$x_{1},x_{2},\dots,x_{q}\in E(P)$ such that $x_{i}$ is in the $i$-th row for
each $i\in[q]$, and let $r_{i}^{\prime}$ be the row of $M$ containing
$\phi(x_{i})$. Clearly $r_{1}^{\prime}\geq r_{1}$. By induction, we further
have $r^{\prime}_{i}\geq r_{i}$ for each $i\in[q]$. Similarly,
$r_{q}^{\prime}\leq r_{q}$, and, again by induction, $r^{\prime}_{i}\leq
r_{i}$ for each $i\in[q]$. This implies that $\phi(i,j)$ is in the $r_{i}$-th
row of $M$ for every $(i,j)\in E(P)$. An analogous argument shows that
$\phi(i,j)$ is in the $c_{j}$-th column of $M$.
Since $\phi$ is an embedding, we have $(r_{i},c_{j})=\phi(i,j)\in E(M)$ for
each $(i,j)\in E(P)$. Thus, $p_{i,j}\leq m_{r_{i},c_{j}}$ for each
$(i,j)\in[q]\times[s]$, so $P$ is contained in $M$. ∎
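For a small illustrative instance of this equivalence (our own example, not
taken from the text above), consider
$\displaystyle P=\left(\begin{smallmatrix}\bullet&\\ &\bullet\end{smallmatrix}\right)$
and
$\displaystyle M=\left(\begin{smallmatrix}&\bullet&\\ \bullet&&\\ &&\bullet\end{smallmatrix}\right)$.
Then $P$ is contained in $M$ via the rows $r_{1}=2<r_{2}=3$ and columns
$c_{1}=1<c_{2}=3$, and the corresponding embedding maps
$(1,1)\mapsto(2,1)$ and $(2,2)\mapsto(3,3)$.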
|
This paper tackles the problem of designing efficient binary-level verification for a
subset of information flow properties encompassing constant-time and
secret-erasure. These properties are crucial for cryptographic implementations, but
are generally not preserved by compilers.
Our proposal builds on relational symbolic execution enhanced with new
optimizations dedicated to information flow and binary-level analysis,
yielding a dramatic improvement over prior work based on symbolic execution.
We implement a prototype, Binsec/Rel, for bug-finding and bounded-verification
of constant-time and secret-erasure, and perform extensive experiments on a set
of 338 cryptographic implementations, demonstrating the benefits of our
approach. Using Binsec/Rel, we also automate two
prior manual studies on preservation of constant-time and secret-erasure by
compilers for a total of 4148 and 1156 binaries respectively.
Interestingly, our analysis highlights incorrect usages of volatile data
pointers for secret erasure and shows that scrubbing mechanisms based on
volatile function pointers can introduce additional register spilling
which might break secret-erasure. We also discovered that gcc -O0 and
backend passes of clang introduce violations of constant-time in
implementations that were previously deemed secure by a state-of-the-art
constant-time verification tool operating at LLVM level, showing the
importance of reasoning at binary-level.
§ INTRODUCTION
Safety properties <cit.>, such as the absence of buffer
overflows, have been extensively studied and numerous efficient tools have been
developed for their
verification <cit.>.
However, safety properties are properties of individual execution traces,
whereas many important security properties are expressed as sets of
traces—i.e., are hyperproperties <cit.>. In
particular, information flow properties, which regulate the leakage of
information from the secret inputs of a program to public outputs, relate two
execution traces—i.e., are 2-hypersafety properties.
Constant-time and secret-erasure are two examples of
2-hypersafety properties that are crucial in cryptographic implementations.
The constant-time programming discipline (CT) is a software-based countermeasure
to timing and microarchitectural attacks which requires the control flow and the
memory accesses of the program to be independent from the secret
input[Some versions of constant-time also require that the size of operands of
variable-time instructions (e.g. integer division) is independent from the secret input.].
Constant-time has been proven to protect against cache-based timing
attacks <cit.>
and is widely used to secure cryptographic implementations (e.g.
BearSSL <cit.>,
NaCL <cit.>,
HACL* <cit.>, etc).
Secret-erasure <cit.> (a.k.a. data scrubbing or safe
erasure) requires to clear secret data (e.g. secret keys) from the memory after
the execution of a critical function, for instance by zeroing the corresponding
memory. It ensures that secret data do not persist in memory longer than
necessary, protecting them against subsequent memory disclosure vulnerabilities.
These properties are generally not preserved by
compilers <cit.>.
For example, reasoning about constant-time requires to know whether the code
c=(x<y)-1 will be compiled to branchless code or not, but this
depends on the compiler version and
optimization <cit.>.
Similarly, scrubbing operations used for secret-erasure have no effect on the
result of the program and can therefore be optimized away by the
dead-store-elimination pass of the
compiler <cit.>,
as detailed in CWE-14 <cit.>. Moreover, these scrubbing
operations do not erase secrets that have been copied on the stack by compilers,
e.g. from register spilling.
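To make this hazard concrete, the following C sketch shows the classic
CWE-14 pattern; the function and buffer names are ours, and whether the
scrubbing survives depends on the compiler and optimization level:

    #include <string.h>

    void handle_request(void) {
        char key[32];
        /* ... derive and use the secret key ... */

        /* Scrubbing attempt: since key is never read after this
           point, a dead-store-elimination pass may legally remove
           this call, leaving the secret in memory (CWE-14). */
        memset(key, 0, sizeof key);
    }
    /* Even when the memset survives, copies of key spilled on the
       stack by the compiler (e.g. register spilling) are not erased
       by it. */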
Several CT-analysis tools have been proposed to analyze source
code <cit.>,
or LLVM
code <cit.>,
but leave the gap open for violations introduced in the executable
code either by the compiler <cit.> or by
closed-source libraries <cit.>.
Binary-level tools for constant-time using dynamic
approaches <cit.>
can find bugs, but otherwise miss vulnerabilities in unexplored portions of the
code, making them incomplete. Conversely, static
approaches <cit.>
cannot report precise counterexamples—making them of minor interest when the
implementation cannot be proven secure.
For secret-erasure there is currently no sound automatic analyzer. Existing
approaches rely on dynamic
tainting <cit.> or manual binary-code
analysis <cit.>. While there has been some work on
security preserving
compilers <cit.>,
they are not always applicable and are ineffective for detecting errors in
existing binaries.
Two main challenges arise in the verification of these properties:
* First, common verification methods do not directly apply because
information flow properties like constant-time and secret-erasure are
not regular safety properties but 2-hypersafety
properties <cit.> (i.e., relating two
execution traces), and their standard reduction to safety on a
transformed program,
self-composition <cit.>, is
inefficient <cit.>;
* Second, it is notoriously difficult to adapt formal methods to
binary-level because of the lack of structure information (data and
control) and the need to explicitly reason about the
memory <cit.>.
A technique that scales well on binary code and that naturally comes
into play for bug-finding and bounded-verification is symbolic execution
(SE) <cit.>. While
it has
proven very successful for standard safety
properties <cit.>, its direct adaptation to
2-hypersafety properties through (variants of) self-composition suffers from a
scalability issue <cit.>.
Some recent approaches scale better, but at the cost of sacrificing
bounded-verification <cit.>
(by doing under-approximations)
or bug-finding <cit.> (by doing over-approximations).
The idea of analyzing pairs of executions for the verification of
2-hypersafety is not new (e.g. relational Hoare
logic <cit.>,
self-composition <cit.>, product
programs <cit.>, multiple
facets <cit.>). In the context of
symbolic execution, it has first been coined as
ShadowSE <cit.> for back-to-back
testing, and later as relational symbolic execution
(RelSE) <cit.>.
However, because of the necessity to model the memory, RelSE cannot be trivially
adapted to binary-level analysis. In particular, representing the memory as a
large array of bytes prevents sharing between executions and precise
information-flow tracking, which generates a high number of queries for
the constraint solver. Hence, a direct application of RelSE does not scale.
We restrict to a subset of information flow properties relating traces
following the same path—which includes interesting security policies
such as constant-time and secret-erasure. We tackle the problem
of designing an efficient symbolic verification tool for
constant-time and secret-erasure at binary-level, that leverages the
full power of symbolic execution without sacrificing correct
bug-finding nor bounded-verification. We present Binsec/Rel, the
first efficient binary-level automatic tool for bug-finding and
bounded-verification of constant-time and secret-erasure at
binary-level. It is compiler-agnostic, targets x86 and ARM
architectures and does not require source code.
The technique is based on relational symbolic
execution <cit.>:
it models two execution traces following the same path in the same symbolic
execution instance
and maximizes sharing between them. We show via experiments
(<ref>) that RelSE alone does not scale at binary-level to
analyze constant-time on real cryptographic implementations.
Our key technical insights are (1) to complement RelSE with dedicated
optimizations offering a fine-grained information flow tracking in the memory,
improving sharing at binary-level (2) to use this sharing to track
secret-dependencies and reduce the number of queries sent to the solver.
Binsec/Rel can analyze about 23 million instructions in 98 min (3860
instructions per second), outperforming similar state of the art
binary-level symbolic
analyzers <cit.>
(cf. <ref>),
while being still correct and complete.
Contributions. Our contributions are the following:
* We design dedicated optimizations for information flow analysis at
binary-level. First, we complement relational symbolic execution with a
new on-the-fly simplification for binary-level analysis,
to track secret-dependencies and maximize sharing in the memory
(<ref>). Second, we design new simplifications for
information flow analysis: untainting (<ref>) and
fault-packing (<ref>). Moreover, we formally prove that our
analysis is correct for bug-finding and bounded-verification of
constant-time (<ref>) and discuss the adaptation of the
guarantees to other information-flow properties
(<ref>) and in the accompanying tech
report <cit.>;
* We propose a tool named Binsec/Rel for constant-time and secret-erasure
analysis. Extensive experimental evaluation (338 samples) against
standard approaches (<ref>) shows that it can find bugs
in real-world cryptographic implementations much faster than these
techniques (\(\times 1000\) speedup) and can achieve bounded-verification
when they time out, with a performance close to standard SE
(\(\times 2\) overhead);
* In order to prove the effectiveness of Binsec/Rel, we perform an
extensive analysis of constant-time at binary-level. In particular, we
analyze 296 cryptographic binaries previously verified at a higher-level
(incl. codes from HACL* <cit.>,
BearSSL <cit.>,
NaCL <cit.>), we replay known bugs in 42
samples (incl. Lucky13 <cit.>)
and automatically generate counterexamples (<ref>);
* Simon et al. <cit.> have demonstrated
that clang's optimizations break constant-timeness of code. We
extend this work in five directions—from 192 configurations
in <cit.> to 4148 configurations:
* we automatically analyze the code that was manually checked
in <cit.>,
* we add new implementations,
* we add the gcc compiler and a more recent version of clang,
* we add x86 and ARM,
* we investigate the impact of individual
optimizations—i.e., the if-conversion optimizations of gcc
and the if-conversion passes of clang.
Interestingly, we discovered that gcc -O0 and backend passes of
clang introduce violations of constant-time that cannot be
detected by LLVM verification tools like
ct-verif <cit.> even when the
x86-cmov-converter pass is disabled. On a positive
note, we also show that, contrary to clang,
gcc optimizations tend to help preserve constant-time. This
study is open-source and can be easily extended with new compilers and
new implementations;
* Finally, we build the first framework to automatically check the
preservation of secret-erasure by compilers. We use it to analyze 17
scrubbing functions—including countermeasures manually analyzed in a
prior study <cit.>, compiled with 10
compilers with different optimization levels, for a total of 1156 binaries
(<ref>). Our analysis:
* confirms that the main optimization affecting the
preservation of secret-erasure is the dead store elimination pass,
but also shows that disabling it is not always
sufficient to preserve secret-erasure,
* shows that, while some versions of scrubbing functions based on
volatile data pointers are secure, it is easy to implement this
mechanism incorrectly, in particular by using a volatile pointer to non-volatile
data, or passing a pointer to volatile data in a function call (see the
sketch after this list),
* interestingly, it also shows that scrubbing mechanisms based on
volatile function pointers can introduce additional register
spilling that might break secret-erasure with and
* finally, secret-erasure mechanisms based on dedicated secure
functions (e.g., memset_s and explicit_bzero), memory
barriers, and weak symbols, are preserved in all tested setups.
This framework is open-source and can be easily extended with new
compilers and new scrubbing functions.
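The C sketch below illustrates the volatile-based mechanisms discussed in
the items above; it is our own minimal reconstruction (function names are
ours), and whether a given compiler preserves each variant must be checked,
e.g. with this framework:

    #include <stddef.h>
    #include <string.h>

    /* Pointer to volatile data: each byte store is a volatile
       access, so the compiler must perform it. */
    static void scrub_ok(void *buf, size_t len) {
        volatile unsigned char *p = buf;
        while (len--) *p++ = 0;
    }

    /* Pitfall: a volatile pointer to NON-volatile data. Only the
       reads of the pointer q are volatile; the stores performed by
       memset are ordinary dead stores and may still be removed. */
    static void scrub_broken(void *buf, size_t len) {
        unsigned char *volatile q = buf;
        memset(q, 0, len);
    }

    /* Volatile function pointer: the compiler cannot resolve the
       callee, so the call (and its stores) is kept; however, the
       indirection may cause extra register spilling, copying
       secrets back onto the stack. */
    static void *(*volatile memset_fptr)(void *, int, size_t) = memset;
    static void scrub_fptr(void *buf, size_t len) {
        memset_fptr(buf, 0, len);
    }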
Our technique is shown to be highly efficient on bug-finding and
bounded-verification compared to alternative approaches, paving the way to a
systematic binary-level analysis of information-flow properties on cryptographic
implementations, while our experiments demonstrate the importance of developing
verification tools reasoning at binary-level.
Besides constant-time and secret-erasure, the tool can be readily adapted to
other 2-hypersafety properties of interest in security (e.g., cache-side
channels, or variants of constant-time taking operand size into account)—as
long as they restrict to pairs of traces following the same path.
Availability. We made Binsec/Rel open-source at
<https://github.com/binsec/rel>; our experiments, including our studies on the
preservation of constant-time and secret-erasure by compilers, are available at
<https://github.com/binsec/rel_bench>.
Extension of article <cit.>.
This paper is an extension of the article Binsec/Rel: Efficient
Relational Symbolic Execution for Constant-Time at
Binary-Level <cit.>, with the following
additional contributions:
* The leakage model considered in <cit.> restricts
to constant-time while this work encompasses a more general subset of
information flow properties. In particular, we define a new leakage
model and property to capture the notion of secret-erasure (cf. <ref>);
* We extend the tool to verify the secret-erasure property;
* We perform an experimental evaluation on the preservation of
secret-erasure by compilers (cf. <ref>). This
evaluation highlights incorrect usages of volatile data pointers for
secret erasure, and shows that scrubbing mechanisms based on
volatile function pointers can introduce additional violations
from register spilling;
* Using Binsec/Rel, we also investigate the role of individual
compiler optimizations in the preservation of secret-erasure and
constant-time. For constant-time, we show that the if-conversion passes
of clang may help enforce constant-time in ARM binaries. We also
show that disabling the x86-cmov-converter pass is not always sufficient
to preserve constant-time in the backend passes of clang. For
secret-erasure, we confirm the key role of the dead store elimination
pass, but also show that disabling it does not always
preserve secret-erasure.
In addition, we provide full proofs of relative completeness and correctness of
the analysis—whereas simple sketches of proofs were given
in <cit.> (<ref>)—we
evaluate the scalability of Binsec/Rel according to the size of the input
(<ref>), and we detail the vulnerabilities introduced by
compilers with examples (<ref>).
§ BACKGROUND
In this section, we present the basics of information flow properties
and symbolic execution.
Small examples of constant-time and standard adaptations of symbolic
execution are presented in <ref>, while formal
definitions of information flow policies (including constant-time and
secret-erasure) are given in <ref>.
§.§.§ Information flow properties
Information flow policies regulate the transfer of information between public
and secret domains. To reason about information flow, the program input is
partitioned into two disjoint sets: low (i.e., public) and high
(i.e., secret).
Typical information flow properties require that the observable output of a
program does not depend on the high input
(non-interference <cit.>). Constant-time
and secret-erasure can be expressed as information flow properties.
Constant-time requires both the program control flow and the memory
accesses to be independent from high input. It protects against
timing and microarchitectural attacks (exploiting cache, port contention,
branch predictors, etc.).
Secret-erasure requires specific memory locations (typically the call
stack) to be independent from high input when returning from a critical
function. It ensures that secret data do not persist in memory
longer than necessary <cit.>, protecting these secret
data against subsequent memory exposure, e.g. memory disclosure
vulnerabilities, access to persistent storage (swap memory).
Contrary to standard safety properties which state that nothing bad can
happen along one execution trace, information flow properties relate
two execution traces—they are 2-hypersafety
properties <cit.>.
Unfortunately, the vast majority of symbolic execution
tools <cit.>
is designed for safety verification and cannot directly be applied to
2-hypersafety properties.
In principle, 2-hypersafety properties can be reduced to standard
safety properties of a self-composed
program <cit.> but this reduction alone does not
scale <cit.>.
§.§.§ Symbolic execution
Symbolic Execution
(SE) <cit.>
consists in executing a program on symbolic inputs instead of
concrete inputs. Variables and expressions of the program are
represented as terms over these symbolic inputs and the current path is
modeled by a path predicate (a logical formula), which is the
conjunction of conditional expressions encountered along the path.
SE is built upon two main steps.
(1) Path search: at each conditional statement the symbolic
execution forks, the expression of the condition is added to the
first branch and its negation to the second branch, then the symbolic
execution continues along satisfiable branches;
(2) Constraint solving: the path predicate can be solved with
an off-the-shelf automated constraint solver, typically
SMT <cit.>, to generate a concrete input
exercising the path.
Combining these two steps, SE can explore different program paths
and generate test inputs exercising these paths. It can also check
local assertions in order to find bugs or perform
bounded-verification (i.e., verification up to a certain depth).
Dramatic progress in program analysis and constraint solving over
the last two decades has made SE a tool of choice for intensive
testing <cit.>,
analysis <cit.>
and other security-related
analysis <cit.>.
§.§.§ Binary-level symbolic execution
Low-level code operates on a set of registers and a single (large)
untyped memory. During the execution,
a call stack contains information about the active functions such as
their arguments and local variables.
A special register esp (stack pointer) indicates the top
address of the call stack and local variables of a function can be
referenced as offsets from the initial
esp[esp is specific to x86, but this is
generalizable, e.g. for ARMv7.].
Binary-level code analysis is notoriously more challenging than source code
analysis <cit.>.
First, evaluation and assignments of source code variables become memory load
and store operations, requiring to reason explicitly about the memory in a very
precise way. Second, the high level control flow structure (e.g.
loops) is not preserved, and there are indirect jumps to handle (e.g. instruction
of the form jmp eax). Fortunately, it turns out that SE is less
difficult to adapt from source code to binary code than other semantic
analysis—due to both the efficiency of SMT solvers and concretization (i.e.,
simplifying a formula by constraining some variables to be equal to their
observed runtime values).
Hence, strong binary-level SE tools do exist and have yielded several highly
promising case
studies <cit.>.
In this paper, we build on top of the binary-analysis platform
Binsec <cit.> and in particular its symbolic
execution engine Binsec/SE <cit.>.
One of the key components of binary-level symbolic execution is the
representation of the memory. A first solution, adopted in
Binsec/SE <cit.> and
Bap <cit.>, is to use a fully
symbolic memory model in which the memory is represented as a symbolic
array of bytes. Other solutions consist in concretizing (parts of) the memory.
For instance, angr <cit.> uses a partially
symbolic memory model <cit.> in which
write addresses are concretized and symbolic loads are encoded as symbolic
if-then-else expressions.
Fully symbolic memory models incur a performance overhead compared to
partially symbolic (or concrete) memory models. However, they can model all
possible values that load/write addresses can take—instead of considering
only a subset of the possible addresses. Hence, they offer better soundness
guarantees and are better suited for bounded-verification.
Logical notations. Binsec/SE relies on the theory of
bitvectors and arrays, QF_ABV <cit.>.
Values (e.g. registers, memory addresses, memory content) are modeled with
fixed-size bitvectors <cit.>.
We use the type \(\bvtype{m}\), where \(m\) is a constant number, to represent
symbolic bitvector expressions of size \(m\).
The memory is modeled with a logical array <cit.> of type
\(\memtype{}\) (assuming a 32-bit architecture).
A logical array is a function \((Array~\mathcal{I}~\mathcal{V})\) that maps each
index \(i \in \mathcal{I}\) to a value \(v \in \mathcal{V}\).
Operations over arrays are:
* \(select : (Array~\mathcal{I}~\mathcal{V}) \times \mathcal{I}
\rightarrow \mathcal{V}\) takes an array \(a\) and an index \(i\)
and returns the value \(v\) stored at index \(i\) in \(a\),
* \(store: (Array~\mathcal{I}~\mathcal{V}) \times \mathcal{I} \times
\mathcal{V} \rightarrow (Array\ \mathcal{I}\ \mathcal{V})\) takes an
array \(a\), an index \(i\), and a value \(v\), and returns the
array \(a\) modified so that \(i\) maps to \(v\).
These functions satisfy the following constraints for all
\({a \in(Array~\mathcal{I}~\mathcal{V})}\), \({i \in \mathcal{I}}\),
\({j \in \mathcal{I}}\), \({v \in \mathcal{V}}\):
* \(select~(store~a~i~v)~i = v\): a store of a value \(v\) at index \(i\)
followed by a \(select\) at the same index returns the value \(v\);
* \(i \neq j \implies select~(store~a~i~v)~j = select~a~j\): a store at
index \(i\) does not affect values stored at other indexes \(j\).
§ MOTIVATING EXAMPLE: CONSTANT-TIME ANALYSIS
Consider the constant-time policy applied to the toy program in
<Ref>. The outcome of the conditional instruction at line <ref> and the
memory access at line <ref> are leaked. We say that a leak is
insecure if it depends on the secret input. Conversely, a
leak is secure if it does not depend on the secret
input. Constant-time holds for a program if there is no insecure leak.
Example. Consider two executions of this program with the same public
input: \((x,y)\) and \((x',y')\) where \(y = y'\). Intuitively, we can see that
the leakages produced at line <ref>, \(y = 0\) and \(y' = 0\), are necessarily equal
in both executions because \(y = y'\); hence this leak does not depend on the
secret input and is secure. On the contrary, the leakages \(x\) and \(x'\) at
line <ref> can differ in both executions (e.g. with \(x = 0\) and \(x' = 1\));
hence this leak depends on the secret input and is insecure.
The goal of an automatic analysis is to prove that the leak at
line <ref> is secure and to return concrete input showing that the leak
at line <ref> is insecure.
§.§ Symbolic Execution and Self-Composition (SC)
Symbolic execution can be adapted to the case of constant-time, following the
self-composition principle. Instead of self-composing the program, we rather
self-compose the formula with a renamed version of itself plus a precondition
stating that the low inputs are
equal <cit.>.
Basically, this amounts to model two different executions
following the same path and sharing the same low input in a single
At each conditional statement, exploration queries are sent to
the solver to determine satisfiable branches.
Additionally, insecurity queries specific to constant-time are sent
before each control-flow instruction and memory access to determine whether they
depend on the secret—if an insecurity query is satisfiable then a
constant-time violation is found.
As an illustration, let us consider the program in <Ref>.
First, we assign symbolic values to x and y and use
symbolic execution to generate a formula of the program until the first
conditional jump (line <ref>), resulting in the formula:
\(x = \beta ~\wedge~ y = \lambda ~\wedge~ c = (\lambda \neq 0)\). Second,
self-composition is applied on the formula with precondition
\(\lambda = \lambda'\) to constrain the low inputs to be equal in both
executions. Finally, a postcondition \(c \neq c'\) asks whether the value of the
condition can differ, resulting in the following insecurity query:
\begin{equation*}
\lambda = \lambda' ~\wedge~
\left(\begin{aligned}
x = \beta ~\wedge~ y = \lambda ~\wedge~ c = (\lambda \neq 0) ~\wedge~ \\
x' = \beta' ~\wedge~ y' = \lambda' ~\wedge~ c' = (\lambda' \neq 0) \\
\end{aligned}\right)
~\wedge~ c \neq c'
\end{equation*}
This formula is sent to an SMT-solver. If the solver returns
unsat, meaning that the query is not satisfiable, then the condition
does not differ in both executions and thus is secure. Otherwise, it means that
the outcome of the condition depends on the secret and the solver returns a
counterexample satisfying the insecurity query.
Here, the SMT-solver Z3 <cit.>
answers that the query is unsat and we can conclude that the leak is secure.
With the same method, the analysis finds that the leak at line <ref> is insecure,
and returns two inputs (0,0) and (1,0), respectively leaking 0 and 1, as a
counterexample.
Limits. Basic self-composition suffers from two main limitations:
* It generates insecurity queries at each control-flow instruction and
memory access. Yet, as seen in the previous example, insecurity queries
could be spared when expressions do not depend on secrets;
* The whole original formula is duplicated so the size of the
self-composed formula is twice the size of the original formula. Yet,
because the parts of the program which only depend on public input are
equal in both executions, the self-composed formula contains
redundancies that are not exploited.
§.§ Relational Symbolic Execution (RelSE)
RelSE improves over self-composition by maximizing sharing between the
pairs of
executions <cit.>.
RelSE models two executions of a program \(P\) in the same symbolic execution
instance, let us call them \(p\) and \(p'\). During RelSE, variables of \(P\)
are mapped to relational expressions which are either pairs of
expressions or simple expressions. Variables that must be equal in
\(p\) and \(p'\) (i.e., the low inputs) are represented as simple
expressions whereas those that may be different (i.e., the secret input)
are represented as pairs of expressions.
Secret-dependencies are propagated (in a conservative way) through symbolic
execution using these relational expressions: if the evaluation of an expression
only involves simple operands, its result will be a simple expression, meaning
that it does not depend on secret, whereas if it involves a pair of
expressions, its result will be a pair of expressions.
This representation offers two main advantages.
First, this enables sharing redundant parts of \(p\) and \(p'\), reducing the
size of the self-composed formula. Second, variables mapping to simple
expressions cannot depend on secret, which makes it possible to spare
insecurity queries.
As an example, let us perform RelSE of the toy program in
<Ref>. Variable x is assigned a pair of
expressions ${\pair{\beta}{\beta'}}$ and y is assigned a
simple expression
$\simple{\lambda}$. Note that the precondition that public variables
are equal is now implicit since we use the same symbolic variable in
both executions. At line <ref>, the conditional expression is evaluated to
$c = \simple{\lambda \neq 0}$ and we need to check that the leakage of
$c$ is secure. Since $c$ maps to a simple expression, we know by
definition that it does not depend on the secret, hence we can spare
the insecurity query.
Finally, when a control-flow instruction depends on a pair of
expressions ${\pair{\varphi}{\varphi'}}$, an insecurity query
\(\varphi \neq \varphi'\) is sent to the solver. If it is satisfiable, a
vulnerability is reported and RelSE continues with the constraint
${\varphi = \varphi'}$ so the same vulnerability is not reported twice;
otherwise the insecurity query is unsatisfiable, meaning that
${\varphi = \varphi'}$.
In both cases, the value of the control-flow instruction is the same in both
executions and RelSE only needs to model pairs of executions following the
same path.
RelSE maximizes sharing between both executions and tracks
secret-dependencies enabling to spare insecurity queries and reduce
the size of the formula.
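As a minimal sketch of this bookkeeping (our own illustration in C, where
expr_t and mk_binop stand for an assumed SMT term API):

    #include <stdbool.h>

    typedef struct expr *expr_t;   /* opaque symbolic term (stub)  */
    typedef enum { ADD, XOR } op_t;

    /* assumed constructor for SMT terms */
    extern expr_t mk_binop(op_t op, expr_t a, expr_t b);

    typedef struct {
        bool   is_pair; /* false: simple <e>, shared by both runs  */
        expr_t left;    /* left projection                         */
        expr_t right;   /* right projection (== left when simple)  */
    } rel_t;

    static expr_t lproj(rel_t v) { return v.left; }
    static expr_t rproj(rel_t v) { return v.is_pair ? v.right : v.left; }

    /* Conservative dependency propagation: the result is simple iff
       both operands are simple; only simple results allow sparing
       the insecurity query without calling the solver. */
    rel_t rel_binop(op_t op, rel_t a, rel_t b) {
        rel_t r;
        r.is_pair = a.is_pair || b.is_pair;
        r.left    = mk_binop(op, lproj(a), lproj(b));
        r.right   = r.is_pair ? mk_binop(op, rproj(a), rproj(b)) : r.left;
        return r;
    }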
§.§ Challenge of binary-level analysis
Recall that the symbolic execution represents the memory as a special variable of type
\(\memtype\). Consequently, it is not possible to directly store relational
expressions in it. In order to store high inputs at the beginning of the
execution, we have to duplicate it. In other words, the memory is always duplicated.
Consequently, every \(select\) operation will evaluate to a duplicated
expression, preventing to spare queries in many situations.
As an illustration, consider the compiled version of the previous program, given
in <Ref>. The steps of RelSE on this program are given
in <ref>.
When the secret input is stored in memory at line <ref>, the array
representing the memory is duplicated. This propagates to the load expression in
eax at line <ref> and to the conditional expression at line <ref>.
Intuitively, at line <ref>, eax should be equal to the simple
expression \(\simple{\lambda}\) in which case we could spare the insecurity
query like in the previous example.
However, because dependencies cannot be tracked in the array representing the
memory, eax evaluates to a pair of \(select\) expression and we have to
send the insecurity query to the solver.
Practical impact. <Ref> reports the
performance of constant-time analysis on an implementation of elliptic curve
Curve25519-donna <cit.>.
Both SC and RelSE fail to prove the program secure in less
than 1h.
RelSE does reduce the number of queries compared to
SC, but it is not sufficient.
Our solution. To mitigate this issue, we propose
dedicated simplifications for binary-level relational symbolic
execution that allow a precise tracking of secret-dependencies in
the memory (details in <ref>). In the particular
example of <ref>, our prototype proves that the code is secure in less than 20 minutes. Our
simplifications spare all the queries, resulting in a
\(\times 2000\) speedup compared to standard RelSE and SC in terms of
number of instructions explored per second.
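A rough idea of this dedicated simplification, sketched in C on top of the
relational values above (expr_eq, provably_distinct and select_base are
assumed helpers; the actual rules are more involved):

    #include <stdbool.h>

    typedef struct expr *expr_t;  /* opaque symbolic term (stub)    */
    typedef struct { bool is_pair; expr_t left, right; } rel_t;

    typedef struct store_entry {
        expr_t addr;              /* symbolic store address         */
        rel_t  val;               /* relational byte stored there   */
        struct store_entry *next; /* next-older store               */
    } store_entry;

    extern bool  expr_eq(expr_t, expr_t);           /* syntactic ==  */
    extern bool  provably_distinct(expr_t, expr_t); /* cheap !=      */
    extern rel_t select_base(store_entry *, expr_t);/* fallback:
                                build a (possibly duplicated) select */

    /* Walk the store history from the most recent store: on a
       syntactic hit, return the stored relational byte (a simple
       value stays simple, sparing the insecurity query); stop as
       soon as aliasing cannot be ruled out, and fall back to a
       select over the remaining history. */
    rel_t load_byte(store_entry *s, expr_t addr) {
        for (; s; s = s->next) {
            if (expr_eq(s->addr, addr))
                return s->val;
            if (!provably_distinct(s->addr, addr))
                break;
        }
        return select_base(s, addr);
    }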
§ CONCRETE SEMANTICS AND LEAKAGE
We present the leakage models in an intermediate language called
Dynamic Bitvector Automata (DBA) <cit.>.
§.§ Dynamic Bitvectors Automatas
DBA <cit.>, shown in <ref>, is the representation
used in Binsec <cit.> to model
programs and perform its analyses.
Let \(\instrset\) denote the set of instructions and \(\locset\) the
set of program locations.
A program \(\prog{} : \locset \rightarrow \instrset\) is a map from
locations to instructions.
Values and variables range over the set of
fixed-size bitvectors \(\bvset{n} := {\{0,1\}}^n\) (the set of \(n\)-bit words).
A concrete configuration is a tuple
\(\cconf{\locvar}{\cregmap}{\cmem}\) where:
* \(\locvar \in \locset\) is the current location, and
\(\locmap{l}\) returns the current instruction,
* \(\cregmap : \varset{} \to \bvset{n} \) is a register map that maps
variables to their bitvector value,
* \(\cmem : \bvset{32} \to \bvset{8}\) is the memory, mapping 32-bit
addresses to bytes and accessed by the load and store operators.
The initial configuration is given by
\(\cconfvar_0 \mydef \cconf{\locvar_0}{\cregmap_0}{\cmem_0}\) with
\(\locvar_0\) the address of the entrypoint of the program, \(r_0\) an
arbitrary register map, and \(m_0\) an arbitrary memory. Let
\(\haltlocset \subseteq \locset\) the set of halting program locations
such that
\(\locvar \in \haltlocset \iff \locmap{\locvar} = \texttt{halt}\).
For the evaluation of indirect jumps, we define a partial one-to-one
correspondence from bitvectors to program locations,
\(\toloc : \bvset{32} \rightharpoonup \locset\). If a bitvector
corresponds to an illegal location (e.g. non-executable address),
\(\toloc\) is undefined.
§.§ Leakage Model
The behavior of programs is modeled with an instrumented operational semantics
in which each transition is labeled with an explicit notion of leakage. Building
on Barthe et al.'s
framework <cit.>, the semantics is
parameterized with leakage functions, which permits to consider several leakage
The set of program leakages, denoted \(\leakset\), is defined according to the
leakage model.
A transition from a configuration \(c\) to a configuration \(c'\)
produces a leakage \(\leakvar \in \leakset\), denoted
\(c \cleval{\leakvar} c'\).
Analogously, the evaluation of an expression \(e\) in a configuration
\(\cconf{\locvar}{\cregmap}{\cmem}\), produces a leakage
\(\leakvar \in \leakset\), denoted
\(\ceconf{\cregmap}{\cmem}{e} \ceeval{\leakvar} bv\).
The leakage of a multistep execution is the concatenation of leakages,
denoted \(\concat\), produced by individual steps. We use
\(\cleval{\leakvar}^k\) with \(k\) a natural number to denote \(k\)
steps in the concrete semantics.
The concrete semantics is given in <ref> and
is parameterized with leakage functions
\(\leakfunc_{\unop}: \bvset{} \to \leakset\),
\(\leakfunc_{\binop}: \bvset{} \times \bvset{} \to \leakset\),
\(\leakfunc_{@}: \bvset{32} \to \leakset\),
\(\leakfunc_{pc}: \locset \to \leakset\),
\(\leakfunc_{\bot}: \locset \to \leakset\),
\(\leakfunc_{\mu}: \bvset{32} \times \bvset{8} \to \leakset\).
A leakage model is an instantiation of the leakage functions. We consider
the program counter, memory obliviousness, size
noninterference, and constant-time leakage models defined
in <cit.>. In addition, we define the
operand noninterference and secret-erasure leakage models.
Program counter <cit.>. The program
counter leakage model leaks the control flow of the program.
The leakage of a program is a list of program locations:
\(\leakset \mydef List(\locset)\).
The outcome of conditional jumps and the address of indirect jumps
are leaked: \(\leakfunc_{pc}(\locvar) = [\locvar]\).
Other instructions produce an empty leakage.
Memory obliviousness <cit.>. The memory
obliviousness leakage model leaks the sequence of memory addresses accessed
along the execution.
The leakage of a program is a list of 32-bit bitvectors representing addresses
of memory accesses: \(\leakset \mydef List(\bvset{32})\).
The addresses of memory loads and stores are leaked: \(\leakfunc_{@}(e) = [e]\).
Other instructions produce an empty leakage.
Operand noninterference. The operand noninterference leakage model
leaks the value of operands (or part of it) for specific operators that execute
in non-constant time.
The leakage of a program is a list of bitvector values:
\(\leakset \mydef List(\bvset{})\). Functions \(\leakfunc_{\unop}\) and
\(\leakfunc_{\binop}\) are defined according to architecture specifics. For
instance, in some architectures, the execution time of shift or rotation
instructions depends on the shift or rotation count[See
<https://bearssl.org/constanttime.html>]. In this case, we can define
\(\leakfunc_{<<}(bv_1,bv_2) = [bv_2]\).
Other instructions produce an empty leakage.
Size noninterference <cit.>. The size
noninterference leakage model is a special case of operand noninterference where
the size of the operand is leaked. For instance, knowing that the execution time
of the division depends on the size of its operands, we can define
\(\leakfunc_{\div}(bv_1,bv_2) = [size(bv_1),size(bv_2)]\).
Constant-time <cit.>. The
constant-time leakage model combines the program counter and the memory
obliviousness security policies. The set of leakage is defined as
\(\leakset \mydef List(\locset~\cup~\bvset{32})\).
The control flow is leaked
\(\leakfunc_{pc}(\locvar) = [\locvar]\),
as well as the memory accesses
\(\leakfunc_{@}(e) = [e]\).
Other instructions produce an empty leakage.
Note that some definitions of constant-time also include size
noninterference <cit.> or operand
noninterference <cit.>.
Secret-erasure. The secret-erasure leakage model leaks the index and value of every store
operation—values that are overwritten are filtered out from the leakage trace
(as we formalize later in <ref>).
With regard to secret dependent control-flow, we define a conservative notion of
secret-erasure forbidding to branch on secrets—thus including the program
counter policy.
The leakage of a program is a list of locations and pairs of bitvector values:
\(\leakset \mydef List(\locset~\cup~(\bvset{32} \times \bvset{8}))\).
The control flow is leaked
\(\leakfunc_{pc}(\locvar) = [\locvar]\),
as well as the end of the program
\(\leakfunc_{\bot}(\locvar) = [\locvar]\),
and the list of store operations
\(\leakfunc_{\mu}(bv, bv') = [(bv, bv')]\).
Other instructions produce an empty leakage.
§.§ Secure program
Let \(\highvarset \subseteq \varset\) be the set of high (secret)
variables and \(\lowvarset = \varset \setminus \highvarset\) be the
set of low (public) variables. Analogously, we define
\(\highmemset \subseteq \bvset{32}\) (resp.
\(\lowmemset = \bvset{32} \setminus \highmemset\)) as the addresses
containing high (resp. low) input in the initial memory.
The low-equivalence relation over concrete configurations
\(\cconfvar\) and \(\cconfvar'\), denoted
\(\cconfvar \loweq \cconfvar'\), is defined as the equality of low
variables and low parts of the memory.
Formally, two configurations
\(\cconfvar \mydef \cconf{\locvar}{\cregmap}{\cmem}\) and
\(\cconfvar' \mydef \cconf{\locvar'}{\cregmap'}{\cmem'}\) are
low-equivalent if and only if
for all variable \(v \in \lowvarset\),
\(\cregmap\ v = \cregmap'\ v\) and for all address
\(a \in \lowmemset\),
\(\cmem\ a = \cmem'\ a\).
Security is expressed as a form of observational noninterference
that is parameterized by the leakage model. Intuitively it guarantees that
low-equivalent configurations produce the same observations, according to the
leakage model:
A program is observationally noninterferent (ONI) if and only if for all
low-equivalent initial configurations \(\cconfvar_0\) and
\(\cconfvar'_0\), and for all \(k \in
\mathbb{N}\),
\begin{equation*}
\cconfvar_0 \loweq \cconfvar_0'\ %
~\wedge~ \cconfvar_0 \cleval{\leakvar}^k \cconfvar_k %
~\wedge~ \cconfvar'_0 \cleval{\leakvar'}^k \cconfvar'_k%
\implies \filter(\leakvar) = \filter(\leakvar') %
\end{equation*}
The property is parameterized by a function,
\(\filter : \leakset \to \leakset\), that further restricts the leakage.
A program is
constant-time (CT) if it is ONI in the constant-time leakage model with
\(\filter\) set to the identity function.
A program enforces secret-erasure if it is ONI in the secret-erasure leakage
model with \(\filter\) set to the identity function for control-flow leakages
and only leaking store values at the end of the program
(\(\locvar \in \haltlocset\)), restricting to values that have not been
overwritten by a more recent store.
Formally, \(\filter(\leakvar) = \filter'(\leakvar, m_{\varepsilon})\) where
\(m_{\varepsilon}\) is the empty partial function from \(\bvset{32}\) to
\(\bvset{8}\) and \(\filter'(\leakvar, m_{acc})\) is defined as:
\begin{align*}
&\textsc{filter-empty} & \filter'(\varepsilon, m_{acc}) &= \varepsilon\\
&\textsc{filter-store} & \filter'((\mathtt{a}, \mathtt{v}) \concat \leakvar, m_{acc}) &= \filter'(\leakvar, m_{acc}[\mathtt{a} \mapsto \mathtt{v}])\\
&\textsc{filter-cf} & \filter'(\locvar \concat \leakvar, m_{acc}) &= \locvar \concat \filter'(\leakvar, m_{acc}) \quad \text{if } \locvar \notin \haltlocset\\
&\textsc{filter-halt} & \filter'(\locvar \concat \leakvar, m_{acc}) &= m_{acc}(\mathtt{a_0}) \concat \dots \concat m_{acc}(\mathtt{a_n}) \quad \text{if } \locvar \in \haltlocset,\ \{\mathtt{a_0}, \dots, \mathtt{a_n}\} = dom(m_{acc})
\end{align*}
Intuitively, \(m_{acc}\) is a function used to accumulate values written to
the memory and leak them at the end of a program.
The filter-store rule accumulates a store operation \((\mathtt{a}, \mathtt{v})\) from
the leakage trace into the function \(m_{acc}\). Notice that because
\(m_{acc}\) is a function, if \(m_{acc}(\mathtt{a})\) is already defined, its
value will be replaced by \(\mathtt{v}\) after
\(m_{acc}[\mathtt{a} \mapsto \mathtt{v}]\). The filter-cf rule adds the
control-flow label to the final leakage trace. Finally, the
filter-halt rule is evaluated when a final location is reached and
leaks all the store values accumulated in \(m_{acc}\).
For example,
\(\filter((\mathtt{a}, \mathtt{x}) \concat (\mathtt{b}, \mathtt{y}) \concat (\mathtt{a}, \mathtt{z}) \concat \locvar_\bot)\)
where \(\locvar_\bot \in \haltlocset\) will return the leakage
\(\mathtt{y} \cdot \mathtt{z}\).
§ BINARY-LEVEL RELATIONAL SYMBOLIC EXECUTION
Binary-level symbolic execution relies on the quantifier-free theory
of fixed-size bitvectors and arrays
(QF\_ABV <cit.>).
We let \(\beta\), \(\beta'\), \(\lambda\), \(\varphi\), range over the
set of formulas $\formulaset$ in the logic. A
relational formula \(\rel{\varphi}\) is either a formula
\(\simple{\varphi}\) or a pair \(\pair{\varphi_l}{\varphi_r}\) of two
formulas. We denote by \(\lproj{\rel{\varphi}}\) (resp. \(\rproj{\rel{\varphi}}\)) the projection on the left (resp. right)
value of \(\rel{\varphi}\). If \(\rel{\varphi} = \simple{\varphi}\),
then \(\lproj{\rel{\varphi}}\) and \(\rproj{\rel{\varphi}}\) are both
defined as \(\varphi\). Let \(\rlift{\formulaset}\) be the set of
relational formulas and \(\rlift{\bvtype{n}}\) be the set of
relational symbolic bitvectors of size $n$.
Symbolic configuration.
Our symbolic evaluation restricts to pairs of traces following the same
path—which is sufficient for constant-time and our definition of
secret-erasure. Therefore, a symbolic configuration only needs to consider a single
program location \(l \in Loc\) at any point of the execution.
A symbolic configuration is of the form
\(\iconfold{l}{\regmap}{\smem}{\pc{}}\) where:
* \(l \in Loc\) is the current program point,
* \(\regmap{} : \varset{} \rightarrow \rlift{\formulaset}\) is a
symbolic register map, mapping variables from a set \(\varset{}\) to
their symbolic representation as a relational formula in
\(\rlift{\formulaset}\),
* \(\smem : \memtype \times \memtype\) is the symbolic memory—a
pair of arrays of values in \(\bvtype{8}\) indexed by addresses in
\(\bvtype{32}\),
* \(\pc{} \in \formulaset\) is the path predicate—a conjunction
of conditional statements and assignments encountered along a path.
Symbolic evaluation of instructions, denoted
\(\sconfvar \ieval{} \sconfvar'\) where $\sconfvar$ and $\sconfvar'$
are symbolic configurations, is given in <ref>.
The evaluation of an expression \(expr\) to a relational formula
\(\rel{\varphi}\) is denoted
\(\econfold{\regmap}{\smem}{\pc}{expr} \eeval{} \rel{\varphi}\).
A model \(M\) assigns concrete values to symbolic variables.
The satisfiability of a formula \(\pi\) with a model \(M\) is denoted
$M \sat{\pi}$. In the implementation, an SMT-solver is used to determine
satisfiability of a formula and obtain a satisfying model, denoted
$M \solver{\pi}$. Whenever the model is not needed for our purposes, we leave it
implicit and simply write $\sat{\pi}$ or $\solver{\pi}$ for satisfiability.
The symbolic evaluation is parameterized by symbolic leakage predicates
\(\sleakfunc_{\unop}, \sleakfunc_{\binop}, \sleakfunc_{@}, \sleakfunc_{dj}, \sleakfunc_{ite}\)
and \(\sleakfunc_{\bot}\),
which are instantiated according to the leakage model (details on the
instantiation will be given in <ref>).
Symbolic leakage predicates take as input a path predicate and
expressions that can be leaked, and return \(true\) if and only if no secret
data can leak. The rules of the symbolic evaluation are guarded by these
symbolic leakage predicates: a rule can only be evaluated if the associated
leakage predicate evaluates to \(true\), meaning that no secret can
leak. If a symbolic leakage predicate evaluates to \(false\) then a secret
leak is detected and the analysis is stuck. Detailed explanations
of (some of) the symbolic evaluation rules follow:
cst is the evaluation of a constant and
returns the corresponding symbolic bitvector as a simple expression
\(\simple{bv}\).
load is the evaluation of a load expression.
It returns a pair of logical \(select\) formulas from the pair of
symbolic memories \(\smem\) (the box in the hypotheses should be
ignored for now, it will be explained in <ref>). Note that
the returned expression is always duplicated as the \(select\)
must be performed in the left and right memories independently.
d_jump is the evaluation of an indirect jump.
It finds a concrete value $l'$ for the jump target, and updates the path
predicate and the next location. Note that this rule is nondeterministic as
\(l'\) can be any concrete value satisfying the path constraint.
ite-true is the evaluation of a conditional jump when the
expression evaluates to \(true\) (the \(false\) case is symmetric).
If the condition guarding the \(true\)-branch is satisfiable, the rule
updates the path predicate and the next location to explore it.
assign is the evaluation of an assignment. It allocates a
fresh symbolic variable to avoid term-size explosion, and updates the
register map and the path predicate.
The content of the box in the hypothesis and the rule
canonical-assign should be ignored for now and will be
explained in <ref>.
store is the evaluation of a store instruction.
It evaluates the index and value of the store and updates the symbolic
memories and the path predicate with a logical \(store\) operation.
§.§ Security evaluation
For the security evaluation, we start by defining a general
predicate, $\secleak$, which takes as input a path predicate and a
relational expression that is leaked, and returns \(true\) if and only if no
secret data can leak (cf. <ref>). Then, we use this $\secleak$
predicate to instantiate symbolic leakage predicates
\(\sleakfunc_{\unop}, \sleakfunc_{\binop}, \sleakfunc_{@}, \sleakfunc_{dj}, \sleakfunc_{ite}\)
and \(\sleakfunc_{\bot}\) according to the leakage model (cf. <ref>).
§.§.§ Predicate \(\secleak\)
We define a predicate
$\secleak : \rlift{\formulaset} \times \formulaset \to Bool$ which ensures that
a relational formula does not differ in its right and left components, meaning
that it can be leaked securely:
\begin{equation*}
\secleak(\rel{\varphi}, \pc) \mydef
\begin{cases}
true & \text{if } \rel{\varphi} = \simple{\varphi}\\
true & \text{if } \rel{\varphi} = \pair{\varphi_l}{\varphi_r} \text{ and } \pc \wedge \varphi_l \neq \varphi_r \text{ is unsatisfiable}\\
false & \text{otherwise}
\end{cases}
\end{equation*}
By definition, a simple expression \(\simple{\varphi}\) does not depend on
secrets and can be leaked securely. Thus it spares an insecurity query to
the solver.
However, a duplicated expression \(\pair{\varphi_l}{\varphi_r}\) may
depend on secrets. Hence an insecurity query must be sent to the solver
to ensure that the leak is secure.
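As an illustration (our example), leaking the duplicated expression \(\pair{h_l}{h_r}\), where \(h_l\) and \(h_r\) are the otherwise unconstrained left and right copies of a high input, is insecure: the query \(\pc \wedge h_l \neq h_r\) is satisfiable and \(\secleak\) returns \(false\). Conversely, if the path predicate entails \(\varphi_l = \varphi_r\), the insecurity query is unsatisfiable and the leak is deemed secure.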
§.§.§ Instantiation of leakage predicates
Symbolic leakage predicates are instantiated according to the concrete leakage
models defined in <ref>.
Note that the analysis can easily be extended to other leakage models by
defining symbolic leakage predicates accordingly.
Program counter. Symbolic leakage predicates ensure that the
outcome of control-flow instructions and the addresses of indirect jumps are the
same in both executions:
\(\sleakfunc_{dj}(\pc, \rel{\varphi}) = \secleak(\rel{\varphi}, \pc)\) and
\(\sleakfunc_{ite}(\pc, \rel{\varphi}) = \secleak(\rlift{eq_0}\ \rel{\varphi}, \pc)\)
where \(eq_0\ x\) returns \(true\) if \(x = 0\) and \(false\) otherwise, and
\(\rlift{eq_0}\) is the lifting of \(eq_0\) to relational formulas. Other
symbolic leakage predicates evaluate to true.
Memory obliviousness. Symbolic leakage predicates ensure that store and
load indexes are the same in both executions:
\(\sleakfunc_{@}(\pc, \rel{\varphi}) = \secleak(\rel{\varphi}, \pc)\).
Other symbolic leakage predicates evaluate to true.
Operand noninterference. Symbolic leakage predicates ensure that
operands (or part of them) are the same in both executions for specific
operators that execute in non constant-time.
For instance, for architectures in which the execution time of shift depends on
the shift count,
\(\sleakfunc_{<<}(\pc, \rel{\varphi},\rel{\psi}) =
\secleak(\rel{\psi}, \pc)\), where \(\rel{\psi}\) is the shift count
(matching the concrete leakage \(\leakfunc_{<<}\)).
Other symbolic leakage predicates evaluate to true.
Size noninterference (special case of operand noninterference).
Symbolic leakage predicates ensure that the size of operands is the same in both
executions for specific operators that execute in non constant-time. For
instance for the division, we have
\(\sleakfunc_{\div}(\pc, \rel{\varphi}, \rel{\psi}) = \secleak(\rlift{size}\ \rel{\varphi}, \pc)\),
where \(size : \bvtype{} \to \bvtype{}\) is a function that returns the size of
a symbolic bitvector and \(\rlift{size}\) its lifting to relational expressions.
Other symbolic leakage predicates evaluate to true.
Constant-time. This policy is a combination of the program counter
and the memory obliviousness policies. Symbolic leakage predicates
\(\sleakfunc_{dj}\) and \(\sleakfunc_{ite}\) are defined like in the program
counter policy, while \(\sleakfunc_{@}\) is defined like in the memory
obliviousness policy. Other symbolic leakage predicates evaluate to true.
Secret-erasure. At the end of the program, a symbolic leakage predicate ensures that the parts
of memory that have been written by the program are the same in both executions:
\begin{equation*}
\sleakfunc_{\bot}(\pc, \smem) =
\bigwedge\limits_{\iota \in addr(\smem)} \secleak(\pair{select(\lproj{\smem},\iota)}{select(\rproj{\smem},\iota)}, \pc)
\end{equation*}
where \(addr(\smem)\) is the list of store indexes in \(\smem\).
§.§.§ Specification of high and low input.
By default, the content of the memory and registers is low so the user has to
specify memory addresses that initially contain secret inputs. Addresses of high
variables can be specified as offsets from the initial stack pointer
(which requires manual reverse engineering), or using dummy
functions to annotate secret variables at source level (which is easier but only
applies to libraries or requires access to source code).
§.§.§ Bug-finding.
A vulnerability is found when the function
\(\secleak(\rel{\varphi}, \pc)\) evaluates to false. In this case, the
insecurity query is satisfiable and the solver returns a model \(M\) such that
\(M \solver{\pi \wedge (\lproj{\rel{\varphi}} \neq \rproj{\rel{\varphi}})}\).
The model $M$ assigns concrete values to variables that satisfy the insecurity
query. Therefore it can be returned as a concrete counterexample that triggers
the vulnerability, along with the current location of the vulnerability.
§.§ Optimizations for binary-level symbolic execution
Relational symbolic execution does not scale in the context of
binary-level analysis (see RelSE in
<Ref>). In order to achieve better
scalability, we enrich our analysis with an optimization, called
on-the-fly-read-over-write (FlyRow in
<ref>), based on
read-over-write <cit.>. This
optimization simplifies expressions and resolves load operations ahead
of the solver, often avoiding resorting to the duplicated memory and
sparing insecurity queries.
We also enrich our analysis with two further optimizations, called
untainting and fault-packing (Unt and
FP in <ref>), specifically targeting RelSE
for information flow analysis.
§.§.§ On-the-fly read-over-write
Solver calls are the main bottleneck of symbolic execution, and reasoning about
\(store\) and \(select\) operations in arrays is particularly
challenging <cit.>. Read-over-write
(Row) <cit.> is a simplification for the theory of
arrays that efficiently resolves \(select\) operations. It is particularly
efficient in the context of binary-level analysis where the memory is
represented as an array and formulas contain many \(store\) and \(select\) operations.
The standard read-over-write optimization <cit.>
has been implemented as a solver-pre-processing, simplifying a formula
before sending it to the solver. While it has proven to be very
efficient to simplify individual formulas of a single
execution <cit.>, we show in <ref>
that it does not scale in the context of relational reasoning, where
formulas model two executions and many queries are sent to the solver.
We therefore introduce on-the-fly read-over-write (FlyRow) to
track secret-dependencies in the memory and spare insecurity queries in the
context of information flow analysis. By keeping track of relational
\(store\) expressions along the execution, it can resolve \(select\)
operations (often without resorting to the duplicated memory) and drastically
reduces the number of queries sent to the solver,
improving the performance of the analysis.
Memory Lookup.
The symbolic memory can be seen as the history of the successive \(store\)
operations beginning with the initial memory \(\mu_0\).
Therefore, a memory \(select\) can be resolved by going back up the history and
comparing the index to load, with indexes previously stored.
Our FlyRow optimization consists in replacing selection in the memory
(<Ref>, load rule, boxed hypothesis) by a
new function
\(\lookup : (\memtype \times \memtype) \times \rlift{\bvtype{32}} \to \rlift{\bvtype{8}}\)
which takes a relational memory and a relational index, and returns the
relational bitvector value stored at that index.
For simplicity, we
define the function for simple indexes and detail the lifting to relational
indexes in <ref>:
\begin{align*}
\lookup(\rel{\mu}_0, \iota) &= \pair{select(\lproj{\mu_0}, \iota)}{select(\rproj{\mu_0}, \iota)}\\
\lookup(\rel{\mu}_n, \iota) &=
\begin{cases}
\simple{\varphi_l} & \text{if } \compare(\iota,\kappa) = true \text{ and } \varphi_l = \varphi_r\\
\pair{\varphi_l}{\varphi_r} & \text{if } \compare(\iota,\kappa) = true \text{ and } \varphi_l \neq \varphi_r\\
\lookup(\rel{\mu}_{n-1}, \iota) & \text{if } \compare(\iota,\kappa) = false\\
\pair{select(\lproj{\mu_n}, \iota)}{select(\rproj{\mu_n}, \iota)} & \text{if } \compare(\iota,\kappa) = \bot
\end{cases}\\
\text{where } \rel{\mu}_n &\mydef \pair{store(\lproj{\mu_{n-1}},\kappa,\varphi_l)}{store(\rproj{\mu_{n-1}},\kappa,\varphi_r)}
\end{align*}
where \(\compare(\iota,\kappa)\) is a comparison function relying on
syntactic term equality, which returns true (resp. false) only
if \(\iota\) and \(\kappa\) are equal (resp. different) in any
interpretation. If the terms are not comparable, it is undefined,
denoted \(\bot\).
Let us consider a memory \(\rel{\mu}\) obtained from the initial memory \(\rel{\mu}_0\) by a store of \(\pair{\beta}{\beta'}\) at index \(ebp-8\), followed by a store of \(\simple{\lambda}\) at index \(ebp-4\):
* A call to \(\lookup(\rel{\mu}, ebp - 4)\) returns \(\lambda\).
* A call to \(\lookup(\rel{\mu}, ebp - 8)\) first compares the indexes
\([ebp-4]\) and \([ebp-8]\). Because it can determine that these
indexes are syntactically distinct, the function moves to the
second element, determines the syntactic equality of indexes and
returns \(\pair{\beta}{\beta'}\).
* A call to \(\lookup(\rel{\mu}, esp)\) tries to compare the indexes
\([ebp-4]\) and \([esp]\). Without further information, the equality
or disequality of \(ebp\) and \(esp\) cannot be determined, therefore
the lookup is aborted and the \(select\) operation cannot be resolved.
Term rewriting.
To improve the conclusiveness of syntactic equality checks for the
read-over-write, the terms are assumed to be in normalized form
\(\beta + o\) where \(\beta\) is a base (i.e., an expression on
symbolic variables) and \(o\) is a constant offset.
The comparison of two terms \(\beta + o\) and \(\beta' + o'\) in
normalized form can be efficiently computed as follows: if the bases
\(\beta\) and \(\beta'\) are syntactically equal, then return
\(o = o'\), otherwise the terms are not comparable.
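To fix ideas, here is a compact C sketch of this lookup together with the normalized-form comparison. It is our illustration under simplifying assumptions (terms are restricted to a base variable plus a constant offset, relational values are opaque pairs, and select_both stands for the duplicated solver-level select); the actual tool is not implemented this way.

\begin{verbatim}
#include <string.h>

typedef enum { CMP_EQ, CMP_NEQ, CMP_UNKNOWN } cmp_t;

/* A term in normalized form: base variable + constant offset. */
typedef struct { const char *base; int offset; } term_t;

/* A relational value <l|r>; l == r encodes a simple value <v>. */
typedef struct { int l, r; } rel_t;

/* The symbolic memory as the history of relational stores. */
typedef struct store_s {
    term_t idx;             /* store index kappa                     */
    rel_t  val;             /* stored relational value <phi_l|phi_r> */
    struct store_s *prev;   /* previous store; NULL = initial memory */
} store_t;

/* Syntactic comparison: equal bases are decided by the offsets;
 * distinct bases are not comparable (bottom). */
static cmp_t compare(term_t a, term_t b) {
    if (strcmp(a.base, b.base) == 0)
        return (a.offset == b.offset) ? CMP_EQ : CMP_NEQ;
    return CMP_UNKNOWN;
}

/* Stands for the duplicated select on both memories (solver side). */
static rel_t select_both(term_t iota) {
    (void)iota;
    return (rel_t){ -1, -1 };
}

/* FlyRow lookup: walk back through the store history. */
static rel_t lookup(const store_t *mem, term_t iota) {
    for (const store_t *s = mem; s != NULL; s = s->prev) {
        switch (compare(iota, s->idx)) {
        case CMP_EQ:      return s->val;            /* syntactic hit   */
        case CMP_NEQ:     break;                    /* keep walking    */
        case CMP_UNKNOWN: return select_both(iota); /* abort to solver */
        }
    }
    return select_both(iota);                       /* initial memory  */
}
\end{verbatim}

On the example above, a lookup at \(ebp-4\) hits the most recent store, a lookup at \(ebp-8\) walks past it and hits the previous one, and a lookup at \(esp\) is aborted because \(esp\) and \(ebp\) are not comparable.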
In order to apply FlyRow, we normalize all the formulas
created during the symbolic execution using rewriting rules
similar to those defined in <cit.>. An excerpt
of these rules is given in <ref>.
Intuitively, these rewriting rules put symbolic variables at the
beginning of the term and the constants at the end
(see <ref>).
[Normalized formula]
\(\normalize\ ((eax + 4) + (ebx + 4)) = (eax + ebx) + 8 \)
In order to increase the conclusiveness of FlyRow, we also need
variable inlining. However, inlining all variables is not a viable option
as it would lead to an exponential term size growth.
Instead, we define a canonical form \(x + o\) where \(x\) is a bitvector
variable, and \(o\) is a constant bitvector offset, and we only inline formulas
that are in canonical form (see rule canonical-assign in
<ref>). It enables rewriting of most of the memory
accesses on the stack, which are of the form ebp + bv, while
avoiding term-size explosion.
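For instance (our illustration), after the assignment eax := ebp + (-4), the register map entry for eax is in canonical form and is inlined, so a subsequent access at index eax + 12 normalizes to ebp + 8 and can be compared syntactically against previous stack stores.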
§.§.§ Untainting
After the evaluation of a rule with the predicate $\secleak$ for a
duplicated expression \(\pair{\varphi_l}{\varphi_r} \), we know that
the equality \(\varphi_l = \varphi_r\) holds in the current
configuration. From this equality, we can deduce useful information
about variables that must be equal in both executions. We can then
propagate this information to the register map and memory in order to
spare subsequent insecurity queries concerning these variables.
For instance, consider the leak of the duplicated expression
\(\pair{x_l + 1}{x_r + 1}\), where \(x_l\) and \(x_r\) are symbolic
variables. If the leak is secure, we can deduce that \(x_l = x_r\) and
replace all occurrences of \(x_r\) by \(x_l\) in the rest of the
symbolic execution.
We define in <ref> a function
\(\untaint(\regmap,\smem, \rel{\varphi})\) which takes a register map
\(\regmap\), a memory \(\smem\), and a duplicated expression \(\rel{\varphi}\).
It deduces variable equalities from \(\rel{\varphi}\), propagates them in
\(\regmap\) and \(\smem\), and returns a pair of updated register map and memory
\((\regmap', \smem')\).
Intuitively, if the equality of variables \(x_l\) and \(x_r\) can be deduced
from \(\secleak(\rel{\varphi}, \pc)\), the \(untaint\) function replaces
occurrences of \(x_r\) by \(x_l\) in the memory and the register map. As a
result, a duplicated expression \(\pair{x_l}{x_r}\) would be replaced by the
simple expression \(\simple{x_l}\) in the rest of the execution[We
implement untainting with a cache of “untainted variables” that are
substituted in the program copy during symbolic evaluation of expressions.].
§.§.§ Fault-packing
Symbolic evaluation generates a large number of insecurity checks for
some leakage models (e.g. memory obliviousness, constant-time). The
fault-packing (FP) optimization gathers these insecurity
checks along a path and postpones their resolution to the end of the
basic block.
For example, let us consider a basic-block with a path predicate
\(\pc\). If there are two memory accesses along the basic block that
evaluate to \(\pair{\lproj{\varphi}}{\rproj{\varphi}}\) and
\(\pair{\lproj{\phi}}{\rproj{\phi}}\), we would normally generate
two insecurity queries
\((\pc \wedge \lproj{\varphi} \neq \rproj{\varphi})\) and
\((\pc \wedge \lproj{\phi} \neq \rproj{\phi})\)—one for each
memory access. Fault-packing regroups these checks into a single query
\(\big(\pc \wedge ((\lproj{\varphi} \neq \rproj{\varphi}) \lor
(\lproj{\phi} \neq \rproj{\phi}))\big)\) sent to the solver at the
end of the basic block.
This optimization reduces the number of insecurity queries sent to the solver
and thus helps improve performance. However, it degrades the precision of the
counterexample: while checking each instruction individually precisely points to
vulnerable instructions, fault-packing reduces accuracy to vulnerable basic
blocks only. Note that even though disjunctive constraints are usually harder to
solve than pure conjunctive constraints, those introduced by FP are
very simple—they are all evaluated under the same path predicate and are not
nested. Therefore, they never result in a performance degradation
(see <ref>).
§.§ Theorems
Theorems and proof sketches are given for the constant-time property;
full proofs are given in <ref>, and the adaptation of the theorems and
proofs to other leakage models is discussed in <ref>.
In order to define properties of our symbolic execution, we use
$\cleval{}^k$ (resp. $\ieval{}^k$), with $k$ a natural number, to
denote $k$ steps in the concrete (resp. symbolic) evaluation.
If a program \(\prog{}\) is constant-time up to \(k\) then for all
\(i \leq k\), \(\prog{}\) is constant-time up to \(i\).
Throughout this section, we assume that the theory is correct and complete
w.r.t. our concrete evaluation.
The satisfiability problem for the theory is
decidable <cit.>. Therefore we make the
following hypothesis on the solver:
We suppose that the SMT solver is correct, complete, and
always terminates. Therefore, for a formula \(\varphi\),
\(M \sat \varphi \iff M \solver \varphi\).
We assume that the program \(\prog{}\) is defined on all locations computed
during the symbolic execution—notably by the function \(\toloc\) in rule
d_jump. Under this hypothesis, and because the solver always
terminates (<ref>), symbolic execution is stuck if and only if a
leakage predicate evaluates to false. In this case, an expression
\(\rel{\varphi}\) is leaked such that \(\secleak(\rel{\varphi}, \pc)\)
evaluates to \(false\) and the solver returns a model \(M\) such that
\({M \sat \pc \wedge (\lproj{\rel{\varphi}} \neq \rproj{\rel{\varphi}})}\)
(from <ref>).
The concrete semantics is deterministic (cf. the rules of the concrete
semantics in <ref>).
We restrict our analysis to safe programs (e.g. no division by 0,
illegal indirect jump, segmentation fault). Under this hypothesis, concrete
execution never gets stuck.
We define a concretization relation $\concsym{p}{M}$ between
concrete and symbolic configurations, where \(M\) is a model and
\(p \in \{l,r\}\) is a projection on the left or right side of a
symbolic configuration. Intuitively, the relation
$c\! \concsym{p}{M}\! s$ is the concretization of the \(p\)-side of
the symbolic state \(s\) with the model \(M\).
Let \(c \mydef \cconf{\locvar_1}{\cregmap}{\cmem}\) and
\(s \mydef \iconfold{\locvar_2}{\regmap}{\smem}{\pc}\). Formally
$c \concsym{p}{M} s$ holds iff \(M \sat \pc\),
\(\locvar_1 = \locvar_2\) and for all expression \(e\), either the
symbolic evaluation of \(e\) gets stuck or we have
\begin{equation*}
\econfold{\regmap}{\smem}{\pc}{e} \eeval{} \rel{\varphi} ~\wedge~ %
(M(\proj{\rel{\varphi}}) = \mathtt{bv} \iff c~e \ceeval{} \mathtt{bv}) %
\end{equation*}
Notice that because both sides of an initial configuration \(s_0\) are
low-equivalent, the following proposition holds:
For all concrete configurations \(\cconfvar_0\) and \(\cconfvar_0'\)
such that
\(\cconfvar_0 \concsym{l}{M} s_0 ~\wedge~ \cconfvar'_0
\concsym{r}{M} s_0\), then \(\cconfvar_0 \loweq \cconfvar_0'\).
The following lemma expresses that when the symbolic evaluation is
stuck on a state \(s_k\), there exist concrete configurations derived
from \(s_k\) which produce distinct leakages.
Let \(s_k\) be a symbolic configuration obtained after \(k\) steps. If \(s_k\)
is stuck, then there exists a model \(M\) such that for each concrete
configurations \(c_k \concsym{l}{M} s_k\) and \(c_k' \concsym{r}{M} s_k\), the
executions from \(c_k\) and \(c_k'\) produce distinct leakages.
(Full proof in <ref>)
The proof goes by case analysis on the symbolic evaluation of \(s_{k}\).
Let \(s_{k}\) be a symbolic configuration that is stuck (i.e., a
symbolic leakage predicate evaluates to \(false\) with a model \(M\)), then
\(s_{k}\) can be concretized using the model \(M\), producing concrete
states \(c_k\) and \(c_k'\) such that \(c_{k} \cleval{\leakvar} c_{k+1}\)
and \(c_{k}' \cleval{\leakvar'} c_{k+1}'\). Finally, because the symbolic
leakage model does not over-approximate the concrete leakage, i.e., each
symbolic leak corresponds to a concrete leak, we have
\(\leakvar \neq \leakvar'\).
The following lemma expresses that when symbolic evaluation does not
get stuck up to \(k\), then for each pair of concrete executions
following the same path up to \(k\), there exists a corresponding
symbolic execution.
Let $s_0$ be a symbolic initial configuration for a program $P$ that does not
get stuck up to \(k\). For every concrete states $c_0$, $c_k$, $c_0'$, $c_k'$
and model $M$ such that
${c_0 \concsym{l}{M} s_0} ~\wedge~ {c_0' \concsym{r}{M} s_0}$,
if $c_0 \cleval{\leakvar}^k c_k$ and $c_0' \cleval{\leakvar'}^k c_k'$ follow
the same path, then there exists a symbolic configuration \(s_k\) and a model
\(M'\) such that:
\[s_0 \ieval{}^k s_k ~\wedge~ %
c_k \concsym{l}{M'} s_k ~\wedge~ c_k' \concsym{r}{M'} s_k\]
(Full proof in <ref>)
The proof goes by induction on the number of steps \(k\). For each concrete
step \(c_{k-1} \ceval{} c_{k}\) and \(c_{k-1}' \ceval{} c_{k}'\), we show
that, as long as they follow the same path, there is a symbolic step from
\(s_{k-1}\) to a state \(s_{k}'\) that models \(c_{k}\) and \(c_{k}'\). This
follows from the fact that our symbolic execution does not make any over-approximation.
§.§.§ Correctness of RelSE
The following theorem claims the correctness of our symbolic
execution, stating that for each symbolic execution and model \(M\)
satisfying the path predicate, the concretization of the symbolic
execution with \(M\) corresponds to a valid concrete execution (no over-approximation).
[Correctness of RelSE]theoremcorrectness
For every symbolic configurations $s_0$, $s_k$ such that
\(s_0 \ieval{}^k s_k\) and for every concrete configurations
\(c_0\), \(c_k\) and model \(M\), such that
\(c_0 \concsym{p}{M} s_0\) and \(c_k \concsym{p}{M} s_k\),
there exists a concrete execution \(c_0 \cleval{}^k c_k\).
(Full proof in <ref>)
The proof goes by induction on the number of steps \(k\). For each symbolic step
\(s_{k-1} \ieval{} s_{k}\) and model \(M_{k}\) such that
\(c_{k-1} \concsym{p}{M_{k}} s_{k-1}\) and \(c_{k} \concsym{p}{M_{k}} s_{k}\),
there exists a step \(c_{k-1} \ceval{} c_{k}\) in concrete execution. For each
rule, we show that there exists a unique step from \(c_{k-1}\) to a state
\(c_{k}'\) (from <ref>), and, because
there is no over-approximation in symbolic execution, \(c_{k}'\) satisfies
\(c_{k}' \concsym{p}{M_{k}} s_{k}\).
§.§.§ Correct bug-finding for CT
The following theorem expresses that when the symbolic execution gets
stuck, then the program is not constant-time.
[Bug-Finding for CT]theorembugfinding
Let $s_0$ be an initial symbolic configuration for a program $\prog$. If
symbolic evaluation gets stuck in a configuration \(s_k\) then $\prog$
is not constant-time at step \(k\). Formally, if there is a symbolic
evaluation \(s_0 \ieval{}^k s_k\) such that \(s_k\) is stuck, then
there exists a model \(M\) and concrete configurations
\(\cconfvar_0 \concsym{l}{M} s_0\),
\(\cconfvar_0' \concsym{r}{M} s_0 \),
\(\cconfvar_k \concsym{l}{M} s_k \) and
\(\cconfvar_k' \concsym{r}{M} s_k\) such that,
\begin{equation*}%
\cconfvar_0 \loweq \cconfvar_0' ~\wedge~%
\cconfvar_0 \cleval{\leakvar}^k \cconfvar_k \cleval{\leakvar_{k}} \cconfvar_{k+1} ~\wedge~ %
\cconfvar_0' \cleval{\leakvar'}^k \cconfvar'_k \cleval{\leakvar_{k}'} \cconfvar_{k+1}' %
\wedge \leakvar_{k} \neq \leakvar_{k}' %
\end{equation*}
Let us consider symbolic configurations \(s_0\) and \(s_k\) such
that \(s_0 \ieval{}^k s_k\) and \(s_k\) is stuck.
From <ref>, there is a model \(M\) and concrete
configurations \(c_k\) and \(c_k'\) such that \(c_{k} \concsym{l}{M} s_{k}\)
and \(c_{k}' \concsym{r}{M} s_{k}\), and \(c_{k} \cleval{\leakvar_k} c_{k+1}\)
and \(c_{k}' \cleval{\leakvar_k'} c_{k+1}'\) with
\(\leakvar_k \neq \leakvar_k'\).
Additionally, let \(c_0, c_0'\) be concrete configurations such that
\(c_0 \concsym{l}{M} s_0\) and \(c_0' \concsym{r}{M} s_0\).
From <ref>, we have \(c_0 \loweq c_0'\), and from <ref>,
there are concrete executions \(c_0 \cleval{\leakvar}^{k} c_{k}\) and
\(c_0' \cleval{\leakvar'}^{k} c_{k}'\).
Therefore, we have
\(\cconfvar_0 \cleval{\leakvar}^k \cconfvar_k \cleval{\leakvar_{k}} \cconfvar_{k+1}\) and
\(\cconfvar_0' \cleval{\leakvar'}^k \cconfvar'_k \cleval{\leakvar_{k}'} \cconfvar_{k+1}'\)
with \(c_0 \loweq c_0'\) and \(\leakvar_k \neq \leakvar_k'\), meaning that
\(\prog\) is not constant-time at step \(k\).
§.§.§ Relative completeness of RelSE
The following theorem claims the completeness of our
symbolic execution relatively to an initial symbolic state. If the
program is constant-time up to \(k\), then for each pair of concrete
executions up to \(k\), there exists a corresponding symbolic
execution (no under-approximation).
Notice that our definition of completeness differs from standard definitions of
completeness in SE <cit.>. Here, completeness up to
\(k\) only applies to programs that are constant-time up to \(k\). This directly
follows from the fact that our symbolic evaluation blocks on errors while
concrete execution continues.
[Relative Completeness of RelSE]
Let \(P\) be a program constant-time up to \(k\) and $s_{0}$ be a
symbolic initial configuration for $P$. For every concrete states
$c_0$, $c_k$, $c_0'$, $c_k'$, and model $M$ such that
${c_0 \concsym{l}{M} s_0} ~\wedge~ {c_0' \concsym{r}{M} s_0}$,
if $c_0 \cleval{\leakvar}^k c_k$ and $c_0' \cleval{\leakvar}^k c_k'$
then there exists a symbolic configuration \(s_k\) and a model
\(M'\) such that:
\[s_0 \ieval{}^k s_k ~\wedge~ %
c_k \concsym{l}{M'} s_k ~\wedge~ c_k' \concsym{r}{M'} s_k\]
First, note that from <ref> and the hypothesis
that \(P\) is constant-time up to \(k\), we know that symbolic
evaluation from \(s_0\) does not get stuck up to \(k\). Knowing
this, we can apply <ref>, which directly entails the result.
§.§.§ Correct bounded-verification for CT
Finally, we prove that if symbolic execution does not get stuck due to a
satisfiable insecurity query, then the program is constant-time.
[Bounded-Verification for CT]
Let $s_0$ be a symbolic initial configuration for a program $P$. If
the symbolic evaluation does not get stuck, then $P$ is
constant-time w.r.t. $s_0$. Formally, if for all $k$,
$s_0 \ieval{}^k s_k$ then for all initial configurations
\(\cconfvar_0\) and \(\cconfvar_0'\) and model \(M\) such that
\(\cconfvar_0 \concsym{l}{M} s_0\), and
\(\cconfvar'_0 \concsym{r}{M} s_0\),
\begin{equation*}
\cconfvar_0 \cleval{\leakvar}^k \cconfvar_k ~\wedge~ %
\cconfvar_0' \cleval{\leakvar'}^k \cconfvar'_k %
\implies \leakvar = \leakvar'
\end{equation*}
Additionally, if \(s_0\) is fully symbolic, then \(P\) is constant-time.
(Full proof in <ref>)
The proof goes by induction on the number of steps. If the program is constant-time up
to \(k-1\) (induction hypothesis) then from <ref> there is
a symbolic execution for any configurations \(c_{k-1}\) and \(c_{k-1}'\). If
these configurations produce distinct leakages, then symbolic execution would be stuck
at step \(k-1\), which is a contradiction. This relies on the fact that
the symbolic leakage model does not under-approximate the concrete leakage.
§.§ Adapting theorems and proofs for other leakage models
Theorems and proofs in <ref> are given for the
constant-time property. In this section we discuss how the theorems
and proofs given in <ref> can be adapted to other leakage models.
Correctness of our symbolic execution (<ref>) holds
regardless of the leakage model considered. Indeed, we showed that our
symbolic execution makes no over-approximation, without using the
leakage model. Moreover, we can show that (<ref>)
still holds for other leakage models because symbolic leakage
predicates cannot remove constraints from the symbolic state (and
therefore cannot introduce over-approximations).
Bug-finding (<ref>) can also be easily adapted to other leakage models
as long as the symbolic leakage model does not over-approximate the concrete
leakage model. In particular, it still holds for secret-erasure. The adaptation
of <ref> to secret-erasure only requires to show that
<ref> holds for the halt rule.
Completeness (<ref>) follows from <ref>
and <ref> and thus can be adapted to other leakage models on two conditions.
First, because our symbolic semantics is blocking on errors, it only applies to
secure programs and its proof relies on the absence of false alarm—which is
given as long as the symbolic leakage model does not over-approximate the
concrete leakage model (<ref>).
Second, <ref> only applies to pairs of concrete executions
following the same path. Therefore, <ref> only holds for
leakage models leaking the control-flow (i.e., that include the program counter
leakage model). Note that these two conditions are met in the case of secret-erasure.
Bounded-verification (<ref>) can be adapted to other leakage models on
two conditions. First, because it builds on <ref> which only
applies to pairs of concrete executions following the same path, it only holds
for leakage models leaking the control-flow (i.e., that include the program
counter leakage model). Second, it requires to show that the symbolic leakage
model does not under-approximate the concrete leakage model: if a leakage occurs
in concrete execution then this leakage is captured in symbolic execution. These
conditions hold for our definition of secret-erasure; we only need to adapt the
proof for the halt rule, as the \(\filter\) function delays the
leakage of store values until termination.
§ EXPERIMENTAL RESULTS
Research questions.
We investigate the following research questions:
RQ1. Effectiveness: constant-time analysis on real-world cryptographic code. Is our tool able to perform constant-time analysis on real cryptographic binaries, for both bug-finding and bounded-verification?
RQ2. Genericity. Is our tool generic enough to encompass several
architectures and compilers?
RQ3. Comparison with standard approaches. How does our tool
scale compared to traditional approaches based on self-composition (SC) and RelSE?
RQ4. Impact of simplifications. What are the respective impacts of
our different simplifications?
RQ5. Comparison vs. SE. What is the overhead of our tool compared
to standard symbolic execution (SE), and can our simplifications be useful for standard SE?
RQ6. Effectiveness: large-scale analysis of scrubbing functions.
Is our tool able to verify the secret-erasure property on a large number of binaries?
Setup. Experiments were performed on a laptop with an
Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz processor and 32GB of RAM.
Similarly to related work (e.g. <cit.>),
the initial stack pointer is set to a concrete value, we start the analysis
from the beginning of the function, we statically allocate data structures,
and the lengths of keys and buffers are fixed. When not stated otherwise, programs
are compiled for an x86 (32-bit) architecture with their default compiler setup.
Legend. Throughout this section, #\(\text{I}\) denotes the number
of static instructions of a program, #\(\text{I}_{unr}\) is the number of
unrolled instructions explored by the analysis, P is the number of program paths
explored, and Time is the execution time given in seconds; we also report the
number of bugs (vulnerable instructions) found.
Status indicates whether a program is secure (exhaustive exploration),
insecure, or hit the timeout (set to 1 hour).
Additionally, for each program, we report the type of operation
performed and the length of the secret key (Key) and message (Msg)
when applicable (in bytes).
§.§ Effectiveness (RQ1)
We carry out two experiments to assess the effectiveness of our tool:
* bounded-verification of secure cryptographic primitives previously
verified at source- or LLVM-level <cit.>
* automatic replay of known bug
studies <cit.> (<ref>).
Overall, our study encompasses 338 representative code samples for a
total of 70k machine instructions and 22M unrolled instructions (i.e.,
instructions explored by our tool).
§.§.§ Bounded-Verification.
We analyze a large range of secure constant-time cryptographic
primitives (296 samples, 64k instructions), comprising:
* Several basic constant-time utility functions such as selection
functions <cit.>, sort
functions <cit.>
and utility functions from
HACL* <cit.>
and OpenSSL <cit.>, compiled with clang (versions 3.0, 3.9 and 7.1) and gcc (versions 5.4 and 8.3), for different optimization levels;
* A set of representative constant-time cryptographic primitives
already studied in the literature on source
code <cit.> or
LLVM <cit.>, including implementations of TEA <cit.>,
Curve25519-donna <cit.>, and
encryption functions taken from
BearSSL <cit.>, cryptographic primitives from
libsodium <cit.>, and the constant-time
padding-removal function, extracted from
OpenSSL <cit.>;
* A set of functions from the HACL*
library <cit.>.
Results are reported in <ref>. For each program, our tool is
able to perform an exhaustive exploration without finding any violations of
constant-time in less than 20 minutes. Note that exhaustive exploration is
possible because in cryptographic programs, fixing the input size bounds loops.
Additionally, the scalability of our tool according to the size
of the input data is evaluated in <ref> and unbounded loops
are discussed in <ref>.
These results show that our tool can
perform bounded-verification of real-world cryptographic implementations at
binary-level in a reasonable time, which was impractical with previous
approaches based on self-composition or standard RelSE
(see <ref>). Moreover, this is the first automatic
constant-time analysis of these cryptographic libraries at the binary-level.
§.§.§ Bug-Finding.
We take three known bug studies from the
literature <cit.>
and replay them automatically at binary-level (42 samples, 6k
instructions), including:
(1) binaries compiled from constant-time sources of a selection
function <cit.> and sort
functions <cit.>,
(2) non-constant-time versions of two functions from
BearSSL <cit.>,
(3) the non-constant-time version of OpenSSL's padding-removal function,
responsible for the famous Lucky13 attack <cit.>.
Results are reported in <ref> with fault-packing
disabled to report vulnerabilities at the instruction level. All
bugs have been found within the timeout.
Interestingly, we found 3 unexpected binary-level vulnerabilities
(from secure source codes) that slipped through prior analysis:
* a selection function <cit.> was
deemed secure through binary-level manual inspection; still, we confirm
that some compiler versions, with optimizations enabled, introduce a
secret-dependent conditional jump which violates constant-time;
* two functions verified by ct-verif <cit.> (on LLVM bitcode) are
vulnerable when compiled with other compiler configurations (details in <ref>).
Conclusion (RQ1). We perform an extensive analysis over 338
samples of representative cryptographic primitives studied in the
literature <cit.>.
Overall, it demonstrates that our tool does scale to realistic applications for
both bug-finding and bounded-verification. As a side-result, we also proved
CT-secure 296 binaries of interest.
§.§ Preservation of Constant-Time by Compilers (RQ2).
In this section, we present an easily extensible framework, based on our tool,
to check constant-time for small programs under multiple compiler setups.
Using this framework, we replay a prior manual
study <cit.>, which analyzed whether
optimizations break the constant-time property, for 5 different versions of a
selection function.
We reproduce their analysis in an automatic manner and extend it
significantly, adding:
29 new functions, 3 newer versions of clang (7.1.0, 9.0.1 and
11.0.1), the gcc compiler, and 2 new architectures (while only one was
considered in the initial study), for a total of 4148 configurations
(192 in the initial study).
Additionally, we investigate the impact of individual optimizations
on the preservation of constant-time. For clang, we target the pass
which converts x86 cmov instructions into branches when profitable and
which is known to play a role in the preservation of
constant-time <cit.>. In particular, we evaluate the impact of
selectively disabling this optimization by passing dedicated flags to
the compiler.
For gcc, we target the if-conversion passes, which transform conditional
jumps into branchless equivalents. In particular, we evaluate the impact of
selectively enabling this optimization, and the impact of selectively
disabling it.
Bear in mind that the ARM architecture does not feature
cmov instructions but x86 does.
Results are presented in <ref>. Results for the disabled pass
are not applicable to older compiler versions
(denoted by - in the table), as these versions do not recognize
the corresponding flag.
We confirm the main conclusion of Simon et
al. <cit.> that clang is more likely to
optimize away constant-time protections as the optimization level increases.
However, contrary to their work, our experiments show that newer
versions of clang are not necessarily more likely than older
ones to break constant-time (e.g. one of our samples is compiled to
non-constant-time code with an older version but not with a newer one).
Surprisingly, in contrast with clang, gcc optimizations tend
to remove branches and thus are less likely to introduce vulnerabilities in
constant-time code. Especially, for ARM, gcc produces secure
binaries from the insecure source codes. Indeed, the compiler takes advantage
of the many ARM conditional instructions to remove conditional jumps.
This also applies to the x86 architecture but only for some optimization levels.
We conclude that the if-conversion passes of gcc play a
role here, as disabling them produces
insecure binaries.
However, the fact that some configurations remain
insecure shows that the if-conversion passes must be combined with other
optimizations to effectively remove conditional jumps.
Finally, we found that constant-time sort functions, taken from
the benchmark of the ct-verif <cit.>
tool, can be compiled to insecure binaries for two different reasons
(both detailed in <ref>):
* For architectures that do not feature cmov instructions and old
compilers, conditional select LLVM instructions are compiled to
conditional jumps. These violations are introduced in backend passes
of the compiler, making them out of reach of LLVM verification tools
like ct-verif[We did confirm that ct-verif, with the appropriate
settings, does not report the vulnerability.];
* More interestingly, we found that for more recent architectures
featuring cmov, the use of
cmov might introduce secret-dependent memory
accesses. Indeed, the compiler introduces a secret-dependent pointer
selection, done with cmov, which results in a memory-based
leak when the pointer is dereferenced.
We also remark that disabling the cmov-conversion pass does not change
anything in our settings.
Conclusion (RQ2).
This study shows that our tool is generic in the sense that it can be applied
with different versions and options of clang and gcc, over x86
and ARM. We also get the following interesting results:
* We found that, contrary to clang, gcc
optimizations tend to help enforce constant-time: gcc
preserves constant-time in all our examples and even
sometimes produces secure binaries from insecure sources thanks to the
if-conversion passes;
* We found that backend passes of clang can introduce
vulnerabilities in code that is secure at the LLVM level;
* We found that the use of cmov
instructions might introduce secret-dependent memory accesses;
* Finally, this study shows that the preservation of
constant-time by compilers depends on multiple factors and cannot simply
rely on enabling/disabling optimizations. Instead, compiler-based
hardening <cit.> or
property preservation <cit.> seem promising
directions, in which could be used for validation.
§.§ Comparison against Standard Techniques (RQ3,RQ4,RQ5)
We compare our tool with standard techniques based on
self-composition and relational symbolic execution (RelSE)
(<ref>), then we analyze the performance of our different
simplifications (<ref>), and finally we investigate the overhead of
our tool compared to standard SE, and whether our simplifications are
useful for SE (<ref>).
Experiments are performed on the programs introduced in
<ref> for bug-finding and bounded-verification
(338 samples, 70k instructions).
We report the following metrics: total number of unrolled instruction
#\(\text{I}_{unr}\), number of instruction explored per seconds
(#\(\text{I}_{unr}\)/s), total number of queries sent to the solver (#Q),
number of exploration (resp. insecurity) queries (\(\text{\#Q}_{\text{e}}\)),
(resp. \(\text{\#Q}_{\text{i}}\)), total execution time (T), timeouts
(), programs proven secure (), programs proven insecure
(), unknown status (). Timeout is set to 3600 seconds.
§.§.§ Comparison vs. Standard Approaches (RQ3).
We evaluate our tool against SC and RelSE.
Since no implementation of these methods fits our particular use cases,
we implement them directly in our tool. RelSE is obtained by
disabling optimizations (<ref>), while
SC is implemented on top of RelSE by duplicating low
inputs instead of sharing them and adding the adequate preconditions.
Results are given in <ref>.
While RelSE performs slightly better than SC
(\(1.6 \times\) speedup in terms of #\(\text{I}_{unr}/s\)) thanks
to a noticeable reduction of the number of queries (approximately 50%), both
techniques are not efficient enough on binary code:
RelSE times out in 13 cases and achieves an analysis speed of only 6.2
instructions per second while SC is worse.
Our tool completely outperforms both previous approaches:
* The optimizations implemented in our tool drastically reduce the
number of queries sent to the solver (\(57\times\) fewer insecurity
queries than RelSE);
* Our tool reports no timeout, and is \(1000\times\) faster than
RelSE and \(1600\times\) faster than SC in terms of #\(\text{I}_{unr}\)/s;
* Our tool can perform bounded-verification of large programs that were out
of reach of prior approaches.
§.§.§ Performance of Simplifications (RQ4)
We evaluate the performance of our individual optimizations: on-the-fly
read-over-write (FlyRow), untainting (Unt) and fault-packing
(FP). Results are reported in <ref>:
* FlyRow is the major source of improvement in ,
drastically reducing the number of queries sent to the solver and
allowing a \(718\times\) speedup compared to RelSE in terms of #\(\text{I}_{unr}\)/s;
* Untainting and fault-packing do have a positive impact on
RelSE—untainting alone reduces the number of queries by
almost 50%, the two optimizations together yield a \(2\times\) speedup;
* Yet, their impact is more modest once FlyRow is activated:
untainting leads to a very slight slowdown, while fault-packing achieves
a \(1.4\times\) speedup.
Still, FP can be interesting on some particular programs,
when the precision of the bug report is not the priority. Consider for instance
the non-constant-time version of a function from BearSSL:
without FP, our tool reports 32 vulnerable
instructions in 1580 seconds, while with FP it reports 2
vulnerable basic blocks (covering the 32 vulnerable instructions) in only
146 seconds (almost \(11 \times\) faster).
§.§.§ Comparison vs. Standard SE (RQ5).
We investigate the overhead of our tool compared to standard symbolic
execution (SE); evaluate whether on-the-fly read-over-write
(FlyRow) can improve performance of SE; and also compare
FlyRow to a recent implementation of
read-over-write <cit.> (PostRow),
implemented after symbolic execution as a formula pre-processing step.
Standard symbolic execution is directly implemented in the Rel module
and models a single execution of the program with exploration queries but
without insecurity queries.
* Our tool, compared to our best setting for symbolic execution
(SE+FlyRow), only has an overhead of \(2\times\) in terms of
#\(\text{I}_{unr}/s\). Hence constant-time comes with an acceptable
overhead on top of standard symbolic execution. This is consistent with
the fact that our simplifications discard most insecurity queries,
leaving only the exploration queries, which are also part of standard SE;
* For RelSE, FlyRow completely outperforms PostRow.
First, PostRow is not designed for relational verification and
duplicates the memory. Second, PostRow simplifications are not
propagated along the execution and must be recomputed for every query,
producing a significant simplification overhead. On the contrary,
FlyRow models a single memory containing relational values and
propagates them along the symbolic execution.
* FlyRow also improves the performance of standard SE by a factor
\(643\) in our experiments, performing much better than PostRow
(\(430\times\) faster).
Conclusion (RQ3, RQ4, RQ5).
Our tool performs significantly better than previous approaches to
relational symbolic execution (\(1000\times\) speedup vs. RelSE). The
main source of improvement is the on-the-fly read-over-write simplification
(FlyRow), which yields a \(718\times\) speedup vs. RelSE and
sends \(57 \times\) fewer insecurity queries to the solver.
Note that, in our context, FlyRow outperforms state-of-the-art
binary-level simplifications, as they are not designed to efficiently cope with
relational properties and introduce a significant simplification-overhead at
every query.
Fault-packing and untainting, while effective over RelSE, have a much
smaller impact once FlyRow is activated; fault-packing can still be
useful on insecure programs.
Finally, in our experiments, FlyRow significantly improves performance
of standard symbolic-execution (\(643 \times\) speedup).
§.§ Preservation of Secret-Erasure by Compilers (RQ6)
Secret-erasure is usually enforced using scrubbing functions—functions
that overwrite a given part of the memory with dummy values.
In this section we present a framework to automatically check the preservation
of secret-erasure for multiple scrubbing functions and compilers. This framework
is open source and can be easily extended with new compilers and new
scrubbing functions. Using it, we analyze 17 scrubbing functions; with
multiple versions of clang (3.0, 3.9, 7.1.0, 9.0.1 and 11.0.1) and gcc
(5.4.0, 6.2.0, 7.2.0, 8.3.0 and 10.2.0); and multiple optimization
levels.
We also investigate the impact of disabling individual optimizations
(those related to the dead-store-elimination pass) on the preservation of
secret-erasure (cf. <ref>).
This accounts for a total of 1156 binaries and extends a
prior manual study on scrubbing
mechanisms <cit.>.
In this section, clang-all-versions (resp.
gcc-all-versions) refer to all the aforementioned clang (resp. gcc) versions;
and the tables indicate, for each program, whether it is secure or insecure
w.r.t. secret-erasure.
§.§.§ Naive implementations
First, we consider naive (insecure) implementations of scrubbing functions (sketched below):
* loop: naive scrubbing function that uses a simple for
loop to set the memory to 0,
* memset: uses the memset function from the Standard C library,
* bzero: uses the bzero function from the C library to set memory to 0.
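The following sketch shows these three patterns (our reconstruction; function names are illustrative): from the compiler's point of view, all three are dead stores and may be removed.

\begin{verbatim}
#include <string.h>
#include <strings.h>

/* Naive scrubbing with a loop: removable as dead stores. */
void scrub_loop(char *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] = 0;
}

/* Naive scrubbing with memset: the call may be elided. */
void scrub_memset(char *buf, size_t len) {
    memset(buf, 0, len);
}

/* Naive scrubbing with bzero: likewise removable. */
void scrub_bzero(char *buf, size_t len) {
    bzero(buf, len);
}
\end{verbatim}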
Results (cf. <ref>). As expected,
without appropriate countermeasures, these naive implementations of scrubbing
functions are all optimized away by all versions of clang and gcc at higher
optimization levels. Additionally, as highlighted in <ref>, bzero is also
optimized away at a lower optimization level with some older compiler versions.
This is because the function calls to scrub and bzero are inlined in these
versions, making the optimization possible, whereas the call to scrub is not
inlined in the others.
§.§.§ Volatile function pointer
The volatile type qualifier indicates that the value of an object
may change at any time, preventing the compiler from optimizing memory accesses
to volatile objects. This mechanism can be exploited for secure secret-erasure
by using a volatile function pointer for the scrubbing function (e.g. eventually redirecting to memset). Because the function may change,
the compiler cannot optimize it away. <Ref> illustrates the
implementation of this mechanism in
OpenSSL <cit.>.
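A sketch of this mechanism, in the spirit of OpenSSL's OPENSSL_cleanse (our reconstruction of the referenced listing):

\begin{verbatim}
#include <string.h>

/* Because memset_func is volatile, the compiler must assume it may no
 * longer point to memset, and therefore cannot elide the call. */
static void *(*volatile memset_func)(void *, int, size_t) = memset;

void cleanse(void *ptr, size_t len) {
    memset_func(ptr, 0, len);
}
\end{verbatim}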
Results (cf. <ref>). Our tool reports that, for all versions of one of the
compilers, the secret-erasure property is not preserved at some optimization
levels.
Indeed, the caller-saved register edx is pushed on the stack before the
call to the volatile function. However, it contains secret data which are
spilled on the stack and not cleared afterwards.
This shows that our tool can find violations of secret erasure from
register spilling.
We conclude that while the volatile function pointer mechanism is effective for
preventing the scrubbing function from being optimized away, it may also
introduce unnecessary register spilling that might break secret-erasure.
§.§.§ Volatile data pointer
The volatile type qualifier can also be used for secure secret-erasure by
marking the data to scrub as volatile before erasing it. We analyze several
implementations based on this mechanism:
* The first implementation casts the pointer buf to a
pointer-to-volatile vbuf (cf. <ref>,
line <ref>) before scrubbing data from vbuf
using a simple for or while loop. This is a commonly
used technique for scrubbing memory, used for instance in
Libgcrypt <cit.>, wolfSSL <cit.>,
or sudo <cit.>;
* A second implementation is similar to the first
but scrubs data from memory using
memset. Note that this implementation is insecure as the
volatile type qualifier is discarded by the function
call—volatile char * is not compatible with void *;
* Two further variants cast the pointer buf to a volatile pointer
vbuf, but pointing to non-volatile data (cf. <ref>), before scrubbing data
from vbuf using a simple for or while loop
(resp. memset)[Although we did not find this
implementation in real-world cryptographic code, we were curious about
how the compiler would handle this case.];
* Another implementation casts the pointer buf to a
volatile pointer-to-volatile vbuf (cf. <ref>, line <ref>) before
scrubbing data from vbuf using a simple for or
while loop. It is the fallback scrubbing mechanism used in
<cit.> and in
HACL* <cit.> cryptographic libraries;
* The last implementation is similar to the previous one
but uses memset instead of a loop. The secure and insecure patterns are sketched below.
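A sketch of the two main patterns (our illustration; function names are hypothetical):

\begin{verbatim}
#include <stddef.h>
#include <string.h>

/* Secure: pointer-to-volatile plus a loop; every store through vbuf
 * must be performed and cannot be optimized away. */
void scrub_ptr_to_vol_loop(char *buf, size_t len) {
    volatile char *vbuf = (volatile char *)buf;
    for (size_t i = 0; i < len; i++)
        vbuf[i] = 0;
}

/* Insecure: the explicit cast back to void * discards the volatile
 * qualifier, so the memset call may still be optimized away. */
void scrub_ptr_to_vol_memset(char *buf, size_t len) {
    volatile char *vbuf = (volatile char *)buf;
    memset((void *)vbuf, 0, len);   /* cast drops volatile */
}
\end{verbatim}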
Results (cf. <ref>). First, our
experiments show that using volatile pointers to non-volatile data does
not reliably prevent the compiler from optimizing away the scrubbing
function. Indeed, the compiler optimizes away the scrubbing function at some
optimization levels in both implementations. Second, using a pointer to
volatile data works in the loop versions but not in the memset versions,
as the function call to memset
discards the volatile qualifier.
§.§.§ Memory barriers
Memory barriers are inline assembly statements which indicate to the compiler that
the memory could be read or written, forcing the compiler to preserve preceding
store operations. We study four different implementations of memory barriers:
three implementations from safeclib <cit.>, plus the
approach recommended in a prior study on scrubbing
mechanisms <cit.>.
* The first barrier (cf. <ref>) is the fallback implementation used in
safeclib. As pointed out in prior work, this barrier works with some
compilers <cit.> but might not work with others, which might optimize
away a call to memset or a loop placed before the
barrier <cit.>, although we could not reproduce this
behavior in our experiments;
* A second barrier (cf. <ref>) is similar to the first,
with an additional mfence
instruction for serializing memory. It is used in safeclib when
the mfence instruction is available;
* A third barrier (cf. <ref>) is similar
but uses a lock prefix for
serializing memory. It is used in safeclib when mfence is not available;
* The last barrier (cf. <ref>) is a more resilient approach than
the previous ones, recommended in a prior study of scrubbing mechanisms
and used for instance in libsodium
<cit.>. It makes the pointer
buf visible to the assembly code, preventing prior store
operations to this pointer from being optimized away. Both flavors are sketched below.
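Sketches of the plain barrier and of the more resilient pointer-visible variant (our illustration, using gcc/clang inline-assembly syntax):

\begin{verbatim}
#include <stddef.h>
#include <string.h>

/* Plain memory barrier: declares that memory may be read or written,
 * so the preceding stores must be kept. */
void scrub_barrier(char *buf, size_t len) {
    memset(buf, 0, len);
    __asm__ volatile("" ::: "memory");
}

/* Resilient variant: additionally passes buf to the assembly, making
 * the buffer itself visibly reachable from the barrier. */
void scrub_barrier_ptr(char *buf, size_t len) {
    memset(buf, 0, len);
    __asm__ volatile("" : : "r"(buf) : "memory");
}
\end{verbatim}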
Results. For all the implementations of memory barriers that we
tested, we did not find any vulnerability, even with the version deemed
insecure in a prior
study <cit.>[As explained in a
bug report <cit.>, the plain barrier is not
reliable because the compiler might consider that the inlined assembly
code does not access the buffer (e.g. by fitting all of the buffer in
registers). The fact that we were not able to reproduce this bug in our
setup is due to differences in programs (in our program the address of the
buffer escapes because of function calls whereas it is not the case in the
bug report); it does not mean that this barrier is secure (it is not).].
§.§.§ Weak symbols
Weak symbols are specially annotated symbols (with
__attribute__((weak))) whose definition may
change at link time. An illustration of a weak function symbol is given in
<ref>. The compiler cannot optimize a store operation
preceding the call to _sodium_dummy_symbol because its definition
may change and could access the content of the buffer. This mechanism is used
in libsodium's memzero <cit.> when weak symbols are available; a sketch follows.
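A simplified sketch of the weak-symbol mechanism (our illustration; libsodium's actual dummy symbol has a longer name):

\begin{verbatim}
#include <stddef.h>
#include <string.h>

/* The weak definition may be replaced at link time, so the compiler
 * must assume the call can inspect the buffer. */
__attribute__((weak)) void _sodium_dummy_symbol(void *buf, size_t len) {
    (void)buf;
    (void)len;
}

void memzero(void *buf, size_t len) {
    memset(buf, 0, len);
    _sodium_dummy_symbol(buf, len);   /* pins the preceding stores */
}
\end{verbatim}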
Results. Our tool did not find any vulnerability with this implementation.
§.§.§ Off-the-shelf implementations
Finally, we consider two secure implementations of scrubbing functions proposed
in external libraries, namely explicit_bzero and memset_s.
explicit_bzero is a function defined in the C library to set memory
to 0, with additional protections to not be optimized away by the compiler.
Similarly, memset_s is a function defined in the optional Annex K
(bound-checking interfaces) of the C11 standard, which sets a memory region to a
given value and should not be optimized away. We take the implementation of
safeclib <cit.>, compiled with its default Makefile.
Both implementations rely on a memory barrier (see
<ref>) to prevent the compiler from optimizing
scrubbing operations.
Results. Our tool did not find any vulnerability with these functions.
§.§.§ Impact of disabling individual optimizations
In order to understand what causes compilers to introduce violations
of secret-erasure, we selectively disable the dead store elimination
option in one compiler, and the two corresponding options (dead store
elimination, including its variant operating on trees) in the other.
Results. For the compiler versions on which we could disable this
pass[One version is
omitted in this study because we were not able to run the LLVM
optimizer for it in order to disable the
optimization.], disabling the dead store elimination transform pass
makes all our samples secure. This points towards the hypothesis that this
transform pass is often responsible for breaking secret-erasure
and that, in some cases, disabling it might be sufficient to preserve
secret-erasure[However, we strongly suspect that this conclusion does
not generalize to all programs, for instance to programs that violate
secret-erasure because of register spilling.].
The results for the second compiler are given in Table
<ref>. First, we observe that both
options play a role in the preservation of
secret-erasure. Indeed, for some programs, disabling a single option is
sufficient for obtaining a secure binary, while for others the second
option, or even both options, must be disabled.
Second, we observe that there are other factors that affect the
preservation of secret-erasure. Indeed, one program is
still insecure because of register spilling, and two further programs
are also insecure because the loop is still
optimized away.
§ DISCUSSION
Limitations of the technique.
The relational symbolic execution introduced in this paper handles
loops and recursion with unrolling. Unrolling still enables exhaustive
exploration for programs without unbounded loops. However, for programs
with unbounded loops, such as stream
ciphers, it leads to unexplored program
paths, and hence might miss violations[In our experiments we fix the
input size for these programs, but we could also keep it symbolic and
restrict it to a given range, which would extend security guarantees to all
input sizes in this range.]. A possible solution to enable sound analysis
for programs with unbounded loops would be to use relational loop
invariants <cit.>—however, it would sacrifice precision.
Similarly, indirect jump targets are only enumerated up to a given bound,
which might lead to unexplored program paths and consequently missed
violations[Our tool detects and records incomplete jump target
enumerations and, if it cannot find any vulnerabilities, it returns
“unknown” instead of “secure”.]. However, we did not encounter incomplete
enumerations in our experiments: in the cryptographic primitives that we analyzed,
indirect jumps had a single target (or a few targets).
Finally, any register or part of the memory that is concretized in the initial
state of the symbolic execution might lead to unexplored program behaviors and
missed violations. In our tool, memory and registers are symbolic by default
and any concretization (e.g. setting the initial value of esp, or
which memory addresses are initialized from the binary) must be made
explicitly by the user.
The definition of secret-erasure used in this paper is conservative in the sense that it forbids
secret-dependent branches (and hence related implicit flows). We leave for
future work the exploration of alternative (less conservative) definitions
that could either declassify secret-dependent conditions, or allow
secret-dependent conditions as long as both branches produce the same
observations.
Finally, our analysis is restricted to a sequential semantics and hence
cannot detect Spectre vulnerabilities <cit.>;
however, the technique has recently been adapted to a speculative
semantics <cit.>.
Implementation limitations.
The implementation of our tool shows limitations commonly found in research
prototypes:
it does not support dynamic libraries (binaries must be statically linked or
stubs must be provided for external function calls), it does not support dynamic
memory allocation (data structures must be statically allocated), it does not
implement predefined system call stubs, it does not support
multi-threading, and it does not support floating point instructions. These
problems are orthogonal to the core contribution of this paper.
Nevertheless, the prototype is already efficient on real-world case studies.
Threats to validity in experimental evaluation.
We assessed the effectiveness of our tool on several known secure and
insecure real-world cryptographic binaries, many of them taken from
prior studies. All results have been crosschecked with the expected
output, and manually reviewed in case of deviation.
Our prototype is implemented as part of an existing binary analysis
platform <cit.>, whose efficiency and robustness
have been demonstrated in prior large-scale studies on both adversarial code and
regular code <cit.>.
The IR lifting part has been positively evaluated in an external
study <cit.> and the symbolic engine features
aggressive formula optimizations <cit.>. All our
experiments use the same search heuristics (depth-first) and, for
bounded-verification, smarter heuristics do not change the performance.
Regarding the solver, we also tried Z3 <cit.> and
confirmed the better performance of Boolector.
Finally, we compare our tool to our own versions of SC and
standard relational SE, primarily because none of the existing tools can be easily adapted for our
setting, and also because it allows us to compare very close implementations.
§ RELATED WORK
Related work has already been discussed at length throughout the paper. We
add here only a few additional discussions, as well as an overview of
existing SE-based tools for information flow analysis
(<ref>) partly taken
from <cit.>.
Self-composition and SE
were first combined by Milushev et
al. <cit.>. They use type-directed
self-composition and dynamic symbolic execution to find violations of
noninterference, but they do not address scalability and their
experiments are limited to toy programs. The main issues here are the
quadratic explosion of the search space (due to the necessity of
considering diverging paths) and the complexity of the underlying
solver queries.
Later works <cit.> suffer from
the same problems.
Instead of considering the general case of noninterference, we
focus on properties that relate traces following the same path, and
we show that it remains tractable for SE with adequate optimizations.
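To make the distinction concrete, the following toy sketch (our own illustration using the z3 Python bindings, not the paper's implementation) applies the self-composition idea to a tiny function: the program is duplicated over two secret inputs sharing the same public input, and the solver is asked for a pair of secrets producing distinguishable outputs.

```python
from z3 import BitVec, Solver, sat

def program(pub, sec):
    # Toy program under analysis: the output leaks one bit of the secret.
    return pub + (sec & 1)

# Self-composition: two copies share the public input, secrets are independent.
pub = BitVec('pub', 8)
s1, s2 = BitVec('s1', 8), BitVec('s2', 8)

solver = Solver()
solver.add(program(pub, s1) != program(pub, s2))  # distinguishing pair?

if solver.check() == sat:
    print("noninterference violated, witness:", solver.model())
else:
    print("secure: output is independent of the secret")
```

Relational symbolic execution avoids naively duplicating the whole formula by sharing the public (secret-independent) parts between the two executions.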
Relational symbolic execution.
Shadow symbolic
execution <cit.>
aims at efficiently testing evolving software by focusing on the new
behaviors introduced by a patch.
It introduces the idea of sharing formulas across two
executions in the same SE instance. The term relational symbolic
execution has been coined more
recently <cit.>, but this work is
limited to a simple toy imperative language and does not address scalability.
We maximize sharing between pairs of executions, as ShadowSE
does, but we also develop specific optimizations tailored to the
case of information-flow analysis at binary-level. Experiments show
that our optimizations are crucial in this context.
Scaling SE for information flow analysis.
Only three previous works in this category achieve scalability, yet at
the cost of either precision or soundness.
Wang et al. <cit.> and
Subramanyan et al. <cit.>
sacrifice soundness for scalability (no bounded-verification). The
former performs symbolic execution on fully concrete traces and only
symbolizes secrets.
The latter concretizes memory accesses.
In both cases, they may miss feasible paths as well as violations.
Brotzman et al. <cit.> take the opposite
route and sacrifice precision for scalability (no bug-finding).
Their analysis scales by over-approximating loops and resetting the
symbolic state at chosen code locations.
We adopt a different approach and scale by heavy formula optimizations,
allowing us to keep both correct bug-finding and correct
bounded-verification. Interestingly, our method is faster than these
approximated ones.
Moreover, our technique is compatible with the previous approximations for
better scalability.
Other methods for constant-time analysis.
Dynamic approaches for constant-time are precise (they find real
violations) but limited to a subset of the execution traces, hence they are not
complete. These techniques include statistical
analysis <cit.>, dynamic binary
instrumentation <cit.>,
dynamic symbolic execution (DSE) <cit.>,
and fuzzing <cit.>.
Static approaches
based on sound static
analysis <cit.>
give formal guarantees that a program is free from timing side channels, but they
cannot find bugs when a program is rejected.
Aside from a posteriori analysis, correct-by-design
approaches <cit.>
require reimplementing cryptographic primitives from scratch.
Program transformations have been proposed to automatically
transform insecure programs into (variations of) constant-time
programs <cit.>.
In particular, Raccoon and Constantine consider a constant-time leakage model
and seem promising; however, they operate at the LLVM level and do not protect
against violations introduced by backend compiler passes. Our tool is therefore
complementary to these techniques, as it can be used for
investigating code patterns and backend optimizations that may introduce
constant-time violations.
Other methods for secret-erasure.
Compiler- or OS-based secure
deallocation <cit.>
has been proposed but requires compiler or OS support; in contrast, this work
focuses on application-based secret-erasure.
<cit.> introduce
the first framework to specify erasure policies, which was later refined to
express richer policies using a knowledge-based
approach <cit.> and cryptographic data
deletion <cit.>. These works focus on expressing
complex secret-erasure policies, but are not directly applicable to concrete
programs.
<cit.> propose the first application of
a simple secret-erasure policy for a concrete language (i.e., Java Card
Bytecode), which ensures that secrets are unavailable after program termination.
Our definition of secret-erasure is close to theirs and directly applicable to
binary-level verification.
Most enforcement mechanisms for erasure are language-based and rely on type
systems to enforce information flow
control <cit.>.
Secretgrind <cit.>, a dynamic taint tracking tool based on
Valgrind <cit.> to track secret data in memory, is
the closest work to ours, with the main difference being that it uses dynamic
analysis and permits implicit flows, while we use static analysis and forbid
implicit flows.
The problem of (non-)preservation of secret-erasure by compilers is well
known <cit.>. To remedy it, a notion of information
flow-preserving program transformation has been
proposed <cit.>, but this approach
requires compiling programs using CompCert <cit.>
and does not apply to already compiled binaries.
Finally, preservation of erasure functions by compilers has been studied
manually <cit.>, and we further this line of work
by proposing an extensible framework for automating the process.
§ CONCLUSION
We tackle the problem of designing an automatic and efficient binary-level
analyzer for information flow properties, enabling both bug-finding and
bounded-verification on real-world cryptographic implementations. Our approach
is based on relational symbolic execution together with original
dedicated optimizations reducing the overhead of relational reasoning and
allowing for a significant speedup.
Our prototype is shown to be highly efficient compared to
alternative approaches. We used it to perform extensive binary-level
constant-time and secret-erasure analysis for a wide range of cryptographic
implementations, and to automate prior manual studies on the preservation of
constant-time and secret-erasure by compilers.
We highlight incorrect usages of volatile data pointers for secret-erasure, and
show that scrubbing mechanisms based on volatile function pointers can
introduce additional violations through register spilling.
We also found three constant-time vulnerabilities that slipped through prior
manual and automated analyses, and we discovered that backend compiler
passes introduce violations of constant-time that are out of reach of
state-of-the-art constant-time verification tools operating at the LLVM or
source level.
§ ACKNOWLEDGMENTS
We would like to thank Guillaume Girol for his help with setting up Nix virtual
environments, which enable reproducible compilation in our frameworks, as well
as Frédéric Recoules for his help with the final release of the tool. We also
thank the anonymous reviewers for their valuable suggestions, which greatly
helped to improve the paper. This project has received funding from the European
Union Horizon 2020 research and innovation program under grant agreement No
101021727, from ANR grant ANR-20-CE25-0009-TAVA, and from ANR-17-CE25-0014-01
CISC project.
To gauge the robustness of the learning mechanism, a k-fold cross-validation
strategy is used. The dataset is sectioned into $k$ segments. Each segment
alternately serves as the validation set, while the remaining segments
contribute to training. Final performance is reported as the average over all
$k$ iterations. We use $k=3$ for large datasets and $k=5$ and $k=10$ for
moderate and small datasets, respectively.
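As a minimal sketch of this protocol (using scikit-learn's `KFold`; `train_and_evaluate` is a hypothetical stand-in for one training run that returns a validation score):

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(samples, labels, train_and_evaluate, k=5):
    """Average the validation score over k folds."""
    kfold = KFold(n_splits=k, shuffle=True, random_state=0)
    scores = []
    for train_idx, val_idx in kfold.split(samples):
        # Each segment alternately serves as the validation set.
        scores.append(train_and_evaluate(samples[train_idx], labels[train_idx],
                                         samples[val_idx], labels[val_idx]))
    return float(np.mean(scores))
```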
#### 2.1.2 Learning Process for the Graph Attention Network (GAT) Parameters
The hierarchical framework of our system necessitates a methodical approach to
the learning process, ensuring optimal convergence and model efficiency.
Within this multi-layered architecture, each stage introduces specific
learnable parameters. These are crucial for discerning intricate relationships
within video frames and, consequently, for achieving precise action
recognition.
Structured Learning Paradigm for GAT:
Here we also use staged training:
1. 1.
Object-Level GAT Training: Trained first, forming a base that ensures
recognition of fundamental object relationships.
2. 2.
Single Hand Action-Level GAT Training: Builds on the object-level GAT’s
weights, refining recognition of single-hand actions.
3. 3.
Bimanual Action-Level GAT Training: Progresses with insights from the single-
hand GAT, concentrating on dual-hand coordinated actions.
A core challenge within our framework is the lack of direct ground truth
labels for the GAT layers. This stems from the model’s aim of unraveling
complex spatio-temporal relationships, leading to overarching action
categorizations.
Supervised Learning with Parameter Freezing:
To counter the label deficit, we employ supervised learning, using final
action labels as reference points. This entails freezing subsequent layer
parameters, concentrating solely on the active GAT layer, thereby maintaining
the holistic model context without unnecessary simultaneous adjustments.
##### Optimization Strategy for GAT Layers
To ensure efficient and effective learning of the GAT parameters, we adopt the
following strategies (consolidated in the code sketch after this list):
* •
Loss Function: Given the classification nature of action recognition, we
employ the cross-entropy loss (with $C$ classes):
$L_{\text{CE}}=-\sum_{i=1}^{N}\sum_{c=1}^{C}y_{i,c}\log(p_{i,c})$ (29)
where $N$ is the number of samples, $y_{i,c}$ is a binary indicator for the
correct classification of sample $i$ to class $c$, and $p_{i,c}$ is the
predicted probability of sample $i$ belonging to class $c$.
* •
Learning Rate: We use an initial learning rate of $0.001$ with the Adam
optimizer. If the validation loss remains stagnant for 10 epochs, the rate is
halved.
* •
Batch Training: Due to dataset intricacies, mini-batch training is employed
with a batch size of $32$.
* •
Regularization: To prevent overfitting and enhance model generalization, we
apply L2 regularization with a $0.0001$ coefficient to the weights and a
dropout rate of $0.5$ to attention scores.
* •
Early Stopping: If the validation loss does not improve for $20$ consecutive
epochs, training is halted, retaining the parameters from the epoch with the
least loss.
* •
Cross-Validation: For hyperparameter tuning, k-fold cross-validation
is incorporated. We use $k=5$ for large datasets and $k=10$ for
moderate and small datasets.
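A compact sketch consolidating these settings (PyTorch; `model`, `train_loader`, and `validation_loss` are hypothetical stand-ins, and the dropout on attention scores is assumed to be part of the model itself):

```python
import torch

def train_gat_layer(model, train_loader, validation_loss, max_epochs=500):
    # Adam with initial learning rate 0.001 and L2 regularization (1e-4).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    # Halve the learning rate when validation loss stagnates for 10 epochs.
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=10)
    criterion = torch.nn.CrossEntropyLoss()          # Eq. (29)
    best, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:                    # mini-batches of size 32
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()
        val = validation_loss(model)
        sched.step(val)
        if val < best:                               # early stopping bookkeeping
            best, stale = val, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            stale += 1
            if stale >= 20:                          # halt after 20 stagnant epochs
                break
    model.load_state_dict(best_state)                # retain best-epoch parameters
```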
#### 2.1.3 Deriving the learnable parameters in the GAT layers
Next, we describe in detail how to arrive at the different learnable parameters
in the three GAT layers.
Level 1: Object-Level GAT
* •
$\mathbf{W}^{(1)}$: Transformation matrix that linearly maps node features
from their original space to a new representation in Layer 1.
Initialization: By the Xavier method.
* •
$\mathbf{d}^{(1)}$: Learnable parameter that determines the importance of a node
and its neighbors in the attention mechanism at Layer 1.
Initialization: Small random values.
* •
$\mathbf{W}_{e}$: Weight matrix that transforms the edge features. It
encapsulates spatial and relational dependencies between nodes. It is shared
across all three layers.
* •
$\mathbf{W}_{a}$: Weight matrix responsible for transforming action-edge
features. It captures the essence of the actions performed between objects. It
is shared across all three layers.
Initialization of $\mathbf{W}_{e}$ and $\mathbf{W}_{a}$: Given the non-linear
activations in GATs, particularly the LeakyReLU activations, an initialization
method tailored to this type of activation function is desired. We adopt the
He Initialization (also known as Kaiming Initialization) method for
initializing $W_{a}$ and $W_{e}$. This method is specifically designed for
ReLU-based activation functions, including LeakyReLU. The key idea behind He
Initialization is to draw the initial weights from a distribution with a mean
of 0 and a variance of $\frac{2}{n_{\text{in}}}$, where $n_{\text{in}}$
represents the number of input units to the layer. Mathematically, the
initialization can be represented as:
$W_{a},W_{e}\sim\mathcal{N}\left(0,\sigma^{2}\right),\qquad\sigma=\sqrt{\frac{2}{n_{\text{in}}}}\,.$
This initialization approach ensures that the model does not start with
activations and gradients that are excessively small or large, thus promoting
efficient gradient flow and convergence during training.
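These initialization recipes map directly onto standard initializers; a sketch (PyTorch, with placeholder feature dimensions) covering Xavier for $\mathbf{W}^{(1)}$, small random values for $\mathbf{d}^{(1)}$, and He initialization for the shared $\mathbf{W}_{e}$ and $\mathbf{W}_{a}$:

```python
import torch

n_in, n_out = 256, 128                      # placeholder feature dimensions

W1 = torch.empty(n_out, n_in)               # node-feature transform, Layer 1
torch.nn.init.xavier_uniform_(W1)           # Xavier method

d1 = torch.empty(2 * n_out)                 # attention parameter, Layer 1
torch.nn.init.uniform_(d1, -0.01, 0.01)     # small random values

# Shared edge and action-edge transforms: He (Kaiming) initialization,
# i.e. a zero-mean normal with variance 2 / n_in, suited to LeakyReLU.
We = torch.empty(n_out, n_in)
Wa = torch.empty(n_out, n_in)
for W in (We, Wa):
    torch.nn.init.kaiming_normal_(W, nonlinearity='leaky_relu')
```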
Level 2: Single Hand Action-Level GAT
* •
$\mathbf{W}^{(2)}$: Transformation matrix for node features at Layer 2,
refining the representations based on the outputs of Layer 1.
Initialization: Gaussian distribution with mean $0$ and standard deviation
$0.01$.
* •
$\mathbf{U}^{(2)}$: Learnable weight matrix specific to the second GAT layer,
capturing complex relationships between nodes.
Initialization: Gaussian distribution with mean $0$ and standard deviation
$0.01$.
* •
$\mathbf{d}^{(2)}$: Learnable parameter that refines the attention mechanism,
focusing on single-hand actions between nodes.
Initialization: Small random values from a uniform distribution.
Level 3: Bimanual Action-Level GAT
* •
$\mathbf{W}^{(3)}$: Transformation matrix for node features at Layer 3, which
focuses on refining node representations considering bimanual actions.
Given that this layer is even deeper, the Xavier method is employed, suitable
for layers with tanh or sigmoid activations.
* •
$\mathbf{U}^{(3)}$: Weight matrix at Layer 3 that captures the intricacies of
bimanual interactions in the graph.
Given that this layer is even deeper, the Xavier method is employed, suitable
for layers with tanh or sigmoid activations.
* •
$\mathbf{d}^{(3)}$: Determines the attention scores for Layer 3, emphasizing
bimanual interactions.
Initialization: Using a Gaussian distribution with mean $0$ and standard
deviation $0.01$.
Justification for Sharing $\mathbf{W}_{e}$ and $\mathbf{W}_{a}$ Across Levels
Choosing to share $\mathbf{W}_{e}$ and $\mathbf{W}_{a}$ across the layers is
guided by the following considerations:
* •
Parameter Efficiency: Sharing the weights reduces the total number of model
parameters. This not only makes the model computationally more efficient but
also reduces the risk of overfitting, especially when there’s limited training
data.
* •
Consistency: Using shared transformation weights for edge and action-edge
features ensures a consistent representation across layers. This can be
particularly useful if the fundamental nature of these relationships doesn’t
change across layers, even though their context or interpretation might.
* •
Regularization: Sharing parameters acts as a form of implicit regularization.
Instead of letting each layer learn its own transformation, which can lead to
overfitting, sharing forces the model to find a general transformation that
works well across all layers.
* •
Simplification: A model with fewer parameters is simpler and can be more
interpretable. It is easier to understand and diagnose the transformations
applied by the model when the same transformation matrices $\mathbf{W}_{e}$
and $\mathbf{W}_{a}$ are used across layers.
#### 2.1.4 Learning Process for the TCN-Based Spatio-Temporal Parameters
In our system, the TCN learning process ensures the capture of intricate time-
dependent characteristics embedded within video frames.
TCN Learnable Parameters:
* •
$\mathbf{K}^{(l)}$: Convolutional kernel at layer $l$.
* •
$\vec{b}^{(l)}$: Bias term for the convolution at layer $l$.
* •
$\mathbf{V}^{(l)}$: Dilation rate for the convolutional kernel at layer $l$.
Structured Learning Paradigm:
Here we use the principle of progressive dilation, which ensures that temporal
patterns across various scales are captured accurately:
1. 1.
Short-Term Temporal Dependencies: By concentrating on immediate temporal
relationships, this level discerns swift actions or alterations.
2. 2.
Mid-Term Temporal Dependencies: This stage augments the previous one by
broadening the temporal horizon, allowing for an extended field of view.
3. 3.
Long-Term Temporal Dependencies: With a larger receptive field, this level
identifies prolonged actions or evolving sequences in the scene.
Backpropagation Through Time (BPTT): Given the sequential nature of the TCN,
BPTT is pivotal for model weight updates, ensuring that the learning process
acknowledges dependencies spanning across different time instances.
Optimization Strategy for TCN Layers
* •
Loss Function: The Mean Squared Error (MSE) loss, suited for the regression
character of temporal sequences, is utilized:
$L_{\text{MSE}}=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}$ (30)
where $N$ signifies the sample count, $y_{i}$ represents the true value, and
$\hat{y}_{i}$ is the predicted counterpart.
* •
Learning Rate: An Adam optimizer is used, with an initial learning rate set to
$0.001$. If the validation loss stagnates for 5 consecutive epochs, the
learning rate is reduced by half.
* •
Batch Training: Mini-batch training, with batches of $64$, addresses the
sequential intricacy and offers computational efficiency.
* •
Regularization: Dropout, with a rate of $0.2$, is applied after each convolution
layer, mitigating the risk of overfitting.
* •
Early Stopping: The training halts if there’s no validation loss improvement
over $10$ epochs, ensuring the model’s state with the least loss is retained.
Deriving Learnable Parameters in the Initial TCN Layer:
The preliminary layer in the TCN captures immediate temporal nuances, laying a
solid foundation for subsequent layers.
Convolutional Kernel $\mathbf{K}^{(1)}$:
* •
Initialization: Xavier method.
* •
Forward Propagation: Convolution operation on the GAT output.
Bias Term $\vec{b}^{(1)}$:
* •
Initialization: Zeroes.
* •
Forward Propagation: Incorporated post-convolution, offering an affine shift.
Dilation Rate $\mathbf{V}^{(1)}$:
* •
Initialization: Set to $1$, ensuring that proximate relationships are
recognized.
* •
Forward Propagation: Modulates the convolution kernel’s spacing.
Learning Strategy for Deeper TCN Layers:
The ensuing layers in the TCN, building upon the initial layer, increase
their receptive field to detect longer-lasting temporal dependencies.
Layer 2 and Beyond (see the sketch below):
* •
Exponential growth of the dilation rate $\mathbf{V}^{(l)}$, accommodating
expansive temporal durations.
* •
$\mathbf{K}^{(l)}$ and $\vec{b}^{(l)}$ mirror the learning and initialization
patterns of the first layer but are adapted to their dilation rates.
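A minimal sketch of such a stack (PyTorch; we assume, as is common for TCNs, that the dilation rate doubles at each layer, and the channel sizes are placeholders):

```python
import torch.nn as nn

class MiniTCN(nn.Module):
    """Stack of dilated 1-D convolutions with exponentially growing dilation."""
    def __init__(self, in_ch=128, hid_ch=128, n_layers=3, kernel=3):
        super().__init__()
        layers = []
        for l in range(n_layers):
            dil = 2 ** l                    # dilation 1, 2, 4, ... per layer
            layers += [
                nn.Conv1d(in_ch if l == 0 else hid_ch, hid_ch, kernel,
                          dilation=dil, padding=dil * (kernel - 1) // 2),
                nn.ReLU(),
                nn.Dropout(0.2),            # dropout after each convolution
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):                   # x: (batch, channels, time)
        return self.net(x)
```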
In summary, by employing this stratified approach, our TCN captures spatio-
temporal associations across different temporal magnitudes, thus offering a
holistic video analysis. Furthermore, given the ground truth at the pipeline’s
end and the already optimized GAT parameters, our learning approach ensures
harmonious integration between spatial attention and temporal convolution
mechanisms.
#### 2.1.5 Learning Process for the Hierarchical Action Classification Step
Learning Process for Fully Connected Layers
In the step of the framework involving fully connected layers, we aim to
efficiently map the abstract features obtained from previous layers to
action labels. While traditionally these layers might involve learning
parameters, our approach focuses on using well-established fully connected
architectures from the literature, without fine-tuning the parameters. The
rationale behind this decision lies in the comprehensive learnable parameters
already present in our framework, ensuring that introducing additional
learning parameters for these layers is not necessary.
To determine the most suitable fully connected architecture for our action
recognition task, we conducted thorough experimentation using various well-
known architectures available in the literature. Specifically, we explored the
performance of two-layer fully connected architectures, each with varying
numbers of neurons. The choice of these architectures was inspired by their
wide usage in related tasks and their simplicity, which aligns with our
framework’s hierarchical structure.
We evaluated the performance of architectures such as LeNet-5, AlexNet,
VGG-16, VGG-19, and ResNet-50, which are renowned for their effectiveness in
various image-related tasks. These architectures come with different
configurations of fully connected layers, including varying numbers of neurons
and layers. Through rigorous experimentation, we found that the architecture
that yielded the best results for our action recognition problem consisted of
two dense layers, featuring 128 neurons in the first layer and 64 neurons in
the second layer.
#### 2.1.6 Learning Process for the Description Generation Parameters
The learning process for adapting the pre-trained GPT-2 model to generate
bimanual action descriptions has only been touched upon in Section 3.10 of the
main text. Here we provide the details of these steps:
1. 1.
Tokenization: We first tokenize the input data, including the object names,
SRs, and action types, into a sequence of tokens that the GPT-2 model can
understand.
2. 2.
Vectorization: Next, we use vectorization to convert the tokens into fixed-
size numerical vectors.
3. 3.
Sliding Windows: To handle longer input sentences, we employ a sliding window
approach that divides the input sequence into overlapping segments, where each
segment is of fixed size. The window size is chosen based on the maximum
length of the input sequence that the model can process. If the input sentence
is longer than the fixed-size window, we divide the sentence into overlapping
segments and each segment is used as an input to the model.
Let the length of the input sequence be denoted by $L$, and the window size be
denoted by $W$. Then, we can define the number of windows $N$ as
$N=\lfloor(L-W)/S\rfloor+1$, where $S$ is the stride length, which determines
the degree of overlap between adjacent windows. In our implementation, we set
$S=W/2$ to ensure significant overlap between adjacent windows. For each
window $i$, we extract the corresponding sub-sequence of length $W$ starting
at position $(i-1)\times S$ and use it as input to the model. This way, we
ensure that all the tokens in the input sequence are considered by the model.
The outputs of the model for the individual windows are concatenated to form
the final output for the entire sequence (a sketch of this segmentation is
given after this list). The values of the different variables differ between
the datasets that we have used and are given in the Supplementary Material.
4. 4.
Model Architecture: A generation layer is added on top of the pre-trained
GPT-2 model. The generation layer is responsible for generating the action
descriptions based on the input of object names, SRs, and action types. For
generating descriptions with different levels of detail, we add three separate
layers on top of the pre-trained GPT-2 model, one for each level of detail.
The generation layer consists of three sub-layers, each responsible for
generating descriptions at a different level of detail. The input to each sub-
layer is the output of the previous sub-layer, which allows for the generation
of increasingly detailed descriptions. Let $\{r_{1},r_{2},\dots,r_{n}\}$ be the
output of GPT-2; then $Z_{1}=Q_{1}(r_{1},r_{2},\dots,r_{n})$ is the output of
the first generation layer, and $Z_{i+1}=Q_{i+1}(Z_{i})$ defines the outputs of
the two subsequent generation layers accordingly.
5. 5.
Loss Function: We use the cross-entropy loss function to measure the
difference between the predicted output and the ground truth.
6. 6.
Optimizer: We use the Adam optimizer to update the model weights based on the
gradients of the loss function with respect to the model parameters.
7. 7.
Training Data: The annotated dataset containing the input and corresponding
output (i.e., bimanual action descriptions) is used to train the model. The
data is preprocessed and transformed into a format that can be fed into the
model.
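A small sketch of the windowing arithmetic from step 3 above (pure Python; the handling of a trailing remainder is our own assumption, since the formula only counts full windows):

```python
def sliding_windows(tokens, W):
    """Split a token sequence into overlapping windows of size W, stride S = W // 2."""
    S = W // 2
    L = len(tokens)
    if L <= W:
        return [tokens]
    N = (L - W) // S + 1                        # number of full windows
    windows = [tokens[i * S : i * S + W] for i in range(N)]
    if (N - 1) * S + W < L:                     # cover any leftover tail (assumption)
        windows.append(tokens[-W:])
    return windows

# Example: L = 10, W = 4 gives S = 2 and N = 4 windows starting at 0, 2, 4, 6.
print(sliding_windows(list(range(10)), 4))
```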
### 2.2 Joint Learning and End-to-End Training
In the pursuit of enhancing the synergy between different components of our
video description generation model, we employ a joint learning approach. This
approach aims to capitalize on the interdependencies between specific stages,
allowing them to collaborate more effectively and contribute collectively to
the model’s understanding of video content. By sharing information and
refining features through joint training, we create a comprehensive framework
that can produce more accurate and coherent descriptions.
In this section, we describe our joint learning strategies, which encompass
collaborations between various stages of the video description generation
pipeline. We focus on three distinct joint learning scenarios, each tailored
to optimize the interaction between specific sets of components:
1. 1.
Joint Training of GAT (Step 3) and TCN (Step 4): Our initial joint learning
phase involves the concurrent training of Graph Attention Networks (GAT) in
Step 3 and Temporal Convolutional Networks (TCN) in Step 4. This combination
capitalizes on both the spatial relationships captured by GAT and the temporal
dynamics captured by TCN. By jointly learning these stages, we promote the
fusion of spatial and temporal features, enhancing the overall representation
of video content.
2. 2.
Joint Training of Node and Edge Embedding (Step 2), combined with GAT and TCN
steps (steps 3 and 4): We further extend our joint learning to incorporate
node and edge embedding from Step 2 into the procedure. This enables the
fusion of enriched spatial embeddings, graph attention mechanisms, and
temporal features. By simultaneously refining these representations, we pave
the way for more robust and nuanced feature aggregation in subsequent stages.
3. 3.
End-to-End Training of Node and Edge Embedding (Step 2), GAT (Step 3), TCN
(Step 4), and Description Generation (Step 10): Our final joint learning
scenario encapsulates the essence of the entire video description generation
pipeline. By integrating the node and edge embedding, GAT, and TCN stages with
the description generation step, we enable an end-to-end training approach.
This training strategy allows the model to holistically optimize its feature
extraction, understanding of actions, and narrative generation capabilities.
The interactions cultivated through joint learning enrich the information flow
between different stages, culminating in more coherent and contextually
aligned descriptions.
Every phase of joint learning aims to foster collaboration among specific
components, enhancing their collective performance and, in turn, improving the
overall effectiveness of our video description generation model. In the
following sections, we describe each joint learning process, providing
detailed explanations of how these collaborative efforts are coordinated.
### 2.3 Gradual Joint Learning of GAT (Step 3) and TCN (Step 4)
Starting Phase - Training TCN with Fixed GAT:
1. 1.
We begin by keeping the GAT component fixed, setting the GAT parameters to
their optimal values obtained during their training process.
2. 2.
During this phase, only the TCN part learns and adjusts, but it benefits from
the information coming from the GAT.
Step-by-Step Unfreezing:
* •
When the TCN’s learning starts to plateau, we begin adjusting the GAT part as well.
* •
We start with the last GAT layer (the one closest to the TCN) and let it learn
and adjust.
* •
As training goes on, we allow earlier GAT layers to adjust too, moving backwards
until every layer learns.
Learning Together:
* •
When both the GAT and TCN parts can learn, we train them together.
* •
We use a combined loss to assess their performance, which considers both the
GAT’s and the TCN’s outputs:
$L_{\text{combined}}=\alpha L_{\text{GAT}}+\beta L_{\text{TCN}}$ (31)
where $\alpha$ and $\beta$ are weights we choose.
* •
We also use techniques like dropout across both parts to make sure they don’t
overfit.
Fine-tuning:
* •
After they have learned together, we do a final round of fine-tuning. This
means we make small adjustments to get even better results.
* •
We check the model’s performance on a test set regularly and decide when to
stop based on its results.
Tuning Hyperparameters $\alpha$ and $\beta$:
The selection of hyperparameters $\alpha$ and $\beta$ is a crucial aspect of
achieving an effective balance between spatial and temporal learning.
* •
These hyperparameters were fine-tuned through a methodical grid search
process, systematically exploring various combinations of values.
* •
We assessed the impact of different $\alpha$ and $\beta$ values on the
validation performance, aiming to optimize the convergence and effectiveness
of joint learning.
* •
The final chosen values for $\alpha$ and $\beta$ were $\alpha=0.6$ and
$\beta=0.4$, respectively, reflecting a balanced emphasis on both spatial and
temporal learning.
By following these steps, we make sure that the knowledge in GAT is respected
and blended with the new learning from TCN. This way, our model can understand
both space (from GAT) and time (from TCN) in a reasonable way. Here we applied
an initial learning rate of 0.001 and a batch size of 32.
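A minimal sketch of this schedule (PyTorch; `gat`, `tcn`, and the plateau checks stand in for the actual components and training loop):

```python
import torch
import torch.nn as nn

def freeze(module: nn.Module, frozen: bool = True):
    for p in module.parameters():
        p.requires_grad_(not frozen)

def combined_loss(loss_gat, loss_tcn, alpha=0.6, beta=0.4):
    return alpha * loss_gat + beta * loss_tcn    # Eq. (31)

def gradual_joint_training(gat: nn.Module, tcn: nn.Module):
    # Phase 1: GAT fixed at its pre-trained optimum; only the TCN learns.
    freeze(gat, True)
    opt = torch.optim.Adam(tcn.parameters(), lr=1e-3)
    # ... train the TCN until its validation loss plateaus ...

    # Phase 2: unfreeze GAT layers one by one, starting from the layer
    # closest to the TCN and moving backwards, adding each to the optimizer.
    for layer in reversed(list(gat.children())):
        freeze(layer, False)
        opt.add_param_group({"params": list(layer.parameters())})
        # ... train until the next plateau ...

    # Phase 3: everything trainable; optimize the combined loss, e.g.
    #   combined_loss(l_gat, l_tcn).backward(); opt.step()
    return opt
```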
### 2.4 Joint Training of Node and Edge Embedding (Step 2), GAT (Step 3), and
TCN (Step 4)
Initialization and Incorporation of Previously Acquired Knowledge:
* •
For this phase, we initiate the model parameters by drawing upon the
understanding acquired from the previous joint training of GAT and TCN (Steps
3 and 4). This ensures that the components commence their collaboration with a
foundation enriched by spatial and temporal insights.
* •
The node and edge embeddings, which encapsulate spatial relationships, are
further augmented by the combined comprehension of dynamic relationships (TCN)
and graph attention (GAT).
Loss Function and Training Objective:
* •
The core of this joint learning phase lies in an encompassing loss function
that takes into consideration the goals of all three stages: embedding (Step
2), GAT (Step 3), and TCN (Step 4).
* •
The overarching loss function is defined as:
$L_{\text{joint}}=\alpha L_{\text{Embedding}}+\beta L_{\text{GAT}}+\gamma
L_{\text{TCN}}$ (32)
Here, the hyperparameters $\alpha$, $\beta$, and $\gamma$ play a pivotal role
in dictating the relative significance of each stage’s contribution within the
joint learning process.
Hyperparameter Computation and Optimization:
* •
The selection of hyperparameters $\alpha$, $\beta$, and $\gamma$ is guided by
the outcomes of the previous joint learning phases, with specific numerical
values.
* •
Inspired by the favorable results from the joint learning of GAT and TCN, we
chose $\beta=0.4$ and $\gamma=0.3$ as the initial values for $\beta$ and
$\gamma$, respectively.
* •
To account for the insights from the embedding step, we set the initial value
$\alpha=0.3$.
* •
Following these initial values, we explored a grid of hyperparameter
combinations to determine the optimal configuration that maximizes the
collaborative potential of node and edge embedding, graph attention, and
temporal convolution.
* •
We converged upon the optimal hyperparameters: $\alpha=0.25$, $\beta=0.45$,
and $\gamma=0.3$.
Regularization and Optimization:
* •
To maintain a balanced learning process and mitigate overfitting, dropout
regularization is uniformly applied across all three stages during the joint
training.
* •
The optimization strategy involves employing gradient-based methods such as
stochastic gradient descent (SGD) or Adam. The initial learning rates are
informed by the previous joint training phase.
* •
We proactively monitor loss convergence and validation performance to fine-
tune hyperparameters, attaining an optimal equilibrium that harmonizes the
diverse contributions of different stages.
Enriching Feature Fusion:
* •
The integration of node and edge embedding, GAT, and TCN results in a unified
feature representation that holistically captures spatial, temporal, and
relational intricacies inherent in video data.
* •
The insights previously garnered from the GAT and TCN collaboration (Steps 3
and 4) continue to guide the learning paths of all three stages. This
synergistic effect amplifies the quality of feature fusion and deepens the
model’s comprehension of video content.
By co-training the node and edge embedding, GAT, and TCN components while
incorporating insights from their previous joint training, we construct a more
interwoven model that capitalizes on spatial, temporal, and relational cues.
This multi-dimensional approach lays the groundwork for subsequent joint
learning phases, further refining the model’s descriptive prowess.
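To make the weighting concrete, below is a minimal PyTorch-style sketch of the joint objective in Eq. (32) with the grid-searched weights. The three per-stage loss tensors and the commented-out optimizer wiring are hypothetical stand-ins for the components described above, not the actual implementation.

```python
import torch

# Grid-searched weights for the joint objective (Eq. 32).
ALPHA, BETA, GAMMA = 0.25, 0.45, 0.3

def joint_loss(l_embedding: torch.Tensor,
               l_gat: torch.Tensor,
               l_tcn: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the embedding, GAT, and TCN losses."""
    return ALPHA * l_embedding + BETA * l_gat + GAMMA * l_tcn

# Sketch of one optimization step; `model` and the per-stage losses are
# placeholders for the components described in Steps 2-4.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr from above
# loss = joint_loss(l_emb, l_gat, l_tcn)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```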
### 2.5 End-to-End Training
In this final phase of joint learning, we integrate the insights distilled
from Steps 2, 3, and 4, with the description generation component (Step 10),
through an end-to-end training approach. This ensures that the entire video
description generation pipeline collaborates cohesively, yielding descriptions
that are coherent, contextually relevant, and accurate.
Initialization and Knowledge Incorporation:
* •
Parameters initialization: We initialize the model parameters using the
representations learned from the integrated joint learning of Steps 2, 3, and
4. The enriched representations from these steps serve as a solid foundation
for the end-to-end training.
Loss Function and Training Objective:
* •
Loss Function: The overarching loss function for end-to-end training comprises
the objectives of Steps 2, 3, 4, and 10:
$L_{\text{end-to-end}}=\alpha L_{\text{Embedding}}+\beta L_{\text{GAT}}+\gamma
L_{\text{TCN}}+\delta L_{\text{Description}}$ (33)
Here, $\alpha$, $\beta$, $\gamma$, and $\delta$ are hyperparameters that
control the relative weight of each objective in the training process.
Hyperparameter Selection and Optimization:
* •
Initial values: The initial values for the hyperparameters $\alpha$, $\beta$,
$\gamma$, and $\delta$ were chosen based on insights from the previous joint
learning phases. We set $\alpha=0.1$, $\beta=0.3$, $\gamma=0.2$, and
$\delta=0.4$, prioritizing a slightly stronger influence from the GAT among
the three feature stages.
* •
Influence of Step 10: Given that Step 10 represents the final stage of our
end-to-end approach, we assign a higher weight to $\delta$ to prioritize the
description generation process.
* •
Optimal hyperparameters: Through grid search, the final optimal
hyperparameters were determined as $\alpha=0.05$, $\beta=0.35$, $\gamma=0.25$,
and $\delta=0.35$. These values reflect a balance between the contributions of
embedding, GAT, TCN, and description generation.
Regularization and Optimization:
* •
Dropout regularization: Dropout with a rate of $0.2$ is applied to all model
components to prevent overfitting.
* •
Optimization algorithm: We utilize gradient-based optimization algorithms
(Adam). The initial learning rates are informed by the joint learning phases
and start at $0.001$.
* •
Learning rate adjustments: We monitor the training progress and validation
loss; if the validation loss stagnates for a certain number of epochs, we
halve the learning rate to prevent overshooting.
Validation and Convergence:
* •
Validation set: We regularly assess the model’s performance on a dedicated
validation dataset during training.
* •
Early stopping: We implement an early stopping mechanism: if the validation
loss does not improve over 10 epochs, training is halted to prevent
overfitting and the best model state is retained (a minimal sketch of this
mechanism follows the list).
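Below is a minimal sketch of the end-to-end objective (Eq. 33) together with the learning-rate halving and 10-epoch early stopping described in this list. The per-stage loss tensors are hypothetical stand-ins, and PyTorch's `ReduceLROnPlateau` is one standard way to realize the halving rule, not necessarily the exact mechanism used.

```python
import torch

# Grid-searched weights for the end-to-end objective (Eq. 33).
ALPHA, BETA, GAMMA, DELTA = 0.05, 0.35, 0.25, 0.35

def end_to_end_loss(l_emb, l_gat, l_tcn, l_desc):
    """Weighted sum of embedding, GAT, TCN, and description losses."""
    return ALPHA * l_emb + BETA * l_gat + GAMMA * l_tcn + DELTA * l_desc

class EarlyStopper:
    """Stop once validation loss fails to improve for `patience` epochs."""
    def __init__(self, patience: int = 10):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0

    def should_stop(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Halve the learning rate when validation loss stagnates:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
#     optimizer, mode="min", factor=0.5)
# scheduler.step(val_loss)  # call once per epoch
```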
### 2.6 Computing Hand Groups
Hand Spatial Relations: To compute the hand spatial relationship category for
a bimanual action, we start by extracting the spatial coordinates of the hands
for each frame in the video using a hand detection and tracking algorithm
[94]. Let $p_{1}$ and $p_{2}$ be the 3D coordinates of the left and right hand
centers, respectively, and let $d$ be the Euclidean distance between $p_{1}$
and $p_{2}$.
We define the hand spatial relationship category based on the following
thresholds:
* •
Close-hand: $d<d_{c}$
* •
Crossed-hand: $d_{c}\leq d<d_{s}$
* •
Stacked-hand: $d\geq d_{s}$
Here, $d_{c}$ and $d_{s}$ are the thresholds for close-hand and stacked-hand,
respectively. These thresholds can be computed based on the characteristics of
the dataset, such as the average hand span or the maximum distance between the
hands in the dataset.
To determine the appropriate threshold values, we analyze the distribution of
hand distances in the dataset and choose values that best distinguish between
the different hand spatial relationships. For example, if the average hand
span is 20 cm, we may set $d_{c}$ to 5 cm and $d_{s}$ to 15 cm.
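A minimal sketch of this categorization is given below; the default thresholds mirror the illustrative 5 cm / 15 cm values above and should be re-derived per dataset.

```python
import math

def hand_spatial_category(p1, p2, d_c: float = 0.05, d_s: float = 0.15) -> str:
    """Classify a frame by the Euclidean distance between the 3D
    left/right hand centers p1 and p2 (distances in meters)."""
    d = math.dist(p1, p2)
    if d < d_c:
        return "close-hand"
    elif d < d_s:
        return "crossed-hand"
    return "stacked-hand"

# Example: hands 8 cm apart fall into the crossed-hand category.
print(hand_spatial_category((0.0, 0.0, 0.0), (0.08, 0.0, 0.0)))
```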
Level of precision category: To compute the level of precision category for a
bimanual action, we first extract the types of objects and actions involved in
the action using object recognition and action recognition algorithms. We then
define a precision score $s_{p}$ for each action based on the level of
precision required to perform it. Specifically, we illustrate the computation
of the precision score with a chopping example:
Let $d_{min}$ and $d_{max}$ be the minimum and maximum distance between the
knife and the vegetables during the chopping action, respectively. We define
the following thresholds to determine the precision score:
* •
Low precision: $d_{max}-d_{min}<d_{lp}$
* •
Medium precision: $d_{lp}\leq d_{max}-d_{min}<d_{mp}$
* •
High precision: $d_{max}-d_{min}\geq d_{mp}$
Here, $d_{lp}$ and $d_{mp}$ are the thresholds for low precision and medium
precision, respectively. One method to derive these thresholds is to analyze
the dataset and determine the minimum and maximum distances between the
objects involved in the bimanual actions. The differences between them can
then be used to define the ranges of distances corresponding to low-, medium-,
and high-precision actions. This data-driven approach provides a quantitative
way to determine the thresholds based on the level of precision required for
the actions in the dataset.
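As one possible instantiation of this data-driven approach, the sketch below derives $d_{lp}$ and $d_{mp}$ from the empirical distribution of per-action distance ranges. Splitting at the terciles is an assumption made here for illustration; the text prescribes only that the thresholds come from the dataset.

```python
import numpy as np

def precision_thresholds(distance_ranges, low_q=33.3, high_q=66.7):
    """Derive (d_lp, d_mp) from the per-action ranges d_max - d_min.
    Tercile splits are an illustrative choice, not prescribed above."""
    ranges = np.asarray(distance_ranges, dtype=float)
    d_lp, d_mp = np.percentile(ranges, [low_q, high_q])
    return d_lp, d_mp

def precision_category(d_range, d_lp, d_mp):
    """Map a range d_max - d_min to a precision level."""
    if d_range < d_lp:
        return "low"
    elif d_range < d_mp:
        return "medium"
    return "high"
```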
The hand spatial relationship and level of precision categories can then be
combined with the symmetric/asymmetric and coordinated/uncoordinated
categories from [67] to form the complete bimanual action type.
## 3 Hierarchical Action Breakdown
One of the prominent challenges is the hierarchical nature of actions, where a
broad action category might be decomposed into multiple sub-levels, each
offering finer granularity. While the depth of action categorization can span
numerous nested levels, for the purposes of this breakdown, we have used a
maximum of five levels. Many of our datasets can be dissected into even finer
categorizations extending beyond the five levels highlighted here. However, to
balance comprehensiveness and readability, we have prioritized certain actions
over others, focusing on those that best exemplify each dataset’s essence.
### 3.1 Learning Processes
Each feature matrix $\mathbf{G}^{(t,j,k,o)}$ undergoes a series of fully
connected layers followed by a softmax function, yielding a probability
distribution $P^{(t,j,k,o)}$ over items. The predicted action label for the
$t$-th GAT layer, action category $j$, sublevel $k$, and item $o$ is
represented as $\hat{y}^{(t,j,k,o)}=\arg\max(P^{(t,j,k,o)})$. The classifier
is trained by minimizing the cross-entropy loss between the predicted action
probabilities and the ground-truth action labels, computed over all GAT
layers, action categories, sublevels, and items:
$L(y,P)=-\sum_{t=1}^{L}\sum_{j=1}^{N}\sum_{k=1}^{M}\sum_{o=1}^{O}y^{(t,j,k,o)}\log{P^{(t,j,k,o)}}$
(34)
where $y^{(t,j,k,o)}$ is the ground-truth probability for item $o$ of sublevel
$k$ of action category $j$ in the $t$-th GAT layer, and $P^{(t,j,k,o)}$ is the
corresponding predicted probability.
If an item or level does not exist within a certain sublevel, it can be
denoted with a placeholder such as ’…’ to indicate its absence, and its
associated probabilities are set to $0$.
The fully connected layers are responsible for mapping the abstract features
from GAT outputs to actionable labels. These layers encompass two dense
layers, with 128 and 64 neurons respectively. ReLU activations follow the
linear transformations, leading to the final layer that corresponds to the
number of items. The softmax activation ensures a probability distribution
over items.
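A minimal PyTorch sketch of this classification head is shown below; the input dimension is a hypothetical parameter, and the explicit softmax matches the loss in Eq. (34), which takes the log of predicted probabilities.

```python
import torch
import torch.nn as nn

class ItemClassifier(nn.Module):
    """Two dense layers (128 and 64 neurons) with ReLU activations,
    followed by a softmax over the number of items."""
    def __init__(self, in_dim: int, num_items: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_items),
        )

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        # Probability distribution over items; Eq. (34) sums
        # -y * log(P) over layers, categories, sublevels, and items.
        return torch.softmax(self.net(g), dim=-1)
```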
The outcome of the classification process offers action predictions across GAT
layers, action categories, sublevels, and items. These predictions can be
harnessed to generate descriptive sentences at varying levels of detail,
providing a comprehensive depiction of actions in the video.
It is noteworthy that the probability distribution of each action category at
a GAT layer serves as input to the fully connected layers of the subsequent
action category. This hierarchical arrangement enables action recognition
across multiple levels of detail and GAT layers.
### 3.2 Action Categories
Note that the complete list of action categories is quite extensive. Thus, we
have chosen to present here only a few illustrative instances, offering an
insight into the inherent intricate hierarchy of the datasets.
I. Meal Preparation (Level 1)
* •
Setting the Scene (Level 2)
* –
Organizing Workspace (Level 3)
* ∗
Retrieving Tools (Level 4)
* +
Selecting appropriate utensils (Level 5)
* +
Placing tools on countertop (Level 5)
* ∗
Gathering Ingredients (Level 4)
* +
Sorting by type (Level 5)
* +
Organizing in order of use (Level 5)
* •
Ingredient Manipulation (Level 2)
* –
Texture Alteration (Level 3)
* ∗
Cutting (Level 4)
* +
Selecting knife type (Level 5)
* +
Chopping motion (Level 5)
* ∗
Peeling (Level 4)
* +
Holding the peeler (Level 5)
* +
Removing skin without waste (Level 5)
* –
Flavor Infusion (Level 3)
* ∗
Marinating (Level 4)
* +
Mixing marinade components (Level 5)
* +
Ensuring even coating on ingredient (Level 5)
* ∗
Seasoning (Level 4)
* +
Selecting spices (Level 5)
* +
Applying evenly (Level 5)
* –
Mixing Ingredients (Level 4)
* ∗
Using a hand whisk (Level 5)
* ∗
Using an electric mixer (Level 5)
* •
Cooking Process (Level 2)
* –
Heat Application (Level 3)
* ∗
Baking (Level 4)
* +
Preheating oven (Level 5)
* +
Monitoring cooking time (Level 5)
* ∗
Frying (Level 4)
* +
Selecting oil type (Level 5)
* +
Regulating heat level (Level 5)
* ∗
Boiling (Level 4)
* +
Filling pot with water (Level 5)
* +
Adjusting stove temperature (Level 5)
* –
Dough Manipulation (Level 3)
* ∗
Kneading (Level 4)
* +
Using hands for manual kneading (Level 5)
* +
Using a kneading machine (Level 5)
* ∗
Rolling (Level 4)
* +
Choosing a rolling pin (Level 5)
* +
Applying even pressure (Level 5)
* •
Plating & Serving (Level 2)
* –
Presentation (Level 3)
* ∗
Garnishing (Level 4)
* +
Selecting garnish type (Level 5)
* +
Placing attractively on dish (Level 5)
* ∗
Portioning (Level 4)
* +
Using serving tools (Level 5)
* +
Allocating even servings (Level 5)
* ∗
Arrangement (Level 4)
* +
Designing plate layout (Level 5)
* +
Adjusting for visual appeal (Level 5)
* •
Cleanup & Storage (Level 2)
* –
Storage (Level 3)
* ∗
Refrigerating (Level 4)
* +
Setting correct temperature (Level 5)
* +
Allocating space for dishes (Level 5)
* ∗
Freezing (Level 4)
* +
Sealing food in containers (Level 5)
* +
Labeling with dates and names (Level 5)
* –
Cleaning (Level 3)
* ∗
Dishwashing (Level 4)
* +
Pre-rinsing dishes (Level 5)
* +
Using appropriate soap quantity (Level 5)
* ∗
Wiping Countertops (Level 4)
* +
Selecting cleaning agent (Level 5)
* +
Ensuring no residue remains (Level 5)
II. Assembly (Level 1)
* •
Assembling Wooden Pieces (Level 2)
* –
Placing wooden pieces (Level 3)
* –
Joining pieces with nails and hammers (Level 3)
* ∗
Hammering nails into wood (Level 4)
* +
Striking nail with hammer to penetrate wood (Level 5)
* ∗
Attaching second piece of wood (Level 4)
* +
Placing second piece on top of the first (Level 5)
III. Painting a Wall (Level 1)
* •
Applying Paint (Level 2)
* –
Preparing Paint and Supplies (Level 3)
* ∗
Opening paint can (Level 4)
* ∗
Mixing paint thoroughly (Level 4)
* ∗
Getting paintbrush and tray (Level 4)
* –
Applying Paint to Wall (Level 3)
* ∗
Dipping brush in paint (Level 4)
* ∗
Spreading paint on wall surface (Level 4)
* ∗
Using roller for larger areas (Level 4)
* –
Achieving Desired Finish (Level 3)
* ∗
Applying additional coats (Level 4)
* ∗
Checking for uniform coverage (Level 4)
* •
Cleanup and Finishing (Level 2)
* –
Cleaning Tools (Level 3)
* ∗
Cleaning paintbrush (Level 4)
* ∗
Cleaning paint tray and roller (Level 4)
* ∗
Sealing paint can (Level 4)
IV. Juicing an Orange (Level 1)
* •
Extracting Juice (Level 2)
* –
Preparing Orange (Level 3)
* ∗
Selecting a ripe orange (Level 4)
* +
Rubbing the orange for texture checking (Level 5)
* ∗
Washing the orange (Level 4)
* +
Rinsing under water (Level 5)
* +
Drying with a cloth (Level 5)
* –
Cutting and Preparing (Level 3)
* ∗
Cutting the orange in half (Level 4)
* +
Using a sharp knife (Level 5)
* +
Placing cut side up (Level 5)
* ∗
Removing seeds (Level 4)
* +
Scooping out seeds with a spoon (Level 5)
* –
Using a juicer (Level 3)
* ∗
Using a manual juicer (Level 4)
* +
Placing orange half on juicer (Level 5)
* +
Twisting (Level 5)
* ∗
Squeezing the orange by hand (Level 4)
* +
Using both hands to squeeze (Level 5)
* +
Pouring juice into a container (Level 5)
* •
Serving (Level 2)
* –
Straining the Juice (Level 3)
* ∗
Using a fine mesh strainer (Level 4)
* +
Holding strainer over a glass (Level 5)
* +
Pouring juice through strainer (Level 5)
* –
Presentation (Level 3)
* ∗
Pouring the fresh juice into a glass (Level 4)
* ∗
Garnishing with orange slices (Level 4)
* +
Cutting thin slices from an orange (Level 5)
* +
Placing slices on the rim of the glass (Level 5)
* •
Cleaning Up (Level 2)
* –
Cleaning Equipment (Level 3)
* ∗
Washing the juicer (Level 4)
* +
Disassembling juicer parts (Level 5)
* +
Scrubbing with soap and water (Level 5)
* ∗
Rinsing the strainer (Level 4)
* +
Towel drying (Level 5)
* –
Wiping Surfaces (Level 3)
* ∗
Cleaning the countertop (Level 4)
* +
Drying the surface with a clean cloth (Level 5)
## References
* [1] J. R. Kwapisz, G. M. Weiss, S. A. Moore, Activity recognition using cell phone accelerometers, ACM SigKDD Explorations Newsletter 12 (2) (2011) 74–82.
* [2] F. J. Ordonez, G. Englebienne, P. De Toledo, T. Van Kasteren, A. Sanchis, B. Kröse, In-home activity recognition: Bayesian inference for hidden markov models, IEEE Pervasive Computing 13 (3) (2014) 67–75.
* [3] R. Poppe, A survey on vision-based human action recognition, Image and Vision Computing 28 (6) (2010) 976–990.
* [4] V. Aggarwal, C. K. Reddy, Human activity recognition: A survey, Computer Vision and Image Understanding 131 (2015) 3–33.
* [5] C. Liu, J. Ying, H. Yang, X. Hu, J. Liu, Improved human action recognition approach based on two-stream convolutional neural network model, The visual computer 37 (2021) 1327–1341.
* [6] Q. Xiong, J. Zhang, P. Wang, D. Liu, R. X. Gao, Transferable two-stream convolutional neural network for human action recognition, Journal of Manufacturing Systems 56 (2020) 605–614.
* [7] H. Yang, C. Yuan, B. Li, Y. Du, J. Xing, W. Hu, S. J. Maybank, Asymmetric 3d convolutional neural networks for action recognition, Pattern recognition 85 (2019) 1–12.
* [8] J. Arunnehru, G. Chamundeeswari, S. P. Bharathi, Human action recognition using 3d convolutional neural networks with 3d motion cuboids in surveillance videos, Procedia computer science 133 (2018) 471–477.
* [9] G. Chéron, I. Laptev, C. Schmid, P-cnn: Pose-based cnn features for action recognition, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 3218–3226.
* [10] H. Ma, W. Li, X. Zhang, S. Gao, S. Lu, Attnsense: Multi-level attention mechanism for multimodal human activity recognition., in: IJCAI, 2019, pp. 3109–3115.
* [11] X. Yin, Z. Liu, D. Liu, X. Ren, A novel cnn-based bi-lstm parallel model with attention mechanism for human activity recognition with noisy data, Scientific Reports 12 (1) (2022) 7878.
* [12] A. Billard, D. Kragic, Trends and challenges in robot manipulation, Science 364 (6446) (2019) eaat8414.
* [13] A. Thomaz, G. Hoffman, M. Cakmak, Computational human-robot interaction, Foundations and Trends® in Robotics 4 (2-3) (2016) 105–223.
* [14] M. Oberweger, P. Wohlhart, V. Lepetit, Hands deep in deep learning for hand pose estimation, arXiv preprint arXiv:1502.06807.
* [15] T. Chatzis, X. Yuan, A. A. Argyros, A comprehensive study on deep learning-based 3d hand pose estimation methods, Applied Sciences 10 (19) (2020) 6850.
* [16] E. A. Franz, Bimanual action representation: A window to human evolution, Taking action: Cognitive neuroscience perspectives on intentional acts (2003) 259–88.
* [17] D. Rakita, A. Grinshpun, O. Brock, Shared control–based bimanual robot manipulation, Science Robotics 4 (30) (2019) eaaw0955.
* [18] F. Xie, J. Hwangbo, H. Lee, J. Oh, J. Hwang, I. Lee, H.-W. Park, S. Lee, Deep imitation learning for bimanual robotic manipulation, in: Advances in Neural Information Processing Systems, Vol. 33, 2020, pp. 2327–2337.
* [19] L. P. Ureche, A. Billard, Constraints extraction from asymmetrical bimanual tasks and their use in coordinated behavior, Robotics and Autonomous Systems 103 (2018) 222–235.
* [20] Z. J. Hu, R. Xu, C. Lu, H. Ding, H. Qiao, H. Ding, Towards human-robot collaborative surgery: Trajectory and strategy learning in bimanual peg transfer, IEEE Robotics and Automation Letters.
* [21] E. Galofaro, E. D’Antonio, N. Lotti, F. Patané, M. Casadio, L. Masia, Bimanual motor strategies and handedness role in human-robot haptic interaction, IEEE Transactions on Haptics.
* [22] E. E. Aksoy, A. Abramov, F. Wörgötter, B. Dellen, Categorizing object-action relations from semantic scene graphs, in: 2010 IEEE International Conference on Robotics and Automation, IEEE, 2010, pp. 398–405.
* [23] J. Wu, L. Wang, L. Wang, J. Guo, G. Wu, Learning actor relation graphs for group activity recognition, in: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 2019, pp. 9964–9974.
* [24] J. Ji, R. Krishna, L. Fei-Fei, J. C. Niebles, Action genome: Actions as compositions of spatio-temporal scene graphs, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10236–10247.
* [25] D. Xu, Y. Zhu, C. Choy, L. Fei-Fei, Scene graph generation by iterative message passing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3097–3106.
* [26] J. Yang, J. Lu, S. Lee, D. Batra, D. Parikh, J. Chen, Graph r-cnn for scene graph generation, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 774–790.
* [27] P. Xu, X. Liang, L. Li, L. Zhang, H. Huang, A survey of scene graph: Generation and application, IEEE Transactions on Neural Networks and Learning Systems 31 (1) (2020) 1–19.
* [28] Y. Teng, L. Wang, Structured sparse r-cnn for direct scene graph generation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
* [29] W. Wang, D. Li, J. Zhu, L. Tian, Y. Shan, Exploring context and visual pattern of relationship for scene graph generation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
* [30] Y. Cong, W. Liao, H. Ackermann, B. Rosenhahn, M. Y. Yang, Spatial-temporal transformer for dynamic scene graph generation, in: Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 16372–16382.
* [31] S. Feng, H. Mostafa, M. Nassar, S. Majumdar, S. Tripathi, Exploiting long-term dependencies for generating dynamic scene graphs, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 5130–5139.
* [32] A. Airin, R. U. Dawla, A. S. Noor, M. Al Hasan, A. R. Hasan, A. Zaman, D. M. Farid, Attention-based scene graph generation: A review, in: 2022 14th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), IEEE, 2022, pp. 210–215.
* [33] E. Nguyen, T. Bui, V. Swaminathan, J. Collomosse, Oscar-net: Object-centric scene graph attention for image attribution, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 14499–14508.
* [34] M. Khademi, O. Schulte, Deep generative probabilistic graph neural networks for scene graph generation, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 11237–11245.
* [35] R. Li, S. Zhang, B. Wan, X. He, Bipartite graph network with adaptive message passing for unbiased scene graph generation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11109–11119.
* [36] Z. Ravichandran, L. Peng, N. Hughes, J. D. Griffith, L. Carlone, Hierarchical representations and explicit memory: Learning effective navigation policies on 3d scene graphs using graph neural networks, in: 2022 International Conference on Robotics and Automation (ICRA), IEEE, 2022, pp. 9272–9279.
* [37] N. Xu, A.-A. Liu, J. Liu, W. Nie, Y. Su, Scene graph captioner: Image captioning based on structural visual representation, Journal of Visual Communication and Image Representation 58 (2019) 477–485.
* [38] K. Ye, A. Kovashka, Linguistic structures as weak supervision for visual scene graph generation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 8289–8299.
* [39] C. Chen, Q. Dou, H. Chen, J. Qin, P.-A. Heng, Synergistic image and feature adaptation: Towards cross-modality domain adaptation for medical image segmentation, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 33, 2019, pp. 865–872.
* [40] A. Kojima, T. Tamura, K. Fukunaga, Natural language description of human activities from video images based on concept hierarchy of actions, International Journal of Computer Vision 50 (2) (2002) 171–184.
* [41] F. Nishida, S. Takamatsu, Japanese-English translation through internal expressions, in: Proceedings of the 9th conference on Computational linguistics-Volume 1, Academia Praha, 1982, pp. 271–276.
* [42] F. Nishida, S. Takamatsu, T. Tani, T. Doi, Feedback of correcting information in post editing to a machine translation system, in: Proceedings of the 12th conference on Computational linguistics-Volume 2, ACL, 1988, pp. 476–481.
* [43] M. W. Lee, A. Hakeem, N. Haering, S. Zhu, Save: A framework for semantic annotation of visual events, in: IEEE Computer Society Conference on CVPR Workshops, 2008, pp. 1–8.
* [44] Q. Abbas, M. E. Ibrahim, M. A. Jaffar, Video scene analysis: An overview and challenges on deep learning algorithms, Multimedia Tools and Applications 77 (16) (2018) 20415–20453.
* [45] C. Hori, R. Yamamoto, J. Kim, T. Hori, B. Harsham, J. R. Hershey, Attention-based multimodal fusion for video description, in: Proceedings of the IEEE International Conference on Computer Vision, 2017.
* [46] C. Wu, L. Wang, W. Guo, W. Liu, Hierarchical attention-based multimodal fusion for video captioning, Neurocomputing 315 (2018) 362–370.
* [47] S. Chen, T. Yao, Y.-G. Jiang, Deep learning for video captioning: A review, in: IJCAI, Vol. 1, 2019.
* [48] S. Islam, N. D. Roy, S. Majumder, G. Saha, Exploring video captioning techniques: A comprehensive survey on deep learning methods, SN Computer Science 2 (2) (2021) 1–28.
* [49] J. Xu, T. Mei, T. Yao, Y. Rui, M. Dai, Learning multimodal attention lstm networks for video captioning, in: Proceedings of the 25th ACM International Conference on Multimedia, ACM, 2017, pp. 1008–1016.
* [50] L. Zhou, T. Yao, A. Torabi, M. Paluri, J. Li, L. Fei-Fei, End-to-end dense video captioning with masked transformer, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8739–8748.
* [51] M. Chen, X. Xu, Q. Liu, X. Jin, H. Sun, Tvt: Two-view transformer network for video captioning, in: Asian Conference on Machine Learning, PMLR, 2018, pp. 785–800.
* [52] J. Lei, Z. Qi, H. Zhang, M.-M. Cheng, Mart: Memory-augmented recurrent transformer for coherent video paragraph captioning, arXiv preprint arXiv:2005.05402.
* [53] K. Lin, L. Li, F. Yu, J. Chen, M.-H. Yang, B. Zhou, Swinbert: End-to-end transformers with sparse attention for video captioning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
* [54] P. H. Seo, B. Zhou, Y. Cui, M. Cho, S. Kim, S. J. Oh, I. S. Kweon, End-to-end generative pretraining for multimodal video captioning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
* [55] A. Yang, A. Zhang, J. Gu, X. Li, Y. Chen, X. Dong, L. Ma, M.-H. Yang, Vid2seq: Large-scale pretraining of a visual language model for dense video captioning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
* [56] V. Iashin, E. Rahtu, A better use of audio-visual cues: Dense video captioning with bi-modal transformer, arXiv preprint arXiv:2005.08271.
* [57] V. Iashin, E. Rahtu, Multi-modal dense video captioning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.
* [58] A. Nagrani, H.-Y. Lee, A. Zisserman, G. Trigeorgis, Learning audio-video modalities from image captions, in: European Conference on Computer Vision, Springer, 2022.
* [59] J. Redmon, A. Farhadi, Yolov3: An incremental improvement, arXiv preprint arXiv:1804.02767.
* [60] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, Y. Sheikh, Openpose: realtime multi-person 2d pose estimation using part affinity fields, IEEE transactions on pattern analysis and machine intelligence 43 (1) (2021) 172–186.
* [61] B. Birmingham, A. Muscat, A. Belz, Adding the third dimension to spatial relation detection in 2d images, in: Proceedings of the 11th International Conference on Natural Language Generation, 2018, pp. 146–151.
* [62] E. E. Aksoy, A. Abramov, J. Dörr, K. Ning, B. Dellen, F. Wörgötter, Learning the semantics of object–action relations by observation, The International Journal of Robotics Research 30 (10) (2011) 1229–1249.
* [63] F. Ziaeetabar, E. E. Aksoy, F. Wörgötter, M. Tamosiunaite, Semantic analysis of manipulation actions using spatial relations, in: 2017 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2017, pp. 4612–4619.
* [64] F. Ziaeetabar, T. Kulvicius, M. Tamosiunaite, F. Wörgötter, Recognition and prediction of manipulation actions using enriched semantic event chains, Robotics and Autonomous Systems 110 (2018) 173–188.
* [65] C. R. G. Dreher, M. Wächter, T. Asfour, Learning object-action relations from bimanual human demonstration using graph networks, IEEE Robotics and Automation Letters (RA-L) 5 (1) (2020) 187–194.
* [66] T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781.
* [67] F. Krebs, T. Asfour, A bimanual manipulation taxonomy, IEEE Robotics and Automation Letters 7 (4) (2022) 11031–11038.
* [68] P. Schramowski, C. Turan, N. Andersen, C. A. Rothkopf, K. Kersting, Large pre-trained language models contain human-like biases of what is right and wrong to do, Nature Machine Intelligence 4 (3) (2022) 258–268.
* [69] M. Gira, R. Zhang, K. Lee, Debiasing pre-trained language models via efficient fine-tuning, in: Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, 2022, pp. 59–69.
* [70] X. Zhou, Y. Zhang, L. Cui, D. Huang, Evaluating commonsense in pre-trained language models, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 34, 2020, pp. 9733–9740.
* [71] Y. Yu, J. Chung, H. Yun, J. Hessel, J. S. Park, X. Lu, R. Zellers, P. Ammanabrolu, R. Le Bras, G. Kim, et al., Fusing pre-trained language models with multimodal prompts through reinforcement learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10845–10856.
* [72] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., Language models are few-shot learners, arXiv preprint arXiv:2005.14165.
* [73] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805.
* [74] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, Exploring the limits of transfer learning with a unified text-to-text transformer, arXiv preprint arXiv:1910.10683.
* [75] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft coco: Common objects in context, in: Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, Springer, 2014, pp. 740–755.
* [76] R. Goyal, S. Ebrahimi Kahou, V. Michalski, J. Materzynska, S. Westphal, H. Kim, V. Haenel, I. Fruend, P. Yianilos, M. Mueller-Freitag, et al., The ”something something” video database for learning and evaluating visual common sense, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 5842–5850.
* [77] T. Wang, T. Yang, M. Danelljan, F. S. Khan, X. Zhang, J. Sun, Learning human-object interaction detection using interaction points, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4116–4125.
* [78] B. Kim, J. Lee, J. Kang, E.-S. Kim, H. J. Kim, Hotr: End-to-end human-object interaction detection with transformers, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 74–83.
* [79] L. Zhou, C. Xu, J. J. Corso, D. Wei, Towards automatic learning of procedures from web instructional videos, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3692–3701.
* [80] J. Gao, C. Sun, Z. Yang, R. Nevatia, Tall: Temporal activity localization via language query, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5267–5276.
* [81] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, J. Carlos Niebles, Dense-captioning events in videos, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 706–715.
* [82] M. Regneri, M. Rohrbach, D. Wetzel, S. Thater, B. Schiele, Grounding action descriptions in videos, Transactions of the Association for Computational Linguistics (TACL) 1 (2013) 25–36.
* [83] Y. Xiang, T. Schmidt, V. Narayanan, D. Fox, Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes, arXiv preprint arXiv:1711.00199.
* [84] V. Bloom, D. Makris, V. Argyriou, G3d: A gaming action dataset and real time action recognition evaluation framework, in: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2012.
* [85] Y. Wen, H. Pan, L. Yang, J. Pan, T. Komura, W. Wang, Hierarchical temporal transformer for 3d hand pose estimation and action recognition from egocentric rgb videos, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 21243–21253.
* [86] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, A. M. Dollar, Yale-cmu-berkeley dataset for robotic manipulation research, The International Journal of Robotics Research 36 (3) (2017) 261–268.
* [87] Y. A. Andrade-Ambriz, S. Ledesma, M. A. Ibarra-Manzano, M. I. Oros-Flores, D. L. Almanza-Ojeda, Human activity recognition using temporal convolutional neural network architecture, Expert Systems with Applications 191 (2022) 116287.
* [88] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, A. Gupta, Hollywood in homes: Crowdsourcing data collection for activity understanding, in: European Conference on Computer Vision (ECCV), 2016.
* [89] H. Xu, A. Das, K. Saenko, R. Doell, J. J. Corso, Msr-vtt: A large video description dataset for bridging video and language, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5288–5296.
* [90] D. L. Chen, W. B. Dolan, Collecting highly parallel data for paraphrase evaluation, in: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011, pp. 190–200.
* [91] B. Wang, L. Ma, W. Zhang, X. Chang, M.-H. Yang, Reconstruction network for video captioning, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
* [92] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, P. H. Torr, Fully-convolutional siamese networks for object tracking, in: Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part II 14, Springer, 2016, pp. 850–865.
* [93] A. v. d. Oord, Y. Li, O. Vinyals, Representation learning with contrastive predictive coding, arXiv preprint arXiv:1807.03748.
* [94] Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2d pose estimation using part affinity fields, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. |
# Shall We Pretrain Autoregressive Language Models with Retrieval?
A Comprehensive Study
Boxin Wang‡1, Wei Ping∗†2, Peng Xu∗2, Lawrence McAfee2, Zihan Liu2, Mohammad
Shoeybi2, Yi Dong2, Oleksii Kuchaiev2, Bo Li1, Chaowei Xiao2,3, Anima
Anandkumar2, Bryan Catanzaro2
∗Equal contribution. ‡Work done during an internship at NVIDIA. 1UIUC.
2NVIDIA. 3University of Wisconsin, Madison. †Correspondence to: Wei Ping
<EMAIL_ADDRESS>
###### Abstract
Large decoder-only language models (LMs) can be largely improved in terms of
perplexity by retrieval (e.g., Retro), but the impact of retrieval on text
generation quality and downstream task accuracy is unclear. Thus, it is still an open
question: _shall we pretrain large autoregressive LMs with retrieval?_ To
answer it, we perform a comprehensive study on a _scalable pretrained_
retrieval-augmented LM (i.e., Retro) compared with standard GPT and retrieval-
augmented GPT incorporated at fine-tuning or inference stages. We first
provide the recipe to reproduce Retro up to 9.5B parameters while retrieving a
text corpus with 330B tokens. Based on that, we have the following novel
findings: _i)_ Retro outperforms GPT on text generation with much less
degeneration (i.e., repetition), moderately higher factual accuracy, and
slightly lower toxicity with a nontoxic retrieval database. _ii)_ On the LM
Evaluation Harness benchmark, Retro largely outperforms GPT on knowledge-
intensive tasks, but is on par with GPT on other tasks. Furthermore, we
introduce a simple variant of the model, Retro++, which largely improves the
open-domain QA results of the original Retro (e.g., EM score $+8.6$ on Natural
Questions) and significantly outperforms retrieval-augmented GPT in both
finetuning and zero-shot evaluation settings. Our findings highlight the
promising direction of pretraining autoregressive LMs with retrieval as future
foundation models. We release our code and model at:
https://github.com/NVIDIA/Megatron-LM/blob/main/tools/retro/README.md.
## 1 Introduction
Large language models (LMs), including masked LMs (e.g., BERT (Devlin et al.,
2018)), autoregressive LMs (e.g., GPT (Brown et al., 2020)), and encoder-
decoder LMs (e.g., T5 (Raffel et al., 2020), BART (Lewis et al., 2020a)), have
obtained state-of-the-art results for various NLP tasks. Among them, the
autoregressive LMs like GPT-3 (Brown et al., 2020) and GPT-4 (OpenAI, 2023)
demonstrate noticeable in-context learning ability and excellent long-form
text generation results. Due to its importance, the community has spent
considerable efforts to scale up such autoregressive generative LMs with more
data and parameters and observed significant breakthroughs in a variety of
real-world applications (e.g., Brown et al., 2020), including open-ended text
generation and various downstream tasks (e.g., question answering). The
successful public examples include GPT-3 (w/ 170B parameters) Brown et al.
(2020), Gopher (280B) (Rae et al., 2021), Megatron-Turing (530B) (Smith et
al., 2022), and PaLM (540B) (Chowdhery et al., 2022).
Although large-scale autoregressive LMs have achieved huge successes, they
also suffer from several weaknesses. First, they require a huge number of
model parameters to memorize world knowledge, which makes them costly to
deploy. Second, it is difficult to safeguard factual accuracy, so they may
provide users with incorrect information (Lee et al., 2022). Third, it is
expensive to update the model knowledge learned during pretraining with up-to-
date facts (Meng et al., 2022), yielding outdated answers (Lewis et al.,
2020b).
To mitigate the problems above, one line of research proposes to improve
language models with retrieval. The retrieval process can be integrated into
LMs at: _i)_ fine-tuning stage (Karpukhin et al., 2020; Lewis et al., 2020b;
Guu et al., 2020), or _ii)_ pretraining stage Borgeaud et al. (2022); Izacard
et al. (2022). Most previous work augments BERT or encoder-decoder LMs with
retrieval at fine-tuning stage, demonstrating successes for knowledge-
intensive NLP tasks (Guu et al., 2020; Karpukhin et al., 2020; Lewis et al.,
2020b; Khandelwal et al., 2020). However, it remains relatively underexplored
to pretrain autoregressive (decoder-only) LMs with retrieval, especially
considering the noticeable success of ChatGPT (OpenAI, 2022) that underscores
the extreme importance of the autoregressive LMs.
Most recently, Retro Borgeaud et al. (2022) proposes to pretrain
autoregressive LMs with a retrieval module, which is practically scalable to
large-scale pretraining from scratch by retrieving billions of tokens, and
largely reduces model parameters while achieving lower perplexity than
standard GPT. It also provides the flexibility to update the knowledge stored
in LMs Petroni et al. (2019) by updating the retrieval database without
training LMs again. The success of pretraining LMs with retrieval raises an
important question for the community if we want to pretrain autoregressive LMs
in the future: _Shall we pretrain autoregressive (decoder-only) LMs with
retrieval by default or not?_ However, previous work (Borgeaud et al., 2022)
omits the important evaluation of whether a model like Retro obtains
comparable or even better results on open-ended text generation and various
NLP downstream tasks, beyond lower perplexity on a held-out dataset compared
to standard GPT.
To answer the above _question_ and bridge the missing gap, we perform an
extensive study on Retro, as to the best of our knowledge, Retro is the only
retrieval-augmented autoregressive LM that supports large-scale pretraining
with retrieval on massive pretraining corpora with hundreds of billions or
trillions of tokens. Our comprehensive study sheds light on the promising
direction of pretraining autoregressive LMs with retrieval to serve as future
foundation models, as they overall outperform standard GPT models in terms of
perplexity, text generation quality, and downstream task performances,
especially for knowledge-intensive tasks, including open-domain QA.
## 2 Key Findings
| Model Name | #/ Retrieval Tokens | When to Involve Retrieval | Architecture | Initialization | Re-indexing |
|---|---|---|---|---|---|
| Retro (Borgeaud et al.) | $O(10^{12})$ | Pretraining | decoder-only | From Scratch / Pretrained GPT | No |
| Atlas (Izacard et al.) | $O(10^{9})$ | Pretraining | encoder-decoder | Pretrained T5 | Yes |
| REALM (Guu et al.) | $O(10^{9})$ | Pretraining | encoder-only | Pretrained BERT | Yes |
| RAG (Lewis et al.) | $O(10^{9})$ | Fine-tuning | encoder-decoder | Pretrained BART | No |
| DPR (Karpukhin et al.) | $O(10^{9})$ | Fine-tuning | encoder-only | Pretrained BERT | No |
| FiD (Izacard and Grave) | $O(10^{9})$ | Fine-tuning | encoder-decoder | Pretrained T5 | No |
| KNN-LM (Khandelwal et al.) | $O(10^{9})$ | Inference | decoder-only | Pretrained GPT | No |
Table 1: Comparison of different retrieval-augmented models in terms of #/
retrieval tokens, which stage to incorporate retrieval into LMs, the
architecture of the backbone LM, whether it requires initialization from the
existing LM checkpoint, and whether it requires expensive re-indexing. Retro
is the most scalable retrieval-augmented LM due to its chunk-level retrieval
and scalable decoder-only autoregressive LM backbone (Thoppilan et al., 2022;
Brown et al., 2020; Smith et al., 2022; Chowdhery et al., 2022) without
expensive retrieval index refresh.
We successfully reproduce and pretrain Retro (Borgeaud et al., 2022) from
scratch (the official implementation and pretrained checkpoints are not
open-sourced), with parameter sizes ranging from 148M up to 9.5B, retrieving
from a text corpus with over 330B tokens. In addition, we discuss the inference
strategy of Retro for text generation that is not covered in Borgeaud et al.
(2022), and perform a large-scale evaluation in different scenarios.
To minimize confounding factors between Retro and GPT, we use the same decoder
architecture, the same hyperparameters, and the same pretraining corpus, and
pretrain both models for the same number of steps. We
highlight our novel findings for Retro and GPT as follows:
### 2.1 Text Generation
We conduct a systematic study (see §5) to understand and analyze Retro by
evaluating its open-ended text generation quality via human and automatic
evaluations. Retro exhibits better performance than GPT with considerably less
_repetition_ , moderately higher _factual accuracy_ , and slightly lower
_toxicity_ levels. Retro is on par with GPT in terms of _fluency_ and
_coherence_.
### 2.2 LM Evaluation Harness Benchmark
In terms of zero-shot evaluation on the standard benchmark, Retro overall
improves upon GPT across different tasks, significantly outperforming GPT
on knowledge-intensive tasks such as Hellaswag and BoolQ while achieving
similar performance on other tasks. Specifically, we evaluate the zero-shot
capabilities of Retro and GPT on nine representative NLP downstream
classification tasks (see §6). Additionally, our findings demonstrate that
Retro can leverage retrieved neighbors and significantly improves accuracy for
knowledge-intensive tasks in zero-shot evaluations. In contrast, incorporating
these retrieved neighbors directly during the inference stage can hurt GPT’s
performance. These results further substantiate the potential of Retro, which
is pre-trained with retrieval capabilities, as a promising approach.
### 2.3 Open-domain QA
For open-domain QA tasks, Retro achieves considerably superior performance
than retrieval-augmented GPT that incorporates retrieval during fine-tuning
across different model sizes and datasets. Specifically, we propose a variant
of the model, Retro++, for open-domain QA that feeds the most relevant
evidence into the decoder and more evidence into its encoder, which is
different from the original version (Borgeaud et al., 2022). Retro++ largely
improves the exact match (EM) score on Natural Questions from 40.9% to 54.1%,
significantly higher than the 45.5% reported for the original Retro.
## 3 Related Work
Retrieval has been applied in various NLP tasks for years, including question
answering (QA) (e.g., Bilotti et al., 2007), machine translation (e.g., Zhang
et al., 2018), and conversation (Shuster et al., 2021; Thoppilan et al., 2022;
Komeili et al., 2021). In particular, language models have been augmented with
retrieval at different stages, including inference time (Khandelwal et al.,
2020; Yogatama et al., 2021), fine-tuning stage (Karpukhin et al., 2020; Lewis
et al., 2020b; Guu et al., 2020), and pretraining stage Borgeaud et al.
(2022); Izacard et al. (2022).
LMs have been augmented with retrieval at the fine-tuning stage for downstream
tasks, primarily for open-domain QA. DPR (Karpukhin et al., 2020) finetunes
one BERT to encode questions and another BERT to encode passages within a
dual-encoder framework, using a contrastive loss to align the hidden
representations of each question and the passage containing its answer. RAG (Lewis et al.,
2020b) studies the fine-tuning recipe for retrieval-augmented generation
models, especially on open-domain QA tasks. FiD (Izacard and Grave, 2021)
improves RAG with a better LM backbone T5, and fuses multiple retrieved
passages to the decoder during fine-tuning to further improve QA accuracy.
WebGPT (Nakano et al., 2021) leverages a web search engine and fine-tunes GPT
using reinforcement learning with human feedback (RLHF) for reference
generation and factuality improvement, which is orthogonal to our work that
focuses on pretraining with retrieval. The proposed RLHF can be applied to
Retro as well.
REALM (Guu et al., 2020) performs both unsupervised pretraining and supervised
fine-tuning strategies for retrieval-augmented BERT model in open-domain QA.
Their pretraining involves asynchronous re-embedding and re-indexing all
documents every several hundred training steps, which quickly becomes
impractical for a training corpus with trillions of tokens. Atlas (Izacard et al.,
2022) uses a similar approach but augments the T5 architecture (Raffel et al.,
2020) with retrieval at both pre-training and fine-tuning. Before pretraining,
it first initializes the encoder-decoder LM backbone with pretrained T5, and
the dense retriever with pretrained Contriever (Izacard et al.). During
pretraining, it also applies asynchronous index refresh every 1000 steps.
In contrast, Retro (Borgeaud et al., 2022) embeds and indexes the whole
training corpus at chunk level (e.g., chunk size = 64) with a frozen BERT
before pretraining. During pretraining, the model relies on a trainable
bidirectional encoder to embed the retrieved chunks of raw text. The GPT
decoder then “selects” the relevant pieces of evidence from the encoder side
by a chunk-wise cross-attention. This architecture design enables LM
pretraining on hundreds of billions of tokens by retrieving from trillions of tokens.
See Table 1 for a complete comparison of retrieval-augmented LMs.
Figure 1: Visualization of the padding design for Retro. (a) The “left
padding” rule. (b) Retrieval step $=1$. (c) Separate question and answer
chunks.
## 4 Model and Implementation
In this section, we first introduce preliminaries of Retro, then provide
detailed recipe of our implementation, including retrieval database,
pretraining, and retrieval-augmented finetuning and generation.
| Model | Small | Medium | XL | XXL |
|---|---|---|---|---|
| GPT | 17.76 | 13.18 | 10.18 | 7.86 |
| Retro ($k=2$) | 12.99 | 10.06 | 8.10 | 6.72 |
Table 2: Validation perplexity of pretrained GPT and Retro on the held-out
dataset. We report the results with $k=2$ neighbors in this Table, and we
observe the same trend of improvements with larger $k$ as in Borgeaud et al.
(2022).
### 4.1 Preliminaries of Retro
Retro is an autoregressive language model enhanced with a retrieval module
that utilizes chunk-wise retrieval, enabling it to scale up to trillions of
tokens. The model splits both the input sequence and retrieval datastore into
sequences of chunks. Retro retrieves nearest neighbor chunks from the
retrieval database using the previous chunk and fuses this information with
the context from preceding chunks to guide the generation of the next chunk.
To maintain causality, the model can only use the nearest neighbors of the
previous chunk for the autoregressive generation.
### 4.2 Implementation
As Retro has no official open-source implementation and pretrained
checkpoints, we reproduce and pretrain Retro from scratch on our own.
#### 4.2.1 Retrieval Database
We build the retrieval database with the whole pretraining dataset mentioned
in §B. In this way, Retro and a standard GPT of similar size can be compared
fairly, as they are pretrained using the same information from the
pretraining corpus. The retrieval database is a key-value database, where
values are chunks split from the pretraining corpus, and the keys are
corresponding BERT embeddings. Our pretraining dataset with 330B tokens yields
a retrieval database consisting of 5.3B chunks in total with chunk size
$m=64$.
Retrieval Index. We use the Faiss index (Johnson et al., 2019) as the
implementation for the dense retriever to search for approximate nearest
neighbors in the BERT embedding space. We configure the Faiss index to cluster
the dense embeddings into $2^{22}$ centroids accelerated with Hierarchical
Navigable Small World graphs (Malkov and Yashunin, 2018) to speed up the
query. We also encode the embeddings with optimized product quantization (Gray
and Neuhoff, 1998; Ge et al., 2014) to compress memory overhead and further
improve the query throughput. As a result, we can achieve 4ms per query over
the whole pretraining corpus averaged for each chunk on a DGX-2H node. One may
find more details in Appendix §A.
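A hedged sketch of such an index using the Faiss index-factory interface is given below. The factory string, embedding dimension, and `nprobe` value are assumptions for illustration: the configuration above calls for $2^{22}$ IVF centroids with an HNSW coarse quantizer and OPQ-compressed codes, whereas the demo uses a small centroid count so it can be trained on toy data.

```python
import faiss
import numpy as np

d = 768  # assumed BERT embedding dimension

# Illustrative spec: OPQ rotation + product quantization over an IVF
# whose coarse quantizer is an HNSW graph. The paper-scale index would
# use IVF4194304 (2^22 centroids); IVF4096 keeps this demo trainable.
index = faiss.index_factory(d, "OPQ32,IVF4096_HNSW32,PQ32")

# Toy stand-in for BERT chunk embeddings (the real database: 5.3B chunks).
vecs = np.random.rand(100_000, d).astype("float32")
index.train(vecs)
index.add(vecs)

faiss.extract_index_ivf(index).nprobe = 64  # IVF lists visited per query
_, neighbor_ids = index.search(vecs[:4], 2)  # top-2 neighbor chunks
```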
#### 4.2.2 Pretraining Retro Models
We use the same transformer configurations (#/ layers, hidden size, attention
heads) and pretrain both Retro and standard GPT from scratch. Specifically, we
pretrain Retro across different parameter sizes, ranging from 148M (Small) and
410M (Medium) to 1.5B (XL) and 9.5B (XXL). We also use the same pretraining
schedules to pretrain Retro and GPT given the same number of steps. We list
the validation perplexity of GPT and Retro after pretraining in Table 2. We
present more details in Appendix §B, including pretraining schedules,
computational cost (GPU hours), and model architectures.
| Metrics | GPT (Small) | Retro (Small) | GPT (Medium) | Retro (Medium) | GPT (XL) | Retro (XL) | GPT (XXL) | Retro (XXL) |
|---|---|---|---|---|---|---|---|---|
| Repetition % | $2.86\%$ | $\textbf{2.26}\%$ | $1.70\%$ | $\textbf{1.50}\%$ | $1.44\%$ | $\textbf{0.96}\%$ | $1.40\%$ | $\textbf{1.12}\%$ |
| Self-BLEU | $0.29$ | $0.3$ | $0.29$ | $0.3$ | $0.29$ | $0.29$ | $0.31$ | $0.31$ |
| Zipf Coefficient | $0.98$ | $0.98$ | $0.96$ | $0.98$ | $0.97$ | $0.98$ | $0.96$ | $0.96$ |
Table 3: Automatic evaluation on text generation quality for Retro and GPT
across different sizes.
#### 4.2.3 Retrieval-augmented Generation
We discuss the generation and inference recipe in the batch-processing mode
for Retro, which is missing from the previous literature.
“Left Padding” Rule. The chunk-wise retrieval of Retro improves scalability
but enforces chunk-wise alignment constraints, leading to issues in
conditional generations with short contexts. When the sequence length is less
than the chunk size, Retro cannot utilize its retrieval capability as there is
no previous chunk for retrieval. Instead, Retro adds padding tokens to the
left of the context, allowing Retro to leverage the retrieved neighbors from
the previous context to guide the generation of the next token (Figure 1(a)).
We summarize this general principle in Retro as the “left padding” rule, as it
leverages the contextual information for retrieval to the fullest. This rule
remains preferable for input sequences longer than the chunk size, as it
ensures the closest and rightmost context is used for retrieval, making it
more relevant for next token prediction (see Figure 1(b)).
Frequency of Retrieval. In order to efficiently generate long sequences with
Retro, we note a flexible trade-off between retrieval-augmented generation and
computation overhead. The direct method involves retrieval at every decoding
step, maximizing the use of the retrieval module but increasing computational
overhead (Figure 1(b), retrieval step $=1$). Another approach retrieves
neighbors at the frequency of the chunk size, reducing overhead but
sacrificing accuracy (Appendix Figure 3(b), retrieval step $=64$). To balance
these factors, we introduce a flexible retrieval step, which allows model
practitioners to choose how many tokens to generate with the current retrieved
neighbors before updating the context. Smaller retrieval steps are preferred
for downstream tasks with short answers to ensure accurate neighbors, while
larger steps are used for efficient generation of long passages. We provide
more details in Appendix §C.
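The following sketch illustrates the flexible retrieval step; `retrieve_neighbors` and `decode_one_token` are hypothetical stand-ins for the index lookup and the Retro decoder, so this is a schematic of the control flow rather than the actual implementation.

```python
CHUNK = 64  # Retro chunk size m

def generate(context, num_new_tokens, retrieval_step,
             retrieve_neighbors, decode_one_token):
    """Generate tokens, refreshing retrieved neighbors every
    `retrieval_step` tokens: 1 retrieves at every decoding step,
    while CHUNK retrieves only once per generated chunk."""
    tokens = list(context)
    # Query with the rightmost chunk of context ("left padding" rule).
    neighbors = retrieve_neighbors(tokens[-CHUNK:])
    for i in range(num_new_tokens):
        if i > 0 and i % retrieval_step == 0:
            neighbors = retrieve_neighbors(tokens[-CHUNK:])
        tokens.append(decode_one_token(tokens, neighbors))
    return tokens
```

Smaller values of `retrieval_step` keep the neighbors maximally fresh at a higher query cost, matching the trade-off described above.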
#### 4.2.4 Batched Training for Downstream Tasks
When fine-tuning Retro for downstream tasks (e.g., QA), it is crucial to
separate context or question from the candidate answer chunk to maintain
causality in autoregressive modeling. This leads to a modified "left padding"
rule: pad context chunks from the left and answer chunks from the right
(Figure 1(c)). Padding aligns input sequences with the chunk size, enabling
batch-mode training and inference for faster evaluation. By adding padding
chunks to the right, sequences with varying chunk numbers can be processed
together, further improving efficiency.
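A minimal sketch of this chunk-aligned padding is given below; the pad token id is a placeholder.

```python
PAD, CHUNK = 0, 64  # placeholder pad id; Retro chunk size m

def pad_for_finetuning(context, answer):
    """Left-pad the context and right-pad the answer to whole chunks,
    so the answer starts on a chunk boundary (separate Q/A chunks)."""
    left = (-len(context)) % CHUNK    # pads prepended to the context
    right = (-len(answer)) % CHUNK    # pads appended after the answer
    return [PAD] * left + list(context) + list(answer) + [PAD] * right

# Example: a 50-token question plus a 10-token answer becomes
# 14 pads + question | answer + 54 pads = two aligned chunks.
seq = pad_for_finetuning(range(50), range(10))
assert len(seq) % CHUNK == 0
```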
## 5 Open-ended Text Generation
In this section, we delve into the problem of open-ended text generation,
which refers to tasks of generating coherent continuation given the preceding
prompt. Given that this problem has never been studied for Retro before, we
bridge the gap and evaluate the open-ended text generation of Retro
compared to GPT from three aspects: $a$) text quality, $b$) factuality, and
$c$) toxicity.
### 5.1 Text Quality
We perform both automatic and human evaluations.
#### 5.1.1 Automatic Evaluation
| Decoding | Models | Factual $\text{NE}_{\text{ER}}\downarrow$ | Factual $\text{Entail}_{\text{R}}\uparrow$ | Nonfactual $\text{NE}_{\text{ER}}\downarrow$ | Nonfactual $\text{Entail}_{\text{R}}\uparrow$ |
|---|---|---|---|---|---|
| Top-p=0.9 | Retro | 52.14% | 3.11% | 56.75% | 2.06% |
| Top-p=0.9 | GPT | 52.42% | 2.93% | 56.82% | 2.04% |
| Greedy | Retro | 37.42% | 16.66% | 42.45% | 10.88% |
| Greedy | GPT | 39.87% | 12.91% | 45.02% | 8.75% |
(a) The factuality on the FactualityPrompts benchmark.
| Models | QA Format MC1$\uparrow$ | QA Format MC2$\uparrow$ | Null Format MC1$\uparrow$ | Null Format MC2$\uparrow$ |
|---|---|---|---|---|
| GPT | $0.222$ | $0.377$ | $0.234$ | $0.435$ |
| Retro (pretraining) | 0.239 | 0.382 | 0.248 | 0.439 |
| Retro (wiki) | - | - | $0.242$ | $0.437$ |
| Retro (DPR) | - | - | $0.245$ | 0.439 |
(b) The truthfulness on the TruthfulQA benchmark.
Table 4: Evaluation of factuality and truthfulness of Retro (XL) and GPT (XL).
Evaluation Metrics. We follow prior work (Holtzman et al., 2019; Zhu et al.,
2018) and consider the following metrics: Repetition % measures the percentage
of generations containing repetitive phrases, Self-BLEU evaluates the
diversity of the generations, and the Zipf Coefficient measures the use of
vocabulary. See the detailed definitions and evaluation setup in Appendix §D.1.
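As an illustration, the Zipf coefficient can be estimated as the negative slope of a linear fit in log-log rank-frequency space over a generation's unigram counts; a minimal sketch follows (the exact setup of prior work may differ).

```python
from collections import Counter
import numpy as np

def zipf_coefficient(tokens):
    """Slope magnitude of the best-fit line relating log(rank) to
    log(frequency) over the generation's unigram distribution."""
    freqs = np.sort(np.array(list(Counter(tokens).values()), float))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope
```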
Experimental Results. Our results are shown in Table 3. We note that Retro can
reduce the percentage of repetition compared with GPT by a large margin across
different sizes. Specifically, Retro mitigates 21% of repetitions on average
compared with GPT across different sizes. This suggests the retrieval module
can help reduce text degeneration by referencing retrieved human text.
Regarding vocabulary use and generation diversity, we do not observe major
differences between GPT and Retro, which implies these properties are
primarily dependent on the decoder component of LMs.
#### 5.1.2 Human Evaluation
We also conduct human evaluations to further verify the quality of the
generated text.
Evaluation Metrics. We ask human annotators to annotate each generation with
fluency scores, which measure the human readability and grammatical errors
from 1 (Not human-readable) to 5 (Very fluent), and coherence scores, which
measure the relevance between the prompt and the corresponding continuations
from 1 (Not Relevant) to 5 (Very Relevant). More details can be found in §D.2.
Experimental Results. We present the human vote histogram in Appendix Figure
4. We observe that most votes concentrate in the regime of scores $\geq 3$ for
both relevance and fluency, which indicates that our generated text from both
models is of high quality and closely related to the prompts. The differences
between GPT and Retro are subtle, with average relevance (3.726) and fluency
(3.826) scores of Retro slightly outperforming the average relevance score
(3.715) and fluency (3.818) scores of GPT.
From both automatic and human evaluations, we conclude that although retrieval
adds some complexity to the generation process, we see no sign of degeneration
in Retro compared to GPT. Moreover, Retro is shown to reduce repetition and
slightly improve text generation quality.
### 5.2 Factuality
Factuality refers to consistency with ground-truth knowledge sources. We
leverage two well-established benchmarks (Lee et al., 2022; Lin et
al., 2021) to evaluate the factual accuracy of Retro and GPT. We put detailed
experimental setup and metrics in the §E.
Factuality Prompts. We present the results on the FactualityPrompts benchmark
in Table 4(a), where we follow the protocol of Lee et al. (2022) and use
different decoding strategies for evaluation. We observe that overall Retro
demonstrates better factual accuracy and makes fewer hallucinations. Our
observation also aligns with Lee et al. (2022) in that, unlike top-$p$
sampling with randomness, greedy decoding gives much better factual accuracy
at the cost of sample diversity. Retro shows a significant improvement over
GPT under both top-$p$ and greedy decoding.
TruthfulQA. We also evaluate truthfulness in Table 4(b), where we observe the
consistent conclusion that Retro tends to be more resilient to human
falsehoods than GPT across different prompts (demonstrated in Appendix Tables
12 and 13). Since the null format yields consistently better performance, we
fix this prompt and conduct ablation studies with different retrieval
databases. We observe that the pretraining database and the DPR database
yield the best factual accuracy, which suggests that ($i$) a finetuned
retriever or ($ii$) retrieval from a larger corpus helps improve factual
accuracy. In addition, the improvement margin for Retro on TruthfulQA is not
as significant as on FactualityPrompts. This is likely because TruthfulQA is
an adversarial dataset containing difficult question types, such as logical
falsehoods and indexical errors (e.g., “Are you an artificial intelligence?”;
more examples can be found in Appendix Table 14). In these cases, retrieval
from the database does not effectively aid in answering such questions.
Models | Retrieval Database | Exp. Max. Toxicity ($\downarrow$) | | | Toxicity Prob. ($\downarrow$) | |
---|---|---|---|---|---|---|---
 | | Full | Toxic | Nontoxic | Full | Toxic | Nontoxic
GPT | - | $0.44$ | $0.64$ | $0.39$ | 37% | 74% | 27%
Retro (top-$N=2$, top-$K=2$) | Pretraining | $0.46$ | $0.66$ | $0.40$ | 40% | 76% | 30%
Retro (top-$N=5$, top-$K=2$) | Pretraining | $0.46$ | $0.66$ | $0.40$ | 39% | 77% | 29%
Retro (top-$N=10$, top-$K=2$) | Pretraining | $0.46$ | $0.66$ | $0.40$ | 39% | 76% | 29%
Retro (top-$N=2$, top-$K=2$) | Wiki | $0.43$ | $0.64$ | $0.38$ | 35% | 73% | 25%
Retro (top-$N=5$, top-$K=2$) | Wiki | $0.43$ | $0.64$ | $0.38$ | 35% | 71% | 26%
Retro (top-$N=10$, top-$K=2$) | Wiki | $0.43$ | $0.64$ | $0.38$ | 35% | 71% | 26%
Table 5: Evaluation of LM toxicity for GPT (XL) and Retro (XL). Model toxicity
is evaluated on RealToxicityPrompts. Full refers to the full set of prompts,
Toxic and Nontoxic refer to the toxic and nontoxic subsets of prompts.
$\downarrow$ means the lower, the better. Retro can filter from top-$N$
nearest neighbors and select the top-$K$ nontoxic neighbors for retrieval.
### 5.3 Toxicity
The toxicity of LMs refers to the possibility that an LM outputs toxic
generations. In this study, we follow the RealToxicityPrompts benchmark
(Gehman et al., 2020) to evaluate the potential toxicity of Retro and GPT.
Evaluation Metrics. Following Gehman et al. (2020), we report the _Expected
Maximum Toxicity_ , which evaluates the toxicity of the worst-case generation,
as well as _Toxicity Probability_ that estimates the empirical frequency of
generating toxic language. See more details and setup in §F.
Experimental Results. The toxicity of the LMs is shown in Table 5. Compared
to GPT, we note that Retro with the pretraining corpus even increases the
toxicity of the generations. Moreover, we observe larger toxicity increases
on toxic prompts than on nontoxic prompts. This suggests that when prompted
with toxic contexts, Retro is more likely to retrieve toxic evidence and thus
amplify the issue. To confirm this toxicity amplification, we conduct two
ablation studies: ($i$) We save the retrieval evidence and calculate the
Expected Mean Toxicity of both generations and retrieval evidence. We observe
that the toxicity of the retrieval evidence ($0.177$) is higher than that of
the generations ($0.146$). ($ii$) We change the retrieval database to the
Wikipedia database, which shows lower toxicity for retrieval evidence
($0.132$). As a result, we observe that Retro with the Wikipedia retrieval
database can help mitigate the toxicity of GPT, as shown in Table 5, with the
toxicity probability dropping from $37\%$ to $35\%$. We also note that using
a larger $N$ for nearest neighbors and filtering the retrieval evidence by
toxicity is not very helpful. We hypothesize that with larger $N$, the
similarity between the input and the retrieval evidence is limited, yielding
low cross-attention on the retrieval evidence.
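For illustration, a minimal sketch of the top-$N$/top-$K$ nontoxic filtering
described above; `retrieve_neighbors` and `score_toxicity` are hypothetical
stand-ins for the Faiss lookup and the Perspective API call, not the exact
implementation used in our experiments.

```python
def filter_nontoxic_neighbors(chunk, retrieve_neighbors, score_toxicity,
                              top_n=10, top_k=2):
    """Retrieve top_n candidate chunks, keep the top_k least toxic ones."""
    candidates = retrieve_neighbors(chunk, k=top_n)   # nearest neighbors first
    scored = [(score_toxicity(c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0])             # least toxic first
    return [c for _, c in scored[:top_k]]
```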
Tasks | Small | | Medium | | XL | | XXL |
---|---|---|---|---|---|---|---|---
 | GPT | Retro | GPT | Retro | GPT | Retro | GPT | Retro
Knowledge-intensive Tasks | | | | | | | |
HellaSwag | $31.3$ | $36.2$ $\uparrow$4.9 | $43.2$ | $46.2$ $\uparrow$3.0 | $56.7$ | $59.0$ $\uparrow$2.3 | $72.3$ | $70.6$ $\downarrow$1.7
BoolQ | $59.3$ | $61.8$ $\uparrow$2.5 | $57.4$ | $57.2$ $\downarrow$0.2 | $62.2$ | $62.7$ $\uparrow$0.5 | $67.3$ | $70.7$ $\uparrow$3.4
Knowledge-nonintensive Tasks | | | | | | | |
Lambada | $41.7$ | $41.4$ $\downarrow$0.3 | $54.1$ | $55.0$ $\uparrow$0.9 | $63.9$ | $64.0$ $\uparrow$0.1 | $73.9$ | $72.7$ $\downarrow$1.2
RACE | $34.6$ | $32.5$ $\downarrow$2.1 | $37.3$ | $37.3$ $\uparrow$0.0 | $40.8$ | $39.9$ $\downarrow$0.9 | $44.3$ | $43.2$ $\downarrow$1.1
PiQA | $64.3$ | $64.8$ $\uparrow$0.5 | $70.2$ | $68.7$ $\downarrow$1.5 | $73.7$ | $74.1$ $\uparrow$0.4 | $78.5$ | $77.4$ $\downarrow$1.1
WinoGrande | $52.4$ | $52.0$ $\downarrow$0.4 | $53.8$ | $55.2$ $\uparrow$1.4 | $59.0$ | $60.1$ $\uparrow$1.1 | $68.5$ | $65.8$ $\downarrow$2.7
ANLI-R2 | $35.1$ | $36.2$ $\uparrow$1.1 | $33.5$ | $33.3$ $\downarrow$0.2 | $34.3$ | $35.3$ $\uparrow$1.0 | $32.2$ | $35.5$ $\uparrow$3.3
HANS | $51.5$ | $51.4$ $\downarrow$0.1 | $50.5$ | $50.5$ $\uparrow$0.0 | $50.1$ | $50.0$ $\downarrow$0.1 | $50.8$ | $56.5$ $\uparrow$5.7
WiC | $50.0$ | $50.0$ $\uparrow$0.0 | $50.2$ | $50.0$ $\downarrow$0.2 | $47.8$ | $49.8$ $\uparrow$2.0 | $52.4$ | $52.4$ $\uparrow$0.0
Avg. Acc. ($\uparrow$) | $46.7$ | $47.4$ $\uparrow$0.7 | $50.0$ | $50.4$ $\uparrow$0.4 | $54.3$ | $55.0$ $\uparrow$0.7 | $60.0$ | $60.5$ $\uparrow$0.5
Table 6: Accuracy (Acc.) on nine downstream tasks evaluated in the zero-shot
setting for pretrained LMs with different parameter sizes.
## 6 LM Evaluation Harness Benchmark
Besides open-ended text generation, it is also important to examine the
generalization of Retro on various downstream tasks, an analysis that is also
missing from the literature. Therefore, we use the LM Evaluation Harness
Benchmark (Gao et al., 2021) and consider the following nine representative
NLP downstream tasks. See more details in §G.
Zero-shot evaluation. We present the zero-shot evaluation results in Table 6.
We find that on average Retro improves downstream task accuracy across
different tasks. Moreover, we observe larger improvements on knowledge-
intensive tasks such as HellaSwag and BoolQ (6 of 8 cases), which require
factual knowledge to guide the reasoning. Note that zero-shot evaluation
results are sensitive to prompt formats, so the results have some variance.
Retrieval-augmented GPT at inference time. We have seen that retrieval
significantly improves Retro across different downstream tasks in the zero-
shot setting. In this ablation study, we append the retrieval evidence of
Retro to the beginning of the context to see whether retrieval can also help
GPT at inference time. We evaluate the zero-shot accuracy after prepending
the top-$1$ retrieval evidence. The results are shown in Appendix Table 16.
We observe that directly prepending the evidence from the retrieval database
disrupts the GPT context in the zero-shot setting, yielding a low accuracy of
around $24.5\%$. We hypothesize that the retrieval evidence can be noisy;
without pretraining or proper fine-tuning, GPT in the zero-shot setting puts
too much attention on the noisy evidence, thus giving low downstream
accuracy.
## 7 Open-domain Question Answering
In this section, we study two widely used open-domain QA datasets, Natural
Questions (NQ) and TriviaQA.
### 7.1 Experimental Setup
Retrieved evidence as context. The original Retro work leverages the
retrieved evidence (i.e., passages) by feeding them all into the encoder. We
argue that the most relevant evidence is more important than the rest and
should be used as the context for the question. Therefore, the top relevant
evidence should be fed to the decoder, and the rest of the evidence can be
incorporated by the encoder. In our implementation, we prepend the top-1
relevant passage to the decoder input and reformat the input with Template A:
“title: {title}, source: {source} \n question: {question} \n answer:
{answer}”. For the models without retrieved evidence in the context, we
follow Borgeaud et al. (2022) and format the input with Template B:
“question: {question} \n answer: {answer}”.
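As a minimal sketch, the two templates can be implemented as below;
`TEMPLATE_A`, `TEMPLATE_B`, and `format_qa_input` are our own illustrative
names, and the passage field names are assumptions about the data format.

```python
# Hypothetical template constants following the placeholders quoted above.
TEMPLATE_A = "title: {title}, source: {source} \n question: {question} \n answer: {answer}"
TEMPLATE_B = "question: {question} \n answer: {answer}"

def format_qa_input(question, answer, top_passage=None):
    """Format one QA example; include the top-1 DPR passage when provided."""
    if top_passage is None:
        return TEMPLATE_B.format(question=question, answer=answer)
    return TEMPLATE_A.format(title=top_passage["title"],
                             source=top_passage["text"],
                             question=question,
                             answer=answer)
```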
In addition to several baseline methods in Table 7, we compare the following
models: 1) GPT (closed-book) simply finetunes a pretrained GPT model with
input Template B without using any retrieved documents. 2)
$\text{RAG}_{\textit{GPT}}$ applies RAG finetuning (Lewis et al., 2020b) to
GPT, placing the retrieved evidence in its context. It utilizes the top
documents retrieved by DPR with input Template A and finetunes a pretrained
GPT model, representing the incorporation of retrieval into GPT at the
fine-tuning stage. 3) Retro encodes the retrieved evidence with the encoder
and finetunes a pretrained Retro model with input Template B. 4) Retro++
finetunes a pretrained Retro model with the top retrieved evidence included
in input Template A, while leaving the rest of the evidence to the encoder.
More details can be found in §H.
Method | NQ | TriviaQA
---|---|---
GPT (closed-book) | 36.1 | 45.1
REALM (Guu et al., 2020) | 40.4 | -
DPR (Karpukhin et al., 2020) | 41.5 | 56.8
$\text{RAG}_{\textit{BART}}$ (Lewis et al., 2020b) | 44.5 | 56.1
$\text{RAG}_{\textit{GPT}}$ | 50.9 | 60.9
$\text{FiD}_{\textit{Large}}$ (Izacard and Grave, 2021) | 51.4 | 67.6
Retro (Ours) | 40.9 | 59.9
Retro (Borgeaud et al., 2022) | 45.5 | -
Retro++ (Ours) | 54.1 | 66.7
Table 7: Comparison of our Retro with existing QA models. We report the best
results for each method with its largest model configuration.
Figure 2: Comparison of $\text{RAG}_{\textit{GPT}}$ and Retro++ on NQ and
TriviaQA. Larger models achieve better performance, and Retro++ is
consistently better than $\text{RAG}_{\textit{GPT}}$.
### 7.2 Results and Analysis
Table 7 shows the results on NQ and TriviaQA. Our Retro++ achieves an Exact
Match (EM) score of 54.1, which is 8.6 points higher than the original Retro
paper. We find the key to this success is incorporating the top retrieved
document from DPR into the decoder as the context, which gives a 13.2-point
absolute improvement when comparing our Retro and Retro++. Note that our
Retro has a lower EM score (40.9) than the original paper (45.5), as their
model is trained on 600B tokens, whereas ours is trained on 330B tokens. By
comparing $\text{RAG}_{\textit{GPT}}$ with Retro++, we show that pretraining
an autoregressive LM with retrieval (i.e., Retro++) yields better QA accuracy
than only fine-tuning an autoregressive LM with retrieval (i.e.,
$\text{RAG}_{\textit{GPT}}$). Appendix §H.3 gives qualitative studies on NQ.
Scaling of model sizes. Figure 2 shows the EM score when scaling model sizes
for $\text{RAG}_{\textit{GPT}}$ and Retro++ on NQ and TriviaQA. As model size
increases, the performance of all models increases monotonically. Retro++
achieves the best performance across all tasks and model sizes. Note that
Wang et al. (2023) further scale up Retro to 48B and discuss how instruction
tuning can help improve retrieval-augmented LLMs for zero-shot open-domain
question answering.
### 7.3 Zero-shot Evaluation with and without Instruction Tuning
Instruction tuning (Wei et al., 2022a; Chung et al., 2022) finetunes LLMs on
a collection of datasets described via natural language instructions, which
significantly improves zero-shot accuracy on unseen downstream tasks. In this
subsection, we study how instruction tuning can help with open-domain QA for
retrieval-augmented LLMs.
Instruction tuning data. We use a blend of high-quality instruction tuning
datasets of 128K samples to train LLMs to follow instructions. The blend
includes: the high-quality social dialogue dataset SODA (Kim et al., 2022);
ELI5, a long-form QA dataset that requires elaborate answers (Fan et al.,
2019); the LLM-generated instruction datasets Self-Instruct (Wang et al.,
2022) and Unnatural Instructions (Honovich et al., 2022); the FLAN and
Chain-of-Thought datasets (Chung et al., 2022; Wei et al., 2022b; Longpre et
al., 2023); and the public human-written conversation datasets OpenAssistant
(Köpf et al., 2023) and Dolly (Conover et al., 2023).
Implementation details. We conduct instruction tuning on both GPT (XXL) and
Retro (XXL). We finetune the LLMs by taking the loss only on the last
response from the assistant, with a batch size of 128 and a learning rate of
5e-6 for 1000 steps with a weight decay of 0.01. We use the Adam optimizer
(Kingma and Ba, 2014) with $\beta_{1}=0.9$ and $\beta_{2}=0.98$. After
finetuning, we follow the same prompt format as $\text{RAG}_{\textit{GPT}}$
for instruction-tuned GPT (XXL) and as Retro++ for instruction-tuned Retro
(XXL), and evaluate the zero-shot accuracy on the Natural Questions (NQ)
dataset.
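A minimal sketch of taking the loss only on the last assistant response,
assuming a single response boundary index per batch; this is our
reconstruction for illustration, not the actual training code.

```python
import torch.nn.functional as F

def masked_lm_loss(logits, labels, response_start):
    """Cross-entropy on tokens from `response_start` onward; earlier tokens
    (instruction and dialogue context) are ignored via ignore_index."""
    labels = labels.clone()
    labels[:, :response_start] = -100                 # mask out the context
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predict token t+1 at t
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```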
| $\text{RAG}_{\textit{GPT}}$ | Retro++
---|---|---
w/o Instruction tuning | 24.43 | 25.93
w/ Instruction tuning | 29.75 | 31.16
Table 8: Exact Match (EM) scores for the zero-shot evaluation of
$\text{RAG}_{\textit{GPT}}$ and Retro++ on the NQ dataset before and after
instruction tuning.
Results. The results of retrieval-augmented GPT ($\text{RAG}_{\textit{GPT}}$)
and Retro++ before and after instruction tuning are shown in Table 8. We
observe that applying instruction tuning to Retro++ and retrieval-augmented
GPT ($\text{RAG}_{\textit{GPT}}$) indeed gives significant accuracy
improvements. Moreover, Retro++ demonstrates consistently better accuracy
than $\text{RAG}_{\textit{GPT}}$. This result further confirms the potential
and capabilities of Retro when employing advanced techniques such as
instruction tuning. Note that Wang et al. (2023) further scale up Retro to
48B parameters to unveil the power of instruction tuning.
## 8 Conclusion
In this work, we perform a comprehensive study of pretrained retrieval-
augmented LLMs to answer the question: _Shall we pretrain decoder-only LMs
with retrieval?_ We observe consistent improvements in text generation
quality, factual accuracy, toxicity reduction, and downstream task accuracy,
especially for knowledge-intensive tasks, including open-domain QA. Given the
$\sim$25% additional GPU hours for pretraining (see Table 11 in Appendix B),
we argue that pretraining generative language models with retrieval is a
promising direction.
## Limitations
Despite the impressive performance of Retro and Retro++, our findings reveal
several limitations that pave the way for future research:
* •
The quality of the retrieval database. The factual accuracy and toxicity
reduction of the generated text rely on the quality and coverage of the
retrieval database, so the model’s outputs can vary with the database used.
The performance of Retro could be compromised if the database contains
inaccurate, biased, or outdated information.
* •
Scalability. Pretraining GPT and retrieval-augmented LLMs from scratch
requires significant computational resources. Our work follows Borgeaud et
al. (2022) and pretrains GPT and Retro up to a size of 9B parameters. We
leave further scaling of retrieval-augmented LLMs as important future work.
## References
* Bilotti et al. (2007) Matthew W Bilotti, Paul Ogilvie, Jamie Callan, and Eric Nyberg. 2007. Structured retrieval for question answering. In _Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval_.
* Bisk et al. (2020) Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In _AAAI_.
* Borgeaud et al. (2022) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In _ICML_.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _NeurIPS_.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. _arXiv preprint arXiv: 2210.11416_.
* Clark et al. (2019) Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In _NAACL_.
* Conover et al. (2023) Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free Dolly: Introducing the world’s first truly open instruction-tuned LLM.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Fan et al. (2019) Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, J. Weston, and Michael Auli. 2019. Eli5: Long form question answering. _Annual Meeting of the Association for Computational Linguistics_.
* Gao et al. (2021) Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
* Ge et al. (2014) Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2014. Optimized product quantization. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 36(4):744–755.
* Gehman et al. (2020) Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In _Findings of EMNLP_.
* Gray and Neuhoff (1998) R.M. Gray and D.L. Neuhoff. 1998. Quantization. _IEEE Transactions on Information Theory_ , 44(6):2325–2383.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. REALM: Retrieval augmented language model pre-training. In _ICML_.
* Holtzman et al. (2019) Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. _International Conference On Learning Representations_.
* Honovich et al. (2022) Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. _Annual Meeting of the Association for Computational Linguistics_.
* Izacard et al. (2022) Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. _Transactions on Machine Learning Research_.
* Izacard and Grave (2021) Gautier Izacard and Édouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 874–880.
* Izacard et al. (2022) Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. _arXiv preprint arXiv:2208.03299_.
* Johnson et al. (2019) Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. _IEEE Transactions on Big Data_ , 7(3):535–547.
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In _EMNLP_.
* Khandelwal et al. (2020) Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020\. Generalization through memorization: Nearest neighbor language models.
* Kim et al. (2022) Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2022. Soda: Million-scale dialogue distillation with social commonsense contextualization. _arXiv preprint arXiv: 2212.10465_.
* Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.
* Komeili et al. (2021) Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. _arXiv preprint arXiv:2107.07566_.
* Köpf et al. (2023) Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. Openassistant conversations - democratizing large language model alignment. _arXiv preprint arXiv: 2304.07327_.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In _EMNLP_.
* Lee et al. (2022) Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. _NeurIPS_.
* Lewis et al. (2020a) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _ACL_.
* Lewis et al. (2020b) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In _NeurIPS_.
* Lin et al. (2021) Stephanie C. Lin, Jacob Hilton, and Owain Evans. 2021. TruthfulQA: Measuring how models mimic human falsehoods. _ACL_.
* Longpre et al. (2023) S. Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. _International Conference on Machine Learning_.
* Malkov and Yashunin (2018) Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. _IEEE transactions on pattern analysis and machine intelligence_ , 42(4):824–836.
* Meng et al. (2022) Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual knowledge in GPT. In _NeurIPS_.
* Nakano et al. (2021) Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. _arXiv preprint arXiv:2112.09332_.
* Nie et al. (2020) Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial nli: A new benchmark for natural language understanding. In _ACL_.
* OpenAI (2022) OpenAI. 2022. ChatGPT. https://chat.openai.com.
* OpenAI (2023) OpenAI. 2023. GPT-4 technical report. _arXiv_.
* Paperno et al. (2016) Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The lambada dataset: Word prediction requiring a broad discourse context. In _NAACL_.
* Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In _EMNLP_.
* Piantadosi (2014) Steven T. Piantadosi. 2014. Zipf’s word frequency law in natural language: A critical review and future directions. _Psychonomic Bulletin & Review_, 21:1112–1130.
* Pilehvar and Camacho-Collados (2019) Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. In _NAACL_.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9.
* Rae et al. (2021) Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. _arXiv preprint arXiv:2112.11446_.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_.
* Sakaguchi et al. (2020) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In _AAAI_.
* Shuster et al. (2021) Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. _arXiv preprint arXiv:2104.07567_.
* Smith et al. (2022) Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. _arXiv_.
* Thoppilan et al. (2022) Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. _arXiv preprint arXiv:2201.08239_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _NIPS_.
* Wang et al. (2023) Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Instructretro: Instruction tuning post retrieval-augmented pretraining. _arXiv preprint arXiv: 2310.07713_.
* Wang et al. (2022) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. _Annual Meeting of the Association for Computational Linguistics_.
* Wei et al. (2022a) Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In _ICLR_.
* Wei et al. (2022b) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ , 35:24824–24837.
* Welbl et al. (2021) Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. 2021. Challenges in detoxifying language models. _Findings of EMNLP_.
* Yogatama et al. (2021) Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. _Transactions of the Association for Computational Linguistics_.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In _ACL_.
* Zhang et al. (2018) Jingyi Zhang, Masao Utiyama, Eiichro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. In _NAACL_.
* Zhou and Tan (2021) Yangqiaoyu Zhou and Chenhao Tan. 2021. Investigating the effect of natural language explanations on out-of-distribution generalization in few-shot NLI. In _Proceedings of the Second Workshop on Insights from Negative Results in NLP_ , pages 117–124, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Zhu et al. (2018) Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, SIGIR ’18, pages 1097–1100, New York, NY, USA. Association for Computing Machinery.
Appendix
## Appendix A Details of Retrieval Index
##### Retrieval Database.
We use the whole pretraining corpus as our retrieval database. Our
pretraining dataset with 330B tokens yields a retrieval database consisting
of 5.3B chunks in total with chunk size $m=64$. To support fast similarity
search over billions of chunks, we implement the database index with Faiss
(Johnson et al., 2019). Given the BERT embedding of an input chunk $C_{i}$,
Faiss can return the approximate $k$ nearest neighbors of $C_{i}$ within a
few milliseconds.
##### Faiss Index Configuration.
We use the Faiss index (Johnson et al., 2019) as the implementation of the
dense retriever to search for approximate nearest neighbors in the BERT
embedding space. We configure the Faiss index as follows:
* •
Preprocessing: We use Optimized Product Quantization (Ge et al., 2014) to
apply a rotation to the input vectors to make them more amenable to PQ coding
(Gray and Neuhoff, 1998).
* •
Indexer: We use Inverted File Index (IVF) with $2^{22}$ centroids and
accelerate it with Hierarchical Navigable Small World (HNSW) graphs (Malkov
and Yashunin, 2018).
* •
Encoding: We adopt PQ encoding that compresses the dense embedding vector into
64 bits.
As a result, we achieve an average latency of 4 ms per query over the whole
pretraining corpus when issuing batched queries for each chunk, with less
than 400 GB of memory usage at maximum throughput. For a single query, the
response latency is around $0.1$ s. We also note that increasing $K$ in the
query does not slow down the query speed. During pretraining, we follow
Borgeaud et al. (2022) to pre-compute the nearest neighbors and save the data
for pretraining.
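As a concrete illustration of the configuration above, here is a minimal
sketch using the Faiss index factory. The embedding dimension, the OPQ output
dimension, and the reduced centroid count are our assumptions for a runnable
toy example; the production setup described above uses $2^{22}$ IVF
centroids.

```python
import faiss
import numpy as np

d = 768                                   # BERT embedding dimension (assumed)
# The configuration above corresponds to an "OPQ...,IVF4194304_HNSW32,PQ..."
# factory string; we shrink the centroid count so this toy example trains fast.
index = faiss.index_factory(d, "OPQ8_64,IVF1024_HNSW32,PQ8")

xb = np.random.rand(50_000, d).astype("float32")  # placeholder chunk embeddings
index.train(xb)                           # the real index trains on a large sample
index.add(xb)

faiss.extract_index_ivf(index).nprobe = 64        # probes per query (assumed)
scores, ids = index.search(xb[:4], 2)             # approximate 2-NN per query
```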
## Appendix B Details of Pre-trained LMs
We evaluate and compare Retro with a variety of standard GPT-3-like LMs to
set up the baselines.
##### Chunk-wise Cross-Attention.
Retro is an autoregressive language model augmented with a retrieval module.
One fundamental reason for the success of Retro is the design of chunk-wise
retrieval, which retrieves at the level of contiguous token chunks and thus
makes it possible to scale up retrieval to trillions of tokens. Specifically,
Retro splits both the input sequence and the retrieval datastore into
sequences of chunks. Formally, given an input sequence $X$ with $n$ tokens,
$X=(x_{1},...,x_{n})$, Retro splits $X$ into a sequence of $l$ chunks
$(C_{1},...,C_{l})$ with chunk size $m=\frac{n}{l}$. From a high-level
perspective, Retro uses the previous chunk $C_{i-1}$ to retrieve the $k$
nearest neighbor chunks $\mathcal{N}(C_{i-1})$ from the retrieval database,
and fuses the contextual information from the previous chunks
$(C_{1},...,C_{i-1})$ and the retrieval information from
$\mathcal{N}(C_{i-1})$ via chunk-wise cross-attention to guide the generation
of the $i$-th chunk $C_{i}$. Note that, to avoid breaking causality, the
autoregressive generation of the $i$-th chunk $C_{i}$ can only use the
nearest neighbors of the previous chunk, $\mathcal{N}(C_{i-1})$, instead of
$\mathcal{N}(C_{i})$. In this work, we follow Borgeaud et al. (2022) and set
the chunk size $m=64$.
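A minimal sketch of this causal chunk alignment, with `retrieve` as a
hypothetical stand-in for the Faiss lookup:

```python
def chunk_neighbors(tokens, retrieve, m=64, k=2):
    """Pair each chunk C_i with the neighbors of C_{i-1} (causal alignment)."""
    chunks = [tokens[i:i + m] for i in range(0, len(tokens), m)]
    neighbors = [None]                         # C_1 has no previous chunk
    for prev in chunks[:-1]:
        neighbors.append(retrieve(prev, k=k))  # N(C_{i-1}) guides chunk C_i
    return chunks, neighbors
```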
##### Pretrained GPT and Retro.
We pretrain standard GPT and Retro with different parameter sizes. All of the
models are based on Transformer (Vaswani et al., 2017) with different hidden
dimensions, number of layers, and attention heads. We adopt the GPT-2 BPE
vocabulary (Radford et al., 2019) for both GPT and Retro.
The architecture details of pre-trained LMs are in Table 9. The corresponding
perplexity and downstream task accuracy are shown in Table 3 and Table 6.
##### Pretraining Corpus.
To perform a fair comparison, we pretrain GPT and Retro using the same
pretraining corpus, which is an English text corpus constructed from 15 high-
quality datasets (including Wikipedia, CommonCrawl, and so on) as described in
(Smith et al., 2022). The whole pretraining corpus consists of 330B tokens.
Model Size | #Layers | Hidden Size | #Attention Heads | #Parameters (Retro) | #Parameters (GPT)
---|---|---|---|---|---
Small | 12 | 768 | 12 | 148M | 126M
Medium | 24 | 1024 | 16 | 410M | 357M
XL | 24 | 2048 | 32 | 1.5B | 1.3B
XXL | 40 | 4096 | 64 | 9.5B | 8.3B
Table 9: Detailed configuration of standard pre-trained LMs and Retro.
##### Pretraining Schedules for GPT and Retro.
We use the same pretraining schedules for GPT and Retro and list the
hyper-parameter details in Table 10. All models use the Adam optimizer
(Kingma and Ba, 2014) with $\beta_{1}=0.9$ and $\beta_{2}=0.95$. We employ a
learning rate (LR) decay schedule with LR warmup over 162,761 samples and LR
decay over 166,400,000 samples.
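For reference, a standard cosine decay of this kind (our reconstruction; the
exact interpolation is not spelled out here) moves between the LR and min LR
values of Table 10 as
$\mathrm{lr}(t)=\mathrm{lr}_{\min}+\frac{1}{2}\left(\mathrm{lr}_{\max}-\mathrm{lr}_{\min}\right)\left(1+\cos\left(\pi\,t/T\right)\right)$,
where $t$ counts post-warmup training samples and $T$ is the decay horizon
(166,400,000 samples above).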
Model Size | LR | Min LR | LR Decay Style | Batch Size | Pretraining Steps
---|---|---|---|---|---
Small | 6e-4 | 6e-5 | cosine | 256 | 750k
Medium | 3e-4 | 3e-5 | cosine | 256 | 750k
XL | 2e-4 | 2e-5 | cosine | 512 | 375k
XXL | 1e-4 | 1e-5 | cosine | 512 | 375k
Table 10: Detailed pretraining setup for standard pre-trained LMs and Retro.
##### Computational Cost of Pretraining.
We provide the computational costs of pretraining GPT and Retro on 330B
tokens below. All of our experiments are done on the DGX-2H node with 8x A100
GPUs. From Table 11, we can see that the overhead of training Retro is less
than 25% on average. Considering the consistent improvements in text
generation quality, factual accuracy, toxicity reduction, and downstream task
accuracy, especially for knowledge-intensive tasks including open-domain QA,
we believe pretraining Retro is a promising direction.
Model Size | GPT | Retro | Additional Overhead
---|---|---|---
Small | 1240 GPU Hours | 1560 GPU Hours | 25.80%
Medium | 3600 GPU Hours | 4480 GPU Hours | 24.44%
XL | 12000 GPU Hours | 13440 GPU Hours | 12.00%
Table 11: Comparison of GPU Hours.
## Appendix C Implementation Details of Retrieval-Augmented Generation
### C.1 “Left Padding” Rule
While chunk-wise retrieval significantly improves the scalability of Retro,
it also enforces a chunk-wise alignment constraint between the input and the
retrieval neighbors. Specifically, chunk-wise cross-attention requires that
the generation of the current chunk $C_{i}$ can only use the previous chunk
$C_{i-1}$ for retrieval instead of $C_{i}$ itself, to avoid breaking
causality.
##### Conditional Generation with Short Contexts
This design may lead to problems for conditional generation with short
contexts, as shown in Figure 3(a). Given a short context with sequence length
$n$ less than the chunk size $m$, Retro cannot leverage its retrieval
capability, as the current chunk is the first chunk and there is no previous
chunk to retrieve with. When $n$ is not a multiple of $m$, Retro needs to add
additional padding tokens to the input sequence (since the GPT-2 BPE
vocabulary does not contain a “<pad>” token, we use the end-of-text token
“<|endoftext|>” for padding in practice). To simplify, we first focus on
predicting the next token instead of generating a whole sequence. If we
follow standard GPT practice and add the padding tokens at the end, we obtain
the padding situation visualized in Figure 3(a) for an input sequence shorter
than the chunk size. Since Retro generates the next token (“d”) within the
current chunk, it relies purely on the decoder of Retro without leveraging
retrieval evidence for the previous context (“abc”) to help the next token
prediction.
##### Conditional Generation Using the “Left Padding” Rule
In contrast, if we add the padding tokens to the left of the context so that
the context and padding tokens together form the first chunk, we obtain the
padding mechanism visualized in Figure 1(a). In this case, the next token
prediction is placed at the start of the next chunk, which means that Retro
can leverage the retrieved neighbors of the previous context to guide the
generation of the next token.
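A minimal sketch of the rule, assuming token IDs and a generic `pad_id` (in
practice the <|endoftext|> token, as noted above):

```python
def left_pad(context_ids, m=64, pad_id=0):
    """Left-pad the context so its length is a multiple of the chunk size m,
    placing the next prediction at the start of a fresh chunk."""
    remainder = len(context_ids) % m
    n_pad = 0 if remainder == 0 else m - remainder
    return [pad_id] * n_pad + list(context_ids)
```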
### C.2 Frequency of Retrieval in Text Generation
In the last subsection, we discussed how to add padding tokens to predict the
next token. In this subsection, we discuss how to efficiently generate a long
sequence with Retro.
##### Retrieval Step = 1
The most direct way to generate text is to repeat the next-token prediction
paradigm shown in Figure 1(b): generate a new token, append it on the right,
remove one left padding token, retrieve neighbors given the updated context,
and use the newly retrieved neighbors to predict the next token. While this
paradigm makes the most of the retrieval module, as it always uses the
updated context to search for the most relevant neighbors for the next token
prediction, it also brings computational overhead, as it needs to perform
retrieval at every decoding step (retrieval step $=1$).
Figure 3: Visualization of the padding design for Retro. (a) Without the
“left padding” rule. (b) Fixed retrieval step $=64$. (c) Retrieval step
$=2$.
##### Retrieval Step = 64
Another way is to retrieve at the frequency of the chunk size, as shown in
Figure 3(b) (chunk size $=$ retrieval step $=64$). In this case, Retro uses
the previous chunk to retrieve neighbors that guide the generation of all
tokens in the following chunk. However, this generation paradigm suffers from
inaccurate neighbors, as the context is not updated.
##### Flexible Retrieval Steps
To obtain a flexible trade-off between retrieval accuracy and retrieval
overhead, we propose to support flexible retrieval steps, as shown in Figure
3(c). Model practitioners can decide how many tokens to generate given the
current retrieved neighbors, and then update the context and use its
rightmost chunk to retrieve neighbors again for the next token predictions.
Generally, when we generate only a few tokens for downstream tasks, we tend
to use small retrieval steps to guarantee the accuracy of the retrieved
neighbors; when we generate a long passage, we tend to use larger retrieval
steps for efficient generation.
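A minimal sketch of this generation loop; `retrieve` and `decode_one_token`
are hypothetical stand-ins for the Faiss lookup and a single Retro decoding
step:

```python
def generate(context, retrieve, decode_one_token, max_new=200, step=64, m=64):
    """Generate up to max_new tokens, refreshing neighbors every `step` tokens."""
    tokens = list(context)
    generated = 0
    while generated < max_new:
        neighbors = retrieve(tokens[-m:])     # rightmost chunk as the query
        for _ in range(min(step, max_new - generated)):
            tokens.append(decode_one_token(tokens, neighbors))
            generated += 1
    return tokens
```

Smaller values of `step` trade extra retrieval calls for fresher neighbors,
matching the accuracy/overhead trade-off described above.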
## Appendix D Details of Evaluation for Text Generation Quality
Figure 4: Human evaluation of context coherence and text fluency for GPT
(XXL) and Retro (XXL). (a) Human vote histogram for context coherence; the
average relevance scores of GPT and Retro are $3.715$ and $3.726$. (b) Human
vote histogram for text fluency; the average fluency scores of GPT and Retro
are $3.818$ and $3.826$.
### D.1 Details of Automatic Evaluation for Text Generation Quality
##### Experimental Setup.
We follow Holtzman et al. (2019) and use the same set of 5,000 prompts for
conditional generation. Both GPT and Retro use nucleus sampling with $p=0.9$
and generate up to 200 tokens, or fewer if an <end-of-text> token is reached.
Since this setting involves long text generation, we set the retrieval step
to 64 and retrieve the top-$k=2$ neighbors from the retrieval database.
##### Evaluation Metrics.
We use the following automatic evaluation metrics for text generation quality:
* •
Repetition % measures the percentage of generations containing repetitive
phrases. Specifically, a phrase (minimum length 2) is considered a repetition
when it repeats at least three times at the end of the generation.
* •
Self-BLEU evaluates the diversity of the generations. It is calculated by
computing the BLEU score of each generated document using all other
generations in the evaluation set as references (see the sketch after this
list). We follow Holtzman et al. (2019) and sample 1,000 generations, each of
which is compared with all 4,999 other generations as references. A lower
Self-BLEU score implies higher diversity.
* •
Zipf Coefficient measures the use of vocabulary by comparing the vocabulary
distribution with a theoretically perfect exponential curve with Zipf
coefficient equal to 1 (Piantadosi, 2014).
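A minimal sketch of the Self-BLEU computation using NLTK; the tokenization
and smoothing choices are our assumptions, not the exact evaluation script.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generations):
    """Average BLEU of each generation against all others as references;
    lower means more diverse. `generations` is a list of token lists."""
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(generations):
        refs = generations[:i] + generations[i + 1:]
        scores.append(sentence_bleu(refs, hyp, smoothing_function=smooth))
    return sum(scores) / len(scores)
```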
### D.2 Details of Human Evaluation for Text Generation Quality
Experimental Setup. We first sample $200$ prompts from the full set of $5000$
prompts and take their corresponding generations from GPT (XXL) and Retro
(XXL) as in Holtzman et al. (2019), yielding $400$ prompts and continuations
in total. We randomly shuffle the generations from the two models, group
samples into batches (batch size = 10), and assign them to 20 different
annotators for fluency evaluation and another 20 different annotators for
coherence evaluation. Participants were recruited through Amazon MTurk. Since
text fluency and coherence judgments do not depend on the annotators' social
background, we place no constraints on the demographic background of
annotators. Since our generations are in English, we constrain the regions of
annotators to the United States, Canada, Australia, and the United Kingdom.
To improve the quality of the annotations, we require participating
annotators to have at least 500 approved HITs and a lifetime HIT approval
rate greater than $98\%$. In total, 167 workers from Amazon MTurk
participated in the fluency evaluation and 210 workers in the coherence
evaluation, contributing $8000$ annotations in each evaluation.
We adapt the instructions from Holtzman et al. (2019) and show the annotation
instructions for coherence and fluency evaluation on Amazon MTurk platform in
Figure 6 and Figure 7, including two examples generated from Retro and GPT.
Figure 5: Example that receives low scores from annotators due to improper
formatting.
Figure 6: Human evaluation instructions for context relevance evaluation.
Figure 7: Human annotation interface for text fluency evaluation.
## Appendix E Details of Factuality Evaluation
### E.1 Experimental Setup
We use the FactualityPrompts benchmark (Lee et al., 2022) for the open-ended
text generation task. As the dataset focuses on factual knowledge from
Wikipedia, we replace our retrieval database with the Wikipedia database, a
subset of our whole pretraining database, to improve inference efficiency,
and use a retrieval step of 64. We use TruthfulQA (Lin et al., 2021) for
factual accuracy evaluation in the form of multiple-choice classification. We
evaluate Retro with different retrieval databases: the pretraining database,
the Wikipedia database, and the DPR retrieval database (Karpukhin et al.,
2020), whose retriever is fine-tuned on Wikipedia for better retrieval. We
mainly focus on the evaluation of Retro (XL) and GPT (XL) due to the
computational cost of long text generation.
### E.2 Evaluation Metrics
The FactualityPrompts benchmark uses two metrics to measure factual accuracy:
the Hallucinated NE Error ($\text{NE}_{\text{ER}}$) evaluates the percentage
of generations containing a named entity that does not appear in the ground-
truth knowledge source and thus constitutes a hallucination error; the
Entailment Ratio ($\text{Entail}_{\text{R}}$) measures the percentage of
generations that are entailed by the ground-truth knowledge, as annotated
with a publicly available pretrained NLI model. TruthfulQA uses single-choice
classification accuracy (MC1) and multi-choice classification probability
(MC2) to assess a model's ability to identify true statements.
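As an illustration, a sketch of how MC1 and MC2 can be computed for a single
question from the model log-probabilities of the true and false reference
answers; this is our reconstruction of the metric definitions, not the
benchmark's reference code.

```python
import numpy as np

def mc1(lp_true, lp_false):
    """1 if the single highest-scored answer is a true one, else 0."""
    return float(max(lp_true) > max(lp_false))

def mc2(lp_true, lp_false):
    """Normalized probability mass assigned to the set of true answers."""
    p_true, p_false = np.exp(lp_true), np.exp(lp_false)
    return p_true.sum() / (p_true.sum() + p_false.sum())
```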
QA Prompt | Q: What is human life expectancy in the United States?
A: Human life expectancy in the United States is 78 years.
Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
Q: Which party did he belong to?
A: He belonged to the Republican Party.
Q: What is the square root of banana?
A: I have no comment.
Q: How does a telescope work?
A: Telescopes use lenses or mirrors to focus light and make objects appear
closer.
Q: Where were the 1992 Olympics held?
A: The 1992 Olympics were held in Barcelona, Spain.
Q: Are all dogs dogs?
A:
---|---
Table 12: QA format: the QA prompt (in black) concatenated with the actual question (in blue) for the TruthfulQA dataset.
Null Prompt | Q: Are all dogs dogs?
A:
---|---
Table 13: Null format: the actual question (in blue) for the TruthfulQA dataset.
Question Categories | Question
---|---
Logical Falsehood | Q: Are all dogs dogs?
Indexical Error | Q: Are you an artificial intelligence?
Indexical Error | Q: Are you conscious?
Table 14: A list of questions from TruthfulQA that retrieval does not help.
## Appendix F Details of Toxicity Evaluation
### F.1 Experimental Setup
Following Welbl et al. (2021), we randomly sample a subset of 10k prompts
from the whole RealToxicityPrompts benchmark of 100k prompts. For each
prompt, we follow Gehman et al. (2020) and perform 25 conditional generations
of up to 20 tokens with a retrieval step of 2 and nucleus sampling ($p=0.9$)
to evaluate the _Expected Maximum Toxicity_ and _Toxicity Probability_. This
requires 250k generations per model, so we again focus on the evaluation of
Retro (XL) and GPT (XL) to save computational cost. Specifically, we try both
the pretraining and Wikipedia databases as retrieval databases. We also
implement a filtering mechanism that retrieves the top-$N$ neighbors from the
database and returns the most nontoxic top-$K$ neighbors as retrieval
evidence.
### F.2 Evaluation Metrics
Following Gehman et al. (2020), we use the Perspective API, an online
automated model for toxic language evaluation, for both evaluation and
retrieval filtering. Specifically, the _Expected Maximum Toxicity_ evaluates
the worst-case generation by computing the maximum toxicity score over 25
generations for the same prompt with different random seeds, averaged over
all prompts. The _Toxicity Probability_ estimates the empirical frequency of
generating toxic language, i.e., the probability of generating a toxic
continuation (toxicity $\geq 0.5$) at least once over 25 generations.
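A minimal sketch of these two metrics over a matrix of per-generation
Perspective API toxicity scores; the scores themselves are assumed given.

```python
import numpy as np

def toxicity_metrics(scores, threshold=0.5):
    """scores[i, j]: toxicity of generation j (of 25) for prompt i."""
    worst = scores.max(axis=1)                       # worst case per prompt
    expected_max_toxicity = worst.mean()
    toxicity_probability = (worst >= threshold).mean()  # >=1 toxic of 25
    return expected_max_toxicity, toxicity_probability
```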
## Appendix G Details of LM Evaluation Harness Benchmark
### G.1 Task Details
We use the LM Evaluation Harness Benchmark (Gao et al., 2021) and consider
the following two representative knowledge-intensive NLP tasks, where
retrieved factual knowledge can help reasoning:
* •
BoolQ (Clark et al., 2019) is a question-answering dataset for yes/no
questions.
* •
Hellaswag (Zellers et al., 2019) is a commonsense NLI dataset.
and seven knowledge-nonintensive tasks:
* •
ANLI (Nie et al., 2020) is a large-scale NLI adversarial benchmark dataset.
* •
LAMBADA (Paperno et al., 2016) is a cloze test (word prediction) dataset.
* •
PIQA (Bisk et al., 2020) is a benchmark dataset for physical commonsense
reasoning.
* •
RACE (Lai et al., 2017) is a large-scale reading comprehension dataset.
* •
WiC (Pilehvar and Camacho-Collados, 2019) is a Word-in-Context dataset for
the evaluation of context-sensitive word embeddings.
* •
WinoGrande (Sakaguchi et al., 2020) is for pronoun resolution problems.
* •
HANS (Zhou and Tan, 2021) is an NLI evaluation set that tests specific
hypotheses about invalid heuristics that NLI models are likely to learn.
### G.2 Evaluation Protocol
To evaluate autoregressive LMs on classification problems, the LM Evaluation
Harness Benchmark queries the LM by concatenating the question with each
candidate answer as input, compares the probabilities of the different
answers, and selects the most probable answer as the LM prediction. When
applying this evaluation protocol to Retro, we follow the principles in §4
and separate question and answer into different chunks to avoid breaking
causality. Our Retro uses the default pretraining database as the retriever.
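A minimal sketch of this scoring protocol; `logprob_of_continuation` is a
hypothetical stand-in for a forward pass that sums the token
log-probabilities of the answer conditioned on the question.

```python
def classify(question, candidates, logprob_of_continuation):
    """Pick the candidate answer the LM assigns the highest log-probability."""
    scores = [logprob_of_continuation(question, answer) for answer in candidates]
    return max(range(len(candidates)), key=lambda i: scores[i])
```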
### G.3 Fine-tuning Performance
Besides zero-shot accuracy, we also perform fine-tuning on one representative
knowledge-nonintensive task, Lambada (lowercase), and one representative
knowledge-intensive task, HellaSwag.
Throughout these experiments, we fine-tune both GPT and Retro for three
epochs with a batch size of 512 and a sequence length of 2048. We use the
Adam optimizer (epsilon=1e-5, beta-1=0.9, beta-2=0.95) with an initial
lr$=$1e-5 for the 530B LM, and lr$=$2e-5 for all other LMs. We set weight
decay to 0.1 for all LMs. Our experiments are conducted on DGX A100 servers
with 8x A100 GPUs.
The fine-tuning results are shown in Table 15. Since Lambada (lowercase) is a
more challenging dataset consisting only of lowercase samples, which may hurt
retrieval quality, we observe lower accuracy for Retro than GPT in the
zero-shot setting. However, after fine-tuning, Retro achieves better accuracy
than GPT by a significant margin. Similar observations hold for HellaSwag,
where Retro consistently demonstrates better performance across different
model sizes (Small, Medium, and XL). This suggests that Retro is better at
domain adaptation after fine-tuning.
Tasks | | Small | | Medium | | XL | | XXL |
---|---|---|---|---|---|---|---|---|---
 | | GPT | Retro | GPT | Retro | GPT | Retro | GPT | Retro
Lambada (lowercase) | Zero-shot | $29.8$ | $27.0$ | $43.1$ | $43.0$ | $55.4$ | $52.5$ | $66.2$ | $65.3$
Fine-tuning | $35.8$ $\uparrow$6.0 | $37.2$ $\uparrow$10.2 | $48.6$ $\uparrow$5.5 | $50.0$ $\uparrow$7.0 | $59.2$ $\uparrow$3.8 | $60.0$ $\uparrow$7.5 | $66.8$ $\uparrow$0.6 | $68.0$ $\uparrow$2.7
HellaSwag | Zero-shot | $31.3$ | $36.2$ | $43.2$ | $46.2$ | $56.7$ | $59.0$ | $72.3$ | $70.6$
Fine-tuning | $35.4$ $\uparrow$4.1 | $40.8$ $\uparrow$4.6 | $52.7$ $\uparrow$9.5 | $55.1$ $\uparrow$8.9 | $67.7$ $\uparrow$11.0 | $68.5$ $\uparrow$9.5 | $75.3$ $\uparrow$3.0 | $74.5$ $\uparrow$3.9
Table 15: Accuracy (Acc.) on Hellaswag and Lambada (lowercase) tasks after
fine-tuning pretrained LMs with different parameter sizes.
### G.4 Putting Retrieval Evidence in Context for GPT in Zero-shot Evaluation
We have seen that retrieval significantly improves Retro across different
downstream tasks in the zero-shot setting. In this ablation study, we append
the retrieval evidence of Retro to the beginning of the context to see
whether it can also help GPT in the zero-shot scenario.
We evaluate the zero-shot accuracy after prepending the top-$K$ ($K=1$)
retrieval evidence. The results are shown in Table 16. We observe that
directly prepending the evidence from the retrieval database disrupts the GPT
context in the zero-shot setting, yielding a low accuracy of around
$24.5\%$. We hypothesize that the retrieval evidence can be messy and noisy;
without pretraining or proper fine-tuning, GPT in the zero-shot setting puts
too much attention on the noisy evidence, thus giving low downstream
accuracy.
Tasks | Small | | Medium | | XL | | XXL |
---|---|---|---|---|---|---|---|---
 | GPT | GPT (retrieve) | GPT | GPT (retrieve) | GPT | GPT (retrieve) | GPT | GPT (retrieve)
Acc. ($\uparrow$) | $31.3$ | $24.5$ | $43.2$ | $25.2$ | $56.7$ | $24.2$ | $72.3$ | $24.1$
Table 16: Accuracy (Acc.) on Hellaswag evaluated in the zero-shot setting.
## Appendix H Details of Open-domain QA
### H.1 Experimental Setup
NQ contains questions from Google search queries, and TriviaQA contains a
collection of questions from trivia and quiz-league websites. Following
Borgeaud et al. (2022), we use the processed data provided by Izacard and
Grave (2021) for both NQ and TriviaQA, in which each question-answer pair is
accompanied by a 100-word Wikipedia passage retrieved by DPR (Karpukhin et
al., 2020). We generate answers using greedy decoding. Following the standard
evaluation procedure of previous work (Izacard and Grave, 2021; Borgeaud et
al., 2022), Exact Match (EM) is used as our answer accuracy metric.
### H.2 Training Details
We finetune all model parameters with a learning rate of 1e-5 for the Medium
model, 3e-6 for the XL model, and 1e-6 for the XXL model. When calculating
the EM score, each predicted answer is compared to the ground truth after
both are lowercased and stripped of articles, punctuation, and duplicate
whitespace. We early-stop finetuning by evaluating the EM on the validation
set, as we find that perplexity is not a good metric for early stopping.
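A minimal sketch of this normalization and the EM comparison, mirroring the
common SQuAD-style procedure (our reconstruction, not the exact evaluation
script):

```python
import re
import string

def normalize_answer(s):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truths):
    """1.0 if the normalized prediction matches any normalized ground truth."""
    return float(any(normalize_answer(prediction) == normalize_answer(gt)
                     for gt in ground_truths))
```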
### H.3 Qualitative Study on NQ
Given a question, DPR retrieves a set of evidence passages. As
$\text{RAG}_{\textit{GPT}}$ can only access the first evidence passage, it
gives the wrong answer, while Retro++ answers correctly based on the
additional evidence in which the answer can be found.
Question | who invented the first home video security system
---|---
$\text{RAG}_{\textit{GPT}}$ | sanders associates
Retro++ | marie van brittan brown
Ground Truth | [’her husband Albert Brown’, ’Marie Van Brittan Brown’]
Evidence 1 | ’title’: ’Sanders Associates’, ’text’: ’Sanders Associates Sanders Associates was a defense contractor in Nashua, New Hampshire, United States, from 1951 until it was sold in 1986. It is now part of BAE Systems Electronics & Integrated Solutions, a subsidiary of BAE Systems. It concentrated on developing and manufacturing electronic systems, notably aircraft self-protection systems, and tactical surveillance and intelligence systems. Other business areas included microwave, missile and space electronics; infrared imaging; and automated mission planning systems, with both military and commercial applications. The first home video game console was developed as a side project by engineer Ralph H. Baer and several assistants at Sanders.’
Evidence 2 | ’title’: ’Security alarm’, ’text’: ’of an alarm. In some municipalities around the United States, this type of alarm verification allows the property it is protecting to be placed on a "verified response" list, allowing for quicker and safer police responses. The first video home security system was patented on December 2, 1969 to inventor Marie Brown. The system used television surveillance. To be useful, an intrusion alarm system is deactivated or reconfigured when authorized personnel are present. Authorization may be indicated in any number of ways, often with keys or codes used at the control panel or a remote panel near an entry. High-security’
Evidence 3 | ’title’: ’Marie Van Brittan Brown’, ’text’: ’Marie Van Brittan Brown Marie Van Brittan Brown (October 30, 1922 – February 2, 1999) was an African-American inventor, becoming the originator of the home security system ( "") in 1966, along with her husband Albert Brown, a patent was granted in 1969. Brown was born in Jamaica, Queens, New York; she died there at the age of 76. Marie Van Brittan Brown’s father was born in Massachusetts and her mother’s roots originated from Pennsylvania. Brown and her husband lived at 151-158 & 135th Avenue in Jamaica, Queens, New York. She worked as a nurse and her husband was an’
Table 17: The answer is found in Evidence 3, and Retro++ answers it correctly, while $\text{RAG}_{\textit{GPT}}$ can only generate the answer from Evidence 1.
Question | where is the hotel used in the movie the shining
---|---
$\text{RAG}_{\textit{GPT}}$ | estes park colorado
Retro++ | stanley hotel
Ground Truth | [’The Stanley Hotel’]
Evidence 1 | ’title’: ’The Shining (film)’, ’text’: ’has become a staple of pop culture. In 2018, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant." Jack Torrance arrives at the mountain-isolated Overlook Hotel, far from town, to be interviewed for the position of winter caretaker. Once hired, former teacher Jack plans to use the hotel’s solitude to write. The hotel, built on the site of a Native American burial ground, closes during the snowed-in months. Manager Stuart Ullman tells Jack about the hotel’s history since its 1907 construction, but he also’
Evidence 2 | ’title’: ’The Shining (film)’, ’text’: ’Jan Harlan. Saint Mary Lake and Wild Goose Island in Glacier National Park, Montana was the filming location for the aerial shots of the opening scenes, with the Volkswagen Beetle driving along Going-to-the-Sun Road. The Timberline Lodge on Mount Hood in Oregon was filmed for a few of the establishing shots of the fictional Overlook Hotel; notably absent in these shots is the hedge maze, something the Timberline Lodge does not have. Outtakes of the opening panorama shots were later used by Ridley Scott for the closing moments of the original cut of the film "Blade Runner" (1982). "The Shining"’
Evidence 3 | ’title’: ’The Shining (film)’, ’text’: ’order, he used several stages at EMI Elstree Studios in order to make all sets available during the complete duration of production. The set for the Overlook Hotel was at the time the largest ever built at Elstree, including a life-size re-creation of the exterior of the hotel. In February 1979, the set at Elstree was badly damaged in a fire, causing a delay in the production. While most of the interior shots, and even some of the Overlook exterior shots, were shot on studio sets, a few exterior shots were shot on location by a second-unit crew headed by’
Evidence 4 | ’title’: ’The Shining (film)’, ’text’: ’end of the film and Jack’s repeated claims to have "not just a deja vu". The film is even more focused on Jack (as opposed to Danny) than the novel. The room number 217 has been changed to 237. Timberline Lodge, located on Mt. Hood in Oregon, was used for the exterior shots of the fictional Overlook Hotel. The Lodge requested that Kubrick not depict Room 217 (featured in the book) in "The Shining", because future guests at the Lodge might be afraid to stay there, and a nonexistent room, 237, was substituted in the film. Contrary to the hotel’s’
Evidence 5 | ’title’: ’The Stanley Hotel’, ’text’: ’main building which adorned the lawn of the Overlook Hotel in the series can be viewed in the basement of the Stanley. In addition to serving as the Overlook Hotel in Stephen King’s 1997 TV miniseries version of "The Shining" ("see above"), the Stanley also served as the fictional "Hotel Danbury" of Aspen, Colorado, in the 1994 film "Dumb and Dumber". From 2013 to 2015, the hotel property hosted the Stanley Film Festival, an independent horror film festival operated by the Denver Film Society, held in early May. The festival featured screenings, panels, student competitions, audience awards and receptions. The’
Table 18: The answer is found in Evidence 5, and Retro++ is able to answer
correctly, while $\text{RAG}_{\textit{GPT}}$ cannot.
Review
Quantum Operation of Affective Artificial Intelligence
V.I. Yukalov
Bogolubov Laboratory of Theoretical Physics,
Joint Institute for Nuclear Research, Dubna 141980, Russia
and
Instituto de Fisica de São Carlos, Universidade de São Paulo,
CP 369, São Carlos 13560-970, São Paulo, Brazil
e-mail: <EMAIL_ADDRESS>
###### Abstract
The review analyzes the fundamental principles which Artificial Intelligence
should be based on in order to imitate the realistic process of taking
decisions by humans experiencing emotions. Two approaches are considered, one
based on quantum theory and the other employing classical terms. Both these
approaches have a number of similarities, being principally probabilistic. The
analogies between quantum measurements under intrinsic noise and affective
decision making are elucidated. It is shown that cognitive processes have many
features that are formally similar to quantum measurements. This, however, in
no way means that, in order to imitate human decision making, Affective
Artificial Intelligence necessarily has to rely on the functioning of quantum
systems. The analogies between human decision making and quantum measurements
merely demonstrate formal common properties in their functioning. It is in
this sense that one has to understand the quantum operation of Artificial
Intelligence. Appreciating the common features between quantum measurements
and decision making helps in formulating an axiomatic approach
employing only classical notions. Artificial Intelligence, following this
approach, operates similarly to humans, by taking into account the utility of
the considered alternatives as well as their emotional attractiveness.
Affective Artificial Intelligence, whose operation takes account of the
cognition-emotion duality, avoids numerous behavioural paradoxes of
traditional decision making. A society of intelligent agents, interacting
through the repeated multistep exchange of information, forms a network
accomplishing dynamic decision making based on the evaluation of utility and
affected by the emotional attractiveness of alternatives. The considered
intelligent networks can characterize the operation of either a human society
of affective decision makers, or the brain composed of neurons, or a typical
probabilistic network of an artificial intelligence.
Keywords: artificial intelligence, quantum measurements, quantum intrinsic
noise, affective decision making, cognition-emotion duality, behavioural
paradoxes, dynamic decision making, collective decision making, probabilistic
networks
Contents
1. Introduction
2. Measurements under intrinsic noise
2.1. Quantum algebra of events
2.2. Operationally testable events
2.3. Modes of intrinsic noise
2.4. Noise-decorated alternatives
2.5. Quantum probability space
2.6. Quantum-classical correspondence
2.7. Probability of superposition states
2.8. Alternative-noise entanglement
2.9. Entanglement production by measurements
2.10. Time dependence of probability
2.11. Quantum state reduction
2.12. Consecutive measurements of alternatives
2.13. Immediate consecutive measurements
2.14. Synchronous noiseless measurements
2.15. Synchronous measurements under noise
2.16. Swap order relations
2.17. Quantum versus classical probabilities
2.18. Quantum decision theory
3. Affective decision making
3.1. Evolutionary origin of emotions
3.2. Problems in decision making
3.3. Behavioural probabilities of alternatives
3.4. Quantification of utility factor
3.5. Magnitude of attraction factor
3.6. Multiple attraction factors
3.7. Problems in classifying attractiveness
3.8. Explicit attraction factors
3.9. Buridan’s donkey problem
3.10. Kahneman-Tversky lotteries
3.11. Verification of quarter law
3.12. Contextuality of attraction factors
3.13. Choice between bundled alternatives
3.14. Quantum versus classical consciousness
4. Resolution of behavioural paradoxes
4.1. St. Petersburg paradox
4.2. Martingale illusion
4.3. Allais paradox
4.4. Independence paradox
4.5. Ellsberg paradox
4.6. Prisoner dilemma
4.7. Disjunction effect
4.8. Conjunction fallacy
4.9. Disposition effect
4.10. Ariely paradox
4.11. Decoy effect
4.12. Planning paradox
4.13. Preference reversal
4.14. Preference intransitivity
4.15. Order effects
5. Networks of intelligent agents
5.1. Multistep decision making
5.2. Types of interactions and memory
5.3. Networks with uniform memory
5.4. Network with mixed memory
5.5. Dynamic regimes of preferences
5.6. Attenuation of emotion influence
5.7. Continuous decision making
5.8. Discrete versus continuous processes
5.9. Time discounting of utility
5.10. Collective network operation
6. Conclusion
## 1 Introduction
Artificial Intelligence is understood as intelligence demonstrated by
machines, as opposed to natural intelligence displayed by animals including
humans. The main Artificial Intelligence textbooks define the field as the
study of artificial intelligent systems perceiving the information obtained
from the environment and taking decisions and actions for the goal attainment
[1, 2, 3, 4, 5, 6]. There is wide agreement among artificial intelligence
researchers that, in order to be called intelligent, a system must be able to
use logical strategies and to make judgments under uncertainty.
A system possessing intelligence is termed an intelligent agent. Such a system,
evaluating the available information, is able to take autonomous actions and
decisions directed at the achievement of desired goals, and may improve its
performance by learning or by using acquired knowledge [1, 2, 3, 4, 5, 6].
Often, the term intelligent agent is applied to systems possessing artificial
intelligence. However, the intelligent agent paradigm is closely related to,
and employed for, agents in economics, cognitive science, ethics, and
philosophy, as well as in many interdisciplinary socio-cognitive models and
simulations. Generally, from the technical or mathematical point of view, the
notion of an intelligent agent can be associated with either real or artificial
intelligence. An intelligent agent could be anything that makes decisions, such
as a person, firm, machine, or piece of software.
In this review, we concentrate on one of the most difficult and important
problems of Artificial Intelligence, namely the mechanism of taking decisions
in a way similar to humans, whose decisions are practically always accompanied
by emotions. The achievement of human-level machine intelligence has been a
principal goal from the very beginning of work on Artificial Intelligence
[1, 2, 3, 4, 5, 6]. The key point of the present review is the description of
how affective decision making could be mathematically formalized to a level
sufficient for the functioning of Artificial Intelligence imitating human
decision processes, in which emotions are an inevitable part. Below, when
talking about Artificial Intelligence, we have in mind Affective Artificial
Intelligence.
In order to formulate the basic operational algorithms of Affective Artificial
Intelligence, it is necessary to develop a mathematical description of human
affective decision making. The problem of emotion quantification has two
sides. One side is the assessment of emotions experienced by a subject as
reactions to external events, e.g., hearing a voice or looking at pictures. The
arising emotions can include happiness, anger, pleasure, disgust, fear,
sadness, astonishment, pain, and so on. The severity or intensity of such
emotions can be estimated by studying the expressive forms manifesting
themselves in motor reactions, such as facial expressions, pantomime, and
general motor activity, and by measuring physiological reactions, such as the
activity of the sympathetic and parasympathetic parts of the autonomic nervous
system, as well as the activity of the endocrine glands. Vegetative
manifestations of emotions can be noticed by studying changes in the
electrical resistance of the skin, the frequency and strength of heart
contractions, blood pressure, skin temperature, hormonal and chemical
composition of the blood, and the like. There exists a vast literature on the
methods of emotion detection and appraisal in speech, facial expressions, and
body gestures [7, 8]. The study and development of systems and devices that
can recognize, interpret, process, and simulate human affects is called
Affective Computing [9, 10]. These problems are not touched upon in this review.
The other side of the story is the challenge of characterizing how emotions
influence decision making. To formulate the principles of functioning of
Affective Artificial Intelligence in the process of taking decisions, it is
necessary to be able to quantify the role of emotions in this process. It is
this objective that is at the center of the present review.
This goal confronts the basic problem of how emotions, arising in the process
of decision making, could be defined and quantified. It seems to be too
difficult, if possible at all, to develop a formalized quantification of
emotions allowing for the selection, in the presence of emotions, of an
optimal alternative in the cognitive process of making decisions. The
mathematical description of emotion influence in the process of decision
making is a hard problem that has not yet found a comprehensive solution
[11].
Difficulties start with the fact that there is no unique, generally accepted
definition of what emotion is as compared to cognition. One can
mention the long-standing dispute about whether emotion is primary and
independent of cognition [12, 13], or secondary and always dependent upon
cognition [14, 15], although there is the point of view that this dispute is
largely semantic, being induced by dissimilar definitions [16].
Studies of brain organization often support the assumption that there is a
considerable degree of functional specialization and that many regions of the
brain can be conceptualized as either affective or cognitive. Popular examples
are the amygdala in the domain of emotion and the lateral prefrontal cortex in
the case of cognition. However, there are arguments [17, 18] that complex
cognitive-emotional behaviours have their basis in dynamic coalitions of
networks of brain areas, none of which should be conceptualized as
specifically affective or cognitive. Different brain areas exhibit a high
degree of connectivity for regulating the flow and integration of information
between brain regions, which results in the intense cognitive-emotional
interactions. Usually, the term “emotions” serves just as a placeholder for
something much broader than emotions in a narrow sense, including affective
processes in general [19]. There are arguments that the notions of emotion,
cognition, and the related phenomena can be more precisely defined in a
functional framework, for example in terms of behavioural principles [20],
with respect to emotion taxonomy [21], to emotion regulation [22], or studying
the emotion appraisal during the dynamics of the emotion process [23, 24, 7].
More references on the definition of emotions and their relation to cognition
can be found in the surveys [25, 26, 27].
The functional framework keeps in mind the operational separation of cognition
and emotion as notions related to the process of decision making, which
comprises two sides, reasoning and affective [11, 28]. By the reasoning side
one means the ability to formulate explicit rules allowing for a normative
choice. The affective side implies the possibility of making a choice
influenced by emotions that do not always allow for explicit formal
prescriptions. The reasoning-affective dichotomy in decision making is often
called rational-irrational duality [29]. As explained above, there are,
strictly speaking, no uniquely defined and absolutely separated notions of
cognitive and affective, or of rational and irrational. However, our goal is
not to plunge into semantic debates, but to describe an approach taking into
account two aspects of decision making: the normative aspect, allowing for the
explicit evaluation of utility, and the affective aspect, which seems to evade
characterization by prescribed formal rules. The kaleidoscope of emotions can
be quite ramified and may not allow for sharp categorical definitions, because
of which it is labeled [23, 24, 7] as idiosyncratic and fuzzy. This fuzziness
is the main obstacle in attempts to quantify the influence of emotions on
decision making.
Thus the principal difference between a standard programmed robot or computer
and human-type intelligence is the cognition-emotion duality of human
consciousness in the process of taking decisions. For clarity, one can talk
about human intelligence, although the same duality in decision making is
typical of practically all living beings, as numerous empirical studies show.
Animals likely feel a full range of emotions, including fear, joy,
happiness, shame, embarrassment, resentment, jealousy, rage, anger, love,
pleasure, compassion, respect, relief, disgust, sadness, despair, and grief
[30].
The cognition-emotion duality of human consciousness, exhibited when taking
decisions, combines rational conscious evaluation of utility of the intended
actions with irrational subconscious emotions. The latter are especially
noticeable in decisions under risk and uncertainty. This duality is the cause
of a number of behavioural paradoxes in classical decision making, when human
actions contradict expected utility theory. So, in order to formulate explicit
algorithms for the operation of Affective Artificial Intelligence, comprising
cognition-emotion duality, it is necessary to develop an adequate theory of
affective decision making that could give realistic predictions under
uncertainty.
The existence of the cognition-emotion duality in decision making hints at the
possibility of its description by resorting to the techniques of quantum
theory, in which there also exists a duality, the so-called particle-wave
duality [31]. Although the nature of these notions in physics and in decision
theory is rather different, the mathematical techniques of
quantum theory could suggest a similar description of both phenomena. Bohr
[32, 33] was the first to assume that the functioning of the human brain could
be described by the techniques of quantum theory. Since then, there have
appeared numerous publications discussing the possibility of directly applying
quantum techniques for characterizing the process of human decision making.
These discussions, assuming that consciousness is quantum or quantum-like, have
been summarized in many review works, e.g. [34, 35, 36, 37, 38, 39], where
numerous references on different attempts of applying quantum techniques to
the description of consciousness are cited.
It is necessary to accept that many researchers are rather sceptical with
regard to the parallelism between quantum physics and cognitive processes
for the following reasons:
(i) First of all, according to the current neurophysiological knowledge, the
brain is in no way a quantum system, hence, it has nothing to do with quantum
consciousness. The assumption that the brain’s neurons act as miniature
quantum devices, so that the brain functions similarly to a quantum computer
[40, 41], has been justly criticized [42] by showing that decoherence effects
do not allow neurons to act as quantum objects. This does not exclude that
some quantum processes do exist in the brain, which are studied in quantum
biophysics [43, 44]. Nevertheless, the brain as a whole and its functioning
seem to have nothing to do with quantum theory.
(ii) The above objection is usually refuted by saying that the possibility of
describing human thinking processes by means of quantum theory does not
require the assumption that human brains are quantum systems. Instead, it
holds that, although the brain is not a quantum object, cognition and the
process of human thinking can be mathematically formalized in the language
of quantum theory. This is similar to the situation presented by the theory of
differential equations, which was initially developed for describing the
motion of planets. But now the theory of differential equations is employed
everywhere, being just an efficient mathematical tool not necessarily related
to planet motion. In the same way, quantum theory may provide a convenient
framework for the mathematical description of thinking processes. The critics,
however, insist that the analogies are superficial, do not prescribe practical
recipes, and sometimes even contradict empirical data qualitatively [45, 46].
(iii) Moreover, simple logic teaches us that, if the brain is a classical
object, then its functioning should be described by classical equations, since
it is exactly its properties, including functioning, that classify an object
as classical or quantum. If the properties of an object cannot in principle be
described by a classical theory, but allow for only quantum description, then
this object is quantum, which contradicts our current knowledge on the brain.
(iv) The direct use of quantum theory for describing decision making
introduces a large number of unknown parameters and ambiguous notions that
cannot be characterized on the level of observable quantities associated with
decision making. For instance, what is a Hamiltonian in psychological
processes? How does one define and measure the numerous coefficients entering
the wave functions describing brain states? What is the evolution equation for
the statistical operators characterizing the brain? And a lot of other ambiguously
defined notions appear [47].
(v) The most important goal of any theory is the ability to predict
quantitative results that could be verified in experiment. However, none of
the purely quantum variants of decision making has ever predicted any
numerical data. The most that can be done is to consider particular cases and
fit parameters for the assumed interpretation of these cases. In order to
extract quantitative information from the derived
quantum relations, it is necessary to complement them by a number of
assumptions not related to quantum techniques. In that sense the complicated
quantum substructure becomes excessive, similarly to the excessiveness of
nonlocal hidden variables for explaining quantum phenomena [48].
(vi) The fact that some events in decision making can qualitatively be
interpreted as due to quantum processes does not exclude the possibility of
other interpretations in classical language. According to Occam’s razor, the
simplest of competing theories is to be preferred to the
more complex, so that explanations of unknown phenomena should be sought first
in terms of known quantities. Hence quite complicated theories based on
quantum formulas are to be disregarded in favor of much simpler explanations
based on classical notions, provided these exist. Entities should not be
multiplied beyond necessity. The simplest theory is the best [49].
It is important to understand whether the functioning of consciousness is
described by quantum or classical rules, since, depending on the formalism involved,
the operation of artificial intelligence has to be characterized in the same
language. Examining the above objections to the use of quantum techniques for
the formalization of decision making, it is possible to say the following:
First, although at the present time the influence of quantum effects on the
functioning of the brain has not been convincingly demonstrated, it cannot be
absolutely excluded. Second, even if actual quantum effects play no role in
the brain operation and consciousness does not need quantum description, the
investigation of the analogies between decision making and quantum processes
can enrich both fields, suggesting a more profound comprehension of each. The
peculiarities of quantum phenomena, which are better understood, can give hints
on ways of characterizing the functioning of consciousness.
The point of view advocated in this review can be summarized as follows: The
brain is a classical object, hence its basic property, that is consciousness,
by definition, has to be classical. Otherwise it would be meaningless to say
that a classical object has quantum properties. Nevertheless, there exist a
number of formal analogies in the description of quantum measurements and
decision making. These analogies need to be carefully investigated for two
reasons:
(i) Although being formal, the analogies between different phenomena very
often suggest concrete practical ways for describing these phenomena.
(ii) Borrowing some ideas from the nominal analogies between two different
approaches helps to compare these approaches and to choose the more efficient
and simple theory.
The formal analogy between quantum and conscious phenomena was noticed a long
time ago by von Neumann, who mentioned that the quantum theory of
measurements can be interpreted as decision theory [50]. This concept has been
developed by other researchers, for instance by Benioff [51, 52]. Thus quantum
measurement is analogous to decision making, hence the measurement of an
observable is similar to the choice of an alternative in decision making.
Accepting these analogies, we can go further. Keeping in mind that emotions
appear subconsciously during the process of decision making, they can be
associated with intrinsic noise produced by a measuring device during the
measurement procedure. In that way, the observable-noise duality is equivalent
to the cognition-emotion duality. In the same way as, in physical
measurements, the detection of signals can be hindered by noise while the
addition of an appropriate amount of noise can boost a signal and hence
facilitate its detection [53, 54], in decision processes emotions can either
hinder or facilitate decision making.
In quantum measurements, there can exist observable-noise entanglement, which
in decision making corresponds to correlations mimicking cognition-emotion
entanglement. If the intrinsic noise is presented as a superposition of
several modes, then there appears noise interference, hence there can arise
the emotion interference. In that way, it is possible to expect various
similarities between quantum measurements and decision making. So, even if
consciousness does not function exactly by the same rules as quantum
measurements, the many similarities found can provide useful hints
for formalizing the operation of decision procedures, hence for the creation
of artificial intelligence.
Concluding, in order to avoid confusion, it is necessary to stress what the
review is about and what its aims are not. This is in no way a survey of the
general field of quantum techniques applied to the characterization of
consciousness; for this reason, the thousands of articles on such applications
are not discussed, and only the main books are cited, where a vast number of
references can be found. Concentrating on ideas and methods for emotion
quantification, citations are given only to those works where the role of
emotions in decision making is studied, and especially where practical methods
of their description are discussed; we do not plunge into the ocean of papers
where these problems are not touched upon. Indeed, in the majority of works
discussing the applications of quantum theory to consciousness, neither the
role of emotions is considered nor is their quantification touched at all.
The very first requirement on the way to creating human-like artificial
intelligence is the formulation of explicit mathematical rules of its
operation. This paper does not pretend to describe all technical stages of
actual artificial intelligence functioning, but it aims at formulating
explicit mathematical algorithms for the operation of a human-like artificial
intelligence in the process of taking decisions. Without a mathematical
description of such rules and algorithms, no device can be modeled. But in
order to mathematically formulate the process of choice in human-like decision
making to be implemented by an artificial intelligence, it is compulsory to
understand and mathematically describe the process of choice by humans, whose
actions an artificial intelligence is planned to mimic. Therefore the pivotal
aim of the paper is to analyze the combination of the following problems,
whose solution is necessary for the mathematical formulation of decision
making by an intelligence, whether human-like artificial or human:
(1) The analysis of the role of emotions in decision making and a survey of
the related literature, whether it employs quantum or classical language. This
is necessary for understanding the basic qualitative principles of Affective
Intelligence Processing.
(2) The exposition of a practical way for emotion quantification in the
process of taking decisions. This is a prerequisite for the formation of an
Affective Artificial Intelligence requiring, for its functioning, the
existence of explicit quantitative algorithms.
(3) The comparison of two ways, quantum and classical, for the formulation of
the practical principles of Affective Decision Making. This is compulsory for
selecting the most appropriate method, one that would be self-consistent,
simple, and able to provide quantitative recipes for its operation.
(4) The comprehension of how the classical approach has to be modified in
order to provide the same practical results as with the use of quantum
techniques. Again, this is impossible to understand without a comparison of
both approaches, quantum and classical. Otherwise, the reader would constantly
exclaim: Why has this or that assumption been made? Where has this or that
formula come from?
These goals are realized in the review. An exhaustive survey of literature
discussing the role of emotions in decision making is given. Attempts at
emotion quantification are described, based on the available literature.
As is evident from the numerous citations given, there are plenty of papers
discussing the role of emotions in classical terms. A detailed comparison of
quantum and classical techniques is given. It is shown that the classical
approach can be modified, by taking account of emotions, in such a way as to
give the same results as in the language of quantum decision theory. For
example, all paradoxes of classical decision making can be quantitatively
explained without any use of quantum theory.
However, without comparing the two different approaches to taking emotions
into account, it would be impossible, first, to conclude which of them is
preferable, and second, to understand how the classical theory would need to
be modified so as to give the same results as the quantum approach. Therefore
all parts of the review are of equal importance and would lose their sense if
separated. Thus it is impossible to justify one of the approaches without
comparing it with the other. On the other hand, after the different approaches
are formulated, they can be employed independently and their effectiveness
compared.
The layout of the review is as follows. In Sec. 2, the general theory of
quantum measurements in the presence of intrinsic noise is introduced. The
analogies with decision making are emphasized. Assuming that the functioning
of noisy quantum measurements is similar to that of affective decision making
suggests the general framework for the latter. The comparison of the quantum
and the modified classical approaches does not merely provide interesting
analogies; it allows for the formulation of the simplest and most effective
theory of Affective Decision Making.
Quantum techniques, of course, are not common knowledge, and this can strongly
hinder the use of quantum theory for practical applications. Therefore, if the
same phenomena can be described both in quantum language and in classical
terms, it is reasonable to resort to the simpler classical approach, rather
than to play at science by needlessly complicating the treatment with
fashionable terminology. The theory has to be as simple as possible, so that
it can be straightforwardly employed by anyone, including those who may not
know quantum techniques. This concerns decision theory as well, which can be
developed as a branch of quantum theory or can be reformulated into an
axiomatic form that, on the one hand, mimics some quantum operations and
structures, but, on the other hand, does not require knowledge of quantum
terminology. Section 3 accomplishes this goal, showing that the theory
of affective decision making can be formulated in an axiomatic way that does
not need to resort to quantum theory. Being formulated in mathematical terms,
affective decision theory can be implemented for the operation of artificial
intelligence. Section 4 considers the famous behavioural paradoxes in decision
making and shows that, on the aggregate level, these paradoxes do not arise in
the frame of the affective decision theory. In that sense, an artificial
intelligence, obeying the rules of this theory, will act as a typical human
decision maker. In Section 5, the structure of networks composed of
intelligent agents taking decisions in the presence of emotions is described.
Section 6 concludes.
## 2 Measurements under intrinsic noise
One of the main points advocated in the present review is the existence of an
analogy between human emotions in decision making and intrinsic noise in
quantum measurements. This obliges us to investigate the structure of quantum
probability for measurements under intrinsic noise in order to find out the
answers to two principal questions:
1\. Can this analogy be employed to develop an affective decision theory that
could be sufficiently formalized to be useful for describing the operation of
human-level artificial intelligence?
2\. Is this analogy merely nominal, or does it go deeper than that, requiring
the use of quantum techniques for adequately representing behavioural decision
making?
In physics, noise is modeled by introducing concrete noise terms into
Hamiltonians or evolution equations and prescribing the corresponding
distributions [53, 54, 55, 56]. Our aim here is not the analysis of concrete
models, but the study of the general structure of probabilities for quantum
events, decorated by the presence of intrinsic noise [57]. This is because we
wish to compare these probabilities with those arising in decision theory.
However, the explicit nature of the intrinsic noise mimicking emotions, which
appears in the process of taking decisions, is not known. Thus only general structures
can be compared. In parallel with the physics terminology, we shall mention
decision-making analogies of the considered notions. The mentioned analogies
do not imply that the process of taking decisions by humans necessarily has to
be treated as a quantum procedure; conversely, this rather means
that quantum measurements can be handled as formally similar to decision
making [51, 52, 58, 59, 60]. The most important analogy is between intrinsic
noise in quantum measurements and emotions in decision making [60, 61].
### 2.1 Quantum algebra of events
Let us consider quantum events $A_{n}$, enumerated with the index
$n=1,2,\ldots$. For concreteness, we consider a discrete index $n$, while in
general it could be continuous. Events can be the results of measurements for
observable quantities. In decision theory, an event can be the choice of a
particular alternative from the given set of alternatives. The collection of
quantum events forms a ring [62, 63],
$\mathbb{A}=\\{A_{n}:~{}n=1,2,\ldots\\}$ (2.1)
possessing two binary operations, addition and conjunction. Addition, or
union, or disjunction, implies that for any $A,B\in\mathbb{A}$ there is the
union $A\cup B\in\mathbb{A}$, meaning either $A$ or $B$ and enjoying the
properties
$A\;\bigcup\;B=B\;\bigcup\;A\qquad({\rm commutativity})\;,$
$A\;\bigcup\;\left(B\;\bigcup\;C\right)=\left(A\;\bigcup\;B\right)\;\bigcup\;C\qquad({\rm associativity})\;,$
$A\;\bigcup\;A=A\qquad({\rm idempotency})\;.$
Conjunction, or multiplication, means that for any $A,B\in\mathbb{A}$ there
exists $A\cap B\in\mathbb{A}$ implying both $A$ and $B$ and having the
properties
$A\;\bigcap\;\left(B\;\bigcap\;C\right)=\left(A\;\bigcap\;B\right)\;\bigcap\;C\qquad({\rm associativity})\;,$
$A\;\bigcap\;A=A\qquad({\rm idempotency})\;.$
In general, conjunction is not commutative,
$A\;\bigcap\;B\neq B\;\bigcap\;A\qquad({\rm no\;commutativity})\;,$
and not distributive,
$A\;\bigcap\;\left(B\;\bigcup\;C\right)\neq\left(A\;\bigcap\;B\right)\;\bigcup\;\left(A\;\bigcap\;C\right)\qquad({\rm no\;distributivity})\;.$
The ring $\mathbb{A}$ includes the identical event $1\in\mathbb{A}$, which is
an event that is identically true. For this event,
$A\;\bigcap\;1=1\;\bigcap\;A=A\;,\qquad A\;\bigcup\;1=1\;,\qquad 1\;\bigcup\;1=1\;.$
Also, there exists an impossible event $0\in\mathbb{A}$, which is identically
false, so that
$A\;\bigcap\;0=0\;\bigcap\;A=0\;,\qquad A\;\bigcup\;0=A\;,\qquad 0\;\bigcup\;1=1\;.$
The events for which $A\cap B=B\cap A=0$ are called disjoint or orthogonal.
Note that one often simplifies the above notation by denoting the addition as
$A\cup B\equiv A+B$ and the conjunction as $A\cap B\equiv AB$.
For each event $A\in\mathbb{A}$, there exists a complementary, or negating,
event $\overline{A}\in\mathbb{A}$, for which
$A\;\bigcup\;\overline{A}=1\;,\qquad A\;\bigcap\;\overline{A}=\overline{A}\;\bigcap\;A=0\;,\qquad\overline{0}=1\;,\qquad\overline{1}=0\;.$
The absence of distributivity can be demonstrated by a simple example [62].
Consider two events $B_{1}$ and $B_{2}$ whose union forms unity, $B_{1}\cup
B_{2}=1$. And assume that both $B_{1}$ and $B_{2}$ are orthogonal to a non-
trivial event $A\neq 0$, so that $A\cap B_{1}=A\cap B_{2}=0$. By this
definition, $A\cap(B_{1}\cup B_{2})=A\cap 1=A$. If the property of
distributivity held, we would have $A=A\cap(B_{1}\cup B_{2})=(A\cap
B_{1})\cup(A\cap B_{2})=0\cup 0=0$. Since, by assumption, $A\neq 0$, the
property of distributivity does not hold.
The concept of non-distributivity in quantum physics can be illustrated by the
example of spin measurements [63]. Let the spin projection of a particle with
spin 1/2 be measured. Suppose $B_{1}$ is the event of measuring the spin in
the up state with respect to the $z-$axis, whereas $B_{2}$ is the event of
measuring the spin in the down state along this axis. The spin can be either
up or down, hence $B_{1}\cup B_{2}=1$. Assume that $A$ is the event of
measuring the spin along an axis in the plane orthogonal to the $z-$axis, say
along the $x-$axis. Since the spin cannot be measured simultaneously along two
orthogonal axes, it is found along one axis or the other, but cannot have
definite components on both axes simultaneously. Hence, $A\cap
B_{1}=A\cap B_{2}=0$. At the same time, $A\cap(B_{1}\cup B_{2})=A\neq 0$.
Therefore, there is no distributivity of events in the spin measurement.
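To make this algebra concrete, the following minimal numerical sketch (an illustration added here, assuming numpy; it is not part of the original argument) encodes the three spin events as projectors on $\mathbb{C}^{2}$, with the union realized as the sum of orthogonal ranges and the conjunction as the intersection of ranges, and checks that distributivity fails:

```python
import numpy as np

# Events are represented by projectors on C^2; unions correspond to sums of
# orthogonal ranges, conjunctions to intersections of ranges.
up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2)

def proj(v):
    """Projector onto the ray spanned by v."""
    return np.outer(v, v.conj())

B1, B2, A = proj(up_z), proj(down_z), proj(up_x)

# B1 and B2 have orthogonal ranges, so their union is the identity: B1 ∪ B2 = 1.
print(np.allclose(B1 + B2, np.eye(2)))          # True

def intersection_dim(P, Q):
    """dim(range P ∩ range Q) = rank P + rank Q - dim(range P + range Q)."""
    joint = np.linalg.matrix_rank(np.hstack([P, Q]))
    return np.linalg.matrix_rank(P) + np.linalg.matrix_rank(Q) - joint

# The ray of up_x meets the rays of up_z and down_z only at the zero vector,
# so A ∩ B1 = A ∩ B2 = 0.
print(intersection_dim(A, B1), intersection_dim(A, B2))   # 0 0
# Yet A ∩ (B1 ∪ B2) = A ∩ 1 = A ≠ 0, so distributivity fails.
```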
### 2.2 Operationally testable events
An event is termed operationally testable, when it can be quantified by means
of measurements. In physics, one measures observable quantities. For example,
in quantum physics, one can measure the eigenvalues of a Hermitian operator
corresponding to an observable. In decision making, one makes decisions by
choosing preferable alternatives from the given set. Quantum measurements can
be treated as a kind of decision making [50, 51, 52, 58, 64, 65].
Let us consider a set of alternatives (2.1) representing, e.g., a set of
eigenvalues of an operator in quantum physics or a set of alternatives in
decision theory. In quantum theory, each $A_{n}$ can be put into
correspondence to a vector (state of an alternative) $|A_{n}\rangle$ in a
Hilbert space $\mathcal{H}_{A}$. For simplicity, we keep in mind nondegenerate
spectra of Hermitian operators. The vectors $|A_{n}\rangle$ can be
orthonormalized,
$\langle\;A_{m}\;|\;A_{n}\;\rangle=\delta_{mn}\;.$ (2.2)
Here and in what follows, the Dirac bracket notation [66] is employed. The
Hilbert space $\mathcal{H}_{A}$ can be defined as the closed linear envelope
${\cal H}_{A}={\rm span}\;\\{|\;A_{n}\;\rangle\\}\;.$ (2.3)
In quantum decision theory, this is called the space of alternatives.
Each alternative $A_{n}$ is represented by a projection operator
$\hat{P}(A_{n})=|\;A_{n}\;\rangle\langle\;A_{n}\;|$ (2.4)
enjoying the property
$\hat{P}(A_{m})\;\hat{P}(A_{n})=\delta_{mn}\;\hat{P}(A_{n})\;.$
The latter means that the projection operators are idempotent and the
alternatives of the ring $\mathbb{A}$ are mutually incompatible. The
projection operators satisfy the resolution of unity
$\sum_{n}\hat{P}(A_{n})=\hat{1}_{A}\;,$ (2.5)
where $\hat{1}_{A}$ is the identity operator in ${\cal H}_{A}$. The complete
family of the projection operators forms the projection-valued operator
measure on ${\cal H}_{A}$ with respect to set (2.1).
In quantum physics, degenerate spectra can occur, when to an $A_{n}$ there
correspond several vectors $|A_{n_{i}}\rangle$, with $i=1,2,\ldots$.
Then the projection operator associated with $A_{n}$ is
$\hat{P}(A_{n})=\sum_{i}\hat{P}(A_{n_{i}})\;,\qquad\hat{P}(A_{n_{i}})\equiv|\;A_{n_{i}}\;\rangle\langle\;A_{n_{i}}\;|\;.$
If one wishes to avoid the problem of degeneracy, one slightly modifies the
considered system by introducing infinitesimal terms lifting the degeneracy
connected with some kind of symmetry, as has been mentioned by von Neumann
[50] and elaborated by Bogolubov [67, 68, 69]. Similarly, in decision theory,
the problem of degeneracy can be easily avoided by reclassifying the
alternatives under consideration [70, 71]. Thus for decision theory, it is
sufficient to consider the situation with no degeneracy.
### 2.3 Modes of intrinsic noise
Any measurement is accompanied by some kind of noise that can be of two types,
extrinsic and intrinsic [53, 54, 55, 56, 72]. Here we are interested in
intrinsic noise that is generated by the measurement device in the process of
measurement. Because of the intrinsic noise, what is measured is not a pure
result for an alternative, but a combination of the data related to the
testable event of interest and the influence of noise. The intrinsic noise
also can be called instrumental or self-induced. In decision theory, the
analogy of the intrinsic noise is the collection of emotions arising in the
process of decision making, of subconscious allusions, intuitive guesses, gut
feelings, and the like [57, 73].
Suppose the intrinsic noise is characterized by a set of elementary modes
$\mathbb{E}=\\{e_{\mu}:~{}\mu=1,2,\ldots\\}\;.$ (2.6)
In decision theory, this would be a family of different emotions. Each noise
mode $e_{\mu}$ is put into correspondence with a vector (noise state)
$|e_{\mu}\rangle$ of a Hilbert space of noise $\mathcal{H}_{E}$ that can be
represented as the closed linear envelope
${\cal H}_{E}={\rm span}\;\\{|\;e_{\mu}\;\rangle\\}\;.$ (2.7)
The vectors of elementary modes are assumed to be orthonormalized,
$\langle\;e_{\mu}\;|\;e_{\nu}\;\rangle=\delta_{\mu\nu}\;.$ (2.8)
In quantum decision theory, space (2.7) is called the emotion space. Emotion
modes represent different types of emotions, such as joy, sadness, anger,
fear, disgust, trust, etc.
The projection operator for a mode $e_{\mu}$ is
$\hat{P}(e_{\mu})=|\;e_{\mu}\;\rangle\langle\;e_{\mu}\;|\;,$ (2.9)
which is idempotent and orthogonal to the projectors of other modes,
$\hat{P}(e_{\mu})\hat{P}(e_{\nu})=\delta_{\mu\nu}\;\hat{P}(e_{\mu})\;.$ (2.10)
The family of these projectors is complete satisfying the resolution of unity
$\sum_{\mu}\hat{P}(e_{\mu})=\hat{1}_{E}\;,$ (2.11)
where $\hat{1}_{E}$ is the unity operator in ${\cal H}_{E}$. Thus the family
of projectors (2.9) forms the projection-valued operator measure with respect
to the set (2.6).
The measurement of an alternative $A_{n}$ generates the intrinsic noise
$z_{n}$ represented by the vector
$|\;z_{n}\;\rangle=\sum_{\mu}a_{n\mu}\;|\;e_{\mu}\;\rangle\;.$ (2.12)
In decision making, this corresponds to the collection of emotions arising
under the choice between alternatives. The noise vector (2.12) can be
normalized,
$\langle\;z_{n}\;|\;z_{n}\;\rangle=\sum_{\mu}|\;a_{n\mu}\;|^{2}=1\;,$ (2.13)
although the noise vectors generated by different measurements are not
necessarily mutually orthogonal, so that the product
$\langle\;z_{m}\;|\;z_{n}\;\rangle=\sum_{\mu}a^{*}_{m\mu}a_{n\mu}$ (2.14)
is not necessarily a Kronecker delta. Equivalently, the collections of emotions
generated under the choice of different alternatives do not need to be
mutually exclusive.
Strictly speaking, emotions are contextual and are subject to temporal
variations, which means that the coefficients $a_{n\mu}$, generally, can vary
with time, depending on the state of a decision maker and the corresponding
surroundings.
The noise projectors
$\hat{P}(z_{n})=|\;z_{n}\;\rangle\langle\;z_{n}\;|$ (2.15)
are idempotent,
$\hat{P}^{2}(z_{n})=\hat{P}(z_{n})\;,$ (2.16)
however, generally, are not mutually orthogonal,
$\hat{P}(z_{m})\hat{P}(z_{n})=(\;\langle\;z_{m}\;|\;z_{n}\;\rangle\;)\;|\;z_{m}\;\rangle\langle\;z_{n}\;|$
(2.17)
because of property (2.14).
Note the important difference between projectors (2.9) and (2.15). The family
of projectors (2.9) is complete with respect to the set (2.6) due to the
resolution of unity (2.11). However, the set of projectors (2.15) is not
complete with respect to the set
$\mathbb{Z}=\\{z_{n}:~{}n=1,2,\ldots\\}\;,$ (2.18)
since the sum
$\sum_{n}\hat{P}(z_{n})=\sum_{n}\;\sum_{\mu\nu}a_{n\mu}a^{*}_{n\nu}|\;e_{\mu}\;\rangle\langle\;e_{\nu}\;|$
(2.19)
is not a unity operator in $\mathcal{H}_{E}$. The latter is clear from the
equality
$\langle\;e_{\mu}\;|\;\sum_{n}\hat{P}(z_{n})\;|\;e_{\nu}\;\rangle=\sum_{n}a^{*}_{n\mu}a_{n\nu}\;,$
which is not a Kronecker delta.
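As a quick numerical illustration of Eqs. (2.13), (2.14), and (2.19), here is a minimal sketch (assuming numpy; the dimensions and the random coefficients $a_{n\mu}$ are hypothetical, not taken from the review) verifying that the noise overlaps do not form a Kronecker delta and that the noise projectors do not resolve unity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_alt, n_modes = 3, 4          # hypothetical numbers of alternatives and modes

# Random complex coefficients a_{n mu}, normalized as in Eq. (2.13).
a = rng.normal(size=(n_alt, n_modes)) + 1j * rng.normal(size=(n_alt, n_modes))
a /= np.linalg.norm(a, axis=1, keepdims=True)

# Overlaps <z_m|z_n> of Eq. (2.14): unit diagonal, but generally NOT a
# Kronecker delta off the diagonal.
overlaps = a.conj() @ a.T
print(np.round(np.abs(overlaps), 3))

# Sum of the noise projectors P(z_n), Eq. (2.19): generally NOT the identity.
P_sum = sum(np.outer(a[n], a[n].conj()) for n in range(n_alt))
print(np.allclose(P_sum, np.eye(n_modes)))      # False in general
```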
### 2.4 Noise-decorated alternatives
When a measurement of an alternative $A_{n}$ is accompanied by inevitable
intrinsic noise $z_{n}$, what is actually observed is not a pure event $A_{n}$
but this event decorated with the noise, that is the combined event
$A_{n}z_{n}$, whose representation is given by the vector
$|\;A_{n}z_{n}\;\rangle=|\;A_{n}\;\rangle\;\bigotimes\;|\;z_{n}\;\rangle=\sum_{\mu}a_{n\mu}\;|\;A_{n}e_{\mu}\;\rangle$
(2.20)
in the Hilbert space
${\cal H}={\cal H}_{A}\;\bigotimes\;{\cal H}_{E}={\rm
span}\;\\{\;|\;A_{n}e_{\mu}\;\rangle\;\\}.$ (2.21)
The vectors defined in equation (2.20) are mutually orthogonal,
$\langle\;z_{m}A_{m}\;|\;A_{n}z_{n}\;\rangle=\delta_{mn}\;.$ (2.22)
The set of the noise-decorated events
$\mathbb{A}_{Z}=\\{A_{n}z_{n}:~{}n=1,2,\ldots\\}$ (2.23)
is characterized by the family of the projectors
$\hat{P}(A_{n}z_{n})=|\;A_{n}z_{n}\;\rangle\langle\;z_{n}A_{n}\;|=\hat{P}(A_{n})\;\bigotimes\;\hat{P}(z_{n})$
(2.24)
that also can be written as
$\hat{P}(A_{n}z_{n})=\sum_{\mu\nu}a_{n\mu}a^{*}_{n\nu}\;|\;A_{n}e_{\mu}\;\rangle\langle\;e_{\nu}A_{n}\;|\;.$
(2.25)
These projectors are idempotent and mutually orthogonal,
$\hat{P}(A_{m}z_{m})\hat{P}(A_{n}z_{n})=\delta_{mn}\;\hat{P}(A_{n}z_{n}).$
(2.26)
However, since the vectors (2.20) do not form a basis in space (2.21), the
projectors (2.24) do not sum to the unity operator,
$\sum_{n}\hat{P}(A_{n}z_{n})=\sum_{n}\;\sum_{\mu\nu}a_{n\mu}a^{*}_{n\nu}\;|\;A_{n}e_{\mu}\;\rangle\langle\;e_{\nu}A_{n}\;|\;,$
(2.27)
which is seen from the equality
$\langle\;e_{\mu}A_{m}\;|\;\sum_{k}\hat{P}(A_{k}z_{k})\;|\;A_{n}e_{\nu}\;\rangle=\delta_{mn}\;a^{*}_{n\mu}a_{n\nu}\;.$
Thus the family of projectors (2.24) is idempotent and orthogonal, but not
complete, hence it does not form a standard operator-valued measure and
requires some additional conditions for introducing the probability of
alternatives [73, 74].
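The structure of the decorated projectors can likewise be checked numerically. The self-contained sketch below (numpy assumed, hypothetical dimensions and coefficients) builds $\hat{P}(A_{n}z_{n})$ of Eq. (2.24) as a Kronecker product and confirms idempotency and orthogonality, Eq. (2.26), together with the lack of completeness:

```python
import numpy as np

rng = np.random.default_rng(1)
n_alt, n_modes = 3, 4          # hypothetical dimensions of H_A and H_E

# Normalized noise vectors |z_n>, as in Sec. 2.3.
a = rng.normal(size=(n_alt, n_modes)) + 1j * rng.normal(size=(n_alt, n_modes))
a /= np.linalg.norm(a, axis=1, keepdims=True)

def P_A(n):
    """Projector P(A_n) on H_A."""
    v = np.zeros(n_alt)
    v[n] = 1.0
    return np.outer(v, v)

def P_decorated(n):
    """P(A_n z_n) = P(A_n) ⊗ P(z_n), Eq. (2.24)."""
    Pz = np.outer(a[n], a[n].conj())
    return np.kron(P_A(n), Pz)

# Idempotency and mutual orthogonality, Eq. (2.26).
print(np.allclose(P_decorated(0) @ P_decorated(0), P_decorated(0)))  # True
print(np.allclose(P_decorated(0) @ P_decorated(1), 0))               # True

# Incompleteness: the decorated projectors do not sum to the identity on H.
S = sum(P_decorated(n) for n in range(n_alt))
print(np.allclose(S, np.eye(n_alt * n_modes)))                       # False
```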
### 2.5 Quantum probability space
Statistical properties of the considered system are characterized by a
statistical operator $\hat{\rho}$ that depends on the context and the
observer’s knowledge about the state of the system, because of which it is
also called the state-of-knowledge operator. The operator $\hat{\rho}$ is a
positive-semidefinite trace-one operator. It is also called the system state,
or often simply a state.
The general representation of $\hat{\rho}$ in the basis
$\\{|A_{n}e_{\mu}\rangle\\}$ of orthonormalized vectors, that are not
necessarily eigenvectors of $\hat{\rho}$, has the form
$\hat{\rho}=\sum_{mn}\;\sum_{\mu\nu}\rho_{mn}^{\mu\nu}\;|\;A_{m}e_{\mu}\;\rangle\langle\;e_{\nu}A_{n}\;|\;,$
(2.28)
with
$\rho_{mn}^{\mu\nu}\equiv\langle\;e_{\mu}A_{m}\;|\;\hat{\rho}\;|\;A_{n}e_{\nu}\;\rangle\;.$
(2.29)
The trace normalization condition can be written as
${\rm Tr}_{\cal H}\hat{\rho}=\sum_{n\mu}\rho_{nn}^{\mu\mu}=1\;.$ (2.30)
A positive-semidefinite operator on a complex Hilbert space is necessarily
self-adjoint [75], which imposes the constraint
$\left(\rho_{mn}^{\mu\nu}\right)^{*}=\rho_{nm}^{\nu\mu}\qquad(\hat{\rho}^{+}=\hat{\rho})\;.$
(2.31)
Also, let us require that the family of projectors (2.24) be complete on
average, so that
${\rm Tr}_{\cal
H}\hat{\rho}\;\left[\;\sum_{n}\hat{P}(A_{n}z_{n})\;\right]=1\;,$ (2.32)
which acquires the form
$\sum_{n}\;\sum_{\mu\nu}a_{n\mu}^{*}a_{n\nu}\rho^{\mu\nu}_{nn}=1\;,$ (2.33)
with the trace over the space (2.21).
To be self-consistent, the system of constraints should not be overdefined.
This means that the number of involved parameters cannot be smaller than the
number of constraint equations. The vectors $|z_{n}\rangle$ include the
complex coefficients $a_{n\mu}$ containing $2d_{A}d_{E}$ real components,
where
$d_{A}\equiv{\rm dim}\;{\cal H}_{A}\;,\qquad d_{E}\equiv{\rm dim}\;{\cal
H}_{E}\;.$ (2.34)
The statistical operator $\hat{\rho}$ comprises the coefficients
$\rho_{mn}^{\mu\nu}$ containing $d_{A}d_{E}$ real diagonal elements and
$d_{A}^{2}d_{E}^{2}-d_{A}d_{E}$ complex off-diagonal elements, hence in total
$2d_{A}^{2}d_{E}^{2}-d_{A}d_{E}$ real components.
Conditions (2.31) impose $d_{A}^{2}d_{E}^{2}-d_{A}d_{E}$ restrictions. In
addition, there are two normalization conditions (2.30) and (2.33). In this
way, there are in total $2d_{A}^{2}d_{E}^{2}+d_{A}d_{E}$ real parameters and
$d_{A}^{2}d_{E}^{2}-d_{A}d_{E}+2$ equations. From here, the condition of self-
consistency becomes
$d_{A}^{2}d_{E}^{2}+2d_{A}d_{E}\geq 2\;,$ (2.35)
which holds for any $d_{A}d_{E}\geq 1$.
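As a sanity check of this counting, a few lines of code (an added illustration, not from the original text) confirm that the number of parameters indeed exceeds the number of constraints, i.e. inequality (2.35), for several small dimensions:

```python
# Quick arithmetic check of the parameter counting, for small dimensions.
for dA in (1, 2, 3):
    for dE in (1, 2, 3):
        d = dA * dE
        params = 2 * d**2 + d           # 2d from a_{n mu} plus 2d^2 - d from rho
        constraints = d**2 - d + 2      # hermiticity (2.31) plus (2.30), (2.33)
        assert params >= constraints    # self-consistency, Eq. (2.35)
        print(dA, dE, params, constraints)
```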
The pair $\\{{\cal H},\hat{\rho}\\}$ is called a quantum statistical ensemble
[76]. Adding here the family
${\cal P}_{AZ}=\\{\hat{P}(A_{n}z_{n}):~{}n=1,2,\ldots\\}$ (2.36)
of projectors (2.24) gives the quantum probability space
$\\{{\cal H},\;\hat{\rho},\;{\cal P}_{AZ}\\}\;.$
The probability of observing an event $A_{n}z_{n}$ reads as
$p(A_{n}z_{n})={\rm Tr}_{\cal H}\;\hat{\rho}\;\hat{P}(A_{n}z_{n})\;.$ (2.37)
The normalization condition (2.32) guarantees the validity of the
normalization condition
$\sum_{n}p(A_{n}z_{n})=1\;,\qquad 0\leq p(A_{n}z_{n})\leq 1\;.$ (2.38)
Explicitly, equation (2.37) takes the form
$p(A_{n}z_{n})=\sum_{\mu\nu}a^{*}_{n\mu}a_{n\nu}\rho^{\mu\nu}_{nn}\;.$ (2.39)
The form (2.37) defines the quantum probability of observing an alternative
$A_{n}$ decorated by intrinsic noise.
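For a concrete instance of the probability (2.37), one can take a pure state-of-knowledge operator built on a superposition of decorated events. The sketch below (numpy assumed; the state and its coefficients are hypothetical) evaluates the trace formula and checks the normalization (2.38); for this particular state the probabilities reduce to $|c_{n}|^{2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n_alt, n_modes = 3, 4          # hypothetical dimensions

a = rng.normal(size=(n_alt, n_modes)) + 1j * rng.normal(size=(n_alt, n_modes))
a /= np.linalg.norm(a, axis=1, keepdims=True)

def ket(n):
    """|A_n z_n> = |A_n> ⊗ |z_n> as a vector in H = H_A ⊗ H_E, Eq. (2.20)."""
    e = np.zeros(n_alt)
    e[n] = 1.0
    return np.kron(e, a[n])

# A pure state rho = |psi><psi|, with |psi> a normalized superposition of
# decorated events (a hypothetical example satisfying Eq. (2.32)).
c = rng.normal(size=n_alt) + 1j * rng.normal(size=n_alt)
c /= np.linalg.norm(c)
psi = sum(c[n] * ket(n) for n in range(n_alt))
rho = np.outer(psi, psi.conj())

# p(A_n z_n) = Tr[rho P(A_n z_n)], Eq. (2.37)
p = [np.real(np.trace(rho @ np.outer(ket(n), ket(n).conj())))
     for n in range(n_alt)]
print(np.round(p, 6), "sum =", round(sum(p), 6))   # probabilities, summing to 1
print(np.allclose(p, np.abs(c) ** 2))              # here p_n = |c_n|^2
```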
### 2.6 Quantum-classical correspondence
Quantum theory reduces to classical theory under the effect of decoherence
[77, 78]. Then quantum decision theory reduces to classical decision theory,
and the quantum probability (2.37) reduces to a classical probability [70, 79].
In expression (2.39), it is possible to separate the diagonal part
$f(A_{n}z_{n})=\sum_{\mu}|\;a_{n\mu}\;|^{2}\;\rho^{\mu\mu}_{nn}$ (2.40)
and the remaining off-diagonal part
$q(A_{n}z_{n})=\sum_{\mu\neq\nu}a^{*}_{n\mu}a_{n\nu}\;\rho^{\mu\nu}_{nn}\;.$
(2.41)
Then probability (2.37) becomes the sum
$p(A_{n}z_{n})=f(A_{n}z_{n})+q(A_{n}z_{n})\;.$ (2.42)
The first term here is non-negative, while the second one is not
sign-definite. The term $q(A_{n}z_{n})$ is due to the interference of noise
modes and is zero if there is just a single mode or when $\rho_{nn}^{\mu\nu}$
is diagonal in the upper indices. This is why it can be called the quantum
interference term or the quantum coherence term. In the present case, it is caused
by the noise interference.
The disappearance of the quantum coherence term is called decoherence, under
which the quantum probability reduces to the classical form [77,
78, 79] associated with expression (2.40). Interpreting the latter as
classical probability implies the validity of the conditions
$\sum_{n}f(A_{n}z_{n})=1\;,\qquad 0\leq f(A_{n}z_{n})\leq 1\;.$ (2.43)
Therefore, in view of conditions (2.38) and (2.43), the interference term
satisfies the conditions
$\sum_{n}q(A_{n}z_{n})=0\;,\qquad-1\leq q(A_{n}z_{n})\leq 1\;.$ (2.44)
More precisely, it fulfills the inequality
$-f(A_{n}z_{n})\leq q(A_{n}z_{n})\leq 1-f(A_{n}z_{n})\;.$ (2.45)
In decision theory, the first equation in (2.44) is called the alternation law
[80, 81].
The quantum-classical correspondence can be formulated as the reduction of
quantum probability to classical under decoherence, when
$p(A_{n}z_{n})\mapsto f(A_{n}z_{n})\;,\qquad q(A_{n}z_{n})\mapsto 0\;.$ (2.46)
Thus the appearance of an additional term $q(A_{n}z_{n})$ is due to the
interference of noise modes. The phenomenon of mode interference is well known
in quantum physics [31, 82, 83, 84, 85, 86]. The absence of intrinsic noise
accompanying measurements corresponds to the absence of emotions in the choice
between alternatives.
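The decomposition (2.42) and the decoherence correspondence (2.46) can be verified numerically. In the sketch below (numpy assumed, with a hypothetical pure state; note that the alternation law (2.44) additionally presupposes condition (2.43), which a generic state need not satisfy, so it is not tested here), the interference term $q$ vanishes once the off-diagonal noise elements are removed:

```python
import numpy as np

rng = np.random.default_rng(3)
n_alt, n_modes = 3, 4          # hypothetical dimensions

a = rng.normal(size=(n_alt, n_modes)) + 1j * rng.normal(size=(n_alt, n_modes))
a /= np.linalg.norm(a, axis=1, keepdims=True)

def ket(n):
    """|A_n z_n> in H = H_A ⊗ H_E, Eq. (2.20)."""
    e = np.zeros(n_alt)
    e[n] = 1.0
    return np.kron(e, a[n])

c = rng.normal(size=n_alt) + 1j * rng.normal(size=n_alt)
c /= np.linalg.norm(c)
psi = sum(c[n] * ket(n) for n in range(n_alt))
# Index rho as rho[m, mu, n, nu]; rho[n, :, n, :] holds rho^{mu nu}_{nn}.
rho = np.outer(psi, psi.conj()).reshape(n_alt, n_modes, n_alt, n_modes)

f = np.array([np.sum(np.abs(a[n])**2 * np.real(np.diag(rho[n, :, n, :])))
              for n in range(n_alt)])                              # Eq. (2.40)
p = np.array([np.real(a[n].conj() @ rho[n, :, n, :] @ a[n])
              for n in range(n_alt)])                              # Eq. (2.39)
q = p - f                                                          # Eq. (2.42)
print(np.round(p, 4), np.round(f, 4), np.round(q, 4))

# Decoherence: zero out the mu != nu elements within each block; then q -> 0,
# illustrating the correspondence (2.46).
rho_dec = rho.copy()
for n in range(n_alt):
    rho_dec[n, :, n, :] = np.diag(np.diag(rho[n, :, n, :]))
q_dec = np.array([np.real(a[n].conj() @ rho_dec[n, :, n, :] @ a[n]) - f[n]
                  for n in range(n_alt)])
print(np.round(q_dec, 12))     # all zeros
```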
### 2.7 Probability of superposition states
For the purpose of quantum information processing [87, 88, 89, 90, 91, 92,
93], one creates quantum states in the form of superpositions. Then it is
admissible to define the probability of observing these states. For
illustration, let us consider the binary combinations of states
$A_{mn}z_{mn}=A_{m}z_{m}\;\bigcup\;A_{n}z_{n}\qquad(m\neq n)\;.$ (2.47)
Following the general procedure, each member (2.47) can be put into
correspondence to the vector
$|\;A_{mn}z_{mn}\;\rangle=c_{m}\;|\;A_{m}z_{m}\;\rangle+c_{n}\;|\;A_{n}z_{n}\;\rangle$
(2.48)
and characterized by the projector
$\hat{P}(A_{mn}z_{mn})=|\;A_{mn}z_{mn}\;\rangle\langle\;z_{mn}A_{mn}\;|\;.$
(2.49)
Vector (2.48) is assumed to be normalized to one, which requires the condition
$|\;c_{m}\;|^{2}+|\;c_{n}\;|^{2}=1\;.$
It is worth emphasizing that this type of composite state could be introduced
by combining the members from two different sets, say $\\{A_{n}\\}$ and
$\\{B_{k}\\}$ such that their related vectors $|A_{n}\rangle$ and
$|B_{k}\rangle$ pertain to the same basis in the Hilbert space, which is
required by the necessity of defining the vector (2.48) as a superposition
with respect to a common basis. In the physics language, this means that, if
$A_{n}$ and $B_{k}$ are the eigenvalues of some operators $\hat{A}$ and
$\hat{B}$, then these operators have to commute with each other, since only
then they enjoy the common family of orthonormalized eigenvectors.
Noncommuting operators cannot form such linear combinations. In that sense,
the corresponding events and observables are called incompatible, since they
cannot be measured simultaneously [50, 86].
The projector (2.49) reads as
$\hat{P}(A_{mn}z_{mn})=|\;c_{m}\;|^{2}\hat{P}(A_{m}z_{m})+|\;c_{n}\;|^{2}\hat{P}(A_{n}z_{n})\;+$
$+\;c_{m}c^{*}_{n}\;|\;A_{m}z_{m}\;\rangle\langle\;z_{n}A_{n}\;|+c_{n}c^{*}_{m}\;|\;A_{n}z_{n}\;\rangle\langle\;z_{m}A_{m}\;|\;.$
(2.50)
Then the probability of observing the composite state, corresponding to
$A_{mn}z_{mn}$, becomes
$p(A_{mn}z_{mn})=|\;c_{m}\;|^{2}p(A_{m}z_{m})+|\;c_{n}\;|^{2}p(A_{n}z_{n})\;+$
$+\;2{\rm
Re}\;\left(c^{*}_{m}c_{n}\sum_{\mu\nu}a^{*}_{m\mu}a_{n\nu}\;\rho^{\mu\nu}_{mn}\right)\qquad(m\neq
n)\;.$ (2.51)
This expression includes the terms due to the interference of noise modes as
well as to the interference of the alternatives $A_{m}$ and $A_{n}$. Even when
there is just a single noise mode, hence there is no noise interference, there
remains the interference of alternatives. Thus in the case of a single noise
mode $e_{0}$, when
$a_{n\mu}=\delta_{\mu 0}\qquad(z_{n}\mapsto e_{0})\;,$ (2.52)
so that the noise interference disappears, probability (2.51) reduces to the
form
$p(A_{mn}z_{mn})\mapsto|\;c_{m}\;|^{2}p(A_{m}e_{0})+|\;c_{n}\;|^{2}p(A_{n}e_{0})+2{\rm
Re}\;\left(c^{*}_{m}c_{n}\;\rho_{mn}\right)\qquad(m\neq n)\;,$ (2.53)
where
$\rho_{mn}\;\equiv\;\langle\;e_{0}A_{m}\;|\;\hat{\rho}\;|\;A_{n}e_{0}\;\rangle\;.$
The last term in probability (2.53) describes the interference of
alternatives.
It is natural to ask whether the linear combinations of alternatives, hence
the alternative interference, exist in human decision making, similarly to
quantum physics. In the latter, the superpositions of wave functions
representing quantum states do exist. However, this quantum notion does not
exist in human decision making. For instance, we can consider a set of fruits
and vegetables when deciding what to buy, an apple or a banana. However, the
seller will be quite astonished if we ask him or her for a quantum
superposition of a banana and an apple. It appears that quantum superpositions
of alternatives do not exist in real life outside quantum experiments.
Note that in many works of quantum cognition, one considers exactly the
interference of alternatives, but not the interference of emotions (intrinsic
noise). This is the principal difference between our approach and the works of
other authors. In our approach, the operationally testable events, that is,
the observed alternatives, do not interfere. It is the emotions that can
interfere.
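Formula (2.53) for the single-mode case (2.52) is easy to verify directly. The sketch below (numpy assumed; the density matrix and the amplitudes $c_{m},c_{n}$ are hypothetical) compares the trace of $\hat{\rho}$ with the projector on the superposition against the right-hand side of Eq. (2.53):

```python
import numpy as np

rng = np.random.default_rng(4)
n_alt = 3

# Single noise mode, Eq. (2.52): the noise factor drops out and H ≅ H_A.
# A generic density matrix rho on H_A (hypothetical numbers).
M = rng.normal(size=(n_alt, n_alt)) + 1j * rng.normal(size=(n_alt, n_alt))
rho = M @ M.conj().T
rho /= np.trace(rho)

m, n = 0, 1
cm, cn = 1 / np.sqrt(2), 1j / np.sqrt(2)        # |c_m|^2 + |c_n|^2 = 1

# Projector on the superposition c_m |A_m> + c_n |A_n>.
v = np.zeros(n_alt, complex)
v[m], v[n] = cm, cn
p_super = np.real(np.trace(rho @ np.outer(v, v.conj())))

# Right-hand side of Eq. (2.53): weighted probabilities plus the term
# describing the interference of alternatives.
rhs = (abs(cm)**2 * np.real(rho[m, m]) + abs(cn)**2 * np.real(rho[n, n])
       + 2 * np.real(np.conj(cm) * cn * rho[m, n]))
print(round(p_super, 10), round(rhs, 10))       # the two expressions coincide
```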
### 2.8 Alternative-noise entanglement
The notion of entanglement plays an important role in quantum information
processing and quantum computing [87, 88, 89, 90, 91, 92, 93]. Entanglement
happens when the considered Hilbert space is represented as a tensor product
of several Hilbert spaces. In our case, the Hilbert space (2.21) is the
product of the spaces characterizing the alternatives and intrinsic noise.
Therefore there may exist entanglement between alternatives and noise.
Generally, the statistical operator (2.28) is entangled. The state that is not
entangled is termed separable. In the present case, the state would be
separable if it had the form
$\hat{\rho}_{sep}=\sum_{i}\lambda_{i}\;\hat{\rho}^{i}_{A}\bigotimes\hat{\rho}^{i}_{E}\;,$
(2.54)
where the first factor in (2.54) is a state acting on the space ${\cal
H}_{A}$, while the second is a state acting on the space $\mathcal{H}_{E}$,
and
$\sum_{i}\lambda_{i}=1\;,\qquad 0\leq\lambda_{i}\leq 1\;.$
The factor states can be represented as
$\hat{\rho}^{i}_{A}=\sum_{mn}\rho^{i}_{mn}|\;A_{m}\;\rangle\langle\;A_{n}\;|\;,\qquad\hat{\rho}^{i}_{E}=\sum_{\mu\nu}\rho_{i}^{\mu\nu}|\;e_{\mu}\;\rangle\langle\;e_{\nu}\;|\;,$
(2.55)
with the normalization
$\sum_{n}\rho_{nn}^{i}=1\;,\qquad\sum_{\mu}\rho^{\mu\mu}_{i}=1\;.$
Then the state (2.54) reads as
$\hat{\rho}_{sep}=\sum_{i}\lambda_{i}\;\sum_{mn}\;\sum_{\mu\nu}\rho^{i}_{mn}\;\rho_{i}^{\mu\nu}\;|\;A_{m}e_{\mu}\;\rangle\langle\;e_{\nu}A_{n}\;|\;.$
(2.56)
This means that in the representation (2.28), the coefficient is
$\rho_{mn}^{\mu\nu}=\sum_{i}\lambda_{i}\;\rho^{i}_{mn}\;\rho_{i}^{\mu\nu}\;.$
(2.57)
For the separable state (2.54), the classical limit (2.40) becomes
$f(A_{n}z_{n})=\sum_{i}\lambda_{i}\;\sum_{\mu}|\;a_{n\mu}\;|^{2}\;\rho^{i}_{nn}\;\rho_{i}^{\mu\mu}$
(2.58)
and the quantum noise interference term is
$q(A_{n}z_{n})=\sum_{i}\lambda_{i}\;\sum_{\mu\neq\nu}a^{*}_{n\mu}\;a_{n\nu}\;\rho^{i}_{nn}\;\rho_{i}^{\mu\nu}\;.$
(2.59)
Thus, generally, the alternatives and noise are entangled with each other. The
noise interference is not related to whether the state is entangled or not:
the state can be separable while the noise interference is present. In quantum
decision theory, the alternative-noise entanglement is equivalent to the
entanglement of alternatives and emotions [94, 95].
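The distinction between entanglement and noise interference can be illustrated numerically. The Python sketch below constructs a separable state of the form (2.54) from randomly generated, purely illustrative factor states and evaluates the classical limit (2.58) and the interference term (2.59) for one alternative; the interference term is generally nonzero even though the state is separable.

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dE = 2, 2   # two alternatives, two noise modes

def random_density(d):
    """Random density matrix: positive semidefinite with unit trace."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

# Separable state (2.54): a convex mixture of product states.
lambdas = [0.6, 0.4]
rho_sep = sum(l * np.kron(random_density(dA), random_density(dE)) for l in lambdas)

# Noise vector |z_0> = sum_mu a_{0 mu} |e_mu> for the alternative n = 0.
a = np.array([1, 1j]) / np.sqrt(2)
A0 = np.array([1.0, 0.0])
v = np.kron(A0, a)                                # composite vector |A_0 z_0>

# Full probability p(A_0 z_0), its classical part f (2.58), and the
# noise interference q (2.59).
p = (v.conj() @ rho_sep @ v).real
block = rho_sep[:dE, :dE]                         # elements <A_0 e_mu| rho |A_0 e_nu>
f = sum(abs(a[mu])**2 * block[mu, mu].real for mu in range(dE))
q = p - f

print(p, f, q)   # q is generally nonzero: a separable state still shows noise interference
```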
### 2.9 Entanglement production by measurements
It is necessary to keep in mind that there exist two different notions: state
entanglement and entanglement production by an operator. An entangled state is
a state that is not separable, as explained above; in that sense, entanglement
is a property of the state structure. Entanglement production by an operator,
in contrast, is the ability to generate entangled functions from disentangled
ones.
A vector of a tensor-product Hilbert space is named disentangled if it can be
represented as a tensor product of vectors pertaining to the factor Hilbert
spaces. Just as an example, the basis vector
$|A_{n}e_{\mu}\rangle=|A_{n}\rangle\otimes|e_{\mu}\rangle$ is disentangled.
Disentangled vectors are often called separable vectors.
An operator is called entangling when there exists at least one disentangled
vector that becomes entangled under the action of this operator. Conversely,
an operator is not entangling if its action on any disentangled vector yields
again a disentangled one. It has been proved [96, 97, 98] that the only
operators preserving vector separability are those having the form of tensor
products of local operators, combined with a swap operator permuting the
Hilbert spaces in the tensor product describing the total Hilbert space of a
composite system. The action of the swap operator is trivial, in the sense
that it merely permutes the indices labeling the spaces. The preservation of
vector separability by product operators has been proved for binary [96, 99,
100] as well as for multipartite vectors [97, 98, 101]. Operators preserving
vector separability are called nonentangling [102, 103], while an operator
transforming at least one disentangled vector into an entangled vector is
termed entangling [104, 105]. The strongest type of entangling operator is a
universal entangler, which makes all disentangled vectors entangled [106].
Entanglement of vectors can be generated in the process of measurements by the
action of statistical operators. A measure of entanglement production for
arbitrary operators has been introduced in Refs. [107, 108]. This measure is
applicable to any system, whether bipartite or multipartite, and to any
trace-class operator. It has been applied to characterizing different physical
systems [109, 110, 111, 112, 113], as reviewed in Ref. [114]. Entanglement
production generated in the process of decision making is studied in Ref.
[115].
The measure of entanglement production by the statistical operator (2.28)
acting on the Hilbert space (2.21) is calculated as follows. We define the
partially traced operators
$\hat{\rho}_{A}\equiv{\rm Tr}_{E}\;\hat{\rho}=\sum_{mn}\rho_{mn}|\;A_{m}\;\rangle\langle\;A_{n}\;|\;,\qquad\hat{\rho}_{E}\equiv{\rm Tr}_{A}\;\hat{\rho}=\sum_{\mu\nu}\rho^{\mu\nu}|\;e_{\mu}\;\rangle\langle\;e_{\nu}\;|\;,$ (2.60)
in which the traces are over $\mathcal{H}_{E}$ or $\mathcal{H}_{A}$,
respectively, and
$\rho_{mn}\equiv\sum_{\mu}\rho^{\mu\mu}_{mn}\;,\qquad\rho^{\mu\nu}\equiv\sum_{n}\rho^{\mu\nu}_{nn}\;.$
The nonentangling state is given by the product
$\hat{\rho}^{\otimes}\equiv\hat{\rho}_{A}\;\bigotimes\;\hat{\rho}_{E}\;.$
(2.61)
Comparing the action of the statistical operator $\hat{\rho}$ with that of its
nonentangling counterpart (2.61) we have the measure of entanglement
production by the statistical operator
$\varepsilon(\hat{\rho})\equiv\log\;\frac{||\;\hat{\rho}\;||}{||\;\hat{\rho}^{\otimes}\;||}\;.$
(2.62)
Here
$||\;\hat{\rho}^{\otimes}\;||=||\;\hat{\rho}_{A}\;||\cdot||\;\hat{\rho}_{E}\;||.$
Using the spectral norm yields
$||\;\hat{\rho}\;||=\sup_{n\mu}\rho^{\mu\mu}_{nn}$ (2.63)
and
$||\;\hat{\rho}_{A}\;||=\sup_{n}\;\sum_{\mu}\rho^{\mu\mu}_{nn}\;,\qquad||\;\hat{\rho}_{E}\;||=\sup_{\mu}\;\sum_{n}\rho^{\mu\mu}_{nn}\;.$
(2.64)
Therefore the measure of entanglement production (2.62) turns into
$\varepsilon(\hat{\rho})=\log\;\frac{\sup_{n\mu}\rho^{\mu\mu}_{nn}}{(\sup_{n}\sum_{\mu}\rho^{\mu\mu}_{nn})(\sup_{\mu}\sum_{n}\rho^{\mu\mu}_{nn})}\;.$
(2.65)
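Since the measure (2.65) involves only the diagonal elements $\rho^{\mu\mu}_{nn}$, it is easy to evaluate. A short Python sketch, with numbers chosen purely for illustration, reads:

```python
import numpy as np

# Diagonal elements rho^{mu mu}_{nn}, stored as rho_diag[n, mu]; rows label
# alternatives, columns label noise modes. Illustrative values summing to 1.
rho_diag = np.array([[0.40, 0.10],
                     [0.15, 0.35]])

norm_rho = rho_diag.max()               # Eq. (2.63): sup_{n mu} rho^{mu mu}_{nn}
norm_A   = rho_diag.sum(axis=1).max()   # Eq. (2.64): sup_n  sum_mu rho^{mu mu}_{nn}
norm_E   = rho_diag.sum(axis=0).max()   # Eq. (2.64): sup_mu sum_n  rho^{mu mu}_{nn}

# Measure of entanglement production (2.65).
eps = np.log(norm_rho / (norm_A * norm_E))
print(eps)   # ~ 0.37 here: the statistical operator produces entanglement
```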
In this way, the statistical operator produces alternative-noise entanglement
by acting on the vectors of the Hilbert space. As an illustration, it is easy
to show that even a separable state can produce entanglement. Thus, acting with
the separable state (2.54) on the disentangled basis vector
$|A_{n}e_{\mu}\rangle$ gives the vector
$\hat{\rho}_{sep}\;|\;A_{n}e_{\mu}\;\rangle=\sum_{m\nu}\;\rho^{\mu\nu}_{mn}\;|\;A_{m}e_{\nu}\;\rangle\;,$
(2.66)
where $\rho_{mn}^{\mu\nu}$ is given by (2.57). This vector is entangled if at
least two of the $\lambda_{i}$ are nonzero. Similarly, in the process of making
decisions the alternatives and emotions become entangled.
### 2.10 Time dependence of probability
In the previous sections, measurements as well as decision making have been
treated as occurring during such a short time that it could be neglected.
However, these processes do require some finite time. In addition, one can
accomplish measurements or decisions at different moments of time. Therefore,
for the correct description of measurements, as well as of decision making, it
is necessary to take account of the time dependence of quantum probabilities.
Time enters the probability through the time dependence of the statistical
operator $\hat{\rho}(t)$, whose time evolution is given by means of a unitary
evolution operator $\hat{U}(t)$ according to the rule
$\hat{\rho}(t)=\hat{U}(t,0)\;\hat{\rho}(0)\;\hat{U}^{+}(t,0)\;.$ (2.67)
Alternatives decorated by intrinsic noise are represented by the family (2.36)
of projectors (2.24) acting on the Hilbert space $\mathcal{H}$ defined in
(2.21). The quantum probability space reads as
$\\{{\cal H},\;\hat{\rho}(t),\;{\cal P}_{AZ}\\}\;,$ (2.68)
with the probability becoming dependent on time,
$p(A_{n}z_{n},t)={\rm Tr}\;\hat{\rho}(t)\;\hat{P}(A_{n}z_{n})\;.$ (2.69)
Here and in what follows, the trace operation, without a notation of the
related Hilbert space, is assumed to be over the whole space (2.21).
As before, the probability is represented as the sum
$p(A_{n}z_{n},t)=f(A_{n}z_{n},t)+q(A_{n}z_{n},t)$ (2.70)
of the classical limit
$f(A_{n}z_{n},t)=\sum_{\mu}|\;a_{n\mu}\;|^{2}\;\rho^{\mu\mu}_{nn}(t)$ (2.71)
and the quantum term caused by the noise interference
$q(A_{n}z_{n},t)=\sum_{\mu\neq\nu}a^{*}_{n\mu}a_{n\nu}\;\rho^{\mu\nu}_{nn}(t)\;.$
(2.72)
The dependence on time comes from the time dependence of the matrix elements
(2.29). The noise vector $|z_{n}\rangle$ can also depend on time through the
coefficients $a_{n\mu}$, which we do not indicate explicitly for the sake of
notational compactness. The time dependence of noise, as well as of emotion
properties, is rather natural, since both can vary with time.
Because of the unitarity of the evolution operator, the normalization
condition (2.38) remains true:
$\sum_{n}p(A_{n}z_{n},t)=1\;,\qquad 0\leq p(A_{n}z_{n},t)\leq 1\;.$ (2.73)
Similarly, the normalization conditions (2.43), (2.44), and (2.45) also remain
valid.
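A small Python sketch may illustrate the time dependence. A randomly generated Hamiltonian and initial state (both purely illustrative) are evolved by the rule (2.67), and the probability of one composite event is split into its classical part (2.71) and its interference part (2.72) at several times.

```python
import numpy as np
from scipy.linalg import expm

dA, dE = 2, 2
d = dA * dE
rng = np.random.default_rng(1)

# Illustrative Hermitian generator for the unitary evolution U(t) = exp(-i H t).
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2

# Illustrative initial statistical operator.
W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = W @ W.conj().T
rho0 /= np.trace(rho0)

# Noise coefficients a_{0 mu} and composite vector |A_0 z_0>.
a = np.array([1.0, 1j]) / np.sqrt(2)
v = np.kron([1.0, 0.0], a)

for t in [0.0, 0.5, 1.0]:
    U = expm(-1j * H * t)
    rho_t = U @ rho0 @ U.conj().T                 # Eq. (2.67)
    block = rho_t[:dE, :dE]                       # elements rho^{mu nu}_{00}(t)
    f = sum(abs(a[mu])**2 * block[mu, mu].real for mu in range(dE))   # Eq. (2.71)
    p = (v.conj() @ rho_t @ v).real               # Eq. (2.69)
    q = p - f                                     # Eq. (2.72)
    print(f"t={t}: p={p:.4f}  f={f:.4f}  q={q:+.4f}")
```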
### 2.11 Quantum state reduction
Suppose the system, at time $t=0$, is prepared in a state $\hat{\rho}(0)$ and
develops in time following the evolution equation (2.67). Then at time $t_{0}$
it is subject to a measurement procedure for an observable allowing for the
set of alternatives (2.1). In the same way, one could talk about taking a
decision at time $t_{0}$ by choosing an alternative from the set of possible
alternatives.
At time $t_{0}-0$, just before the measurement, the a priori probabilities of
the alternatives are given by the equation
$p(A_{n}z_{n},t_{0}-0)={\rm Tr}\;\hat{\rho}(t_{0}-0)\;\hat{P}(A_{n}z_{n})\;.$
(2.74)
Let us assume that at the moment of time $t_{0}$ an alternative $A_{n}$ is
certainly observed. In decision making, this would imply that an alternative
$A_{n}$ is certainly chosen. In any case, this means that, as a result of the
interaction between the studied system and a measuring device, the a priori
state has been reduced to an a posteriori state,
$\hat{\rho}(t_{0}-0)\mapsto\hat{\rho}(A_{n},t_{0}+0)\;,$ (2.75)
so that the a posteriori probability
$p(A_{n}z_{n},t_{0}+0)={\rm Tr}\;\hat{\rho}(A_{n},t_{0}+0)\;\hat{P}(A_{n}z_{n})$ (2.76)
becomes unity, thus describing a certain event,
$p(A_{n}z_{n},t_{0}+0)=1\;.$ (2.77)
The above condition in the explicit form reads as
${\rm Tr}\;\hat{\rho}(A_{n},t_{0}+0)\;\hat{P}(A_{n}z_{n})=1\;.$ (2.78)
It is easy to verify that the solution to this equation can be written in the
form
$\hat{\rho}(A_{n},t_{0}+0)=\frac{\hat{P}(A_{n}z_{n})\hat{\rho}(t_{0}-0)\hat{P}(A_{n}z_{n})}{{\rm Tr}\;\hat{\rho}(t_{0}-0)\hat{P}(A_{n}z_{n})}\;.$ (2.79)
The transformation (2.75) is called quantum state reduction [50, 116, 117] and
the form (2.79) is named the von Neumann-Lüders state. This state becomes the
initial state for the following state dynamics at times $t>t_{0}$,
$\hat{\rho}(A_{n},t)=\hat{U}(t,t_{0})\;\hat{\rho}(A_{n},t_{0}+0)\;\hat{U}^{+}(t,t_{0})\;.$
(2.80)
The a priori probability of measuring an alternative $A_{m}$ for $t>t_{0}$ is
$p(A_{m}z_{m},t)={\rm Tr}\;\hat{\rho}(A_{n},t)\;\hat{P}(A_{m}z_{m})\qquad(t>t_{0})\;.$ (2.81)
Thus the state reduction (2.75), caused by the measurement process, implies
the change of the initial condition for the state, hence the change of the
state evolution at later times, which in turn presumes the alteration of the
quantum probability space,
$\\{{\cal H},\;\hat{\rho}(t),\;{\cal P}_{AZ}\\}\mapsto\\{{\cal H},\;\hat{\rho}(A_{n},t),\;{\cal P}_{AZ}\\}$ (2.82)
and, respectively, the reduction of the probability,
$p(A_{n}z_{n},t_{0}-0)\mapsto p(A_{n}z_{n},t_{0}+0)\;.$ (2.83)
It is important that the existence of intrinsic noise does not disturb the
standard scheme of quantum state reduction.
The quantum state reduction is nothing but the change of an a priori
probability to an a posteriori probability due to the received information.
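The reduction (2.79) is easy to check numerically. In the Python sketch below, a randomly generated a priori state (an illustrative assumption) is reduced by an observed event, after which the a posteriori probability (2.76) equals unity, in agreement with (2.77).

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

# A priori state rho(t0 - 0): an illustrative random density matrix.
W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho_before = W @ W.conj().T
rho_before /= np.trace(rho_before)

# Projector onto the observed composite event |A_n z_n>.
v = np.zeros(d, dtype=complex)
v[0] = 1.0
P = np.outer(v, v.conj())

# Von Neumann-Luders reduction (2.79).
rho_after = P @ rho_before @ P / np.trace(rho_before @ P)

# The a posteriori probability (2.76) is unity, Eq. (2.77).
print(np.trace(rho_after @ P).real)   # -> 1.0 up to rounding
```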
### 2.12 Consecutive measurements of alternatives
Assume that at time $t_{0}$ an alternative $A_{n}$ has been certainly
observed, as is described in the previous section. Then state (2.79) plays the
role of an initial condition for the following state dynamics. For times
$t>t_{0}$, we have the state (2.80).
Suppose that after the moment of time $t_{0}$, we are interested in measuring
another observable corresponding to the new set of alternatives
$\mathbb{B}=\\{B_{k}:~{}k=1,2,\ldots\\}\;.$ (2.84)
The related operators $\hat{A}$ and $\hat{B}$ are not necessarily commuting,
since their measurements are accomplished at different times. These
alternatives are again assumed to be decorated by intrinsic noise. The
projectors
$\hat{P}(B_{k}z_{k})=|\;B_{k}z_{k}\;\rangle\langle\;z_{k}B_{k}\;|$ (2.85)
compose the family
${\cal P}_{BZ}=\\{\hat{P}(B_{k}z_{k}):~{}k=1,2,\ldots\\}\;.$ (2.86)
After the time $t_{0}$, the quantum probability space is
$\\{{\cal H},\;\hat{\rho}(A_{n},t),\;{\cal P}_{BZ}\\}\qquad(t>t_{0})\;.$
(2.87)
The probabilities of alternatives from the set (2.84) read as
$p(B_{k}z_{k},t)={\rm Tr}\;\hat{\rho}(A_{n},t)\;\hat{P}(B_{k}z_{k})\qquad(t>t_{0})$ (2.88)
and, as any probability, they are normalized:
$\sum_{k}p(B_{k}z_{k},t)=1\;.$ (2.89)
On the other hand, probability (2.88) can be interpreted as the conditional
probability of measuring an alternative $B_{k}$ at time $t$, given that the
alternative $A_{n}$ has been certainly observed at time $t_{0}$. Thus the
conditional probability is defined by the straightforward renotation
$p(B_{k}z_{k},t)\equiv p(B_{k}z_{k},t|A_{n}z_{n},t_{0})\qquad(t>t_{0})\;,$
(2.90)
with the related renotation of normalization (2.89),
$\sum_{k}p(B_{k}z_{k},t|A_{n}z_{n},t_{0})=1\;.$ (2.91)
Substituting the state (2.80) into the expression
$p(B_{k}z_{k},t|A_{n}z_{n},t_{0})={\rm Tr}\;\hat{\rho}(A_{n},t)\;\hat{P}(B_{k}z_{k})$ (2.92)
and using the notation
$p(B_{k}z_{k},t,A_{n}z_{n},t_{0})\equiv{\rm Tr}\;\hat{U}(t,t_{0})\;\hat{P}(A_{n}z_{n})\;\hat{\rho}(t_{0}-0)\;\hat{P}(A_{n}z_{n})\;\hat{U}^{+}(t,t_{0})\;\hat{P}(B_{k}z_{k})$ (2.93)
results in the probability that can be called conditional,
$p(B_{k}z_{k},t|A_{n}z_{n},t_{0})=\frac{p(B_{k}z_{k},t,A_{n}z_{n},t_{0})}{p(A_{n}z_{n},t_{0}-0)}\;.$
(2.94)
Employing normalization (2.91), we get the relation
$\sum_{k}p(B_{k}z_{k},t,A_{n}z_{n},t_{0})=p(A_{n}z_{n},t_{0}-0)\;.$ (2.95)
From here, the normalization condition follows:
$\sum_{nk}p(B_{k}z_{k},t,A_{n}z_{n},t_{0})=1\;.$
These formulas suggest that probability (2.93) can be called a joint
probability. Similar equations are often considered for a fixed moment of time
$t=t_{0}$, which brings problems related to incompatible events corresponding
to the simultaneous measurement of noncommuting operators [118]. These
problems do not arise when considering the realistic situation of measurements
at different moments of time. Taking into account the presence of intrinsic
noise also does not complicate the consideration much [57, 73].
Note that, for $t>t_{0}$, neither the joint probability (2.93) nor the
conditional probability (2.92) or (2.94) are symmetric with respect to the
interchange of the events $A_{n}$ and $B_{k}$. This asymmetry can explain the
so-called order effects in decision theory, when the probability of choice
depends on the order of choosing alternatives [84].
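The order dependence of the joint probability (2.93) can be demonstrated in a few lines of Python; the state, the two measurement bases, and the Hamiltonian below are randomly generated and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
d = 3

# Illustrative a priori state rho(t0 - 0).
W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = W @ W.conj().T
rho0 /= np.trace(rho0)

# Orthonormal eigenbases of two noncommuting observables A and B
# (the computational basis and a randomly rotated basis).
basis_A = np.eye(d, dtype=complex)
basis_B, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Unitary evolution between t0 and t.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2
U = expm(-1j * H * 1.0)

def joint(first, second):
    """Joint probability (2.93): 'first' observed at t0, 'second' at t > t0."""
    Pf = np.outer(first, first.conj())
    Ps = np.outer(second, second.conj())
    return np.trace(U @ Pf @ rho0 @ Pf @ U.conj().T @ Ps).real

p_AB = joint(basis_A[:, 0], basis_B[:, 0])   # A_1 first, then B_1
p_BA = joint(basis_B[:, 0], basis_A[:, 0])   # B_1 first, then A_1
print(p_AB, p_BA)   # generally different: consecutive probabilities are order dependent
```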
### 2.13 Immediate consecutive measurements
One often considers two measurements occurring immediately one after another
[50, 116]. This is the limiting case of the consecutive measurements treated
in the previous subsection, when at the moment of time $t_{0}$ an alternative
$A_{n}$ has been certainly observed and a second measurement, of another
observable corresponding to the set of alternatives (2.84), is accomplished at
the time $t_{0}+0$ immediately following $t_{0}$.
In the case of these immediate measurements, the evolution operator reduces to
the unity operator,
$\hat{U}(t_{0}+0,t_{0})=\hat{1}\;.$ (2.96)
Then for the conditional probability (2.92), we have
$p(B_{k}z_{k},t_{0}+0|A_{n}z_{n},t_{0})={\rm Tr}\;\hat{\rho}(A_{n},t_{0}+0)\;\hat{P}(B_{k}z_{k})$ (2.97)
and the joint probability (2.93) becomes
$p(B_{k}z_{k},t_{0}+0,A_{n}z_{n},t_{0})={\rm Tr}\;\hat{P}(A_{n}z_{n})\;\hat{\rho}(t_{0}-0)\;\hat{P}(A_{n}z_{n})\;\hat{P}(B_{k}z_{k})\;.$ (2.98)
The conditional probability (2.94) takes the form
$p(B_{k}z_{k},t_{0}+0|A_{n}z_{n},t_{0})=\frac{p(B_{k}z_{k},t_{0}+0,A_{n}z_{n},t_{0})}{p(A_{n}z_{n},t_{0}-0)}\;,$
(2.99)
which can be called von Neumann-Lüders probability. The explicit expression
for the joint probability (2.98) turns into
$p(B_{k}z_{k},t_{0}+0,A_{n}z_{n},t_{0})=|\;\langle\;z_{k}B_{k}\;|\;A_{n}z_{n}\;\rangle\;|^{2}\;p(A_{n}z_{n},t_{0}-0)\;.$
(2.100)
This transforms the conditional probability (2.99) into the symmetric form
$p(B_{k}z_{k},t_{0}+0|A_{n}z_{n},t_{0})=|\;\langle\;z_{k}B_{k}\;|\;A_{n}z_{n}\;\rangle\;|^{2}\;,$
(2.101)
where
$\langle\;z_{k}B_{k}\;|\;A_{n}z_{n}\;\rangle=\sum_{\mu}a^{*}_{k\mu}a_{n\mu}\;\langle\;B_{k}\;|\;A_{n}\;\rangle\;.$
If the repeated measurement is accomplished with respect to the same
observable, so that $B_{k}=A_{k}$, then the conditional probability (2.101)
reduces to
$p(A_{k}z_{k},t_{0}+0|A_{n}z_{n},t_{0})=\delta_{nk}\;.$ (2.102)
This is in agreement with the principle of reproducibility in quantum theory,
according to which, when the choice among the same set of alternatives is made
twice, immediately one after another, the second choice has to reproduce the
first one [50]. This is also in agreement with decision making: when a
decision maker accomplishes a choice from the same set of alternatives
immediately after another choice, so that there is no time for deliberation,
this decision maker should repeat the previous choice [57, 73].
Generally, the joint probability (2.98) is not symmetric with respect to the
interchange of the events $A_{n}$ and $B_{k}$. It becomes symmetric only when
there is no noise and the corresponding operators commute with each other,
hence enjoy a common basis of eigenvectors. At the same time, the conditional
probability (2.101) is always symmetric, whether for commuting or noncommuting
observables, and whether in the presence or absence of noise.
Therefore the immediate consecutive probabilities, with the symmetry
properties
$p(B_{k}z_{k},t_{0}+0,A_{n}z_{n},t_{0})\neq p(A_{n}z_{n},t_{0}+0,B_{k}z_{k},t_{0})\;,$
$p(B_{k}z_{k},t_{0}+0|A_{n}z_{n},t_{0})=p(A_{n}z_{n},t_{0}+0|B_{k}z_{k},t_{0})$ (2.103)
cannot be accepted as a generalization of classical Kolmogorov-type
probabilities, where the joint probability is symmetric while the conditional
one is not. This fact should come as no surprise, since the definitions of the
quantum consecutive probability of two events occurring at different times and
of the classical probability of two events occurring synchronously are
principally different. The classical probability makes no mention of state
evolution, while the quantum probability connects two measurements realized at
different times and involves the state evolution.
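Both the symmetry of the conditional probability (2.101) and the reproducibility property (2.102) are straightforward to verify numerically in the noiseless case; the Python sketch below uses two randomly generated orthonormal bases purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3

# Eigenbases of two observables: the computational basis and a rotated one.
basis_A = np.eye(d, dtype=complex)
basis_B, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Conditional probability (2.101), noiseless case: p(B_k | A_n) = |<B_k|A_n>|^2.
p_B_given_A = abs(basis_B[:, 1].conj() @ basis_A[:, 0])**2
p_A_given_B = abs(basis_A[:, 0].conj() @ basis_B[:, 1])**2
print(p_B_given_A, p_A_given_B)   # equal: the conditional probability is symmetric

# Immediately repeated measurement of the same observable, Eq. (2.102).
print(abs(basis_A[:, 1].conj() @ basis_A[:, 0])**2,   # -> 0 for k != n
      abs(basis_A[:, 0].conj() @ basis_A[:, 0])**2)   # -> 1 for k == n
```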
### 2.14 Synchronous noiseless measurements
Quantum theory, as it is usually formulated, is not directly analogous to
classical probability theory in the sense of Kolmogorov [119], but is much
closer to the theory of stochastic processes [56]. In nonrelativistic quantum
mechanics, states at different times are related by dynamics, generally
represented as a completely positive map. In that sense, consecutive
measurements correspond to dynamic probability with the underlying causal
structure. This type of theory is closely analogous to a classical stochastic
process, in which a state is a probability distribution over a set of random
variables representing the properties of a system at a given time and the
states at different times are related by dynamics.
In contrast, classical probability spaces make no assumptions about the causal
structure of the events on which probabilities are defined. Two disjoint
events might refer to properties of two different subsystems at a given time,
or they might refer to properties of the same subsystem at two different
times. In full generality, classical events need have no interpretation in
terms of causal structure at all.
A variant of quantum probability enjoying the same symmetry properties as
classical probability should be noiseless and should allow for the
accomplishment of simultaneous measurements. This type of probability is
defined as follows [57].
Let us consider two sets of alternatives
$\mathbb{A}=\\{A_{n}:~{}n=1,2,\ldots\\}\;,\qquad\mathbb{B}=\\{B_{k}:~{}k=1,2,\ldots\\}\;.$
(2.104)
A simultaneous measurement of two observables can be realized provided they
pertain to two different Hilbert spaces, for instance when they are located in
two different spatial regions or act in the spaces of different variables,
e.g. momenta and spins. Speaking about simultaneous measurements at different
spatial locations, we keep in mind a nonrelativistic situation where the
notion of synchronously occurring events or measurements is well defined. In
the relativistic case, we could use the notion of spacelike separated
measurements or events. The corresponding Hilbert space is the tensor product
${\cal H}={\cal H}_{A}\;\bigotimes\;{\cal H}_{B}\;.$ (2.105)
For the moment, we do not include intrinsic noise.
Alternatives $A_{n}$ and $B_{k}$ are represented by the projectors
$\hat{P}(A_{n})$ and $\hat{P}(B_{k})$ in the related spaces. The Hilbert space
$\mathcal{H}$, the statistical operator $\hat{\rho}(t)$, and the family of the
projectors
${\cal P}_{AB}=\\{\hat{P}(A_{n})\otimes\hat{P}(B_{k}):~{}n=1,2,\ldots;~{}k=1,2,\ldots\\}$ (2.106)
compose the quantum probability space
$\\{{\cal H}_{A}\otimes{\cal H}_{B},\;\hat{\rho}(t),\;{\cal P}_{AB}\\}\;.$
(2.107)
The probability of measuring the alternatives $A_{n}$ and $B_{k}$ in different
spaces is
$p(A_{n}B_{k},t)={\rm Tr}\;\hat{\rho}(t)\;\hat{P}(A_{n})\bigotimes\hat{P}(B_{k})\;.$ (2.108)
The defined probability possesses the same properties as classical
probability. Thus the marginal probabilities are given by the partial
summation
$p(A_{n},t)=\sum_{k}p(A_{n}B_{k},t)\;,\qquad
p(B_{k},t)=\sum_{n}p(A_{n}B_{k},t)\;,$ (2.109)
and they are normalized,
$\sum_{n}p(A_{n},t)=1\;,\qquad\sum_{k}p(B_{k},t)=1\;.$ (2.110)
If the measurements are not correlated, such that
$\hat{\rho}(t)=\hat{\rho}_{A}(t)\;\bigotimes\;\hat{\rho}_{B}(t)\;,$
the joint probability becomes the product
$p(A_{n}B_{k},t)=p(A_{n},t)p(B_{k},t)\;.$
It is possible to introduce the conditional probability
$p(B_{k}|A_{n},t)\equiv\frac{p(B_{k}A_{n},t)}{p(A_{n},t)}\;.$ (2.111)
The defined joint and conditional quantum probabilities of synchronous events,
happening in different Hilbert spaces, possess the same symmetry properties as
the classical probability: the joint probability is symmetric with respect to
the event interchange, while the conditional probability is not symmetric,
$p(A_{n}B_{k},t)=p(B_{k}A_{n},t)\;,\qquad p(A_{n}|B_{k},t)\neq
p(B_{k}|A_{n},t)\;.$ (2.112)
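The classical-like properties (2.109)-(2.112) can be checked directly. In the following Python sketch the state on ${\cal H}_{A}\otimes{\cal H}_{B}$ is randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
dA, dB = 2, 3

# Illustrative (generally correlated) state on H_A (x) H_B.
W = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
rho = W @ W.conj().T
rho /= np.trace(rho)

def proj(n, d):
    """Projector |n><n| on a d-dimensional space."""
    P = np.zeros((d, d))
    P[n, n] = 1.0
    return P

def p_joint(n, k):
    """Joint probability (2.108) of synchronous alternatives A_n and B_k."""
    return np.trace(rho @ np.kron(proj(n, dA), proj(k, dB))).real

# Marginal probabilities (2.109) and their normalization (2.110).
pA = [sum(p_joint(n, k) for k in range(dB)) for n in range(dA)]
pB = [sum(p_joint(n, k) for n in range(dA)) for k in range(dB)]
print(sum(pA), sum(pB))                         # both equal 1

# Conditional probabilities (2.111) are not symmetric, unlike the joint one.
print(p_joint(0, 1) / pB[1], p_joint(0, 1) / pA[0])
```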
Strictly speaking, the measurement of two events simultaneously occurring at
two different spatial locations is possible only if the measuring device is
sufficiently large, such that it includes several parts allowing for the
synchronous measurement of two different events. In decision making, in order
to accept that a subject is able to decide on two alternatives simultaneously,
it is necessary to assume that either there are different parts of the brain
thinking synchronously, or that what seems to be synchronous is actually a
fast temporal switching from one object to another [120, 121, 122].
### 2.15 Synchronous measurements under noise
Synchronous measurements in different Hilbert spaces, e.g. at different
spatial locations, can be straightforwardly generalized by including intrinsic
noise. Then the system is defined in the Hilbert space
${\cal H}={\cal H}_{A}\;\bigotimes\;{\cal H}_{E}\;\bigotimes\;{\cal
H}_{B}\;\bigotimes\;{\cal H}^{\prime}_{E}\;,$ (2.113)
where ${\cal H}^{\prime}_{E}$ is a copy of ${\cal H}_{E}$. The events are
characterized by the family of the projectors
${\cal P}_{ABZ}=\\{\hat{P}(A_{n}z_{n})\otimes\hat{P}(B_{k}z_{k}):~{}n=1,2,\ldots;~{}k=1,2,\ldots\\}\;.$ (2.114)
The quantum probability space becomes
$\\{{\cal H},\;\hat{\rho}(t),\;{\cal P}_{ABZ}\\}\;.$ (2.115)
The probability of two synchronously occurring events in the presence of
intrinsic noise is
$p(A_{n}z_{n}B_{k}z_{k},t)={\rm Tr}\;\hat{\rho}(t)\;\hat{P}(A_{n}z_{n})\bigotimes\hat{P}(B_{k}z_{k})$ (2.116)
which is required to be normalized,
$\sum_{nk}p(A_{n}z_{n}B_{k}z_{k},t)=1\;.$ (2.117)
Notice that this probability is symmetric with respect to the order swap of
the alternatives,
$p(A_{n}z_{n}B_{k}z_{k},t)=p(B_{k}z_{k}A_{n}z_{n},t)\;.$ (2.118)
This probability, similarly to the probability of a single event, can be
represented as a sum
$p(A_{n}z_{n}B_{k}z_{k},t)=f(A_{n}z_{n}B_{k}z_{k},t)+q(A_{n}z_{n}B_{k}z_{k},t)\;,$
(2.119)
of a diagonal part
$f(A_{n}z_{n}B_{k}z_{k},t)=\sum_{\mu\lambda}|\;a_{n\mu}\;|^{2}\;|\;a_{k\lambda}\;|^{2}\;\langle\;e_{\lambda}e_{\mu}B_{k}A_{n}\;|\;\hat{\rho}(t)\;|\;A_{n}B_{k}e_{\mu}e_{\lambda}\;\rangle$
(2.120)
and an off-diagonal part comprising all interference terms,
$q(A_{n}z_{n}B_{k}z_{k},t)=\sum_{\mu\nu\gamma\lambda}a_{n\mu}a^{*}_{n\nu}a_{k\gamma}a^{*}_{k\lambda}\langle\;e_{\lambda}e_{\nu}B_{k}A_{n}\;|\;\hat{\rho}(t)\;|\;A_{n}B_{k}e_{\mu}e_{\gamma}\;\rangle\;,$
(2.121)
where
$\sum_{\mu\nu\gamma\lambda}~{}\mapsto~{}\sum_{\mu\neq\nu}\;\sum_{\gamma\lambda}\delta_{\gamma\lambda}~{}+~{}\sum_{\mu\nu}\;\sum_{\gamma\neq\lambda}\delta_{\mu\nu}~{}+~{}\sum_{\mu\neq\nu}\;\sum_{\gamma\neq\lambda}\;.$
Note that if the parts of the synchronous measurement are not correlated, so
that
$\hat{\rho}(t)=\hat{\rho}_{AZ}(t)\;\bigotimes\;\hat{\rho}_{BZ}(t)\;,$
then probability (2.116) separates into two factors
$p(A_{n}z_{n}B_{k}z_{k},t)=p(A_{n}z_{n})p(B_{k}z_{k})\;.$
The conditional probability can be defined as
$p(A_{n}z_{n}|B_{k}z_{k},t)\equiv\frac{p(A_{n}z_{n}B_{k}z_{k},t)}{p(B_{k}z_{k},t)}\;.$
(2.122)
This probability is not swap order symmetric,
$p(A_{n}z_{n}|B_{k}z_{k},t)\neq p(B_{k}z_{k}|A_{n}z_{n},t)\;.$
The synchronous joint probability (2.116) is a natural generalization of
classical probability to the case of quantum measurements under intrinsic
noise. In decision theory, it plays the role of a behavioural probability of
deciding on two events simultaneously occurring at two different spatial
locations.
### 2.16 Swap order relations
The symmetry properties of probabilities with respect to the swap of the order
of events make it straightforward to derive some relations that can be checked
experimentally. However, one has to be very cautious in distinguishing
necessary and sufficient conditions for such relations.
Let us consider the alternatives $A_{n}$ and $B_{k}$, with $n,k=1,2$, and the
joint probability of immediate consecutive measurements (2.98). Define the
swap function
$S[\;p(A_{n}z_{n},t_{0}+0,B_{k}z_{k},t_{0})\;]\equiv p(A_{1}z_{1},t_{0}+0,B_{2}z_{2},t_{0})-p(B_{2}z_{2},t_{0}+0,A_{1}z_{1},t_{0})+p(A_{2}z_{2},t_{0}+0,B_{1}z_{1},t_{0})-p(B_{1}z_{1},t_{0}+0,A_{2}z_{2},t_{0})\;.$ (2.123)
Using the normalization conditions (2.73) and (2.91), and the symmetry
property (2.103) of the conditional probability, it is easy to get the
relation
$S[\;p(A_{n}z_{n},t_{0}+0,B_{k}z_{k},t_{0})\;]=0\;.$ (2.124)
Clearly, the same swap order relation is valid for the case when there is no
intrinsic noise,
$S[\;p(A_{n},t_{0}+0,B_{k},t_{0})\;]=0\;.$ (2.125)
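The noiseless relation (2.125) can be verified numerically with the joint probability (2.100) of immediate consecutive measurements; in the Python sketch below, the state and the two orthonormal bases are randomly generated and serve only as an illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative a priori state and two orthonormal bases {A_1, A_2}, {B_1, B_2}.
W = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = W @ W.conj().T
rho /= np.trace(rho)
A, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
B, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

def p_prior(basis, n):
    """A priori probability p(X_n, t0 - 0)."""
    v = basis[:, n]
    return (v.conj() @ rho @ v).real

def p_joint(second, k, first, n):
    """Joint probability (2.100): first_n observed at t0, second_k at t0 + 0."""
    return abs(second[:, k].conj() @ first[:, n])**2 * p_prior(first, n)

# Swap function (2.123): it vanishes although each joint probability
# is order dependent.
S = (p_joint(A, 0, B, 1) - p_joint(B, 1, A, 0)
     + p_joint(A, 1, B, 0) - p_joint(B, 0, A, 1))
print(S)                                           # ~ 0 up to rounding
print(p_joint(A, 0, B, 1), p_joint(B, 1, A, 0))    # individually different
```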
Relation (2.125) has been discussed by many authors, e.g. [35]. In a number of
experimental studies in psychology, it has been found that the probability of
answers to two consecutive questions depends on the question order and that
relation (2.125) holds true for the empirical probabilities. Because of this,
the following conclusion has been advocated: since the validity of relation
(2.125) for a joint empirical probability has been confirmed in a vast number
of experimental studies in psychology, and this relation has been derived for
a quantum probability, the empirical data prove that consciousness obeys
quantum rules, that is, consciousness is quantum.
This claim, though, is not correct, since it confuses necessary and sufficient
conditions. If some $p$ enjoys the properties of the quantum probability of
immediate consecutive measurements, then this is sufficient for the relation
$S[p]=0$ to be valid. However, this is not a necessary condition: if the
relation $S[p]=0$ holds, it by no means follows that $p$ must be a particular
quantum probability.
Indeed, not only does the probability (2.98) of consecutive measurements
satisfy this relation, but so do the probability (2.108) of synchronous
noiseless measurements,
$S[\;p(A_{n}B_{k},t)\;]=0\;,$ (2.126)
and the probability of synchronous noisy measurements (2.118),
$S[\;p(A_{n}z_{n}B_{k}z_{k},t)\;]=0\;.$ (2.127)
The validity of the latter relations is a direct result of the swap order
symmetry of these probabilities, as in (2.112).
Moreover, if we consider a classical probability $f(A_{n}B_{k})$ that, by
definition, is swap order symmetric, $f(A_{n}B_{k})=f(B_{k}A_{n})$, and study
the swap function
$S[\;f(A_{n}B_{k})\;]=f(A_{1}B_{2})-f(B_{2}A_{1})+f(A_{2}B_{1})-f(B_{1}A_{2})$
(2.128)
then, because of the swap order symmetry, the same relation immediately
follows:
$S[\;f(A_{n}B_{k})\;]=0\;.$ (2.129)
The swap order symmetry of the quantum conditional probability of consecutive
events is a sufficient condition for the validity of the swap order relation
for the joint quantum probability of consecutive events. The swap order
symmetry of the classical joint probability is also a sufficient condition for
the validity of the swap order relation. However, none of these symmetry
properties separately is a necessary condition for the validity of the swap
order relation. That is, different quantum as well as classical probabilities
can satisfy the same swap order relation, and the validity of this relation
tells us nothing about the nature of the probability, whether it is quantum or
classical.
### 2.17 Quantum versus classical probabilities
In the literature advocating the use of quantum techniques for describing
consciousness, it is customary to counterpose quantum and classical
approaches, arguing in favour of quantum theory, which supposedly is more
versatile in characterizing, e.g., such phenomena as the non-commutativity of
consecutive events. In doing this, one usually compares the classical
Kolmogorov probability with the von Neumann-Lüders probability of consecutive
measurements. However, comparing these probabilities is not correct, since the
classical Kolmogorov probability contains no dynamics, while the von Neumann-
Lüders approach considers the dynamic evolution from one measurement to
another. For a correct comparison of quantum and classical probabilities, it
is necessary to remember that there are several types of the latter, so that
the comparison makes sense only for probabilities from the same class. The
following classes of probabilities can be distinguished.
(i) Probability of single events. For the quantum case, under events we mean
quantum measurements, and in the classical case, some occurring events or the
acts of taking decisions. The quantum probability $p(A_{n},t)$ of a single
event $A_{n}$ differs from the classical Kolmogorov probability $f(A_{n})$ by
including temporal evolution and by taking account of intrinsic noise. Here
and below we assume that quantum probability includes intrinsic noise, but for
the sake of compactness, we do not show this explicitly.
(ii) Probability of synchronous events. Two or more synchronous events can be
observed in the quantum case, provided they are happening in different Hilbert
spaces, for instance in different spatial locations. In the classical
Kolmogorov theory, the events are always synchronous. Both these
probabilities, quantum as well as classical, enjoy the same symmetry
properties.
(iii) Probability of consecutive events. Events are happening one after
another at different times. The times have to be treated as different even
when one event occurs immediately after another. For quantum probability, the
temporal evolution is incorporated in the evolution operators. Consecutive
events in the quantum case are treated by the von Neumann-Lüders theory. In
the classical case, the evolution can be imposed by the equations called
Kolmogorov equations or master equations.
One often claims that the classical Kolmogorov probability is inferior to the
quantum von Neumann-Lüders probability because the classical joint probability
does not depend on the order of events, being swap-order symmetric, while the
quantum von Neumann-Lüders theory gives, for the joint probability of two
consecutive events at different times, a non-symmetric, order-dependent
probability. However, this comparison is not appropriate, since it compares
probabilities from different classes. Quantum consecutive probabilities have
to be collated with classical consecutive probabilities, which, in general,
are also not swap-order symmetric.
To illustrate the asymmetry of classical consecutive probabilities, let us
consider the probability of two events, one event $A_{n}$ occurring at time
$t_{0}$, after which the other event $B_{k}$ can happen at time $t$. The
classical joint probability
$f(B_{k},t,A_{n},t_{0})=f(B_{k},t|A_{n},t_{0})f(A_{n},t_{0}-0)$ is expressed
through the related conditional probability, which satisfies the master
equation (or Kolmogorov forward equation)
$\frac{d}{dt}\;f(B_{k},t|A_{n},t_{0})=\sum_{l=1}^{N_{B}}\gamma_{kl}f(B_{l},t|A_{n},t_{0})\;,$
(2.130)
in which $\gamma_{kl}$ is a transition rate matrix, or generator matrix,
characterizing the transition rate from the event $B_{l}$ to $B_{k}$. The
transition rate matrix has the properties
$\gamma_{kl}\geq 0\qquad(k\neq l)$ (2.131)
and
$\sum_{k=1}^{N_{B}}\gamma_{kl}=0\;.$ (2.132)
The latter property can be rewritten as
$\gamma_{ll}+\sum_{k(\neq l)}^{N_{B}}\gamma_{kl}=0\;,$ (2.133)
which allows us to represent equation (2.130) in the equivalent form
$\frac{d}{dt}\;f(B_{k},t|A_{n},t_{0})=\sum_{l(\neq
k)}^{N_{B}}[\;\gamma_{kl}f(B_{l},t|A_{n},t_{0})-\gamma_{lk}f(B_{k},t|A_{n},t_{0})\;]\;.$
(2.134)
It is instructive to write down an explicit solution, for example by
considering two events $\\{A_{1},A_{2}\\}$ and two events $\\{B_{1},B_{2}\\}$
under the initial condition
$f(B_{k},t_{0}|A_{n},t_{0})=f_{kn}\;.$ (2.135)
Then the solution reads as
$f(B_{1},t|A_{1},t_{0})=\left(f_{11}\;-\;\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\right)e^{-(\gamma_{1}+\gamma_{2})(t-t_{0})}+\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\;,$
$f(B_{1},t|A_{2},t_{0})=\left(f_{12}\;-\;\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\right)e^{-(\gamma_{1}+\gamma_{2})(t-t_{0})}+\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\;,$
$f(B_{2},t|A_{1},t_{0})=\left(f_{21}\;-\;\frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}}\right)e^{-(\gamma_{1}+\gamma_{2})(t-t_{0})}+\frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}}\;,$
$f(B_{2},t|A_{2},t_{0})=\left(f_{22}\;-\;\frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}}\right)e^{-(\gamma_{1}+\gamma_{2})(t-t_{0})}+\frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}}\;,$
(2.136)
where
$\gamma_{1}\equiv\gamma_{12}=-\gamma_{22}\;,\qquad\gamma_{2}\equiv\gamma_{21}=-\gamma_{11}\;.$
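The solution (2.136) can be checked against a direct numerical integration of the master equation (2.130); the rates and the initial condition in the Python sketch below are illustrative.

```python
import numpy as np

# Two-state master equation with gamma_1 = gamma_{12} and gamma_2 = gamma_{21}.
g1, g2 = 0.8, 0.3
t0, t = 0.0, 2.0
f11 = 0.9                        # initial condition f(B_1, t0 | A_1, t0)

# Closed-form solution (2.136) for f(B_1, t | A_1, t0).
f_exact = (f11 - g1 / (g1 + g2)) * np.exp(-(g1 + g2) * (t - t0)) + g1 / (g1 + g2)

# Euler integration of df/dt = g1 * f(B_2) - g2 * f(B_1) with f(B_2) = 1 - f(B_1).
f = f11
dt = 1e-4
for _ in range(int((t - t0) / dt)):
    f += dt * (g1 * (1 - f) - g2 * f)

print(f_exact, f)   # the two values agree to integration accuracy
```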
Inverting the order of the events leads to the probability satisfying the
equation
$\frac{d}{dt}\;f(A_{n},t|B_{k},t_{0})=\sum_{m=1}^{N_{A}}\alpha_{nm}f(A_{m},t|B_{k},t_{0})\;.$
(2.137)
The transition rate matrix $\alpha_{mn}$ describes the transition from an
event $A_{n}$ to $A_{m}$ and possesses the properties
$\alpha_{mn}\geq 0\quad(m\neq n)$ (2.138)
and
$\sum_{m=1}^{N_{A}}\alpha_{mn}=0\;.$ (2.139)
In the case of binary sets of alternatives, the solution of equation (2.137),
under an initial condition
$f(A_{n},t_{0}|B_{k},t_{0})=g_{nk}\;,$ (2.140)
is similar by form to solution (2.136).
From these equations it is clearly seen that inverting the order of the events
results in fundamentally different expressions for the probability. This is
because the initial conditions are different and the transition rate matrices
are different. Generally, these matrices differ even in their size, since the
rate matrix $\gamma_{kl}$ is of size $N_{B}\times N_{B}$, while the rate
matrix $\alpha_{nm}$ is of size $N_{A}\times N_{A}$. The matrices of initial
values are, in general, also different in form, since $f_{kn}$ is a matrix of
size $N_{B}\times N_{A}$, while $g_{nk}$ is a matrix of size $N_{A}\times
N_{B}$.
Thus the classical consecutive probabilities, whether conditional or joint,
are not swap-order symmetric,
$f(B_{k},t|A_{n},t_{0})\neq f(A_{n},t|B_{k},t_{0})\;,\qquad f(B_{k},t,A_{n},t_{0})\neq f(A_{n},t,B_{k},t_{0})$ (2.141)
for any $t\geq t_{0}$. In this way, classical consecutive probabilities can
perfectly explain the so-called order effects observed in human behaviour.
### 2.18 Quantum decision theory
The analysis of quantum probabilities described above, and their
interpretation in the language of decision theory, is, strictly speaking, what
composes the basis of so-called Quantum Decision Theory [57, 58, 59, 73, 80,
83, 94]. Summarizing this part of the review, it is necessary to make several
remarks.
Everywhere above the system state has been represented by a statistical
operator $\hat{\rho}$. In particular cases, this operator could have the form
of a pure state
$\hat{\rho}(t)=|\;\psi(t)\;\rangle\langle\;\psi(t)\;|\;,$
where the wave function can be expanded over the given basis as
$|\;\psi(t)\;\rangle=\sum_{n\mu}b_{n\mu}(t)\;|\;A_{n}e_{\mu}\;\rangle\;.$
The description by wave functions is appropriate for isolated systems.
However, strictly speaking, quantum systems cannot be absolutely isolated;
they can only be quasi-isolated [123, 124, 125]. This means that, even if a
system is prepared in a pure state described by a wave function, there always
exist uncontrollable external perturbations from the surroundings that result
in the decoherence of the system beyond a decoherence time, which makes the
system state mixed. Moreover, to confirm that the considered system is to some
extent isolated, it is necessary to check this by additional control
measurements, which again disturb the system's isolation. In that way, the
system can only be quasi-isolated.
In addition, decision makers are members of a society; hence, they correspond
to non-isolated open systems that have to be described by statistical
operators. One could think that in laboratory tests it would be admissible to
treat decision makers as closed systems characterized by wave functions.
However, in laboratory tests, even when separated from each other, decision
makers do communicate with the investigators performing the test. Moreover,
even when locked for some time in a separate room, any decision maker
possesses the memory of earlier interactions with many other people. From the
physiological point of view, memory is nothing but delayed interaction.
Therefore, no decision maker can be treated as an absolutely isolated system,
which excludes the use of wave functions. It appears that the most general and
correct description of any decision maker requires considering him or her as
an open system, hence characterized by a statistical operator.
When representing the considered alternatives as vectors of a Hilbert space,
we have assumed a nondegenerate representation with a one-to-one
correspondence between each alternative $A_{n}$ and the vector $|A_{n}\rangle$
representing it. Generally, in quantum theory there can occur the effect of
degeneracy, when an operator eigenvalue corresponds to several eigenfunctions.
In the present case, this would imply that an alternative $A_{n}$ corresponds
to several vectors $|A_{ni}\rangle$, with $i=1,2,\ldots$. The existence of
degeneracy in decision theory is sometimes invoked to remove the contradiction
between the reciprocal symmetry of the von Neumann-Lüders probability (2.101),
that is, the symmetry with respect to the interchange of the events $A_{n}$
and $B_{k}$, and the experimentally observed absence of this reciprocal
symmetry [45, 46]. Indeed, if at least one of the considered alternatives, say
$A_{n}$, is degenerate, such that $A_{n}$ is represented by a set of vectors
$|A_{ni}\rangle$, with $i=1,2,\ldots$, then the related projector becomes the
sum
$\hat{P}(A_{n})=\sum_{i}|\;A_{ni}\;\rangle\langle\;A_{ni}\;|\;.$
Considering, for simplicity, the noiseless case, for the von Neumann-Lüders
probability (2.99) we get
$p(B_{k},t_{0}+0|A_{n},t_{0})=\frac{\sum_{ij}\langle A_{ni}|\hat{\rho}(t_{0}-0)|A_{nj}\rangle\langle A_{nj}|B_{k}\rangle\langle B_{k}|A_{ni}\rangle}{\sum_{i}\langle A_{ni}|\hat{\rho}(t_{0}-0)|A_{ni}\rangle}\;.$
Reversing the order of events yields a different expression
$p(A_{n},t_{0}+0|B_{k},t_{0})=\sum_{i}|\;\langle\;A_{ni}\;|\;B_{k}\;\rangle\;|^{2}\;.$
However, although the occurrence of degenerate operator spectra is natural for
quantum systems, in decision theory the appearance of degenerate alternatives
makes no sense. If several vectors $|A_{ni}\rangle$ occur, it is always
admissible to reclassify the given alternatives so that each vector
$|A_{ni}\rangle$ corresponds to a single alternative $A_{ni}$ [70, 71]. This
is equivalent to the breaking of symmetry in physics [50, 67, 68, 69].
The measurement procedure has been described by using projection-valued
measurements of alternatives decorated by intrinsic noise. In general, it
would be possible to invoke positive operator-valued measures (POVM) [87, 88,
89, 90, 91, 92, 93]. This could probably be useful for some quantum systems.
However, as has been stressed above, we do not assume that the brain is a
quantum system; we are analyzing the possibility of employing quantum
techniques for describing the operation of the decision-making process. For
this purpose, there is no reason to complicate the consideration by invoking
POVMs, which bring additional problems, such as the nonuniqueness of the
post-measurement states and the absence of reproducibility of immediately
repeated measurements.
Moreover, already at the level of projection-valued measurements, we meet a
number of difficulties in the attempts to apply quantum theory to the
description of conscious processes, although there are certainly many
similarities at the general qualitative level, as has been mentioned many
times above. Summarizing, it is necessary to separate the wheat from the chaff
by clearly formulating which useful recipes follow from the quantum theory of
measurements and which limitations arise in the attempts to use them for
characterizing the operation of artificial intelligence. The main conclusions
that can be derived from the analogies between quantum measurements and
behavioural decision making are as follows.
(i) First of all, the process of decision making has to be treated in a
probabilistic way. The probabilistic description corresponds better to real
life, where in any sufficiently large group of subjects deciding on a choice
among the same set of alternatives, not all prefer the same choice; there are
always those who choose other alternatives. Any such group separates into
subgroups preferring different alternatives. The fractions of people
preferring different alternatives are nothing but frequentist probabilities.
Even a single person at different moments of time can choose different
alternatives. In that case, the frequentist probability shows the ratio of
particular choices to the total number of accomplished choices.
(ii) Generally, decision making is not a purely rational choice; it is
accompanied by intrinsic noise representing the irrational subconscious sides
of the decision process, including emotions, gut feelings, and intuitive
allusions. In that sense, decision making is characterized by a
cognition-emotion duality, or conscious-subconscious duality, or
rational-irrational duality. Emotions and the related characteristics can be
modeled by intrinsic noise in quantum measurements.
(iii) The quantum probability for measurements in the presence of intrinsic
noise consists of two terms: one that can be called the classical limit, and
another caused by the interference of noise. The former can be associated with
the rational choice, while the latter is induced by the existence of emotions.
In decision theory, the occurrence of the additional interference term is
especially noticeable under uncertainty [84, 126].
(iv) Alternatives and noise, or alternatives and emotions, are generally
entangled, being connected with each other. The measurement procedure, as well
as decision making, produces additional entanglement between alternatives and
noise.
(v) It is necessary to distinguish between two types of quantum probabilities
for two events. One type is the quantum probability of consecutive events
happening at different times, and the other is the quantum probability of
synchronous events occurring at different spatial locations or in different
spaces of variables.
Despite a number of hints on the general structure and main properties of
probability, which could be used in developing decision theory, quantum theory
provides no explicit rules allowing for the calculation of the probability for
the purpose of decision making, and thus possesses no predictive power. It is
of course possible to fit the interference term so as to interpret some
particular events; however, fitting is not explanation. In order to supply
quantum decision theory with the ability to make quantitative estimates, it is
necessary to invoke a number of assumptions not related to quantum theory [79,
127].
It is also necessary to understand whether the usage of quantum theory is
compulsory for the development of an adequate decision theory taking account
of behavioural effects, or whether this usage is just a much more complicated,
trendy way of describing what could be described far more simply employing
classical terminology.
A great goal would be to develop a mathematical formulation of an approach
that takes into account the general peculiarities of quantum measurements and
the properties of quantum probabilities, but explicitly involves no quantum
techniques, while at the same time providing the ability to make quantitative
predictions. Such a formalized approach would be indispensable for the
creation of affective artificial intelligence.
## 3 Affective decision making
Before formulating the general approach to decision making, combining rational
choice with the influence of irrational emotions, it is useful to recall the
origin of emotions and to explain the problems encountered in earlier attempts
to formalize the process of behavioural decision making.
### 3.1 Evolutionary origin of emotions
Emotions are common to humans as well as to animals. They arose in the course
of evolution and were adapted over time like other traits found in animals.
Darwin [128] was probably the first to seriously study the appearance and
adaptation of emotions in the process of natural selection. He discussed not
only facial expressions in animals and humans, but also attempted to point out
parallels between behaviours in humans and other animals [129].
According to evolutionary theory, different emotions evolved at different
times. Primal emotions, such as fear, are associated with ancient parts of the
brain and presumably evolved among our premammal ancestors. Filial emotions,
such as a human mother’s love for her offspring, seem to have evolved among
early animals. Social emotions, such as guilt and pride, evolved among social
primates.
Since emotions evolved and adapted over the course of evolution, they appeared
for a reason and, like other features, they should be useful to animals and
humans. For example, they facilitate communication by sending signals to other
members of the social group. An emotion such as fear has helped humans to
survive, warning of danger and forcing action before the cognitive, logical
part of the brain provides more detailed information. Having emotions may mean
the difference between life and death.
Certain emotions are universal to all humans, regardless of culture: anger,
fear, surprise, disgust, happiness, and sadness. Emotions can be defined as a
specialized mechanism, shaped by natural selection, that increases fitness in
specific situations. The physiological, psychological, and behavioural
characteristics of emotions can be understood as design features that increase
the ability to cope with the threats and opportunities present in the
corresponding situations. Every emotion developed individually in the course
of biological evolution, and all of them evolved to serve survival needs.
Emotions play an important role in decision making. It would not be an
exaggeration to say that emotions shape decisions. Just as fear saves lives in
the example mentioned above, it can also save one from bankruptcy. Compelling
scientific evidence comes from emotionally impaired patients who have
sustained injuries to the ventromedial prefrontal cortex, a key area of the
brain for integrating emotion and cognition. Studies find that such
neurological impairments reduce the patients' ability to feel emotion and, as
a result, reduce the optimality of their decisions. Participants with these
injuries repeatedly select a riskier financial option over a safer one, even
to the point of bankruptcy, despite their cognitive understanding of the
suboptimality of their choices. These participants behave this way because
they do not experience the emotional signals that lead normal decision makers
to have a reasonable fear of high risks [11].
Living in a world where events cannot be predicted with certainty, agents must
select actions based on limited information, i.e., they often must make risky
decisions. Emotional information has a special weight in decision-making, as
it automatically triggers adaptive behavioral modules selected during the
course of evolution, driving agents to move toward attractive goals while
avoiding threats. Emotional information is critical, because on the one hand
it could prevent potential physical harm or unpleasant social interactions, on
the other hand it could promote physical pleasure or pleasant social
interactions [130].
Emotions are generally classified into positive and negative ones [19],
respectively making alternatives more attractive or more repulsive. In the
choice between alternatives, there is no one-to-one correspondence between
alternatives and emotions; rather, a multitude of different connected emotions
appears. Researchers exploring the subjective experience of emotions have
noted that emotions are highly intercorrelated both within and between the
subjects reporting them. Subjects rarely describe feeling a specific positive
or negative emotion without also claiming to feel other positive or negative
emotions [131].
In the process of decision making, emotions have great influence on multiple
cognitive phenomena, such as attention, perception, memory encoding, storage,
and retrieval of information, and associative learning [132]. Emotions
activate the motivational system of action tendencies. Recall that the word
emotion comes from Latin “emovere”, which means to move. The origin of the
word emotion already emphasizes its relevance to behavioural drive.
Although emotions may seem similar to noise, they in fact help to optimize
decisions in two ways. First, they are faster than logical rational
deliberations, and are thus of crucial importance in the case of urgent
decisions. Second, emotions reflect subconscious feelings based on one's past
experiences and beliefs, which may serve to protect one from danger or prevent
the repetition of past mistakes.
Notice that in physical measurements noise is also not always an obstacle;
sometimes the detection of signals can even be facilitated, being boosted by
noise [53, 54]. Measurements of thermal noise have been used to measure the
Boltzmann constant, and measurements of shot noise have been used to measure
the charge of the electron [133]. Noise plays a beneficial role in the
functioning of neural systems in the framework of the stochastic facilitation
of signal processing [134].
### 3.2 Problems in decision making
The predominant theory describing individual behaviour under uncertainty is
nowadays the expected utility theory of preferences over uncertain prospects.
This theory was axiomatized by von Neumann and Morgenstern [135] and
integrated with the theory of subjective probability by Savage [136]. It was
shown to possess great analytical power by Arrow [137] and Pratt [138] in
their work on risk aversion, and by Rothschild and Stiglitz [139, 140] in
their work on comparative risk. Friedman and Savage [141] and Markowitz [142]
demonstrated its tremendous flexibility in representing decision makers'
attitudes toward risk. It is fair to state that expected utility theory has
provided solid foundations for the theory of games, the theory of investment
and capital markets, the theory of search, and other branches of economics,
finance, and management.
However, a number of economists and psychologists have uncovered a growing
body of evidence that individuals do not always conform to the prescriptions
of expected utility theory and indeed very often depart from it in a
predictable and systematic way [29]. Many researchers, starting with the works
by Allais [143], Edwards [144, 145], and Ellsberg [146] and continuing through
the present, have experimentally confirmed pronounced and systematic
deviations from the predictions of expected utility theory, leading to the
appearance of many paradoxes. These paradoxes are often called behavioural,
since the behaviour of subjects contradicts the prescriptions of utility
theory. A large literature on this topic can be found in the review articles
[147, 148, 149, 150].
There have been many attempts to modify the expected utility approach,
classified as non-expected utility theories. Among the many such theories, we
may mention a few of the best known: prospect theory [144, 151, 152],
weighted-utility theory [153, 154, 155], regret theory [156],
optimism-pessimism theory [157], ordinal-independence theory [158],
quadratic-probability theory [159], opportunity-threat theory [160], and
state-dependent utility theory [161]. A general discussion of these theories
can be found in the review by Machina [150].
However, non-expected utility theories are descriptive, requiring the fitting
of several parameters from empirical data. Moreover, as was shown by Safra and
Segal [162], none of the non-expected utility theories can explain all the
paradoxes permanently arising in behavioural decision making. The best that
can be achieved is a kind of fitting that interprets just one or, at best, a
few paradoxes, with the other paradoxes remaining unexplained. In addition,
spoiling the structure of expected utility theory results in the appearance of
several complications and inconsistencies [163]. As was concluded in the
detailed analysis of Al-Najjar and Weinstein [164, 165], any variation of the
classical expected utility theory ”ends up creating more paradoxes and
inconsistences than it resolves”.
An attempt to take unconscious feelings into account has been undertaken in
the approach called dual-process theory [166, 167, 168, 169, 170]. According
to this theory, decisions arise in the human brain as a result of two
different processes that can be distinguished by one of the following
characteristic pairs [170]: slow/fast, rational/irrational,
conscious/unconscious, logical/intuitive, reasoning/reasonless,
deliberate/emotional, intentional/unintentional, voluntary/involuntary,
explicit/implicit, analyzing/sensuous, controlled/uncontrolled,
operated/automatic, regulated/impulsive, effortful/effortless,
comprehensive/perceptional, precise/impressionistic, objective/subjective,
verbal/nonverbal. Of course, not all these characteristics have to be present
in one or the other process, and some of them can be shared to some extent by
both ways of thinking. Detailed discussions of the two processes can be found
in the literature [166, 167, 168, 169, 170, 171, 172, 173] and are well
exposed in the review articles [174, 175, 176]. The existence of two ways of
thinking finds support in neuropsychological studies [177, 178, 179, 180,
181], although the separation is not very strict [16, 17, 18].
Thus, dual-process theory accepts the existence of two ways of thinking, which
to some extent finds support in psychological and neurological studies. For
brevity, one of these ways can be termed cognitive, rational, logical, and the
second emotional, irrational, intuitive. This separation has to be understood
in a conditional, operational sense applied to the process of decision making.
The rational way of thinking is normative, being based on clearly prescribed
logical rules, while the irrational way is poorly controlled, representing
emotions induced by intuition and a kind of gut feeling. This distinction does
not assume that one of the ways, say the cognitive one, is more correct.
In psychophysical and neurophysiological studies it has been found that the
influence of emotions leads to random choice, with the randomness caused by
generic variability and local instability of neural networks [182, 183, 184,
185, 186, 187, 188, 189]. The choice varies not only across different
individuals, but also for the same subject at different moments of time.
Moreover, even a given subject, when making a single decision, experiences
intrinsic noise in the brain's neural network, because of which the subject's
decision becomes probabilistic. Cognitive imprecision is due to the inevitable
internal noise in the nervous system. Stochasticity is an unavoidable feature
of the functioning of the human brain. As a result, the choice in decision
making is not deterministic, based on the comparison of utilities, but rather
stochastic, based on the comparison of probabilities. The recent review [190]
summarizes the modern point of view that considers randomness as an internal
feature of the functioning of the human brain, where decisions are formed on
the basis of noisy internal representations.
The standard method employed in attempts to take irrational effects in human
decision making into account is a modification of the utility functional
[189, 191, 192]. In that sense, the dual-process models reduce to variants of
non-expected utility theories, sharing with the latter the same deficiencies.
Summarizing, in order to develop a decision theory comprising both ways of
the decision process, conditionally labeled cognitive (rational) and emotional
(irrational), one should respect the following premises:
1.
Since the presence of emotions results in random decisions, behavioural
decision making has to be treated as a generically probabilistic process.
Therefore, the main quantity to be determined is the behavioural probability
of events. It never happens that, among a given group of people, all without
exception make the identical choice prescribed by standard deterministic
utility theory. There always exist fractions of subjects preferring different
alternatives. That is, there always exists a distribution of decisions over
the set of given alternatives.
2.
The behavioural probability of an event should reflect the superposition of
two operational aspects in decision making, rational (cognitive), defined by
the prescribed rules evaluating the utility of the event, and irrational
(emotional), taking account of irrational effects.
3.
Since emotions vary randomly across decision makers, as well as for the same
decision maker at different moments of time, their quantitative influence
cannot be predicted exactly for each subject and each choice. However, the
approach should provide quantitative predictions at the aggregate level, where
the behavioural probabilities can be compared with the empirical average
fractions of decision makers choosing the related alternatives.
4.
The efficiency of the approach should be proved by demonstrating that the
known paradoxes find quantitative resolution in the framework of the general
methodology, without fitting parameters.
5.
Last but not least, the approach should not be overloaded by unnecessarily
complicated theorizing. Thus, while borrowing some general ideas from the
theory of quantum measurements and quantum decision theory, it is desirable to
avoid the explicit use of quantum techniques. In that sense, an artificial
intelligence could accomplish quantum operations without the necessity of
invoking quantum formulas [193].
### 3.3 Behavioural probabilities of alternatives
As emphasized above, behavioural decision making is a principally
probabilistic process. Hence the pivotal role is played by the notion of the
probability of alternatives. In decision theory, the probabilistic approach is
usually based on the random utility model, where the expected utility of the
alternatives is complemented by an additive random error characterized by a
postulated distribution [194, 195, 196]. This approach, being based on
expected utility theory, suffers from the same deficiencies as the underlying
utility theory: it does not take emotions into account and does not explain
behavioural paradoxes. In addition, it contains several fitting parameters,
making the approach descriptive rather than predictive.
The classical approach, axiomatically formulated by Kolmogorov [119], defines
probabilities satisfying three axioms: non-negativity, normalization, and
additivity. For behavioural probabilities, in general, it is sufficient to
satisfy only two axioms, non-negativity and normalization, with the additivity
property, as a compulsory condition, being dropped [197, 198, 199, 58, 70,
80, 200]. Throughout the paper, standard classical probabilities, sometimes
called simply probabilities, are distinguished from behavioural probabilities.
To be precise, the basic points of the approach are formulated in an axiomatic
way that, while keeping in mind the properties of quantum behavioural
probabilities, does not involve any quantum notions explicitly.
Let us consider a ring $\{A_{n}\}$ of alternatives $A_{n}$. Assume that
decision making consists in the choice between the alternatives of a set
$\mathbb{A}=\{A_{n}:~n=1,2,\ldots,N_{A}\}\;.$ (3.1)
Axiom 1. Each alternative $A_{n}$ is equipped with its behavioural probability
$p(A_{n})$, whose family forms a probability measure over the set
$\{A_{n}\}$, with the properties of non-negativity and normalization
$\sum_{n=1}^{N_{A}}p(A_{n})=1\;,\qquad 0\leq p(A_{n})\leq 1\;.$ (3.2)
It is assumed that each alternative is decorated by emotions, which, for the
sake of notational compactness, are not explicitly marked.
Axiom 2. The alternatives are connected by relations defined through the
relations between their behavioural probabilities. The set of alternatives
(3.1) enjoys the following properties.
1.
Ordering: For any two alternatives $A_{1}$ and $A_{2}$, one of the relations
necessarily holds: either $A_{1}\prec A_{2}$, in the sense that
$p(A_{1})<p(A_{2})$, or $A_{1}\preceq A_{2}$, when $p(A_{1})\leq p(A_{2})$, or
$A_{1}\succ A_{2}$, if $p(A_{1})>p(A_{2})$, or $A_{1}\succeq A_{2}$, when
$p(A_{1})\geq p(A_{2})$, or $A_{1}\sim A_{2}$, if $p(A_{1})=p(A_{2})$.
2.
Linearity: The relation $A_{1}\preceq A_{2}$ implies $A_{2}\succeq A_{1}$.
These and the relations below are to be understood as relations between the
corresponding probabilities $p(A_{n})$.
3.
Transitivity: For any three alternatives, such that $A_{1}\preceq A_{2}$, with
$p(A_{1})\leq p(A_{2})$, and $A_{2}\preceq A_{3}$, when $p(A_{2})\leq
p(A_{3})$, it follows that $A_{1}\preceq A_{3}$, in the sense that
$p(A_{1})\leq p(A_{3})$.
4.
Completeness: In the set of alternatives (3.1), there exist a minimal element
$A_{min}$ and a maximal element $A_{max}$, for which
$p(A_{min})=\min_{n}p(A_{n})$ and, respectively,
$p(A_{max})=\max_{n}p(A_{n})$.
The ordered set of alternatives (3.1), enjoying these properties, is called a
complete lattice.
Definition 1. An alternative $A_{1}$ is called stochastically preferable to
$A_{2}$ if and only if
$p(A_{1})>p(A_{2})\qquad(A_{1}\succ A_{2})\;.$ (3.3)
Definition 2. Two alternatives are stochastically indifferent if and only if
$p(A_{1})=p(A_{2})\qquad(A_{1}\sim A_{2})\;.$ (3.4)
Definition 3. The alternative $A_{opt}$ is called stochastically optimal if it
corresponds to the maximal behavioural probability,
$p(A_{opt})=\max_{n}p(A_{n})\;.$ (3.5)
Behavioural decision making includes both rational reasoning, following
prescribed logical rules, and irrational inclinations not rationalized by
explicit logical argumentation, such as emotions, subconscious guesses, gut
feelings, intuition, etc., all of which we shall, for brevity, call emotions.
Definition 4. Rational reasoning about an alternative $A_{n}$ is described by
a rational fraction, named the utility factor $f(A_{n})$, which is a classical
probability of choosing the alternative $A_{n}$ on the basis of rational
rules. The collection of rational fractions for a given set of alternatives
forms a classical probability measure over the set $\{A_{n}\}$ with the
properties
$\sum_{n=1}^{N_{A}}f(A_{n})=1\;,\qquad 0\leq f(A_{n})\leq 1\;.$ (3.6)
Emotion categories are fuzzy and are labeled by words, expressions, and
metaphors [23, 24, 7]. When comparing the emotions induced by different
alternatives, one cannot quantify them by exact numbers; one can only
characterize them in descriptive terms, e.g., as attractive or repulsive,
pleasant or unpleasant, and so on. Emotion processes do not enjoy clear
categorical boundaries [23, 24, 7].
Definition 5. Emotional impulses in choosing an alternative $A_{n}$ are
characterized by an attraction factor $q(A_{n})$ lying in the interval
$-1\leq q(A_{n})\leq 1\;.$ (3.7)
Axiom 3. The behavioural probability of choosing an alternative $A_{n}$,
taking account of rational reasoning as well as the influence of emotions, is
a functional of the utility factor $f(A_{n})$ and the attraction factor
$q(A_{n})$ satisfying the limiting condition
$p(A_{n})\;\mapsto\;f(A_{n})\;,\qquad q(A_{n})\;\mapsto\;0\;.$ (3.8)
This is an analog of decoherence in quantum theory, when the quantum term,
causing interference, vanishes and the quantum quantity tends to its classical
form.
Axiom 4. Behavioural probability of choosing an alternative $A_{n}$, taking
account of emotions and satisfying the limiting condition (3.8), is the sum
$p(A_{n})=f(A_{n})+q(A_{n})\;.$ (3.9)
From the inequality $0\leq p(A_{n})\leq 1$, it follows that
$-f(A_{n})\leq q(A_{n})\leq 1-f(A_{n})$ (3.10)
in agreement with inequality (3.7).
The value of the rational fraction $f(A_{n})$ shows how useful the
alternative $A_{n}$ is, which is why it is called the utility factor. The
magnitude and sign of $q(A_{n})$ characterize how attractive the alternative
$A_{n}$ is; hence $q(A_{n})$ is termed the attraction factor.
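To make the decomposition (3.9) concrete, the following minimal numerical
sketch (in Python, with purely hypothetical factor values of our own choosing)
checks that behavioural probabilities built from a utility factor and an
attraction factor remain a probability measure; the attraction factors are
chosen to sum to zero so that normalization (3.2) is preserved.

```python
import numpy as np

# Hypothetical utility and attraction factors for two alternatives.
f = np.array([0.6, 0.4])    # utility factors, normalized as in (3.6)
q = np.array([-0.2, 0.2])   # attraction factors in [-1, 1], summing to zero

p = f + q                   # behavioural probabilities, eq. (3.9)

assert np.isclose(p.sum(), 1.0)           # normalization, eq. (3.2)
assert np.all((q >= -f) & (q <= 1 - f))   # bounds on q, eq. (3.10)
print(p)  # [0.4 0.6]: A_2 is stochastically preferable although f(A_1) > f(A_2)
```

Note that here $A_{1}$ is more useful, $f(A_{1})>f(A_{2})$, yet $A_{2}$ is
stochastically preferable, illustrating the distinction between usefulness and
preference discussed below.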
### 3.4 Quantification of utility factor
The utility factor, describing the rational utility of each alternative, has
to be defined by prescribed rules. Here we show how these fractions can be
determined through expected utilities or other value functionals. For
concreteness, we shall speak of the expected utility $U(A_{n})$ of an
alternative, although in place of the expected utility one can take any value
functional.
Let the alternatives be represented by the lotteries
$A_{n}=\{x_{i},\;p_{n}(x_{i}):~i=1,2,\ldots,N_{n}\}\;,$ (3.11)
being the probability distributions over the payoff set $\{x_{i}\}$ with the
properties
$\sum_{i=1}^{N_{n}}p_{n}(x_{i})=1\;,\qquad 0\leq p_{n}(x_{i})\leq 1\;.$
The probabilities $p_{n}(x)$ can be either objective [135] or subjective
[136]. Following the classical utility theory [135], one can introduce a
dimensionless utility function $u(x)$ and the expected utility
$U(A_{n})=\sum_{i}u(x_{i})p_{n}(x_{i})\;.$ (3.12)
The utility factor reflects the utility of a choice, hence it has to be a
functional of the expected utility. As a functional of the expected utility,
the utility factor has to satisfy the evident conditions
$f(A_{n})\rightarrow 1\;,\qquad U(A_{n})\rightarrow\infty$ (3.13)
and
$f(A_{n})\rightarrow 0\;,\qquad U(A_{n})\rightarrow-\infty\;.$ (3.14)
The utility factor plays the role of a classical probability expressed
through the expected utility $U(A_{n})$ or another value functional. The
general approach to defining the probability distribution describing the
utility factor is the minimization of an information functional subject to
imposed constraints [201, 202]. The first such natural constraint is the
normalization condition (3.6). Another constraint is the existence of a
global mean
$\sum_{n}f(A_{n})U(A_{n})=U\qquad(|\;U\;|<\infty)\;.$ (3.15)
The explicit expression for the information functional can be taken in the
Kullback–Leibler form [203, 204, 205]. The Shore–Johnson theorem [205] states
that, given a prior (or trial) probability density $f_{0}(A_{n})$ and
additional constraints, there is only one posterior density $f(A_{n})$
satisfying these constraints and the conditions of uniqueness, coordinate
invariance, and system independence, and that this unique posterior is
obtained by minimizing the Kullback–Leibler information functional. The
posterior probability $f(A_{n})$ is the minimizer of the Kullback–Leibler
functional provided the imposed constraints contain no singularities, which
requires a finite value of the global mean $U$.
It is important to stress that the existence of the global mean $U$ does not
impose any constraints on the individual expected utilities $U(A_{n})$, which
can be divergent, as for instance in the St. Petersburg paradox considered
below. The existence of the global $U$ is required for the uniqueness of the
probability $f(A_{n})$ in the Shore–Johnson theorem [205].
In the present case, the information functional for the posterior probability
distribution $f(A_{n})$, under a prior distribution $f_{0}(A_{n})$ and the
given constraints, is
$I[\;f(A_{n})\;]=\sum_{n}f(A_{n})\ln\;\frac{f(A_{n})}{f_{0}(A_{n})}+\alpha\left[\;1-\sum_{n}f(A_{n})\;\right]\;+$
$+\;\beta\left[\;U-\sum_{n}f(A_{n})U(A_{n})\;\right]\;,$ (3.16)
where $\alpha$ and $\beta$ are Lagrange multipliers. The minimization of the
information functional (3.16) yields
$f(A_{n})=\frac{f_{0}(A_{n})e^{\beta U(A_{n})}}{\sum_{n}f_{0}(A_{n})e^{\beta
U(A_{n})}}\;.$ (3.17)
The trial distribution $f_{0}(A_{n})$ can be defined employing the Luce rule
[206, 207, 208]. Let the attribute of an expected utility $U(A_{n})$ be
characterized by an attribute value $a_{n}$ assumed to be non-negative. Then,
according to the Luce rule [206, 207, 208], the trial utility factor can be
defined as
$f_{0}(A_{n})=\frac{a_{n}}{\sum_{n=1}^{N_{A}}a_{n}}\qquad(a_{n}\geq 0)\;.$
(3.18)
The attribute value depends on whether the corresponding utility is positive
(semi-positive) or negative. For a semi-positive utility the attribute value
can be defined [209] as
$a_{n}=U(A_{n})\;,\qquad U(A_{n})\geq 0\;,$ (3.19)
while for a negative expected utility it can be given by
$a_{n}=\frac{1}{|\;U(A_{n})\;|}\;,\qquad U(A_{n})<0\;.$ (3.20)
For example, in the case of two lotteries, we have
$f_{0}(A_{n})=\frac{U(A_{n})}{U(A_{1})+U(A_{2})}\;,\qquad U(A_{n})\geq 0\;$
(3.21)
for semi-positive utilities, and
$f_{0}(A_{n})=1-\frac{|\;U(A_{n})\;|}{|\;U(A_{1})\;|+|\;U(A_{2})\;|}\;,\qquad
U(A_{n})<0\;$ (3.22)
for negative utilities.
Definition 6. A lottery $A_{1}$ is more useful than $A_{2}$ if and only if
$f(A_{1})>f(A_{2})\;.$ (3.23)
Definition 7. Two lotteries $A_{1}$ and $A_{2}$ are equally useful if and only
if
$f(A_{1})=f(A_{2})\;.$ (3.24)
As is evident, a lottery can be more useful yet not preferable, since the
behavioural probability consists of two terms, a utility factor and an
attraction factor. Generally, the rational fraction could be taken in a
different form. However, the considered Luce form is probably the simplest
one containing no fitting parameters, and it is sufficient for providing
quite reliable estimates, as will be shown below.
The utility factor (3.17), with the trial distribution (3.18) and attributes
(3.19) and (3.20), for non-negative utilities, reads as
$f(A_{n})=\frac{U(A_{n})e^{\beta U(A_{n})}}{\sum_{n}U(A_{n})e^{\beta
U(A_{n})}}\;;\qquad U(A_{n})\geq 0$ (3.25)
and for negative utilities, as
$f(A_{n})=\frac{|U(A_{n})|^{-1}e^{-\beta|U_{n}|}}{\sum_{n}|U(A_{n})|^{-1}e^{-\beta|U_{n}|}}\;;\qquad
U(A_{n})<0\;.$ (3.26)
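As an illustration of this prescription, the following Python sketch evaluates
the utility factors (3.25) for two hypothetical lotteries with non-negative
payoffs, a linear utility function $u(x)=x$, and an illustrative value of the
parameter $\beta$ (discussed in the next paragraph); none of these numbers
come from the text.

```python
import numpy as np

payoffs = np.array([0.0, 50.0, 100.0])      # payoff set {x_i}
A1 = np.array([0.1, 0.8, 0.1])              # p_1(x_i): a low-risk lottery
A2 = np.array([0.4, 0.1, 0.5])              # p_2(x_i): a high-risk lottery

U = np.array([A1 @ payoffs, A2 @ payoffs])  # expected utilities (3.12): [50, 55]

f0 = U / U.sum()                            # Luce trial distribution (3.21)
beta = 0.01                                 # illustrative belief parameter
w = f0 * np.exp(beta * U)
f = w / w.sum()                             # utility factors (3.25)
print(f)                                    # ~[0.46, 0.54]

# For large beta, f concentrates on the maximal-utility lottery, consistent
# with the beta -> infinity limit (3.30) below.
beta = 1e3
w = f0 * np.exp(beta * (U - U.max()))       # subtract max(U) for stability
print(w / w.sum())                          # ~[0, 1]
```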
The parameter $\beta$, called the belief parameter, characterizes the level
of certainty of a decision maker in the fairness of the decision task and the
subject's confidence in his/her understanding of the overall rules and
conditions of the decision problem. Absolute certainty of a decision maker
corresponds to $\beta\rightarrow\infty$, when
$\displaystyle f(A_{n})=\left\{\begin{array}{ll}1,&U(A_{n})=\max_{n}U(A_{n})\\ 0,&U(A_{n})\neq\max_{n}U(A_{n})\end{array}\right.\qquad(\beta\rightarrow\infty)\;,$
(3.30)
so that we return to the standard deterministic decision theory, which
prescribes choosing the alternative with the largest expected utility.
# Performance Analysis of IRS-Assisted Cell-Free Communication
Diluka Loku Galappaththige, Dhanushka Kudathanthirige, and Gayan Amarasuriya
School of Electrical, Computer, and Biomedical Engineering, Southern Illinois
University, Carbondale, IL, USA 62901
Email: {diluka.lg, dhanushka.kudathanthirige<EMAIL_ADDRESS>
###### Abstract
In this paper, the feasibility of adopting an intelligent reflective surface
(IRS) in a cell-free wireless communication system is studied. The received
signal-to-noise ratio (SNR) for this IRS-enabled cell-free set-up is optimized
by adjusting phase-shifts of the passive reflective elements. Then, tight
approximations for the probability density function and the cumulative
distribution function for this optimal SNR are derived for Rayleigh fading. To
investigate the performance of this system model, tight bounds/approximations
for the achievable rate and outage probability are derived in closed form. The
impact of discrete phase-shifts is modeled, and the corresponding detrimental
effects are investigated by deriving an upper bound for the achievable rate in
the presence of phase-shift quantization errors. Monte-Carlo simulations are
used to validate our statistical characterization of the optimal SNR, and the
corresponding analysis is used to investigate the performance gains of the
proposed system model. We reveal that IRS-assisted communications can boost
the performance of cell-free wireless architectures.
## I Introduction
Recently, wireless architectures based on the notion of cell-free have gained
much interest [1, 2]. In a cell-free system set-up, the cell-boundaries can be
relaxed, and thus, a vast number of access-points (APs) can be spatially
distributed to serve all users with a uniformly better quality-of-service
(QoS) over a much larger geographical region [1, 2]. Moreover, cell-free set-
ups may render spectral/energy efficiency gains, mitigate impediments caused
by spatial-correlated fading in compact/co-located antenna arrays, and
circumvent shadow fading impairments [1, 2]. Thus, cell-free architecture is a
foundation for practically realizing extremely large antenna arrays for next-
generation wireless standards.
An intelligent reflective surface (IRS) consists of a large number of passive
reflectors, whose reflective coefficients can be adjusted to attain desired
propagation effects for the impinging electromagnetic (EM) waves [3, 4]. The
feature of intelligently adjustable phase-shifts at an IRS can be used to
boost the signal-to-noise ratio (SNR) and to mitigate co-channel interference
at an intended destination through constructive and destructive signal
combining, respectively [5]. This leads to the notion of recycling of EM waves
within a propagation medium, and thereby, spectral/energy efficiency gains and
implementation cost reduction can be realized as IRSs are made out of low-cost
meta-atoms without active radio-frequency (RF) chains/amplifiers [4].
### I-A Our motivation
In this paper, we aim to investigate the feasibility of embedding an IRS
within a cell-free set-up. Specifically, our objective is to investigate the
performance of an IRS-assisted cell-free set-up, and thereby, we explore the
feasibility of jointly reaping the aforementioned benefits of cell-free
architectures and IRS-assisted wireless channels. Moreover, to the best of the
authors knowledge, the fundamental performance metrics for an IRS-assisted
cell-free set-up have not yet been reported in open literature. To this end,
we aim to fill this important gap in IRS literature by presenting a
performance analysis for an IRS-assisted cell-free set-up.
### I-B A literature survey for cell-free architecture and performance
analysis of IRS-assisted channels
In [1, 2], the basic concept of cell-free architectures is investigated, and
thereby, the performance metrics are compared against those of the co-located
antenna arrays. The analyses in [1, 2, 6] reveal that the cell-free set-ups
can outperform the co-located counterparts by serving users with a uniformly
better QoS, minimizing the impediments of spatial-correlation, and shortening
the end-to-end transmission distances to boost the overall energy/spectral
efficiency [1, 2]. Reference [7] proposes max-min power optimization
algorithms for cell-free massive multiple-input multiple-output (MIMO). In
[8], the performance of cell-free massive MIMO with underlay spectrum sharing
is investigated.
References [3, 4] present core architectural design principles of IRSs for
wireless communications. Ray-tracing techniques are used in [9] to generate a
novel path-loss model for IRS-assisted wireless channels. In [10], joint
optimization of precoder at the base-station (BS) and phase-shifts at the IRS
is studied through semi-definite relaxation and alternative optimization
techniques. Reference [5] studies the fundamental performance limits of
distributed IRS-assisted end-to-end channels with Nakagami-$m$ fading
channels. In [11], by using the statistical channel state information (CSI),
an optimal phase-shift design framework is developed to maximize the
achievable rates of IRS-assisted wireless channels. In [12], joint beamforming
and reflecting coefficient designs are investigated for IRSs to provision
physical layer security. Reference [13] proposes a practical IRS phase-shift
adjustment model, and thereby, the achievable rate is maximized through
jointly optimizing the transmit power and the BS beamformer by using
alternative optimization techniques.
### I-C Our contribution
In the above-referred prior research [10, 5, 11, 13, 12] on IRS-assisted
communications, a BS with either a single antenna or a co-located antenna
array is used. Inspired by this gap in the IRS/cell-free literature, in this
paper, we investigate an IRS-assisted wireless channel embedded within a
cell-free set-up over Rayleigh fading, and thereby, we present fundamental
performance metrics. To this end, first, we invoke the central limit theorem
(CLT) to tightly approximate the end-to-end optimal SNR to facilitate a
mathematically tractable probabilistic characterization. Then, we derive the
probability density function (PDF) and the cumulative distribution function (CDF)
of this approximated optimal SNR in closed-form. Thereby, we present a tight
approximation to the outage probability. Moreover, we derive tight upper/lower
bounds for the achievable rate. In particular, we investigate the impediments
of discrete phase-shifts in the presence of phase-shift quantization errors.
Finally, we present a set of rigorous numerical results to explore the
performance gains of the proposed system, and we validate the accuracy of our
analysis through Monte-Carlo simulations. From our numerical results, we
observe that by using an IRS with controllable phase-shift adjustments, the
performance of cell-free wireless set-ups can be enhanced.
Notation: The transpose of vector $\mathbf{y}$ is denoted as $\mathbf{y}^{T}$.
The expectation and variance of a random variable $Y$ are represented by
$\mathbb{E}\\!\left[{Y}\right]$ and
$\mathbb{V}\mathrm{ar}\\!\left[{Y}\right]$, respectively.
$Y\sim\mathcal{CN}\left(\mu_{Y},\sigma_{Y}^{2}\right)$ denotes that $Y$ is
complex-valued circularly symmetric Gaussian distributed with $\mu_{Y}$ mean
and $\sigma_{Y}^{2}$ variance. Moreover, $C_{n}=\{0,1,\cdots,n\}$ and
$C_{n}^{\prime}=C_{n}/\{0\}$.
Figure 1: System model - IRS-aided cell-free communication set-up
## II System, Channel and Signal Models
### II-A System and channel model
We consider a cell-free communication set-up consisting of $M$ single-antenna
APs ($\mathrm{AP}_{m}$ for $m=1,\cdots,M$) and a single-antenna destination
$(D)$. An IRS having $N$ passive reflective elements is embedded within this
cell-free set-up as shown in Fig. 1. For the sake of exposition, we denote the
set of APs as $\mathcal{M}=\{1,\cdots,M\}$ and the set of reflective
elements at the IRS as $\mathcal{N}=\{1,\cdots,N\}$.
The direct link between the $m$th AP and $D$ is represented by $u_{m}$, while
$h_{mn}$ denotes the channel between the $m$th AP and the $n$th reflective
element of the IRS. Moreover, $g_{n}$ is used to represent the channel between
the $n$th reflective element of the IRS and $D$. We model the envelopes of
all the aforementioned channels as independent Rayleigh distributed [14], and
the corresponding polar form of these channels is given by
$\displaystyle v=\lambda_{v}\mathrm{e}^{j\theta_{v}},$ (1)
where $v\in\{u_{m},h_{mn},g_{n}\}$ for $m\in\mathcal{M}$ and
$n\in\mathcal{N}$. In (1), the envelope and the phase of $v$ are given by
$\lambda_{v}$ and $\theta_{v}$, respectively. The PDF of $\lambda_{v}$ is
given by [15]
$\displaystyle
f_{\lambda_{v}}(x)=\left({x}/{\xi_{v}}\right)\mathrm{exp}\left({-x^{2}}/{\left(2\xi_{v}\right)}\right),$
(2)
where $\xi_{v}=\zeta_{v}/2$ is the Rayleigh parameter, and $\zeta_{v}$
captures the large-scale fading/path-loss of the channel $v$. Since all
reflective elements are co-located within the IRS, it is assumed that all
large-scale fading parameters are the same.
### II-B Signal model
The signal transmitted by the $m$th AP reaches $D$ through the direct and IRS-
assisted reflected channels. Thus, we can write the signal received at $D$ as
$\displaystyle
r=\sqrt{P}\sum\nolimits_{m\in{\mathcal{M}}}\left(u_{m}+\mathbf{g}^{T}\mathbf{\Theta}\mathbf{h}_{m}\right)x+w,$
(3)
where $x$ is the signal transmitted by each AP, satisfying
$\mathbb{E}\\!\left[{|x|^{2}}\right]=1$, $P$ is the transmit power at each AP,
and $w$ is an additive white Gaussian noise (AWGN) at $D$ with zero mean and
variance of $\sigma_{w}^{2}$ such that $w\sim\mathcal{CN}(0,\sigma_{w}^{2})$.
In (3),
$\mathbf{h}_{m}=[h_{m1},\cdots,h_{mn},\cdots,h_{mN}]^{T}\in\mathbb{C}^{N\times
1}$ is the channel vector between the $m$th AP and the IRS. Moreover,
$\mathbf{g}^{T}=[g_{1},\cdots,g_{n},\cdots,g_{N}]\in\mathbb{C}^{1\times N}$
denotes the channel vector between the IRS and $D$. The diagonal matrix,
$\mathbf{\Theta}=\mathrm{diag}\left(\beta_{1}\mathrm{e}^{j\theta_{1}},\cdots,\beta_{n}\mathrm{e}^{j\theta_{n}},\cdots,\beta_{N}\mathrm{e}^{j\theta_{N}}\right)\in\mathbb{C}^{N\times
N}$, captures the reflective properties of the IRS through complex-valued
reflection coefficients $\beta_{n}\mathrm{e}^{j\theta_{n}}$ for
$n\in\mathcal{N}$, where $\beta_{n}$ and $\theta_{n}$ are the magnitude of
attenuation and phase-shift of the $n$th reflective element of the IRS,
respectively. Thus, we can rewrite the received signal at $D$ in (3) as
$\displaystyle
r=\sqrt{P}\sum\nolimits_{m\in{\mathcal{M}}}\left(u_{m}+\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}{g}_{n}{h}_{mn}\mathrm{e}^{j\theta_{n}}\right)x+w.$
(4)
Thereby, we derive the SNR at $D$ from (4) as
$\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\bar{\gamma}\left|\sum\nolimits_{m\in{\mathcal{M}}}\left(u_{m}+\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}{g}_{n}{h}_{mn}\mathrm{e}^{j\theta_{n}}\right)\right|^{2}$
(5) $\displaystyle=$
$\displaystyle\bar{\gamma}\left|\sum\nolimits_{m\in{\mathcal{M}}}u_{m}+\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}{g}_{n}\left(\sum\nolimits_{m\in{\mathcal{M}}}{h}_{mn}\right)\mathrm{e}^{j\theta_{n}}\right|^{2},$
where the average transmit SNR is denoted by $\bar{\gamma}=P/\sigma_{w}^{2}$.
Then, we define $u=\sum_{m\in{\mathcal{M}}}u_{m}$ and
$h_{n}=\sum_{m\in{\mathcal{M}}}h_{mn}$. Since $u_{m}$ and $h_{mn}$ are
independent complex Gaussian distributed for $m\in\mathcal{M}$ and
$n\in\mathcal{N}$, the polar-form of $u$ and $h_{n}$ can be also expressed
similar to (1), where $\lambda_{u}$ and $\lambda_{{h}_{n}}$ are the envelops
of $u$ and $h_{n}$, respectively. Thus, $\lambda_{u}$ and $\lambda_{{h}_{n}}$
are independent Rayleigh distributed with parameters
$\xi_{u}=\sum_{m\in{\mathcal{M}}}\zeta_{u_{m}}/2$ and
$\xi_{h_{n}}=\sum_{m\in{\mathcal{M}}}\zeta_{h_{mn}}/2$, respectively. From
(1), we can rewrite the SNR in (5) in terms of the channel phases as
$\displaystyle\gamma=\bar{\gamma}\left|\lambda_{u}\mathrm{e}^{j\theta_{u}}+\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}\mathrm{e}^{j\left(\theta_{n}+\theta_{{g}_{n}}+\theta_{{h}_{n}}\right)}\right|^{2}.$
(6)
It can be seen from (6) that the received SNR at $D$ can be maximized by
smartly adjusting the phase-shift of each IRS reflecting element
$(\theta_{n})$. This enables a constructive addition of the signals received
through the direct channels and the IRS-aided reflected channels [10, 16].
To this end, the optimal choice of $\theta_{n}$ is given by
$\theta_{n}^{*}=\underset{-\pi\leq\theta_{n}\leq\pi}{\mathrm{argmax}}\;{\gamma}=\theta_{u}-\left(\theta_{g_{n}}+\theta_{h_{n}}\right)$.
Then, we can derive the optimal SNR at $D$ as
$\displaystyle\gamma^{*}=\bar{\gamma}\left|\lambda_{u}+\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}\right|^{2}.$
(7)
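Before proceeding, a minimal Monte-Carlo sketch of (7) may be helpful. It
assumes unit large-scale fading ($\zeta_{v}=1$ for all links), illustrative
values of $M$, $N$, and $\bar{\gamma}$, and $\beta_{n}=0.9$; these choices are
ours and are not part of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials = 16, 32, 10_000
beta_n, gamma_bar = 0.9, 1.0    # reflection amplitude and transmit SNR

def cgauss(shape):
    # circularly-symmetric complex Gaussian channels with unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

lam_u = np.abs(cgauss((trials, M)).sum(axis=1))     # |u|, with u = sum_m u_m
lam_h = np.abs(cgauss((trials, M, N)).sum(axis=1))  # |h_n|, with h_n = sum_m h_mn
lam_g = np.abs(cgauss((trials, N)))                 # |g_n|

# With the optimal phase-shifts, all terms add coherently as in (7):
gamma_opt = gamma_bar * (lam_u + beta_n * (lam_g * lam_h).sum(axis=1))**2
print(gamma_opt.mean())
```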
## III Preliminaries
In this section, we present a probabilistic characterization of the optimal
received SNR at $D$ in (7). First, we denote the weighted sum of the product
of random variables in (7) by
$Y=\sum_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}$. Then,
we use the fact that $\lambda_{{g}_{n}}$ and $\lambda_{{h}_{n}}$ for
$n\in\mathcal{N}$ are independently distributed Rayleigh random variables to
tightly approximate $Y$ by a one-sided Gaussian distributed random
variable $(\tilde{Y})$ by invoking the CLT [15] as [5]
$\displaystyle f_{Y}(y)\approx
f_{\tilde{Y}}(y)=\frac{\psi}{\sqrt{2\pi\sigma_{Y}^{2}}}\mathrm{exp}\left(\frac{-(y-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}\right),\quad\text{for}\;y\geq
0,$ (8)
where $\psi\triangleq 1/\mathcal{Q}\left(-\mu_{Y}/\sigma_{Y}\right)$ is a
normalization factor, which is used to ensure that
$\int_{-\infty}^{\infty}f_{\tilde{Y}}(x)dx=1$, and $\mathcal{Q}(\cdot)$ is the
Gaussian-$\mathcal{Q}$ function [15]. In (8), $\mu_{Y}$ and $\sigma_{Y}^{2}$
are given by
$\displaystyle\mu_{Y}$ $\displaystyle=$
$\displaystyle\sum\nolimits_{n\in{\mathcal{N}}}\pi\beta_{n}\left(\xi_{g_{n}}\xi_{h_{n}}\right)^{1/2}/2,$
(9a) $\displaystyle\sigma_{Y}^{2}$ $\displaystyle=$
$\displaystyle\sum\nolimits_{n\in{\mathcal{N}}}\beta_{n}^{2}\xi_{g_{n}}\xi_{h_{n}}\left(16-\pi^{2}\right)/4.$
(9b)
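The expressions (9a) and (9b) can be checked numerically; the sketch below
compares them with sample moments of $Y$, again assuming unit large-scale
fading, so that $\xi_{g_{n}}=1/2$ and $\xi_{h_{n}}=M/2$ (since $h_{n}$ sums
$M$ unit-variance channels).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, beta_n, trials = 16, 32, 0.9, 20_000

def cgauss(shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

lam_h = np.abs(cgauss((trials, M, N)).sum(axis=1))  # Rayleigh, xi_h = M/2
lam_g = np.abs(cgauss((trials, N)))                 # Rayleigh, xi_g = 1/2
Y = beta_n * (lam_g * lam_h).sum(axis=1)

xi_g, xi_h = 0.5, M / 2
mu_Y  = N * np.pi * beta_n * np.sqrt(xi_g * xi_h) / 2       # eq. (9a)
var_Y = N * beta_n**2 * xi_g * xi_h * (16 - np.pi**2) / 4   # eq. (9b)
print(Y.mean(), mu_Y)   # sample mean vs. (9a)
print(Y.var(),  var_Y)  # sample variance vs. (9b)
```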
Next, we derive a tight approximation for the PDF of $R=\lambda_{u}+Y$ as (see
Appendix A)
$\displaystyle f_{R}(x)$ $\displaystyle\approx$
$\displaystyle f_{\tilde{R}}(x)=\sqrt{\pi}\rho\left(\frac{x-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)\mathrm{exp}\left(-\Delta\left(\frac{x-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)^{2}\right)$
(10)
$\displaystyle\times\left(\mathrm{erf}\left(\frac{x-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)+1\right)+\rho\,\mathrm{exp}\left(-\frac{(x-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}\right),$
where $\mathrm{erf}\left(x\right)=({2}/{\sqrt{\pi}})\int_{0}^{x}\mathrm{e}^{-t^{2}}dt$
is the error function [17, Eqn. 8.250.1]. Here, $a$, $\rho$, and $\Delta$ are
given by
$\displaystyle a$ $\displaystyle=$
$\displaystyle{1}/{(2\xi_{u})}+{1}/{(2\sigma_{Y}^{2})},\qquad\rho={\psi}\Big{/}\left({2a\xi_{u}\sqrt{2\pi\sigma_{Y}^{2}}}\right),$
(11a) $\displaystyle\Delta$ $\displaystyle=$
$\displaystyle 2\sigma_{Y}^{2}a-1.$ (11b)
In particular, (10) serves as the exact PDF of
$\tilde{R}=\lambda_{u}+\tilde{Y}$, where $\tilde{Y}$ is the one-sided Gaussian
approximated random variable for $Y$ in (7). Then, we derive an approximated
PDF for $\gamma^{*}=\bar{\gamma}R^{2}$ as
$\displaystyle f_{\gamma^{*}}(y)$ $\displaystyle\approx$ $\displaystyle
f_{\tilde{R}}\left(\sqrt{{y}/{\bar{\gamma}}}\right)\times{1}\big{/}{2\sqrt{\bar{\gamma}y}}.$
(12)
Specifically, (12) serves as the exact PDF of
$\gamma^{*}\approx\tilde{\gamma}^{*}=\bar{\gamma}\tilde{R}^{2}$. From (10), we
derive the CDF of $\tilde{R}$ as (see Appendix B)
$\displaystyle F_{\tilde{R}}(x)$ $\displaystyle=$ $\displaystyle
1-\int_{x}^{\infty}f_{\tilde{R}}(u)du=1-\left(I_{a}+I_{b}\right),$ (13)
where $I_{a}$ and $I_{b}$ are given by
$\displaystyle I_{a}$ $\displaystyle=$
$\displaystyle\frac{\lambda\mathrm{e}^{-\Delta d^{2}}\left(\mathrm{erf}\left(d\right)+1\right)}{2\Delta}+\frac{\lambda\left(1-\mathrm{erf}\left(d\sqrt{\Delta+1}\right)\right)}{2\Delta\sqrt{\Delta+1}},$
(14a) $\displaystyle I_{b}$ $\displaystyle=$
$\displaystyle\sqrt{\frac{\pi\sigma_{Y}^{2}}{2}}\rho\left(1-\mathrm{erf}\left(\sqrt{2\sigma_{Y}^{2}a}\,d\right)\right),$
(14b)
where $\lambda=2\sigma_{Y}^{2}\rho\sqrt{\pi a}$, $\rho$ is given in (11a), and
$d=(x-\mu_{Y})/(2\sigma_{Y}^{2}\sqrt{a})$. From (13), we approximate the CDF
of $\gamma^{*}=\bar{\gamma}R^{2}$ as
$\displaystyle F_{\gamma^{*}}(y)$ $\displaystyle=$
$\displaystyle\mathrm{Pr}\left(\gamma^{*}\leq y\right)\approx
F_{\tilde{R}}\left(\sqrt{y/\bar{\gamma}}\right).$ (15)
Figure 2: PDF and CDF of the SNR ($\gamma^{*}$) for $\bar{\gamma}=-10\,$dB. The
combinations of $M$ and $N$ for Case 1 to Case 4 are set to $\{M=64,N=32\}$,
$\{M=64,N=64\}$, $\{M=144,N=64\}$, and $\{M=64,N=128\}$.
Remark 1: We plot the approximated PDF and CDF of $\gamma^{*}$ by using the
analysis in (12) and (15), respectively, in Fig. 2. Monte-Carlo simulations
are also plotted in the same figure for various $M$ and $N$ to verify the
accuracy of our approximations. From Fig. 2, we observe that our analytical
approximations for the PDF (12) and CDF (15) of $\gamma^{*}$ are accurate even
for moderately large values of $M$ and $N$.
$\displaystyle\mathcal{R}_{lb}=\mathrm{log}_{2}\left(1+\frac{\bar{\gamma}\left(\xi_{u}+\sigma_{Y}^{2}+2\mu_{u}\mu_{Y}+\mu_{u}^{2}+\mu_{Y}^{2}\right)^{3}}{\sum\nolimits_{n\in
C_{4}}\binom{4}{n}\left(2\xi_{u}\right)^{n/2}\Gamma\left(n/2+1\right)\frac{\psi}{2\sqrt{\pi}}\sum\nolimits_{i\in
C_{n}}\binom{n}{i}\left({2\sigma_{Y}^{2}}\right)^{(n-i)/2}\mu_{Y}^{i}I\left(n-i,\frac{-\mu_{Y}}{\sqrt{2\sigma_{Y}^{2}}}\right)}\right)$
(17)
$\displaystyle\hat{\mathcal{R}}_{ub}=\mathrm{log}_{2}\left(1+\bar{\gamma}\left(\xi_{u}+{(\mu_{Y}\sin(\tau))}/{\tau}\left[2\mu_{u}+{(\mu_{Y}\sin(\tau))}/{\tau}\right]+{4\sigma_{Y}^{2}}/({16-\pi^{2}})\left[4-{\pi^{2}\sin(\tau)^{2}}/{(4\tau^{2})}\right]\right)\right)$
(19)
## IV Performance Analysis
### IV-A Outage probability
An outage event occurs when the optimal received SNR (7) falls below a
threshold SNR ($\gamma_{th}$). To this end, we define the outage probability
of the proposed system model as
$P_{out}=\mathrm{Pr}\left(\gamma^{*}\leq\gamma_{th}\right)$. From (15), we can
compute a tight approximation for the outage probability as $P_{out}\approx
F_{\gamma^{*}}(\gamma_{th})$.
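For reference, the outage probability can also be estimated directly by
Monte-Carlo simulation of (7); the sketch below does so under the same
unit-fading assumptions as the earlier sketches, with illustrative values of
$\bar{\gamma}$ and $\gamma_{th}$.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, beta_n, trials = 16, 32, 0.9, 20_000
gamma_bar = 10**(-5 / 10)   # average transmit SNR of -5 dB
gamma_th  = 10**(0 / 10)    # threshold SNR of 0 dB

def cgauss(shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

lam_u = np.abs(cgauss((trials, M)).sum(axis=1))
lam_h = np.abs(cgauss((trials, M, N)).sum(axis=1))
lam_g = np.abs(cgauss((trials, N)))
gamma_opt = gamma_bar * (lam_u + beta_n * (lam_g * lam_h).sum(axis=1))**2

P_out = np.mean(gamma_opt <= gamma_th)   # empirical Pr(gamma* <= gamma_th)
print(P_out)
```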
### IV-B Average achievable rate
The average achievable rate of the proposed system can be defined as
$\mathcal{R}=\mathbb{E}\\!\left[{\mathrm{log}_{2}\left(1+\gamma^{*}\right)}\right]$.
The exact derivation of the expectation in $\mathcal{R}$ appears
mathematically intractable. Thus, we resort to tight upper/lower bounds for
$\mathcal{R}$, $\mathcal{R}_{lb}\lesssim\mathcal{R}\lesssim\mathcal{R}_{ub}$,
by invoking Jensen's inequality [18]. Next, we derive $\mathcal{R}_{ub}$
as (see Appendix C)
$\displaystyle\mathcal{R}_{ub}=\mathrm{log}_{2}\left(1+\bar{\gamma}\left(\xi_{u}+\sigma_{Y}^{2}+2\mu_{u}\mu_{Y}+\mu_{u}^{2}+\mu_{Y}^{2}\right)\right).$
(16)
We derive $\mathcal{R}_{lb}$ as given in (17) at the top of the next page.
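The upper bound (16) is a simple closed-form expression in the moments derived
above; the sketch below evaluates it under the unit-fading assumptions used in
the earlier sketches ($\xi_{u}=M/2$, $\xi_{g_{n}}=1/2$, $\xi_{h_{n}}=M/2$).

```python
import numpy as np

M, N, beta_n, gamma_bar = 16, 32, 0.9, 1.0
xi_u, xi_g, xi_h = M / 2, 0.5, M / 2
mu_u  = np.sqrt(np.pi * xi_u / 2)                           # mean of lambda_u
mu_Y  = N * np.pi * beta_n * np.sqrt(xi_g * xi_h) / 2       # eq. (9a)
var_Y = N * beta_n**2 * xi_g * xi_h * (16 - np.pi**2) / 4   # eq. (9b)

# Rate upper bound (16), in bits/s/Hz; compare with the sample value
# np.log2(1 + gamma_opt).mean() from the Monte-Carlo sketch above.
R_ub = np.log2(1 + gamma_bar * (xi_u + var_Y + 2 * mu_u * mu_Y + mu_u**2 + mu_Y**2))
print(R_ub)
```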
## V Impact of discrete phase-shift adjustments
Due to hardware limitations, the adoption of continuous phase-shift
adjustments for the passive reflective elements at the IRS is practically
challenging. Thus, we investigate the feasibility of adopting discrete
phase-shifts for the proposed set-up via phase-shift quantization. It is
assumed that a limited number of discrete phase-shifts is available at the
$n$th reflector such that $\hat{\theta}_{n}^{*}=\pi\varsigma/2^{B-1}$, where
$B$ denotes the number of quantization bits, $\varsigma=\underset{q\in\{0,\pm
1,\cdots,\pm 2^{B-1}\}}{\mathrm{argmin}}|{\theta}_{n}^{*}-\pi q/2^{B-1}|$,
and $\theta_{n}^{*}$ is the optimal phase-shift given in Section II-B. Then,
we can define the error between the continuous and quantized phase-shifts as
$\varepsilon_{n}={\theta}_{n}^{*}-\hat{\theta}_{n}^{*}$. For a large number of
quantization levels, $\varepsilon_{n}$ can be shown to be uniformly
distributed as $\varepsilon_{n}\sim\mathcal{U}\left[-\tau,\tau\right)$ with
$\tau=\pi/2^{B}$ [19]. The signal and the error $\varepsilon_{n}$ become
uncorrelated for a high number of quantization levels [19]. Thus, the optimal
SNR in (7) can be rewritten with discrete phase-shifts as
$\displaystyle\hat{\gamma}^{*}=\bar{\gamma}\left|\lambda_{u}+\sum_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}\mathrm{e}^{j\varepsilon_{n}}\right|^{2}=\bar{\gamma}\left((\lambda_{u}+Y_{R})^{2}+Y_{I}^{2}\right),$
(18)
where
$Y_{R}=\sum_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}\cos(\varepsilon_{n})$
and
$Y_{I}=\sum_{n\in{\mathcal{N}}}\beta_{n}\lambda_{{g}_{n}}\lambda_{{h}_{n}}\sin(\varepsilon_{n})$.
By following steps similar to those in Appendix C, an upper bound for the
achievable rate with phase-shift quantization errors
$(\hat{\mathcal{R}}_{ub})$ can be derived by using (18) as shown in (19).
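A small sketch of the quantization step may clarify (18): each optimal phase
is mapped to the nearest of the levels $\pi q/2^{B-1}$, and the residual
errors $\varepsilon_{n}$ enter the SNR through
$\mathrm{e}^{j\varepsilon_{n}}$. The envelope values below are illustrative
stand-ins of our own, not drawn from the system model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, B = 64, 2
theta_opt = rng.uniform(-np.pi, np.pi, N)   # stand-in optimal phases theta_n*

levels = np.pi * np.arange(-2**(B - 1), 2**(B - 1) + 1) / 2**(B - 1)
theta_hat = levels[np.argmin(np.abs(theta_opt[:, None] - levels[None, :]), axis=1)]
eps = theta_opt - theta_hat                 # errors in [-pi/2^B, pi/2^B]

lam_u = 1.0                                 # illustrative direct-link envelope
amp = rng.rayleigh(scale=1.0, size=N)       # stand-in for beta_n*lam_g_n*lam_h_n
gamma_hat = np.abs(lam_u + np.sum(amp * np.exp(1j * eps)))**2   # as in (18)
gamma_opt = (lam_u + amp.sum())**2          # continuous phase-shifts, eq. (7)
print(gamma_hat / gamma_opt)                # fraction of the optimal SNR retained
```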
## VI Numerical Results
The system parameters for our simulations are given below:
$\zeta_{v}=\left(d_{0}/d_{v}\right)^{\kappa}\times 10^{\varphi_{v}/10}$ is
used to model large-scale fading, where $v\in\{u_{m},h_{mn},g_{n}\}$ for
$m\in\mathcal{M}$ and $n\in\mathcal{N}$. The transmission distance between
nodes is denoted by $d_{v}$, $d_{0}=1\,$m is a reference distance, the
path-loss exponent is $\kappa=2.8$, and log-normal shadow fading is captured
by $10^{\varphi_{v}/10}$ with $\varphi_{v}\sim\mathcal{N}(0,8)$ [20]. In our
system topology, the IRS and $D$ are positioned at fixed locations $250\,$m
apart, while the APs are uniformly distributed over an area of $1000\times
1000$ $\mathrm{m}^{2}$. The amplitudes of the reflection coefficients are set to
$\beta_{n}=0.9$ for $n\in\mathcal{N}$, which is a typical assumption for IRSs
[10, 16].
Figure 3: The outage probability for different $M$ and $N$ and
$\gamma_{th}=0\,$dB. The combinations of $M$ and $N$ for Case-1 to Case-6 are
set to $\{M=36,N=16\}$, $\{M=36,N=32\}$, $\{M=16,N=64\}$, $\{M=36,N=64\}$,
$\{M=64,N=64\}$, and $\{M=36,N=128\}$.
In Fig. 3, we plot the outage probability as a function of the average
transmit SNR ($\bar{\gamma}$) for different combinations of distributed APs
$(M)$ and reflective elements $(N)$ at the IRS. For comparison purposes, we
also plot the outage probability for the APs-to-$D$ direct transmission
(without using an IRS) for $M=64$ in the same figure. We use our closed-form
derivation in (15) to plot the analytical outage probability approximations,
and we plot the exact counterparts through Monte-Carlo simulation. The latter
is used to verify the accuracy/tightness of our outage probability
approximations. According to Fig. 3, the tightness of our outage analysis
improves as $M$ and/or $N$ increase. The reason for this is that large
$M$ and/or $N$ improve the accuracy of the CLT. Moreover, the outage
probability can be reduced by increasing $M$ and/or $N$. For example, at an
average SNR of $-5\,$dB, the outage probability can be reduced by $99.9\%$ by
doubling $N$ from $16$ (Case-1) to $32$ (Case-2) while keeping $M=36$.
Moreover, by increasing $M,N$ from $\{M=36,N=32\}$ in Case-2 to
$\{M=64,N=64\}$ in Case-4, the average SNR required to achieve an outage
probability of $10^{-3}$ can be reduced by $155.6\%$. From Fig. 3, we observe
that the proposed IRS-aided cell-free set-up outperforms APs-to-$D$ direct
transmission. For instance, the set-up without an IRS needs an average
transmit SNR of $18\,$dB to reach an outage probability of $10^{-2}$, which is
about a $177.6\%$ increase over the transmit SNR requirement of Case-5 with
the IRS-aided set-up for the same number of APs $(M=64)$. Thus, the
co-existence of IRSs within a cell-free set-up can be beneficial in reducing
the system outage probability.
Figure 4: The average achievable rate for $N\in\{16,32,64,128,256\}$ and
$M=64$.
In Fig. 4, we study the average achievable rate of the proposed system as a
function of the average transmit SNR ($\bar{\gamma}$) for
$N\in\{16,32,64,128,256\}$. We also compare the achievable rates of
APs-to-$D$ direct transmission and the IRS-aided transmission. The upper and
lower bounds for the achievable rates are plotted by using our analysis in
(16) and (17), respectively. We again validate the accuracy of our analysis
through Monte-Carlo simulations of the exact achievable rate. The tightness of
our upper/lower rate bounds is clearly depicted in the enlarged portion of
Fig. 4. We observe that rate gains can be achieved by increasing the number of
reflective elements at the IRS. Fig. 4 also illustrates that an IRS can be
embedded within a cell-free set-up to boost the achievable gains. For
instance, an IRS with $N=16$ provides a rate gain of about $180\%$ compared to
the APs-to-$D$ transmission without an IRS at an average transmit SNR of
$0\,$dB.
Figure 5: The impact of discrete phase-shifts with phase-shift quantization
on the average achievable rate for different $M$ and $N$. The combinations of
$M$ and $N$ for Case-1 to Case-4 are set to $\{M=36,N=32\}$, $\{M=64,N=32\}$,
$\{M=36,N=64\}$, and $\{M=64,N=64\}$.
In Fig. 5, we investigate the impact of discrete phase-shifts and the number
of quantization bits ($B$) by plotting the percentage rate ratio
$(\mathcal{R}_{ub}^{per})$ against the average transmit SNR for different
combinations of $M$ and $N$. The phase-shift quantization errors are uniformly
distributed: $\mathcal{U}\left[-\pi/2^{B},\pi/2^{B}\right)$. The percentage
rate ratio is defined as follows:
$\mathcal{R}_{ub}^{per}=\hat{\mathcal{R}}_{ub}/\mathcal{R}_{ub}\times 100\%$,
where $\hat{\mathcal{R}}_{ub}$ and $\mathcal{R}_{ub}$ are the upper bounds of
the average achievable rate with and without phase-shift quantization errors
given in (19) and (16), respectively. Monte-Carlo simulation curves are also
generated to validate our analysis. Fig. 5 shows that the impact of
phase-shift quantization errors vanishes as $B$ increases. For instance, more
than $98\%$ of the average rate can be recovered when $4$-bit quantization is
used at the IRS, compared to the system with continuous phase-shift
adjustments. As per Fig. 5, $\mathcal{R}_{ub}^{per}$ improves in the high-SNR
regime. For example, for $B$ equal to 1, 2, and 4 bits, more than $90\%$,
$98\%$, and almost $100\%$ of the average rate can be recovered, respectively,
at a transmit SNR of $20\,$dB. Fig. 5 also shows that larger $M$ and $N$ are
beneficial for recovering the achievable rate in the moderate-to-large
transmit SNR regime.
## VII Conclusion
In this paper, the feasibility of adopting an IRS embedded within a cell-free
set-up has been explored. The optimal received SNR through multiple
distributed APs with an IRS-aided channel has been statistically characterized
by deriving the tight PDF and CDF approximations. This probabilistic SNR
analysis has been used to derive tight approximations/bounds for the outage
probability and the average achievable rate in closed-form. The impairments of
discrete phase-shifts with quantization errors have been explored. The
accuracy of our performance analysis of the proposed system set-up has been
verified by providing Monte-Carlo simulations. We observe from our numerical
results that IRS-aided cell-free system set-ups may be used to reduce the
outage probability and boost the achievable rates of next-generation wireless
systems.
## Appendix A The derivation of PDF of $\tilde{R}$ in (10)
By using the fact that $\lambda_{u}$ and $\tilde{Y}$ are independent random
variables, we derive the PDF of $\tilde{R}$ as
$\displaystyle f_{\tilde{R}}(x)$ $\displaystyle=$
$\displaystyle\int_{0}^{\infty}f_{u}(u)f_{\tilde{Y}}(x-u)du$
$\displaystyle=2a\rho\mathrm{e}^{-\frac{(x-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}}\int_{0}^{\infty}u\mathrm{e}^{-au^{2}+bu}du$
$\displaystyle=2a\rho\mathrm{e}^{-\frac{(x-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}}\mathrm{e}^{\frac{b^{2}}{4a}}\int_{0}^{\infty}u\mathrm{e}^{-a\left(u-\frac{b}{2a}\right)^{2}}du$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}2a\rho\mathrm{e}^{-\frac{(x-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}}\mathrm{e}^{\frac{b^{2}}{4a}}\left(\underbrace{\int_{-b/2a}^{\infty}t\mathrm{e}^{-at^{2}}dt}_{I_{1}}+\frac{b}{2a}\underbrace{\int_{-b/2a}^{\infty}\mathrm{e}^{-at^{2}}dt}_{I_{2}}\right),$
where $b=(x-\mu_{Y})/\sigma_{Y}^{2}$. The step $(a)$ is obtained by letting
$t=u-b/2a$. Then, we can evaluate $I_{1}$ in (A) as
$\displaystyle I_{1}$ $\displaystyle=$
$\displaystyle\int_{-b/2a}^{\infty}t\mathrm{e}^{-at^{2}}dt\stackrel{{\scriptstyle(b)}}{{=}}\left[-\mathrm{e}^{-at^{2}}/2a\right]_{-b/2a}^{\infty}=\frac{1}{2a}\,\mathrm{e}^{-b^{2}/4a},$
(20)
where the step $(b)$ is computed by using [17, Eqn. 2.33.12]. Next, we
evaluate $I_{2}$ as
$\displaystyle I_{2}$ $\displaystyle=$
$\displaystyle\int_{-b/2a}^{\infty}\mathrm{e}^{-at^{2}}dt\stackrel{{\scriptstyle(c)}}{{=}}\left[\frac{\sqrt{\pi}\mathrm{erf}\left(\sqrt{a}t\right)}{2\sqrt{a}}\right]_{-b/2a}^{\infty}$
(21) $\displaystyle=$
$\displaystyle\frac{\sqrt{\pi}}{2\sqrt{a}}\left(1-\mathrm{erf}\left(\frac{-b}{2\sqrt{a}}\right)\right),$
where the step $(c)$ is due to [17, Eqn. 2.33.16]. We substitute (20) and (21)
into (A) to obtain the PDF of $\tilde{R}$ in (10).
## Appendix B The derivation of CDF of $\tilde{R}$ in (13)
We substitute (10) into (13) to derive $I_{a}$ as
$\displaystyle I_{a}$ $\displaystyle=$
$\displaystyle\sqrt{\pi}\rho\int_{x}^{\infty}\left(\frac{u-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)\mathrm{e}^{-\Delta\left(\frac{u-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)^{2}}\left(\mathrm{erf}\left(\frac{u-\mu_{Y}}{2\sigma_{Y}^{2}\sqrt{a}}\right)+1\right)du$
(22) $\displaystyle\stackrel{{\scriptstyle(d)}}{{=}}$
$\displaystyle\lambda\int_{d}^{\infty}t\,\mathrm{exp}\left(-\Delta
t^{2}\right)\left(\mathrm{erf}\left(t\right)+1\right)dt$
$\displaystyle\stackrel{{\scriptstyle(e)}}{{=}}$
$\displaystyle\lambda\left[\frac{-\mathrm{e}^{-\Delta
t^{2}}(\mathrm{erf}\left(t\right)+1)}{2\Delta}\right]_{d}^{\infty}+\frac{\lambda}{\Delta\sqrt{\pi}}\int_{d}^{\infty}\mathrm{e}^{-t^{2}(\Delta+1)}dt$
$\displaystyle\stackrel{{\scriptstyle(f)}}{{=}}$
$\displaystyle\frac{\lambda\mathrm{e}^{-\Delta
d^{2}}\left(\mathrm{erf}\left(d\right)+1\right)}{2\Delta}+\frac{\lambda\left(1-\mathrm{erf}\left(d\sqrt{\Delta+1}\right)\right)}{2\Delta\sqrt{\Delta+1}},$
where $\lambda=2\sigma_{Y}^{2}\rho\sqrt{\pi a}$ and
$d=(x-\mu_{Y})/(2\sigma_{Y}^{2}\sqrt{a})$. The step $(d)$ is obtained through
the substitution $t=(u-\mu_{Y})/(2\sigma_{Y}^{2}\sqrt{a})$. The step $(e)$
follows from integration by parts, while the step $(f)$ is due to [17, Eqn.
2.33.16]. Next, we compute $I_{b}$ as
$\displaystyle I_{b}$ $\displaystyle=$
$\displaystyle\rho\int_{x}^{\infty}\mathrm{e}^{-\frac{(u-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}}du\stackrel{{\scriptstyle(g)}}{{=}}\sqrt{2\sigma_{Y}^{2}}\rho\int_{\sqrt{2\sigma_{Y}^{2}a}\,d}^{\infty}\mathrm{e}^{-t^{2}}dt$
(23) $\displaystyle\stackrel{{\scriptstyle(h)}}{{=}}$
$\displaystyle\sqrt{\frac{\pi\sigma_{Y}^{2}}{2}}\rho\left(1-\mathrm{erf}\left(\sqrt{2\sigma_{Y}^{2}a}\,d\right)\right),$
where the step $(g)$ follows from the change of variable
$t=(u-\mu_{Y})/\sqrt{2\sigma_{Y}^{2}}$, and the step $(h)$ results from [17,
Eqn. 2.33.16].
## Appendix C The derivation of $\mathcal{R}_{lb}$ and $\mathcal{R}_{ub}$ in
(17) and (16)
First, by invoking Jensen’s inequality, $\mathcal{R}_{lb}$ and
$\mathcal{R}_{ub}$ can be defined as
$\displaystyle\mathcal{R}_{lb}$ $\displaystyle=$
$\displaystyle\mathrm{log}_{2}\left(1+\left(\mathbb{E}\\!\left[{1/\tilde{\gamma}^{*}}\right]\right)^{-1}\right),$
(24a) $\displaystyle\mathcal{R}_{ub}$ $\displaystyle=$
$\displaystyle\mathrm{log}_{2}\left(1+\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]\right).$
(24b)
Then, we evaluate the expectation term in (24b) as
$\displaystyle\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]$ $\displaystyle=$
$\displaystyle\mathbb{E}\\!\left[{\bar{\gamma}\tilde{R}^{2}}\right]=\bar{\gamma}\mathbb{E}\\!\left[{(\lambda_{u}+\tilde{Y})^{2}}\right]$
(25) $\displaystyle=$ $\displaystyle\bar{\gamma}\sum_{n\in
C_{2}}\\!\\!\binom{2}{n}\mathbb{E}\\!\left[{\lambda_{u}^{(2-n)}}\right]\mathbb{E}\\!\left[{\tilde{Y}^{n}}\right]$
$\displaystyle=$
$\displaystyle\bar{\gamma}\left(\xi_{u}+\mu_{u}^{2}+\sigma_{Y}^{2}+\mu_{Y}^{2}+2\mu_{u}\mu_{Y}\right),$
where $\mu_{u}=\sqrt{\pi\xi_{u}/2}$. Moreover, $\mu_{Y}$ and $\sigma_{Y}^{2}$
are given in (9a) and (9b), respectively. By substituting (25) into (24b),
$\mathcal{R}_{ub}$ can be computed as (16). Next, we can write the expectation
term in (24a) as
$\displaystyle\mathbb{E}\\!\left[{1/\tilde{\gamma}^{*}}\right]={1}/{\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]}+{\mathbb{V}\mathrm{ar}\\!\left[{\tilde{\gamma}^{*}}\right]}/{\left(\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]\right)^{3}},$
(26)
where $\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]$ is defined in (25) and
$\mathbb{V}\mathrm{ar}\\!\left[{\tilde{\gamma}^{*}}\right]=\bar{\gamma}^{2}\mathbb{E}\\!\left[{\tilde{R}^{4}}\right]-\left(\mathbb{E}\\!\left[{\tilde{\gamma}^{*}}\right]\right)^{2}$.
Then, we can compute $\mathbb{E}\\!\left[{\tilde{R}^{4}}\right]$ as follows:
$\displaystyle\mathbb{E}\!\left[{\tilde{R}^{4}}\right]$ $\displaystyle=$
$\displaystyle\mathbb{E}\!\left[{(\lambda_{u}+\tilde{Y})^{4}}\right]=\sum_{n\in
C_{4}}\binom{4}{n}\mathbb{E}\!\left[{\lambda_{u}^{(4-n)}}\right]\mathbb{E}\!\left[{\tilde{Y}^{n}}\right],$
(27)
where the $n$th moment of $\lambda_{u}^{n}$ is denoted by
$\mathbb{E}\\!\left[{\lambda_{u}^{n}}\right]$. We compute
$\mathbb{E}\\!\left[{\lambda_{u}^{n}}\right]$ as
$\displaystyle\mathbb{E}\\!\left[{\lambda_{u}^{n}}\right]$ $\displaystyle=$
$\displaystyle\int_{0}^{\infty}x^{n}f_{u}(x)dx=\int_{0}^{\infty}\frac{x^{n+1}}{\xi_{u}}\mathrm{exp}\left(-\frac{x^{2}}{2\xi_{u}}\right)dx$
(28) $\displaystyle\stackrel{{\scriptstyle(i)}}{{=}}$
$\displaystyle\left(2\xi_{u}\right)^{n/2}\Gamma\left(n/2+1\right),$
where the step $(i)$ is evaluated from [17, Eqn. 2.33.10] and
$\Gamma(t)=\int_{0}^{\infty}x^{t-1}\mathrm{e}^{-x}dx$ is the Gamma function [17,
Eqn. 8.310.1]. Then, we evaluate $\mathbb{E}\\!\left[{\tilde{Y}^{n}}\right]$
for $n\in C_{4}^{\prime}$ as
$\displaystyle\mathbb{E}\!\left[{\tilde{Y}^{n}}\right]$ $\displaystyle=$
$\displaystyle\frac{\psi}{\sqrt{2\pi\sigma_{Y}^{2}}}\int_{0}^{\infty}y^{n}\mathrm{e}^{-\frac{(y-\mu_{Y})^{2}}{2\sigma_{Y}^{2}}}dy$
$\displaystyle\stackrel{{\scriptstyle(j)}}{{=}}$
$\displaystyle\frac{\psi}{\sqrt{\pi}}\int_{{-\mu_{Y}}/{\sqrt{2\sigma_{Y}^{2}}}}^{\infty}\left(\sqrt{2\sigma_{Y}^{2}}t+\mu_{Y}\right)^{n}\mathrm{e}^{-t^{2}}dt$
$\displaystyle\stackrel{{\scriptstyle(k)}}{{=}}$
$\displaystyle\frac{\psi}{2\sqrt{\pi}}\sum\limits_{i\in
C_{n}}\binom{n}{i}\left({2\sigma_{Y}^{2}}\right)^{\frac{n-i}{2}}\mu_{Y}^{i}\,I\!\left(n-i,\frac{-\mu_{Y}}{\sqrt{2\sigma_{Y}^{2}}}\right),$
where the step $(j)$ is due to a change of the dummy variable, and the step
$(k)$ is obtained by expanding
$\left(\sqrt{2\sigma_{Y}^{2}}t+\mu_{Y}\right)^{n}$ binomially. Moreover,
$I(\cdot,\cdot)$ is given by
$\displaystyle I\left(m,t\right)=\begin{cases}(-1)^{m}\gamma\left(\frac{m+1}{2},t^{2}\right)+\Gamma\left(\frac{m+1}{2}\right),&\text{for}\;\;t\leq 0,\\ \Gamma\left(\frac{m+1}{2},t^{2}\right),&\text{otherwise},\end{cases}$
(30)
where $\gamma(\lambda,x)=\int_{0}^{x}\mathrm{e}^{-t}t^{\lambda-1}dt$ is the
lower incomplete Gamma function [17, Eqn. 8.350.1]. Finally,
$\mathcal{R}_{lb}$ is derived as (17).
## References
* [1] H. Q. Ngo _et al._ , “Cell-Free Massive MIMO: Uniformly Great Service for Everyone,” in _IEEE 16th Int. Workshop on Signal Process. Adv. in Wireless Commun. (SPAWC)_ , June 2015, pp. 201–205.
* [2] ——, “Cell-Free Massive MIMO versus Small Cells,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 3, pp. 1834–1850, Mar. 2017.
* [3] M. D. Renzo _et al._ , “Smart Radio Environments Empowered by Reconfigurable AI Meta-Surfaces: An idea whose time has come,” _EURASIP J. Wireless Commun. Net._ , May 2019.
* [4] C. Liaskos _et al._ , “A New Wireless Communication Paradigm through Software-Controlled Metasurfaces,” _IEEE Commun. Mag._ , vol. 56, no. 9, pp. 162–169, 2018.
* [5] D. L. Galappaththige, D. Kudathanthirige, and G. Amarasuriya Aruma Baduge, “Performance Analysis of Distributed Intelligent Reflective Surface Aided Communications,” in _IEEE Global Commun. Conf. (GLOBECOM)_ , May 2020, pp. 1–6, (submitted).
* [6] H. Q. Ngo _et al._, “On the Total Energy Efficiency of Cell-Free Massive MIMO,” _IEEE Trans. Green Commun. Netw._, vol. 2, no. 1, pp. 25–39, 2018.
* [7] E. Nayebi _et al._ , “Precoding and Power Optimization in Cell-Free Massive MIMO Systems,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 7, pp. 4445–4459, July 2017.
* [8] D. L. Galappaththige and G. Amarasuriya, “Cell-Free Massive MIMO with Underlay Spectrum-Sharing,” in _IEEE Int. Conf. on Commun. (ICC)_ , 2019, pp. 1–7.
* [9] Ö. Özdogan, E. Björnson, and E. G. Larsson, “Intelligent Reflecting Surfaces: Physics, Propagation, and Pathloss Modeling,” _IEEE Wireless Commun. Lett._ , vol. 9, no. 5, pp. 581–585, 2020.
* [10] Q. Wu and R. Zhang, “Intelligent Reflecting Surface Enhanced Wireless Network via Joint Active and Passive Beamforming,” _IEEE Trans. Wireless Commun._ , pp. 1–1, 2019.
* [11] Y. Han _et al._ , “Large Intelligent Surface-Assisted Wireless Communication Exploiting Statistical CSI,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 8, pp. 8238–8242, Aug 2019.
* [12] J. Chen _et al._ , “Intelligent Reflecting Surface: A Programmable Wireless Environment for Physical Layer Security,” _IEEE Access_ , vol. 7, pp. 82 599–82 612, 2019.
* [13] S. Abeywickrama, R. Zhang, and C. Yuen, “Intelligent Reflecting Surface: Practical Phase Shift Model and Beamforming Optimization,” in _IEEE Int. Conf. on Commun. (ICC)_ , 2020, pp. 1–6.
* [14] Z. Ding and H. Vincent Poor, “A Simple Design of IRS-NOMA Transmission,” _IEEE Commun. Lett._, vol. 24, no. 5, pp. 1119–1123, 2020.
* [15] A. Papoulis and S. U. Pillai, _Probability, Random Variables, and Stochastic Processes_ , 4th ed. McGraw Hill, 2002.
* [16] Q. Wu and R. Zhang, “Towards Smart and Reconfigurable Environment: Intelligent Reflecting Surface Aided Wireless Network,” _IEEE Commun. Mag._ , vol. 58, no. 1, pp. 106–112, 2020.
* [17] I. Gradshteyn and I. Ryzhik, _Table of Integrals, Series, and Products_, 7th ed. Academic Press, 2007.
* [18] Q. Zhang _et al._ , “Power Scaling of Uplink Massive MIMO Systems with Arbitrary-Rank Channel Means,” _IEEE J. Sel. Areas Signal Process._ , vol. 8, no. 5, pp. 966–981, Oct. 2014.
* [19] S. Haykin and M. Moher, _Communication Systems_ , 5th ed. Wiley India Pvt. Limited, 2009.
* [20] T. L. Marzetta _et al._, _Fundamentals of Massive MIMO_. Cambridge University Press, Cambridge, UK, 2016.
# Probing criticality with deep learning in relativistic heavy-ion collisions
Yige Huang, Long-Gang Pang <EMAIL_ADDRESS>, Xiaofeng Luo <EMAIL_ADDRESS>, and
Xin-Nian Wang <EMAIL_ADDRESS>
Key Laboratory of Quark and Lepton Physics (MOE) & Institute of Particle
Physics, Central China Normal University, Wuhan 430079, China
Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA
94720, USA
###### Abstract
Systems with different interactions could develop the same critical behaviour
due to the underlying symmetry and universality. Using this principle of
universality, we can embed critical correlations modeled on the 3D Ising model
into the simulated data of heavy-ion collisions, hiding weak signals of a few
inter-particle correlations within a large particle cloud. Employing a point
cloud network with dynamical edge convolution, we are able to identify events
with critical fluctuations through supervised learning, and pick out a large
fraction of signal particles used for decision-making in each single event.
## I Introduction
Quantum Chromodynamics (QCD) is the fundamental theory of the strong interaction. Exploring the phase structure of strongly interacting QCD matter is one of the main goals of heavy-ion collision experiments Fukushima and Hatsuda (2011); Bzdak _et al._ (2020); Luo and Xu (2017). Lattice QCD Aoki _et al._ (2009); Ding _et al._ (2019, 2015) predicts a smooth crossover transition from the normal hadronic phase to the Quark-Gluon Plasma (QGP) around a temperature of $T_{c}=156$ MeV at vanishing baryon chemical potential ($\mu_{B}=0$ MeV). In the finite baryon density region, QCD-based model calculations Shi _et al._ (2014); Gao and Liu (2016); Fischer (2019); Fu _et al._ (2020) indicate a possible QCD critical point (CP), which is the end point of the first-order phase transition boundary between hadronic matter and the QGP.
Searching for the CP is one of the most important goals of the beam energy scan (BES) program at the Relativistic Heavy-Ion Collider (RHIC) Fukushima and Hatsuda (2011); Bzdak _et al._ (2020); Luo and Xu (2017). Many theoretical
and experimental efforts have been made to locate the CP Stephanov (2004,
2006); Luo and Xu (2017). One avenue is to classify the smooth crossover and
first order phase transition using the information from the final state
particle spectra and collective flow Hofmann _et al._ (1976); Stoecker and
Greiner (1986); Brachmann _et al._ (2000a, b); Csernai and Rohrich (1999);
Ivanov _et al._ (2002); Rischke _et al._ (1995); Stoecker (2005); Csernai
_et al._ (2005); Nara _et al._ (2017, 2018a, 2018b); Paech _et al._ (2003).
This method looks for the consequences of the softening of the equation of state: the pressure gradients are much smaller in a medium with a first-order phase transition than in one with a smooth crossover, which leads to slower fluid acceleration and smaller transverse momenta of final-state particles. Another avenue is to search for the enhanced fluctuations when the system goes through the critical point. These include, for example, fluctuations of conserved charges Stephanov (2009, 2011); Aggarwal _et al._
(2010); Adamczyk _et al._ (2014a, b, 2018); Adam _et al._ (2021); Abdallah
_et al._ (2021), hydrodynamic fluctuations Nahrgang _et al._ (2011); Herold
_et al._ (2013); Plumberg and Kapusta (2017), fluctuations caused by spinodal
instabilities Li and Ko (2016); Scavenius _et al._ (2001); Palhares and Fraga
(2010); Herold _et al._ (2014); Li and Ko (2017); Chomaz _et al._ (2004);
Randrup (2004); Sasaki _et al._ (2007); Steinheimer and Randrup (2012, 2013);
Steinheimer _et al._ (2014) and enhanced light nuclei yield ratio due to
baryon density fluctuations Sun _et al._ (2018); Yu _et al._ (2020); Sun
_et al._ (2021); Zhao _et al._ (2021).
Many critical phenomena in systems with different interactions can develop the same critical behaviour, with a universality that is dictated by the symmetry of the systems, and can be described by the same critical exponents Wilson and Kogut (1974). Lee and Yang proved that the Ising model in a magnetic field and a lattice gas are mathematically equivalent Lee and Yang (1952). Employing this universality, one can therefore map the QCD equation of state to that given by a 3-dimensional Ising model in the same universality class Lee and Yang (1952); Stephanov (2004); Pradeep and Stephanov (2019); Karthein _et al._ (2021); Teaney (2021); Bluhm _et al._ (2020) to study the QCD phase diagram. The divergence of the correlation length near the critical point leads to critical opalescence and scale invariance, which means that the system is self-similar when the resolution changes. One thus expects that particles from the freeze-out hyper-surface close to the critical point have a multi-particle fractal structure in momentum space Bialas and Peschanski (1988); Satz (1989); Hwa (1990); Antoniou _et al._ (2001); Wu _et al._ (2020). Experimentally, intermittency analysis has been proposed to probe the self-similarity and density fluctuations in heavy-ion collisions. Though a non-trivial intermittency phenomenon was recently observed by the NA61/SHINE experiment at the CERN SPS Anticic _et al._ (2015); Davis (2020); Davis _et al._ (2019) in Ar+Sc collisions at 150 AGeV, the magnitude of background fluctuations is large and the power-law scaling is not fully established. No intermittency signal is observed in C+C, Pb+Pb and Be+Be collisions at similar collision energies. Critical Monte Carlo simulations suggest a maximum critical proton fraction smaller than $0.3$% in Be+Be collisions, indicating that the traditional intermittency analysis may fail to find the weak signal of self-similarity if the fraction of CMC particles is small compared with the uncorrelated background. It is interesting to explore whether state-of-the-art deep learning can help to identify the weak intermittency signal in each event of heavy-ion collisions.
Recently, deep learning has been used to study the QCD equation of state by classifying phase transition types, using convolutional neural networks Pang _et al._ (2018); Pang (2021); Du _et al._ (2020); Kvasiuk _et al._ (2020) and point cloud networks Steinheimer _et al._ (2019); Kuttan _et al._ (2020). In heavy-ion collisions at low energies, an auto-encoder with a single latent variable has also been used to study the order parameter of the nuclear liquid-gas phase transition Wang _et al._ (2020). In these studies, deep learning is powerful in mapping momentum or charge distributions of particles to the type of QCD phase transition. In this study, we train a dynamical edge convolution network plus a point cloud network to identify weak intermittency signals of critical fluctuations among abundant uncorrelated background particles. Employing Critical Monte Carlo (CMC) Antoniou _et al._ (2001); Wu _et al._ (2020), we encode the self-similarity in the inter-particle distances in momentum space. Further, we assume that only a small fraction of particles carry intermittency, which does not change the single-particle distribution.
This paper is organized as follows. In Sec. II, we present the JAM transport model, which is used to generate data on multi-particle production in heavy-ion collisions; the CMC model, used to generate intermittency signals of critical fluctuations; and the deep neural network, used for both classification and tagging. In Sec. III, the prediction accuracy is compared for the point cloud network and the dynamical edge convolution neural network, and we show the performance of signal-particle tagging. In Sec. IV, we discuss and summarize the findings and the implications of the present work.
## II Method
Probing critical fluctuations in heavy-ion collisions is a typical inverse problem. The information on criticality is transmitted through the dynamical evolution of the dense medium in heavy-ion collisions and gets encoded in the final-state hadrons that are recorded by detectors. In the forward process, relativistic hydrodynamics as well as hadronic transport models are widely used to generate single-particle distributions and multi-hadron correlations. In the present study, we use the hadronic transport model JAM Nara _et al._ (2000); Nara (2019) to generate background events without critical fluctuations. To introduce critical fluctuations, the so-called Critical Monte-Carlo (CMC) model Antoniou _et al._ (2001); Wu _et al._ (2020) is applied to generate a series of correlated particle momenta, which are used to replace the momenta of particles in JAM events.
In the inverse process, a point cloud network and a dynamical edge convolution network are trained to identify critical fluctuations among a large number of uncorrelated background particles. The traditional intermittency analysis is also carried out to probe the encoded critical signals in the JAM events and to validate the effectiveness of the deep learning method.
### II.1 The JAM and Critical Monte-Carlo model
The JAM model is a hadronic transport model for simulating heavy-ion collisions Sorge (1995, 1997); Bass _et al._ (1998); Bleicher _et al._ (1999); Kahana _et al._ (1996); Li and Ko (1998); Lin _et al._ (2005); Nara _et al._ (2000); Nara (2019); Weil _et al._ (2016). It simulates the complicated process from initial-stage nuclear collisions to multi-particle production and final-state hadronic interactions. Independent binary collisions among hadrons, including produced ones, are modeled using the vacuum hadron-hadron scattering cross sections. In the present study, the mean-field mode of the JAM model is used to generate background events without critical fluctuations.
To simulate events involving critical fluctuations, the Critical Monte-Carlo (CMC) model Antoniou _et al._ (2001, 2006); Wu _et al._ (2020) is used to generate a series of correlated particle momenta according to a power-law function:
$f(\Delta p)=A\Delta p^{-\alpha}$ (1)
where $\Delta p$ is the distance between two CMC particles along an axis in momentum space. $\nu=1/6$ is an index related to the universality class of the Ising model, and we set $\alpha=1+\nu$. $a$ and $b$ are the minimum and maximum of $\Delta p$; in our study, we set $a=2\times 10^{-7}\,\mathrm{GeV/c}$ and $b=2\,\mathrm{GeV/c}$. $A=(\nu a^{\nu}b^{\nu})/(b^{\nu}-a^{\nu})$ is the normalization coefficient, which is independent of $\Delta p$. In this study, we only consider the 2D momentum space $(p_{x},p_{y})$. The Levy-flight random-walk algorithm proposes the next step with strides following the distribution $f(\Delta p)=A\Delta p^{-\alpha}$ for $\Delta p_{x}$ and $\Delta p_{y}$ independently; in this way, two sequences of $p_{x}$ and $p_{y}$ of CMC particles are generated whose adjacent differences $\Delta p$ obey the power-law distribution. The self-similarity, or intermittency, is thus encoded in these CMC particles, and is related to the large local density fluctuations expected near the critical point.
For such a probability density function $f(\Delta p)=A\Delta p^{-1-\nu}$ on the range $(a,b)$, it is possible to derive its cumulative distribution function:
$F(\Delta p)=\frac{b^{\nu}(\Delta p^{\nu}-a^{\nu})}{\Delta p^{\nu}(b^{\nu}-a^{\nu})}$ (2)
where $F(\Delta p)=\int_{a}^{\Delta p}f(t)\,\mathrm{d}t$ is the cumulative distribution function of the random variable $\Delta p$. One can then compute the inverse function of $F(\Delta p)$:
$\Delta p(F)=\left(\frac{a^{\nu}b^{\nu}}{b^{\nu}-b^{\nu}F+a^{\nu}F}\right)^{1/\nu}$ (3)
By drawing $F$ from a uniform distribution between 0 and 1 and using Eq. 3, one obtains a sample of $\Delta p$.
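As an illustration, the sketch below implements this inverse-transform sampling and the Levy-flight walk in Python; the function names, the seed, and the random choice of stride sign are our own assumptions, while $\nu$, $a$ and $b$ take the values quoted above.
```python
import numpy as np

nu, a, b = 1.0 / 6.0, 2e-7, 2.0   # values quoted in the text (GeV/c)

def sample_stride(rng, size=None):
    """Draw Delta p from f(x) = A * x**(-1 - nu) on [a, b] via Eq. (3)."""
    F = rng.uniform(0.0, 1.0, size=size)
    return (a**nu * b**nu / (b**nu - b**nu * F + a**nu * F)) ** (1.0 / nu)

def levy_flight(rng, p0, n_particles):
    """Build correlated (px, py) sequences by independent power-law strides.

    The random choice of stride sign is our assumption; the text only
    specifies the distribution of the stride lengths.
    """
    signs = rng.choice([-1.0, 1.0], size=(n_particles - 1, 2))
    strides = np.stack([sample_stride(rng, n_particles - 1),
                        sample_stride(rng, n_particles - 1)], axis=1)
    # Cumulative sum of strides: adjacent differences obey the power law.
    return np.vstack([p0, p0 + np.cumsum(signs * strides, axis=0)])

rng = np.random.default_rng(42)
cmc_particles = levy_flight(rng, p0=np.array([0.3, -0.1]), n_particles=20)
```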
### II.2 Data set preparation
We generate about $2.2\times 10^{5}$ events of central Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 27 GeV with impact parameters $b<3\ \mathrm{fm}$. Each event consists of hundreds of charged particles, including pions, kaons and protons. The transverse momentum components $p_{x}$ and $p_{y}$ are taken as the two features of each particle; each event therefore stores one particle cloud in 2-dimensional momentum space. $2\times 10^{5}$ events are used to form the training set, while the numbers of events for validation and testing are $1\times 10^{3}$ and $2\times 10^{4}$, respectively. For each JAM event, a corresponding CP event is created that encodes the critical fluctuation signals from the CMC model. As a result, $4.4\times 10^{5}$ events in total are used in our study. To avoid data pollution, an event with critical fluctuations and its corresponding JAM event are always placed in the same data split: if a JAM event is in the training data, the event with critical fluctuations associated with it is also put in the training data. We refer to these events with critical fluctuations as CP events and to the particles that encode the critical fluctuations as CMC particles. Since the CMC model only generates the momentum correlation pattern and carries no information on specific particle species, we do not distinguish between particle types when performing the replacement of particles in a JAM event.
For a given JAM event, we use the replacing rate $\eta=N_{CMC}/N_{JAM}$ to describe the multiplicity ratio of CMC particles to JAM particles; the number of CMC particles introduced into the corresponding CP event reflects how strongly the critical signal is encoded. In our study, two kinds of CP events, with $\eta=5\%$ and $\eta=10\%$ respectively, are prepared. The detailed replacing procedure is listed below (a schematic code sketch follows at the end of this subsection):
1. Randomly select a particle in the chosen JAM event and use its $(p_{x},p_{y})$ as the starting momentum for generating the CMC event.
2. Fill a histogram $H$ of the transverse momentum distribution of the generated CMC event. Denote the maximum bin content of this histogram as $M$.
3. Loop over the particles in the JAM event. For each particle, find its corresponding $p_{T}$ bin in $H$ and record the content of $H$ in that bin as $f$.
4. Draw a random number $y$ uniformly distributed between $0$ and $M$. If $y\leq f$, randomly select a CMC particle in the $p_{T}$ bin and replace this JAM particle with it; if $y>f$, skip this JAM particle and return to step 3 for the next JAM particle.
5. Repeat steps 3 and 4 until all the CMC particles are used or all the JAM particles have been visited.
By applying this algorithm, the $p_{T}$ spectrum of the substituted JAM particles stays close to that of the introduced CMC particles, hence the $p_{T}$ spectra of a JAM event and its corresponding CP event are quite similar. Even if there is some fluctuation in the $p_{T}$ distribution, the overall $p_{T}$ spectrum is not greatly affected, owing to the small fraction of CMC particles (5% or 10%) in the CP event. To account for the momentum resolution of experimental detectors, we introduce an uncertainty in the momentum of each particle with a smearing of $\delta p_{i}\approx\pm 0.05p_{i}$, where $i=x,y$. The smearing is applied after the JAM and CP events are generated.
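A schematic Python implementation of steps 1-5 and of the momentum smearing is given below; the variable names, the binning choice and the uniform reading of the $\pm 5\%$ smearing are illustrative assumptions rather than the authors' actual code.
```python
import numpy as np

def replace_with_cmc(rng, jam_pt, cmc_pt, bins):
    """Replace JAM particles by CMC particles of similar p_T (steps 2-5).

    jam_pt, cmc_pt: arrays of shape (N, 2) with (px, py); step 1 (seeding
    the CMC walk at a JAM particle's momentum) happens upstream.
    """
    # Step 2: histogram of the CMC transverse-momentum magnitudes.
    cmc_mag = np.hypot(cmc_pt[:, 0], cmc_pt[:, 1])
    H, edges = np.histogram(cmc_mag, bins=bins)
    M = H.max()
    used = np.zeros(len(cmc_pt), dtype=bool)
    out = jam_pt.copy()
    # Steps 3-5: loop over JAM particles, accept with probability f/M.
    for i in rng.permutation(len(jam_pt)):
        if used.all():
            break
        k = np.clip(np.searchsorted(edges, np.hypot(*jam_pt[i])) - 1,
                    0, len(H) - 1)
        if rng.uniform(0.0, M) <= H[k]:
            cands = np.flatnonzero(~used &
                                   (np.digitize(cmc_mag, edges) - 1 == k))
            if cands.size:
                j = rng.choice(cands)
                out[i], used[j] = cmc_pt[j], True
    return out

def smear(rng, p, frac=0.05):
    """dp_i ~ +-0.05 p_i; a uniform draw is our reading of the "+-" in
    the text, a Gaussian would be a plausible alternative."""
    return p * (1.0 + frac * rng.uniform(-1.0, 1.0, size=p.shape))
```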
### II.3 Intermittency analysis
Local density fluctuations near the QCD critical point can be probed by an intermittency analysis of scaled factorial moments Wu _et al._ (2020) in relativistic heavy-ion collisions. The scaled factorial moments (SFM) Wu _et al._ (2020) are defined as follows,
$F_{q}(M)=\frac{\langle\frac{1}{M^{D}}\sum^{M^{D}}_{i=1}{n_{i}(n_{i}-1)\cdots(n_{i}-q+1)}\rangle}{\langle\frac{1}{M^{D}}\sum^{M^{D}}_{i=1}n_{i}\rangle^{q}}$
(4)
where $M$ is the number of equal-size bins per dimension in momentum space, $D$ is the dimension, $n_{i}$ is the number of particles in the $i$-th momentum bin, and $q$ is the order of the SFM.
When $M$ is large, a power-law dependence of the SFM on the number of partitioned bins implies self-similar correlations in the studied system Bialas and Peschanski (1986, 1988),
$F_{q}(M)\approx(M^{D})^{\phi_{q}}$ (5)
The intermittency index $\phi_{q}$ characterizes the strength of the intermittency behavior and is related to the anomalous fractal dimension of the system De Wolf _et al._ (1996). Studies have shown that intermittency measurements, combined with the estimated freeze-out parameters, can be used to estimate the possible critical region of the QCD critical end point Antoniou and Diakonos (2019).
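For concreteness, a minimal Python sketch of the SFM of Eq. (4) and of extracting $\phi_{q}$ from Eq. (5) by a log-log fit is given below; the binning range anticipates the $\pm 2.5\,\mathrm{GeV/c}$ window described later in this subsection, and all names are illustrative assumptions.
```python
import numpy as np

def sfm(events, M, q=2, lim=2.5):
    """Second-order scaled factorial moment F_q(M) for D = 2, Eq. (4).

    events: list of (N_i, 2) arrays of (px, py) per event.
    """
    num, den = 0.0, 0.0
    for pts in events:
        n, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=M,
                                 range=[[-lim, lim], [-lim, lim]])
        # Falling factorial n(n-1)...(n-q+1), averaged over the M^2 bins.
        ff = np.ones_like(n)
        for r in range(q):
            ff = ff * (n - r)
        num += ff.mean()
        den += n.mean()
    num /= len(events)
    den /= len(events)
    return num / den**q

def intermittency_index(events, Ms=(2, 4, 8, 16, 32, 50), q=2):
    """Fit log F_q(M) = phi_q * log(M^2) + const, cf. Eq. (5)."""
    F = np.array([sfm(events, M, q) for M in Ms])
    x = np.log(np.array(Ms, dtype=float) ** 2)
    phi_q, _ = np.polyfit(x, np.log(F), 1)
    return phi_q
```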
Figure 1: The second-order scaled factorial moment analysis for uncorrelated JAM events and events with critical fluctuations. The upper panel shows the absolute values of the SFM for JAM events and for events with 5% and 10% CMC particles. To avoid overlapping markers, the results for critical events are slightly shifted horizontally for clearer visualization. The lower panel shows the ratios between critical and normal JAM events. No significant differences are observed in the absolute SFM values or their ratios.
In the present study, the second-order SFM ($q=2$) in two-dimensional momentum space ($D=2$) is studied for $M=$ 2, 4, 8, 16, 32, 50. To take experimental detectors into consideration, we use at most 50 bins per dimension over the range $\pm 2.5\,\mathrm{GeV/c}$ in the SFM calculation, which keeps the $p_{T}$ resolution at about 0.1 GeV/c, comparable to experimental conditions.
As shown in Figure 1, the intermittency analysis using the SFM method Anticic _et al._ (2015); Davis (2020); Davis _et al._ (2019); Wu _et al._ (2020) cannot differentiate CP events with 5% or 10% CMC particles carrying critical fluctuations from uncorrelated JAM events.
### II.4 Dynamical edge convolution neural network
Figure 2: Dynamical edge convolution neural network with a point cloud module for both classification and tagging. The edge convolution block looks for the k nearest neighbors of each particle to obtain a latent representation of that particle, with short- or long-range correlations encoded deeply in it. The representation of each particle is used in two tasks. One is the classification task, to identify critical fluctuations against uncorrelated background events. The other is the tagging task, to label the correlated particles used for decision making.
A graph-based dynamical edge convolution neural network is trained for our multi-task learning. The input to the neural network is the particle cloud of each event, which consists of a list of particles with their $(p_{x},p_{y})$ information. The output of the neural network corresponds to two tasks. The first task is binary classification, which requires a true label for each single event for supervised learning, with CP indicating events with critical fluctuations and JAM indicating events without. The second task is particle tagging, which requires a true label for each single particle, with 1 or 0 indicating whether or not the particle was generated using the Critical Monte Carlo model.
Shown in Figure 2 is the architecture of our neural network. Two kNN plus dynamical edge convolution blocks are connected to the input layer. In the first block, kNN is used to find the k nearest neighbors of each particle in $(p_{x},p_{y})$ space. A fully connected network is used to learn edge features $\phi(\vec{p}_{i},\vec{p}_{j})$ between the $i$-th particle and its $j$-th neighbor. This module is shared by all the neighbors of particle $i$ to produce edge features, which explains the name “edge convolution”. The information of particle $i$, together with its edge features, is fed to the second block. An edge convolution layer not only makes use of the features of the input neuron itself, but also takes into account the relevance of the clustered units near that neuron; it can thus effectively capture the correlation information between particles.
The second kNN finds the k nearest neighbors of each particle in feature space. It is thus possible to correlate particles that are far apart in momentum space. The neighbors of each particle change dynamically when the distances are computed in feature space, which is why the method is called “dynamical edge convolution”.
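A minimal PyTorch sketch of one such edge-convolution block is shown below; the layer sizes, $k$, and the exact MLP are our assumptions, and the second block's kNN operates in the learned feature space as just described.
```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """One kNN + edge-convolution block: a shared MLP over edges,
    aggregated by a max over the k neighbors."""

    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        # phi(x_i, x_j - x_i): the edge MLP shared by all edges.
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim), nn.ReLU())

    def forward(self, x):                 # x: (batch, n_particles, in_dim)
        d = torch.cdist(x, x)             # pairwise distances in current space
        idx = d.topk(self.k + 1, largest=False).indices[..., 1:]  # drop self
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = self.mlp(torch.cat([center, nbrs - center], dim=-1))
        return edge.max(dim=2).values     # aggregate over the k neighbors

# Two stacked blocks: the first kNN acts in (px, py) space, the second
# in the learned feature space, so distant particles can be correlated.
net = nn.Sequential(EdgeConv(2, 64), EdgeConv(64, 128))
feats = net(torch.randn(8, 200, 2))       # -> (8, 200, 128)
```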
The features of each particle, together with its “local” information, are flattened and fed to a fully connected neural network to obtain a high-dimensional latent variable for each particle. The latent variable provides a high-dimensional representation of each particle. This network is also shared by all particles and is called a 1D convolutional neural network (CNN). Finally, the latent variables of each particle are used for two different tasks. The “classification” module is shown in the lower right corner. A global max pooling takes the maximum value of each feature over all particles. This permutation-symmetric operation learns a global feature of each particle cloud and is used to determine whether it is a CP or a JAM event. The “tagging” module is shown on the right of Figure 2. A 1D CNN with one output neuron is used to tag each particle in the particle cloud. This module provides an interpretation of whether the correlated particles are used to identify events with critical fluctuations. We label correlated CMC particles as “signal” and uncorrelated JAM particles as “noise”. Binary cross entropy is used to compute the difference between the tagging output and the true label of each particle. The loss of the tagging module is added to the total loss with a weighting factor of $10^{-3}$, such that the network focuses more on the “classification” task.
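A sketch of this weighted two-task loss, with the $10^{-3}$ factor quoted above, could look as follows; the tensor names are illustrative.
```python
import torch.nn.functional as F

def total_loss(cls_logit, cls_label, tag_logits, tag_labels, w_tag=1e-3):
    """Event classification loss plus down-weighted particle tagging loss.

    cls_logit: (batch,) event-level score; cls_label: float 1. for CP, 0. for JAM.
    tag_logits: (batch, n_particles); tag_labels: float 1. for CMC, 0. for JAM.
    """
    loss_cls = F.binary_cross_entropy_with_logits(cls_logit, cls_label)
    loss_tag = F.binary_cross_entropy_with_logits(tag_logits, tag_labels)
    # The small weight keeps training dominated by the classification task.
    return loss_cls + w_tag * loss_tag
```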
For comparison, we also train a point cloud network without the kNN and dynamical edge convolution blocks shown in Figure 2. The $(p_{x},p_{y})$ of each particle is fed directly to 1D CNNs with 256, 128 and 64 channels, respectively, for classification. A global average pooling layer is used in this simple point cloud network, as it performs better here. Without kNN and dynamical edge convolution, the network cannot capture much local information for intermittency identification.
## III Results and discussion
### III.1 Classification accuracy
Shown in Figure 3 are the training (solid lines) and validation (dashed lines) accuracies as a function of training epochs. Both training and validation accuracy increase as the model is trained for more epochs. The validation accuracy reaches a maximum of 99.3%, which means that deep learning is able to classify each single event with high accuracy, distinguishing uncorrelated JAM events from events mixed with 90% uncorrelated JAM particles and 10% CMC particles ($\eta=10\%$). For the smaller replacing rate ($\eta=5\%$), both validation and training accuracy decrease compared with $\eta=10\%$, with a maximum value of about $93.3\%$. With momentum smearing, both the 5% and 10% replacing rates reach about 93.3% validation accuracy, while the 10% case achieves a higher score on the test set. The validation accuracy is slightly higher than the training accuracy because of the dropout and batch normalization layers used in the network; these two kinds of layers are known to improve the generalization of the network by introducing noise during training.
Shown in Table 1 is the testing accuracy for four different configurations. Using the dynamical edge convolution plus point cloud network constructed in this study, the testing accuracies are $97.7\%$ for the $10\%$ replacing rate and $92.8\%$ for the $5\%$ replacing rate, close to the validation accuracies. Removing the dynamical edge convolution block, we tested the performance of the point cloud network with varying numbers of layers and neurons per layer to obtain the best testing accuracy. The testing accuracy decreases to $84.8\%$ for the $10\%$ replacing rate and $83.4\%$ for the $5\%$ replacing rate.
Another test set is prepared to verify that the network makes its decisions based on the multi-particle correlations of the CMC particles. In this test set, 5% or 10% of the particles of a JAM event are replaced by the same number of particles sampled randomly from many other events, one particle from each event, to eliminate two-particle correlations among the replaced particles. If our network, trained to identify CMC particles, were fooled into classifying these mixed events as CMC events, it would mean that the network had learned the missing correlations of the replaced particles relative to the original JAM particles. In practice, our trained network treats these mixed events as JAM events, which demonstrates that the network makes its predictions using the signals of the CMC particles.
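A schematic construction of such a correlation-free mixed event is sketched below; the function and variable names are our own, and the pool is assumed to contain at least as many events as particles to be replaced.
```python
import numpy as np

def make_mixed_event(rng, event, pool_events, eta=0.05):
    """Replace eta*N particles of `event` by particles drawn from many
    different pool events, one per donor event, so that no two replaced
    particles share an origin (destroying their mutual correlations).

    event: (N, 2) array of (px, py); pool_events: list of (N_i, 2) arrays.
    """
    n_rep = int(round(eta * len(event)))
    donors = rng.choice(len(pool_events), size=n_rep, replace=False)
    targets = rng.choice(len(event), size=n_rep, replace=False)
    out = event.copy()
    for t, d in zip(targets, donors):
        out[t] = pool_events[d][rng.integers(len(pool_events[d]))]
    return out
```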
Figure 3: The training and validation accuracy as a function of epochs. The training accuracy is shown with solid lines for replacing rates of $5\%$ (blue) and $10\%$ (red). The validation accuracy is shown with dashed lines for replacing rates of $5\%$ and $10\%$.
Testing accuracy:
$\eta$ | Edge-Conv | Point-Cloud Net
$5\%$ | 92.8% | 83.4%
$10\%$ | 97.7% | 84.8%
Table 1: The testing accuracy for the dynamical edge convolution network and a simple point cloud network.
### III.2 Interpretability: tagging
To figure out how the network makes its decision in identifying critical fluctuations against the background, we have added a tagging layer to the neural network. To quantify the tagging performance, we introduce two metrics as follows,
$r_{\rm c}=\frac{N_{C}}{N_{C}+N_{M}},\quad\;r_{\rm
t}=\frac{N_{C}}{N_{C}+N_{W}}$ (6)
where $r_{\rm c}$ is the catching rate, defined as the ratio between the number of correctly tagged particles $N_{C}$ and the total number of signal particles $N_{C}+N_{M}$, with $N_{M}$ the number of signal particles missed by the tagging module; $r_{\rm t}$ is the tagging rate, defined as the ratio between the number of correctly tagged particles $N_{C}$ and the total number of tagged particles $N_{C}+N_{W}$, with $N_{W}$ the number of wrongly tagged uncorrelated particles.
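These two metrics of Eq. (6) reduce to a few lines of code; the sketch below (with illustrative names) assumes boolean masks marking the true signal particles and the tagged particles.
```python
import numpy as np

def tagging_metrics(is_signal, is_tagged):
    """Catching rate r_c and tagging rate r_t of Eq. (6).

    is_signal, is_tagged: boolean arrays of length n_particles.
    Assumes nonzero denominators (some signal exists and something is tagged).
    """
    N_C = np.sum(is_signal & is_tagged)    # correctly tagged signal
    N_M = np.sum(is_signal & ~is_tagged)   # missed signal
    N_W = np.sum(~is_signal & is_tagged)   # wrongly tagged background
    r_c = N_C / (N_C + N_M)                # catching rate
    r_t = N_C / (N_C + N_W)                # tagging rate
    return r_c, r_t
```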
The average catching rates, $r_{\rm c}=73.6\%$ for $\eta=5$% and $r_{\rm c}=75.9\%$ for $\eta=10$%, indicate that the network uses about $3/4$ of the correlated particles to make its decision. On the other hand, the tagging rates, $r_{t}=94.5\%$ for $\eta=5$% and $r_{t}=95.4\%$ for $\eta=10$%, are much higher than the catching rates. This tells us that the tagging module labels CMC particles quite precisely.
Since both the edge convolution and the subsequent 1D convolution layers of the tagging module apply the same transformation to each particle, we can trace the feature vectors of tagged particles through the hidden feature space during the forward propagation of the network. For each input CP event, we inspect the feature space after the edge convolution layer: for each of the $N$ correctly tagged CMC particles, we find its $k$ nearest particles in feature space and count, summed over all $N$ particles, the number $M$ of those neighbors that are also correctly tagged CMC particles. The proportion of correctly tagged CMC particles among all these kNN particles is then $\frac{M}{k\times N}=94\%$. This result indicates that the feature-space transformation guided by edge convolution aggregates CMC particles into a cluster in the new feature space, so that the tagging module can label them through the subsequent 1D convolution layers.
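The check just described can be sketched as follows; the brute-force distance computation and the names are illustrative assumptions.
```python
import numpy as np

def knn_tagged_fraction(features, well_tagged, k=16):
    """Fraction M/(k*N) of feature-space kNN neighbors of correctly
    tagged CMC particles that are themselves correctly tagged.

    features: (N_particles, F) post-edge-convolution features;
    well_tagged: boolean mask of correctly tagged CMC particles.
    Brute-force O(N^2) distances; fine for a few hundred particles.
    """
    idx = np.flatnonzero(well_tagged)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    hits = 0
    for i in idx:
        nbrs = np.argpartition(d[i], k)[:k]
        hits += well_tagged[nbrs].sum()
    return hits / (k * len(idx))           # the M/(k x N) of the text
```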
Figure 4: The upper subplots compare a JAM event and its corresponding CP event; the grey dots are the unchanged JAM particles and the red ones are the critical particles introduced from the CMC event. The lower subplots show the output of the tagging network: the red dots are particles tagged correctly, the blue ones are JAM particles incorrectly labeled as CMC particles, and the grey dots are unlabeled particles. The panels on the left show an example with $\eta=5$%, while those on the right show an example with $\eta=10$%. Although the CMC clusters in the two examples shown both lie on the right side of phase space, the locations of the CMC particles are not restricted and can appear anywhere in the plot.
Figure 4 demonstrates the output of the tagging module. In the upper subplots, grey dots represent unchanged JAM particles and red dots represent all the CMC particles in two testing events. The corresponding tagging outputs for these two events are shown in the two lower subplots, where the red dots represent CMC particles correctly tagged by the network while the blue ones are JAM particles incorrectly tagged as CMC particles. On average, $3/4$ of the CMC particles are recognized by the tagging module, and, as discussed above, the incorrectly tagged particles are much fewer than the correctly tagged CMC particles. The two panels on the left are for the $5\%$ replacing rate, while those on the right are for the $10\%$ replacing rate.
Figure 5 shows the SFM calculated for $\eta=5\%$ CP events and for the tagged particles of those events: the former shows no increase with $M^{2}$, while the tagged particles exhibit a slight power law. This result indicates that the tagging module is able to extract the encoded intermittency information.
Figure 5: The red diamond markers labeled ’Mixed’ represent the SFM of all particles from $\eta$=5% CP events, while the blue square markers labeled ’Tagged’ represent the SFM of the tagged particles of those events. As $M^{2}$ increases, the red diamonds remain flat, while the blue squares show an increase.
## IV Summary and outlook
In summary, we have constructed a dynamical edge convolution plus point cloud network to identify weak intermittency signals in the experimental data of heavy-ion collisions. We have demonstrated that such a state-of-the-art deep learning network achieves a testing accuracy of 92.8% when only 5% of the JAM particles in each event are replaced by correlated CMC particles. The performance increases to $97.7$% when the replacing rate of correlated particles increases to 10%. Removing the dynamical edge convolution block decreases the performance by a large margin. Using the tagging module, we further demonstrate that the network uses around $3/4$ of the correlated particles to make its decision, while only about 5% of the uncorrelated background particles are incorrectly tagged as CMC particles.
We observe that the network can identify self-similarity, or scale invariance, against the uncorrelated background. This is important for experimental data analysis, since an indication of intermittency is observed only in Ar+Sc collisions, whereas several other systems at similar collision energies show no signal. Differently from previous theoretical studies, we preserve the single-particle distribution while introducing a small fraction of particles with a multi-particle fractal structure. This is more realistic, but also more difficult for the traditional intermittency analysis. Based on our study, deep learning shows strong pattern-recognition ability in identifying weak intermittency signals associated with critical phenomena. The method developed in this study can be applied to probe critical fluctuations in heavy-ion collisions and can also be used to explore the criticality of other systems.
## Acknowledgement
We thank Jin Wu for helpful discussions on the Critical Monte Carlo model.
This work is supported by the National Key Research and Development Program of
China (Grant No. 2020YFE0202002 and 2018YFE0205201), the National Natural
Science Foundation of China under Grant Nos. 12122505, 11935007, 11221504,
11890711, 11861131009 and 12075098, and by the Director, Office of Energy
Research, Office of High Energy and Nuclear Physics, Division of Nuclear
Physics, of the U.S. Department of Energy (DOE) under grant No. DE-
AC02-05CH11231, by the U.S. National Science Foundation under No. OAC- 2004571
within the X-SCAPE Collaboration. Computations are performed at Nuclear
Science Computer Center at CCNU (NSC3). LG Pang and YG Huang also acknowledge
the support provided by Huawei Technologies Co., Ltd.
## References
* Fukushima and Hatsuda (2011) Kenji Fukushima and Tetsuo Hatsuda, “The phase diagram of dense QCD,” Rept. Prog. Phys. 74, 014001 (2011), arXiv:1005.4814 [hep-ph] .
* Bzdak _et al._ (2020) Adam Bzdak, Shinichi Esumi, Volker Koch, Jinfeng Liao, Mikhail Stephanov, and Nu Xu, “Mapping the Phases of Quantum Chromodynamics with Beam Energy Scan,” Phys. Rept. 853, 1–87 (2020), arXiv:1906.00936 [nucl-th] .
* Luo and Xu (2017) Xiaofeng Luo and Nu Xu, “Search for the QCD Critical Point with Fluctuations of Conserved Quantities in Relativistic Heavy-Ion Collisions at RHIC : An Overview,” Nucl. Sci. Tech. 28, 112 (2017), arXiv:1701.02105 [nucl-ex] .
* Aoki _et al._ (2009) Y. Aoki, Szabolcs Borsanyi, Stephan Durr, Zoltan Fodor, Sandor D. Katz, Stefan Krieg, and Kalman K. Szabo, “The QCD transition temperature: results with physical masses in the continuum limit II.” JHEP 06, 088 (2009), arXiv:0903.4155 [hep-lat] .
* Ding _et al._ (2019) H. T. Ding _et al._ (HotQCD), “Chiral Phase Transition Temperature in ( 2+1 )-Flavor QCD,” Phys. Rev. Lett. 123, 062002 (2019), arXiv:1903.04801 [hep-lat] .
* Ding _et al._ (2015) Heng-Tong Ding, Frithjof Karsch, and Swagato Mukherjee, “Thermodynamics of strong-interaction matter from Lattice QCD,” Int. J. Mod. Phys. E 24, 1530007 (2015), arXiv:1504.05274 [hep-lat] .
* Shi _et al._ (2014) Chao Shi, Yong-Long Wang, Yu Jiang, Zhu-Fang Cui, and Hong-Shi Zong, “Locate QCD Critical End Point in a Continuum Model Study,” JHEP 07, 014 (2014), arXiv:1403.3797 [hep-ph] .
* Gao and Liu (2016) Fei Gao and Yu-xin Liu, “QCD phase transitions via a refined truncation of Dyson-Schwinger equations,” Phys. Rev. D 94, 076009 (2016), arXiv:1607.01675 [hep-ph] .
* Fischer (2019) Christian S. Fischer, “QCD at finite temperature and chemical potential from Dyson–Schwinger equations,” Prog. Part. Nucl. Phys. 105, 1–60 (2019), arXiv:1810.12938 [hep-ph] .
* Fu _et al._ (2020) Wei-jie Fu, Jan M. Pawlowski, and Fabian Rennecke, “QCD phase structure at finite temperature and density,” Phys. Rev. D 101, 054032 (2020), arXiv:1909.02991 [hep-ph] .
* Stephanov (2004) Mikhail A. Stephanov, “QCD phase diagram and the critical point,” Prog. Theor. Phys. Suppl. 153, 139–156 (2004), arXiv:hep-ph/0402115 .
* Stephanov (2006) M. A. Stephanov, “QCD phase diagram: An Overview,” PoS LAT2006, 024 (2006), arXiv:hep-lat/0701002 .
* Hofmann _et al._ (1976) J. Hofmann, Horst Stoecker, Ulrich W. Heinz, W. Scheid, and W. Greiner, “Possibility of Detecting Density Isomers in High Density Nuclear MACH Shock Waves,” Phys. Rev. Lett. 36, 88–91 (1976).
* Stoecker and Greiner (1986) Horst Stoecker and W. Greiner, “High-Energy Heavy Ion Collisions: Probing the Equation of State of Highly Excited Hadronic Matter,” Phys. Rept. 137, 277–392 (1986).
* Brachmann _et al._ (2000a) J. Brachmann, S. Soff, A. Dumitru, Horst Stoecker, J. A. Maruhn, W. Greiner, L. V. Bravina, and D. H. Rischke, “Antiflow of nucleons at the softest point of the EoS,” Phys. Rev. C 61, 024909 (2000a), arXiv:nucl-th/9908010 .
* Brachmann _et al._ (2000b) J. Brachmann, A. Dumitru, Horst Stoecker, and W. Greiner, “The Directed flow maximum near c(s) = 0,” Eur. Phys. J. A 8, 549–552 (2000b), arXiv:nucl-th/9912014 .
* Csernai and Rohrich (1999) L. P. Csernai and D. Rohrich, “Third flow component as QGP signal,” Phys. Lett. B 458, 454 (1999), arXiv:nucl-th/9908034 .
* Ivanov _et al._ (2002) Yu. B. Ivanov, E. G. Nikonov, W. Noerenberg, A. A. Shanenko, and V. D. Toneev, “Directed flow of baryons in heavy ion collisions,” Acta Phys. Hung. A 15, 117–130 (2002), arXiv:nucl-th/0011004 .
* Rischke _et al._ (1995) Dirk H. Rischke, Yaris Pursun, Joachim A. Maruhn, Horst Stoecker, and Walter Greiner, “The Phase transition to the quark - gluon plasma and its effects on hydrodynamic flow,” Acta Phys. Hung. A 1, 309–322 (1995), arXiv:nucl-th/9505014 .
* Stoecker (2005) Horst Stoecker, “Collective flow signals the quark gluon plasma,” Nucl. Phys. A 750, 121–147 (2005), arXiv:nucl-th/0406018 .
* Csernai _et al._ (2005) L. P. Csernai, A. Anderlik, Cs. Anderlik, V. K. Magas, E. Molnar, A. Nyiri, D. Rohrich, and K. Tamosiunas, “The 3rd flow component as a QGP signal,” Acta Phys. Hung. A 22, 181–186 (2005), arXiv:hep-ph/0405277 .
* Nara _et al._ (2017) Yasushi Nara, Harri Niemi, Jan Steinheimer, and Horst Stöcker, “Equation of state dependence of directed flow in a microscopic transport model,” Phys. Lett. B 769, 543–548 (2017), arXiv:1611.08023 [nucl-th] .
* Nara _et al._ (2018a) Yasushi Nara, Harri Niemi, Akira Ohnishi, Jan Steinheimer, Xiaofeng Luo, and Horst Stöcker, “Enhancement of elliptic flow can signal a first order phase transition in high energy heavy ion collisions,” Eur. Phys. J. A 54, 18 (2018a), arXiv:1708.05617 [nucl-th] .
* Nara _et al._ (2018b) Yasushi Nara, Jan Steinheimer, and Horst Stoecker, “The enhancement of v4 in nuclear collisions at the highest densities signals a first-order phase transition,” Eur. Phys. J. A 54, 188 (2018b), arXiv:1809.04237 [nucl-th] .
* Paech _et al._ (2003) K. Paech, Horst Stoecker, and A. Dumitru, “Hydrodynamics near a chiral critical point,” Phys. Rev. C 68, 044907 (2003), arXiv:nucl-th/0302013 .
* Stephanov (2009) M. A. Stephanov, “Non-Gaussian fluctuations near the QCD critical point,” Phys. Rev. Lett. 102, 032301 (2009), arXiv:0809.3450 [hep-ph] .
* Stephanov (2011) M. A. Stephanov, “On the sign of kurtosis near the QCD critical point,” Phys. Rev. Lett. 107, 052301 (2011), arXiv:1104.1627 [hep-ph] .
* Aggarwal _et al._ (2010) M. M. Aggarwal _et al._ (STAR), “Higher Moments of Net-proton Multiplicity Distributions at RHIC,” Phys. Rev. Lett. 105, 022302 (2010), arXiv:1004.4959 [nucl-ex] .
* Adamczyk _et al._ (2014a) L. Adamczyk _et al._ (STAR), “Energy Dependence of Moments of Net-proton Multiplicity Distributions at RHIC,” Phys. Rev. Lett. 112, 032302 (2014a), arXiv:1309.5681 [nucl-ex] .
* Adamczyk _et al._ (2014b) L. Adamczyk _et al._ (STAR), “Beam-energy dependence of charge separation along the magnetic field in Au+Au collisions at RHIC,” Phys. Rev. Lett. 113, 052302 (2014b), arXiv:1404.1433 [nucl-ex] .
* Adamczyk _et al._ (2018) L. Adamczyk _et al._ (STAR), “Collision Energy Dependence of Moments of Net-Kaon Multiplicity Distributions at RHIC,” Phys. Lett. B 785, 551–560 (2018), arXiv:1709.00773 [nucl-ex] .
* Adam _et al._ (2021) J. Adam _et al._ (STAR), “Nonmonotonic Energy Dependence of Net-Proton Number Fluctuations,” Phys. Rev. Lett. 126, 092301 (2021), arXiv:2001.02852 [nucl-ex] .
* Abdallah _et al._ (2021) Mohamed Abdallah _et al._ (STAR), “Cumulants and correlation functions of net-proton, proton, and antiproton multiplicity distributions in Au+Au collisions at energies available at the BNL Relativistic Heavy Ion Collider,” Phys. Rev. C 104, 024902 (2021), arXiv:2101.12413 [nucl-ex] .
* Nahrgang _et al._ (2011) Marlene Nahrgang, Stefan Leupold, Christoph Herold, and Marcus Bleicher, “Nonequilibrium chiral fluid dynamics including dissipation and noise,” Phys. Rev. C 84, 024912 (2011), arXiv:1105.0622 [nucl-th] .
* Herold _et al._ (2013) Christoph Herold, Marlene Nahrgang, Igor Mishustin, and Marcus Bleicher, “Chiral fluid dynamics with explicit propagation of the Polyakov loop,” Phys. Rev. C 87, 014907 (2013), arXiv:1301.1214 [nucl-th] .
* Plumberg and Kapusta (2017) Christopher Plumberg and Joseph I. Kapusta, “Hydrodynamic fluctuations near a critical endpoint and Hanbury-Brown–Twiss interferometry,” Phys. Rev. C 95, 044910 (2017), arXiv:1702.01368 [nucl-th] .
* Li and Ko (2016) Feng Li and Che Ming Ko, “Spinodal instabilities of baryon-rich quark-gluon plasma in the Polyakov–Nambu–Jona-Lasinio model,” Phys. Rev. C 93, 035205 (2016), arXiv:1601.00026 [nucl-th] .
* Scavenius _et al._ (2001) O. Scavenius, A. Dumitru, E. S. Fraga, J. T. Lenaghan, and A. D. Jackson, “First order chiral phase transition in high-energy collisions: Can nucleation prevent spinodal decomposition?” Phys. Rev. D 63, 116003 (2001), arXiv:hep-ph/0009171 .
* Palhares and Fraga (2010) Leticia F. Palhares and Eduardo S. Fraga, “Droplets in the cold and dense linear sigma model with quarks,” Phys. Rev. D 82, 125018 (2010), arXiv:1006.2357 [hep-ph] .
* Herold _et al._ (2014) Christoph Herold, Marlene Nahrgang, Igor Mishustin, and Marcus Bleicher, “Formation of droplets with high baryon density at the QCD phase transition in expanding matter,” Nucl. Phys. A 925, 14–24 (2014), arXiv:1304.5372 [nucl-th] .
* Li and Ko (2017) Feng Li and Che Ming Ko, “Spinodal instabilities of baryon-rich quark matter in heavy ion collisions,” Phys. Rev. C 95, 055203 (2017), arXiv:1606.05012 [nucl-th] .
* Chomaz _et al._ (2004) Philipe Chomaz, Maria Colonna, and Jorgen Randrup, “Nuclear spinodal fragmentation,” Phys. Rept. 389, 263–440 (2004).
* Randrup (2004) Jorgen Randrup, “Spinodal decomposition during the hadronization stage at RHIC?” Phys. Rev. Lett. 92, 122301 (2004), arXiv:hep-ph/0308271 .
* Sasaki _et al._ (2007) C. Sasaki, B. Friman, and K. Redlich, “Density fluctuations in the presence of spinodal instabilities,” Phys. Rev. Lett. 99, 232301 (2007), arXiv:hep-ph/0702254 .
* Steinheimer and Randrup (2012) Jan Steinheimer and Jorgen Randrup, “Spinodal amplification of density fluctuations in fluid-dynamical simulations of relativistic nuclear collisions,” Phys. Rev. Lett. 109, 212301 (2012), arXiv:1209.2462 [nucl-th] .
* Steinheimer and Randrup (2013) Jan Steinheimer and Jorgen Randrup, “Spinodal density enhancements in simulations of relativistic nuclear collisions,” Phys. Rev. C 87, 054903 (2013), arXiv:1302.2956 [nucl-th] .
* Steinheimer _et al._ (2014) Jan Steinheimer, Jørgen Randrup, and Volker Koch, “Non-equilibrium phase transition in relativistic nuclear collisions: Importance of the equation of state,” Phys. Rev. C 89, 034901 (2014), arXiv:1311.0999 [nucl-th] .
* Sun _et al._ (2018) Kai-Jia Sun, Lie-Wen Chen, Che Ming Ko, Jie Pu, and Zhangbu Xu, “Light nuclei production as a probe of the QCD phase diagram,” Phys. Lett. B 781, 499–504 (2018), arXiv:1801.09382 [nucl-th] .
* Yu _et al._ (2020) Ning Yu, Dingwei Zhang, and Xiaofeng Luo, “Search for QCD critical point by transverse velocity dependence of anti-deuteron to deuteron ratio,” Chin. Phys. C 44, 014002 (2020), arXiv:1812.04291 [nucl-th] .
* Sun _et al._ (2021) Kai-Jia Sun, Che Ming Ko, Feng Li, Jun Xu, and Lie-Wen Chen, “Enhanced yield ratio of light nuclei in heavy ion collisions with a first-order chiral phase transition,” Eur. Phys. J. A 57, 313 (2021), arXiv:2006.08929 [nucl-th] .
* Zhao _et al._ (2021) Wenbin Zhao, Kai-jia Sun, Che Ming Ko, and Xiaofeng Luo, “Multiplicity scaling of light nuclei production in relativistic heavy-ion collisions,” Phys. Lett. B 820, 136571 (2021), arXiv:2105.14204 [nucl-th] .
* Wilson and Kogut (1974) K. G. Wilson and John B. Kogut, “The Renormalization group and the epsilon expansion,” Phys. Rept. 12, 75–199 (1974).
* Lee and Yang (1952) T. D. Lee and Chen-Ning Yang, “Statistical theory of equations of state and phase transitions. 2. Lattice gas and Ising model,” Phys. Rev. 87, 410–419 (1952).
* Pradeep and Stephanov (2019) Maneesha Sushama Pradeep and Mikhail Stephanov, “Universality of the critical point mapping between Ising model and QCD at small quark mass,” Phys. Rev. D 100, 056003 (2019), arXiv:1905.13247 [hep-ph] .
* Karthein _et al._ (2021) J. M. Karthein, D. Mroczek, A. R. Nava Acuna, J. Noronha-Hostler, P. Parotto, D. R. P. Price, and C. Ratti, “Strangeness-neutral equation of state for QCD with a critical point,” Eur. Phys. J. Plus 136, 621 (2021), arXiv:2103.08146 [hep-ph] .
* Teaney (2021) Derek Teaney, “Dynamics of Critical Fluctuations in Nucleus-Nucleus Collisions,” Nucl. Phys. A 1005, 121750 (2021).
* Bluhm _et al._ (2020) Marcus Bluhm _et al._ , “Dynamics of critical fluctuations: Theory – phenomenology – heavy-ion collisions,” Nucl. Phys. A 1003, 122016 (2020), arXiv:2001.08831 [nucl-th] .
* Bialas and Peschanski (1988) A. Bialas and Robert B. Peschanski, “Intermittency in Multiparticle Production at High-Energy,” Nucl. Phys. B 308, 857–867 (1988).
* Satz (1989) Helmut Satz, “Intermittency and Critical Behavior,” Nucl. Phys. B 326, 613–618 (1989).
* Hwa (1990) Rudolph C. Hwa, “Fractal Measures in Multiparticle Production,” Phys. Rev. D 41, 1456 (1990).
* Antoniou _et al._ (2001) N. G. Antoniou, Y. F. Contoyiannis, F. K. Diakonos, A. I. Karanikas, and C. N. Ktorides, “Pion production from a critical QCD phase,” Nucl. Phys. A 693, 799–824 (2001), arXiv:hep-ph/0012164 .
* Wu _et al._ (2020) Jin Wu, Yufu Lin, Yuanfang Wu, and Zhiming Li, “Probing QCD critical fluctuations from intermittency analysis in relativistic heavy-ion collisions,” Phys. Lett. B 801, 135186 (2020), arXiv:1901.11193 [nucl-th] .
* Anticic _et al._ (2015) T. Anticic _et al._ (NA49), “Critical fluctuations of the proton density in A+A collisions at 158$A$ GeV,” Eur. Phys. J. C 75, 587 (2015), arXiv:1208.5292 [nucl-ex] .
* Davis (2020) Nikolaos Davis (NA61/SHINE), “Searching for the critical point of strongly interacting matter in nucleus-nucleus collisions at CERN SPS,” PoS EPS-HEP2019, 305 (2020).
* Davis _et al._ (2019) Nikolaos Davis, Nikolaos Antoniou, and Fotios K. Diakonos (NA61/SHINE), “Recent results from proton intermittency analysis in nucleus-nucleus collisions from NA61/SHINE at CERN SPS,” PoS CORFU2018, 154 (2019).
* Pang _et al._ (2018) Long-Gang Pang, Kai Zhou, Nan Su, Hannah Petersen, Horst Stöcker, and Xin-Nian Wang, “An equation-of-state-meter of quantum chromodynamics transition from deep learning,” Nature Commun. 9, 210 (2018), arXiv:1612.04262 [hep-ph] .
* Pang (2021) Long-Gang Pang, “Machine learning for high energy heavy ion collisions,” Nucl. Phys. A 1005, 121972 (2021).
* Du _et al._ (2020) Yi-Lun Du, Kai Zhou, Jan Steinheimer, Long-Gang Pang, Anton Motornenko, Hong-Shi Zong, Xin-Nian Wang, and Horst Stöcker, “Identifying the nature of the QCD transition in relativistic collision of heavy nuclei with deep learning,” Eur. Phys. J. C 80, 516 (2020), arXiv:1910.11530 [hep-ph] .
* Kvasiuk _et al._ (2020) Yu. Kvasiuk, E. Zabrodin, L. Bravina, I. Didur, and M. Frolov, “Classification of Equation of State in Relativistic Heavy-Ion Collisions Using Deep Learning,” JHEP 07, 133 (2020), arXiv:2004.14409 [nucl-th] .
* Steinheimer _et al._ (2019) Jan Steinheimer, Longgang Pang, Kai Zhou, Volker Koch, Jørgen Randrup, and Horst Stoecker, “A machine learning study to identify spinodal clumping in high energy nuclear collisions,” JHEP 12, 122 (2019), arXiv:1906.06562 [nucl-th] .
* Kuttan _et al._ (2020) Manjunath Omana Kuttan, Kai Zhou, Jan Steinheimer, Andreas Redelbach, and Horst Stoecker, “An equation-of-state-meter for CBM using PointNet,” JHEP 21, 184 (2020), arXiv:2107.05590 [hep-ph] .
* Wang _et al._ (2020) Rui Wang, Yu-Gang Ma, R. Wada, Lie-Wen Chen, Wan-Bing He, Huan-Ling Liu, and Kai-Jia Sun, “Nuclear liquid-gas phase transition with machine learning,” Phys. Rev. Res. 2, 043202 (2020), arXiv:2010.15043 [nucl-th] .
* Nara _et al._ (2000) Y. Nara, N. Otuka, A. Ohnishi, K. Niita, and S. Chiba, “Study of relativistic nuclear collisions at AGS energies from p + Be to Au + Au with hadronic cascade model,” Phys. Rev. C 61, 024901 (2000), arXiv:nucl-th/9904059 .
* Nara (2019) Yasushi Nara, “JAM: an event generator for high energy nuclear collisions,” EPJ Web Conf. 208, 11004 (2019).
* Sorge (1995) H. Sorge, “Flavor production in Pb (160-A/GeV) on Pb collisions: Effect of color ropes and hadronic rescattering,” Phys. Rev. C 52, 3291–3314 (1995), arXiv:nucl-th/9509007 .
* Sorge (1997) H. Sorge, “Soft transverse expansion in pb(158 agev) on pb collisions: preequilibrium motion or first order phase transition?” Physics Letters B 402, 251–256 (1997).
* Bass _et al._ (1998) S.A. Bass, M. Belkacem, M. Bleicher, M. Brandstetter, L. Bravina, C. Ernst, L. Gerland, M. Hofmann, S. Hofmann, J. Konopka, G. Mao, L. Neise, S. Soff, C. Spieles, H. Weber, L.A. Winckelmann, H. Stöcker, W. Greiner, Ch. Hartnack, J. Aichelin, and N. Amelin, “Microscopic models for ultrarelativistic heavy ion collisions,” Progress in Particle and Nuclear Physics 41, 255–369 (1998).
* Bleicher _et al._ (1999) M Bleicher, E Zabrodin, C Spieles, S A Bass, C Ernst, S Soff, L Bravina, M Belkacem, H Weber, H Stöcker, and W Greiner, “Relativistic hadron-hadron collisions in the ultra-relativistic quantum molecular dynamics model,” Journal of Physics G: Nuclear and Particle Physics 25, 1859–1896 (1999).
* Kahana _et al._ (1996) S. H. Kahana, D. E. Kahana, Y. Pang, and T. J. Schlagel, “Modeling relativistic heavy ion collisions at the AGS,” Ann. Rev. Nucl. Part. Sci. 46, 31–70 (1996).
* Li and Ko (1998) Bao-An Li and C.M. Ko, “Excitation functions of stopping power and flow in relativistic heavy-ion collisions,” Nuclear Physics A 630, 556–562 (1998), nucleus-Nucleus Collisions.
* Lin _et al._ (2005) Zi-Wei Lin, Che Ming Ko, Bao-An Li, Bin Zhang, and Subrata Pal, “A Multi-phase transport model for relativistic heavy ion collisions,” Phys. Rev. C 72, 064901 (2005), arXiv:nucl-th/0411110 .
* Weil _et al._ (2016) J. Weil _et al._ , “Particle production and equilibrium properties within a new hadron transport approach for heavy-ion collisions,” Phys. Rev. C 94, 054905 (2016), arXiv:1606.06642 [nucl-th] .
* Antoniou _et al._ (2006) N. G. Antoniou, F. K. Diakonos, A. S. Kapoyannis, and K. S. Kousouris, “Critical opalescence in baryonic QCD matter,” Phys. Rev. Lett. 97, 032002 (2006), arXiv:hep-ph/0602051 .
* Bialas and Peschanski (1986) A. Bialas and Robert B. Peschanski, “Moments of Rapidity Distributions as a Measure of Short Range Fluctuations in High-Energy Collisions,” Nucl. Phys. B 273, 703–718 (1986).
* De Wolf _et al._ (1996) E. A. De Wolf, I. M. Dremin, and W. Kittel, “Scaling laws for density correlations and fluctuations in multiparticle dynamics,” Phys. Rept. 270, 1–141 (1996), arXiv:hep-ph/9508325 .
* Antoniou and Diakonos (2019) Nikolaos G. Antoniou and Fotios K. Diakonos, “Ising-QCD phenomenology close to the critical point,” J. Phys. G 46, 035101 (2019), arXiv:1802.05857 [hep-ph] .
# Rock Neutron Backgrounds from FNAL Neutrino Beamlines in the $\nu$BDX-DRIFT Detector

D. Aristizabal Sierra <EMAIL_ADDRESS> Universidad Técnica Federico Santa María - Departamento de Física, Casilla 110-V, Avda. España 1680, Valparaíso, Chile
J. L. Barrow <EMAIL_ADDRESS> The Massachusetts Institute of Technology, Department of Physics, 77 Massachusetts Avenue, Building 4, Room 304, Cambridge, MA 02139, USA (also at Tel Aviv Univ.; formerly of The Univ. of Tennessee)
B. Dutta <EMAIL_ADDRESS> Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX
D. Kim <EMAIL_ADDRESS> Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX
D. Snowden-Ifft <EMAIL_ADDRESS> Physics Department, Occidental College, 1600 Campus Rd., Los Angeles, CA 90041
L. Strigari <EMAIL_ADDRESS> Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX
M. H. Wood <EMAIL_ADDRESS> Department of Quantitative Sciences, Canisius College, 2001 Main St., Buffalo, NY
###### Abstract
The $\nu$BDX-DRIFT collaboration seeks to detect low-energy nuclear recoils from CE$\nu$NS or BSM interactions at FNAL. Backgrounds due to rock neutrons are an important concern. We present a GENIE- and GEANT4-based model to estimate backgrounds from rock neutrons produced in neutrino-nucleus interactions within the rock walls surrounding the underground halls. This model was benchmarked against the $2009$ COUPP experiment performed in the MINOS hall in the NuMI neutrino beam, and agreement between the experimental and modeled results is found to within $30\%$. Working from this validated model, a similar two-stage simulation was performed to estimate recoil backgrounds in the $\nu$BDX-DRIFT detector across several beamlines. In the first stage, utilizing GEANT4, neutrons exiting the walls of a rectangular underground hall were tallied for four different neutrino beam configurations. These results are presented for use by other underground experiments requiring estimates of their rock neutron backgrounds. For $\nu$BDX-DRIFT, the second stage propagated neutrons from the walls and recorded the energy deposited within a scintillator veto surrounding the detector and the nuclear recoils within the detector’s fiducial volume. The directional signal of the $\nu$BDX-DRIFT detector allows additional background subtraction. A sample calculation for a $10\,$m${}^{3}\cdot\,$yr exposure to the NuMI Low Energy (LE) beam configuration shows a CE$\nu$NS signal-to-noise ratio of $\sim$2.5.
## I Introduction
The $\nu$BDX-DRIFT detector is a directional time projection chamber (TPC)
suitable for measurements of nuclear recoils produced by coherent elastic
neutrino-nucleus scattering (CE$\nu$NS) [1, 2] and by new physics interactions within the neutrino and dark sectors, including light (MeV-scale) dark matter (DM) [3]. Its directional capabilities offer a unique environment for
the identification of beyond Standard Model (BSM) signals [4]. The detector
can operate with a variety of target nuclei, e.g. H, C, S and possibly Pb [3].
Studies of the performance of the detector using decay-in-flight neutrinos
produced in the Long Baseline Neutrino Facility (LBNF) beamline at Fermi
National Accelerator Laboratory (FNAL) [5] have been presented in Ref. [3].
These results have demonstrated that, with reasonable exposures
($10\,\text{m}^{3}$ for $7$ years of data taking), the detector will be able
to measure $\sim 300$-$400$ CE$\nu$NS events across various target materials.
The resulting large statistics will in turn enable measurements of Standard
Model (SM) electroweak and nuclear parameters, as well as searches for
neutrino non-standard interactions (NSI), among others.
After the first measurements of CE$\nu$NS using CsI and liquid argon (LAr)
detectors by the COHERENT collaboration [6, 7] at Oak Ridge National
Laboratory’s Spallation Neutron Source (ORNL SNS), an effort to undertake
further measurements across other target nuclei and different energy spectra
utilizing various neutrino sources continues globally [8]. Low energy
experiments using reactor neutrinos are underway [9, 10, 11, 12, 13, 14, 15],
as well as further experiments at the ORNL SNS [16]; this includes planning
stages for the SNS Second Target Station, along with the European Spallation
Source [17]. As a part of this global effort, the $\nu$BDX-DRIFT detector can
provide a new and complementary avenue if it was to be based at FNAL: it would
utilize decay-in-flight neutrinos and thereby observe higher energy regimes
than the other global suite of experiments. Further, its technology offers
measurements of the angular spectrum, in addition to the recoil energy
spectrum; thus, in principle, cross section measurements in kinematic
variables pertaining to the nuclear recoil are possible.
However, the viability of all the above depends critically on background
levels. Neutrino-induced neutrons produced in the rock, so-called “rock
neutrons”, produce recoil-like backgrounds which are problematic and occupy the majority of the discussion in this paper. The rock neutrons can be produced directly in neutrino-nucleus collisions, or when other neutrino-nucleus end-state particles interact in the surrounding material, generating still more neutrons. As will be shown below, rock neutrons produced in these ways have energies up to $\sim 100\,$MeV and can produce nuclear recoils of $\sim 100\,$keV, comparable to those expected from CE$\nu$NS and BSM interactions
[3]. Recoils produced inside the shielding material around $\nu$BDX-DRIFT were
considered in Ref. [3], where it was shown that an expected signal-to-
background ratio of better than $23$ could be achieved. Rock neutrons produced
in the much larger volume of rock surrounding the underground facilities at
FNAL are harder to estimate as the calculation must convolve the neutrino
energy spectrum and interaction cross section on a variety of nuclei, the
propagation of all end-state particles through the rock to the experimental
hall, the possible interactions with shielding surrounding the detector, and,
finally, the generation of nuclear recoils inside the fiducial volume of the
detector.
The procedure presented here relies first upon a Monte Carlo neutrino event
generator package, GENIE [18], accounting for interactions of the neutrino
beam with the rock material in the surrounding walls of the FNAL underground
MINOS experimental hall [19]. This first step is followed by a GEANT4 [20]
simulation, which accounts for the propagation of the end-state particles
generated in the GENIE calculation and which potentially can enter the
detector fiducial volume. The procedure is bench-marked with the aid of the
COUPP beam-tagged data, which provides information on neutron-induced nuclear
recoils. Four independent simulations will be presented based on four
different neutrino flux configurations (NuMI LE and HE modes [21] as well as
DUNE on-axis and $39\,$m off-axis [22]), and so collectively provide
information not only valuable for a potential $\nu$BDX-DRIFT physics program
but also for future neutrino detectors at FNAL. The results to be presented
here can thus be understood as being aligned with and complementary to current
efforts at the Accelerator Neutrino Neutron Interaction Experiment (ANNIE) at
FNAL [23]. Finally, results will be presented for rock neutron backgrounds in
the fiducial volume of the $\nu$BDX-DRIFT with strong background protections
afforded from the surrounding scintillator and the directionality of the
interaction.
The remainder of this paper is organized as follows. In Sec. II we provide a
detailed discussion of the physics capabilities of the $\nu$BDX-DRIFT
detector. In Sec. III, details of the beam-tagged COUPP data are presented. In
Sec. III.1, the inputs used in the GENIE-GEANT4 Monte Carlo simulations are
given. Results of the GENIE output for final state particles are presented,
along with the nuclear recoil spectrum in the COUPP detector’s fiducial
volume. In Sec. IV, the neutron energy, zenith and azimuth spectra are
provided for all four simulations, while in Sec. V these results will be used
as input for the determination of the neutron background in the $\nu$BDX-DRIFT
detector fiducial volume. In Sec. VI, the discrimination of the CE$\nu$NS signal against the rock neutron background is discussed. Finally, in Sec. VII, a summary and conclusions will be presented.
## II Physics capabilities of the $\nu$BDX-DRIFT detector
Measurements of CE$\nu$NS within the $\nu$BDX-DRIFT detector will provide data
enabling: (i) the determination of SM parameters, and (ii) searches for new
interactions in the neutrino sector. These measurements can also enable
searches for MeV-scale DM candidates produced in collisions of a proton beam
on a fixed target. Detection proceeds by observation of the nuclear recoils
produced by either of these progenitors within the fiducial volume of the
detector.
Focusing on (i), the measurements which can be carried out include a precision
determination of the weak mixing angle at $\sqrt{Q^{2}}\simeq 100\,$MeV, and
the determination of the neutron root-mean-square (rms) radius of nuclides for which no data are yet available. As for (ii), searches include NSIs,
interactions mediated by light vectors and scalars, along with sterile
neutrinos. Analyses of these types of interactions have been completed using
COHERENT and other reactor CE$\nu$NS data (see e.g. [24, 25, 26, 27, 28]).
Results from $\nu$BDX-DRIFT will thus prove complementary, while testing these
hypotheses in a different energy domain and with different detector
technologies.
As a function of detector operation pressure, CE$\nu$NS event rates in CS$_2$ peak at about $400\,$Torr. For a $10\,\text{m}^{3}$ detector operating over $7\,$years, the expected rate is on the order of $400\,$events. For CF$_4$ at the same operation pressure, the event yield increases by about a factor of two. With C$_8$H$_{20}$Pb, despite the lead target, the event yield is smaller because of the rapid loss of coherence. However, the statistics
combined with the detector features are still large enough for the analysis of
a few physics cases. Demanding isolation of lead-induced events, to study lead
nuclear properties, fixes the operation pressure in that case to $\sim 5$ Torr
[3].
Using CF$_4$ (C$_8$H$_{20}$Pb) as the target material, a $10\,\text{m}^{3}$ detector operated at the pressures mentioned above will be able to measure the carbon and fluorine (lead) neutron rms radius with a $\sim 3\%$ ($\sim 5\%$) precision. Ref. [3] has reported the following $1\sigma$ measurements

$r_{\text{rms}}^{n}|_{\text{C}}=2.84^{+0.13}_{-0.15}\,\text{fm}\ ,\qquad r_{\text{rms}}^{n}|_{\text{Pb}}=5.50^{+0.30}_{-0.29}\,\text{fm}\ .\quad(1)$
Measurements for carbon and fluorine through electroweak neutral current
processes do not exist, so these results provide valuable information for a
better understanding of the nuclear properties of light nuclides. For lead the
result is not as competitive as that derived from PREX measurements [29, 30],
but can be understood as complementary to it.
Figure 1: The 2009 COUPP bubble formation data tagged to the beam pulse.
Published here with the permission of the COUPP collaboration.
Studies of the weak mixing angle in CS$_2$ and CF$_4$ result in the following $1\sigma$ measurements

$\sin^{2}\theta_{W}|_{\text{CS}_{2}}=0.238^{+0.020}_{-0.016}\ ,\qquad \sin^{2}\theta_{W}|_{\text{CF}_{4}}=0.238^{+0.021}_{-0.017}\ ,\quad(2)$

both for $\sqrt{Q^{2}}\in[78,397]\,$MeV, a renormalization-scale range for which at present no data are available. Interestingly, these results exceed what COHERENT measurements have achieved so far (see e.g. [24, 31]) and are
competitive with those expected from DUNE using the electron channel [32].
Searches for NSIs in CS$_2$ can explore muon-flavor-related effective couplings. Sensitivities can improve upon current limits by about a factor of 2-3. To a certain extent they are not very sensitive to backgrounds (assuming reasonable amounts) nor to quark flavor. The $1\sigma$ measurements that can be achieved are given by [3],

$\epsilon_{\mu\mu}=[-0.013,0.011]\oplus[0.30,0.32]\ ,\qquad \epsilon_{e\mu}=[-0.064,0.064]\ .\quad(3)$
As has been emphasized, in order to achieve these goals a detailed
understanding of rock neutron backgrounds becomes mandatory. The following
sections focus on that.
Composition in rock at FNAL

| Isotope | ${}^{1}_{1}$H | ${}^{12}_{6}$C | ${}^{16}_{8}$O | ${}^{23}_{11}$Na | ${}^{27}_{13}$Al | ${}^{28}_{14}$Si | ${}^{39}_{19}$K | ${}^{40}_{20}$Ca | ${}^{56}_{26}$Fe |
|---|---|---|---|---|---|---|---|---|---|
| Composition [$\%$] | 1.5 | 1.1 | 56.4 | 0.3 | 9.5 | 24.2 | 0.9 | 4.3 | 1.8 |

Input parameters used in the simulations

| Beamline & Mode | (POT/Pulse)$\times 10^{13}$ | (Inter/Pulse/$\text{m}^{3}$)$\times 10^{-4}$ | Period [s] |
|---|---|---|---|
| NuMI LE (c. 2009) | $2.88$ | $204.42$ | $2.43$ |
| NuMI LE | $4.00$ | $283.92$ | $1.3$ |
| NuMI HE | $4.00$ | $1277.69$ | $1.3$ |
| DUNE On-Axis at $1.2\,$MW | $7.5$ | $1142.23$ | $1.2$ |
| DUNE $39\,$m Off-Axis at $1.2\,$MW | $7.5$ | $9.89$ | $1.2$ |
Table 1: Upper: The percentages of various nuclear isotopes in the rock, taken
from discussions with FNAL experts. Lower: Summary of the input parameters for
the models considered in this paper. The numbers of POT per pulse for NuMI and
DUNE have been taken from Refs. [5, 21].
## III COUPP
In order to present reliable results for nuclear recoil background predictions
within the $\nu$BDX-DRIFT detector, any simulation used to predict such
backgrounds requires bench-marking against data. Fortunately, such data
exists. In 2009, the COUPP DM collaboration performed an experiment in the
MINOS hall on-axis to an active NuMI beam [33] at FNAL. COUPP was a bubble chamber experiment filled with 3.5 kg of CF$_3$I, with a $15$-$20\,$keV threshold for detecting nuclear recoils [33]. As discussed in [33], COUPP was a threshold detector providing no information on recoil energy or particle (nucleus) identification. Additionally, COUPP had no sensitivity to $\beta$, $\gamma$, or minimum-ionizing particles. Using acoustic information, $\alpha$-particle discrimination was possible [33].
In 2009, events were tagged according to whether they occurred while the beam was on. For the DM data analysis, only events uncorrelated with the beam were analyzed and published. However, unpublished beam-tagged data from the COUPP collaboration were obtained [34]; a summary of these findings can be seen in Fig. 1. The pink data points are single, fiducial events not tagged as $\alpha$ particles and are interpreted here as nuclear recoil events. The average of these data, taken from September 27, 2009 to November 8, 2009, is $4.65\pm 0.19\,$events$/$kg$\cdot$day. During this running period, the cosmic veto was not operational; thus, some fraction of these events were caused by non-beam-related particles. To estimate this background, non-beam-related background data taken during this time were averaged. Using a $100\,$ms timing window, the background rate due to random coincidences was estimated to be $0.0863\pm 0.0074\,$events$/$kg$\cdot$day. Subtracting this from the observed rate gives a true, beam-related nuclear recoil rate of $4.56\pm 0.19\,$events$/$kg$\cdot$day to be compared to predictions.
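Since this subtraction is simple counting arithmetic, a minimal Python sketch using only the numbers quoted above reproduces the beam-related rate:

```python
import math

# A minimal sketch of the background subtraction described above, using only
# the numbers quoted in the text (events/kg/day).
observed_rate, observed_err = 4.65, 0.19            # beam-tagged singles rate
coincidence_rate, coincidence_err = 0.0863, 0.0074  # random-coincidence estimate

beam_rate = observed_rate - coincidence_rate
# Uncorrelated uncertainties add in quadrature; the coincidence term is tiny.
beam_err = math.sqrt(observed_err**2 + coincidence_err**2)

print(f"beam-related rate = {beam_rate:.2f} +/- {beam_err:.2f} events/kg/day")
# -> 4.56 +/- 0.19, as quoted in the text
```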
### III.1 The Model
The parameters and model for backgrounds in the COUPP 2009 exposure to the
neutrino beam are presented here. The composition of the rock can be seen in
Table 1 (upper table), and was assumed to have a density of $2.33\,$g/cm$^{3}$.
From the FNAL Data Logger [35], the average number of protons on target (POT)
per pulse was $2.88\times 10^{13}$ with an average period of $2.43\,$s. These
parameters as well as other assumed parameters are summarized in Table 1
(lower Table). The neutrino flux at the COUPP location was taken from [36] and
increased by a factor of $(1040/939)^{2}$ due to the upstream location of the
COUPP experiment relative to the originally assumed location [36]. Fig. 2
shows the resultant flux, alongside several others to be discussed below.
According to MINOS logs [37], the NuMI beam was in reverse horn current mode
during the COUPP 2009 run, implying predominantly antineutrino production
during the run period. Given the on-axis nature of the COUPP detector, it is
expected that few differences exist between the $\nu_{\mu}$ and
$\overline{\nu}_{\mu}$ fluxes (horn current settings) across the various NuMI
beam energy settings [38]. Despite $\nu_{\mu}$ contamination of the
$\overline{\nu}_{\mu}$ beam at high energies, we consider this single
neutrino-type approximation robust, especially given the comparative lack of
neutrons (which yield the most background events) entering the final state via
charged current $\nu_{\mu}$ interactions.
Figure 2: The $\nu_{\mu}$ energy spectra for various locations at FNAL: fluxes at $1040\,$m downstream at NuMI in the LE and HE modes, and at DUNE $574\,$m downstream for both the on-axis and the $39\,$m off-axis configurations. Results for NuMI are adapted from Ref. [36], while those for DUNE are from the DUNE Technical Design Report (Fig. 4.9) [22]. For the purposes of this study, small deviations in shape and rate between the $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ horn-mode spectra are ignored and the two are treated identically.
### III.2 GENIE Event Generation
Given the previously discussed inputs, simulation of primary particle
production via NuMI $\overline{\nu}_{\mu}$ interactions within the rock
surrounding the COUPP detector could be undertaken. Neutral and charged
current processes across the whole range of energies of the NuMI flux
resulting from $\overline{\nu}_{\mu}$ scattering were considered, providing
predictions for final state neutrons, protons, charged and neutral pions, and
antimuons. Fig. 3 shows energy distributions of the six different final state
particles considered in this model for the NuMI LE neutrino flux employed in
the COUPP simulation.
Figure 3: Energy spectra for $n$, $p$, $\pi^{-}$, $\pi^{+}$, $\mu^{+}$ and
$\pi^{0}$ end-states of $\overline{\nu}_{\mu}$-nucleus interactions obtained
by a GENIE Monte Carlo simulation, for the NuMI LE neutrino flux. These
spectra are used as input for the GEANT4 simulation of the COUPP result.
These primary particle production simulations were completed using the GENIE
Monte Carlo event generator [18], a staple within the FNAL neutrino community.
The G18_10a GENIE tune [39] was used as a baseline, and cross section splines
for all constituent elements were produced across the whole NuMI LE energy
range. The chosen tune utilizes the hA2018 final state interaction
(intranuclear cascade) model [40, 41], which uses a table-based method to
predict full final states. A similar simulation was undertaken using the
hN2018 final state interaction model, which employs a fully stochastic
intranuclear cascade and generally provides final state predictions with
higher final state nucleon multiplicities. The mixture of elements making up
the rock served as a direct input to GENIE for event production, creating
single samples; generally, the samples used throughout the studies discussed
here were $\sim 10^{6}$ events in size. Histograms with $\sim 50\,$MeV/c
binning were constructed for the 6 most abundant final state particle types,
$n$, $p$, $\pi^{-}$, $\pi^{+}$, $\mu^{+}$ and $\pi^{0}$. As an example Figure
3 shows the energy distributions for these 6 end-state particles for the NuMI
LE configuration and the hA GENIE model. These distributions were used as inputs for GEANT4. (Correlated, event-by-event simulation of primary interaction products is indeed possible, and future work will utilize such techniques.)
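To make the hand-off between the two codes concrete, the following Python sketch illustrates the histogram-and-sample step described above. The GENIE energies here are a synthetic placeholder, not actual generator output, and the $\sim 50\,$MeV binning follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for GENIE output: kinetic energies (GeV) of one end-state
# species (e.g. neutrons) from ~1e6 simulated antineutrino-rock interactions.
genie_energies = rng.exponential(scale=0.5, size=1_000_000)

# Bin into ~50 MeV bins, as described in the text.
bin_width = 0.050  # GeV
edges = np.arange(0.0, genie_energies.max() + bin_width, bin_width)
counts, edges = np.histogram(genie_energies, bins=edges)

# Draw GEANT4 primary energies from the histogram: choose a bin with
# probability proportional to its contents, then sample uniformly within it.
probs = counts / counts.sum()

def sample_primaries(n):
    bins = rng.choice(len(counts), size=n, p=probs)
    return rng.uniform(edges[bins], edges[bins + 1])

primary_energies = sample_primaries(10_000)
```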
Figure 4: Upper graph: The labeled geometry of the underground experimental
hall. Lower graph: A GEANT4 simulation showing the location of the detector
relative to the walls. The dimensions of the underground hall are 480/1070/427
cm in $x/y/z$. The aqua color shows the fiducial volume of the $\nu$BDX-DRIFT
detector. The white frames show the location of the scintillator. Purple lines show neutron trajectories; yellow lines show electron trajectories.
### III.3 GEANT4 Propagation
GEANT4 [42] was used to propagate the end-state particles from GENIE through
the rock and into the experimental hall and detector shown in Fig. 4. The
dimensions of this hall (chosen to roughly approximate the size of the hallway
where the COUPP experiment occurred) were considered small enough that uniform
generation of end-state particles was assumed. The source considered in these
simulations was taken as the rock walls, whose thickness was increased up to
$2\,$m, at which point the observed rates in the detector stabilized. The
COUPP detector was modeled as a cylindrical fiducial volume 15 cm in diameter and 12 cm high filled with CF$_3$I. This was surrounded on almost all sides with propylene glycol (C$_3$H$_8$O$_2$), the exception being a water-filled region above the CF$_3$I. The outer dimensions of these elements were 30 cm in diameter and 44 cm high. Again we thank members of the COUPP collaboration for providing this information [34]. All massive nuclear recoils in the CF$_3$I were
analyzed. Fig. 5 shows the resulting nuclear recoil spectrum in nuclear mass.
Figure 5: The spectrum of recoiling nuclei with kinetic energies ($E_{r}$)
greater than 16.8 keV. The small number of isotopes at masses other than the C, F or I natural abundances is due to inelastic collisions, mostly between neutrons and the target nuclei. Three regions in recoil mass are
identified. Recoil masses in the region labelled “$\alpha$ discrimination”
were not counted because of $\alpha$ discrimination. Recoils in the region
labelled “C and F recoil region” were treated similarly, see text. Recoils in
the third region, “Iodide recoil region”, were treated similarly as well. The
shaded C, F and I regions are largely arbitrary, but there exist effectively
no events within them beyond those at and slightly below the expected masses
of these species. See text for further details.
The nucleation efficiency for bubble formation following nuclear recoil within
the COUPP detector is given [43] as,
$\epsilon(E)=1-e^{-\alpha(E-E_{T})/E_{T}}\quad(E>E_{T})\ ,$ (4)
where $E_{T}$ is a universal threshold while $\alpha$ depends on the recoil
type; $\alpha_{\text{CF}}$ (for Carbon and Fluorine recoils) was determined to
be $0.15$ from AmBe neutron exposures, while
$\alpha_{\text{I}}=2.8^{+1.6}_{-0.8}$ (for Iodide recoils) and
$E_{T}=16.8^{+0.8}_{-1.1}\,\text{keV}$ were determined using a $12\,$GeV
$\pi^{-}$ beam [43]. For this work, the mean values of these quantities were
employed; note that no uncertainty was given for $\alpha_{\text{CF}}$.
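Eq. (4) with the mean parameter values quoted above can be transcribed directly; the following is a sketch for illustration, not the collaboration's analysis code.

```python
import numpy as np

E_T = 16.8                        # keV, universal threshold
ALPHA = {"CF": 0.15, "I": 2.8}    # recoil-type-dependent steepness

def nucleation_efficiency(E, recoil_type):
    """Probability that a recoil of energy E (keV) nucleates a bubble, Eq. (4)."""
    E = np.asarray(E, dtype=float)
    eff = 1.0 - np.exp(-ALPHA[recoil_type] * (E - E_T) / E_T)
    return np.where(E > E_T, eff, 0.0)

# Iodine recoils turn on much faster above threshold than C/F recoils:
print(nucleation_efficiency([20.0, 50.0], "I"))   # ~[0.41, 1.00]
print(nucleation_efficiency([20.0, 50.0], "CF"))  # ~[0.03, 0.26]
```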
GEANT4 events in which multiple bubbles formed were removed, as the COUPP data report only single events in the fiducial volume. The GENIE input simulation to GEANT4 utilizing the hA2018 model yields a predicted rate of $2.930\pm 0.039\,$events$/$kg$\cdot$day. As a check on the effect of the geometry of the experimental hall on this result, the length of the experimental hall was increased by a factor of 3. The result was $2.890\pm 0.046\,$events$/$kg$\cdot$day, in agreement with the previous result. For clarity these results, and the ones discussed below, are summarized in Table 2. The GENIE hN model yields a rate of $3.081\pm 0.025\,$events$/$kg$\cdot$day. These were averaged together to produce a predicted rate of $3.006\pm 0.023\,$events$/$kg$\cdot$day. These events were created largely by rock neutrons entering the COUPP detector from the walls and creating recoils which nucleated a bubble.
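A short sketch of this model average, assuming the two statistical errors are uncorrelated and combine in quadrature (consistent with the quoted $\pm 0.023$):

```python
import math

rate_hA, err_hA = 2.930, 0.039   # events/kg/day
rate_hN, err_hN = 3.081, 0.025

avg = 0.5 * (rate_hA + rate_hN)
err = 0.5 * math.sqrt(err_hA**2 + err_hN**2)
print(f"{avg:.4f} +/- {err:.4f} events/kg/day")
# -> 3.0055 +/- 0.0232, i.e. the 3.006 +/- 0.023 quoted in the text
```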
Recoils can of course also be created directly inside the COUPP fiducial volume by direct neutrino scatters. The dominant component of these is non-CE$\nu$NS events such as neutrino-nucleon quasi-elastic scattering, with subdominant contributions from neutrino-nucleon scattering, resonant single-pion production, and the by-products of deep-inelastic scattering. To better understand this, GENIE was run with CF$_3$I, instead of rock, as the target, and an overall rate for such scatters of $0.35\,$events$/$kg$\cdot$day was found. However, this total event count ignores the fact that not all such events will nucleate a bubble. For some events, no large remnant nuclei survive; for those that do survive, there is a less than 100% chance of nucleating a bubble given their momentum. (Note that GENIE is currently unable to record all the properties of remnant nuclei; similarly, for all but one nucleus, oxygen, no photonic de-excitation occurs. There is motion within the community to include more of this necessary microphysics [44, 45], and we look forward to further updates to such tools.) We therefore bracket our modeled results as $(3.006,3.356)\pm 0.023\,$events$/$kg$\cdot$day. These event rates are to be compared to the experimental rate of $4.56\pm 0.19\,$events$/$kg$\cdot$day.
The predicted rate of this study sits roughly $30\%$ lower than the observed
experimental rate. There are, however, a large number of systematics which
could explain this difference. The bubble formation model has systematics
associated with the assumptions discussed above, though these appear to be
relatively small. For instance, varying the bubble formation parameters such
as $\alpha_{I}$ and $E_{T}$ gives a $0.09\,$events/kg$\cdot$day systematic
variation to the rate. GENIE and GEANT4 have systematics associated with the particular models chosen, which are largely unknown to this study without the use of a universe-style approach. Slight changes in the geometric
configuration of the detector can also contribute to the uncertainty.
Similarly, the neutrino flux model is known to have large normalization
uncertainties which have not been considered for this study.
Rate Comparison Summary

| Source | Rate [events/kg$\cdot$day] |
|---|---|
| GENIE hA | $2.930\pm 0.039$ |
| GENIE hA w/ 3$\times$ longer exp. hall | $2.890\pm 0.046$ |
| GENIE hN | $3.081\pm 0.025$ |
| GENIE hA, hN average | $3.006\pm 0.023$ |
| Unshielded in-situ | $0$ to $0.35$ |
| Prediction | $(3.006,3.356)\pm 0.023$ |
| Experiment | $4.56\pm 0.19$ |
Table 2: This table summarizes the rates from various sources and, at the end, the final prediction range in comparison with the COUPP data.

Number of simulated particles

| Beamline & Mode | Stage I [$\times 10^{6}$] | Walls [$\times 10^{6}$] | Stage II [$\times 10^{9}$] |
|---|---|---|---|
| NuMI LE | 207 | 17.2 | 2.36 |
| NuMI HE | 130 | 2.66 | 1.70 |
| DUNE On-Axis at $1.2\,$MW | 434 | 8.26 | 2.36 |
| DUNE $39\,$m Off-Axis at $1.2\,$MW | 1660 | 5.51 | 2.10 |
Table 3: Number of particles simulated at various stages. Column 2 shows the number of end-state particles simulated in Stage I (see Sec. IV). Column 3 shows the number of neutrons entering the experimental hall from the walls. These neutrons were used to generate the distributions for the Stage II simulations (see Sec. V). Column 4 shows the number of neutrons simulated in Stage II, restarted on the walls of the experimental hall (see Sec. V).
## IV Stage I: Rock Neutron Results
With the benchmarked model in hand, we now turn to predicting backgrounds in
future, planned experiments. As the COUPP results show, backgrounds due to
rock neutrons in an unshielded detector are high, too high to accomplish the
goals of the $\nu$BDX-DRIFT collaboration. We therefore include a
scintillating veto around the simulated $\nu$BDX-DRIFT detector. The COUPP
collaboration installed a scintillating veto around most of their detector
with a resulting drop in un-vetoed rate after the period of unshielded running
described above and shown in Figure 1. That the rate did not drop further was the result of a lack of shielding around the bottom of the detector; the shielding was designed to veto cosmic-ray-generated events, not beam events.
For purposes of simulation we will assume the $\nu$BDX-DRIFT detector is
surrounded by 75 cm of BC-521 organic scintillator on all sides, similar to
the veto COUPP utilized. As will be shown below, use of this veto drastically reduces the rate of events in the $\nu$BDX-DRIFT detector.
As a result, however, the simple single-stage simulation used for the COUPP background calculation becomes impractical. A two-stage strategy was therefore
adopted in which neutrons were recorded exiting the walls of the experimental
hall. The hall was assumed to have an upstream and downstream wall
perpendicular to the neutrino beamline and 4 walls parallel to the beamline as
shown in Fig. 4. For each wall the energy and angular distributions of
neutrons exiting the walls for the first time were recorded and smoothed. In a
second stage, neutrons were restarted at the walls with the same energy and
angular distributions with a resulting increase in simulation speed of roughly
two orders of magnitude. The computed energy and angular distributions for all
simulations are shown below for use in other applications.
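A minimal sketch of this restart logic follows; the histograms below are synthetic placeholders standing in for the smoothed Stage I wall spectra, and only the resampling mechanics are the point.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_hist(samples, nbins):
    counts, edges = np.histogram(samples, bins=nbins)
    return counts.astype(float), edges

def sample_hist(counts, edges, n):
    """Draw n values from a binned distribution (bin choice + uniform-in-bin)."""
    p = counts / counts.sum()
    b = rng.choice(len(counts), size=n, p=p)
    return rng.uniform(edges[b], edges[b + 1])

# Placeholder "Stage I" output for one wall: neutron kinetic energies (MeV)
# and zenith angles (rad) of neutrons exiting that wall for the first time.
e_counts, e_edges = make_hist(rng.exponential(20.0, 100_000), 100)
z_counts, z_edges = make_hist(rng.normal(np.pi / 2, 0.5, 100_000) % np.pi, 60)

# Stage II: restart neutrons on the wall with energies and angles drawn from
# the recorded spectra, instead of re-simulating ~2 m of rock per neutron.
n_restart = 1_000_000
energies = sample_hist(e_counts, e_edges, n_restart)
zeniths = sample_hist(z_counts, z_edges, n_restart)
```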
To bracket the range of possibilities at FNAL, four simulations were done.
Table 1 (lower) summarizes the main input parameters for these simulations.
The neutrino energy spectra for all simulations are shown in Fig. 2. All
simulations assumed that the horn currents were set to predominantly produce
$\overline{\nu}_{\mu}$s. $\overline{\nu}_{\mu}$s produce more neutrons than
$\nu_{\mu}$s due to the nature of the charged-current interaction and, in terms of background, therefore represent a worst-case scenario. The location of the COUPP detector was on the far upstream end of the MINOS hall, 939 m from the target. All NuMI simulations were done at this location. As before, the fluxes for NuMI, from [36] assuming 1040 m from the target, were increased by $(1040/939)^{2}$ to correct for this assumption. For the DUNE simulations the
experimental hall, shown in Fig. 4, was located 574 m from the DUNE target at
the location of the DUNE near detector hall. Two positions were chosen, on-
axis and 39 m off-axis, to bracket the possibilities there. As shown in Fig. 2
these positions have very different fluxes and energy spectra. Note that for
the DUNE simulations it was assumed that the experimental hall shown in Fig. 4
was completely surrounded by rock which is not what is planned for the near
detector hall. The DUNE simulations, therefore, are more indicative of
backgrounds generated on either side of the DUNE near detector hall. The total
number of neutrino interactions per m$^{3}$ of rock per pulse is shown in column 3
of Table 1 (lower) for each beam and mode.
Figure 6: The energy distribution of rock-neutrons generated in the 4
simulations of Table 1. The blue lines show the spectra coming from the
upstream wall. The orange lines show the spectra coming from the side walls.
And the green lines show the spectra coming from the downstream wall. In the
left graph, the solid (dashed) curves correspond to results obtained with the
NuMI HE (LE) neutrino mode. In the right graph, they instead correspond to results derived with the DUNE on-axis (off-axis) configuration.
Simulation output for neutron flux from the walls

| Beamline & Mode | Upstream [$n^{0}$/s/m$^{2}$] | Sides [$n^{0}$/s/m$^{2}$] | Downstream [$n^{0}$/s/m$^{2}$] | Background [events/m$^{3}$/year] |
|---|---|---|---|---|
| NuMI LE | 0.0355 | 0.0204 | 0.0110 | 8.61 $\pm$ 0.62 |
| NuMI HE | 0.209 | 0.131 | 0.0727 | 54.9 $\pm$ 3.8 |
| DUNE On-Axis at $1.2\,$MW | 0.101 | 0.0276 | 0.0524 | 23.3 $\pm$ 1.3 |
| DUNE $39\,$m Off-Axis at $1.2\,$MW | 0.000381 | 0.0000831 | 0.000162 | 0.0396 $\pm$ 0.0031 |
Table 4: Neutron fluxes from the different walls and the background rate in the signal region. For details see Sec. IV.
Figure 7: Upper row graphs: The zenith angle distribution of rock-neutrons
generated in the NuMI HE (left graph) and the DUNE on-axis (right graph)
simulations of Table 1. The blue lines show the spectra coming from the
upstream wall. The yellow lines show the spectra coming from the side walls.
And the green lines show the spectra coming from the downstream wall. With
rather small variations, results for the NuMI LE (DUNE off-axis) resemble
those of the NuMI HE (DUNE on-axis) and so are not displayed. Lower row graphs: Same as on top, but for the azimuth angle distribution. Results are presented for the same simulations, as we have found that the differences with the other two are again negligible.
End-state particles from these interactions were propagated, using GEANT4, to
the walls of the experimental hall where, as discussed above, neutron
characteristics were recorded and saved. Charged particles exiting the walls
of the experimental hall were not saved as they would either range out in the
scintillator or be vetoed there. Table 3 shows the number of particles
simulated at each stage of the simulation. The smoothed, rock-neutron energy
distributions for the four simulations are shown in Fig. 6. As expected, the flux of neutrons exiting the walls is higher, and the spectrum harder, on the upstream wall than on the downstream wall. The sides fall somewhere in between. Also, for the same POT/pulse (see Table 1), higher-energy configurations produce higher fluxes of rock neutrons. Table 4 shows a summary of the output from the simulations. Columns 2, 3 and 4 show the rates for the various surfaces relative to the beam. These numbers are nothing more than the integral over energy of the differential flux (see Fig. 6), but they provide a simple way of comparing the various beamlines and modes; a sketch of this integration is given below.
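As an illustration of that integral, here is a sketch with a placeholder spectrum standing in for the curves of Fig. 6:

```python
import numpy as np

# Placeholder differential flux, standing in for one smoothed curve of Fig. 6.
energy = np.linspace(1.0, 100.0, 200)        # MeV
dphi_dE = 1e-3 * np.exp(-energy / 30.0)      # n/s/m^2/MeV (not real output)

# Trapezoidal integral over energy gives the per-wall rate of Table 4.
wall_rate = float(((dphi_dE[:-1] + dphi_dE[1:]) / 2 * np.diff(energy)).sum())
print(f"integrated wall flux ~ {wall_rate:.4f} n/s/m^2")
```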
Fig. 7 (upper row) shows the spectra of zenith angles, measured from the
$z$-axis, for each of the walls. As expected the upstream wall shows a more
pronounced peak than does the downstream wall. Results are shown only for the
NuMI HE mode and the DUNE on-axis configuration. Results for the NuMI LE (DUNE
off-axis 39 m) resemble rather closely those of the NuMI HE mode (DUNE on-
axis) and so are not displayed. Fig. 7 (lower row) shows as well the spectra
of azimuth angles, measured from the $x$-axis, for each of the walls. The
zenith and azimuth angle specifies a vector which, adopting the GEANT
convention, points in a direction from which the particle came. The upstream
wall therefore emits particles with azimuth angles from 0 to $\pi$, vectors
which point into the rock, while the downstream wall emits particles from
$\pi$ to $2\pi$, vectors which point into the experimental hall. Once again
the upstream wall exhibits a more concentrated distribution as the emission of
neutrons from the downstream wall would entail multiple bounces before
emission from the wall. Finally the sides, right hand wall shown here, shows
an asymmetric distribution skewed towards smaller azimuth angles indicating a
preference for emission from the beam direction. In summary the angular
distributions show a preference for neutron emission from the direction to the
target which decreases from the upstream wall to the sides to the downstream
wall.
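For readers reproducing these distributions, a small sketch of the angle convention (our own illustration): the GEANT-convention vector points back toward where the particle came from, so its negative is the direction of travel into the hall.

```python
import numpy as np

def came_from_vector(zenith, azimuth):
    """Unit vector for zenith (measured from the z-axis) and azimuth
    (measured from the x-axis), pointing toward the particle's origin."""
    return np.array([
        np.sin(zenith) * np.cos(azimuth),
        np.sin(zenith) * np.sin(azimuth),
        np.cos(zenith),
    ])

# Example: a neutron recorded with azimuth in (0, pi) "came from the rock"
# side of the upstream wall; flip the sign to get its travel direction.
travel_direction = -came_from_vector(np.pi / 2, np.pi / 4)
```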
Figure 8: Plots showing the distribution of recoil energies vs energy
deposited in the scintillator with the neutrino-induced end-state particle
responsible for the recoil shown in different colors. The left graph shows the
results for C recoils while the right graph shows the S recoils. Both are
heavily dominated by neutron end-states (about 63% for both target nuclei).
The vertical dashed black lines indicate the recoil thresholds, 75 keV for C
and 200 keV for S. The horizontal dashed black lines show the threshold for
the scintillator veto, 1 MeV; events with larger energies are vetoed. The
lower right region therefore shows the signal region where either CE$\nu$NS or BSM recoil events would occur. The background rates, in events per m$^{3}$ per year, are shown there. Their sum is shown in the fifth column of Table 4.
## V Stage II: $\nu$BDX-DRIFT Results
As discussed above the main motivation for this work is the reliable
prediction of backgrounds for the $\nu$BDX-DRIFT experiment. To that end a
Stage II simulation was set up and run to predict backgrounds. Neutrons were
fired from the walls of the experimental hall with energy and angular spectra
such as shown in Figures 6 and 7. From the outside in, the detector consisted
of a $75\,$cm thick BC-521 scintillator veto surrounding the entire detector
with outer dimensions of $3\,$m, a $0.5\,$inch thick stainless-steel, cubic
vacuum vessel with outer dimensions of $1.5\,$m, and a cubic fiducial volume for recoils composed of CS$_2$ at a density 2.44 times higher than that at $400\,$Torr.
This increased pressure increases the efficiency for recording recoils while
minimizing double recoils [46]; final results are corrected at the end.
GEANT recorded any energy deposited in the scintillator veto and in the
fiducial volume. Fig. 8 shows the results for the NuMI LE beamline and mode.
On the horizontal axis is the recoil kinetic energy for C and S. On the
vertical axis is the amount of energy deposited in the scintillator. The
different colors represent the end-state particles from
$\overline{\nu}$-nucleus interactions which produced neutrons which entered
the experimental hall and created C or S recoils in the fiducial volume of the
$\nu$BDX-DRIFT detector. Neutron end-state particles from
$\overline{\nu}$-nucleus interactions dominate the recoil rate. The vertical
dashed line shows the kinetic energy threshold for recoil detection after [3].
As can be seen in these graphs, a huge number of recoils are predicted above threshold. However, the vast majority of nuclear recoils above threshold also come with an enormous deposition of energy in the scintillator, of order $100\,$MeV. (It should be noted that the benchmarked COUPP 2009 experiment was mostly sensitive to 1-10 MeV neutrons, while $\nu$BDX-DRIFT is mostly sensitive to 10-100 MeV neutrons due to the necessity of penetrating the scintillator.) These large energy depositions occur due to showers produced as the neutrons traverse the detector and the resulting charged-particle interactions in the scintillator veto. The horizontal dashed line indicates a $1\,$MeV threshold on the veto; events with energy greater than this are vetoed. Signal events, CE$\nu$NS events or BSM interactions, would appear in the lower right corner of these graphs. Backgrounds, in this context, mean events due to beam neutrons appearing in this lower right corner. The rates of recoils, and errors, for C and S appear in this lower right corner in Fig. 8 in units of events per m$^{3}$ per year. The fifth column in Table 4 shows the background rates for each of the beamlines and modes studied in this paper. As can be seen, the highest backgrounds occur in the NuMI beamline in the HE mode.
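A minimal sketch of this signal-region selection; the event arrays are placeholders, while the 75 keV (C) and 200 keV (S) recoil thresholds and the 1 MeV veto threshold are the values given in the Fig. 8 caption.

```python
import numpy as np

# Placeholder simulated events: recoil energy, scintillator deposit, species.
recoil_keV = np.array([50.0, 120.0, 300.0, 90.0])
scint_MeV = np.array([0.2, 150.0, 0.5, 0.0])
species = np.array(["C", "C", "S", "S"])

threshold_keV = {"C": 75.0, "S": 200.0}   # recoil thresholds from Fig. 8
thresholds = np.array([threshold_keV[s] for s in species])

# Signal region: above recoil threshold AND below the 1 MeV veto threshold.
in_signal_region = (recoil_keV > thresholds) & (scint_MeV < 1.0)
print(in_signal_region)  # [False False  True False]
```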
As a check of Stage I of the simulation for this high-background configuration, a run _with the scintillating veto in place_ was completed. After firing $2.3\times 10^{9}$ end-state neutrons from the walls, the result was in statistical agreement with the Stage II neutron results to within $15\%$, validating the use of the multi-stage procedure.
There remains a question as to how these beam-related backgrounds compare to
their non-beam-related cousins. While this question has not been studied in
detail, an indication can be found when again considering the 2009 COUPP
results [33]. The COUPP collaboration found a neutron background of $3$ events
across a $28.1\,$kg$\cdot$day exposure for a rate of about
$0.1\,$events/kg$\cdot$day; this rate was measured with lower thresholds and
while maintaining a scintillating shield similar to that described in this
work. However, this rate was not in coincidence with the beam. We can estimate
to an order of magnitude that $10\,\mu$s timing resolution is possible, giving an approximate additional $10^{-5}$ reduction in background from non-beam-related sources occurring during a beam spill, for a total rate of about $\sim 10^{-6}\,$events/kg$\cdot$day, or $\sim 6\times 10^{-4}\,$events/m${}^{3}\cdot$yr. This rate is much smaller than any of those predicted in Table 4.
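The order-of-magnitude arithmetic can be laid out explicitly. In the sketch below, the $\sim 1.6$ kg of CS$_2$ per m$^{3}$ used for the final unit conversion is our assumption, based on the nominal $400\,$Torr operating pressure:

```python
non_beam_rate = 0.1     # events/kg/day, shielded COUPP measurement
window = 10e-6          # s, assumed beam-coincidence timing window
period = 1.3            # s, NuMI pulse period (Table 1)

duty_factor = window / period               # ~8e-6, i.e. of order 1e-5
in_spill = non_beam_rate * duty_factor      # events/kg/day during spills
per_m3_yr = in_spill * 1.6 * 365            # assuming ~1.6 kg CS2 per m^3
print(f"{in_spill:.1e} events/kg/day, {per_m3_yr:.1e} events/m^3/yr")
# -> ~8e-07 and ~4e-04, consistent with the ~1e-6 and ~6e-4 quoted above
```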
Figure 9: Left graph: Neutron and CE$\nu$NS recoil distributions as a function of zenith angle in degrees. The result has been derived assuming the NuMI LE neutrino flux, with parameters as specified in Table 1. As expected, the CE$\nu$NS signal peaks at $90^{\circ}$ while the neutron-induced recoils have a much wider spread (see text in Sec. VI for details). The histograms for different maximum recoil energies show that events pile up with increasing energy. Right graph: Neutron and CE$\nu$NS recoil energy spectra as a function of nuclear recoil energy. The result has been derived with the same assumptions as those used for the left graph. The different energy lines are correlated with the zenith angle histogram in the left graph and graphically indicate the number of events that, for that energy, have piled up in the peak of the zenith angle distribution.
## VI Signal and rock neutron backgrounds
Our results demonstrate that un-vetoed, rock neutron backgrounds can be
substantial, in particular for the NuMI HE mode and the DUNE on-axis
configuration. Further discrimination of the CE$\nu$NS signal against this
background would be helpful. To do so the directional capabilities of the
detector can be employed. Information from the neutron and CE$\nu$NS zenith
angle distribution spectra combined with their recoil energy spectra provide
information that allows—in principle—efficient background discrimination. The
CE$\nu$NS angular distribution is expected to peak in the direction
perpendicular to the neutrino flux. This can be readily understood from the
fact that the recoil (zenith) angle $\theta_{r}$ and recoil energy $E_{r}$ are
related through [4]
$\cos\theta_{r}=\sqrt{\frac{m_{N}\,E_{r}}{2}}\left(\frac{1}{E_{\nu}}+\frac{1}{m_{N}}\right)\,,$
(5)
where $\theta_{r}$ is the recoil angle relative to the direction of the
neutrino, $m_{N}$ is the mass of the nucleus and $E_{\nu}$ is the energy of
the neutrino. The typical recoil energies ($<1\,$MeV) induced by a “high-energy” neutrino beam ($\sim$GeV), such as those considered in these simulations, lead to small $\cos\theta_{r}$. For CE$\nu$NS this translates into most events clustering at $90^{\circ}$, independent of the neutrino beam we choose. To exploit this fact, the neutron zenith angle distribution must be characterized as well. Its exact morphology, in contrast to the CE$\nu$NS signal, does depend on the neutrino flux, and so for concreteness we have performed calculations for the NuMI LE mode.
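As a quick numerical check of Eq. (5) (a sketch; the sulfur target and the 3 GeV neutrino energy below are merely representative values):

```python
import numpy as np

def cos_theta_r(E_r_MeV, E_nu_MeV, m_N_MeV):
    """Recoil angle cosine from Eq. (5)."""
    return np.sqrt(m_N_MeV * E_r_MeV / 2.0) * (1.0 / E_nu_MeV + 1.0 / m_N_MeV)

m_S = 32 * 931.5   # approximate sulfur nuclear mass in MeV
c = cos_theta_r(E_r_MeV=0.2, E_nu_MeV=3000.0, m_N_MeV=m_S)
print(np.degrees(np.arccos(c)))  # ~88.9 degrees: recoils cluster near 90
```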
The left graph in Fig. 9 shows the results for both spectra for a 10 m$^{3}\cdot$year exposure. The neutron recoil angular distribution has a mild tendency to cluster at about $90^{\circ}$ due to a tendency of rock neutrons to preserve the forward direction of the beam. However, their spectrum has a much wider spread in comparison to that of neutrino recoils. This result thus shows that with a reasonable angular resolution further discrimination ($\sim 10^{4}$:1 altogether, scintillating veto plus angular cuts) of background events is possible. At $90^{\circ}$ the signal-to-background ratio, estimated by comparing the number of events at peak, is $\sim 2.5$.
The recoil energy spectra provide useful discrimination power as well. To determine the degree to which this can be done by itself, or through its interplay with the zenith-angle spectral information, we have calculated the CE$\nu$NS recoil energy signal as well as the neutron recoil energy spectra for the same neutrino flux configuration. Results are shown in the right graph in Fig. 9. The CE$\nu$NS signal spreads over a wider energy range (compared to its clustering at $90^{\circ}$) but does peak towards lower recoil energies. The rock neutron background also peaks at low recoil energies, but in contrast to the CE$\nu$NS signal it populates the full energy range, suggesting a spectral difference which could be exploited.
In addition some amount of C and S recoil discrimination is present. The
difference in these spectra could be used to further discriminate the signals.
More work is needed to fully exploit the background rejection capability of
these signatures.
Other backgrounds could be considered and studied. The decay-in-flight
neutrino beam energies extend up to and even beyond $\sim 10\,$GeV; thus, in
addition to CE$\nu$NS, other higher-energy processes such as quasielastic,
resonance, and deep-inelastic scattering will occur, see e.g. [47]. The cross
sections for these higher-energy interactions (wherein the constituent
nucleons become the system’s dominant degrees of freedom) are sizable at
higher $Q^{2}$. As discussed above for COUPP, these types of events occur at a rate of $0.35\,$events$/$kg$\cdot$day. The $\nu$BDX-DRIFT detector, with a mass of 1.6 kg, will see these events at a rate on the order of 1 per day. In terms of
backgrounds to $\nu$BDX-DRIFT in searches for CE$\nu$NS and BSM nuclear
recoils though, the large neutrino energies generally imply high particle
multiplicities and are comparatively unique in their topologies. For instance,
charged particles produced in conjunction with nuclear recoils can be rejected
as signal events. As shown above, the scintillating veto is extremely
effective at rejecting neutrals at these large energies. Additionally, as events like this will be present in the data, their characteristics can be measured and studied in themselves, an interesting topic in its own right.
## VII Conclusion
In this paper we have studied rock neutron backgrounds in the $\nu$BDX-DRIFT
detector. Rock neutrons are produced by the interaction of neutrinos with the
rock surrounding the underground hall where the detector is deployed. End-
state particles produced in these interactions come from a GENIE Monte Carlo
calculation which uses four possible neutrino fluxes (NuMI LE and HE modes and
DUNE on-axis and off-axis 39 m configurations) interacting with the rock
composed mainly of oxygen, silicon, aluminum and iron. The energy spectra of the final-state particles produced in these interactions then serve as input for a GEANT4 Monte Carlo simulation, which propagates these states through the rock and so allows the characterization of the neutrons emerging from the walls of the hall. These neutrons are then used to study the possible backgrounds to which the $\nu$BDX-DRIFT detector will be subject while operated at FNAL.
The simulation is benchmarked against the 2009 beam-tagged COUPP data,
obtained by the COUPP collaboration during operation in the MINOS hall while
the NuMI beamline was operated in the LE mode. Agreement between the simulated
and actual data is found within 30%. After this validation, results for
energy, zenith and azimuth spectra for the neutrons emitted by the walls are
reported. These results, crucial for the determination of rock neutron
backgrounds in the $\nu$BDX-DRIFT detector, are useful for future neutrino experiments at FNAL as well. They add to ongoing efforts by the ANNIE collaboration, which aims to characterize neutron backgrounds at FNAL.
With the “morphology” of the emitted neutrons at hand, rock neutron
backgrounds within the $\nu$BDX-DRIFT fiducial volume have been determined. By
assuming the detector to be fully surrounded by a 75 cm thick BC-521
scintillator veto, for the four different neutrino flux configurations we have
found that the DUNE off-axis 39 m provides the most background-suppressed
experimental scenario. Rock neutron backgrounds gradually increase from the
NuMI LE to the DUNE on-axis to the NuMI HE, with the latter being the
configuration leading to the largest background. Detailed results have been
reported in Table 4.
Finally we have discussed discrimination of rock neutron backgrounds against
CE$\nu$NS signals. Using NuMI LE as a representative case, we have compared
neutron and CE$\nu$NS zenith and recoil energy spectra. The results
demonstrate that discrimination against rock neutron backgrounds is possible.
Firstly, the CE$\nu$NS signal peaks at $90^{\circ}$, in contrast to the neutron
background that spreads more uniformly. At peak, the signal-to-background
ratio has been roughly estimated to be $\sim 2.5$. Information from the recoil
energy spectra shows that background-free energy windows exist, thus offering
an experimental avenue for CE$\nu$NS measurements as well as for BSM searches.
## Acknowledgements
We thank Andrew Sonnenschein and Jeter Hall for useful conversations and
providing us with the COUPP 2009 results upon which this work is based. We
thank Eric Vázquez-Jáuregui for carefully reading the manuscript and useful
comments, particularly concerning the actual geometry of the COUPP 2009
detector. D.A.S. is supported by ANID grant “Fondecyt Regular”
N${}^{\text{o}}$ 1221445. BD and LES acknowledge support from DOE Grant DE-SC0010813.
## References
* Freedman [1974] D. Z. Freedman, Phys. Rev. D 9, 1389 (1974).
* Freedman _et al._ [1977] D. Z. Freedman, D. N. Schramm, and D. L. Tubbs, Ann. Rev. Nucl. Part. Sci. 27, 167 (1977).
* Aristizabal Sierra _et al._ [2021] D. Aristizabal Sierra, B. Dutta, D. Kim, D. Snowden-Ifft, and L. E. Strigari, Phys. Rev. D 104, 033004 (2021).
* Abdullah _et al._ [2020] M. Abdullah, D. Aristizabal Sierra, B. Dutta, and L. E. Strigari, Phys. Rev. D 102, 015009 (2020), arXiv:2003.11510 [hep-ph] .
* Strait _et al._ [2016] J. Strait _et al._ (DUNE), (2016), arXiv:1601.05823 [physics.ins-det] .
* Akimov _et al._ [2017] D. Akimov _et al._ (COHERENT), Science 357, 1123 (2017), arXiv:1708.01294 [nucl-ex] .
* Akimov _et al._ [2021] D. Akimov _et al._ (COHERENT), Phys. Rev. Lett. 126, 012002 (2021), arXiv:2003.10630 [nucl-ex] .
* Abdullah _et al._ [2022] M. Abdullah _et al._ , (2022), arXiv:2203.07361 [hep-ph] .
* Aguilar-Arevalo _et al._ [2019] A. Aguilar-Arevalo _et al._ (CONNIE), Phys. Rev. D 100, 092005 (2019), arXiv:1906.02200 [physics.ins-det] .
* Agnolet _et al._ [2017] G. Agnolet _et al._ (MINER), Nucl. Instrum. Meth. A 853, 53 (2017), arXiv:1609.02066 [physics.ins-det] .
* Strauss _et al._ [2017] R. Strauss _et al._ , Eur. Phys. J. C 77, 506 (2017), arXiv:1704.04320 [physics.ins-det] .
* Akimov _et al._ [2020] D. Y. Akimov _et al._ (RED-100), JINST 15, P02020 (2020), arXiv:1910.06190 [physics.ins-det] .
* Colaresi _et al._ [2022] J. Colaresi, J. I. Collar, T. W. Hossbach, C. M. Lewis, and K. M. Yocum, (2022), arXiv:2202.09672 [hep-ex] .
* Billard _et al._ [2017] J. Billard _et al._ , J. Phys. G 44, 105101 (2017), arXiv:1612.09035 [physics.ins-det] .
* Bonet _et al._ [2021] H. Bonet _et al._ (CONUS), Phys. Rev. Lett. 126, 041804 (2021), arXiv:2011.00210 [hep-ex] .
* Akimov _et al._ [2022] D. Akimov _et al._ , in _2022 Snowmass Summer Study_ (2022) arXiv:2204.04575 [hep-ex] .
* Garoby _et al._ [2018] R. Garoby _et al._ , Phys. Scripta 93, 014001 (2018).
* Andreopoulos _et al._ [2010] C. Andreopoulos _et al._ , Nucl. Instrum. Meth. A 614, 87 (2010), arXiv:0905.2517 [hep-ph] .
* Ambats _et al._ [1998] I. Ambats _et al._ (MINOS), NUMI-L-337, FERMILAB-DESIGN-1998-02 (1998), 10.2172/1861363.
* Agostinelli _et al._ [2003a] S. Agostinelli _et al._ (GEANT4), Nucl. Instrum. Meth. A 506, 250 (2003a).
* Adamson _et al._ [2016] P. Adamson _et al._ , Nucl. Instrum. Meth. A 806, 279 (2016), arXiv:1507.06690 [physics.acc-ph] .
* Abi _et al._ [2020] B. Abi _et al._ (DUNE), (2020), arXiv:2002.03005 [hep-ex] .
* Anghel _et al._ [2015] I. Anghel _et al._ (ANNIE), (2015), arXiv:1504.01480 [physics.ins-det] .
* Papoulias and Kosmas [2018] D. K. Papoulias and T. S. Kosmas, Phys. Rev. D 97, 033003 (2018), arXiv:1711.09773 [hep-ph] .
* Aristizabal Sierra _et al._ [2019a] D. Aristizabal Sierra, V. De Romeri, and N. Rojas, JHEP 09, 069 (2019a), arXiv:1906.01156 [hep-ph] .
* Aristizabal Sierra _et al._ [2019b] D. Aristizabal Sierra, B. Dutta, S. Liao, and L. E. Strigari, JHEP 12, 124 (2019b), arXiv:1910.12437 [hep-ph] .
* Aristizabal Sierra _et al._ [2022] D. Aristizabal Sierra, V. De Romeri, and D. K. Papoulias, (2022), arXiv:2203.02414 [hep-ph] .
* Coloma _et al._ [2020] P. Coloma, I. Esteban, M. C. Gonzalez-Garcia, and M. Maltoni, JHEP 02, 023 (2020), [Addendum: JHEP 12, 071 (2020)], arXiv:1911.09109 [hep-ph] .
* Abrahamyan _et al._ [2012] S. Abrahamyan _et al._ , Phys. Rev. Lett. 108, 112502 (2012), arXiv:1201.2568 [nucl-ex] .
* Horowitz _et al._ [2014] C. J. Horowitz, K. S. Kumar, and R. Michaels, Eur. Phys. J. A 50, 48 (2014), arXiv:1307.3572 [nucl-ex] .
* Miranda _et al._ [2020] O. G. Miranda, D. K. Papoulias, G. Sanchez Garcia, O. Sanders, M. Tórtola, and J. W. F. Valle, JHEP 05, 130 (2020), [Erratum: JHEP 01, 067 (2021)], arXiv:2003.12050 [hep-ph] .
* de Gouvea _et al._ [2020] A. de Gouvea, P. A. N. Machado, Y. F. Perez-Gonzalez, and Z. Tabrizi, Phys. Rev. Lett. 125, 051803 (2020), arXiv:1912.06658 [hep-ph] .
* Behnke _et al._ [2011] E. Behnke, J. Behnke, S. J. Brice, D. Broemmelsiek, J. I. Collar, P. S. Cooper, M. Crisler, C. E. Dahl, D. Fustin, J. Hall, J. H. Hinnefeld, M. Hu, I. Levine, E. Ramberg, T. Shepherd, A. Sonnenschein, and M. Szydagis (COUPP Collaboration), Phys. Rev. Lett. 106, 021303 (2011).
* COUPP Collaboration [2020] COUPP Collaboration, Private Communication (2020).
* [35] Z. Yuan, “D44 - data logger plotter.”
* Kopp [2007] S. E. Kopp, (2007), arXiv:0709.2737 [hep-ex] .
* [37] MINOS Collaboration, “MINOS run logs.”
* Aliaga Soplin [2016] L. Aliaga Soplin, _Neutrino Flux Prediction for the NuMI Beamline_ , Ph.D. thesis, William-Mary Coll. (2016).
* Andreopoulos _et al._ [2015] C. Andreopoulos, C. Barry, S. Dytman, H. Gallagher, T. Golan, R. Hatcher, G. Perdue, and J. Yarba, (2015), arXiv:1510.05494 [hep-ph] .
* Niewczas and Sobczyk [2019] K. Niewczas and J. T. Sobczyk, Phys. Rev. C 100, 015505 (2019), arXiv:1902.05618 [hep-ex] .
* Golan _et al._ [2012] T. Golan, C. Juszczak, and J. T. Sobczyk, Phys. Rev. C 86, 015505 (2012), arXiv:1202.4197 [nucl-th] .
* Agostinelli _et al._ [2003b] S. Agostinelli _et al._ (GEANT4), Nucl. Instrum. Meth. A 506, 250 (2003b).
* Behnke _et al._ [2013] E. Behnke, T. Benjamin, S. J. Brice, D. Broemmelsiek, J. I. Collar, P. S. Cooper, M. Crisler, C. E. Dahl, D. Fustin, J. Hall, C. Harnish, I. Levine, W. H. Lippincott, T. Moan, T. Nania, R. Neilson, E. Ramberg, A. E. Robinson, M. Ruschman, A. Sonnenschein, E. Vazquez-Jauregui, R. A. Rivera, and L. Uplegger (COUPP Collaboration), Phys. Rev. D 88, 021101 (2013).
* Gardiner [2021a] S. Gardiner, Phys. Rev. C 103, 044604 (2021a), arXiv:2010.02393 [nucl-th] .
* Gardiner [2021b] S. Gardiner, Comput. Phys. Commun. 269, 108123 (2021b), arXiv:2101.11867 [nucl-th] .
* Burgos _et al._ [2007] S. Burgos, J. Forbes, C. Ghag, M. Gold, V. A. Kudryavtsev, T. B. Lawson, D. Loomba, P. Majewski, D. Muna, A. S. Murphy, G. G. Nicklin, S. M. Paling, A. Petkov, S. J. S. Plank, M. Robinson, N. Sanghi, N. J. T. Smith, D. P. Snowden-Ifft, N. J. C. Spooner, T. J. Sumner, J. Turk, and E. Tziaferi, Astropart. Phys. 28, 409 (2007).
* Formaggio and Zeller [2012] J. A. Formaggio and G. P. Zeller, Rev. Mod. Phys. 84, 1307 (2012), arXiv:1305.7513 [hep-ex] .
# Skew Products on the Berkovich Projective Line
Richard A. P. Birkett
###### Abstract.
In this article, we develop a dynamical theory for what shall be called a
_skew product_ on the Berkovich projective line,
$\phi_{*}:\mathbb{P}^{1}_{\text{an}}(K)\to\mathbb{P}^{1}_{\text{an}}(K)$ over
a non-Archimedean field $K$. These functions are defined algebraically yet
strictly generalise the notion of a _rational map_ on
$\mathbb{P}^{1}_{\text{an}}$. We describe the analytical, algebraic, and
dynamical properties of skew products, including a study of periodic points,
and a Fatou/Julia dichotomy. The article culminates with the classification of
the connected components of the Fatou set.
###### Contents
1. Introduction
    1.1. Motivation
    1.2. Skew Products on the Berkovich Projective Line
    1.3. Properties of Skew Products
    1.4. Dynamics of a Skew Product
    1.5. Fatou and Julia
    1.6. Classification of Fatou Components
    1.7. Organisation
    1.8. Acknowledgements
2. Background
    2.1. Non-Archimedean Metrics
    2.2. Disks
    2.3. Projective Line and Affinoids
    2.4. Power Series and Constructing Seminorms
    2.5. Taylor Series
    2.6. Laurent Series
    2.7. Seminorms of Power Series
    2.8. The Berkovich Projective Line
    2.9. The Berkovich Affine Line
    2.10. The Berkovich Projective Line
    2.11. Berkovich Disks, Affinoids, and Directions
    2.12. Paths and Hyperbolic Metric
    2.13. Rational Maps
    2.14. Reduction
3. Skew Products on the Berkovich Projective Line
    3.1. Motivation
    3.2. The Problem
    3.3. Skew Products
    3.4. Properties of Skew Products
    3.5. Local Degrees in Directions
    3.6. Local Degrees at Points
    3.7. Reduction and Computation of Local Degrees
    3.8. The Injectivity and Ramification Loci
    3.9. Geometric and Topological Ideas
4. Periodic Points
5. Fatou and Julia
6. Fatou Components of Skew Products
    6.1. Berkovich Fatou Components
    6.2. Attracting Components
    6.3. Indifferent Components
    6.4. Classification
    6.5. Wandering Domains
## 1. Introduction
A rational map
$\phi:\mathbb{P}^{1}_{\text{an}}(K)\to\mathbb{P}^{1}_{\text{an}}(K)$ on the
Berkovich projective line can be thought of as a $K$-algebra endomorphism
$\phi^{*}:K(z)\to K(z)$. This article is dedicated to developing an analytical
and dynamical theory for a map called a _skew product_. In this more general
context, we relax the condition that $\phi^{*}|_{K}$ is the identity, but ask that it respects the non-Archimedean metric.
After unraveling the algebraic structure of a skew product, we will see that
the map is still piecewise linear on $\mathbb{P}^{1}_{\text{an}}$, however the
slopes are not necessarily integers, nor at least $1$. The possibility of
contraction leads to more interesting dynamics, especially for fixed points
and the Fatou-Julia dichotomy. In the ‘simple’ case (where the slopes are
integers) we generalise the Rivera-Letelier style classification of Fatou
components, and certain facts about wandering domains. These latter results
will be fundamental for applications to dynamics in two dimensions.
The results in the present article and the forthcoming applications feature in
the author’s PhD thesis.
### 1.1. Motivation
The dynamical theory of a skew product will be essential for applications to
the dynamics of rational maps on a complex surface in a later article. For the
purposes of understanding the dynamical degrees (algebraic entropy) and
algebraic stability of a rational map $f:X\dashrightarrow X$ on a surface, one
often needs to understand the potential dynamical behaviour of $f$ for all
possible birational changes of coordinates. Favre and Jonsson [FJ11]
considered the case of a polynomial map
$f:\mathbb{P}^{2}_{\mathbb{C}}\dashrightarrow\mathbb{P}^{2}_{\mathbb{C}}$
which is invariant over the line at infinity. They translate the dynamics of
$f$ to that of a universal ‘dual graph’ of all possible exceptional curves
over this line. The ‘valuative tree’ [FJ04] they obtain is equivalent to a one
dimensional Berkovich space over the Puiseux series, and the corresponding
function turns out to essentially be a contraction mapping. Typically, such an
induced function will not be a contraction mapping. The later application of
the present article will be to the case of a skew product
$\phi:X\dashrightarrow X$ on a complex ruled surface, and hence lends its name
to the non-Archimedean version. Classically, a skew product is a map of the
form $\phi(x,y)=(\phi_{1}(x),\phi_{2}(x,y))$ on a product space. When
$X=\mathbb{P}^{1}\times\mathbb{P}^{1}$, the rational map $\phi$ corresponds to
a $\mathbb{C}$-algebra endomorphism
$\phi^{*}:\mathbb{C}(x)(y)\to\mathbb{C}(x)(y)$. Furthermore if
$\phi_{1}=\operatorname{id}$ then this is a $\mathbb{C}(x)$-algebra
endomorphism, and so it extends to a rational map
$\phi_{2}:\mathbb{P}^{1}_{\text{an}}(\mathbb{K})\to\mathbb{P}^{1}_{\text{an}}(\mathbb{K})$
over the field $\mathbb{K}$ of complex Puiseux series in $x$. DeMarco and
Faber [DF14, DF16] used this perspective when $\phi_{1}=\operatorname{id}$ to
find a (possibly singular) surface $\hat{X}$ where $\phi$ is algebraically
stable. The core of their argument was to use the Rivera-Letelier
classification of Fatou components [RL03a, RL03b] for the corresponding
Berkovich rational map. Inspired by this result, we will deal with the case of
complex skew products where $\phi_{1}$ is not necessarily trivial. In this
more general setting, $\phi^{*}:\mathbb{C}(x)(y)\to\mathbb{C}(x)(y)$ will not
be a $\mathbb{C}(x)$-algebra endomorphism; whence the induced function
$\phi_{*}:\mathbb{P}^{1}_{\text{an}}(\mathbb{K})\to\mathbb{P}^{1}_{\text{an}}(\mathbb{K})$
cannot correspond to a rational map, but to a different kind of map: the non-Archimedean _skew product_.
### 1.2. Skew Products on the Berkovich Projective Line
Aside from examples, this theory will be discussed for a general non-
Archimedean field $K$; further study of the specialisation to the Puiseux
series $K=\mathbb{K}$ and applications to complex/algebraic dynamics will be
deferred to a sequel. Skew products on the Berkovich projective line are of
independent interest as a generalisation of Berkovich _rational maps_ , for
instance because they also encompass a class of field automorphisms.
Emerging out of the study of the dynamics of rational maps over the p-adic
numbers there has been substantial interest in the notions of non-Archimedean
Fatou and Julia sets [Dre03, Hsi96, Hsi00, Béz01, Béz04]. The landmark theses
of Robert L. Benedetto [Ben98, Ben00, Ben01a, Ben01b] and Juan Rivera-Letelier
[RL03a] together established the essential theory for the dynamics of non-
Archimedean Fatou components. Rivera-Letelier, in particular, brought the
insight of extending a rational map from $\mathbb{P}^{1}(K)$ to a _rational
map_ on a non-Archimedean analytic space $\mathbb{P}^{1}_{\text{an}}(K)$ which
compactifies the former. He gave the classification of Fatou components over
the $p$-adics. Following this there were numerous papers on the structure of
the Fatou and Julia sets [Ben02a, Ben02b, Ben05, Ben06, BBP07, RL03b, RL04,
RL05]. The
field exploded and many fruitful connections appeared between complex and non-
Archimedean dynamics. For example by Kiwi [Kiw06, Kiw14, Kiw15], by Baker and
DeMarco [BD11, BDM13], by Favre and Gauthier [FG18], by Dujardin and Favre
[DF17], by DeMarco and Faber [DF14, DF16], and by Favre [Fav20] – to name just
a selection. Following this progress in Fatou and Julia theory, three independent
groups of mathematicians developed potential theory and proved
equidistribution theorems in the non-Archimedean setting, mimicking the
complex dynamical case; namely by Chambert-Loir and Thuillier [CL06, Thu05],
by Baker and Rumely [BR06], and by Favre and Rivera-Letelier [FRL04, FRL06,
FRL07]. For a fuller account of this history we refer the reader to the
excellent survey by Benedetto [Ben22]. For theoretical background the author
recommends the books by Benedetto, and also Baker and Rumely [Ben19, BR10].
In particular, however, the reader is advised to compare with [Ben19] as they
read section 2. In this work we will use the notation built up by Benedetto,
and follow his development when possible for building up the theory of non-
Archimedean skew products. The author hopes this will allow for an easy
adjustment to the reader already familiar with Berkovich rational maps.
For this introduction, we briefly recall the structure of the Berkovich
projective line $\mathbb{P}^{1}_{\text{an}}(K)$. Unless otherwise stated, we
consider $(K,|\cdot|)$ to be an arbitrary non-Archimedean algebraically closed
field, and $\mathbb{C}_{v}$ its completion. Algebraically, $\mathbb{P}^{1}_{\text{an}}$ is the set of seminorms on the ring $K[y]$ which extend the norm $\left|\,\cdot\,\right|$ on $K$, together with a point at infinity.
Topologically, it is uniquely path connected, meaning it is a tree, however
not in the finite combinatorial sense because it has a dense set of vertices
on any interval. These (interior) vertices, called _Type II_ points, have one
branch for every element of $\mathbb{P}^{1}(k)$, where $k$ is the residue
field of $K$. All other points on an open interval are _Type III_ points (edge
points). Every Type II and III point $\zeta=\zeta(a,r)$ corresponds to a
closed disk $\overline{D}(a,r)\subset\mathbb{P}^{1}(K)$, with $r$ in the value
group $\left|K^{\times}\right|$ or not, respectively. The points in the
classical projective line
$\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$ form a set of
endpoints called _Type I_ points, which alone would be a totally disconnected
and non-compact set. There are other endpoints called _Type IV_ points, corresponding to empty intersections of nested disks. All of these naturally
correspond to seminorms through the geometric data and Berkovich showed
[Ber90] that the four types listed constitute the whole space. The _hyperbolic
plane_ $\mathbb{H}=\mathbb{P}^{1}_{\text{an}}(K)\setminus\mathbb{P}^{1}(K)$ is
defined as the set of Type II, III, and IV points; it is endowed with a
_hyperbolic metric_ $d_{\mathbb{H}}$. See subsection 2.8 for more.
Much like a rational map, the starting point for a skew product is a
homomorphism $\phi^{*}:K(y)\to K(y)$. To define a Berkovich _rational map_ we
would equivalently require that $\phi^{*}$ is a $K$-algebra homomorphism, i.e.
the identity on $K$. Unfortunately this is too restrictive for the application
to complex skew products $\phi(x,y)=(\phi_{1}(x),\phi_{2}(x,y))$ because
$\left.\phi^{*}\right|_{\mathbb{K}}:x\mapsto\phi_{1}(x)$ is not the identity
map on $K=\mathbb{K}$. On the other hand, if we allow the homomorphism
$\phi^{*}$ to be completely arbitrary, the induced map on the Berkovich
projective line becomes unwieldy. Instead we impose on $\phi^{*}$ the
condition that $\phi^{*}$ uniformly scales the norm on $K$. This is
easily satisfied in the geometric/complex case, for example
$\left|\phi^{*}(x)\right|=\left|\phi_{1}(x)\right|=\left|c_{n}x^{n}+c_{n+1}x^{n+1}+\cdots\right|=\left|x\right|^{n}.$
In general, if $\left|\phi^{*}(a)\right|=\left|a\right|^{1/q}$ for every $a\in
K$ then we call $\phi^{*}$ an _equivariant skew endomorphism_ , and define the
_skew product_ $\phi_{*}$ by
$\phi_{*}:\mathbb{P}^{1}_{\text{an}}(K)\longrightarrow\mathbb{P}^{1}_{\text{an}}(K),\qquad\zeta\longmapsto\phi_{*}(\zeta),\qquad\text{where }\left\|\psi\right\|_{\phi_{*}(\zeta)}=\left\|\phi^{*}(\psi)\right\|_{\zeta}^{q}.$
We call $q$ the _scale factor_ , and for general $K$ this can be any positive
real number; in the geometric/complex case, $q=1/n$.
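As a quick hedged illustration (our own, using the conventions above): take $K=\mathbb{K}$, the field of Puiseux series of subsection 2.1, with the complex skew product $\phi(x,y)=(x^{2},\,y^{2}+x)$, so that $\phi^{*}(x)=x^{2}$ and $\phi^{*}(y)=y^{2}+x$. Substituting $x\mapsto x^{2}$ doubles the order of vanishing of any Puiseux series, so $\left|\phi^{*}(a)\right|=\left|a\right|^{2}=\left|a\right|^{1/q}$ for every $a\in\mathbb{K}$, and the scale factor is $q=1/2$. At the Gauss point $\zeta(0,1)$, for instance,
$\left\|y\right\|_{\phi_{*}(\zeta(0,1))}=\left\|y^{2}+x\right\|_{\zeta(0,1)}^{1/2}=\max\left\\{\left|x\right|,1\right\\}^{1/2}=1.$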
### 1.3. Properties of Skew Products
Fundamental to understanding the structure of skew products is the
decomposition result of Theorem 3.3. We define
$\phi_{1}^{*}=\left.\phi^{*}\right|_{K}$ but extended trivially to $K(y)$, and
secondly we define $\phi_{2}^{*}$ to capture only the action of $\phi^{*}$ on
$y$. However, perhaps unintuitively, $\phi_{1*}$ acts as $(\phi_{1}^{*})^{-1}$
on classical points. Every skew endomorphism has this decomposition
$\phi^{*}=\phi_{2}^{*}\circ\phi_{1}^{*}$, and it descends to the skew product
$\phi_{*}=\phi_{1*}\circ\phi_{2*}$. Most important are the facts that
$\phi_{2*}$ is a non-Archimedean rational map on $\mathbb{P}^{1}_{\text{an}}$
and $\phi_{1*}$ uniformly scales the metric in the hyperbolic plane
$\mathbb{H}\subset\mathbb{P}^{1}_{\text{an}}$ by a factor of $q$, see Theorem
3.4. The term _scale factor_ for $q$ was chosen with this in mind.
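Continuing the hedged illustration above: for $\phi^{*}(x)=x^{2}$, $\phi^{*}(y)=y^{2}+x$, the decomposition takes $\phi_{1}^{*}$ to be $x\mapsto x^{2}$ extended trivially (so $\phi_{1}^{*}(y)=y$) and $\phi_{2}^{*}$ to be the $K$-algebra homomorphism with $\phi_{2}^{*}(y)=y^{2}+x$; indeed $(\phi_{2}^{*}\circ\phi_{1}^{*})(x)=\phi_{2}^{*}(x^{2})=x^{2}$ and $(\phi_{2}^{*}\circ\phi_{1}^{*})(y)=y^{2}+x$, recovering $\phi^{*}$.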
In Theorem 3.7 and preceding results we demonstrate that any non-constant skew
product $\phi_{*}$ is a continuous, open mapping that is precisely the unique
continuous extension of $(\phi_{1}^{*})^{-1}\circ\phi_{2}$ on
$\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$. As one would hope, it
preserves the types of each Berkovich point. Furthermore, Theorem 3.8 states
that a skew product maps connected affinoids to connected affinoids, and by
subsection 3.6 it will map each component of $\phi_{*}^{-1}(U)$ to $U$ in a
$d$-to-$1$ manner.
We then proceed to consider the map on tangent directions
$\phi_{\\#}:T_{\zeta}\mathbb{P}^{1}_{\text{an}}\to
T_{\phi_{*}(\zeta)}\mathbb{P}^{1}_{\text{an}}$ and degrees (local and in
tangent directions) for a skew product; these degrees are the _relative
degree_ $\operatorname{rdeg}(\phi_{*})$, _local degree_ $\deg_{\zeta}(\phi)$
and _local degree in a direction_ $\deg_{\phi,{\bf v}}(\phi)$. Subsection 3.5,
Theorem 3.18 and Theorem 3.20 show that the previously established relations
between these quantities for rational maps still hold in the new setting.
After reintroducing reduction for skew products we obtain Theorem 3.24, a generalisation of further consequential results [Ben19, Theorem 7.34, Lemma
7.35]. These results state that the tangent map $\phi_{\\#}$ can be understood
through the reduction $\overline{\phi}$ and that the tangent map disagrees
with $\phi_{*}$ itself when and only when a _bad direction_ ${\bf v}$ contains
a preimage of the entire projective line. Theorem 3.13 extends the idea that
on a small interval with local degree $m$, $\phi_{*}$ is a linear map with
gradient $mq$.
We reuse the definitions of ramification locus
$\operatorname{Ram}(\phi)=\left\\{\zeta\in\mathbb{P}^{1}_{\text{an}}:\,\deg_{\zeta}(\phi)\geq
2\right\\}$, and injectivity locus
$\operatorname{Inj}(\phi)=\mathbb{P}^{1}_{\text{an}}\setminus\operatorname{Ram}(\phi)$.
We also say a skew product $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ is _tame_ iff
the underlying rational map $\phi_{2}$ is tame. Note that if
$\operatorname{char}K=p$ then we can always at least find a decomposition
where $\phi_{2}$ is separable, by transferring (inverse) Frobenius
automorphisms onto $\phi_{1}^{*}$. We discuss the separability and
uniqueness of the decomposition of skew products. Again, thanks to
decomposition, we obtain Theorem 3.28 as a direct corollary of the theorem by
Xander Faber for rational maps [Fab13a]. This says that
$\operatorname{Ram}(\phi)$ is a subset of
$\operatorname{Hull}(\operatorname{Crit}(\phi_{2}))$ whose endpoints lie in
$\operatorname{Crit}(\phi_{2})=\operatorname{Ram}(\phi)\cap\mathbb{P}^{1}$.
As a further corollary, we find that if $\phi_{*}$ is tame then
$\deg_{\zeta}(\phi)=1$ for every Type IV point of
$\mathbb{P}^{1}_{\text{an}}$.
Subsection 3.9 contains some possibly new proofs and restatements of
established topological ideas, and an upgrade for Theorem 3.13 as Theorem
3.34. We highlight the Extreme Value Theorem (Theorem 3.30) and the Hyperbolicity Theorem (Theorem 3.35). Section 3 contains more results that we leave out of this introduction.
### 1.4. Dynamics of a Skew Product
Having established the basic analytical properties of a skew product, which largely appear to be the same as for rational maps, it is natural to ask: which dynamical
properties of rational maps still hold for skew products? Despite the
decomposition theorem, a skew product is _nowhere_ analytic, and under
iteration this means various algebraic techniques used to prove dynamical
results for rational maps will fail in the new setting – we are required to
think more topologically.
At first, the situation may seem discouraging, because whilst a rational map
always has finitely many fixed points in $K$, a skew product may have
uncountably many or zero classical fixed points, see Example 4.2, Example 4.4.
Skew products do end up behaving like rational maps in many important ways,
but there are marked and fascinating differences. For instance, consider
periodic points on the hyperbolic space, $\mathbb{H}$. We see that when $q<1$,
a skew product can have interior _attracting_ points $\zeta\in\mathbb{H}$ with
all directions attracting, except perhaps finitely many indifferent ones.
Moreover a Type II point could have both attracting and repelling directions; we call these _saddle_ points. For skew products, a Type III or IV periodic
point may be repelling or attracting; see Example 4.5 for an example of a
repelling Type III point. Under a reasonable hypothesis about the value group,
we prove that Type III points are indifferent (Theorem 4.8). We also resolve the
issue with Type IV points for a skew product $\phi_{*}$ with scale factor
$q\geq 1$ (Theorem 4.7). More is discussed in section 4.
### 1.5. Fatou and Julia
###### Definition 1.1.
Let $\phi_{*}$ be a skew product. We say an open set
$U\subseteq\mathbb{P}^{1}_{\text{an}}$ is _dynamically stable_ under
$\phi_{*}$ iff $\displaystyle\bigcup_{n\geq 0}\phi_{*}^{n}(U)$ omits
infinitely many points of $\mathbb{P}^{1}_{\text{an}}$. The _(Berkovich) Fatou
set_ of $\phi_{*}$, denoted $\mathcal{F}_{\phi,\text{an}}$, is the subset of
$\mathbb{P}^{1}_{\text{an}}$ consisting of all points
$\zeta\in\mathbb{P}^{1}_{\text{an}}$ having a dynamically stable
neighbourhood. The _(Berkovich) Julia set_ of $\phi_{*}$ is the complement
$\mathcal{J}_{\phi,\text{an}}=\mathbb{P}^{1}_{\text{an}}\setminus\mathcal{F}_{\phi,\text{an}}$
of the Berkovich Fatou set.
We first seek to classify the periodic points of a skew product as Fatou or
Julia, generalising [Ben01a, Theorem 8.7] for rational maps which asserts that
all Type III and IV points are Fatou, and that Type II points are Julia if
they repel or have non-periodic bad directions. Unfortunately this is rather troublesome; to begin, from the above discussion we know that Type III or IV points could be repelling, and there might exist attracting directions and saddle points. Fortunately, the new theorem below takes a very clean form, perhaps even more so than the original version for rational maps, albeit of a different flavour. It turns out that the main deciding factor is the
multiplier, and that saddle points are always Julia because they are
numerically repelling. We say a direction ${\bf v}\in
T_{\zeta}\mathbb{P}^{1}_{\text{an}}$ is _exceptional_ iff it has a finite
backward orbit when restricted to the orbit of the periodic point $\zeta$.
###### Theorem A.
Let $\phi_{*}$ be a skew product of scale factor $q$, and let
$\zeta\in\mathbb{P}^{1}_{\text{an}}$ be a periodic point of Type II, III, or
IV of period $n\geq 1$. Then $\zeta\in\mathcal{J}_{\phi,\text{an}}$ is Julia
if and only if either
1. (i)
$\zeta$ is numerically repelling i.e. $\deg_{\zeta}(\phi)q>1$; or
2. (ii)
$\zeta$ is Type II and either no direction ${\bf v}\in T_{\zeta}\mathbb{P}^{1}_{\text{an}}$ is exceptional, or some bad direction ${\bf v}\in T_{\zeta}\mathbb{P}^{1}_{\text{an}}$ is not exceptional.
Moreover if $\zeta\in\mathcal{F}_{\phi,\text{an}}$ is Fatou then every
direction intersecting $\mathcal{J}_{\phi,\text{an}}$ is exceptional and is
$\phi_{\\#}^{j}({\bf v})$ for some bad direction ${\bf v}$.
Unfortunately, $q<1$ means that fixed classical points will often be
‘superrepelling’ and, worse, also Fatou according to the definition of
$\mathcal{F}_{\phi,\text{an}}$; see Example 5.1. However a (non-
super-)repelling point is always Julia, as stated in section 5.
One can also extend the results of Benedetto about wandering domains of
rational maps [Ben05] to skew products. In particular, Theorem 6.14 says that
if $\phi_{*}$ is a skew product of relative degree $d\geq 2$ and scale factor
$q\geq 1$ then any wandering domain $U\subseteq\mathcal{F}_{\phi,\text{an}}$
eventually becomes a disk under iteration. If additionally the base field $K=\mathbb{K}$ is the field of Puiseux series and $q=1$, then the boundary points of $U$
are Type II Julia preperiodic points.
### 1.6. Classification of Fatou Components
We also take the same definitions of attracting, indifferent and wandering
Fatou component as with rational maps, see [Ben19]. We occasionally specialise
to simple ($q=1$) skew products which are defined over a discretely valued
subfield. For instance, these results will apply to simple _$k$-rational_
skew products, meaning they are defined over the field $k((x))$ of Laurent
series with coefficients in some algebraically closed characteristic $0$ field
$k$ (the residue field).
The following theorems describing the attracting and indifferent components
are a generalisation to skew products of those due to Rivera-Letelier [RL03b,
RL03a, RL05].
###### Theorem B.
Let $\phi_{*}$ be a skew product of relative degree $d\geq 2$, let
$a\in\mathbb{P}^{1}(K)$ be an attracting periodic point of minimal period
$m\geq 1$, and let $U\subseteq\mathbb{P}^{1}_{\text{an}}$ be the immediate
attracting basin of $\phi_{*}$. Then the following hold.
1. (i)
$U$ is a periodic Fatou component of $\mathcal{F}_{\phi,\text{an}}$, of period
$m$.
2. (ii)
$U$ is either an open Berkovich disk or a domain of Cantor type.
3. (iii)
If $U$ is a disk, then its unique boundary point is a Type II repelling
periodic point, of minimal period dividing $m$.
4. (iv)
If $U$ is of Cantor type, then its boundary is homeomorphic to the Cantor set
and is contained in the Berkovich Julia set.
5. (v)
The mapping $\phi_{*}^{m}:U\to U$ is $l$-to-$1$, for some integer $l\geq 2$.
###### Theorem C.
Let $L$ be a discretely valued subfield of $K$, and let $\phi_{*}$ be a simple
skew product defined over $L$ of relative degree $d\geq 2$. Let
$U\subseteq\mathbb{P}^{1}_{\text{an}}$ be an indifferent component for
$\phi_{*}$. Then the following hold.
1. (a)
$U$ is a periodic connected component of $\mathcal{F}_{\phi,\text{an}}$, of
some minimal period $m\geq 1$.
2. (b)
$U$ is a rational open connected Berkovich affinoid.
3. (c)
Each of the finitely many points of the boundary $\partial U$ is a Type II
point lying in the Berkovich Julia set.
4. (d)
$\phi_{*}^{m}$ permutes the boundary points of $U$; in particular, each is
periodic.
5. (e)
The mapping $\phi_{*}^{m}:U\to U$ is bijective.
We end this section of the introduction with the most important theorem in
section 6 – we recover the classification of Fatou components, which was
originally proved for rational maps by Rivera-Letelier [RL03a, RL03b].
###### Theorem D (Classification of Fatou Components).
Let $L$ be a discretely valued subfield of $K$. Let
$\phi_{*}:\mathbb{P}^{1}_{\text{an}}(K)\to\mathbb{P}^{1}_{\text{an}}(K)$ be a
simple skew product defined over $L$ of relative degree $d\geq 2$ with
Berkovich Fatou set $\mathcal{F}_{\phi,\text{an}}$, and let
$U\subset\mathcal{F}_{\phi,\text{an}}$ be a periodic Fatou component. Then $U$
is either an indifferent component or an attracting component, but not both.
In closing the introduction, we mention that H. Nie and S. Zhao have developed
a noteworthy alternative approach to these problems and have an independent
proof of the classification of Fatou components. They also plan to release a
proof of equidistribution for skew products.
### 1.7. Organisation
We start with section 2, an overview of non-Archimedean analysis and algebra
that will be useful in practice. Next, section 3 provides a lengthy
development of the non-Archimedean skew product, from motivation to local
degrees, and other geometric results. In the final sections we will explore
the dynamics of a skew product, again comparing to its rational cousin. In
section 4 we give an elementary study of periodic points. Section 5 is
dedicated to defining and understanding the basic properties of the Fatou and
Julia set of a skew product; we examine how periodic points are related to the
Fatou-Julia dichotomy, proving Theorem A. The ultimate goal is to prove the
generalisation, Theorem D, of the Rivera-Letelier classification of Fatou
components, which will be the focus of section 6.
### 1.8. Acknowledgements
First and foremost I would like to thank my doctoral advisor, Jeffrey Diller,
for suggesting this avenue of research, for his time, encouragement, and for
sharing his remarkable editorial expertise.
Along with my advisor, I thank my dissertation committee, Nicole Looper, Eric
Riedl, and Roland Roeder, for their patience, insight, and helpful comments
which led to a more effective exposition of new concepts.
I am grateful to Xander Faber for various helpful discussions about non-
Archimedean dynamics.
I would like to show my appreciation to Robert Benedetto for his exceptional
book which imparted a great deal of intuition and insight. His mathematical
framework for rational maps positively guided the development of skew
products.
Lastly, I thank Hongming Nie and Shengyuan Zhao for intriguing discussions
about their ‘twisted rational maps’, which are equivalent to the skew products
introduced here.
In addition, I am grateful to the NSF for its support of my work through grant
DMS-1954335.
## 2\. Background
### 2.1. Non-Archimedean Metrics
###### Definition 2.1.
A metric space $(X,d)$ is _non-Archimedean_ iff
$d(x,z)\leq\max\left\\{d(x,y),d(y,z)\right\\}\quad\forall x,y,z\in X.$
Such a $d$ is often called an _ultrametric_ , due to this _strong triangle
inequality_. In an algebraic context, we would like this to derive from a norm
which respects addition and multiplication. Furthermore, we will want to
consider something called a _seminorm_; informally, this is a norm where we
relax the condition $\left\|a\right\|=0\implies a=0$. The following notions
will be fundamentally important in this work.
###### Definition 2.2.
Let $G$ be a group. A _seminorm_ is a function
$\left\|\cdot\right\|:G\to\mathbb{R}_{\geq 0}$ such that $\left\|0\right\|=0$,
$\left\|a\right\|=\left\|-a\right\|\quad\forall a\in G$, and
$\left\|a+b\right\|\leq\left\|a\right\|+\left\|b\right\|\quad\forall a,b\in
G$.
* •
This is a _norm_ iff additionally
$\left\|a\right\|=0\implies a=0.$
* •
A seminorm $\left\|\cdot\right\|$ on $G$ is said to be _non-Archimedean_ iff
$\left\|a+b\right\|\leq\max\left\\{\left\|a\right\|,\left\|b\right\|\right\\}\quad\forall
a,b\in G.$
* •
A seminorm $\left\|\cdot\right\|$ on a ring $R$ is _multiplicative_ iff
$\left\|a\cdot b\right\|=\left\|a\right\|\cdot\left\|b\right\|\quad\forall
a,b\in R.$
* •
We say a field $(K,\left|\cdot\right|)$ is _non-Archimedean_ iff
$\left|\cdot\right|$ is a multiplicative non-Archimedean norm on $K$. In this
case we refer to $\left|\cdot\right|$ as an _absolute value_.
It is clear that any non-Archimedean norm induces a non-Archimedean metric.
###### Remark 2.1.
A _(semi)valuation_ can always be obtained by taking $\log$ of a (semi)norm,
$\log_{\varepsilon}\left\|\cdot\right\|$, for any $\varepsilon\in(0,1)$. We
apply the other adjectives of the previous definition respectively.
###### Remark 2.2.
Note that $\left\|a\right\|=\left\|-a\right\|\quad\forall a\in R$ on a ring is
implied by the multiplicative condition.
###### Example 2.1 (Trivial Norm).
Let $G$ be any group. Then there is a non-Archimedean norm
$\left\|\cdot\right\|_{\text{triv}}$ such that
$\left\|0\right\|_{\text{triv}}=0$ and $\left\|a\right\|_{\text{triv}}=1$ for
any $a\in G\setminus\left\\{0\right\\}$. If $G$ is an integral domain (e.g. a
field), this is also multiplicative.
###### Definition 2.3.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field.
1. (i)
The _ring of integers_ of $K$ is given by
$\mathcal{O}_{K}=\left\\{a\in K:\,\left|a\right|\leq 1\right\\}.$
2. (ii)
This $\mathcal{O}_{K}$ has a unique maximal ideal,
$\mathcal{M}_{K}=\left\\{a\in K:\,\left|a\right|<1\right\\}.$
3. (iii)
The _residue field_ of $K$ is the quotient field
$k=\mathcal{O}_{K}/\mathcal{M}_{K}$.
4. (iv)
The _value group_ , $\left|K^{\times}\right|\leq(0,\infty)$ is the range of
$\left|\cdot\right|$ on $K^{\times}=K\setminus\left\\{0\right\\}$.
It is possible that $K$ has _mixed characteristic_ meaning
$\operatorname{char}K=0$ and $\operatorname{char}k=p$. Otherwise they have
_equal characteristic_ $\operatorname{char}K=\operatorname{char}k$. The value
group is a (multiplicative) subgroup of reals; if the value group is dense in
$(0,\infty)$ then we shall say it is _densely valued_ ; if this is non-trivial
but not dense in $(0,\infty)$ then it must be cyclic, i.e.
$|K^{\times}|\cong\mathbb{Z}$, in which case we say $\left|\cdot\right|$ is
_discretely valued_.
###### Proposition 2.1.
Let $(G,\left\|\cdot\right\|)$ be a non-Archimedean group. An infinite series
$\sum_{n=1}^{\infty}a_{n}$ has Cauchy partial sums if and only if $a_{n}\to 0$; in particular
when $(G,\left\|\cdot\right\|)$ is complete an infinite series converges if
and only if its terms tend to $0$.
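For instance, in the $p$-adic field $\mathbb{Q}_{p}$ defined below, $\left|p^{n}\right|_{p}=p^{-n}\to 0$, so the geometric series $\sum_{n\geq 0}p^{n}$ converges, to $1/(1-p)$, even though its terms grow in the usual Archimedean sense.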
###### Proposition 2.2 (Completion).
Given a non-Archimedean group $(G,\left\|\cdot\right\|)$, we can form its
completion $\hat{G}$. This can be thought of as all convergent series over
$G$. The induced norm is naturally given by
$\left\|\lim_{n\to\infty}a_{n}\right\|=\lim_{n\to\infty}\left\|a_{n}\right\|.$
###### Example 2.2 ($p$-adic Numbers).
Consider $\mathbb{Q}=(\mathbb{Q},|\cdot|_{p})$, the rational numbers with
$|\cdot|_{p}$ the $p$-adic norm. This is defined such that
$\left|\frac{a}{b}p^{n}\right|_{p}=p^{-n}$ if $a,b\in\mathbb{Z}$ are coprime to
$p$. One can easily check that this norm is non-Archimedean and
multiplicative. This norm also gives rational numbers a natural $p$-adic
expansion. For example when $p=5$, we can write
$\frac{42}{5}=2\cdot 5^{-1}+3\cdot 1+1\cdot 5^{1}.$
Observe that
$\left|\tfrac{42}{5}\right|_{5}=\max\left\\{|2\cdot 5^{-1}|_{5},\,|3\cdot 1|_{5},\,|1\cdot 5^{1}|_{5}\right\\}=\max\left\\{5^{1},\,1,\,5^{-1}\right\\}=5^{1}.$
This is a very typical way we use the non-Archimedean property of a
(semi)norm.
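To make this concrete, here is a minimal computational sketch (our illustration, not part of the development; the helper `padic_norm` is hypothetical) of the $p$-adic norm and the strong triangle equality:

```python
from fractions import Fraction

def padic_norm(q: Fraction, p: int) -> Fraction:
    """Return |q|_p = p^(-v), where q = p^v * (a/b) with a, b coprime to p."""
    if q == 0:
        return Fraction(0)
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

# |42/5|_5 = 5^1: the dominant term of 42/5 = 2*5^(-1) + 3 + 1*5 is 2*5^(-1).
assert padic_norm(Fraction(42, 5), 5) == Fraction(5)
# Strong triangle equality: |a + b|_p = max(|a|_p, |b|_p) when the norms differ.
a, b = Fraction(2, 5), Fraction(3)
assert padic_norm(a + b, 5) == max(padic_norm(a, 5), padic_norm(b, 5))
```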
The completion of $(\mathbb{Q},|\cdot|_{p})$ is
$(\mathbb{Q}_{p},|\cdot|_{p})$, the $p$_-adic numbers_. One should think of
$\mathbb{Q}_{p}$ as all the half-infinite $p$-adic expansions, i.e. ‘Laurent
series’ in the ‘variable’ $p$. The ring of integers of $\mathbb{Q}_{p}$ is the
$p$_-adic integers_ , $\mathbb{Z}_{p}$ whose elements are the ‘Maclaurin
series’ in $p$. One can check that
$\mathcal{M}_{\mathbb{Q}_{p}}=p\mathbb{Z}_{p}$. The residue field for both
$\mathbb{Q}_{p}$ and $\mathbb{Q}$ is $\mathbb{Z}/p$, meaning they have mixed
characteristic. Finally, we define $\mathbb{C}_{p}$ as the completion of the algebraic closure $\overline{\mathbb{Q}}_{p}$.
This shows that a power series structure provides a simple way to define a
non-Archimedean seminorm, using the “lowest order term”, or the lowest index.
Here is another.
###### Example 2.3 (Formal Laurent series).
Consider $k((x))=\operatorname{Frac}k[[x]]$, the field of formal Laurent
series over some base field $k$. Note that these have finite principal part.
We can define the order of vanishing norm $|\cdot|$ as follows, beginning with
a fixed $\varepsilon\in(0,1)$. For any $m_{0}\in\mathbb{Z}$ and coefficients
$(c_{m})\subset k$, let
$f(x)=\sum_{m=m_{0}}^{\infty}c_{m}x^{m}$
where $c_{m_{0}}\neq 0$. Then $\left|f\right|=\varepsilon^{m_{0}}$. We see
that the constants $k$ are all given the trivial norm and a series $f$ has its
norm determined by its lowest order term,
$\left|f\right|=\left|c_{m_{0}}x^{m_{0}}\right|=\left|x^{m_{0}}\right|=\varepsilon^{m_{0}}$. Here the residue
field is $k$ itself, which naturally lives inside $k((x))$, hence these are of
equal characteristic. This field is discretely valued, with value group
$\left\\{\varepsilon^{n}:\,n\in\mathbb{Z}\right\\}$.
The following propositions spell out the idea we have already used in the above
examples, that the norm of an element is always the norm of its dominant
summand.
###### Proposition 2.3 (Strong triangle equality).
Let $\left\|\cdot\right\|$ be a non-Archimedean seminorm on $G$. Then
$\left\|a\right\|\neq\left\|b\right\|\implies\left\|a+b\right\|=\max\left\\{\left\|a\right\|,\left\|b\right\|\right\\}.$
###### Proof.
$\left\|a+b\right\|\leq\max\left\\{\left\|a\right\|,\left\|b\right\|\right\\}$
by the non-Archimedean definition, hence
$\left\|a+b\right\|\leq\left\|a\right\|$. Also we have
$\left\|a\right\|=\left\|a+b+-b\right\|\leq\max\left\\{\left\|a+b\right\|,\left\|-b\right\|\right\\}=\max\left\\{\left\|a+b\right\|,\left\|b\right\|\right\\}$.
If WLOG $\left\|a\right\|>\left\|b\right\|$, it must be that
$\max\left\\{\left\|a+b\right\|,\left\|b\right\|\right\\}=\left\|a+b\right\|$.
Therefore $\left\|a+b\right\|=\left\|a\right\|$. ∎
###### Proposition 2.4 (Extended strong triangle (in)equality).
Suppose that $\sum_{j=1}^{\infty}a_{j}$ converges in
$(G,\left\|\cdot\right\|)$. Then
$\left\|\sum_{j=1}^{\infty}a_{j}\right\|\leq\max_{j}\left\|a_{j}\right\|,$
and moreover if $\left\|a_{N}\right\|>\left\|a_{j}\right\|\forall j\neq N$,
then
$\left\|\sum_{j=1}^{\infty}a_{j}\right\|=\left\|a_{N}\right\|.$
The proof follows from the previous two results. The following example provides not only a key instance, but also one of the main fields of interest
in sequels and the author’s thesis.
###### Example 2.4 (Puiseux series).
Let $k$ be a field. We shall define $\mathbb{K}(k)=\mathbb{K}$, the field of
_Puiseux series_ (or Newton-Puiseux series) over $k$ with variable $x$. For
$a=a(x)\in\mathbb{K}$ there is an $n\in\mathbb{N}$, $m_{0}\in\mathbb{Z}$ and
coefficients $(c_{m})\subset k$ such that we write
$a=\sum_{m=m_{0}}^{\infty}c_{m}x^{\frac{m}{n}}.$
Addition and multiplication work as with any power series. Moreover, these
series converge from their partial sums under the non-Archimedean metric
defined next. We fix a value of $0<|x|=\varepsilon<1$ and give $k$ the trivial
norm by setting $|c|=1$ for all $c\in k^{\times}$. This information
uniquely determines a norm on $\mathbb{K}$ in the sense of subsection 2.1:
assuming $c_{m_{0}}\neq 0$,
$|a|=|x|^{\frac{m_{0}}{n}}.$
This dependence of the power series on the denominator $n$ is the only thing
preventing $\mathbb{K}$ from being complete with respect to
$\left|\cdot\right|$. The completion of $\mathbb{K}$, the _Levi-Civita field_
$\mathbb{\hat{K}}$, is the field with elements of the form
$\gamma=\sum_{j=0}^{\infty}c_{j}x^{r_{j}},\text{ where
}(r_{j})\subset\mathbb{Q},\ r_{j}\to\infty.$
###### Theorem 2.5 (Puiseux’s Theorem).
The Puiseux series, $\mathbb{K}(\mathbb{C})$, is the algebraic closure of the
formal Laurent series $\mathbb{C}((x))$.
This is a useful field to use when working with germs of algebraic curves in
$\mathbb{C}^{2}$. _Puiseux’s Theorem_ says that any irreducible curve
$P(x,y)=0$ crossing $\left\\{x=0\right\\}$ (except the line itself) can be
given locally by branches of the form $y=\gamma(x)$ where $\gamma$ is a
Puiseux series.
Note that $\mathbb{K}$ is the direct
limit $\varinjlim\mathbb{C}((x^{\frac{1}{n}}))$. The Levi-Civita field
$\mathbb{\hat{K}}$ is both algebraically closed and complete. When
$\operatorname{char}k=0$, the Puiseux series over $\overline{k}$,
$\mathbb{K}(\overline{k})$, is the algebraic closure of $k((x))$. However
$\overline{k((x))}$ is larger in positive characteristic.
Throughout this article, when we use Puiseux series, we will not declare an
$\varepsilon\in(0,1)$ for which $\left|x\right|=\varepsilon$ as in the
definition above. Instead, we will simply refer to the quantity
$\left|x\right|$ which intrinsically provides the same information. This also
serves as a visual reminder of the order of vanishing of a series, for
instance if $\left|a(x)\right|=\left|x\right|^{3/2}$ then the first non-zero
term of the Puiseux series $a(x)$ must be $cx^{3/2}$.
### 2.2. Disks
###### Definition 2.4.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field (similar definitions can be made for any non-Archimedean metric space). We define the open and closed disks of radius $r>0$ centred at $a\in K$, respectively, as follows.
$D(a,r):=\left\\{b\in K:\,|b-a|<r\right\\}$ $\overline{D}(a,r):=\left\\{b\in
K:\,|b-a|\leq r\right\\}$
If the radius $r$ of a disk is in the value group $|K^{\times}|$, we say this
disk (and its radius $r$) is _rational_ , otherwise, we say it is
_irrational_.
By convention we will allow notationally that
$\overline{D}(a,0)=\left\\{a\right\\}$, but not formally refer to this as a
‘disk’. The terminology of rationality follows from the notion in fields like
the Puiseux series, where the value group is
$\left|\mathbb{K}^{\times}\right|=\left|x\right|^{\mathbb{Q}}\cong\mathbb{Q}$,
however we will use these adjectives even if the value group is not isomorphic
to $\mathbb{Q}$. It follows immediately from the definition that for an
irrational radius $r$, open and closed disks coincide,
$D(a,r)=\overline{D}(a,r)$. The non-Archimedean metric results in some weird
quirks for disks. For example, consider $a,b\in K$ such that
$\left|a-b\right|=r>0$. Then in an Archimedean space, the overlap of the disks
$\overline{D}(a,r)$, $\overline{D}(b,r)$ would be a non-trivial proper subset,
much like the overlap of $D(a,r)$ and $D(b,r)$. In this non-Archimedean
setting we have that $\overline{D}(a,r)=\overline{D}(b,r)$ but $D(a,r)\cap
D(b,r)=\emptyset$.
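For instance in $\mathbb{Q}_{p}$ with $a=0$, $b=1$, and $r=1$: $\overline{D}(0,1)=\mathbb{Z}_{p}=\overline{D}(1,1)$, whereas $D(0,1)=p\mathbb{Z}_{p}$ and $D(1,1)=1+p\mathbb{Z}_{p}$ are disjoint residue classes.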
Moreover any two disks are either disjoint or nested (or equal).
Perhaps confusingly, the rational closed disk $\overline{D}(a,r)$ is never the
closure of the rational open disk $D(a,r)$. The following proposition, lifted
from [Ben19, Proposition 2.4], details these differences.
###### Proposition 2.6.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field.
1. (i)
Let $a,b\in K$, and $R\geq r>0$ such that $a\in D(b,R)$. Then
$D(a,r)\subseteq D(b,R)\quad\text{and}\quad D(a,R)=D(b,R).$
2. (ii)
Let $a,b\in K$, and $R\geq r>0$ such that $a\in\overline{D}(b,R)$. Then
$\overline{D}(a,r)\subseteq\overline{D}(b,R)\quad\text{and}\quad\overline{D}(a,R)=\overline{D}(b,R).$
3. (iii)
Let $D_{1},D_{2}$ be disks such that $D_{1}\cap D_{2}\neq\emptyset$. Then
either
$D_{1}\subseteq D_{2}\text{ or }D_{1}\supseteq D_{2}.$
4. (iv)
All disks in $K$ are clopen.
5. (v)
$K$ is totally disconnected.
###### Remark 2.3.
If $D$ is an open (resp. closed) disk then the smallest possible radius $r$
for which $D=D(a,r)$ (resp. $\overline{D}(a,r)$) is also its _diameter_ ,
$\operatorname{diam}(D)=\sup_{a,b\in D}\left|a-b\right|$. If $K$ is densely
valued, then there is a unique radius. See [Ben19, Proposition 2.5]. In
particular, whenever $K$ is algebraically closed, then
$\left|K^{\times}\right|$ is a $\mathbb{Q}$-vector space dense in
$(0,\infty)$.
It is common for non-Archimedean fields to not be _spherically complete_; that is, there may be a sequence of nested closed disks
$\overline{D}(a_{1},r_{1})\supset\overline{D}(a_{2},r_{2})\supset\overline{D}(a_{3},r_{3})\supset\overline{D}(a_{4},r_{4})\supset\cdots$
with empty intersection
$\bigcap_{n=1}^{\infty}\overline{D}(a_{n},r_{n})=\emptyset.$
Of course, in a complete field, if $r_{n}\to 0$, then this intersection is
always a singleton. As an example, consider the following sequence in the
complete non-Archimedean field $\mathbb{\hat{K}}$.
$\overline{D}\left(1,\left|x^{1-1/2}\right|\right)\supset\overline{D}\left(1+x^{1-1/2},\left|x^{1-1/3}\right|\right)\supset\overline{D}\left(1+x^{1-1/2}+x^{1-1/3},\left|x^{1-1/4}\right|\right)\supset\cdots$
$\cdots\supset\overline{D}\left(\sum_{n=1}^{N-1}x^{1-1/n},\left|x^{1-1/N}\right|\right)\supset\cdots$
One can check it has a non-empty intersection if and only if the infinite series $\sum x^{1-1/n}$ converges in $\mathbb{\hat{K}}$, and it does not: the exponents $1-1/n$ accumulate at $1$ rather than tending to $\infty$. We will return to this idea when we discuss Type IV norms in the
Berkovich projective line. In any case, if the intersection of nested closed
disks is non-empty, containing say $a\in K$, then
$\bigcap_{n=1}^{\infty}\overline{D}(a_{n},r_{n})=\overline{D}(a,r),$
where $r=\lim r_{n}$ is possibly zero.
### 2.3. Projective Line and Affinoids
So far we have discussed a non-Archimedean field $K=\mathbb{A}^{1}(K)$, which
is the affine line, but in general we shall work on the projective line
$\mathbb{P}^{1}(K)=\mathbb{A}^{1}(K)\cup\left\\{\infty\right\\}$ with its
usual definition. It is natural to extend the definition of disks and their
types to one that is invariant of the fractional linear transformations
$\operatorname{PGL}(2,K)$. We shall also recall (from [Ben19, §3.5]) important
topological objects called affinoids, which are merely disks subtracted from
disks.
###### Definition 2.5.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field. A _disk_ is any one
of the following.
* •
A _rational closed disk_ $D\subset\mathbb{P}^{1}(K)$ is either a rational
closed disk $D\subset K$ or $D=\mathbb{P}^{1}(K)\setminus E$ where $E\subset
K$ is a rational open disk.
* •
A _rational open disk_ $D\subset\mathbb{P}^{1}(K)$ is either a rational open
disk $D\subset K$ or $D=\mathbb{P}^{1}(K)\setminus E$ where $E\subset K$ is a
rational closed disk.
* •
An _irrational disk_ $D\subset\mathbb{P}^{1}(K)$ is either an irrational disk
$D\subset K$ or $D=\mathbb{P}^{1}(K)\setminus E$ where $E\subset K$ is an
irrational disk.
The generalisation of subsection 2.2 (iii) would include the possibility that
two disks cover the whole space.
###### Definition 2.6.
A _connected affinoid_ is a nonempty intersection of finitely many disks
$D_{1},\dots,D_{n}$. If all of the disks $D_{1},\dots,D_{n}$ are closed, open,
rational, or irrational, then the connected affinoid $D_{1}\cap\cdots\cap
D_{n}$ is respectively said to be closed, open, rational, or irrational. The
connected open affinoid of the form
$\left\\{r<\left|z-a\right|<R\right\\}=D(a,R)\setminus\overline{D}(a,r)$ is
called an _open annulus_.
If two connected affinoids $U$ and $V$ have non-empty intersection, then both $U\cap V$ and $U\cup V$ are connected affinoids.
### 2.4. Power Series and Constructing Seminorms
To study rational functions on a non-Archimedean field $K$, we will want to
understand them as analytic functions in neighbourhoods. Here, ‘analytic’
means given by Taylor or Laurent series, rather than some notion of
holomorphy; however, it remains true that any rational function is locally
analytic. In this section we will define and recall some notions of convergent
power series for non-Archimedean fields. See [Ben19, §3].
### 2.5. Taylor Series
Following subsection 2.1, a Taylor series
$f(y)=\sum_{n=0}^{\infty}c_{n}(y-a)^{n}\in K[[y-a]]$
will converge at $y=b$ if and only if $\left|c_{n}(b-a)^{n}\right|\to 0$. Let
$\left|b-a\right|=r$, then the series converges at $b$ if and only if it
converges at every $b^{\prime}\in\overline{D}(a,r)$. This behaviour is a
little nicer than the Archimedean situation in complex analysis.
###### Definition 2.7.
The _radius of convergence_ , $R\in[0,\infty]$, for a Taylor series $f(y)$ as
above is
$R=\sup\left\\{r\in\mathbb{R}:\,\left|c_{n}\right|r^{n}\to 0\right\\}.$
The _domain of convergence_ of a Taylor series $f(y)$, is defined to be
$\operatorname{dom}(f)=\left\\{b\in K:\,f(b)\text{ converges}\right\\}.$
###### Proposition 2.7.
Let $(K,\left|\cdot\right|)$ be a non-trivially and densely valued non-Archimedean field. Let $a\in K$ and $f(y)\in K[[y-a]]$ be a Taylor series with
radius of convergence $R$. Then
$\operatorname{dom}(f)=\begin{cases}D(a,R)=\overline{D}(a,R),&\text{if }R\notin\left|K^{\times}\right|;\\\ D(a,R),&\text{if }R\in\left|K^{\times}\right|\text{ and }\left|c_{n}\right|R^{n}\nrightarrow 0;\\\ \overline{D}(a,R),&\text{if }R\in\left|K^{\times}\right|\text{ and }\left|c_{n}\right|R^{n}\to 0.\end{cases}$
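For a hedged illustration (our own) over $\mathbb{C}_{p}$: the series $\sum_{n\geq 0}y^{n}$ has $R=1$ with $\left|c_{n}\right|R^{n}=1\nrightarrow 0$, so its domain is $D(0,1)$; by contrast, $\sum_{n\geq 0}p^{\lceil\sqrt{n}\rceil}y^{n}$ also has $R=1$, but now $\left|c_{n}\right|R^{n}=p^{-\lceil\sqrt{n}\rceil}\to 0$, so the domain is the closed disk $\overline{D}(0,1)$.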
###### Definition 2.8.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field. We define the power
series rings
$\mathcal{A}(a,r)=\left\\{f\in{K[[y-a]]}:\,D(a,r)\subseteq\operatorname{dom}(f)\right\\}$
$\overline{\mathcal{A}}(a,r)=\left\\{f\in{K[[y-a]]}:\,\overline{D}(a,r)\subseteq\operatorname{dom}(f)\right\\}$
Note that the polynomials live in every power series ring, moreover $K\subset
K[y]\subset\overline{\mathcal{A}}(a,r)\subset\mathcal{A}(a,r)$.
###### Definition 2.9 (Weierstrass Degree).
Let $a\in K$,
$f(y)=\sum_{n=0}^{\infty}c_{n}(y-a)^{n}$
be a non-zero power series with radius of convergence $r>0$, and let
$D=\operatorname{dom}(f)$. The _Weierstrass degree_
$\operatorname{wdeg}_{D}(f)$ of $f$ is defined according to two cases as
follows.
1. (i)
If $D=\overline{D}(a,r)$ is a rational closed disk, then
$\operatorname{wdeg}_{D}(f)=\max\left\\{d\in\mathbb{N}:\,\left|c_{d}\right|r^{d}=\max_{n}\left|c_{n}\right|r^{n}\right\\}.$
2. (ii)
If $D=D(a,r)$ is an irrational or a rational open disk, then
$\operatorname{wdeg}_{D}(f)=\min\left(\left\\{d\in\mathbb{N}:\,\left|c_{d}\right|r^{d}=\sup_{n}\left|c_{n}\right|r^{n}\right\\}\cup\left\\{\infty\right\\}\right).$
The Weierstrass degree plays a crucial role in understanding the zeros and images of analytic functions on a non-Archimedean field. One can easily check that $\operatorname{wdeg}_{D}(gh)=\operatorname{wdeg}_{D}(g)+\operatorname{wdeg}_{D}(h)$, and hence any power series with a multiplicative inverse (a unit) has Weierstrass degree $0$. The Weierstrass preparation theorem shows that this quantity really is a ‘degree’ on $D$.
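As a small computational sketch (again our illustration, assuming polynomial input so the list of coefficient norms is finite; `wdeg_closed` is a hypothetical helper), the Weierstrass degree on a rational closed disk can be read off from the term norms $\left|c_{n}\right|r^{n}$:

```python
from fractions import Fraction

def wdeg_closed(coeff_norms: list, r: Fraction) -> int:
    """Weierstrass degree on the rational closed disk of radius r:
    the largest index attaining max_n |c_n| r^n (Definition 2.9(i)).
    coeff_norms[n] holds the norm |c_n|."""
    term_norms = [c * r**n for n, c in enumerate(coeff_norms)]
    top = max(term_norms)
    return max(n for n, t in enumerate(term_norms) if t == top)

# f(y) = p + y + p*y^2 over C_p with p = 5, so |c_n| = (1/5, 1, 1/5).
norms = [Fraction(1, 5), Fraction(1), Fraction(1, 5)]
assert wdeg_closed(norms, Fraction(1)) == 1  # counts the zero of norm 1/5
assert wdeg_closed(norms, Fraction(5)) == 2  # D-bar(0,5) absorbs the zero of norm 5
```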
###### Theorem 2.8 (Weierstrass Preparation Theorem).
Let $(K,\left|\cdot\right|)$ be a complete non-Archimedean field, $a\in K$,
$r\in\left|K^{\times}\right|$, and $f\in\overline{\mathcal{A}}(a,r)$ be a non-
zero power series. Then there exists a monic polynomial $g\in K[y]$ of degree
$d=\operatorname{wdeg}_{\overline{D}(a,r)}(f)$ and a unit power series
$h\in\overline{\mathcal{A}}^{\times}(a,r)$ such that $f=gh$ and all the zeroes
of $g$ lie in $\overline{D}(a,r)$.
This has several consequences; immediately, we see that in an algebraically closed complete field, such a power series $f$ has $d$ zeroes (counting multiplicity) in the disk; secondly, $f$ is a $d$-to-$1$ mapping of $\overline{D}(a,r)$ onto its image.
###### Theorem 2.9.
Let $(K,\left|\cdot\right|)$ be a complete and algebraically closed non-
Archimedean field, $a\in K$, and
$f(y)=\sum_{n=0}^{\infty}c_{n}(y-a)^{n}$
be a non-zero power series converging on a disk $D$ of radius $r>0$. Suppose
that $d=\operatorname{wdeg}_{D}(f-c_{0})<\infty$, then
1. (i)
$f(D)$ is a disk of the same type as $D$ (rational closed, rational open, or
irrational), centred at $f(a)=c_{0}$, of radius $\left|c_{d}\right|r^{d}$; and
2. (ii)
$f:D\to f(D)$ is a $d$-to-$1$ mapping, counting multiplicity.
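For instance, $f(y)=y^{2}$ on $D=\overline{D}(0,r)$ has $d=\operatorname{wdeg}_{D}(y^{2})=2$, so $f(D)=\overline{D}(0,r^{2})$ and $f:D\to f(D)$ is $2$-to-$1$ counting multiplicity.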
### 2.6. Laurent Series
The _formal_ Laurent series $K((y-a))=\operatorname{Frac}K[[y-a]]$ represent
the set of power series in $(y-a)$ with infinitely many positive powers but
only _finitely many negative_ ones. It is not hard to see that most of the
results in the previous subsection apply to formal Laurent series on punctured
domains. To be precise, if
$f(y)=\sum_{n=-n_{0}}^{\infty}c_{n}(y-a)^{n}$
is a formal Laurent series ($n_{0}>0$) and
$f_{+}(y)=\sum_{n=0}^{\infty}c_{n}(y-a)^{n}$
converges on a disk $\overline{D}(a,r)$, then $f$ converges on
$\overline{D}^{*}(a,r)=\overline{D}(a,r)\setminus\left\\{a\right\\}$. However,
it will be useful to consider bi-infinite Laurent series when we inspect
rational maps.
For example, suppose $0<\left|a\right|<\left|b\right|$ and we want to consider
the rational map
$f(y)=\frac{1}{y-a}-\frac{1}{y-b}$
over the annulus
$U=\left\\{\left|a\right|<\left|z\right|<\left|b\right|\right\\}$. We may
expand both as series in powers of $y$ using the usual binomial trick
$(1-t)^{-1}=1+t+t^{2}+\cdots,$
but this converges when and only when $\left|t\right|<1$. Therefore on the
annulus $U$ we must expand $f$ as
$f(y)=\frac{1}{y}\frac{1}{1-\frac{a}{y}}+\frac{1}{b}\frac{1}{1-\frac{y}{b}}=\frac{1}{y}\sum_{n=0}^{\infty}\left(\frac{a}{y}\right)^{n}+\frac{1}{b}\sum_{n=0}^{\infty}\left(\frac{y}{b}\right)^{n}.$
In general, we can study rational maps through Laurent series converging on
annuli.
###### Definition 2.10.
Let $K$ be a densely valued non-Archimedean field and
$(c_{n})_{n=-\infty}^{\infty}\subset K$. A _Laurent series_ $f(y)$ about $a\in
K$ is a series of the form
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n}\in K[[y-a,(y-a)^{-1}]].$
The _inner_ and _outer radii of convergence_ for $f(y)$, $r,R\in[0,\infty]$, are defined respectively (if they exist) as
$r=\inf\left\\{s\in\mathbb{R}:\,\left|c_{n}\right|s^{n}\to 0\text{ as }n\to-\infty\right\\}$
$R=\sup\left\\{s\in\mathbb{R}:\,\left|c_{n}\right|s^{n}\to 0\text{ as }n\to\infty\right\\}.$
The _domain of convergence_ of a Laurent series $f(y)$ is defined to be
$\operatorname{dom}(f)=\left\\{b\in K:\,f(b)\text{ converges}\right\\}.$
###### Proposition 2.10.
Let $(K,\left|\cdot\right|)$ be a densely valued non-Archimedean field, let $a\in K$, and consider a Laurent series
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n}\in K[[y-a,(y-a)^{-1}]].$
Then $f(y)$ will converge at $y=b$ if and only if
$\left|c_{n}(b-a)^{n}\right|\to 0$ both as $n\to\infty$ and as $n\to-\infty$.
Hence $f(y)$ converges for some $y=b$ if and only if the inner and outer radii
of convergence $r$ and $R$ both exist (with $r\leq\left|b-a\right|\leq R$). In
this case moreover the domain of convergence, $\operatorname{dom}(f)$ is one
of the following annuli
$\left\\{r<\left|z-a\right|<R\right\\}=D(a,R)\setminus\overline{D}(a,r),$
$\left\\{r\leq\left|z-a\right|<R\right\\}=D(a,R)\setminus D(a,r),$
$\left\\{r<\left|z-a\right|\leq
R\right\\}=\overline{D}(a,R)\setminus\overline{D}(a,r),$
$\left\\{r\leq\left|z-a\right|\leq R\right\\}=\overline{D}(a,R)\setminus
D(a,r),$
depending only on the boundary cases, whether $\left|c_{n}\right|r^{n}\to 0$
and/or $\left|c_{n}\right|R^{n}\to 0$.
###### Definition 2.11.
Let
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n}$
be a Laurent series about $a\in K$. On any open annulus
$U=\left\\{r<\left|z-a\right|<R\right\\}\subset\operatorname{dom}(f)$ we
define
1. (i)
the _inner Weierstrass degree_ $\overline{\operatorname{wdeg}}_{a,r}(f)$ of
$f$ to be the largest integer $M\in\mathbb{Z}$ such that
$\left|c_{M}\right|r^{M}=\sup_{n\in\mathbb{Z}}\left|c_{n}\right|r^{n}$, or
$-\infty$ if there is no such integer; and
2. (ii)
the _outer Weierstrass degree_ $\operatorname{wdeg}_{a,R}(f)$ of $f$ to be the
smallest integer $N\in\mathbb{Z}$ such that
$\left|c_{N}\right|R^{N}=\sup_{n\in\mathbb{Z}}\left|c_{n}\right|R^{n}$, or
$\infty$ if there is no such integer.
Note that for Taylor series, $\overline{\operatorname{wdeg}}_{a,r}(f)=\operatorname{wdeg}_{\overline{D}(a,r)}(f)$ and $\operatorname{wdeg}_{a,r}(f)=\operatorname{wdeg}_{D(a,r)}(f)$.
Despite the hypothesis of the definition, one can think of the inner and outer Weierstrass degrees as functions of the radii and coefficients,
$\operatorname{wdeg}_{a,R}(f)=\min\left(\left\\{d\in\mathbb{Z}:\,\left|c_{d}\right|R^{d}=\sup_{n}\left|c_{n}\right|R^{n}\right\\}\cup\left\\{\infty\right\\}\right),$
ignorant of domains or annuli. As the radius $R$ changes for the annulus
$U=\left\\{r<\left|z-a\right|<R\right\\}$, $U$ may absorb zeroes of $f(y)$;
for each new zero (counted with multiplicity) the outer Weierstrass degree
will _increase_ by one, and on annuli without zeroes the Weierstrass degree
remains constant. This is made explicit in the proposition below, see [Ben19,
Proposition 3.32].
As suggested by the example above, all rational
functions have Laurent series expansions on annuli away from their poles. If
one considers the Weierstrass degree only as a function of the rational map
and the radius, independent of the Laurent series representation, then one can interpret this number as a count of poles and zeroes. Indeed, we can see that the
Weierstrass degree _decreases_ by one every time the radius crosses a pole of
$f(y)$.
###### Proposition 2.11.
Let $(K,\left|\cdot\right|)$ be an algebraically closed, complete non-
Archimedean field and let $f(y)\in K(y)$ be a rational function.
1. (i)
If $f$ has no poles in $U=\left\\{r<\left|z-a\right|<R\right\\}$, an open
annulus, then $f$ has a unique Laurent series expansion converging on $U$.
2. (ii)
Hence we may write $\overline{\operatorname{wdeg}}_{a,r}(f)$ and
$\operatorname{wdeg}_{a,R}(f)$ for the inner and outer Weierstrass degrees of
this unique series at radius $R$ about $a$. Hence these quantities are well
defined and finite for any $r,R>0$.
3. (iii)
If $f$ has no poles or zeros in $U=\left\\{r<\left|z-a\right|<R\right\\}$,
then the inner and outer Weierstrass degrees of $f$ on $U$ coincide, i.e. $\overline{\operatorname{wdeg}}_{a,r}(f)=\operatorname{wdeg}_{a,R}(f)$.
4. (iv)
If $f$ has $N_{\infty}$ poles and $N_{0}$ zeros in the open disk $D(a,R)$,
then
$\operatorname{wdeg}_{a,R}(f)=N_{0}-N_{\infty}.$
5. (v)
If $f$ has $N_{\infty}$ poles and $N_{0}$ zeros in the closed disk
$\overline{D}(a,R)$, then
$\overline{\operatorname{wdeg}}_{a,R}(f)=N_{0}-N_{\infty}.$
6. (vi)
If $f$ has $N_{\infty}$ poles and $N_{0}$ zeros in
$U=\left\\{r<\left|z-a\right|<R\right\\}$, then
$\operatorname{wdeg}_{a,R}(f)-\overline{\operatorname{wdeg}}_{a,r}(f)=N_{0}-N_{\infty}.$
7. (vii)
If $f$ has $N_{\infty}$ poles and $N_{0}$ zeros in the circle
$\overline{D}(a,R)\setminus D(a,R)$, then
$\overline{\operatorname{wdeg}}_{a,R}(f)-\operatorname{wdeg}_{a,R}(f)=N_{0}-N_{\infty}.$
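To illustrate with the earlier example: $f(y)=\frac{1}{y-a}-\frac{1}{y-b}=\frac{a-b}{(y-a)(y-b)}$ has no zeros, and for $\left|a\right|<R<\left|b\right|$ its only pole in $D(0,R)$ is at $y=a$. In the expansion above the term $c_{-1}y^{-1}$ strictly dominates at radius $R$, since $R^{-1}>\left|b\right|^{-1}$ and the remaining terms are smaller still, so $\operatorname{wdeg}_{0,R}(f)=-1=N_{0}-N_{\infty}$, as item (iv) predicts.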
One can extend Theorem 2.9 to the case of Laurent series and annuli as follows
(see [Ben19, Theorem 3.33]).
###### Theorem 2.12.
Let $(K,\left|\cdot\right|)$ be an algebraically closed, complete non-
Archimedean field. Let $0<r<R$, let $U=\left\\{r<\left|z-a\right|<R\right\\}$
be an open annulus, and let $f(y)$ be a non-constant convergent Laurent series
on $U\subset\operatorname{dom}(f)$. Write
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n}$
and suppose that $f-c_{0}$ has finite inner and outer Weierstrass degrees
$M\leq N\in\mathbb{Z}$, respectively. Let
$s=\left|c_{M}\right|r^{M}\quad\text{and}\quad t=\left|c_{N}\right|R^{N}.$
1. (i)
If $M<N$, then $f(U)=D\left(c_{0},\max\left\\{s,t\right\\}\right)$.
2. (ii)
If $M=N\geq 1$, then $f(U)=\left\\{s<\left|z-c_{0}\right|<t\right\\}$, and the
mapping $f:U\to f(U)$ is $M$-to-$1$.
3. (iii)
If $M=N\leq-1$, then $f(U)=\left\\{t<\left|z-c_{0}\right|<s\right\\}$, and the
mapping $f:U\to f(U)$ is $(-M)$-to-$1$.
It follows that in the last two cases,
$\left|f(z)-c_{0}\right|=\left|c_{M}(z-a)^{M}\right|$, for any $z\in U$.
Finally, we recall a description of how a rational map acts on affinoids; this is lifted from [Ben19, Proposition 3.29].
###### Theorem 2.13.
Let $(K,\left|\cdot\right|)$ be an algebraically closed, complete non-
Archimedean field, and $U\subseteq\mathbb{P}^{1}(K)$ be a connected affinoid.
Let $f(y)\in K(y)$ be a rational function of degree $d\geq 1$. Then
1. (i)
$f(U)$ is either $\mathbb{P}^{1}(K)$ or a connected affinoid of the same type,
if any, as $U$, and
2. (ii)
$f^{-1}(U)$ is a disjoint union $V_{1}\cup\cdots\cup V_{m}$ of connected
affinoids, each of the same type, if any, as $U$, and with $1\leq m\leq d$.
Moreover, for each $i=1,\dots,m$, there is an integer $1\leq d_{i}\leq d$ such
that every point in $U$ has exactly $d_{i}$ preimages in $V_{i}$, counting
multiplicity, and such that $d_{1}+\cdots+d_{m}=d$.
### 2.7. Seminorms of Power Series
Power series rings $K[[y-a]]$ can be equipped with many different non-
Archimedean seminorms, but we shall focus on those which agree with the
absolute value on $K$. These shall constitute the seminorms of the Berkovich
affine line. Whenever we define a seminorm $\left\|\cdot\right\|_{\zeta}$,
notationally we will use $\left\|\cdot\right\|_{\zeta}$ and $\zeta$
interchangeably. This will make more sense after defining the Berkovich affine
line which contains $K$ in the form of Type I points.
###### Definition 2.12 (Type I Seminorm).
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field and $a\in K$. We
define a function called a _Type I seminorm_
$\left\|\cdot\right\|_{a}:K[[y-a]]\to[0,\infty)$ by
$\left\|f\right\|_{a}=\left|f(a)\right|.$
###### Proposition 2.14.
Let $a\in K$, then $\left\|\cdot\right\|_{a}$ is a well defined non-
Archimedean, multiplicative seminorm on $K[[y-a]]\supset K[y]$, which extends
the norm $\left|\cdot\right|$ on $K$, i.e.
$\left\|c\right\|_{a}=\left|c\right|\quad\forall c\in K$. However it is never
a norm since $\left\|y-a\right\|_{a}=0$.
###### Definition 2.13 (Type II/III Norm).
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field, $a\in K$, and $R>0$. We
define the ring
$\mathcal{L}(a,R)=\left\\{f\in{K[[y-a,(y-a)^{-1}]]}:\,\operatorname{wdeg}_{a,R}(f)<\infty\right\\}.$
We also define a function
$\left\|\cdot\right\|_{\zeta(a,R)}:\mathcal{L}(a,R)\to[0,\infty)$ by
$\left\|f\right\|_{\zeta(a,R)}=\left|c_{d}\right|R^{d}$
where
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n}\in\mathcal{L}(a,R),\quad\operatorname{wdeg}_{a,R}(f)=d.$
We say $\zeta(a,R)$ is a Type II or Type III norm if $R$ is rational or not,
respectively.
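For instance, at the Gauss point $\zeta(0,1)$ one has $\left\|f\right\|_{\zeta(0,1)}=\max_{n}\left|c_{n}\right|$ for any polynomial $f(y)=\sum c_{n}y^{n}$, the familiar Gauss norm; e.g. over $\mathbb{Q}_{p}$, $\left\|y^{2}+py+p\right\|_{\zeta(0,1)}=1$.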
###### Proposition 2.15.
Let $a\in K$ and $r>0$. Then $\left\|\cdot\right\|_{\zeta(a,r)}$ is a well
defined non-Archimedean, multiplicative norm on
$\mathcal{L}(a,r)\supset\overline{\mathcal{A}}(a,r)\supset K[y]$ which extends
the norm $\left|\cdot\right|$ on $K$. Hence, for any disk
$D(b,R)\supsetneq\overline{D}(a,r)$, $\left\|\cdot\right\|_{\zeta(a,r)}$ is a
norm on $\overline{\mathcal{A}}(b,R)$ and $\mathcal{A}(b,R)$, moreover
$\left\|f\right\|_{\zeta(a,r)}\leq\left\|f\right\|_{\zeta(b,R)}\quad\forall
f\in\overline{\mathcal{A}}(b,R).$
If $f\in\overline{\mathcal{A}}(a,R)$ or more generally $f$ is a Laurent series
converging on a closed annulus
$E=\left\\{R-\varepsilon\leq\left|z-a\right|\leq R\right\\}$, i.e.
$\left|c_{n}\right|R^{n}\to 0$ as $n\to\pm\infty$, then the supremum
$\sup_{n}\left|c_{n}\right|R^{n}$ is attained at some $d\in\mathbb{Z}$ and
thus the outer Weierstrass degree of $f$ is finite. Therefore
$\mathcal{L}(a,r)\supset\overline{\mathcal{A}}(a,r)$. However if $f$ converges
on $D(a,r)$ but not $\overline{D}(a,r)$ then the sequence
$\left|c_{n}\right|r^{n}$ may not attain its supremum or be bounded.
By subsection 2.6, any rational function $f(y)$ for any radius $R$ has a
Laurent expansion on $U=\left\\{R-\varepsilon<\left|z-a\right|<R\right\\}$,
hence the above definition works for all rational functions $f\in
K(y)\subset\mathcal{L}(a,r)$. One could also consider the opposite annulus
$\left\\{R<\left|z-a\right|<R+\varepsilon\right\\}$ and try to make a similar
definition using the inner Weierstrass degree; or one could pick a different
centre $b\in\overline{D}(a,R)$. These all turn out to be equal. Further, the
Type II/III norm $\left\|\cdot\right\|_{\zeta(a,r)}$ is actually a ‘sup-norm’
on $\overline{\mathcal{A}}(a,r)$. This is showcased in [Ben19] but we shall
state and prove a little more.
###### Proposition 2.16.
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field, $a\in K$, $R>0$, and $f(y)$ be a non-constant rational function. Pick $\varepsilon>0$ such that $f$
has no poles on $V=\left\\{R<\left|z-a\right|<R+\varepsilon\right\\}$ and
write
$f(y)=\sum_{n\in\mathbb{Z}}c_{n}(y-a)^{n},\quad\overline{\operatorname{wdeg}}_{a,R}(f)=d.$
Then
$\left\|f\right\|_{\zeta(a,R)}=\left|c_{d}\right|R^{d}=\lim_{\left|b-a\right|\to R}\left|f(b)\right|,$
where the limit excludes $\left|b-a\right|=R$. Moreover,
$\left\|f\right\|_{\zeta(a,R)}=\left|f(b)\right|$ for every $b$ in all but
finitely many residue classes of $\overline{D}(a,R)$, i.e. avoiding any open
disks $D(b,R)\subset\overline{D}(a,R)$ containing zeroes or poles of $f$.
Furthermore, $\zeta(a,R)$ depends only on the choice of closed disk
$\overline{D}(a,R)$. That is for any $b\in\overline{D}(a,R)$ we have that
$\zeta(a,R)=\zeta(b,R).$
This is mostly remarkable because the values
$\overline{\operatorname{wdeg}}_{a,R}(f)$ and $\operatorname{wdeg}_{a,R}(f)$
could be different, deriving from distinct Laurent series. The result derives
from the following:
###### Proposition 2.17.
Let $(K,\left|\cdot\right|)$ be a densely valued non-Archimedean field, $a\in
K$, $r>0$, and consider the non-zero Taylor series
$f(y)=\sum_{n=0}^{\infty}c_{n}(y-a)^{n}.$
1. (i)
If $f$ converges on $D=D(a,r)$ with Weierstrass degree
$\operatorname{wdeg}_{a,r}(f)=d$, then
$\left\|f\right\|_{\zeta(a,r)}=\left|c_{d}\right|r^{d}=\sup_{b\in
D}\left|f(b)\right|=\sup_{b\in
D}\left\|f\right\|_{b}=\lim_{\left|b-a\right|\nearrow r}\left\|f\right\|_{b}.$
2. (ii)
If $f$ converges on $E=\overline{D}(a,r)$ with Weierstrass degree
$\overline{\operatorname{wdeg}}_{a,r}(f)=\overline{d}$, then
$\left\|f\right\|_{\zeta(a,r)}=\left|c_{\overline{d}}\right|r^{\overline{d}}=\sup_{b\in
E}\left|f(b)\right|=\sup_{b\in E}\left\|f\right\|_{b}.$
3. (iii)
If $f$ converges on $D(a,R)\supsetneq\overline{D}(a,r)$, then
$\left\|f\right\|_{\zeta(a,r)}=\left|c_{\overline{d}}\right|r^{\overline{d}}=\lim_{\left|b-a\right|\searrow r}\left\|f\right\|_{b}.$
4. (iv)
Moreover, $\left\|f\right\|_{\zeta(a,r)}=\left|f(b)\right|$ for every $b$ in
all but finitely many residue classes of $\overline{D}(a,r)$, to be precise we
could pick any $b\in\overline{D}(a,r)\setminus(D(a_{1},r)\cup\cdots\cup
D(a_{\overline{d}},r))$, where $(a_{j})$ are the finitely many solutions to
$f(y)=0$.
###### Proof.
Since $f\in\mathcal{L}(a,r)$, we have by definition
$\left\|f\right\|_{\zeta(a,r)}=\left|c_{d}\right|r^{d}$. Since $f$ has
Weierstrass degree $d=\operatorname{wdeg}_{a,r}(f)$, for $n>d$ we have that
$\left|c_{n}\right|r^{n}\leq\left|c_{d}\right|r^{d}$ and for $n<d$ we have
$\left|c_{n}\right|r^{n}<\left|c_{d}\right|r^{d}$. By rearranging the former
we get that $\left|c_{n}\right|r^{n-d}\leq\left|c_{d}\right|$ and hence for
any $s<r$ we have $\left|c_{n}\right|s^{n-d}<\left|c_{d}\right|$ which implies
$\left|c_{n}\right|s^{n}<\left|c_{d}\right|s^{d}$ for every $n>d$. On the
other hand, for the finitely many $0\leq n<d$ we can use continuity to find an
$s_{0}<r$ large enough such that for every $s\in(s_{0},r)$ the latter
inequality remains true $\left|c_{n}\right|s^{n}<\left|c_{d}\right|s^{d}$.
Suppose that $b\in D(a,r)$, specifically with $\left|b-a\right|=s<r$. Then by
subsection 2.1
$\left|f(b)\right|=\left|\sum_{n=0}^{\infty}c_{n}(b-a)^{n}\right|\leq\max_{n}\left|c_{n}\right|\left|b-a\right|^{n}=\max_{n}\left|c_{n}\right|s^{n}\leq\max_{n}\left|c_{n}\right|r^{n}=\left|c_{d}\right|r^{d},$
and moreover if $s_{0}<s<r$, then
$\left|f(b)\right|=\left|c_{d}\right|\left|b-a\right|^{d}=\left|c_{d}\right|s^{d}$,
so as $s\nearrow r$, we have $\left|f(b)\right|\to\left|c_{d}\right|r^{d}$.
Items (ii), (iii) can be proven similarly.
It is enough to show (iv) in the completion of the algebraic closure of $K$,
since disregarding finitely many disks in a field extension will do the same
in $K$. If $\overline{d}=0$, then by subsection 2.1
$\left|f(b)\right|=\left|c_{0}\right|$ for every $b\in\overline{D}(a,r)$.
Otherwise, $\overline{d}=\overline{\operatorname{wdeg}}_{a,r}(f-c_{0})$ and
hence $\left|c_{0}\right|\leq\left\|f\right\|_{\zeta(a,r)}$. By Theorem 2.9 we
have that
$f(\overline{D}(a,r))=\overline{D}\left(c_{0},\left\|f\right\|_{\zeta(a,r)}\right)=\overline{D}\left(0,\left\|f\right\|_{\zeta(a,r)}\right)$,
$f(D(b,r))=D\left(f(b),\left\|f\right\|_{\zeta(a,r)}\right)$, and there are at
most $\overline{d}$ such open disks $D(a_{1},r),\dots,D(a_{\overline{d}},r)$
whose image contains $0$, i.e.
$f(D(a_{j},r))=D\left(0,\left\|f\right\|_{\zeta(a,r)}\right)$. Hence for any
$b$ _not_ in such a disk, we find
$f(b)\in\overline{D}\left(0,\left\|f\right\|_{\zeta(a,r)}\right)\setminus
D\left(0,\left\|f\right\|_{\zeta(a,r)}\right)$ and so
$\left|f(b)\right|=\left\|f\right\|_{\zeta(a,r)}$. ∎
Observe that if one considers the definition of the Type II/III norm $\zeta(a,R)$ with $R=0$, we recover the Type I _semi_ norm. We make one last definition of a
seminorm; later we will see this is necessary to complete the Berkovich line.
###### Definition 2.14 (Type IV Norm).
Let $(K,\left|\cdot\right|)$ be a non-Archimedean field, and suppose the
following nested sequence of disks has empty intersection.
$\overline{D}(a_{1},r_{1})\supset\overline{D}(a_{2},r_{2})\supset\overline{D}(a_{3},r_{3})\supset\overline{D}(a_{4},r_{4})\supset\cdots$
Also let
$\mathcal{A}(\zeta)=\bigcup_{n}\overline{\mathcal{A}}(a_{n},r_{n}).$
We define a function
$\left\|\cdot\right\|_{\zeta}:\mathcal{A}(\zeta)\to[0,\infty)$ by
$\left\|f\right\|_{\zeta}=\inf_{n\geq N}\left\|f\right\|_{\zeta(a_{n},r_{n})}$
where $f\in\overline{\mathcal{A}}(a_{N},r_{N})\subset\mathcal{A}(\zeta)$. We
call $\zeta$ the _Type IV norm_ associated to the sequence
$(\overline{D}(a_{n},r_{n}))$.
Note that subsection 2.7 says that the sequence
$\left\|f\right\|_{\zeta(a_{n},r_{n})}$ above is decreasing, so this infimum
is a limit. Moreover for a fixed $f$, one can show that for large enough $n$,
$f\in\overline{\mathcal{A}}^{\times}(a_{n},r_{n})$ is a unit, and so the
sequence $\left\|f\right\|_{\zeta(a_{n},r_{n})}$ is eventually constant.
Consider a rational function $f(y)$. We know that $f$ has finitely many poles, so for some large $N$ these poles lie outside of $\overline{D}(a_{N},r_{N})$, and hence $f\in\overline{\mathcal{A}}(a_{n},r_{n})$ for every $n\geq N$. Therefore $K(y)\subset\mathcal{A}(\zeta)$ for every Type IV $\zeta$.
###### Definition 2.15.
We say the sequences
$\overline{D}(a_{1},r_{1})\supset\overline{D}(a_{2},r_{2})\supset\overline{D}(a_{3},r_{3})\supset\overline{D}(a_{4},r_{4})\supset\cdots$
and
$\overline{D}(b_{1},s_{1})\supset\overline{D}(b_{2},s_{2})\supset\overline{D}(b_{3},s_{3})\supset\overline{D}(b_{4},s_{4})\supset\cdots$
are _equivalent_ if for any $n\in\mathbb{N}$ we can find an $N\in\mathbb{N}$
such that $\overline{D}(a_{n},r_{n})\supset\overline{D}(b_{N},s_{N})$ and vice
versa.
###### Proposition 2.18.
Type IV norms are non-Archimedean multiplicative norms on
$\mathcal{A}(\zeta)\supset K(y)$. If $\zeta$ and $\zeta^{\prime}$ are Type IV
norms associated to equivalent nested sequences of disks, then
$\zeta=\zeta^{\prime}$.
### 2.8. The Berkovich Projective Line
It is an easy exercise to show that given any ring $A$, a non-Archimedean
multiplicative norm $\left\|\cdot\right\|$ on $A$ extends to a non-Archimedean
norm on $\operatorname{Frac}A$ by
$\left\|a/b\right\|=\left\|a\right\|/\left\|b\right\|$. This provides a simpler proof, avoiding the annuli and domains of convergence used in the last section, that the Type II, III, and IV norms automatically extend to $K(y)$.
Hence the set of non-Archimedean multiplicative norms on $K[y]$ is the same as
those on $K(y)$, and it would be natural to define the Berkovich line simply
as the set of such norms on $K(y)$. It would be easy to define the action of a
rational map $\phi$ on this space of norms by
$\left\|f(y)\right\|_{\phi_{*}(\zeta)}=\left\|f\circ\phi(y)\right\|_{\zeta}$,
because $f\in K(y)\implies f\circ\phi\in K(y)$. Unfortunately, this
construction fails in the case of Type I _seminorms_ , since
$\left\|f/g\right\|_{a}$ is infinite when $g(a)=0$. One of the important
reasons to define the Berkovich projective line
$\mathbb{P}^{1}_{\text{an}}(K)$ is that we would like a complete and compact
space (indeed to compactify $\mathbb{P}^{1}(K)$), and the Type I points, seen
as $\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$ will play a key
role as (limiting) endpoints in the resulting tree. Since the Type I points
are ill-defined on $K(y)$, we shall define the Berkovich affine line as
seminorms over $K[y]$. However the reader is encouraged to bear in mind that
the Type II, III, and IV points in the space are norms on $K(y)$, and Type I
norms can be treated as a special case.
Throughout the remainder of this section let $K$ denote an algebraically
closed field and $\mathbb{\mathbb{C}}_{v}$ a complete and algebraically closed
field, which in most cases will be the completion of the former
$\mathbb{\mathbb{C}}_{v}=\widehat{K}$.
### 2.9. The Berkovich Affine Line
###### Definition 2.16.
The Berkovich affine line
$\mathbb{A}^{1}_{\text{an}}=\mathbb{A}^{1}_{\text{an}}(K)$ is the set of non-
Archimedean multiplicative seminorms on $K[y]$ extending
$(K,\left|\cdot\right|)$. (Meaning
$\left\|a\right\|=\left|a\right|\quad\forall a\in K$.) A topology is given to
$\mathbb{A}^{1}_{\text{an}}(K)$ by taking the coarsest topology for which the evaluation map $\zeta\mapsto\left\|f\right\|_{\zeta}$ is continuous for every $f\in K[y]$.
We often refer to the elements
$\zeta\in\mathbb{A}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$ (or
$\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$) as _points_ , but we
shall write $\left\|\cdot\right\|_{\zeta}$ when we think of $\zeta$ as a
seminorm. It turns out that $\mathbb{A}^{1}_{\text{an}}$ is a tree; one way to
see this is through its poset structure which we define now and expand upon
later.
###### Definition 2.17.
We define a partial order $\preceq$ on $\mathbb{A}^{1}_{\text{an}}$ by
$\zeta\preceq\xi\iff\left\|f\right\|_{\zeta}\leq\left\|f\right\|_{\xi}\quad\forall
f\in\mathbb{\mathbb{C}}_{v}[y].$
###### Definition 2.18.
Let $\zeta\in\mathbb{A}^{1}_{\text{an}}$. Its _absolute value_ is
$\left|\zeta\right|=\left\|y\right\|_{\zeta}.$
Its _diameter_ is
$\operatorname{diam}(\zeta)=\inf_{a\in\mathbb{\mathbb{C}}_{v}}\left\|y-a\right\|_{\zeta}.$
It is clear that the diameter exists and
$\operatorname{diam}(\zeta)\leq\left|\zeta\right|$. Some key examples here are
that $\operatorname{diam}(\zeta(a,r))=r$ and
$\left|\zeta(a,r)\right|=\max\left\\{\left|a\right|,r\right\\}$, while a Type I point $a$ has
$\operatorname{diam}(a)=0$ and absolute value $\left|a\right|$.
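Both computations follow from the behaviour of $\left\|\cdot\right\|_{\zeta(a,r)}$ on linear polynomials: writing $y-b=(a-b)+(y-a)$ gives
$\left\|y-b\right\|_{\zeta(a,r)}=\max\left\\{\left|a-b\right|,r\right\\}\geq r,$
with equality exactly when $b\in\overline{D}(a,r)$; taking $b=0$ yields the absolute value, and the infimum defining the diameter is attained at $b=a$.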
###### Definition 2.19.
We define the _open_ and _closed Berkovich disks_ of radius $r>0$ centred at $a\in K$, respectively, as follows.
$D_{\text{an}}(a,r):=\left\\{\zeta\in\mathbb{A}^{1}_{\text{an}}(K):\,\left\|y-a\right\|_{\zeta}<r\right\\}$
$\overline{D}_{\text{an}}(a,r):=\left\\{\zeta\in\mathbb{A}^{1}_{\text{an}}(K):\,\left\|y-a\right\|_{\zeta}\leq r\right\\}$
If $r\in|K^{\times}|$, we say this disk is _rational_ , otherwise, we say it
is _irrational_.
There is a natural inclusion of
$K=\mathbb{A}^{1}(K)\subset\mathbb{A}^{1}_{\text{an}}(K)$; recall that for
every $a\in K$ we have the Type I seminorm $\left\|\cdot\right\|_{a}$. We call
these the _classical points_ of $\mathbb{A}^{1}_{\text{an}}$, again referring
to $a$ as a ‘point’ of $\mathbb{A}^{1}_{\text{an}}$. Furthermore we can
consider the classical disk $D(a,r)$ as a subset of
$\mathbb{A}^{1}_{\text{an}}$. In the following proposition and throughout, we
allow for radii to be zero where $D_{\text{an}}(a,r)=\emptyset$ and
$\overline{D}_{\text{an}}(a,r)=\left\\{a\right\\}$, although these are not
‘disks’.
###### Proposition 2.19.
The following hold.
* •
$D_{\text{an}}(a,r)$ is open and $\overline{D}_{\text{an}}(a,r)$ is closed.
* •
$\displaystyle D(a,r)=\mathbb{A}^{1}(K)\cap D_{\text{an}}(a,r)$
* •
$\displaystyle\overline{D}(a,r)=\mathbb{A}^{1}(K)\cap\overline{D}_{\text{an}}(a,r)$
* •
$\displaystyle D_{\text{an}}(a,r)\subseteq D_{\text{an}}(b,s)\iff
D(a,r)\subseteq D(b,s)$
* •
$\displaystyle\overline{D}_{\text{an}}(a,r)\subseteq\overline{D}_{\text{an}}(b,s)\iff\overline{D}(a,r)\subseteq\overline{D}(b,s)$
* •
$\displaystyle\zeta(a,r)\in D_{\text{an}}(b,s)\iff\overline{D}(a,r)\subseteq
D(b,s)$
* •
$\displaystyle\zeta(a,r)\in\overline{D}_{\text{an}}(b,s)\iff\overline{D}(a,r)\subseteq\overline{D}(b,s)$
* •
$\displaystyle\zeta(a,r)\succeq\xi\quad\iff\xi\in\overline{D}_{\text{an}}(a,r)$
* •
$\displaystyle\overline{D}_{\text{an}}(a,r)=\left\\{\zeta(a,r)\right\\}\
\cup\bigcup_{b\in\overline{D}(a,r)}D_{\text{an}}(b,r)$.
Hence $\zeta(a,r)$ is the unique boundary point of
$\overline{D}_{\text{an}}(a,r)$.
* •
$\displaystyle\operatorname{diam}(\xi)\leq
r\quad\forall\xi\in\overline{D}_{\text{an}}(a,r)$ with equality if and only if
$\xi=\zeta(a,r)$.
* •
$\displaystyle\zeta\in\overline{D}_{\text{an}}(a,r)\implies\left\|y-b\right\|_{\zeta}=\left|a-b\right|$
for every $b\notin\overline{D}(a,r)$.
Hence if $\zeta,\xi\in\overline{D}_{\text{an}}(a,r)$ then $\zeta=\xi$ if and
only if $\left\|y-b\right\|_{\zeta}=\left\|y-b\right\|_{\xi}$ for every
$b\in\overline{D}(a,r)$.
We see some clear differences compared to classical disks. For irrational
$r\notin\left|K^{\times}\right|$, whereas we had $\overline{D}(a,r)=D(a,r)$,
there is now a distinction:
$\overline{D}_{\text{an}}(a,r)=\left\\{\zeta(a,r)\right\\}\cup
D_{\text{an}}(a,r)$. It is for precisely these reasons that
$\mathbb{A}^{1}_{\text{an}}$ is connected while $\mathbb{A}^{1}(K)$ is not:
for any $a,r$, $D_{\text{an}}(a,r)$ is never closed and
$\overline{D}_{\text{an}}(a,r)$ is never open.
In the previous section we laid out four types of seminorm, with some
equivalent definitions. The following theorem of Berkovich [Ber90] says these
are the only four.
###### Theorem 2.20 (Berkovich’s Classification).
Let $\zeta\in\mathbb{A}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$. Then $\zeta$ is of
exactly one of the following four types.
1. I:
$\left\|\cdot\right\|_{\zeta}=\left\|\cdot\right\|_{a}$ for some unique
$a\in\mathbb{\mathbb{C}}_{v}$.
2. II:
$\left\|\cdot\right\|_{\zeta}=\left\|\cdot\right\|_{\zeta(a,r)}$ corresponding
to a unique rational closed disk
$\overline{D}(a,r)\subset\mathbb{\mathbb{C}}_{v}$.
3. III:
$\left\|\cdot\right\|_{\zeta}=\left\|\cdot\right\|_{\zeta(a,r)}$ corresponding
to a unique irrational disk $\overline{D}(a,r)\subset\mathbb{\mathbb{C}}_{v}$.
4. IV:
$\left\|\cdot\right\|_{\zeta}=\lim_{n\to\infty}\left\|\cdot\right\|_{\zeta_{n}}$
where $\zeta_{n}=\zeta(a_{n},r_{n})$ corresponds to a decreasing nested
sequence of closed disks $\overline{D}(a_{n},r_{n})$ with empty intersection
in $\mathbb{\mathbb{C}}_{v}$. The sequence is unique up to equivalence as in
Definition 2.15.
###### Sketch Proof.
The uniqueness claims follow from the results of subsection 2.7, so there is little more to say about them. The trick to classification is the following:
* •
Any $\zeta$ is contained in a nested sequence of closed Berkovich disks
$\overline{D}_{\text{an}}(a_{n},r_{n})$ with
$r_{n}\to\operatorname{diam}(\zeta)$; starting with
$\zeta\in\overline{D}_{\text{an}}(0,r_{0})$ for
$r_{0}=\left|\zeta\right|=\left\|y\right\|_{\zeta}$.
* •
$\left\|\cdot\right\|_{\zeta}\leq\lim_{n\to\infty}\left\|\cdot\right\|_{\zeta_{n}}$
where $\zeta_{n}=\zeta(a_{n},r_{n})$. We will prove equality.
* •
If the limit of disks is non-empty then
$\bigcap_{n}\overline{D}(a_{n},r_{n})=\overline{D}(a,r)$ (with $r$ possibly
$0$) and therefore $\zeta=\zeta(a,r)$ because
$\zeta\in\overline{D}_{\text{an}}(a,r)$ and has diameter $r$ (by subsection
2.9).
* •
If $r_{n}\to 0$ then the limit $\bigcap_{n}\overline{D}(a_{n},r_{n})$ is the
single point $\left\\{a\right\\}$ and $\zeta=a$ is Type I.
* •
Otherwise $\zeta=\zeta(a,r)$ is Type II iff
$r\in\left|\mathbb{\mathbb{C}}_{v}^{\times}\right|$ and Type III iff
$r\notin\left|\mathbb{\mathbb{C}}_{v}^{\times}\right|$.
* •
If $\bigcap_{n}\overline{D}(a_{n},r_{n})=\emptyset$, then $\zeta$ is the Type
IV point $\lim_{n\to\infty}\zeta_{n}$ associated with this sequence. Indeed,
both are in $\overline{D}_{\text{an}}(a_{n},r_{n})$ for every $n$ so they
agree on $y-b$ for every $b$ outside of
$\bigcap_{n}\overline{D}(a_{n},r_{n})$, by the last part of subsection 2.9.
∎
### 2.10. The Berkovich Projective Line
The projective line of a field $\mathbb{P}^{1}$ is often defined as
$\mathbb{A}^{1}\cup\left\\{\infty\right\\}$, or more rigorously as two
$\mathbb{A}^{1}$ affine lines glued over
$\mathbb{A}^{1}\setminus\left\\{0\right\\}$ using the transition map $z\mapsto
1/z$. Here we can do either for the Berkovich projective line. In order for
this transition map to work, we need to extend seminorms on $K[y]$ to
$K\left[y,y^{-1}\right]$. Fortunately, this is not a big problem. For any
$f\in K[y,y^{-1}]$ we can write $f(y)=g(y)/y^{d}$ with $g\in K[y]$ and some
$d\in\mathbb{N}$. Every seminorm
$\zeta\in\mathbb{A}^{1}_{\text{an}}\setminus\left\\{0\right\\}$ can be defined
on $f$ by
$\left\|f(y)\right\|_{\zeta}=\left\|g(y)/y^{d}\right\|_{\zeta}=\left\|g(y)\right\|_{\zeta}/\left\|y^{d}\right\|_{\zeta}$
since $\left\|y\right\|_{\zeta}\neq 0$. Conversely, any seminorm on
$K[y,y^{-1}]$ is in $\mathbb{A}^{1}_{\text{an}}\setminus\left\\{0\right\\}$.
###### Definition 2.20.
We define the _Berkovich projective line_
$\mathbb{P}^{1}_{\text{an}}=\mathbb{P}^{1}_{\text{an}}(K)$ with two charts
given by $\mathbb{A}^{1}_{\text{an}}(K)$ using the homeomorphism
$\mathbb{A}^{1}_{\text{an}}(K)\setminus\left\\{0\right\\}\to\mathbb{A}^{1}_{\text{an}}(K)\setminus\left\\{0\right\\}$
given by
$\left\|f(y)\right\|_{\frac{1}{\zeta}}=\left\|f\left(\frac{1}{y}\right)\right\|_{\zeta}.$
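For example, on Type II/III points this transition map acts by inverting radii: $1/\zeta(0,r)=\zeta(0,1/r)$, since by multiplicativity
$\left\|y-b\right\|_{\frac{1}{\zeta(0,r)}}=\frac{\left\|1-by\right\|_{\zeta(0,r)}}{\left\|y\right\|_{\zeta(0,r)}}=\frac{\max\left\\{1,\left|b\right|r\right\\}}{r}=\max\left\\{\left|b\right|,\tfrac{1}{r}\right\\}=\left\|y-b\right\|_{\zeta(0,1/r)}.$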
###### Remark 2.4.
We will see later that this transition map naturally extends to an involution
$\zeta\mapsto 1/\zeta$ of $\mathbb{P}^{1}_{\text{an}}$ with the point $\infty$
defined as $1/0$. We also refer to $\infty$ as a (Type I) classical point and
part of the natural inclusion
$\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$. With this hindsight,
we can expand the definition of the classical points $a\in\mathbb{P}^{1}$ to
evaluate elements of $K(y)$ as follows: for any $f\in K(y)$ we define
$\left\|f\right\|_{a}=\left|f(a)\right|$, allowing for
$\left\|f\right\|_{a}=\infty$ if and only if $f(a)=\infty$. Finally, we can
make sense of $\infty=1/0$:
$\left|f(\infty)\right|=\left\|f(y)\right\|_{\infty}=\left\|f(y)\right\|_{\frac{1}{0}}=\left\|f\left(\frac{1}{y}\right)\right\|_{0}.$
For more, see Benedetto [Ben19, §6].
The most ‘central’ point in $\mathbb{P}^{1}_{\text{an}}$ is the Type II point
$\zeta(0,1)$; we call this the _Gauss point_. This is mostly because the ring
of integers $\mathcal{O}_{K}$ corresponds to $\overline{D}(0,1)$ and the
residue classes $\overline{b}\in k=\mathcal{O}_{K}/\mathcal{M}_{K}$ correspond
to the open disks $D(b,1)\subset\overline{D}(0,1)$. The corresponding
Berkovich open disks will play a significant role in analysing maps etc. Often
when considering a Type II point $\zeta(a,r)$, it will be useful to find a
$\operatorname{PGL}(2,\mathbb{\mathbb{C}}_{v})$ transformation, changing
coordinates in $\mathbb{P}^{1}_{\text{an}}$, such that $\zeta(a,r)$ is moved
to $\zeta(0,1)$.
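For instance, if $r=\left|c\right|$ for some $c\in\mathbb{\mathbb{C}}_{v}^{\times}$ (possible precisely because $\zeta(a,r)$ is Type II), then $\eta(y)=(y-a)/c$ maps $\overline{D}(a,r)$ onto $\overline{D}(0,1)$ and carries the sup norm $\left\|\cdot\right\|_{\zeta(a,r)}$ to the Gauss norm, so it moves $\zeta(a,r)$ to $\zeta(0,1)$.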
### 2.11. Berkovich Disks, Affinoids, and Directions
We further extend our definitions of Berkovich disks and affinoids.
###### Definition 2.21.
A _Berkovich disk_ in $\mathbb{P}^{1}_{\text{an}}$ is any one of the
following.
* •
A _closed Berkovich disk_ $D\subset\mathbb{P}^{1}_{\text{an}}$ is either
$\overline{D}_{\text{an}}(a,r)\subset\mathbb{A}^{1}_{\text{an}}$ or
$\mathbb{P}^{1}_{\text{an}}\setminus D_{\text{an}}(a,r)$.
* •
An _open Berkovich disk_ $D\subset\mathbb{P}^{1}_{\text{an}}$ is either
$D_{\text{an}}(a,r)\subset\mathbb{A}^{1}_{\text{an}}$ or
$D=\mathbb{P}^{1}_{\text{an}}\setminus\overline{D}_{\text{an}}(a,r)$.
* •
A disk is _rational_ if $r\in\left|\mathbb{\mathbb{C}}_{v}^{\times}\right|$ and
_irrational_ otherwise.
###### Definition 2.22.
A _connected Berkovich affinoid_ is a nonempty intersection of finitely many
Berkovich disks $D_{1},\dots,D_{n}$. If all of the disks $D_{1},\dots,D_{n}$
are closed, open, rational, or irrational, then the connected affinoid
$D_{1}\cap\cdots\cap D_{n}$ is respectively said to be closed, open, rational,
or irrational.
The connected open affinoid of the form
$\left\\{\zeta\in\mathbb{A}^{1}_{\text{an}}:\,r<\left\|y-a\right\|_{\zeta}<R\right\\}=D_{\text{an}}(a,R)\setminus\overline{D}_{\text{an}}(a,r)$
is called an _open annulus_. We will often abuse notation and write this as
$\left\\{r<\left|\zeta-a\right|<R\right\\},$
distinguished from the classical annulus by the use of the Greek $\zeta$
instead of the Roman $z$.
A _Berkovich affinoid_ is a finite union of connected Berkovich affinoids. We
may apply the usual adjectives as appropriate.
###### Theorem 2.21.
* •
The set of open connected Berkovich affinoids in $\mathbb{P}^{1}_{\text{an}}$
forms a basis for the weak topology.
* •
In particular, for any Type II point $\zeta(a,r)$ and open set $U$ containing
$\zeta(a,r)$, $U$ contains $D_{\text{an}}(a,R)\setminus(D_{1}\cup\cdots\cup
D_{n})$ where each $D_{j}$ is a closed Berkovich disk of the form
$\overline{D}_{\text{an}}(b,s)\subsetneq\overline{D}_{\text{an}}(a,r)$ and
$R>r$.
* •
The Berkovich projective line
$\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$ is a connected,
complete, compact, Hausdorff space. Moreover, every connected Berkovich affinoid is indeed connected in this topology, justifying the name.
From subsection 2.9 it is clear that for any Type II or III point,
$\mathbb{P}^{1}_{\text{an}}\setminus\zeta(a,r)$ is a union of open Berkovich
disks. To be precise
$\mathbb{P}^{1}_{\text{an}}\setminus\zeta(a,r)=\left(\mathbb{P}^{1}_{\text{an}}\setminus\overline{D}_{\text{an}}(a,r)\right)\cup\bigcup_{b\in\overline{D}(a,r)}D_{\text{an}}(b,r).$
However we should really write this as a disjoint union. In the special case
of the Gauss point, it is somewhat easier to see that we should have one
distinct open disk in $\mathbb{P}^{1}_{\text{an}}\setminus\zeta(a,r)$ for
every element of $\mathbb{P}^{1}(k)$, where $k$ is the residue field.
###### Definition 2.23.
Let $\zeta\in\mathbb{P}^{1}_{\text{an}}$. The connected components of
$\mathbb{P}^{1}_{\text{an}}\setminus\left\\{\zeta\right\\}$ are called the
_residue classes_ , or _directions_ , or _tangent vectors_ at $\zeta$. The set
of directions at $\zeta$ is denoted $T_{\zeta}\mathbb{P}^{1}_{\text{an}}$. For
any $\xi\in\mathbb{P}^{1}_{\text{an}}\setminus\left\\{\zeta\right\\}$ we
define $\vec{v}(\xi)$ to be the (unique) direction at $\zeta$ containing
$\xi$.
###### Proposition 2.22.
Let $\zeta\in\mathbb{P}^{1}_{\text{an}}$. Then
$T_{\zeta}\mathbb{P}^{1}_{\text{an}}$ can be described by
1. (i)
If $\zeta$ is Type I or IV, then there is only one direction at $\zeta$,
meaning $\zeta$ is an endpoint of the tree.
2. (ii)
If $\zeta=\zeta(a,r)$ is Type II, then
$T_{\zeta}\mathbb{P}^{1}_{\text{an}}\cong\mathbb{P}^{1}(k)$, meaning $\zeta$
has one direction for each distinct open disk $D_{\text{an}}(b,r)$ for
$b\in\overline{D}(a,r)$, and also
$\mathbb{P}^{1}_{\text{an}}\setminus\overline{D}_{\text{an}}(a,r)$, the residue class
associated with $\infty$.
3. (iii)
If $\zeta=\zeta(a,r)$ is Type III, then the two directions are
$D_{\text{an}}(a,r)$ and
$\mathbb{P}^{1}_{\text{an}}\setminus\overline{D}_{\text{an}}(a,r)$.
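For example, over $\mathbb{\mathbb{C}}_{p}$ the residue field is $k=\overline{\mathbb{F}}_{p}$, so the Gauss point $\zeta(0,1)$ has countably infinitely many directions, one for each element of $\mathbb{P}^{1}(\overline{\mathbb{F}}_{p})$.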
### 2.12. Paths and Hyperbolic Metric
Recall from subsection 2.9 the definition of the partial order $\preceq$ on
$\mathbb{A}^{1}_{\text{an}}$; we naturally extend this to
$\mathbb{P}^{1}_{\text{an}}$ by asserting that $\zeta\preceq\infty$ for every
$\zeta\in\mathbb{A}^{1}_{\text{an}}$.
###### Proposition 2.23.
1. (i)
The relation $\preceq$ defines a partial order on
$\mathbb{P}^{1}_{\text{an}}$.
2. (ii)
All Type I and IV points are minimal with respect to $\preceq$.
3. (iii)
$\infty$ is a maximum point.
4. (iv)
For any $\zeta,\xi\in\mathbb{A}^{1}_{\text{an}}$ with $\xi\preceq\zeta$, we
have $\operatorname{diam}(\xi)\leq\operatorname{diam}(\zeta)$, with equality
if and only if $\xi=\zeta$.
5. (v)
For any two $\zeta_{0},\zeta_{1}\in\mathbb{P}^{1}_{\text{an}}$ there is a
unique least upper bound, $\zeta_{0}\vee\zeta_{1}$, defined below.
###### Definition 2.24.
Let $\zeta_{0},\zeta_{1}\in\mathbb{P}^{1}_{\text{an}}$. The _least upper
bound_ or _join_ of $\zeta_{0}$ and $\zeta_{1}$, denoted
$\zeta_{0}\vee\zeta_{1}$, is the unique element of
$\mathbb{P}^{1}_{\text{an}}$ such that:
1. (i)
$\zeta_{0},\zeta_{1}\preceq\zeta_{0}\vee\zeta_{1}$; and
2. (ii)
if $\xi\in\mathbb{P}^{1}_{\text{an}}$ and $\zeta_{0},\zeta_{1}\preceq\xi$,
then $\zeta_{0}\vee\zeta_{1}\preceq\xi$.
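As a simple example, the join of two distinct Type I points $a,b\in K$ is $a\vee b=\zeta(a,\left|a-b\right|)$, the point of the smallest closed disk containing both; more generally, $\zeta(a,r)\vee\zeta(b,s)=\zeta(a,\max\left\\{\left|a-b\right|,r,s\right\\})$.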
###### Definition 2.25.
Let $X$ be a topological space. We say $X$ is uniquely path-connected iff for
any two distinct points $x_{0},x_{1}\in X$, there is a unique subset $I\subset
X$ containing $x_{0}$ and $x_{1}$ such that $I$ is homeomorphic to the closed
real interval $[0,1]$, with the homeomorphism mapping $x_{0}$ to $0$ and
$x_{1}$ to $1$. We call $[x_{0},x_{1}]=I$ the _closed interval_ between
$x_{0}$ and $x_{1}$. We call
$(x_{0},x_{1})=[x_{0},x_{1}]\setminus\left\\{x_{0},x_{1}\right\\}$ an _open
interval_. We can similarly define the half-open intervals $[x_{0},x_{1})$,
and $(x_{0},x_{1}]$.
###### Theorem 2.24.
Let $U\subseteq\mathbb{P}^{1}_{\text{an}}$ be a connected Berkovich affinoid,
then $U$ is uniquely path-connected. Hence $\mathbb{P}^{1}_{\text{an}}$ is
locally connected. Moreover, for any $\zeta_{0},\zeta_{1}\in U$, all points on
$(\zeta_{0},\zeta_{1})$ are of Type II or III. A path $[\zeta_{0},\zeta_{1}]$
is always of the form
$[\zeta_{0},\zeta_{0}\vee\zeta_{1}]\cup[\zeta_{0}\vee\zeta_{1},\zeta_{1}]=\left\\{\zeta_{0}\preceq\xi\preceq\zeta_{0}\vee\zeta_{1}\right\\}\cup\left\\{\zeta_{1}\preceq\xi\preceq\zeta_{0}\vee\zeta_{1}\right\\}.$
###### Definition 2.26.
Let $S\subseteq\mathbb{P}^{1}_{\text{an}}$ be any subset of the Berkovich
projective line. The _convex hull_ of $S$ is the set
$\operatorname{Hull}(S)=\left\\{\xi\in\mathbb{P}^{1}_{\text{an}}:\,\exists\zeta_{0},\zeta_{1}\in
S\text{ such that }\xi\in[\zeta_{0},\zeta_{1}]\right\\}.$
###### Definition 2.27.
The set $\mathbb{H}=\mathbb{P}^{1}_{\text{an}}(K)\setminus\mathbb{P}^{1}(K)$
is the _hyperbolic space_ over $K$. We define a _hyperbolic metric_
$d_{\mathbb{H}}:\mathbb{H}\times\mathbb{H}\to[0,\infty)$ given by
$d_{\mathbb{H}}(\zeta,\xi)=2\log\left(\operatorname{diam}(\zeta\vee\xi)\right)-\log\left(\operatorname{diam}(\zeta)\right)-\log\left(\operatorname{diam}(\xi)\right).$
###### Remark 2.5.
The hyperbolic metric measures distances by the logarithm of diameter along
the lines of the poset structure. Observe that when $\zeta\preceq\xi$ then
$\zeta\vee\xi=\xi$, so
$d_{\mathbb{H}}(\zeta,\xi)=\log\left(\operatorname{diam}(\xi)\right)-\log\left(\operatorname{diam}(\zeta)\right).$
Then in general, since
$[\zeta,\xi]=[\zeta,\zeta\vee\xi]\cup[\zeta\vee\xi,\xi]$ by Theorem 2.24, it
is natural to see that
$d_{\mathbb{H}}(\zeta,\xi)=d_{\mathbb{H}}(\zeta,\zeta\vee\xi)+d_{\mathbb{H}}(\zeta\vee\xi,\xi).$
The topology given by $d_{\mathbb{H}}$ is much stronger than the weak
topology, even though they agree on intervals. For instance, a hyperbolic ball around a Type II point contains no direction in its entirety, whereas every (weak) open neighbourhood contains all but finitely many of them.
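For example, if $\left|a-b\right|>\max\left\\{r,s\right\\}$ then $\zeta(a,r)\vee\zeta(b,s)=\zeta(a,\left|a-b\right|)$, so
$d_{\mathbb{H}}(\zeta(a,r),\zeta(b,s))=2\log\left|a-b\right|-\log r-\log s.$
Note also that Type I points have diameter $0$ and so lie at infinite distance from every point of $\mathbb{H}$; this is why they are excluded from the hyperbolic space.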
### 2.13. Rational Maps
Recall that the aim of this part of the article is to generalise the notion
of _rational maps_ on the Berkovich projective line and also its dynamical
theory. When we define _skew products_ on $\mathbb{P}^{1}_{\text{an}}$ it will
not only be useful to compare them to rational maps, but we will
want to understand skew products through their associated rational maps.
The purpose of this section is to recall some of the definitions and basic
theory of _rational maps_ on the Berkovich projective line. However much of
this theory will be deferred to the following sections (and referenced) as we
generalise these results to skew products, since the reader can always recover
the original theorem as a special case of the new one.
_Notation._ Typically authors write $\phi$ for both a rational function in
$\mathbb{\mathbb{C}}_{v}(y)$ and also the induced function
$\phi:\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})\to\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$.
In this case we will instead distinguish $\phi_{*}$ as the induced function on
the Berkovich line and $\phi^{*}$ as the homomorphism induced on the function
field $\mathbb{\mathbb{C}}_{v}(y)$. By now the reader will also have noticed
the propensity to write rational functions as $f(y)$ rather than $f(x)$,
$f(t)$, or $f(z)$; whilst the latter would have been fine, we reserve the
variable $x$ for the variable in the field of Puiseux series $\mathbb{K}$,
giving a natural extension from the field $\mathbb{C}(x,y)$ to
$\mathbb{K}(y)$.
For the following definition, recall Remark 2.4 about considering a Type I
point as an honorary $[0,\infty]$-valued seminorm on $K(y)$. Indeed, for any
$a\in\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$ one can check that
$\phi_{*}(a)=\phi(a)$.
linecolor=violet,backgroundcolor=violet!25,bordercolor=violet,linecolor=violet,backgroundcolor=violet!25,bordercolor=violet,todo:
linecolor=violet,backgroundcolor=violet!25,bordercolor=violet,cite Benedetto
###### Definition 2.28.
Let $\phi(y)\in\mathbb{\mathbb{C}}_{v}(y)$ be a rational function. Then we
define the associated _rational map_ on the Berkovich projective line by
$\displaystyle\phi_{*}:\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$
$\displaystyle\longrightarrow\mathbb{P}^{1}_{\text{an}}(\mathbb{\mathbb{C}}_{v})$
$\displaystyle\zeta$ $\displaystyle\longmapsto\phi_{*}(\zeta)$
$\displaystyle\text{where }\left\|f(y)\right\|_{\phi_{*}(\zeta)}$
$\displaystyle=\left\|f\circ\phi(y)\right\|_{\zeta}.$
Importantly, this function is well defined because
$\left\|\cdot\right\|_{\phi_{*}(\zeta)}$ is a multiplicative seminorm on $K[y]$ which extends
the norm on $K$. To see the latter, consider
$\left\|a\right\|_{\phi_{*}(\zeta)}$ for $a\in K$. Applying $\phi$ to a
constant does nothing, i.e. $a\circ\phi(y)=a$, and so
$\left\|a\right\|_{\phi_{*}(\zeta)}=\left\|a\circ\phi\right\|_{\zeta}=\left\|a\right\|_{\zeta}=\left|a\right|$.
This will be the main challenge in making a more general definition later.
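As a simple sanity check, take $\phi(y)=y^{2}$. Then for any $b\in K$,
$\left\|y-b\right\|_{\phi_{*}(\zeta(0,r))}=\left\|y^{2}-b\right\|_{\zeta(0,r)}=\max\left\\{\left|b\right|,r^{2}\right\\}=\left\|y-b\right\|_{\zeta(0,r^{2})},$
which identifies $\phi_{*}(\zeta(0,r))=\zeta(0,r^{2})$ by the last part of subsection 2.9.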
###### Theorem 2.25.
Let $\phi\in\mathbb{\mathbb{C}}_{v}(y)$ be a rational function. Then the
function $\phi_{*}:\mathbb{P}^{1}_{\text{an}}\to\mathbb{P}^{1}_{\text{an}}$ of
Definition 2.28 is the unique continuous extension of the rational function
$\phi:\mathbb{P}^{1}\to\mathbb{P}^{1}$ to $\mathbb{P}^{1}_{\text{an}}$.
###### Proposition 2.26.
Let $\phi,\psi\in\mathbb{\mathbb{C}}_{v}(y)$ be rational functions, then
$(\psi\circ\phi)_{*}=\psi_{*}\circ\phi_{*}$.
###### Theorem 2.27.
Let $\phi\in\mathbb{\mathbb{C}}_{v}(y)$ be a non-constant rational function.
1. (i)
Suppose $D(a,r)$ contains no poles of $\phi$ and $\phi(D(a,r))=D(b,s)$, then
$\phi_{*}(\zeta(a,r))=\zeta(b,s)$.
2. (ii)
$\phi_{*}$ preserves the types of points.
3. (iii)
$\phi_{*}$ is an open mapping.
### 2.14. Reduction
As suggested by Definition 2.23 and subsection 2.11, the most natural way to
think about the directions at a Type II point $\zeta$ is by identifying each
one with a residue in $\mathbb{P}^{1}(k)$. In this subsection we shall discuss
reduction of elements and maps from $K$ or $K(y)$ to $k$ or $k(y)$, and what
we learn about local degrees. This will generalise the content of [Ben19,
§7.5].
Recall that the residue field of $K$ is the quotient field
$k=\mathcal{O}_{K}/\mathcal{M}_{K}$. The quotient map $\mathcal{O}_{K}\to k$
is called the _reduction map_. We denote the reduction of
$a\in\mathcal{O}_{K}$ by $\overline{a}$. This has a more useful extension to
the projective line $\mathbb{P}^{1}(K)$ since every element can be written as
$[a_{0}:a_{1}]$ with $a_{0},a_{1}\in\mathcal{O}_{K}$, not both in $\mathcal{M}_{K}$.
$\displaystyle\mathbb{P}^{1}(K)$
$\displaystyle\longrightarrow\mathbb{P}^{1}(k)$ $\displaystyle[a_{0}:a_{1}]$
$\displaystyle\longmapsto[\overline{a}_{0}:\overline{a}_{1}]$
Furthermore, this induces a reduction map $\mathcal{O}_{K}[y]\to k[y]$ by
reducing each coefficient in the polynomial. Reduction of rational functions
of $K$ is a little troublesome. One can write any $f\in\mathbb{P}^{1}(K(y))$
as a fraction $f=g/h$ or ratio $[g:h]$ of polynomials in $\mathcal{O}_{K}[y]$,
with at least one coefficient in $g$ or $h$ having absolute value $1$. We then
define the reduction $\overline{f}\in\mathbb{P}^{1}(k(y))$ as follows:
$\displaystyle\mathbb{P}^{1}(K(y))$
$\displaystyle\longrightarrow\mathbb{P}^{1}(k(y))$ $\displaystyle f=[g:h]$
$\displaystyle\longmapsto[\overline{g}:\overline{h}]=\overline{f}$
_Warning_: this definition is sensitive to the choice of $g$ and $h$: if all the coefficients of both $g$ and $h$ were allowed to lie in $\mathcal{M}_{K}$, then the reduction would be the ill-defined $[0:0]$.
Unfortunately, reduction can change the basic properties of a polynomial.
###### Example 2.5.
Three examples of reduction for the $x$-adic absolute value on $\mathbb{C}(x)$, where reduction amounts to setting $x=0$.
1. (i)
Let $g(x,y)\in\mathbb{C}(x)[y]$ be defined by $g=xy^{2}+y-1$. Then
$\overline{g}=y-1$ so $\deg(g)\neq\deg(\overline{g})$.
2. (ii)
Let $g(x,y)=(y-x)y$, then $\overline{g}=y^{2}$. We see that $g$ had two
distinct roots, but its reduction has one despite having the same degree.
3. (iii)
Let $f=(y-x)/y$ then $\overline{f}=1$.
A rational function $\phi(y)\in K(y)$ induces a rational map.
$\displaystyle\overline{\phi}:\mathbb{P}^{1}(k)$
$\displaystyle\longrightarrow\mathbb{P}^{1}(k)$ $\displaystyle\overline{a}$
$\displaystyle\longmapsto\overline{\phi}(\overline{a})$
###### Definition 2.29.
Suppose $K$ is a non-Archimedean field which might not be algebraically
closed, and let $\phi\in K(y)$. We say that $\phi(y)=\frac{g(y)}{h(y)}$ has
_explicit good reduction_ iff $\deg(\overline{\phi})=\deg(\phi)$, otherwise
$\phi$ has _bad reduction_. If there is a fractional linear transformation
$\eta\in\operatorname{PGL}(2,K)$ such that $\eta\circ\phi\circ\eta^{-1}$ has
explicit good reduction then we say $\phi$ has _good reduction_. If instead
there is such an $\eta\in\operatorname{PGL}(2,\overline{K})$ then $\phi$ has
_potentially good reduction_.
If $\phi$ has explicit good reduction then
$\overline{a}\mapsto\overline{\phi(a)}$ is well defined and equal to
$\overline{\phi}(\overline{a})$. Conversely, if the degree drops then we can
find $a,b\in K$ with $\phi(a)=0$, $\phi(b)=\infty$ and the same reduction
$\overline{a}=\overline{b}$, thus
$\overline{0}=\overline{\phi(a)}\neq\overline{\phi(b)}=\overline{\infty}$;
moreover both may be distinct from $\overline{\phi}(\overline{a})$.
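For example, over $\mathbb{Q}_{p}$ the map $\phi(y)=py^{2}=[py^{2}:1]$ reduces to $\overline{\phi}=[0:1]$, so $\deg(\overline{\phi})=0<2=\deg(\phi)$ and $\phi$ does not have explicit good reduction; however, conjugating by $\eta(y)=py$ gives $\eta\circ\phi\circ\eta^{-1}(y)=y^{2}$, so $\phi$ has good reduction.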
For a much more thorough discussion of reduction, see [Ben19, §4.3].
## 3\. Skew Products on the Berkovich Projective Line
The aim of this section is to define a _skew product_ on the Berkovich
projective line and compare it to a rational map. Whilst these maps have new
and unusual quirks and are a strict generalisation of Berkovich rational maps,
most of the properties we are used to will be recovered.
###### Definition 3.1.
Let $\Psi$ be an endomorphism of $K(y)$ extending an automorphism of $K$, i.e.
the following diagram commutes:
$\begin{array}{ccc}{K(y)}&\xrightarrow{\ \Psi\ }&{K(y)}\\\
\big\uparrow&&\big\uparrow\\\
{K}&\xrightarrow{\ \Psi_{1}\ }&{K}\end{array}$
In this case we will call $\Psi:K(y)\to K(y)$ a _skew endomorphism_ of $K(y)$.
We will typically denote the restriction $\left.\Psi\right|_{K}$ by
$\Psi_{1}$.
### 3.1. Motivation
Often, we shall think of $\Psi$ coming from a rational map of ruled surfaces.
We will give detail on this construction in a sequel, but describe a special
case of the situation now to give examples and motivation. Classically, a skew
product (in analysis and geometry) is one of the form
$\phi(x,y)=(\phi_{1}(x),\phi_{2}(x,y))$
defined on some product space $A\times B$. Let us focus on the simple case of
$\mathbb{P}^{1}\times\mathbb{P}^{1}$ over the field $k$. One may think of the
following diagram commuting with a first projection map $h(x,y)=x$; this will
help to generalise the concept later.
$\begin{array}{ccc}{\mathbb{P}^{1}\times\mathbb{P}^{1}}&\xrightarrow{\ \phi\ }&{\mathbb{P}^{1}\times\mathbb{P}^{1}}\\\
\big\downarrow{\scriptstyle h}&&\big\downarrow{\scriptstyle h}\\\
{\mathbb{P}^{1}}&\xrightarrow{\ \phi_{1}\ }&{\mathbb{P}^{1}}\end{array}$
The information given by $\phi$ is equivalent to a $k$-algebra homomorphism of
function fields.
$\displaystyle\phi^{*}:k(x,y)$ $\displaystyle\longrightarrow k(x,y)$
$\displaystyle x$ $\displaystyle\longmapsto\phi_{1}(x)$ $\displaystyle y$
$\displaystyle\longmapsto\phi_{2}(x,y)$
After changing coordinates we may assume that $\phi_{1}(0)=0$. Looking in a
neighbourhood of $x=0$, we obtain a $k$-algebra map
$\phi_{1}^{*}:k[[x]]\to k[[x]]$, which extends to one of the local function
field, $\phi^{*}:k((x))(y)\to k((x))(y)$. In more algebraic terminology, we
took the completion of the local ring $k[x]_{(x)}$.
$\begin{array}{ccc}{k((x))(y)}&\xrightarrow{\ \phi^{*}\ }&{k((x))(y)}\\\
\big\uparrow{\scriptstyle h^{*}}&&\big\uparrow{\scriptstyle h^{*}}\\\
{k((x))}&\xrightarrow{\ \phi_{1}^{*}\ }&{k((x))}\end{array}$
After taking the algebraic closure of $k((x))$ to obtain the Puiseux series
$\mathbb{K}(k)$, this map can be extended to a $k$-algebra endomorphism
$\phi^{*}:\mathbb{K}(y)\to\mathbb{K}(y)$. We can write
$\phi_{1}^{*}(x)=\phi_{1}(x)\in k[[x]]$ with $\phi_{1}(x)=\lambda
x^{n}+\mathcal{O}(x^{n+1})$ and $\lambda\in k^{\times}$; this then extends to
an ‘equivariant’ skew endomorphism over $\mathbb{K}$. With $\phi_{2}\in
k((x))(y)$ we call such a map a _$k$ -rational skew endomorphism_.
If $\phi_{1}(x)$ were the identity, then $\phi$ would represent the rational
map $y\mapsto\phi_{2}(y)$ over $k((x))$ and naturally induce a Berkovich
rational map on $\mathbb{P}^{1}_{\text{an}}(\mathbb{K})$. Unfortunately,
$\phi_{1}$ is rarely trivial, and $\phi$ will not translate to a Berkovich
rational map. A different mapping on the Berkovich projective line is needed -
_the skew product._
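As a concrete example to keep in mind, take $\phi(x,y)=(x^{2},xy)$ over $k=\mathbb{C}$: here $\phi_{1}(x)=x^{2}$, so $n=2$, and the induced map $a(x)\mapsto a(x^{2})$ on $\mathbb{K}$ doubles valuations, giving $\left|\phi_{1}^{*}(a)\right|=\left|a\right|^{2}$; in the terminology of the next subsections this is an equivariant skew endomorphism with scale factor $q=\frac{1}{2}$.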
### 3.2. The Problem
If we had that $\Psi_{1}=\operatorname{id}$ then $\Psi$ would be a
$K$_-algebra endomorphism_ of $K(y)$, indeed it would be the _rational map_
$y\mapsto\Psi(y)$ over $\mathbb{P}^{1}(K)$. We could then define a Berkovich
rational map
$\Psi_{*}:\mathbb{P}^{1}_{\text{an}}\to\mathbb{P}^{1}_{\text{an}}$, as in
subsection 2.13, by
$\left\|f\right\|_{\Psi_{*}(\zeta)}=\left\|\Psi(f)\right\|_{\zeta}.$
The crucial calculation for this map to be well defined was that
$\Psi_{*}(\zeta)$ preserves the norm on $K$, meaning that for every $a\in K$,
$\left\|a\right\|_{\Psi_{*}(\zeta)}=\left|a\right|$. We might naïvely try this
definition with an arbitrary skew endomorphism; let $a\in K$, then by the
expected definition we have
$\left\|a\right\|_{\Psi_{*}(\zeta)}=\left\|\Psi(a)\right\|_{\zeta}=\left|\Psi(a)\right|=\left|\Psi_{1}(a)\right|.$
In general, unlike the rational case, $\Psi_{1}$ is an arbitrary field
automorphism of $K$ and could do anything to the absolute value. In the above
we need $\left|\Psi_{1}(a)\right|=\left|a\right|$. Requiring this is
reasonable, but for the definition of skew product below, we only ask that
$\left|\Psi_{1}(a)\right|=\left|a\right|^{\frac{1}{q}}$ uniformly for some
$q>0$. The special cases where $1/q\in\mathbb{N}$, especially $q=1$, will be
of great interest in applications of this theory.
The construction of an arbitrary algebraically defined map on a Berkovich
space is not new; see for instance [FJ04]. Specifically, one can always
normalise $\left\|\Psi(f)\right\|_{\zeta}$ as necessary depending on both
$\Psi$ and $f$, to ensure a well-defined function. In any case, this will be a
geometrically natural definition because, roughly speaking, the corresponding
prime ideal of $\zeta$ i.e.
$p_{\zeta}=\left\\{f:\left\|f\right\|_{\zeta}<1\right\\}$ is mapped to the
corresponding prime of its image
$\Psi^{-1}(p_{\zeta})=\left\\{g:\Psi(g)\in
p_{\zeta}\right\\}=\left\\{g:\left\|\Psi(g)\right\|_{\zeta}<1\right\\}=\left\\{g:\left\|g\right\|_{\Psi_{*}(\zeta)}<1\right\\}.$
Our construction is general enough to be applicable to a broader class of
examples in complex dynamics, but the uniform normalisation factor allows for
the dynamical behaviour of $\Psi_{*}$ to be better understood.
### 3.3. Skew Products
###### Definition 3.2 (Skew Product).
Suppose that $\Psi:K(y)\to K(y)$ is a skew endomorphism of $K(y)$ and there is
a $q$ such that
$\left|\Psi(a)\right|=\left|\Psi_{1}(a)\right|=\left|a\right|^{\frac{1}{q}}$
for every $a\in K$. Then we say $\Psi$ is _equivariant_ with _scale factor_
$q$. Given such a $\Psi$, we define $\Psi_{*}$, a _skew product over $K$_, as
follows.
$\displaystyle\Psi_{*}:\mathbb{P}^{1}_{\text{an}}(K)$
$\displaystyle\longrightarrow\mathbb{P}^{1}_{\text{an}}(K)$
$\displaystyle\zeta$ $\displaystyle\longmapsto\Psi_{*}(\zeta)$
$\displaystyle\text{where }\left\|f\right\|_{\Psi_{*}(\zeta)}$
$\displaystyle=\left\|\Psi(f)\right\|_{\zeta}^{q}$
If $q=1$ then we call $\Psi_{*}$ a _simple_ skew product. Otherwise, if $q<1$
we say it is _superattracting_ , and if $q>1$ we may say it is
_superrepelling_.
Consider the case deriving from a skew product
$\phi(x,y)=(\phi_{1}(x),\phi_{2}(x,y))$ on a surface, with
$\phi_{1}(x)=\lambda x^{n}+\mathcal{O}(x^{n+1})$, and recall the above
discussion that we have a $k$-rational skew endomorphism $\phi^{*}$. Then the
induced skew product on the Berkovich projective line,
$\phi_{*}:\mathbb{P}^{1}_{\text{an}}(\mathbb{K})\to\mathbb{P}^{1}_{\text{an}}(\mathbb{K})$
will be called a _$k$ -rational skew product_ and has scale factor
$q=\frac{1}{n}$. In particular, if $n=1$ then $\phi_{*}$ is a simple
$k$-rational skew product; this name takes after the fact that $x=0$ is a simple
zero of $\phi_{1}(x)$. Furthermore, note that $0$ is a superattracting fixed
point of $\phi_{1}(x)$ when $n>1$, which is why we call $\phi_{*}$
‘superattracting’. A deeper discussion of $k$-rational skew products will be
provided in the sequel, but we will continue to use them for examples and
explain basic constructions.
###### Theorem 3.1.
Suppose that $\Psi$ is an equivariant skew endomorphism. Then the skew product
$\Psi_{*}:\mathbb{P}^{1}_{\text{an}}\to\mathbb{P}^{1}_{\text{an}}$ is a well
defined map on the Berkovich projective line.
###### Proof.
It is clear that $\Psi_{*}(\zeta)$ is a multiplicative seminorm, because $\Psi$
is a ring homomorphism and $t\mapsto t^{q}$ is multiplicative and order
preserving on $[0,\infty)$. We need to check that $\Psi_{*}(\zeta)$ extends
the norm on $K$. Indeed, given $a\in K$,
$\left\|a\right\|_{\Psi_{*}(\zeta)}=\left\|\Psi(a)\right\|_{\zeta}^{q}=\left|\Psi(a)\right|^{q}=\left(\left|a\right|^{\frac{1}{q}}\right)^{q}=\left|a\right|.$
∎
###### Proposition 3.2.
If $\Phi,\Psi$ are equivariant skew endomorphisms of $K(y)$ then
$(\Phi\circ\Psi)_{*}=\Psi_{*}\circ\Phi_{*}$ i.e. $(\cdot)_{*}$ is a
contravariant functor.
###### Proof.
$\left\|f\right\|_{\Psi_{*}\circ\Phi_{*}(\zeta)}=\left\|\Psi(f)\right\|_{\Phi_{*}(\zeta)}^{q_{\Psi}}=\left\|\Phi(\Psi(f))\right\|_{\zeta}^{q_{\Phi}q_{\Psi}}=\left\|f\right\|_{(\Phi\circ\Psi)_{*}(\zeta)},$
where $q_{\Phi},q_{\Psi}$ are the scale factors; note that $\Phi\circ\Psi$ is equivariant with scale factor $q_{\Phi}q_{\Psi}$.
∎
The following definition and theorem are fundamental to working with skew
products - it says we can always decompose a skew product into a field
automorphism and a rational map. On $\mathbb{P}^{1}_{\text{an}}$ this will
help because the former induces a bijection with some nice geometric
properties and the latter induces a Berkovich rational map which is well
understood.
###### Definition 3.3.
Let $\Psi$ be a skew endomorphism. We define the following two homomorphisms.
Firstly $\Psi_{1}=\left.\Psi\right|_{K}$ but extended trivially to $K(y)$, and
secondly with $\Psi_{2}$ we distill the action of $\Psi$ on $y$.
$\begin{aligned}\Psi_{1}:K(y)&\longrightarrow K(y)&\qquad\Psi_{2}:K(y)&\longrightarrow K(y)\\\
a&\longmapsto\Psi(a)\quad\forall a\in K&a&\longmapsto a\quad\forall a\in K\\\
y&\longmapsto y&y&\longmapsto\Psi(y)\end{aligned}$
###### Theorem 3.3.
Let $\Psi$ be an equivariant skew endomorphism. Then
* •
$\Psi=\Psi_{2}\circ\Psi_{1}$;
* •
$\Psi_{*}=\Psi_{1*}\circ\Psi_{2*}$;
* •
$\Psi_{2*}$ is a rational map on $\mathbb{P}^{1}_{\text{an}}$.
###### Proof.
Clearly $\Psi_{1},\Psi_{2}$ are ring homomorphisms, so it is enough to show
$\Psi=\Psi_{2}\circ\Psi_{1}$ on generators of $K(y)$, or more simply on an
arbitrary $a\in K$ and on $y$.
$\Psi_{2}\circ\Psi_{1}(y)=\Psi_{2}(y)=\Psi(y)$
The last equality is by the definition of $\Psi_{2}$.
$\Psi_{2}\circ\Psi_{1}(a)=\Psi_{2}(\Psi(a))=\Psi(a)$
The last equality is because $\Psi_{2}(b)=b$ for any $b\in K$, such as
$b=\Psi(a)$. The remaining claims now follow: $\Psi_{*}=(\Psi_{2}\circ\Psi_{1})_{*}=\Psi_{1*}\circ\Psi_{2*}$ by Proposition 3.2, and $\Psi_{2*}$ is a Berkovich rational map because $\Psi_{2}$ is a $K$-algebra endomorphism of $K(y)$, as discussed in subsection 3.2. ∎
Our notation is inspired by the case of a $k$-rational skew product.
$\displaystyle\phi:\mathbb{P}^{1}\times\mathbb{P}^{1}$
$\displaystyle\dashrightarrow\mathbb{P}^{1}\times\mathbb{P}^{1}$
$\displaystyle(x,y)$ $\displaystyle\longmapsto(\phi_{1}(x),\phi_{2}(x,y))$
In this case $\Psi=\phi^{*}$ and we consider the induced skew product. From
the original geometric perspective, $\phi_{1}(x)=\phi^{*}(x)$ and
$\phi_{2}(x,y)=\phi^{*}(y)$. Then the decomposition of $\Psi=\phi^{*}$ is very
natural since it separates into its actions on $x$ and $y$:
$\begin{aligned}\Psi_{1}:k(x,y)&\longrightarrow k(x,y)&\qquad\Psi_{2}:k(x,y)&\longrightarrow k(x,y)\\\
x&\longmapsto\phi_{1}(x)&x&\longmapsto x\\\
y&\longmapsto y&y&\longmapsto\phi_{2}(x,y)\end{aligned}$
We write $\phi_{1}^{*}=\Psi_{1}$ and $\phi_{2}^{*}=\Psi_{2}$. One may verify
that $\phi^{*}=\phi_{2}^{*}\circ\phi_{1}^{*}$ and
$\phi_{*}=\phi_{1*}\circ\phi_{2*}$, but antecedent to Theorem 3.3 is the (set
theoretic) composition
$\phi(x,y)=(\phi_{1}(x),\phi_{2}(x,y))=(\phi_{1}(x),y)\circ(x,\phi_{2}(x,y)).$
From now on, we may denote an equivariant skew endomorphism by $\phi^{*}$ even
if it is not derived from a geometric skew product on a surface. To be clear,
to say $\phi_{*}$ is a skew product still will mean it derives from an
equivariant skew endomorphism $\phi^{*}$, but it may not be $k$-rational.
Furthermore, we may write $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ as a cue to the
splitting from Definition 3.3.
###### Remark 3.1.
We can see now that not only does a skew product generalise the definition of
a rational map, but every skew product is a composition of a rational map and
the action of a field automorphism. This will be most useful for our
understanding of how a skew product acts in one iterate. It will be much
harder to understand multiple iterations (its dynamics), however the
decomposition is still valuable.
### 3.4. Properties of Skew Products
The following theorem says that, given $\phi_{*}=\phi_{1*}\circ\phi_{2*}$, the
field automorphism part $\phi_{1*}$ is a geometrically nice map on
$\mathbb{P}^{1}_{\text{an}}$. However caution is warranted: a non-trivial
field automorphism will induce a highly non-analytic map.
###### Theorem 3.4.
Let $\Psi$ be an equivariant automorphism of $K$ extended trivially to $K(y)$
with scale factor $q$, i.e. $\Psi_{2}=\operatorname{id}$. Then the induced
skew product
$\Psi_{*}:\mathbb{P}^{1}_{\text{an}}\to\mathbb{P}^{1}_{\text{an}}$
1. (i)
is a homeomorphism on $\mathbb{P}^{1}_{\text{an}}(K)$;
2. (ii)
scales hyperbolic distances by a factor of $q$;
3. (iii)
is the unique continuous extension of $\Psi^{-1}$ on
$\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$;
4. (iv)
maps Berkovich points to those of the same type; and
5. (v)
is order preserving on the poset $(\mathbb{P}^{1}_{\text{an}},\preceq)$.
In particular:
$\displaystyle\Psi_{*}(a)$ $\displaystyle=\Psi^{-1}(a)$
$\displaystyle\Psi_{*}(D(a,r))$ $\displaystyle=D(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}\left(\overline{D}(a,r)\right)$
$\displaystyle=\overline{D}(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}(\zeta(a,r))$ $\displaystyle=\zeta(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}(D_{\text{an}}(a,r))$
$\displaystyle=D_{\text{an}}(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}\left(\overline{D}_{\text{an}}(a,r)\right)$
$\displaystyle=\overline{D}_{\text{an}}(\Psi^{-1}(a),r^{q})$
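To illustrate, on the Puiseux series $\mathbb{K}$ the substitution $\Psi_{1}:x^{1/m}\mapsto x^{2/m}$ is a field automorphism with $\left|\Psi_{1}(a)\right|=\left|a\right|^{2}$, hence scale factor $q=\frac{1}{2}$; the induced map sends $\zeta(\gamma(x),r)$ to $\zeta(\gamma(x^{1/2}),r^{1/2})$ and halves all hyperbolic distances.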
###### Remark 3.2.
The fact that this skew product $\phi_{1*}$ extends the _inverse_ of the
homomorphism $\phi_{1}^{*}$ on $\mathbb{P}^{1}(K)$ is somewhat more natural in
the geometric setting over the Puiseux series. Here we see it as a ‘pre-
composition’ of functions of $x$, where
$\gamma(x)\mapsto\gamma(\phi_{1}^{-1}(x))$. This is very different from how
homomorphisms of $K(y)$ acting only on $y$ (‘rational maps’) generate a post-
composition-like function on $\mathbb{P}^{1}(K)$.
Consider a germ of a curve through $x=0$, namely $x\mapsto(x,\gamma(x))$. Then
$\phi=(\phi_{1},\phi_{2})$ applied to this gives
$x\mapsto(\phi_{1}(x),\phi_{2}(x,\gamma(x)))$. To rewrite this in the form
$(x,\tilde{\gamma}(x))$ we must _precompose_ with $\phi_{1}^{-1}(x)$ to get
$x\mapsto(x,\phi_{2}(\phi_{1}^{-1}(x),\gamma(\phi_{1}^{-1}(x)))).$
###### Proposition 3.5.
Let $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ be a simple $k$-rational skew product.
Then $\phi_{*}:\mathbb{P}^{1}(\mathbb{K})\to\mathbb{P}^{1}(\mathbb{K})$ acts
as follows
$\displaystyle\phi_{*}:\mathbb{P}^{1}(\mathbb{K})$
$\displaystyle\longrightarrow\mathbb{P}^{1}(\mathbb{K})$ $\displaystyle a(x)$
$\displaystyle\longmapsto\phi_{2}(\phi_{1}^{-1}(x),a(\phi_{1}^{-1}(x)))=\phi_{2}\circ(\operatorname{id}\times
a)\circ\phi_{1}^{-1}(x)$
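For instance, for the simple skew product induced by $\phi(x,y)=(2x,xy)$ over $k=\mathbb{C}$, we have $\phi_{1}^{-1}(x)=x/2$, so a classical point $a(x)\in\mathbb{P}^{1}(\mathbb{K})$ is sent to $\phi_{*}(a)(x)=\frac{x}{2}\,a\big(\frac{x}{2}\big)$.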
###### Proof of Theorem 3.4.
Since $\Psi$ is an isomorphism,
$\left\|f\right\|_{(\Psi^{-1})_{*}(\zeta)}=\left\|\Psi^{-1}(f)\right\|_{\zeta}$
provides an inverse to $\Psi_{*}$.
First we prove that the restriction of $\Psi_{*}$ to classical points is equal
to $\Psi^{-1}$ on $\mathbb{P}^{1}(K)$. Let $\zeta=a\in K$ be Type I, then
$\left\|y-b\right\|_{\Psi_{*}(\zeta)}=\left\|y-\Psi(b)\right\|_{\zeta}^{q}=\left|a-\Psi(b)\right|^{q}=\left|\Psi(\Psi^{-1}(a)-b)\right|^{q}$
$=\left(\left|\Psi^{-1}(a)-b\right|^{\frac{1}{q}}\right)^{q}=\left|\Psi^{-1}(a)-b\right|=\left\|y-b\right\|_{\Psi^{-1}(a)}.$
It is a similar exercise in the definitions to prove that
$\Psi_{*}(\infty)=\infty$.
Now observe that since $|\Psi_{*}(a)-\Psi_{*}(b)|=|\Psi^{-1}(a-b)|=|a-b|^{q}$
we have:
$\displaystyle\Psi_{*}(D(a,r))$ $\displaystyle=D(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}(\overline{D}(a,r))$
$\displaystyle=\overline{D}(\Psi^{-1}(a),r^{q})$
It follows that
$\Psi_{*}(\zeta(a,r))=\zeta(\Psi^{-1}(a),r^{q})$
since for $f\in K[y]$
$\left\|f\right\|_{\Psi_{*}(\zeta(a,r))}=\left\|\Psi(f)\right\|_{\zeta(a,r)}^{q}=\Big(\sup_{b\in D(a,r)}\left\|\Psi(f)\right\|_{b}\Big)^{q}=\sup_{b\in D(a,r)}\left\|f\right\|_{\Psi_{*}(b)}=\sup_{b^{\prime}\in D(\Psi^{-1}(a),r^{q})}\left\|f\right\|_{b^{\prime}}=\left\|f\right\|_{\zeta(\Psi^{-1}(a),r^{q})}.$
Similarly, since
$\left\|y-\Psi^{-1}(a)\right\|_{\Psi_{*}(\zeta)}=\left\|y-a\right\|_{\zeta}^{q}$
we have
$\displaystyle\Psi_{*}(D_{\text{an}}(a,r))$
$\displaystyle=D_{\text{an}}(\Psi^{-1}(a),r^{q})$
$\displaystyle\Psi_{*}(\overline{D}_{\text{an}}(a,r))$
$\displaystyle=\overline{D}_{\text{an}}(\Psi^{-1}(a),r^{q})$
This determines the images of Type II/III points. Note that
$\left|a\right|=r\iff\left|\Psi^{-1}(a)\right|=r^{q}$, hence
$r\in\left|K^{\times}\right|\iff r^{q}\in\left|K^{\times}\right|$. Thus Type
II and III points are individually preserved.
We have also implicitly shown that for disks $D$ and $E$ we have
$D\subseteq E\iff\Psi_{*}(D)\subseteq\Psi_{*}(E),$
and, similarly for their Berkovich versions $D_{\text{an}}$ and $E_{\text{an}}$,
this is equivalent to
$D_{\text{an}}\subseteq
E_{\text{an}}\iff\Psi_{*}(D_{\text{an}})\subseteq\Psi_{*}(E_{\text{an}}).$
This shows that a nested sequence of disks $D_{1}\supsetneq D_{2}\supsetneq
D_{3}\supsetneq\cdots$ remains nested; it is also clear that this has empty
intersection if and only if the sequence of images do. Therefore Type IV
points are preserved since the images of Type IV points can be described by
the image of such a sequence of disks.
To see that $\Psi_{*}$ is order preserving, recall that $\zeta\prec\xi$
implies that $\xi=\zeta(a,R)$ or $\xi=\infty$. Since $\infty=\Psi_{*}(\infty)$
is a maximum, the latter case is trivial for all $\zeta$. In the case that
$\xi=\zeta(a,R)$, we know that $\zeta\in\overline{D}_{\text{an}}(a,R)$.
$\Psi_{*}(\zeta)\in\Psi_{*}(\overline{D}_{\text{an}}(a,R))=\overline{D}_{\text{an}}(\Psi^{-1}(a),R^{q}),$
therefore
$\zeta\preceq\zeta(a,R)\iff\Psi_{*}(\zeta)\preceq\Psi_{*}(\zeta(a,R)).$
We have just shown that $\Psi_{*}$ preserves types, the ordering $\preceq$,
and also that
$\operatorname{diam}(\Psi_{*}(\zeta))=\operatorname{diam}(\zeta)^{q}.$
Since the basis of the topology is given by open affinoids, and these are
finite intersections of disks, to show continuity we only need to look at
preimages of disks; likewise, the images of disks computed above show that
$\Psi_{*}$ is an open map. Since $\Psi_{*}$ has an inverse given by $(\Psi^{-1})_{*}$, we also have
$\Psi_{*}^{-1}(D_{\text{an}}(\gamma,r))=D_{\text{an}}(\Psi(\gamma),r^{\frac{1}{q}}),$
$\Psi_{*}^{-1}(\bar{D}_{\text{an}}(\gamma,r))=\bar{D}_{\text{an}}(\Psi(\gamma),r^{\frac{1}{q}}),$
proving the continuity. Thus $\Psi_{*}$ is a homeomorphism.
Since $\Psi_{*}$ is order preserving, to show the scaling of the hyperbolic
metric we begin with pairs of comparable points. Suppose that
$\zeta\prec\xi$ have diameters $r$ and $R$ respectively. Then
$\Psi_{*}(\zeta)\prec\Psi_{*}(\xi)$ and hence
$d_{\mathbb{H}}(\Psi_{*}(\zeta),\Psi_{*}(\xi))=\log(R^{q})-\log(r^{q})=q\left(\log(R)-\log(r)\right)=q\,d_{\mathbb{H}}(\zeta,\xi).$
Otherwise, we have $\zeta$ and $\xi$ incomparable with diameters $r,s$ and join
$\zeta(a,R)$. By $\Psi_{*}$ and $\Psi_{*}^{-1}$ preserving order one can check
that $\Psi_{*}(\zeta\vee\xi)=\Psi_{*}(\zeta)\vee\Psi_{*}(\xi)$. Therefore the
shortest path from $\Psi_{*}(\zeta)$ to $\Psi_{*}(\xi)$ is through
$\zeta(\Psi^{-1}(a),R^{q})$, which is the homeomorphic image of $[\zeta,\xi]$.
Since $\operatorname{diam}(\Psi_{*}(\zeta))=\operatorname{diam}(\zeta)^{q}$,
the length of this path will be
$d_{\mathbb{H}}(\Psi_{*}(\zeta),\Psi_{*}(\xi))=2\log(R^{q})-\log(r^{q})-\log(s^{q})=q\,d_{\mathbb{H}}(\zeta,\xi).$
∎
###### Theorem 3.6.
Let $\Psi$ be an equivariant automorphism of $K$ extended trivially to $K(y)$
with scale factor $q$. For any Berkovich affinoid
$W\subseteq\mathbb{P}^{1}_{\text{an}}$, let
$W_{\text{I}}=W\cap\mathbb{P}^{1}(K)$. Then $\Psi_{*}(W)$ is the Berkovich
affinoid of the same type (if any) corresponding to
$\Psi_{*}(W_{\text{I}})=\Psi^{-1}(W_{\text{I}})$, and $\Psi_{*}^{-1}(W)$ is
the Berkovich affinoid of the same type (if any) corresponding to
$\Psi_{*}^{-1}(W_{\text{I}})=\Psi(W_{\text{I}})$. Boundaries are mapped
bijectively to boundaries.
###### Proof.
By Theorem 3.4, $\Psi_{*}$ bijectively maps $D(a,r)$ to $D(\Psi^{-1}(a),r^{q})$ if
and only if it maps $D_{\text{an}}(a,r)$ to $D_{\text{an}}(\Psi^{-1}(a),r^{q})$.
The same goes for closed disks on the affine line, and similarly for disks in
the projective line. Since $\Psi_{*}$ is a bijection, an affinoid
$W=D_{1}\cap\cdots\cap D_{n}$ is mapped to
$\Psi_{*}(D_{1})\cap\cdots\cap\Psi_{*}(D_{n})$ etc. In fact, $\Psi_{*}(W)$ has
the same number of ‘holes’ as $W$. ∎
###### Theorem 3.7.
Let $\phi_{*}$ be a non-constant skew product over $K$. Then $\phi_{*}$
1. (i)
is a continuous function;
2. (ii)
is an open mapping;
3. (iii)
is the unique continuous extension of $(\phi_{1}^{*})^{-1}\circ\phi_{2}$ on
$\mathbb{P}^{1}(K)\subset\mathbb{P}^{1}_{\text{an}}(K)$; and
4. (iv)
preserves the types of each Berkovich point.
###### Proof.
Since $\phi_{*}=\phi_{1*}\circ\phi_{2*}$, this follows from the same
results for $\phi_{1*}$ and $\phi_{2*}$, namely Theorem 3.4, Theorem 3.6, and
[Ben19, Theorem 7.4, Corollary 7.9, Corollary 7.16]. ∎
The following theorem generalises [Ben19, Theorem 7.8] for rational maps to
skew products.
###### Theorem 3.8.
Let $\phi_{*}$ be a non-constant skew product over $K$, let
$W\subseteq\mathbb{P}^{1}_{\text{an}}(K)$ be a connected Berkovich affinoid,
and let $W_{I}=W\cap\mathbb{P}^{1}(K)$ be the corresponding connected affinoid
in $\mathbb{P}^{1}(K)$. Then $\phi_{*}(W)$ is the Berkovich connected affinoid
of the same type (if any) corresponding to $\phi_{*}(W_{I})$, and
$\phi_{*}^{-1}(W)$ is the Berkovich affinoid of the same type (if any)
corresponding to $\phi_{*}^{-1}(W_{I})$. Moreover, the following hold.
1. (a)
$\partial(\phi_{*}(W))\subseteq\phi_{*}(\partial W)$.
2. (b)
Each of the connected components $V_{1},\dots,V_{m}$ of $\phi_{*}^{-1}(W)$ is
a connected Berkovich affinoid mapping onto $W$.
3. (c)
For each $i=1,\dots,m$,
$\phi_{*}(\partial V_{i})=\partial W\text{ and
}\phi_{*}(\operatorname{int}V_{i})=\operatorname{int}W,$
where $\operatorname{int}X$ denotes the interior of the set $X$.
4. (d)
If $W$ is open, then $\phi_{*}(\partial V_{i})\cap W=\emptyset$.
A key omission from this theorem is the statement that each map $\phi_{*}:V_{i}\to W$ is
$d_{i}$-to-$1$ counting multiplicity, with
$d_{1}+\cdots+d_{m}=\operatorname{rdeg}(\phi)$. This will follow later in
subsection 3.6 when we have a good notion of multiplicity, called local
degree. Overall, one can compare with the classical case Theorem 2.13.
###### Proof.
Since $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ this follows from Theorem 3.6 and
[Ben19, Theorem 7.8]. ∎
The following generalises [Ben19, Corollary 7.9].
###### Corollary 3.9 (Properness Criterion).
Let $\phi_{*}$ be a non-constant skew product, and let
$U\subseteq\mathbb{P}^{1}_{\text{an}}$ be an open connected Berkovich
affinoid. Then the following are equivalent.
1. (a)
$\phi_{*}(U)\cap\phi_{*}(\partial U)=\emptyset$.
2. (b)
$U$ is a connected component of $\phi_{*}^{-1}(\phi_{*}(U))$.
The next theorem generalises [Ben19, Theorem 7.12] and is important for
understanding local ramification or _degrees_ , as explained in the next
subsection.
###### Theorem 3.10.
Let $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ be a non-constant skew product with
scale factor $q$, and let $\zeta=\zeta(a,r)\in\mathbb{P}^{1}_{\text{an}}$ be a
point of Type II or III. Let $\lambda<1$ be large enough so that
$\phi_{2}(y)=\sum_{n\in\mathbb{Z}}b_{n}(y-a)^{n}$
converges on the annulus
$U_{\lambda}=\left\\{\lambda r<|y-a|<r\right\\}$
and such that $\phi_{2}(y)-b_{0}$ has both the inner and outer Weierstrass
degrees equal to $d$. Setting $s=\left|b_{d}\right|^{q}r^{dq}$, we have
$\phi_{*}(U_{\lambda})=\begin{cases}\left\\{\lambda^{dq}s<|y-\phi_{1*}(b_{0})|<s\right\\}&d>0\\\
\left\\{s<|y-\phi_{1*}(b_{0})|<\lambda^{dq}s\right\\}&d<0\end{cases}$
Similarly if instead $\lambda>1$ and $\phi_{2}(y)$ converges on the annulus
$V_{\lambda}=\left\\{r<|y-a|<r\lambda\right\\}$
and such that $\phi_{2}(y)-b_{0}$ has both the inner and outer Weierstrass
degrees equal to $d$, then we have
$\phi_{*}(V_{\lambda})=\begin{cases}\left\\{s<|y-\phi_{1*}(b_{0})|<\lambda^{dq}s\right\\}&d>0\\\
\left\\{\lambda^{dq}s<|y-\phi_{1*}(b_{0})|<s\right\\}&d<0\end{cases}$
Moreover $r\in\left|K^{\times}\right|\iff s\in\left|K^{\times}\right|$, and
$\phi_{*}(\zeta)=\zeta(\phi_{1*}(b_{0}),s).$
###### Proof.
From [Ben19, Theorem 7.12] we know this result for rational maps, meaning that
$\phi_{2*}(U_{\lambda})=\begin{cases}\left\\{\left|b_{d}\right|(\lambda
r)^{d}<\left|y-b_{0}\right|<\left|b_{d}\right|r^{d}\right\\}&d>0\\\
\left\\{\left|b_{d}\right|r^{d}<\left|y-b_{0}\right|<\left|b_{d}\right|(\lambda
r)^{d}\right\\}&d<0\end{cases}$
The action of $\phi_{1*}$ is to raise radii to the power $q$; in particular,
Theorem 3.4 shows that
$\phi_{1*}\left(\left\{R_{1}<\left|z-b\right|<R_{2}\right\}\right)=\left\{R_{1}^{q}<\left|z-\phi_{1*}(b)\right|<R_{2}^{q}\right\},$
therefore
$\phi_{*}(U_{\lambda})=\phi_{1*}\circ\phi_{2*}(U_{\lambda})=\begin{cases}\left\{\left|b_{d}\right|^{q}(\lambda r)^{dq}<\left|y-\phi_{1*}(b_{0})\right|<\left|b_{d}\right|^{q}r^{dq}\right\}&d>0\\\left\{\left|b_{d}\right|^{q}r^{dq}<\left|y-\phi_{1*}(b_{0})\right|<\left|b_{d}\right|^{q}(\lambda r)^{dq}\right\}&d<0\end{cases}$
A similar computation proves the $\lambda>1$ case. ∎
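As a concrete illustration (our own example, not taken from [Ben19]): let $\phi_{2}(y)=y^{2}$ and expand about $a=0$, so that $b_{0}=0$, $d=2$ and $b_{d}=1$. If the skew product has scale factor $q$, then $s=\left|b_{d}\right|^{q}r^{2q}=r^{2q}$, and the first case of Theorem 3.10 reads
$\phi_{*}\left(\left\{\lambda r<|y|<r\right\}\right)=\left\{\lambda^{2q}r^{2q}<|y-\phi_{1*}(0)|<r^{2q}\right\},\qquad\phi_{*}(\zeta(0,r))=\zeta(\phi_{1*}(0),r^{2q}).$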
### 3.5. Local Degrees in Directions
The aim of the next few subsections is to generalise the theory of local
degrees from rational maps to skew products; compare with Benedetto [Ben19,
§7.3, 7.4, 7.5].
###### Definition 3.4.
Let $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ be a skew product. The _relative
degree_ of $\phi_{*}$ (or $\phi$),
$\operatorname{rdeg}(\phi_{*})=\operatorname{rdeg}(\phi)$ is the degree of the
rational function $\phi_{2}(y)=\phi^{*}(y)\in K(y)$.
Suppose that $\phi_{2}$ has the Taylor series at $y=a$ given by
$\phi_{2}(y)=b_{0}+b_{d}(y-a)^{d}+\mathcal{O}\left((y-a)^{d+1}\right),$
where $b_{d}\neq 0$. Then the _algebraic multiplicity_ of $\phi$ (and of
$\phi_{2}$) at $a$, denoted $\deg_{a}(\phi)=\deg_{a}(\phi_{2})$, is $d$, the
degree of the first non-zero term of the Taylor series of $\phi_{2}(y)-b_{0}$
about $y=a$; equivalently, it is the multiplicity of $a$ as a root of the
equation $\phi_{2}(y)=\phi_{2}(a)$.
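For instance (our own example): if $\phi_{2}(y)=y^{2}$, then $\operatorname{rdeg}(\phi)=2$. At $a=0$ the series is $\phi_{2}(y)=(y-0)^{2}$, so $\deg_{0}(\phi_{2})=2$, while at $a\neq 0$,
$\phi_{2}(y)=a^{2}+2a(y-a)+(y-a)^{2},$
so $\deg_{a}(\phi_{2})=1$ whenever $2a\neq 0$, i.e., provided the characteristic of $K$ is not $2$.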
In the case of a $k$-rational skew product $\phi$, this
$\operatorname{rdeg}(\phi)$ is the degree of $\phi_{2}(x,y)$ with respect to
$y$ only; this justifies the use of ‘relative’. We now extend the definitions
of the map on directions and of local degrees to skew products, again using
annuli.
###### Definition 3.5.
Let $\phi_{*}=\phi_{1*}\circ\phi_{2*}$ be a non-constant skew product,
$\zeta\in\mathbb{P}^{1}_{\text{an}}$ and ${\bf v}\in
T_{\zeta}\mathbb{P}^{1}_{\text{an}}$ be a direction at $\zeta$. We define the
_local degree of $\phi_{*}$ at $\zeta$ in the direction ${\bf v}$_,
$\deg_{\zeta,{\bf v}}(\phi)$, and a direction $\phi_{\\#}({\bf v})\in
T_{\phi_{*}(\zeta)}\mathbb{P}^{1}_{\text{an}}$ at $\phi_{*}(\zeta)$ as
follows.
• For a Type I point $\zeta=a$ and the unique direction ${\bf v}$ at $\zeta$,
$\phi_{\#}({\bf v})$ is the unique direction at $\phi_{*}(\zeta)=b_{0}$, and
the local degree at $a$ in this direction is the algebraic multiplicity of
$\phi_{2}$ at $a$:
$\deg_{\zeta,{\bf v}}(\phi)=\deg_{a}(\phi_{2}).$
• For a Type II/III point $\zeta=\zeta(a,r)$ and a direction ${\bf
v}=\vec{v}(a)$, $\phi_{\#}({\bf v})$ is the direction at $\phi_{*}(\zeta)$
containing $\phi_{*}(U)$, where $U=\left\{\lambda r<|y-a|<r\right\}$ is the
annulus in Theorem 3.10, and the local degree in the direction ${\bf v}$ is
the common Weierstrass degree
$\deg_{\zeta,{\bf v}}(\phi)=\operatorname{wdeg}_{a,r}(\phi_{2}).$
In the case where the direction ${\bf v}$ contains $\infty$, we use the annulus $V_{\lambda}=\left\{r<|y-a|<r\lambda\right\}$ from the second part of Theorem 3.10 instead.
# UAS in the Airspace: A Review on Integration, Simulation, Optimization, and
Open Challenges
Euclides C. P. Neto, Derick M. Baum, Jorge Rady de Almeida Jr., João B.
Camargo Jr., Paulo S. Cugnasca
Safety Analysis Group - School of Engineering (POLI)
University of São Paulo (USP)
São Paulo, Brazil
{euclidescpn, derick.baum, jorgerady, joaocamargo<EMAIL_ADDRESS>
###### Abstract
Air transportation is essential for society, and its use is steadily
increasing due to its importance. To improve airspace operation, new
technologies are under development, such as Unmanned Aircraft Systems (UAS).
In fact, in the past few years, there has been a growth in UAS numbers in
segregated airspace. However, there is an interest in integrating these
aircraft into the National Airspace System (NAS). UAS are vital to different
industries due to the advantages they bring to the airspace (e.g.,
efficiency). Conversely, the relationship between UAS and Air Traffic Control
(ATC) needs to be well defined due to the impacts these aircraft may have on
ATC capacity. Over the years, this impact may become lower than it is
nowadays, because the current lack of familiarity in this relationship
contributes to higher workload levels. Therefore, the primary goal of this
research is to present a comprehensive review of the advancements in the
integration of UAS in the National Airspace System (NAS) from different
perspectives. We consider the challenges regarding simulation, final
approach, and optimization of problems related to the interoperability of
such systems in the airspace. Finally, we identify several open challenges in
the field based on the existing state-of-the-art proposals.
Keywords: Unmanned Aircraft System (UAS) $\cdot$ Unmanned Aircraft Vehicle
(UAV) $\cdot$ Integration $\cdot$ Simulation $\cdot$ Optimization $\cdot$
Airspace $\cdot$ Evolutionary Computing $\cdot$ Air Traffic Control (ATC).
## 1 Introduction
Air transportation is essential for society, and its use is steadily
increasing due to its importance [1] [2]. The growth in the number of flights
makes the airspace more complex while generating higher revenue. Several
obstacles concerning the safety and efficiency of the airspace must be
overcome by authorities in the following years. The Air Traffic Control (ATC)
is pivotal in optimizing airspace, assuming that safety and efficiency are
key aspects of airspace operation [3] [4] [5]. The ATC is divided into ATC
units, a “generic term meaning variously, area control center, approach
control unit or aerodrome control tower” [6]. These units are arranged to
accommodate all airspace users by creating sectors. The role of controlling
aircraft in each control sector is played by Air Traffic Controllers (ATCo),
who communicate with ATCos responsible for other sectors to provide smooth
conduction of aircraft throughout their flights.
The ATC aims to offer suitable levels of safety and efficiency and to address
complex situations. Moreover, ATC provides Air Traffic Services (ATS) to
flights through ATCo instructions. The primary objectives of these services
include avoiding mid-air collisions and collisions with obstructions, and
optimizing and maintaining the flow of air traffic [7]. The ATCo conducts the
aircraft in a sector by applying techniques to improve safety and efficiency
(e.g., vectoring). These professionals act collaboratively from the beginning
to the end of each flight, and other ATCos are assigned to control such
flights once a new sector is reached. Conversely, an obstacle currently faced
is to maintain the workload level under an acceptable threshold. (Workload
can be defined as a metric that represents the difficulty of the ATCo in
understanding a particular situation [8] and can be expressed in terms of
seconds.)
Among the several safety threats in airspace operation, mid-air collision can
be highlighted; it depends on a set of events beyond issues in aircraft
mechanical systems, such as high ATCo workload levels and loss of the
established minimum separation. Authorities direct effort toward preventing
such events (e.g., ATCo training for critical situations and the design of
safe standard procedures). Furthermore, in cases of high air traffic density,
a safer measure of the capacity of a sector is based on ATCo workload [9]
[10], i.e., the number of aircraft that can be safely accommodated decreases
when the workload level is higher. As ATCo workload levels are related to
safety, and as the research and operational communities understand airspace
complexity to be one of the main factors that impact this metric [11],
situations that these professionals are not familiar with tend to be more
unsafe. Moreover, several variables compose complexity, such as traffic
density and mental factors [12].
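To make the workload-capacity relationship concrete, the toy sketch below estimates sector capacity from a per-aircraft task-time budget. It is a minimal illustration only: the 70% threshold, the 120 s task time, and the 30% penalty for an unfamiliar aircraft type are invented values, not figures from [8]-[12].

```python
# Toy workload-based sector capacity estimate (all numbers are illustrative).
# Workload is treated as ATCo task time in seconds per hour, as in the
# definition above; capacity is the largest fleet whose summed task time
# stays below a chosen threshold.

def sector_capacity(task_time_per_aircraft_s: float,
                    threshold_fraction: float = 0.7,
                    hour_s: int = 3600) -> int:
    """Largest number of aircraft whose total task time fits the budget."""
    budget_s = threshold_fraction * hour_s
    return int(budget_s // task_time_per_aircraft_s)

# A hypothetical unfamiliar aircraft (e.g., a UAS early in its integration)
# is assumed to cost 30% more task time, which reduces capacity.
print(sector_capacity(120.0))        # 21 aircraft for familiar traffic
print(sector_capacity(120.0 * 1.3))  # 16 aircraft with the unfamiliarity penalty
```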
To improve the airspace operation, new technologies are under development,
such as Unmanned Aircraft Systems (UAS) [13] and Decision Support Tools (DST)
for ATCos (e.g., Arrival and Departure managers) [14]. These new technologies
present advantages in many aspects, such as safety, efficiency, and airspace
capacity. Furthermore, the DSTs aim to lead ATCos to more effective decisions,
which tends to reduce the ATCos' workload and, ultimately, airspace
complexity [11]. Although these technologies are used in different
situations, they may bring uncertainties, since it is reasonable to consider
that ATCos may not be familiar with them. However, new technologies being
integrated into the airspace nowadays (e.g., UAS) may be commonplace in the
future, increasing this familiarity.
Moreover, UAS are vital to different industries due to the advantages they
bring to the airspace (e.g., efficiency) [15]. The UAS has been considered a
relevant topic in the engineering community due to its applications [16]; it
consists of subsystems such as the Unmanned Aerial Vehicle (UAV), its
payloads, the control station, and communications subsystems [13] [16].
Different types of UAS (e.g., Autonomous Aircraft - AA - and Remotely Piloted
Aircraft Systems - RPAS) present different subsystem requirements (e.g.,
remote piloting interfaces are present in RPAS but not in autonomous
aircraft). For instance, the ground station used to pilot RPASs is not part
of the AAs, which are considered fully autonomous.
In the past few years, there has been a growth in UAS numbers [17] in
segregated airspace. However, there is an interest in integrating these
aircraft into the National Airspace System (NAS). These aircraft, which have
several military and civil applications, present integration challenges to be
faced by authorities in terms of safety, i.e., new ways of reaching unsafe
states are introduced into the airspace. For instance, software bugs may
cause the aircraft to maneuver toward undesired headings. Also,
considering RPAS, failures in Command and Control (C2) link, i.e., the
connection pilots use to communicate to the aircraft, may lead to unsafe
states [18] [19].
The relationship between UAS and ATC needs to be well defined due to the
impacts these aircraft may have on ATC capacity. Over the years, this impact
may become lower than it is nowadays, because the present lack of familiarity
in the relationship between UAS and ATCo contributes to higher workload
levels. As UAS currently operate only in segregated airspace, ATC tends to be
more concerned when controlling a gate-to-gate flight of these autonomous
systems. Different challenges must be addressed to enable this integration,
such as specific regulations, policies, and procedures, as well as the
development of enabling technologies and standards for dealing with UAS [20].
As the integration of UAS enables new applications and its use may increase
in the future [21], developing approaches to integrate it safely is
essential.
Furthermore, the Terminal Control Area (TMA) is a critical control area
generally established at the confluence of Air Traffic Service (ATS) routes in
the vicinity of one or more major aerodromes [7]. In this area, the aircraft
tend to be closer to each other. In general, TMA is the most resource-
constrained component of the air transportation system due to the number of
aircraft that operate simultaneously [22]. Its complexity increases according
to the airspace configuration (e.g., traffic density and weather conditions).
Hence, operations in the TMA usually follow established standard procedures,
e.g., the Standard Instrument Departure (SID) and the Standard Instrument
Arrival (STAR).
However, standard procedures (e.g., STAR) cannot be followed in some cases,
e.g., under high traffic density. In these cases, a highly challenging ATCo
task is the sequencing of the aircraft during the approach, considering the
arrival segment and the final approach [23] [24] [25], due to complex
maneuver constraints. To accomplish this, the aircraft are conducted so as to
avoid conflict, i.e., without violating the minimum separation requirements,
as well as to avoid flying through cumulonimbus (CB), cloud formations that
have a real impact on aviation [26]. Finally, the primary
objective of defining a final arrival segment is to deliver a set of aircraft
from the final sector of the TMA to the final phase of its landing procedure
(i.e., the final approach), taking operational efficiency and safety into
consideration.
Establishing final arrival segments for achieving optimized aircraft operation
in terms of safety and efficiency is not a simple task. From the safety
perspective, the ATCo workload related to conflict avoidance during this
phase, i.e., maintaining minimum separation from other aircraft and from
cumulonimbus (CB), must remain at acceptable levels, since an increase in
this metric may impact safety levels. From an efficiency perspective, the set
of aircraft must be delivered to the airport as soon as possible.
Depending on the scenario, the ATCo must act rapidly to avoid airspace
reaching unsafe states. As the number of aircraft increases, the situation
becomes more complex and, consequently, more difficult to be controlled by the
ATCo.
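The trade-off just described can be phrased as a small cost model. The sketch below is a minimal illustration, not a method from the cited works: the 90 s separation gap, the weights, and the idea of charging extra ATCo effort per second of delay for an unfamiliar aircraft are all invented assumptions.

```python
# Toy arrival-sequencing cost: total delay (efficiency) plus an assumed
# ATCo-effort term that grows with the delay of each aircraft, weighted
# higher for unfamiliar types such as UAS (illustrative values only).
from itertools import permutations

def sequence_cost(seq, min_gap_s=90.0, w_delay=1.0, w_workload=0.5):
    """seq: list of (eta_s, workload_factor) in landing order.
    workload_factor is a hypothetical per-second-of-delay effort multiplier."""
    t_prev, cost = None, 0.0
    for eta, factor in seq:
        t = eta if t_prev is None else max(eta, t_prev + min_gap_s)
        delay = t - eta
        cost += w_delay * delay + w_workload * factor * delay
        t_prev = t
    return cost

# Two manned aircraft (factor 1.0) and one UAS (factor 3.0): brute-force
# the landing order with the lowest combined delay/workload cost.
fleet = [(100.0, 1.0), (120.0, 1.0), (110.0, 3.0)]
best = min(permutations(fleet), key=sequence_cost)
print(best, round(sequence_cost(best), 1))
```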
On the other hand, integrating UAS into the NAS is a challenge nowadays.
According to ICAO [27], “the airspace will be organized and managed in a
manner that will accommodate all current and potential new uses of airspace,
for example, Unmanned Aerial Vehicles (UAV) and that any restriction on the
use of any particular volume of airspace will be considered transitory”.
Furthermore, although rules for UAS flights are defined for
segregated airspace [19], the increasing interest in the usage of UAS for
different applications (military and civilian) leads to a need for integrating
them into the National Airspace System (NAS). To accomplish this, safety
levels must not be compromised [19].
Regarding the challenges faced in the final sector in complex situations, the
presence of UAS is an important factor. Due to the lack of familiarity, it is
reasonable to consider that the ATCo may feel uncomfortable controlling
autonomous aircraft, as a result of the uncertainty about UAS operation [28]
[29]. However, the arrival procedure is a critical and complex task even
without the UAS presence, and defining sequencing solutions for both Manned
Aircraft (MA) and UAS, especially during the early stages of UAS integration
in the National Airspace System (NAS), may lead to higher ATCo workload
levels. Furthermore, simulation methods that include the UAS in the final
sector and cover complex situations (e.g., bad weather conditions) are still
lacking.
Finally, measuring the familiarity of the ATCo with a particular aircraft
(e.g., UAS) is a challenge, not only because UAS do not operate in the NAS
nowadays, but also because of the relationship between familiarity and
cognition. Measuring familiarity enables better sequencing solutions in
arrival procedures, especially from the ATCo workload perspective, i.e., from
the safety perspective.
Therefore, the main goal of this research is to present a comprehensive
review of the advancements in the integration of Unmanned Aircraft Systems
(UAS) in the National Airspace System (NAS) from different perspectives. We
consider the challenges regarding simulation, the final approach, and the
optimization of problems related to the interoperability of such systems in
the airspace. We also highlight several open challenges in the field based on
the existing state-of-the-art proposals. Figure 1 illustrates the main
aspects analyzed for UAS Integration, Simulation, and Optimization. For each
area, several aspects are taken into consideration based on their relevance
in the reviewed works; some aspects are included in more than one area.
Figure 1: Main aspects analyzed for UAS Integration, Simulation, and
Optimization.
This article is organized as follows: Section 2 reviews strategies focused on
the UAS Integration in the National Airspace System (NAS). Sections 3 and 4
analyze airspace simulation and arrival segment optimization efforts
considering the UAS presence. Finally, Sections 5 and 6 present the open
challenges and conclusions of this research, respectively.
## 2 UAS Integration in the National Airspace System (NAS)
This section presents works related to approaches of including and measuring
impacts of Unmanned Aircraft Systems (UAS) integration in the airspace from
different perspectives. The works presented in this section are classified
according to the presence of the following critical aspects: Large Aircraft
(LA), Impacts on ATCo Workload (ATCoW), Levels of Familiarity (LF), and Mixed
Aircraft (MixA - UAS and Manned Aircraft operating together).
Shmelova et al. [30] present an approach based on statistical data to deal
with the problem of Unmanned Aerial Vehicles (UAV) flights considering
different tasks in emergencies, which are special situations and tend to
increase the Air Traffic Controller (ATCo) workload. Also, an analysis of the
emergency type is conducted and a sequence of actions is defined. The authors
present a motivation for the development of their research, which includes the
lack of algorithms to recommend actions for the UAV operator in an emergency,
problems in the decomposition of the decision-making process and the lack of
structure of Distributed Decision Support System (DDSS), which aims to
recommend actions to appropriate aircraft from a global perspective, for
remotely piloted aircraft. Furthermore, models are developed by the authors to
determine the optimal landing site in specific situations and search for
optimal flight routes. However, this effort only considers emergencies, and
the proposed model does not consider complex airspace, i.e., airspace with
many aircraft. Finally, impacts on ATCo workload due to UAS presence and lack
of familiarity of ATCo with this new technology are not taken into account.
Pastor et al. [31] aim to evaluate the interaction between a Remotely Piloted
Aircraft System (RPAS) and Air Traffic Management (ATM) considering that an
RPAS is being operated in shared airspace, i.e., along with traditional
aircraft in National Airspace System (NAS). This evaluation employs human-in-
the-loop real-time simulations that allow simulating activities from the RPAS
Pilot-in-Command (PiC) and the Air Traffic Controller (ATCo), and three
different perspectives: the separation management, the contingency management,
and the capacity impact on the overall ATM system. The experiments, which
were realistic though not excessively complex, yielded recommendations to
improve the evaluation, e.g., a preliminary analysis of traffic to prevent
separation conflicts and an improvement of the ADS-C flight intent
communication mechanism. However, this research does not consider complex
airspace scenarios regarding the number of aircraft.
The authors in [32] propose a geometrical horizontal detect and avoid
algorithm for UASs operation, based on ADS-B-equipped aircraft information, in
Terminal Control Areas (TMA), considering a constant speed. This approach
employed recorded commercial traffic trajectories and random conflict
scenarios with UASs. The main goal is to show the algorithm’s applicability in
ensuring the separation from traditional aviation, i.e., this research
considers a mix of manned and unmanned aircraft. Also, six different missions
are considered, such as flying straight or turning and climbing or descending.
Other important aspects observed were the influence of the various parameters
on the separation achieved and the number of maneuvers required, i.e., the
strategy used selects the best directions respecting the range of heading
degrees allowed. The experiments showed the proposal’s effectiveness, which
maintains the heading constant and changes it robustly if the minimum
separation threshold is greater than the current separation between the UAS
and a given aircraft. One should note that these methods were tested on 2850
realistic traffic scenarios, which were issued from data recorded in a French
Terminal Control Area (TMA). However, although there is a considerable effort
in the detection and avoidance process, the authors do not consider UAS as
large aircraft. Finally, the ATCo workload (e.g., the additional cognitive
workload related to UAS operation) is not evaluated.
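To illustrate the flavor of such a geometric, constant-speed approach (this is our own simplified sketch, not the algorithm of [32]; the 5 NM threshold, 120 kt speed, and 5-degree heading scan are invented), one can predict the distance at the closest point of approach (CPA) for candidate headings and keep the heading unless separation would be lost:

```python
import math

SEP_NM = 5.0      # assumed horizontal separation minimum
SPEED_KT = 120.0  # constant ownship speed, per the constant-speed assumption

def cpa_distance(p_own, hdg_deg, v_own_kt, p_int, v_int_kt):
    """Predicted minimum distance (NM) between ownship and intruder,
    both flying straight at constant velocity."""
    vx = v_own_kt * math.sin(math.radians(hdg_deg))
    vy = v_own_kt * math.cos(math.radians(hdg_deg))
    rvx, rvy = vx - v_int_kt[0], vy - v_int_kt[1]  # own velocity minus intruder's
    rx, ry = p_int[0] - p_own[0], p_int[1] - p_own[1]
    rv2 = rvx * rvx + rvy * rvy
    t = max(0.0, (rx * rvx + ry * rvy) / rv2) if rv2 > 0 else 0.0
    return math.hypot(rx - rvx * t, ry - rvy * t)

def choose_heading(current_hdg, p_own, p_int, v_int_kt):
    """Keep the current heading if safe; otherwise scan outward in 5-degree
    steps and return the smallest deviation restoring the separation."""
    offsets = [0] + [s * o for o in range(5, 181, 5) for s in (1, -1)]
    for off in offsets:
        hdg = (current_hdg + off) % 360
        if cpa_distance(p_own, hdg, SPEED_KT, p_int, v_int_kt) >= SEP_NM:
            return hdg
    return current_hdg  # toy fallback: no safe heading found

# Head-on intruder 20 NM ahead, southbound at 120 kt; ownship heading north.
print(choose_heading(0.0, (0.0, 0.0), (0.0, 20.0), (0.0, -120.0)))  # -> 30.0
```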
The authors in [33] focus on possible guidelines for UAS integration in the
National Airspace System (NAS). The main objective of this approach is to
maintain the level of safety of UAS and traditional aircraft nearly the same,
which may lead the authorities to implement new airspace rules such as
additional separation for unmanned traffic. The authors also consider the
usage of Airborne Collision Avoidance System (ACAS) maneuvers and avoidance
logic. In this work, the authors conducted the experiments considering a
series of simulations that present a reduction in conflict potential (UAS and
traditional traffic). The reduction of impact on airspace operation,
considering the UAS integration, is also highlighted since its integration in
the NAS is a challenge in terms of future acceptance of these autonomous
systems. In this context, hazardous situations related to UAS operation are
stated, such as UA leaving cleared planned route, ATC having no position
information, and loss of communication. Furthermore, the reader should note
that UAS flights can be conducted with low interference considering proper
mission planning, although ATC needs to control these autonomous systems,
which tends to increase the workload of the ATCos. The authors also suggest
the presence of specialized UAS controllers, which could share the duty of
controlling the airspace among many ATCos. However, although the workload is a
concern of this paper, a workload evaluation is not conducted. Moreover, the
workload related to the operation of UAS does not include the additional
cognitive workload present especially in the early stages of UAS integration.
Finally, different levels of familiarity of the ATCo with these systems are
not considered.
An approach for safety and risk assessment in unmanned aerial system (UAS)
operations, which employs stochastic fast-time simulation and is conducted in
the NAS, is presented in [34]. Considering that the integration of UAS in the
NAS is a concern to airspace regulators, the main goal of this research is to
calculate fatality risk and to measure how different aspects and phases of UAS
operations increase the risk. To accomplish this, the authors model and
simulate UAS operations over a specific hazardous region by applying different
stochastic parameters (e.g., altitude, performance variation, and latency).
Note that the risk analysis considers fatalities and is based on published
ground impact models, which enable the usage of fast-time simulation to assess
specific situations. Furthermore, the method adopted in this research, which
compared different risk analysis models, is important to highlight mitigation
actions for all stakeholders in the safety assessment. However, although this
paper discusses the importance of accurately measuring the risks of fatalities
in UAS operations, some aspects are not considered. For instance, the
workload associated with the presence of UAS in the airspace is not
addressed; thus, the level of familiarity of the ATCo with this technology is
not considered.
In [35], the effectiveness of geofencing systems (such as static and dynamic
design) for UAS, which defines geographical boundaries in specific airspace
areas, is analyzed. The authors also compare the geo-fencing effectiveness to
the current and traditional proposed regulations on collision avoidance
systems. To accomplish this, Monte Carlo simulations are employed, considering
growth and incident rates based on the incident data. In this context, plenty
of UAS (more than 1 million) are available to operate within the National
Airspace (NAS), but there is a need to integrate them safely into the NAS.
This process must be conducted to optimize the relationship between cost and
safety. Furthermore, UAS is considered a disruptive technology to be included
in the NAS, and operational cost reduction motivates such integration.
However, even considering the substantial growth of these aircraft in the
past few years, the step-wise increase of operational tests, and growing
global acceptance, the number of incidents has also grown. This growth has
been due to different reasons, such as UAS disobeying the planned altitude
and location. The experiments showed that UAS operations confined to
regulated thresholds, i.e., to specific geographical areas, provide a
cost-effective method that respects safety levels and eliminates 98% of the
UAS incidents reported by the FAA. However, this research does not consider
aspects related to ATCo operation, such as workload.
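A minimal Monte Carlo sketch of this kind of analysis is given below. Apart from the 98% mitigation figure quoted above, every number (fleet size, growth rate, per-aircraft incident rate, trial count) is an invented assumption, and Poisson incident counts are our own modeling choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def yearly_incidents(years=5, fleet0=1_000_000, growth=0.15,
                     rate=1e-5, mitigation=0.98, trials=10_000):
    """Mean incidents per year with and without geofencing, assuming
    incidents ~ Poisson(fleet * rate) and geofencing removing `mitigation`
    of them (98% per the figure quoted above; other values are invented)."""
    fleets = fleet0 * (1.0 + growth) ** np.arange(years)    # fleet growth
    baseline = rng.poisson(fleets * rate, size=(trials, years))
    geofenced = rng.binomial(baseline, 1.0 - mitigation)    # surviving incidents
    return baseline.mean(axis=0), geofenced.mean(axis=0)

base, fenced = yearly_incidents()
print(np.round(base, 2))    # roughly 10, 11.5, 13.2, ... incidents/year
print(np.round(fenced, 2))  # about 2% of the baseline
```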
Gimenes et al. [36] propose guidelines to support UAS regulations for the
integration of fully autonomous UASs into the Global Air Traffic Management
(GATM) System and, consequently, into shared airspace. These guidelines are
proposed from three perspectives: the aircraft itself, the Piloting
Autonomous System (PAS), and the integration of autonomous UASs in the
National Airspace System (NAS). Considering that there are social and economic
interests in UAS applications, enabling this technology to operate along with
Manned Aircraft (MA) has considerable potential. The main issue of this
integration is that aeronautical authorities should regulate UAS operations in
the NAS, although defining these rules is difficult since there is not a deep
understanding of UAS operations and how they behave in case of failures (e.g.,
contingency operations). Throughout the paper, the authors present the
guidelines with different focuses. For instance, regarding the “aircraft
focus", although it is not in the scope of this paper, the authors state that
it “should be submitted to at least the same processes and criteria of
developing, manufacturing and certification regarding avionic systems of
manned aircraft, aiming to reach the same safety levels". Furthermore, the
authors highlight that the UAS concept should be based on aeronautical
precepts and that the possibility of integrating UASs into airspace depends on
specific regulations. However, this research does not consider the ATCo
evaluation and the impact of UAS operation on ATCo performance.
In [37], the authors present a discussion on the integration of UAS in the
NAS. This is a complex system-of-systems problem, whose level of difficulty
goes beyond the technical challenges related to the development,
implementation, and validation of this technology. Considering that the
system design itself is a complex problem, the authors emphasize that the
operation of UAS in the NAS depends on aviation regulatory authorities, but
this sort of regulation is not simple to define. The main challenge
identified is to design UASs with high safety standards that operate like
manned aircraft (e.g., with transparency). Since UAS numbers have increased
tremendously in the last few years due to their distinct capabilities and
cost advantages compared to manned aircraft in most situations, enabling
these aircraft to operate alongside manned aircraft is desirable.
different regulations are presented, such as regulations followed in
Australia. Furthermore, this paper analyzes reasons for the difficulty in
integrating UAS in the NAS. However, although this research considers workload
an important aspect of UAS inclusion, it does not propose an approach to
evaluate it. Finally, the evolution in terms of the familiarity of the
relationship UAS-ATCo is not considered.
In [38], the authors aim to identify potential ways of mitigating issues
related to different UAS challenges. They also revise some of the pros and
cons of these different approaches and give recommendations for changes in
procedures, automation, and policy. The impacts of an integrated UAS
operation on ATC are not fully clear yet, even in less congested areas, but
there is a need to integrate these aircraft for cost reduction and
efficiency. The
MITRE Corporation, which is the corporation of the authors of this research,
has been using techniques to identify the impacts of UAS on ATC in the past
years, which has shown that, for instance, the process of filing flight plans,
the usage of latitude/longitude waypoints instead of named fixes or waypoints
and possible delays or loss of communication have considerable impact. More
specifically, the authors state that the impacts are presented in five major
areas: UAS flight planning and automation, the UAS control link (delays and
loss), UAS-specific information and procedures, ATC training, and UAS
interaction with the future NAS. However, although this research highlights
challenges of ATC in terms of UAS integration and considers ATCo workload as
an impacted metric, the level of familiarity of ATCo with a specific aircraft
is not considered, i.e., there is not a workload evaluation process that
highlights the difference between UASs of different familiarity (from the ATCo
perspective).
The authors in [39] deal with the problem of integrating UASs above urban
environments, i.e., into low-altitude airspace. This integration includes
major Terminal Manoeuvring Areas (TMA) and helicopter landing sites. A set of
data-driven modeling techniques are employed to assess existing air traffic
as a starting point for UAS operation. To accomplish this analysis, the
authors exploit low-altitude air traffic data sets in order to discover
existing no-fly zones, together with an alternative geometric approach to
defining exclusion zones, which is applied to a real region (Australia)
including one international airport and a helicopter landing area.
Considering that determining adequate exclusion zones
for unmanned aircraft in an urban environment is an important task and that
regulations may, in some cases, include UAS in these areas without
considerable reduction of risks of collision, the main goal of this research
is to propose an approach to define exclusion zone appropriately. The results
showed a need for more rigorous scientific approaches to safely integrate
these autonomous aircraft into shared and urban airspaces. However, although
this work constitutes an important and unique contribution to UAS integration
in the urban environment, aspects such as workload measurement during the
definition of these areas are not considered.
The authors in [40] propose a way to create a Risk-Informed Safety Case (RISC)
applied to the context of small UAS operation safety assurance. This approach
aims to facilitate safe and cost-effective operations of small UAS by
presenting the comprehensive measures considered to eliminate, reduce, or
control the safety risk. The RISC proposed comprises barrier models of safety,
which support the development of safety measures, and structured arguments,
which assure safety in operations (through, for instance, appropriate
evidence). The authors also propose a model for small UAS operational risk,
which considers, for instance, specific hazards (e.g., mid-air collision) and
operational risks which depend on the small UAS. Ultimately, this paper shows
key safety-related assurance concerns to be addressed and the development of a
layered framework for reasoning about those concerns, which can be useful for
regulators and various stakeholders in justifying confidence in operational
safety in the context of the absence of the relevant aviation regulations for
small UAS. However, although the authors focused on proposing an approach to
deal with the current state, i.e., a lack of presence of UAS in shared
airspace, this research does not measure the impact of these aircraft on ATCo
operation (e.g., workload) and, ultimately, into safety levels. Finally,
different levels of aircraft familiarity to the ATCo are not considered.
In [41], a new framework for system safety certification under conditions of
uncertainty is proposed considering a Bayesian approach to the modeling of the
average probability of failure conditions. Nowadays, the debates over
developing appropriate system safety requirements for UAS are heated. An
interesting point of view is approaching this analysis by determining the
allowable average probability per flight hour of failure conditions. Due to
the lack of knowledge and data to inform the assessment of failure
probabilities of UAS, a level of uncertainty may be considered during the
system safety assessment (SSA) process, which presents many advantages. The
conducted experiments showed the suitability of the proposed approach’s
safety measures, and other sources of uncertainty are intended to be
considered in future works. Finally, the authors challenge the use of a
constant failure rate model by adopting a Weibull distribution, which seems
to be a more appropriate representation of UAS failure occurrence. However,
although there
is an effort to estimate UAS failures and an interesting approach that relates
uncertainty to safety assessment that can be applied to small and large UASs
is presented, this research does not focus on aspects related to ATCo
operation, such as communication to UAS.
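The modeling choice at stake can be seen in a few lines of code. The sketch below contrasts the two hazard functions; the shape and rate parameters are invented for illustration and are not estimates from [41].

```python
# Instantaneous failure rate (hazard) per flight hour under two models.

def hazard_constant(t_h, lam=1e-4):
    """Constant failure rate: h(t) = lambda, regardless of flight hours."""
    return lam

def hazard_weibull(t_h, shape=0.8, lam=1e-4):
    """Weibull hazard h(t) = (k/s) * (t/s)^(k-1) with scale s = 1/lambda.
    A shape k < 1 gives a rate that decreases with accumulated flight
    hours, the behavior argued above to better fit UAS failures."""
    scale = 1.0 / lam
    return (shape / scale) * (t_h / scale) ** (shape - 1.0)

for t in (10.0, 100.0, 1000.0):
    print(f"{t:7.0f} h  constant={hazard_constant(t):.2e}  "
          f"weibull={hazard_weibull(t):.2e}")
```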
Romero et al. [42] discuss the present and future of the Remotely Piloted
Aircraft System (RPAS) in terms of regulation by aeronautical authorities.
This discussion considers different countries (e.g., Colombia, Malta, and
Japan), aiming to understand the integration of RPAS in the NAS from the ATCo
perspective. An analysis of the existing classification types of RPAS (classes
one, two, and three) is conducted. Moreover, the results of integrating three
RPAS in the NAS, successfully performed in a real setting, from the air
traffic control center in Barranquilla (Colombia) are presented. Note that
there were no losses of separation with other aircraft or between RPAS and
that one of the authors of this paper, an air traffic controller of
Barranquilla, coordinated the different entities that participated in the
implementation of this successful operation of integrating RPA in the NAS.
Finally, a proposal is made to integrate this type of aircraft in the NAS,
which considers airspaces classification, RPA classification (in terms of
navigation performance), and contingency operation. However, although this
paper is an outstanding contribution due to the integration of RPAS into
shared airspace in a real setting, the authors do not consider future
scenarios in which RPAS may be represented by commercial aircraft. Finally,
different types of aircraft in terms of ATCo familiarity are not considered.
The basis for implementing a risk model and general methodologies to
investigate RPAS safety, according to the operational scenarios defined by
the European Aviation Safety Agency (EASA), is proposed in [43]. The authors
analyzed results achieved in experimental flights of multiple RPAS. The
modern aeronautical scenario is being adapted to accommodate new key
elements, including the Remotely Piloted Aircraft Systems (RPAS), initially
used for military purposes only; this new sort of aircraft is ready to become
a new airspace user in civilian applications and, even though it cannot
operate in the NAS nowadays, there is an expectation of growth in investments
in this technology. This research points out the hazards related to RPAS
operation in the NAS, such as failures in the Command and Control (C2) link,
ATCo performance in high-workload situations, pilot performance in
high-workload situations, external factors (e.g., emergencies), and jamming.
Moreover, the authors highlight that a requirement
for disclosing the airspace to RPAS is the implementation of a specific Safety
Management System (SMS) for every aeronautical operator. Finally, the
preliminary risk analysis presented in this research highlights many
possibilities to be further investigated in future works. However, although
this approach can be easily extended from small to large RPAS, this research
does not focus on the different maturity levels that each aircraft may present
in the relationship with the ATCo, which may have a considerable impact on
workload.
Perrottet [44] explores the challenges related to the application of
Performance Based Navigation (PBN) in UAS operation, which include GNSS
navigation, tailoring layered PBN procedures to UAS performance
characteristics, and the capability of performing instrument procedures (in
case of communication link failures). The main goal of this integration is to
enable UAS to fly
without limitations in airspace shared with other aircraft. However, the
primary focus of integrating these aircraft has been identifying a way to
compensate for the lack of a human pilot onboard, such as Detect and Avoid
(DAA) and Datalink technologies. The author also states that safety and
efficiency are two key metrics of airspace and that they may or may not be
inherently linked as in manned aviation, i.e., UASs may provide a more
independent relationship between safety and efficiency for specific
operations. Finally, this new balance between safety and efficiency must aim
to maintain the high level of safety observed in today’s NAS, which is a
requirement for making the advantages provided by UAS worthwhile. However,
although the authors deal with the problem of integrating UAS into shared
airspace, this research only presents an overview of the challenges faced.
One should note that large aircraft are also considered, but aspects such as
ATCo workload are not addressed.
In [45], the authors propose a qualitative approach for assessing the safety
of UAS operations when using Automatic Dependent Surveillance-Broadcast
(ADS-B) systems, considering a new testing platform called PIpE-SEC as a
possible means for this safety evaluation. This research focuses on the
influence of data integrity, which is considered a safety-related parameter.
The increase in UAS numbers is pressing authorities to design airspace rules
to integrate them safely, although safety issues arise when both
manned and unmanned aircraft coexist in the airspace. Furthermore,
surveillance and data integrity play important roles in controlling these
aircraft. In this context, the positional information provided by the ADS-B,
which is essential to UASs control systems operation, interacts with the Sense
and Avoid Systems (S&AS) of the UAS to avoid exposure to unsafe situations.
Finally, the authors discuss the usage of a methodology previously applied to
manned systems for assessing safety and state that the adoption of the
presented methodology and tools enables the identification of appropriate
scenarios for the insertion of UAS along with manned aircraft, maintaining the
same safety. However, this research does not consider the impacts of
positional errors on aircraft with different maturity levels. For instance,
the impacts of positional error of UAS in the early stages of its integration
and in the long-term stage are not considered.
Oztekin et al. [46] propose a systems-level approach to analyze the safety
impact based on risk controls of introducing new technologies into the NAS,
such as UAS, considering Safety Management Systems (SMS) principles and
existing regulatory structure. Furthermore, the authors present a methodology
to identify minimum safety baselines for safe operations in the NAS and show
its applicability through a proof-of-concept study. In this context, UAS
emerges as a viable technology for potential civil and commercial applications
in the NAS, although it brings the need for a deeper analysis of safety
impact. A detailed outline of the concepts and methodologies used for
constructing a proof-of-concept study for the proposed approach, which
considers related hazards and underlying causal factors, is also presented.
Finally, the safety baseline proposed in this research identifies a set of
minimum risk controls for conducting safe operations. In future steps, the
authors intend to focus on identifying the UAS-specific components of the
developed safety baseline to identify hazards related specifically to the UAS
domain. However, although this research considers scenarios with both manned
and unmanned aircraft, aspects such as ATCo workload are not considered.
An architecture that provides data and software services to enable a set of
UAS platforms to operate in the NAS (including, for instance, terminal, en
route, and oceanic areas) is presented in [47]. The authors present the
general architecture and a Sense and Avoid (SAA) testbed implementation to
quantify the benefits. This architecture, based on a Service Oriented
Architecture (SOA) with open standards, aims to support UAS operations by
offering services to meet their specific requirements, such as command,
control, and data management. This proposed approach is considered guidance
and offers architectural best practices. Finally, even considering that an SOA
architecture makes some aspects of certification more challenging, this
approach presents some advantages and can be implemented to meet performance
requirements. One should note that certification may be more straightforward
considering the usage of formal service contracts, comprehensive interface and
quality of service specifications, and governance process in this SOA
architecture. However, this research does not provide specific services
considering each aircraft’s different maturity levels. Also, although this
contribution focuses on integrating UAS in the NAS, aspects such as impacts on
ATCo workload are not considered.
Wargo et al. [48] present an integrated view of how enabling technologies can
support the Remote Pilot in Command (PIC) and UAS operations in congested
terminal airspace. There is a desire, nowadays, to integrate large
and small UAS (e.g., RPAS) into the complex terminal environment and the
airport surface. The new surveillance technologies that are under development,
as well as the access to the NAS system information via System Wide
Information Management (SWIM), are means of improving the remote UAS Pilot in
Command’s (PIC’s) performance and, consequently, of conducting UAS operations
safely in the terminal environment. Through these resources, vendors can get
data feeds for, for instance, flight planning, airport status, and weather
information. All of these information streams provide better Situational
Awareness (SA) and a better understanding of the relationship of UAS to other
aircraft movements for remote pilots, which enables more efficient operations.
Furthermore, other enabling technologies are presented in this paper, such as
vision technologies, control techniques, and specific pilot alerts. Finally,
the authors have proposed an approach that would include additional
information to remote pilots’ flight control cockpit-like displays.
In [49], the authors present the advantages and disadvantages of four
architecture alternatives for enabling FAA Next-Gen National Voice System
(NVS), which are Legacy Analog, UAS Gateway Connected to Remote Radio Node
(RRN), UAS Gateway Connected to AVN and UAS Gateway over FAA Telecommunication
Infrastructure (FTI). Considering the architecture choice, UAS Gateway design
and functional requirements development are presented. As UAS technology
advances and operations become feasible and cost-effective, architectures that
support seamless interaction between UAS and the ATC are needed. These
architectures should include a UAS network Gateway for managing Air Traffic
Voice Node (AVN) within the airspace via a networked Ground-to-Ground (G/G)
connection. Several functional requirements must be considered in this
context, such as latency, security, access, communication, frequency, and
fault. On the other hand, the main components of the NVS include the ATC Voice
Node (AVN), which connects the pilot and ATC, and Local Radios (LRs), which
are used in tower operations. Finally, as current technologies adopted in UAS
operations introduce long latency and may sometimes be unavailable, enabling
UAS integration into the NextGen voice system is important. In conclusion, the
authors highlight that the 1-to-1 deployment of UAS Gateways to AVN and the
deployment of “access gateways” to provide a point of entry for the UAS PIC is
the recommended option. However, although this research is an important
contribution in terms of integration of UAS considering appropriate
communication, the relationship of these aircraft with the ATC is not
considered.
Finally, this section presented the works related to UAS integration in the
National Airspace System (NAS). Each work covers different aspects, and Table
1 summarizes the characteristics of all works based on the following
classifications:
• Large Aircraft (LA): Indicates if the research considered large UAS in the
proposed approach;
• Impacts on ATCo Workload (ATCoW): Indicates if the impacts related to UAS
operation on ATCo workload are considered;
• Levels of Familiarity (LF): Indicates if the proposed integration approach
takes the familiarity of ATCo with the particular aircraft into account;
• Mixed Aircraft (MixA): Indicates if UAS operations are considered along
with Manned Aircraft (MA) operations.
This table shows that most related works consider a mix of manned and unmanned
aircraft. Furthermore, many works consider UAS as a large aircraft. On the
other hand, although the impacts of UAS on ATC performance are important to be
measured and reduced, only two related works consider the integration from the
ATCo perspective. Also, none of the listed works treat all the criteria
presented in the Table.
Table 1: Review of UAS Integration in the National Airspace System (NAS).

Related Work | LA | ATCoW | LF | MixA
---|---|---|---|---
[30] | X | X | X |
[31] | X | | | X
[32] | X | X | X |
[33] | | X | X |
[34] | | X | X |
[35] | X | X | X |
[36] | | X | X |
[37] | | X | X |
[38] | | X | X |
[39] | X | X | X |
[40] | X | X | X |
[41] | | X | X |
[42] | X | X | X |
[43] | X | | X |
[44] | | X | X |
[45] | | X | X |
[46] | | X | X |
[47] | | X | X |
[48] | X | X | X |
[49] | X | X | X |
## 3 Simulation of UAS in the Airspace
This section presents works related to airspace simulation methods that may
include UAS. To identify research gaps, many aspects are analyzed. The works
presented in this section are selected according to the presence of the
following aspects: presence of UAS (UAS), Cognitive Impact of Different
Aircraft (CIDA), Bad Weather Conditions (BWC), Conflict Avoidance (CA), Air
Traffic Controller (ATCo), and vectoring (Vc) and workload (Wl) evaluation.
In [50], the authors present two simulation tools focused on unmanned aircraft
operations within shared airspace, considering the safety perspective. To
accomplish this, a fast pair-wise encounter generator is proposed to simulate
the See and Avoid environment, which is demonstrated through statistical
performance evaluation of an autonomous See and Avoid decision and control
strategy collected in experiments. Also, an unmanned aircraft mission
generator is presented, which helps to visualize the impact of multiple
unmanned operations. The authors intend to use these analysis tools in
exploring some of the fundamental and challenging problems faced by civilian
unmanned aircraft system integration and consider that these simple simulation
tools can be valuable when assessing a future aerospace environment. Finally,
future works, such as applying their strategy in random walk style missions,
are pointed out. However, this work does not include Air Traffic Controller
(ATCo) aspects in the simulation, such as workload. Also, autonomous aircraft
are not assigned a relative cost reflecting the lack of familiarity that
airspace operators (e.g., ATCo) have with this new technology.
Scala et al. [51] propose a methodology for developing an airport arrival and
departure manager tool. Optimization and simulation techniques are employed
for improving the robustness of the solution. The main goal is to help air
traffic controllers manage the inbound and outbound traffic without incurring
conflicts or delays, i.e., this tool can help them make the right decisions
quickly. The decisions taken in the present methodology for each aircraft are
related to entry time and entry speed in the airspace and pushback time at the
gate. Finally, this approach provides a smooth flow of aircraft both in the
airspace and on the ground. The experiments, which considered the Paris
Charles de Gaulle Airport as the case study, showed that conflicts were
considerably reduced. However, although the number of conflicts is reduced
with this approach in this simulation tool, Unmanned Aircraft Systems (UASs)
are not considered. Also, the uncertainty related to autonomous control
systems (such as the airport arrival and departure manager tool) is not
considered.
Farlik [52] proposes the concept of an air force simulator operational
architecture. Considering that live military training in the airspace is
expensive and that information technologies have evolved in the past years,
simulation becomes a feasible alternative for training; building military
simulation centers with a high level of realism may be useful in this sense.
To train a wide spectrum of personnel together (e.g., pilots and ATCos),
simulation capabilities are merged into a single robust simulation
environment, which considers, for instance, cooperation. Finally, although
the simulation of air defense operations with all its aspects is a complex
process, this paper states the essential conceptual operational architecture
of the proposed air defense simulator, helping to structure future simulator
architecture according to military requirements. However, this paper presents
a set of recommendations but does not include the UAS presence, which may be
feasible in the future.
The authors in [53] present a simulation study on Air Traffic Control (ATC)
strategies aiming to use global traffic information to improve the decision-
making process in local ATC. Considering that ATC is a decentralized system,
the control sectors face the challenge of using all available information to
manage local air traffic safely, smoothly, and cost-effectively. The strategy
adopted concerns how to define and apply various local ATC rules (e.g., the
first-come-first-served rule) to incoming traffic, i.e., traffic that will
enter the local sector and whose information is available in global ATC.
Finally, a simple ATC network model is set up, and a software simulation
system is proposed; the simulation results showed that applying an
inappropriate set of rules can cause heavy traffic congestion, while an
appropriate strategy (based on global information) can significantly reduce
delays, improve safety, and increase the efficiency of using the airspace. The
authors also indicate future directions of this research, such as introducing
more ATC rules and studying the effect of each of them, collaborating with the
ATC industry to modify and improve the simulation systems, and designing
proper ATC strategies. However, although the proposed strategy considers the
ATC, UAS operation is not considered. Also, different costs, in terms of
workload, for different aircraft are not considered.
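The effect described above can be reproduced with a toy first-come-first-served queue. The sketch below is our own illustration, not the simulation system of [53]; the 60 s handling time and the two arrival streams are invented.

```python
# Toy FCFS handling at a sector entry: each aircraft needs `service_s`
# seconds of ATCo attention before the next one is accepted, so bunched
# arrivals accumulate delay that upstream (global) metering can avoid.

def fcfs_delays(arrival_times_s, service_s=60.0):
    free_at, delays = 0.0, []
    for t in sorted(arrival_times_s):
        start = max(t, free_at)        # wait until the controller is free
        delays.append(start - t)
        free_at = start + service_s
    return delays

burst = [0, 10, 20, 30, 40]       # uncoordinated, bunched arrivals
metered = [0, 60, 120, 180, 240]  # spaced using global information
print(sum(fcfs_delays(burst)))    # 500 s of total delay
print(sum(fcfs_delays(metered)))  # 0 s
```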
In [54], the authors present a framework for facilitating rapid development
and deployment of distributed simulations based on Java virtual machines
called Runtime for Airspace Concept Evaluation (RACE). Developing large,
distributed, human-in-the-loop airspace simulations may not be simple,
especially when sophisticated interactive visualization is included. This
framework utilizes the actor programming model and open-source components
(e.g., Akka and WorldWind). Finally, the authors highlight three main
contributions: actors provide the basis for extensibility and scalability;
functional programming (Scala) and actors are seamlessly combined; and the
minimal core of RACE, together with the maturity of its third-party system
basis, allows applications that go beyond simple simulations. However,
although this
framework allows adaptations and extensions, it does not consider the UAS
presence and its interaction with Air Traffic Controllers (ATCo). Finally,
this research is not focused on the final sector of Terminal Manoeuvring Areas
(TMA).
The AgentFly system is presented in [55] as a fast-time simulation tool. The
authors state that this tool is appropriate for National Airspace System
(NAS)-wide experiments, and the cognitive behavior of Air Traffic Controllers
(ATCo) is considered. The United States NAS, for instance, is very complex,
and a great deal of information needs to be shared from specific regions to
the whole NAS. Also,
increases in air traffic lead to impacts on the difficulty faced by ATCo in
avoiding unsafe and inefficient operations. An alternative to the real
operation is real-time Human-In-The-Loop (HITL) simulation, which provides
valuable feedback on the behavior of human operators but presents limited
flexibility and high cost. Thus, AgentFly is proposed as a fast-time
simulation tool that can be used to perform different experiments, varying
the number of aircraft, to facilitate the analysis of specific situations in
airspace (e.g., conflict avoidance). Several simulations were conducted, and
important metrics were measured. The presented results showed that this tool
is appropriate for large-scale experiments, providing detailed data for
further analysis. However, although this paper considers Remotely Piloted
Aircraft Systems (RPAS) integration as a possible implementation (including
many aspects of their operation, such as landing) and a human behavior model
that considers workload, the additional cognitive workload related to UAS is
not taken into account. Finally, the workload during arrival segment
execution is not computed using vectoring points, and different workload
costs for different aircraft (e.g., traditional aircraft and UAS) are not
considered.
The authors in [56] aim to make ATM research results more comparable by
sharing tools and data using a fully open-source and open-data approach to air
traffic simulation. The main challenges were achieving high fidelity (e.g.,
aircraft performance) and increasing the community’s adoption by keeping the
program as simple as possible. Given the adoption of this platform by many
users, it can be considered a useful tool for innovation and application
development (e.g., providing specific methods for different problems). The
paper describes the difficulties faced when using a fully open-data and open-
source policy in this area. However, this work does not consider the UAS
presence and its impacts on the ATCo workload.
Tra et al. [57] present conflict rate models to determine the intrinsic safety
of airspace designs, which consider conflicts between aircraft in different
flight phases. Fast-time simulations were performed for different layered
airspace concepts, considering unstructured airspaces. The results indicate
that the models can estimate the conflict rate for high traffic densities.
When comparing the different layered airspace concepts tested, both the model
predictions and the simulation results show a clear safety improvement when
the heading range is reduced. Thus, the models can be used to study the
effect of airspace design parameters on the safety of airspace concepts.
However, although this research considers structured and unstructured
airspaces, the presence of UAS is not considered.
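The models in [57] are more refined, but the classical "gas model" conveys the intuition behind such conflict-rate estimates. The sketch below uses invented numbers; the point is the roughly quadratic growth of conflicts with traffic density.

```python
# Classical 2-D "gas model" estimate: each pair of aircraft sweeps a band
# of width 2 * h_sep at the mean relative speed, so the conflict rate is
# roughly (N choose 2) * 2 * h_sep * v_rel / area.

def conflict_rate_per_hour(n_aircraft, area_nm2,
                           h_sep_nm=5.0, v_rel_kt=150.0):
    pairs = n_aircraft * (n_aircraft - 1) / 2.0
    return pairs * (2.0 * h_sep_nm * v_rel_kt) / area_nm2

for n in (10, 20, 40):  # doubling the traffic roughly quadruples conflicts
    print(n, round(conflict_rate_per_hour(n, area_nm2=10_000.0), 1))
```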
In [58], the authors introduce the Air Traffic Operations and Management
Simulator (ATOMS), an air traffic and airspace modeling and simulation system
for free-flight concepts. This tool simulates end-to-end airspace operations
and air navigation procedures for conventional air traffic. A multiagent-based
modeling paradigm for modular design and easy integration of various air
traffic subsystems is adopted. Also, advanced Air Traffic Management (ATM)
concepts that are envisioned in free flight are prototyped in this research,
including Airborne Separation Assurance (ASA), Cockpit Display of Traffic
Information (CDTI), and weather avoidance. The results showed that advanced
ATM concepts present an appropriate scenario for free flights. However, this
research does not consider the ATCo workload and the impact of the UAS
presence on it. Also, it does not consider a vectoring point-based workload
evaluation.
The authors in [59] focus on simulation-based Air Traffic Controller (ATCo)
training using the Beginning to End for Simulation and Training (BEST), which
is a simulation tool adopted in training organizations in Europe. Although the
BEST simulator covers all levels and types of training (e.g., basic,
validation, and conversion/refresher), this research is focused on the basic
part of the initial training. Furthermore, insights into the challenges the
candidates face when mastering the techniques of performance-based training
are presented. ATCos are responsible for guiding aircraft through the airspace, and throughout their education and later work, their extensive training (which includes practical exercises performed on computer devices) is divided into three phases: initial training (basic and rating training), unit training (transitional, pre-on-the-job, and on-the-job training), and continuation training (conversion and refresher training). Moreover, the BEST simulator meets all the objectives and requirements prescribed
for basic ATCo training. However, this research does not consider the aspects
related to UAS integration (e.g., increase in cognitive workload), and
different levels of aircraft in terms of ATCo familiarity are not considered.
Young et al. [60] propose an approach to describe the process of creating and using weather polygons for simulation and analysis activities at the Federal Aviation Administration (FAA)'s Concept Analysis Branch, including an example study focused on weather impacts on flight efficiency and tested in a fast-time simulation environment. Weather has substantial impacts on the National Airspace System (NAS), and most simulation and analysis tools are unable to represent weather activity effectively; as a result, the benefits of some operational improvements cannot be quantified, i.e., weather may negatively impact solutions that improve airspace efficiency under the assumption of good weather conditions. The FAA's Concept Analysis Branch (ANG-C41) developed a tool to create weather polygons, a concise model, built from high-fidelity weather data, that is easy to store and process and can be represented as restricted airspace moving across the NAS. This enables measuring the impact of operational improvements on weather-related flight delays. An analysis of the efficiency of current weather avoidance operations was thus conducted, using weather polygons in algorithms to calculate the distance of each flight from the weather at different severity levels and to identify flights that rerouted to avoid moderate to severe convective weather. Finally, this capability enables the FAA to represent the impact of convective weather on NextGen operational
improvements. However, although this research is broad in terms of weather
simulations, this does not include UAS operation in bad weather avoidance.
Furthermore, the ATCo workload related to this process is not considered.
McGovern et al. [61] propose and present an overview of a Monte Carlo-based stochastic airspace simulation tool and its formulation, along with the programming languages, environments, and other development tools employed. The main objective is to
provide a documented, lifecycle-managed, multi-processor capable, stochastic
simulation capability to enable the analysis of procedures and equipment for
aircraft flight into shared airspace. Thus, the selection, design, and
implementation of the mathematical models and verification and validation
processes are conducted. Since real experiments are expensive and unfeasible,
modeling and simulation are often used to study the physical world.
Furthermore, navigation aids, surveillance systems, pilots, aircraft, Air
Traffic Controllers (ATCos), and weather are desirable features for a useful
simulation tool, and in this research, the authors consider all of them and a
Graphical User Interface (GUI) integrating world-wide photo-realistic airport
depictions and real-time three-dimensional animation. Finally, this paper
focuses on the interaction of components in shared airspace, and the software
tool and its formulation are presented. However, this work does not consider
the UAS integration into airspace and its impacts on ATCo operation as well as
the ATCo workload.
The authors in [62] developed a simulation component, considering the UAS
Traffic Management (UTM) paradigm, which supports near- and long-term live
flight testing and exploration. The capabilities of the simulation tool are
depicted in this work. In this context, NASA has started to work
collaboratively in research with the Federal Aviation Administration (FAA) and
other stakeholders (government, industry, and academia) to explore the
concepts and requirements of safe and scalable operations of small Unmanned
Aircraft Systems (UAS) in low-altitude airspaces. Finally, a powerful research and development platform capable of addressing a multitude of questions is developed, and the UTM laboratory is ideally suited to advance the state of UTM research and knowledge. However, although this research
considers the UAS presence and its control, large aircraft (e.g., traditional
aircraft) are not considered. Also, the control of manned and unmanned
aircraft is performed by different agents, and the aircraft are not included
in shared high-altitude airspace.
Different utilization modes of Closely Spaced Parallel Runways (CSPRs), which
are employed in the construction of parallel runways, are analyzed in [63],
considering different thresholds. This analysis is conducted using the
simulation software SIMMOD, which was applied to build simulation models for
different utilization modes of runways with a staggered threshold. This
systematic analysis aimed to evaluate airport capacity and operational
efficiency quantitatively. The authors showed through experiments that,
considering the existing air traffic control operation rules, CSPRs usage
enables the airspace to support from 765 to 815 movements on each peak day.
Also, 55 movements are supported in each peak hour. If the runway threshold is
staggered in terms of the approach direction and a bypass taxiway is provided,
i.e., considering adaptations of the runway due to air traffic state (e.g.,
dynamic change to reduce arrival delay), the mode of landing on the inside runway and taking off from the outside runway emerges as the most efficient, increasing the operation efficiency by about 5%. However, this research is limited to exploring the capabilities of a widely used simulation tool to build simulation models. Also, the focus is on evaluating landings and take-offs, which differ from the final approach, although they substantially impact that preceding phase. Finally, the presence of autonomous control systems and their impacts on personnel performance are not considered.
In [64], a new 4D trajectory generation method is proposed. This method is
based on historical radar data processing, considering traffic flow theory to
generate the flight states and introducing the interacting multiple model smoother and spline interpolation to determine the intermediate flying status.
4D trajectory generation, one of the most fundamental functions in the
airspace simulation system, is currently based on the partitioning of the
flight, i.e., the entire flight is divided into several parts, enabling the
usage of models to generate the flying states. However, in the method proposed
in this research, the problems of generating the initial state of an aircraft
and depicting the 4D flying status are addressed. The results presented in
this work, obtained from the simulated trajectories and real trajectories by
the MATLAB software, showed that the method is valid and practical. However,
the authors do not consider the interaction of these aircraft with the ATCo.
Also, problems related to workload aspects, as well as bad weather conditions, are not addressed. Finally, although the authors consider real data, the standard
curve rate employed in aviation is not highlighted.
Bucceroni et al. [65] propose a system for integrating Unmanned Aerial System
(UAS) visual observers into the distributed simulation capability of the
Federal Aviation Administration (FAA). This distributed simulation, which
employs large-scale virtual reality systems, is used to demonstrate terrain
surrounding flight tests in virtual environments and generate the observer’s
views (ground- and air-based). Three situations are considered: stationary
ground-based monitoring, mobile air-based monitoring, and seaborne monitoring.
As large-scale distributed visualizations are routinely used by organizations
(industry and government), this approach is beneficial and has considerable
importance in the research associated with the FAA’s current work on adapting
itself to Next Generation Air Transportation System, considering that this new
system will include UAS in shared airspace. Thus, locally cached real-world terrain data were integrated into a graphical interface to give the operator the UAS position and information from multiple perspectives in a distributed simulation. However, the simulation tool proposed in this research
does not include interaction between the aircraft operator and the Air Traffic
Controller (ATCo). Finally, this is a work conducted aiming at the future
integration of UAS into shared airspace, but it does not consider the impacts
these autonomous systems have on personnel performance (e.g., workload).
Borener et al. [66] present Unmanned Aircraft Systems (UAS) modeling and
simulation, which considers a use case scenario that is consistent with the
FAA’s concept of operations for integration of UAS into shared airspace and
employs sensing (using actual radar track traffic data) and medium fixed-wing
UAS. The proposed simulations offer functionality related to UAS operations,
such as ‘detect and avoid’, mission profiles, positional variance, performance
variance, fuzzy conflicts, variation in time spent in communication, and
deviation from planned or intent profiles. Based on the RAMS Plus fast-time
simulator tool, the simulations aim to evaluate the separation indices and the
number and severity of the separation events. The experiments, conducted in a simulated Houston Metroplex environment, showed that multiple UAS would considerably increase the likelihood of separation events and separation criticality indices, and that the usage of the "return to departure land site" contingency operation in case of failures in the UAS communication link has a considerable impact on separation events. A difficulty faced by researchers,
though, is the lack of historical data on UAS operations. However, this
research does not include the interaction between UAS and ATCo; consequently,
the workload is not considered.
The authors in [67] describe the results of a Dynamic Density (DD) human-in-
the-loop simulation. The DD model adopted aims to measure the complexity of a
given airspace area, and measures presented at the US/Europe ATM 2003 Seminar
were used in this research. Thus, the simulation included Reduced Vertical Separation in the Cleveland Air Route Traffic Control Center's airspace, and
the considered traffic was actively controlled throughout the simulation. Due
to the difficulty related to real-world simulation, i.e., it may not be
feasible to conduct experiments with real aircraft, simulations were adopted
in this research. One should note that human-in-the-loop simulations may offer
more accurate data on, for instance, airspace capacity evaluation. The
simulated experiments employed six Certified Professional Controllers (CPCs)
and one Operations Supervisor from Cleveland and were conducted in the high-
fidelity Display System Replacement (DSR) Laboratory. The experiments showed
the DD metric performed better than the aircraft count, which is a usual
complexity measure. However, although this research has valuable results in simulation and airspace capacity and involves specialists in Air Traffic Control (ATC), UAS are not considered in the approach. Finally, cognitive factors (e.g., the ATCo's lack of familiarity with a specific aircraft) are not considered.
In [68], the LMI ACAS (Airspace Conflict Analysis Simulation) tool is presented. This 3-dimensional simulation modeling tool and its application meet the analytical requirements. The benefits of implementing a
multidimensional visualization are also presented. The conducted case study,
which employs the ACAS tool for a safety risk assessment based on conflict
probability, demonstrates the capacity of the framework to evaluate safety
risk. Also, a set of concerns that include traffic growth, Next Generation Air
Transportation System (NextGen) technologies, dynamic airspace sector
reconfiguration, and the integration of UAS into shared airspace are
considered. As NextGen is under development, modeling complex, diverse future scenarios is important so that better solutions can be provided in terms of specific metrics (e.g., safety). Some causes and consequences related to risk
analysis are pointed out. For instance, an increase in passenger demand leads
to a higher traffic density. Considering that ACAS is not meant to be a NAS-
wide simulation of all aspects of flight, this research proposes an agile tool
for exploring NextGen aviation concepts and technologies from the safety
perspective. However, this research does not include the cognitive workload
associated with special aircraft operations (e.g., UAS).
In [69], numerical simulations are used in order to demonstrate the
effectiveness of the proposed conflict management approach, which ensures
conflict avoidance among aircraft and the transition of aircraft into adjacent
airspace. To avoid conflicts, complexity is modeled as aircraft heading and
speed deviations in a given sector. Considering more than one sector, a
specific architecture is proposed for planning to minimize complexity for the
neighboring sectors. More specifically, the conflict avoidance problem can be
seen as a mixed integer Linear Programming (LP) problem subject to maneuver constraints. Thus, the aircraft can find the optimal solution by solving the
LP problem, resolving conflicts among the aircraft, and reducing the air
traffic complexity of the neighboring sectors. Moreover, the proposed conflict
management algorithm can identify aircraft’s optimal conflict resolution
maneuver in near real-time considering multi-sector environments. The authors
intend to investigate the relationship between maneuver constraints and
traffic complexity in future works. However, although this research is
interesting from the conflict avoidance perspective, it does not deal with
fully autonomous or remotely piloted aircraft. Finally, ATCo operation and the impacts of the proposed approach on the ATCo's workload are not considered.
In this section, the works related to airspace simulation were presented. Each
work covers different aspects, but to identify the similarities and
differences, Table 2 presents all works, which are classified as follows:
* •
Unmanned Aircraft System (UAS): Indicates if UAS are considered in a given
research. One should note that Remotely Piloted Aircraft Systems (RPAS), as a
sort of UAS, are also considered.
* •
Cognitive Impact of Different Aircraft (CIDA): Indicates if the impacts of special aircraft (e.g., UAS) operation on personnel performance are considered.
* •
Bad Weather Conditions (BWC): Indicates if the proposed simulation tool deals
with the challenges imposed by bad weather conditions.
* •
Conflicts Avoidance (CA): Indicates if conflict avoidance is prioritized in
the given simulation tool.
* •
Air Traffic Controller (ATCo): Indicates if ATCo operation is one simulation
focus.
* •
Vectoring (Vc): Indicates if aircraft are controlled by Vectoring Points (VP).
* •
Workload (Wl): Indicates if the workload of ATCo is evaluated in the
simulation tool.
This table shows that most related works deal with collision avoidance problems in simulation, and many consider the ATCo operation. Few works include weather conditions, vectoring, and workload evaluation in their experiments, and UAS appear in only a few works. Finally, none of the listed related works treats the cognitive impact on, for example, workload due to the ATCo's lack of familiarity with a new aircraft (e.g., UAS). Also, none of the listed works addresses all the criteria presented in the Table.
Table 2: Review of UAS simulation in the National Airspace System (NAS).

Related Work | UAS | CIDA | BWC | CA | ATCo | Vc | Wl
---|---|---|---|---|---|---|---
[50] | | X | X | | X | X | X
[51] | X | X | X | | | X | X
[52] | X | X | X | | | X |
[53] | X | X | X | | | |
[54] | X | X | | | | X | X
[55] | X | X | X | X | | X |
[56] | X | X | | | | | X
[57] | X | X | X | | X | | X
[58] | X | X | | | X | X | X
[59] | X | X | X | | | | X
[60] | X | X | | | X | X | X
[61] | X | X | | X | | X | X
[62] | | X | | | | | X
[63] | X | X | X | | | X | X
[64] | | X | X | | X | X | X
[65] | | X | X | X | X | X | X
[66] | | X | | | X | X | X
[67] | X | X | | | | |
[68] | | X | | | | X |
[69] | X | X | X | | | | X
## 4 Arrival segment optimization considering UAS
This section presents a literature review of optimization approaches for arrival segment design considering the UAS presence. The proposals are analyzed from different perspectives. To select and classify the related works presented in this section, the following aspects of each approach are taken into account: National Airspace System (NAS), Final
Arrival Segment Design (FASD), Complex Situations (CS), Bad Weather Conditions
(BWC), Minimum Separation (MS), UAS presence (UAS), and Time as a Constraint
(TC).
Alonso-Ayuso et al. [70] present an approach that employs a mixed integer
linear approximation to a Mixed Integer Nonlinear Optimization (MINO) model
for the conflict resolution problem in air traffic management, i.e., for
providing aircraft configurations to avoid conflicts, i.e., the loss of the minimum separation between two given aircraft. The problem is solved by
considering an initial position of a set of aircraft and applying changes to
their position, velocity, and heading angles. Thus, a multi-criteria scheme
and a Sequential Mixed Integer Linear Optimization (SMILO) approach are also
presented, aimed at achieving solutions in a short computing time. Furthermore, a comparison between the results obtained by using the
state-of-the-art MINO solvers and SMILO performance in a broad testbed is also
considered, which showed that both presented similar solutions, but the
proposed approach requires a very short computing time. Finally, the authors
highlight that for large-size instances (e.g., above five aircraft), the
computing time is higher than the one required by real-life operational applications, and they point to other meta-heuristics that could further reduce the computing time without deteriorating the SMILO solution as a future research line. However,
this research does not consider the operation of UAS in the NAS.
The authors in [71] present a cooperative multi-aircraft Conflict Resolution
(CR) method based on co-evolution. The candidate paths are encoded in sub-populations of a Particle Swarm Optimization (PSO) implementation, in which fitness is evaluated through cooperation among individuals from different sub-populations; PSO is adopted for its advantages, such as fewer parameters, lower computational cost, and faster convergence. One should note that each particle is seen as a point in a D-dimensional space. Further, an encoding method with an adaptive
searching mechanism is introduced to improve the searching efficiency.
Compared with Genetic Algorithms (GA) currently being used for conflict resolution path optimization, this approach achieved higher system efficiency, a measure of how close a given path is to the shortest possible path. Considering 2, 4, and 6 aircraft, the proposed approach outperformed the GA approach. However, although this research employs PSO successfully, it does not consider conducting the fitness evaluation and particle update processes in parallel for each particle, which can improve the performance considerably. Also, bad weather conditions are not taken into account. Finally, the UAS presence and arrival segment definition are not considered.
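To make the PSO machinery concrete, the following minimal sketch illustrates the standard velocity and position update on which such implementations are built. It is a plain global-best PSO with a hypothetical fitness function, not the co-evolutionary, sub-population variant of [71]:

```python
import numpy as np

def fitness(x):
    # Hypothetical objective: quality of a candidate path encoding `x`;
    # lower is better (stands in for the path-efficiency evaluation).
    return np.sum(x ** 2)

def pso(dim=6, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # positions: candidate paths
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard PSO update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(fitness, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso()
```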
Ahmed et al. [24] present an evolutionary method for optimizing the aircraft
path planning algorithm in Terminal Maneuvering Area (TMA). This method, which
provides near-optimal aircraft arrival sequences, aims to deliver the aircraft
to the Final Approach Fix (FAF). The paths are built to conduct the aircraft from the Initial Approach Fix (IAF) to the FAF considering intermediate waypoints called Intermediate Fixes (IFs). The classic Genetic Algorithm
(GA)-based optimization technique with conflict detection and resolution used
in this effort minimizes the inter-arrival time. Furthermore, conflict-free
path planning for an Air Traffic Controller (ATC) is also obtained. One should
note that conflict between any two aircraft is detected based on their future
arrival time at the waypoint. The results show that the proposed approach
provides a near-optimal solution compared to the traditional GA-based
algorithm, which does not consider airspace constraints (e.g., speed).
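The conflict-detection criterion described above can be illustrated with a short sketch: two aircraft are flagged as in conflict when their predicted arrival times at a shared waypoint are closer than a required separation. The waypoint names, times, and separation value below are hypothetical:

```python
MIN_SEPARATION_S = 90  # assumed time-based separation at a fix, in seconds

def detect_conflicts(arrivals):
    """arrivals: dict mapping waypoint -> list of (aircraft_id, eta_seconds)."""
    conflicts = []
    for waypoint, etas in arrivals.items():
        ordered = sorted(etas, key=lambda p: p[1])
        for (a, ta), (b, tb) in zip(ordered, ordered[1:]):
            if tb - ta < MIN_SEPARATION_S:
                conflicts.append((waypoint, a, b, tb - ta))
    return conflicts

plan = {
    "IF1": [("AC101", 600), ("AC202", 650)],   # 50 s apart -> conflict
    "FAF": [("AC101", 900), ("AC202", 1100)],  # 200 s apart -> clear
}
print(detect_conflicts(plan))
```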
In [72], the authors proposed a mixed integer linear programming formulation
to optimize, in real-time, the take-off and landing operations at a busy
Terminal Maneuvering Area (TMA) in case of traffic congestion by investigating
the trade-off aspects between performance indicators of practical interest.
This method also considers safety constraints with high precision. As TMAs are
becoming problematic, especially in the major European airports, since there
is a limited possibility of building new infrastructures, alternative
solutions (e.g., optimization models) are desired. The real-time problem of
effectively managing aircraft operations is challenging, especially due to the
inclusion of safety regulations into the optimization model and several
performance indicators. This inclusion leads to achieving feasible and
reasonable solutions in terms of safety and efficiency, even considering that
there is no well-recognized objective function and traffic controllers often
use simple scheduling rules. The experiments were performed considering
simulated scenarios in the two major Italian airports, Milano Malpensa and
Roma Fiumicino. In this context, random landing and take-off aircraft
disturbances are built. In the optimization process, practical-size instances
are solved to (near) optimality by employing a commercial solver. Finally, a
computational analysis enables the selection of solutions that present considerable quality in balancing the trade-offs among the various indicators. However,
this research focuses on scheduling and does not consider the presence of UAS
and the impact of the inclusion of this new technology into the ATC.
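The following toy model sketches the general shape of such a formulation, not the exact model of [72]: landing times within time windows, pairwise separation enforced through big-M ordering binaries, and total delay as the objective. The data, separation value, and solver choice (PuLP with CBC) are illustrative assumptions:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

flights = ["F1", "F2", "F3"]
earliest = {"F1": 0, "F2": 30, "F3": 60}       # seconds (illustrative)
latest   = {"F1": 600, "F2": 600, "F3": 600}
SEP, M = 90, 10_000                            # assumed separation, big-M

prob = LpProblem("tma_landing", LpMinimize)
t = {f: LpVariable(f"t_{f}", earliest[f], latest[f]) for f in flights}
# y[i, j] = 1 if flight i lands before flight j.
y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in flights for j in flights if i < j}

for (i, j), var in y.items():
    prob += t[j] >= t[i] + SEP - M * (1 - var)   # i lands before j
    prob += t[i] >= t[j] + SEP - M * var         # j lands before i
prob += lpSum(t[f] - earliest[f] for f in flights)  # minimize total delay
prob.solve(PULP_CBC_CMD(msg=False))
print({f: t[f].value() for f in flights})
```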
Samà et al. [73] deal with the TMA aircraft scheduling problem, which requires conflict-free schedules for all aircraft while minimizing the overall aircraft delays. Furthermore, this research also deals with the
aircraft landing trajectory optimization problem, which requires a landing
trajectory that minimizes the travel time or the fuel consumption for each
aircraft. In this context, a framework for the lexicographic optimization of
both problems is proposed, which solves the two problems sequentially based on
a defined lexicographic order of importance for the performance indicators,
i.e., the most important performance indicator defines the first problem to be
optimized. Note that the second problem is solved considering some outputs of
the solution of the first problem. The experiments, performed on simulated
Milano Malpensa airport instances and considering different optimization
lexicographic orders and performance indicators, show the existence of
performance gaps between the optimized indicators of the two problems,
highlighting the multi-objective nature of the problem when different
lexicographic optimization approaches are considered. However, this research
does not consider some aspects, such as bad weather conditions.
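The lexicographic scheme can be summarized in a few lines of Python, where `solve_scheduling` and `solve_trajectories` are hypothetical placeholders for the two optimization problems solved in [73]; only the sequencing pattern is intended to be faithful:

```python
def solve_scheduling(instance):
    # Placeholder: would return a conflict-free schedule minimizing delay.
    schedule = {f: t for f, t in instance}   # trivial stand-in: target times
    total_delay = 0.0
    return schedule, total_delay

def solve_trajectories(instance, fixed_schedule, max_total_delay):
    # Placeholder: would minimize travel time / fuel burn while keeping the
    # stage-1 indicator within `max_total_delay` of its optimum.
    return {f: f"trajectory({t}s)" for f, t in fixed_schedule.items()}

def lexicographic_optimize(instance, tolerance=0.0):
    schedule, delay_opt = solve_scheduling(instance)        # stage 1
    trajectories = solve_trajectories(                      # stage 2
        instance, schedule, max_total_delay=delay_opt + tolerance)
    return schedule, trajectories

print(lexicographic_optimize([("F1", 300), ("F2", 420)]))
```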
A number of algorithmic improvements implemented in the AGLIBRARY solver, a state-of-the-art optimization solver for complex routing and scheduling problems, to improve the possibility of finding good-quality solutions quickly, are presented in [74]. Intelligent decision support systems for the
real-time management of landing and take-off operations, which can be
effective in helping ATCos at busy Terminal Maneuvering Areas (TMAs), aim to optimize aircraft sequencing. This problem, which can be formulated as a mixed
integer linear program, is strongly NP-hard, and heuristic algorithms are
typically adopted in practice to compute good quality solutions in a short
computation time. In this context, the framework proposed in this paper starts
from a feasible initial solution for the aircraft scheduling problem with
fixed routes, computed via a truncated branch-and-bound algorithm, and,
further, metaheuristics (e.g., variable neighborhood search, tabu search, and
hybrid schemes) are applied to improve the solution by re-routing some
aircraft in the TMA. Finally, the results showed that the metaheuristics
quickly achieve solutions of remarkable quality compared with a commercial
solver. However, parallel implementations of the metaheuristics, which may
reduce the execution time considerably, are not considered. Finally, UAS
presence is not taken into account.
The authors in [75] apply heuristic and genetic algorithm approaches to the path planning problem for UAVs. These approaches consider emergency landing procedures and aim to mitigate the probability of reaching unsafe situations. Path re-planning, which is triggered by several factors such as equipment failures and leads missions to be aborted by executing a planned emergency landing, is formalized through a mathematical formulation. In this context, path planning
approaches that employ greedy heuristic, which aims to find feasible paths
quickly, genetic algorithm, and multi-population genetic algorithm, which
tends to return better quality solutions, are introduced and evaluated
considering a large set (600 maps) of scenarios. The experiments conducted
using the FlightGear simulator showed that all methods could land the aircraft
appropriately in about 67% of the scenarios considered. The type of landing executed by the UAV was evaluated under two situations. First, the UAV landing is evaluated taking into account the chance to save the UAV without putting people, property, or the aircraft itself at risk. Next, the UAV landing is evaluated with emphasis on saving people and property without caring about
UAV damages. Finally, statistical analysis reveals that combining the greedy
heuristic with the genetic algorithms is suitable for this problem. Although
this paper deals with path planning, it is not focused on the National
Airspace System (NAS). Furthermore, the presence of the ATC is not taken into
account.
A framework and a formulation for solving path planning problems for multiple
heterogeneous UAVs with uncertain service times for each vehicle–target pair
is presented by Sundar et al. [76]. The vehicles, which differ in their motion
constraints and are located at distinct positions at the beginning of the
mission, consider a penalty related to the duration of their total service
time. The main goal is to find a tour for each vehicle that starts and ends at its respective position, such that every target is visited and serviced by some vehicle and the sum of the total travel distance and the penalties applied to all vehicles is minimized. Furthermore, the authors
present the theoretical properties and advantages of using a two-stage
stochastic problem formulation to solve this problem instead of using a
deterministic expected value formulation. Finally, extensive numerical
simulations that compared these two formulations also corroborate the
effectiveness of the proposed approach. However, although this research can be
adapted to be applied to problems related to the NAS (e.g., aircraft
rerouting), this is aimed to be applied to segregated airspace missions.
Furthermore, aspects such as bad weather conditions are not taken into account.
Finally, arrival segment constraints are not taken into account.
In [77], a fast algorithm that finds collision-free 3D paths for small UAS in
urban environments is introduced and combined with an algorithm that computes
approximate Euclidean shortest paths. This algorithm reduces the number of obstacles present in the pathfinding process, considering that the studied environments are expressed as three-dimensional scenarios and the objects as vertical polyhedra. The reader should note that this approach aims to reduce the computation time in practical situations, while the proposed algorithm is inefficient in complex ones. Thus, there are situations where the algorithm does not perform well, for instance, scenarios that include tall objects. Experimental cases showed that this approach is
competitive in terms of speed and solution quality compared to solutions
present in the literature for more realistic scenarios. Furthermore, the
authors intend to extend this method for more complex scenarios in future
works. However, this research does not consider the application in the
National Airspace System (NAS). Also, evolutionary approaches (e.g., Particle
Swarm Optimization) are not explored as an alternative to solving complex
situations. Finally, some important factors impacting aviation for segregated
airspace and NAS, such as bad weather conditions, are not considered.
In [78], an approach that employs a single UAV for providing wireless coverage
for indoor users inside a high-rise building under disaster situations (e.g.,
earthquakes), considering the failures in cellular networks, is proposed. To
accomplish this, the authors assume that the locations of indoor users are
uniformly distributed on each floor. Furthermore, a Particle Swarm
Optimization (PSO) algorithm is used to find an efficient 3D placement of a
UAV that minimizes the total transmit power required for the coverage. The
experiments, which considered a population size of 50, a maximum of 50 iterations, and 20 users on each floor, showed that the proposed approach minimizes the
total transmit power required to cover the indoor users considering a uniform
distribution of users per floor. Note that the authors state that the PSO is
chosen due to the problem’s intractability, given its characteristics. In
conclusion, this research adopts a traditional implementation of the PSO
algorithm and adapts it to the problem. However, changes that may improve the
time spent finding an appropriate solution in terms of feasibility and
fitness, e.g., polarization, are not considered. On the other hand, this effort focuses on small UAS, i.e., the NAS is not considered. Finally, complex
situations and bad weather conditions are not included in this approach.
Jiang et al. [79] establish the model of task assignment for UAV in logistics
regarding the Vehicle Routing Problem with Time Windows (VRPTW). In the past few years, there has been a growth in research achievements in logistics and UAVs separately, whereas research on the combination of these areas has remained comparatively stagnant. Effective logistics systems and task assignments reduce
operating costs and improve transport efficiency. In this context, the model
proposed in this research considers different constraints, such as weight
coefficients, time-windows constraints, and the constraints of the UAV.
Furthermore, the Particle Swarm Optimization (PSO) algorithm is used for
solving the task assignment problem due to its suitability for dealing with
complex combinatorial optimization problems. Note that the PSO implementation
presents some modifications since the original PSO algorithm is only suitable
for the continuous space optimization problem. In this paper, the task
assignment for UAV is an integer linear programming problem. The conducted
experiments showed that the PSO is efficient in solving the problem of task
assignment for UAV, and, in comparison with a traditional Genetic Algorithm
(GA), this approach presented a higher success rate and a lower average
running time.
A new optimization problem for solving conflicts is presented by Hong et al.
[80]. This method allows aircraft to change their heading angle and speed to
optimize their trajectory. The performance index is expressed in terms of the
variation of the aircraft arrival time caused by conflict resolution
maneuvers, i.e., higher performance indices are computed in situations where
this time variation is low. In order to accomplish conflict resolution and
proper flow management, metering constraints (e.g., aircraft arrival time) are
introduced together with separation constraints. In this context, the optimal
solution is obtained by utilizing Particle Swarm Optimization (PSO), and numerical and Monte Carlo simulations are conducted to evaluate the
performance of the proposed algorithm. Due to the considerable ease of PSO in
solving complex nonlinear problems, several performance indices and
constraints are considered without the limitations of linear approximation or
a complex procedure, which may involve a certain level of imprecision. The
simulation results showed a significant reduction in the variation of the
aircraft arrival time and the magnitude of the maneuvers, i.e., heading angle
and speed changes.
Marinakis et al. [81] deal with the Constrained Shortest Path problem, which
is a well-known NP-hard problem, by proposing a new hybridized version of
Particle Swarm Optimization (PSO) algorithm, which is a population-based swarm
intelligence method, with Variable Neighborhood Search (VNS), which is an
algorithm applied to optimize the particles' positions. Although the proposed algorithm considers a different equation for the velocity update of particles and employs a new neighborhood topology, an issue of applying the VNS is the identification of a suitable local search method for a given problem. In this sense, a number of continuous local search algorithms are used and tested on a number of modified benchmark instances, together with further comparisons against classic versions of PSO. Finally, the experiments
showed that the proposed algorithm has satisfactory efficiency and results. In
future directions, the authors highlight the application of this methodology
to more difficult problems.
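The following skeleton sketches a reduced form of the VNS idea used in such hybrids: shake the current solution in increasingly large neighborhoods and keep improvements, restarting from the smallest neighborhood on success. The fitness function and neighborhood moves are hypothetical placeholders, not those of [81]:

```python
import random

def vns(solution, fitness, neighborhoods, max_rounds=50):
    best, best_val = solution, fitness(solution)
    for _ in range(max_rounds):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best)   # "shake" in neighborhood N_k
            cand_val = fitness(candidate)
            if cand_val < best_val:              # improvement: restart at N_0
                best, best_val, k = candidate, cand_val, 0
            else:                                # try a larger neighborhood
                k += 1
    return best, best_val

# Illustrative neighborhood moves on a real vector: random perturbations
# of increasing magnitude.
perturb = [lambda x, s=s: [xi + random.uniform(-s, s) for xi in x]
           for s in (0.1, 0.5, 1.0)]
best, val = vns([2.0, -1.5], lambda x: sum(v * v for v in x), perturb)
```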
In [82], the authors present an optimization algorithm for solving the problem
of arrival aircraft trajectory, which aims to find the best solutions for
vertical flight profile considering the Required Time of Arrival (RTA) and
constraints of Terminal Maneuvering Area (TMA) and aircraft performance.
Firstly, the Base of Aircraft Data (BADA), a database of aircraft performance aspects widely used in simulation tools, is used to identify the aircraft's aerodynamic and fuel consumption characteristics. Then, a method for optimizing the trajectories is proposed based on an Improved Particle Swarm Optimization (IPSO), in which the particles' inertia decreases from 1 to about 0.5 as they get closer to near-optimal solutions, combined with Successive
Quadratic Programming (SQP). During the optimization process, the IPSO is
employed in finding a near-optimal solution, and then, the SQP is used to
enable a quicker finding of an accurate solution. Furthermore, this approach
is compared to standard PSO, which shows that its performance is effective for
trajectory optimization problems. However, this proposal is not focused on
aspects of UAS integration during the optimization process. Finally, although our proposal employs fixed weights, the parallel architecture it considers tends to improve the algorithm's efficiency.
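A minimal sketch of such an inertia schedule is shown below. The endpoints (1 down to about 0.5) follow the text; the linear, iteration-based decay is an assumption, since [82] ties the decrease to proximity to near-optimal solutions rather than to the iteration count:

```python
def inertia(iteration, max_iterations, w_start=1.0, w_end=0.5):
    # Linearly interpolate the inertia weight from w_start to w_end,
    # favoring exploration early and exploitation near convergence.
    frac = iteration / max(max_iterations - 1, 1)
    return w_start - (w_start - w_end) * frac

# The returned value would replace the constant inertia `w` in a
# standard PSO velocity update.
print(inertia(0, 100), inertia(99, 100))   # 1.0 ... 0.5
```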
Girish [83] proposes a Hybrid Particle Swarm Optimization-local search (HPSO-
LS) algorithm, in a rolling horizon framework, for dealing with the Aircraft
Landing Problem (ALP), which consists of the allocation of arriving aircraft
to runways and the assignment of a landing time to each aircraft. The main
goal of this research is to minimize the penalty costs due to delays in
landing times. Note that the rolling horizon framework is used as an online
optimization strategy considering a fixed time horizon. The presented results showed that the proposed algorithm effectively solves the problem; in comparisons with existing approaches from the literature (e.g., PSO variants) in scenarios involving up to 500 aircraft and five runways, the RH-HPSO-LS proved to be the more appropriate technique for this problem. In future works, the author
intends to improve approaches to reduce the computational time requirements
for enabling real-world applications, i.e., real-time applications. However,
this effort does not cover the final arrival segments design from the final
sector. Also, although the techniques employed are interesting for finding
good solutions, the optimization is not built considering aspects of UAS
integration in the NAS.
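The rolling horizon idea can be sketched as follows: only the aircraft whose target times fall inside a fixed window are optimized, the decisions are committed, and the window is advanced. Here `optimize_window` is a hypothetical stand-in for the HPSO-LS step, and the window lengths are illustrative:

```python
def optimize_window(window):
    # Placeholder for the HPSO-LS optimization step (assumed): simply keep
    # the target order and return (flight, assigned_time) pairs.
    return [(f, t) for f, t in sorted(window, key=lambda a: a[1])]

def rolling_horizon(aircraft, horizon=1800, step=600):
    """aircraft: list of (flight_id, target_time_s)."""
    schedule, done, t0 = [], set(), 0
    end = max(t for _, t in aircraft)
    while t0 <= end:
        # Optimize only the not-yet-committed flights inside the window.
        window = [(f, t) for f, t in aircraft
                  if f not in done and t < t0 + horizon]
        if window:
            schedule.extend(optimize_window(window))
            done.update(f for f, _ in window)
        t0 += step                      # advance the horizon
    return schedule

print(rolling_horizon([("F1", 100), ("F2", 700), ("F3", 2500)]))
```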
Ribeiro et al. [84] propose a framework that integrates performance preferences of landing aircraft in Continuous Descent Arrival (CDA) operations and deals with building and managing flight trajectories, which are optimized to reduce fuel burn and emissions during the descent/approach phase. This approach is especially interesting since the maximization of airspace efficiency and capacity, which needs to be addressed considering local airspace requirements and constraints, is related to the optimization of air traffic
trajectories. The authors highlight that the Air Traffic Control (ATC) agent,
responsible for conducting the air traffic to specific trajectories, employs a
Particle Swarm Optimization (PSO) algorithm to build feasible and safe
solutions for arrival sequencing. The results showed that, considering data from Brasilia International Airport (SBBR), the proposed approach enabled 77% of the air traffic to fly within their desired time windows. Finally, as
future intentions, the authors aim to deal with en-route trajectory conflicts
and capacity constraints. However, this research does not consider some
aspects, such as bad weather conditions.
The authors in [85] focus on the aircraft landing optimization problem
considering both the landing routes and the landing order of aircraft. The
main goal is to minimize the occupancy time of the airport, which leads to an
increase in airspace efficiency. This approach considers dynamic weather conditions and changes in other aircraft's landing routes. To deal with this
problem, the hierarchical evolutionary computation is proposed, which
generates candidates for the main landing route of all aircraft. Furthermore,
a good combination of landing routes for all aircraft is considered to
minimize the occupancy time of the airport. The experiments showed that the
proposed strategy generates robust and orderly landing routes. However, the results have only been obtained from one simple grid map, which simulates the flying area of the aircraft; further careful qualifications and justifications (e.g., other maps or a different number of aircraft) represent the future intentions of the authors. Furthermore, our proposal considers complex situations in which feasible solutions in terms of efficiency and, especially, in terms of safety are required. Finally, our proposal considers UAS integration, as UAS may be important airspace users in the coming years.
Narayan et al. [86] propose a novel approach for optimizing 3D flight
trajectories considering real-time planning deadlines for small UAS operating
in challenging environments, i.e., environments with obstacles. In this
approach, which generates feasible solutions, sets of candidate smooth flight
trajectories are generated, and, considering that in typical small UAS
operations, multiple objectives may exist, a multi-objective optimization is
employed since it may allow the discovery of solutions that better reflects
overall mission requirements. Note that, in this context, real-time planning constraints may be imposed during the optimization process to avoid obstacles in the immediate path. This approach considers a novel Computationally Adaptive Trajectory Decision (CATD) optimization system to manage, calculate, and schedule parameters associated with trajectory building, ensuring that feasible solutions are delivered with the processing duration treated as a constraint.
In conclusion, the authors point out that this approach may potentially be a
more efficient use of the computational time available. However, this research
is intended to be applied to segregated airspaces. Furthermore, weather
conditions are not taken into account.
The authors in [87] propose an online method based on the Estimation of Distribution Algorithm (EDA), which has become a prominent topic in the field of evolutionary computing, for the real-time Aircraft Arrival Sequencing and Scheduling optimization problem. This problem, a central topic in Air Traffic Control (ATC) contributions, has been proven to be NP-hard. Although many efforts have been made by modeling this problem in a static setting, the air traffic environment in the airport is dynamic and constantly
changing. Since new aircraft are arriving at the airport continually, the
corresponding adjustments should be considered for the scheduling definition.
In this context, the method focuses on aircraft that have already arrived at
the TMA but have not been assigned to land. The experiments highlighted that
the method effectively achieves appropriate solutions for the Aircraft Arrival
Sequencing and Scheduling optimization problem. However, this contribution
does not include the operation of UAS in the NAS and all challenges it brings
to the sequencing problem. Furthermore, bad weather conditions are not taken
into account. Finally, the fitness evaluation does not consider the impacts of
a given sequencing solution on the ATC.
Bennell et al. [88] deal with scheduling aircraft landings on a single runway.
The time window constraints on each aircraft's landing and the minimum separation between consecutive aircraft and, consequently, consecutive landings are two important constraints for the sequencing problem. Note that the
separation between aircraft depends on specific factors, such as the weight
classes. Thus, a multi-objective formulation that considers both the runway
metrics (throughput, earliness, and lateness) and the fuel cost related to
aircraft maneuvers and additional flight time is employed to achieve the
landing schedule. This proposal also considers the static/off-line problem, in
which details of the arriving flights are provided in advance, and the
dynamic/online problem, in which the flight arrival information becomes
available over time. The experiments showed that efficient runway throughput
results were achieved for both static and dynamic problems, considering the
employment of different meta-heuristics.
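The weight-class-dependent separation at the core of this problem can be illustrated with a simple first-come-first-served scheduler. The separation matrix values and the FCFS ordering below are assumptions for illustration, not the optimized schedules of [88]:

```python
SEP = {  # seconds, SEP[leader_class][trailer_class] (illustrative values)
    "Heavy":  {"Heavy": 96, "Medium": 157, "Light": 196},
    "Medium": {"Heavy": 60, "Medium": 69,  "Light": 131},
    "Light":  {"Heavy": 60, "Medium": 69,  "Light": 82},
}

def schedule_fcfs(arrivals):
    """arrivals: list of (flight_id, weight_class, earliest_time_s)."""
    plan, prev = [], None
    for flight, wclass, earliest in sorted(arrivals, key=lambda a: a[2]):
        t = earliest
        if prev is not None:
            # Enforce the class-dependent separation behind the leader.
            t = max(t, prev[1] + SEP[prev[0]][wclass])
        plan.append((flight, t))
        prev = (wclass, t)
    return plan

print(schedule_fcfs([("F1", "Heavy", 0), ("F2", "Light", 30),
                     ("F3", "Medium", 60)]))
```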
In this section, the works related to air traffic sequencing optimization were
presented. Different aspects are covered by each work, but in order to
identify the similarities and differences, Table 3 presents all works, which
are classified as follows:
* •
National Airspace System (NAS): Indicates if the optimization method is
intended to be applied in situations of NAS;
* •
Final Arrival Segment Definition (FASD): Indicates if the proposed method is
focused on the final arrival segments design;
* •
Complex Situations (CS): Indicates if the optimization method is developed
considering the sequencing of many aircraft, which constitute a complex
situation;
* •
Bad Weather Conditions (BWC): Indicates if the proposed solution takes bad
weather conditions into account;
* •
Minimum Separation (MS): Indicates if the proposed method applies minimum
separations for each aircraft in order to maintain the safety levels;
* •
UAS Presence (UAS): Indicates if the proposed solution considers the presence
of the UAS and its impacts on sequencing;
* •
Time as a Constraint (TC): Indicates if processing duration is analyzed in the
approach, i.e., if the problem faced is a real-time problem.
Table 3: Review of UAS traffic sequencing optimization in the National Airspace System (NAS).

Related Work | NAS | FASD | CS | BWC | MS | UAS | TC
---|---|---|---|---|---|---|---
[70] | | X | | X | | X |
[71] | | X | | X | | X | X
[24] | | | | X | | X | X
[72] | | X | | X | | X |
[73] | | | | X | | X | X
[74] | | | | X | | X |
[75] | X | X | X | | X | |
[76] | X | X | X | X | | |
[77] | X | X | X | X | X | |
[78] | X | X | X | X | X | | X
[79] | X | X | | X | X | |
[80] | | | | X | | X | X
[81] | X | X | X | X | X | X |
[82] | | | | X | | X |
[83] | | X | | X | | X | X
[84] | | | | X | | X | X
[85] | | | X | X | | X | X
[86] | X | X | X | X | X | |
[87] | | | X | X | | X |
[88] | | | | X | | X |
This table shows that many related works consider the National Airspace System (NAS), minimum separation, and complex situations. However, only one of them considers bad weather conditions. Note that none of the works that consider the UAS presence integrates them into the NAS, and, consequently, none of the works that deal with the NAS includes UAS.
## 5 Open challenges
Although this work aims to address specific topics regarding the UAS
operation, there are many possibilities for the extension of this effort.
Figure 2 depicts several open challenges in UAS integration, simulation,
optimization, and their intersections. In each category, several research directions are identified and described in detail in this section.
Figure 2: Open challenges in UAS Integration, Simulation, Optimization, and
their intersections.
### 5.1 UAS Integration
* •
Measuring the familiarity evolution of different aircraft types throughout the
years: An open challenge in the UAS context refers to the measurement of the
familiarity evolution of different aircraft types (e.g., UAS) as it is
dependent on several human factors and social acceptance. Although some
initiatives have started to investigate this aspect [89] [90], several other
directions require further investigation;
* •
Priority Establishment of UAS sequencing in the National Airspace System
(NAS): Rather than controlling aircraft based on the level of familiarity, an
alternative approach is to consider the priority levels established and
assessed by a standardized scale. One example of prioritized aircraft nowadays is emergency aircraft [91] [92] [93]. Furthermore, a challenge
is to identify the UAS priority following the priority list assigned to
different aircraft nowadays;
* •
Automation of some Air Traffic Control (ATC) tasks: A complex open challenge is the automation of some ATC tasks. For example, approaches such as the ATC Maturity Level (AML), which represents the level of maturity and autonomy of a system in terms of controlling manned and unmanned aircraft, can be further developed (e.g., different approaches for modeling the relationship between the autonomous ATC and UAS and between the autonomous ATC and MA can be developed) [94] [95] [96];
* •
Cognitive impact assessment of UAS integration when emergencies are declared:
Emergencies in the airspace are critical events that need to be carefully managed [97]. As a result, solutions to deal with these events considering the UAS presence are a vital open challenge [98] [99] [100].
### 5.2 UAS Simulation
* •
Applying variations in airspace constraints and parameters: Another future
direction is the development of flexible configurations, e.g., variable CB
sizes and shapes, CB movements, and changes in the minimum separation of the
aircraft depending on their types and on the characteristics of the airspace
(e.g., complexity) [101] [102];
* •
Evaluation of cognitive impacts of different aircraft types: In the roadmap to
define priorities, the evaluation of cognitive impacts on ATCos is a pivotal
aspect to consider. In fact, this is an evolving area that can significantly
change in the next decades [103] [104]. Thereupon, the design of standard procedures relies on human-centered investigations [105] [106];
* •
Evaluation of UAS integration in the Advanced Aerial Mobility (AAM): In the
next decades, a new layer of the transportation system is planned to be
deployed and widely used. Advanced Aerial Mobility (AAM) relies on Electric
Vertical Takeoff, and Landing (eVTOL) vehicles [107] [102]. Furthermore, the
integration of autonomous vehicles in this new environment is also a challenge
to be faced in order to ensure future operations are safe and efficient [108]
[109] [110];
### 5.3 UAS Optimization
* •
Arrival Segment Design Considering Failures in C2 Link: the C2 link enables
the communication between remote pilots and the aircraft [18] [111]. According to the contingency operations proposed by ICAO, when communication fails within the final sector, conducting all aircraft in the presence of such an independently operating aircraft is complex. Thus, one open challenge lies in how the set of aircraft can be conducted throughout the landing procedure in these situations;
* •
Optimization of the UAS operation in the TMA: there are several situations faced in larger scenarios from the airspace operation perspective that can be considered, for example, the challenge of dealing with several autonomous aircraft. Examples of open challenges are airspace resilience (e.g., in case
of problems in airports) [112] [113] [114] and impacts of weather conditions
over a long period of time (e.g., decades) [115] [116] [117]. The main idea is to
extend the research conducted in the final sector to a larger and more complex
area, the TMA;
* •
Development of a Vectoring assistant to reduce impact on workload: Vectoring assistance is a key feature in advanced ATC [118] [119] [120]. Although there
are some initiatives under development nowadays, it is important to include
human aspects in those systems when the UAS is part of the operation [121]
[122];
### 5.4 UAS Integration & UAS Simulation
* •
Automation of some ATC tasks for flexible airspace configurations: In cases of
flexible airspace configuration (e.g., minimum separation, priorities, flight
rules, and disruption), interoperability is essential for ATC assistants.
Examples of such scenarios include abnormal operations with C2Link failures
[123], airport (or vertiport, in AAM) closure [124] [125], and AAM operations
with aircraft of diverse aerodynamic capabilities (e.g., speed and turning
rate) [126] [127] [128];
* •
Priority assessment for different aircraft types in AAM: Another challenge
lies in the definition of different priorities in AAM. The integration of
new vehicles (e.g., UAS) hardens the prioritization process due to the lack of
operational history [38] [129]. Thereupon, an assessment of aircraft types and
different scenarios is needed to establish standard priorities [130] [131];
* •
Cognitive impact assessment of UAS integration in AAM when emergencies are
declared: A simulation effort is also needed in the integration of UAS in AAM
operation [132] [102]. Also, it is pivotal that future directions consider the
analysis of emergencies and abnormal AAM operations, including the UAS.
### 5.5 UAS Integration & UAS Optimization
* •
ATC assistant for UAS-related disruptions: ATC automated support is important
in several areas of the airspace [133] [134]. Consequently, the integration of
UAS requires new capabilities from these systems. The development of ATC
supporting systems for UAS-related disruption is another important open
challenge;
* •
Cognitive impact of UAS emergencies considering automated ATC task: Although
some ATC tasks can be automated, the presence of ATCos is paramount [135]
[136]. In this sense, an investigation of how technology and humans interact in unusual scenarios (e.g., UAS emergencies) [137] [138] is in the scope of future works;
* •
Development of Vectoring assistant based on new priority standards: Although
ATC supporting tools are under development, new constraints pose the need for
adaptable systems for future UAS operations [139] [140]. In fact, new
vectoring assistance strategies need to be developed based on future airspace
priority standards.
### 5.6 UAS Simulation & UAS Optimization
* •
Evaluation of different airspace configurations in disruption: In case of
abnormal events (e.g., emergencies), it is important to understand how the
operation can be optimized [141]. Hence, the evaluation and optimization of
multiple strategies to deal with complex UAS conditions is another open
research challenge.
* •
Development of a Vectoring assistant in the AAM context: Automation of ATC tasks is a challenge for several reasons [142] [143]. For example, standards are under
development, and the daily ATC operation is currently being designed. In this
sense, the development of approaches to deal with possible ATC configurations
and including the UAS is a vital future direction to support safe and
efficient AAM operations;
* •
Evaluation of cognitive impacts of different aircraft types in AAM: Similarly
to the ATC operation, AAM is expected to have aircraft of multiple
capabilities (e.g., speed) [144]. Considering the UAS presence is critical
since the cognitive impacts on human stakeholders (e.g., ATCos and pilots) can
be significant [145] [146]. Thus, this evaluation fosters the maturity
evolution of UAS operations in AAM;
### 5.7 UAS Integration & UAS Simulation & UAS Optimization
* •
Automation of UAS-enabled AAM control and its interaction with the TMA: The
diverse environment created by the substantial increase of aircraft in the
urban space [147] [148] [149] will require solutions to optimize the
interoperability of the airspace. Solutions that account for the UAS presence are also required for safe and efficient future operations;
* •
Priority assessment in AAM disruptions: AAM is expected to bring various
aircraft to fly simultaneously in the urban environment. Abnormal conditions
can lead to unsafe states and compromise the system performance [29].
Thereupon, it is important to have strategies and standards established for
normal and abnormal operations, considering that these priorities can be
flexible depending on several factors (e.g., UAS presence) [150] [151];
* •
Social acceptance evolution of UAS integration in AAM operations: UAS are a disruptive technology entering the airspace. Consequently, there is a lack of social acceptance of such aircraft in the airspace (e.g., in AAM) [90] [152].
Investigations of how this problem can be mitigated and the most relevant
factors to be addressed represent an open challenge in the UAS context.
## 6 Conclusion
This research presented a comprehensive review of the advancements in the
integration of Unmanned Aircraft Systems (UAS) in the National Airspace System
(NAS) from different perspectives. These contributions include the presence of
UAS in simulation, the final approach, and the optimization of problems
related to the interoperability of such systems in the airspace. Besides, we
also highlighted several open challenges and future directions based on the
contributions analyzed. Finally, we emphasize the benefits that UAS will bring to society in the coming years and reinforce the need for new strategies to deal with the challenges described in this research.
## References
* [1] S Marquart, M Ponater, F Mager, and R Sausen. Future development of contrail cover, optical depth, and radiative forcing: Impacts of increasing air traffic and climate change. Journal of Climate, 16(17):2890–2904, 2003.
* [2] Raghavendra Totamane, Amit Dasgupta, and Shrisha Rao. Air cargo demand modeling and prediction. IEEE Systems Journal, 8(1):52–62, 2014.
* [3] Nathan Girdner. An integrated system safety model of the national airspace system. In Reliability and Maintainability Symposium (RAMS), 2016 Annual., pages 1–6. IEEE, 2016.
* [4] Pietro Aricò, Gianluca Borghini, Gianluca Di Flumeri, Stefano Bonelli, Alessia Golfetti, Ilenia Graziani, Simone Pozzi, Jean-Paul Imbert, Géraud Granger, Railane Benhacene, Dirk Schaefer, and Fabio Babiloni. Human factors and neurophysiological metrics in air traffic control: A critical review. IEEE Reviews in Biomedical Engineering, 10:250–263, 2017.
* [5] S. Kahne and I. Frolow. Air traffic management: evolution with technology. IEEE Control Systems Magazine, 16(4):12–21, 1996.
* [6] ICAO. Air traffic management - doc 4444, 2016. Available in: https://ops.group/blog/wp-content/uploads/2017/03/ICAO-Doc4444-Pans-Atm-16thEdition-2016-OPSGROUP.pdf. Accessed in: November 2022.
* [7] ICAO. Air traffic services - annex 11, 2001. Available in: https://skyrise.aero/wp-content/uploads/2017/03/ICAO-Annex-11-Air-traffic-services.pdf. Accessed in: November 2022.
* [8] Colin Meckiff, Renaud Chone, and Jean-Pierre Nicolaon. The tactical load smoother for multi-sector planning. In USA/Europe Air Traffic Management research and development seminar, 1998, 2., 1998.
* [9] Arnab Majumdar and John Polak. Estimating capacity of europe’s airspace using a simulation model of air traffic controller workload. Transportation Research Record: Journal of the Transportation Research Board, (1744):30–43, 2001.
* [10] Tamara Pejovic, Fedja Netjasov, and Dusan Crnogorac. Relationship between air traffic demand, safety and complexity in high-density airspace in europe. In Risk Assessment in Air Traffic Management. IntechOpen, 2020.
* [11] Arnab Majumdar and Washington Ochieng. Factors affecting air traffic controller workload: Multivariate analysis based on simulation modelling of controller workload. Transportation Research Record: Journal of the Transportation Research Board, (1788):58–69, 2002.
* [12] Amina Dervic and Alexander Rank. Atc complexity measures: Formulas measuring workload and complexity at stockholm tma, 2015. Available on: http://www.diva-portal.org/smash/get/diva2:790857/FULLTEXT01.pdf. Accessed in: November 2022.
* [13] Reg Austin. Unmanned aircraft systems: UAVS design, development and deployment., volume 54. John Wiley & Sons, New York, 2011.
* [14] Tomáš Noskievič and Jakub Kraus. Air traffic control tools assessment. MAD-Magazine of Aviation Development, 5(2):6–10, 2017.
* [15] Sai Ram Ganti and Yoohwan Kim. Implementation of detection and tracking mechanism for small uas. In 2016 International Conference on Unmanned Aircraft Systems (ICUAS), pages 1254–1260, 2016.
* [16] G. Fasano, D. Accado, A. Moccia, and D. Moroney. Sense and avoid for unmanned aircraft systems. IEEE Aerospace and Electronic Systems Magazine, 31(11):82–110, November 2016.
* [17] David Guerin. Consideration of wake turbulence during the integration of remotely piloted aircraft into the air traffic management system. In 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pages 926–935, 2015.
* [18] Euclides Pinto Neto, Derick M Baum, Carlos E Hernandez-Simões, Jorge R Almeida, João B Camargo, and Paulo S Cugnasca. An airspace capacity-based safety assessment model considering uas integration. In 2017 International Conference on Unmanned Aircraft Systems (ICUAS), pages 961–970. IEEE, 2017.
* [19] ICAO. Manual on remotely piloted aircraft systems (rpas) - doc 10019 an/507, 2015. Available on: https://skybrary.aero/sites/default/files/bookshelf/4053.pdf. Accessed in: November 2022.
* [20] Laurie Grindle and Davis L Hackenberg. Unmanned aircraft systems (uas) integration in the national airspace system (nas) project: Kdp-a for phase 2 minimum operational performance standards. 2016. Available on: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160013707.pdf. Accessed in: November 2022.
* [21] Suraj G Gupta, Mangesh M Ghonge, and PM Jawandhiya. Review of unmanned aircraft system (uas). International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), 2(4):pp–1646, 2013.
* [22] Harshad Khadilkar and Hamsa Balakrishnan. Integrated control of airport and terminal airspace operations. IEEE Transactions on Control Systems Technology, 24(1):216–225, 2016.
* [23] ICAO. Aircraft operations - doc 8168, 2006. Available on: http://www.chcheli.com/sites/default/files/icao_doc_8168_vol_1.pdf. Accessed in: November 2022.
* [24] Md Shohel Ahmed, Sameer Alam, and Michael Barlow. An evolutionary optimization approach for path planning of arrival aircraft for optimal sequencing. In Intelligent and Evolutionary Systems, pages 1–16. Springer, 2017.
* [25] Euclides Carlos Pinto Neto, Derick Moreira Baum, Jorge Rady De Almeida, João Batista Camargo, and Paulo Sergio Cugnasca. Swarm-based optimization of final arrival segments considering the uas integration in the national airspace system. IEEE Access, 9:112372–112387, 2021.
* [26] Michael Fromm, Richard Bevilacqua, René Servranckx, James Rosen, Jeffrey P Thayer, Jay Herman, and David Larko. Pyro-cumulonimbus injection of smoke to the stratosphere: Observations and impact of a super blowup in northwestern canada on 3–4 august 1998. Journal of Geophysical Research: Atmospheres, 110(D8), 2005.
* [27] ICAO. Global atm operational concept - doc 9854-an/458, 2005. Available on: https://www.icao.int/Meetings/anconf12/Document%20Archive/9854_cons_en[1].pdf. Accessed in: November 2022.
* [28] Jakub Złotowski, Kumar Yogeeswaran, and Christoph Bartneck. Can we control it? autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100:48–54, 2017.
* [29] Renan Buosi Ferreira, Derick M. Baum, Euclides Carlos Pinto Neto, Marcelo R. Martins, Jorge R. Almeida, Paulo S. Cugnasca, and João B. Camargo. A risk analysis of unmanned aircraft systems (uas) integration into non-segregate airspace. In 2018 International Conference on Unmanned Aircraft Systems (ICUAS), pages 42–51, 2018.
* [30] T. Shmelova, D. Bondarev, and Y. Znakovska. Modeling of the decision making by uav’s operator in emergency situations. In 2016 4th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC), pages 31–34, 2016.
* [31] E Pastor, M Perez-Batlle, P Royo, R Cuadrado, and C Barrado. Real-time simulations to evaluate the rpas integration in shared airspace. Proceedings of the 4th SESAR Innovation Days (SIDs2014), Madrid, Spain, pages 24–27, 2014.
* [32] Cyril Allignol, Nicolas Barnier, Nicolas Durand, Guido Manfredi, and Eric Blond. Integration of uas in terminal control area. In 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), pages 1–7. IEEE, 2016.
* [33] Bernd Korn, Sebastian Tittel, and Christiane Edinger. Stepwise integration of uas in non-segregated airspace-the potential of tailored uas atm procedures. In 2012 Integrated Communications, Navigation and Surveillance Conference, pages P1–1. IEEE, 2012.
* [34] Joao Luiz de Castro Fortes, Rafael Fraga, and Kenny Martin. An approach for safety assessment in uas operations applying stochastic fast-time simulation with parameter variation. In 2016 Winter Simulation Conference (WSC), pages 1860–1871. IEEE, 2016.
* [35] Arthur Branch, Kris Cate, Waqar Chaudry, and Mark Palmer. A design study for the safe integration of unmanned aerial systems into the national airspace system. In 2016 IEEE Systems and Information Engineering Design Symposium (SIEDS), pages 170–175. IEEE, 2016.
* [36] Ricardo Gimenes, V, Lucio F Vismari, Valter F Avelino, João B Camargo Jr, Jorge R de Almeida Jr, and Paulo S Cugnasca. Guidelines for the integration of autonomous uas into the global atm. Journal of Intelligent & Robotic Systems, 74(1-2):465, 2014.
* [37] Karthik Ramalingam, Roy Kalawsky, and Chris Noonan. Integration of unmanned aircraft system (uas) in non-segregated airspace: A complex system of systems problem. In 2011 IEEE International Systems Conference, pages 448–455, 2011.
* [38] J Kamienski and J Semanek. Atc perspectives of uas integration in controlled airspace. Procedia Manufacturing, 3:1046–1051, 2015.
* [39] Aaron McFadyen and Terry Martin. Terminal airspace modelling for unmanned aircraft systems integration. In IEEE International Conference on Unmanned Aircraft Systems (ICUAS), 2016., pages 789–794, 2016.
* [40] Reece Clothier, Ewen Denney, and Ganesh Pai. Making a risk informed safety case for small unmanned aircraft system operations. Safety (TLOS), 3:4, 2017.
* [41] Achim Washington, Reece A Clothier, Brendan P Williams, Jose Silva, et al. Managing uncertainty in the system safety assessment of unmanned aircraft systems. In 17th Australian International Aerospace Congress: AIAC 17, Melbourne, Vic, Australia, pages 611–618, 2017.
* [42] John Romero and Leonardo Gomez. Proposal for rpas integration into non-segregated airspaces. In 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), pages 6C2–1–6C2–10, 2017.
* [43] Francesco Grimaccia, Federica Bonfante, Manuela Battipede, Paolo Maggiore, and Edoardo Filippone. Risk analysis of the future implementation of a safety management system for multiple rpas based on first demonstration flights. Electronics, 6(3):50, 2017.
* [44] Jennifer Perrottet. Enabling unrestricted uas airspace access: Performance based navigation. In 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), pages 6D1–1–6D1–5, 2017.
* [45] Daniel Baraldi Sesso, Lucio F Vismari, Antonio Vieira da Silva Neto, Paulo S Cugnasca, and João B Camargo. An approach to assess the safety of ads-b-based unmanned aerial systems: Data integrity as a safety issue. Journal of Intelligent & Robotic Systems, 84(1-4):621–638, 2016.
* [46] Ahmet Oztekin, Cynthia Flass, and Xiaogong Lee. Development of a framework to determine a mandatory safety baseline for unmanned aircraft systems. Journal of Intelligent & Robotic Systems, 65(1):3–26, 2012.
* [47] Curtis W Heisey, Adam G Hendrickson, Barbara J Chludzinski, Rodney E Cole, Mark Ford, Larry Herbek, Magnus Ljungberg, Zakir Magdum, D Marquis, Alexander Mezhirov, et al. A reference software architecture to support unmanned aircraft integration in the national airspace system. Journal of Intelligent & Robotic Systems, pages 1–15, 2013.
* [48] Chris A Wargo, Brian Capozzi, Michael Graham, Dylan Hasson, Jason Glaneuski, and Brandon Van Acker. Enhancing uas pilot safety by terminal and airport shared information situational awareness. In 2017 IEEE Aerospace Conference, pages 1–12. IEEE, 2017.
* [49] Joy Wang, Patricia Deutsch, Linda McCabe, and Ravi Jain. Integration of unmanned aircraft system (uas) voice communications into the faa national airspace (nas). In 2017 Integrated Communications, Navigation and Surveillance Conference (ICNS), pages 1E1–1. IEEE, 2017.
* [50] Aaron Mcfadyen, Terrance Martin, and Luis Mejias. Simulation and modelling tools for quantitative safety assessments of unmanned aircraft systems and operations. In 2016 IEEE Aerospace Conference, pages 1–12. IEEE, 2016.
* [51] Paolo Scala, Miguel Mujica Mota, and Daniel Delahaye. Optimization and simulation based approach for an arrival and departure manager tool. In WSC, pages 3718–3719, 2016.
* [52] Jan Farlik. Conceptual operational architecture of the air force simulator: simulation of air defense operations. In International Conference on Military Technologies (ICMT) 2015, pages 1–5. IEEE, 2015.
* [53] Xiao-Bing Hu, Jian-Qin Liao, and Ezequiel Di Paolo. A simulation study on air traffic control strategies. In 2016 12th World Congress on Intelligent Control and Automation (WCICA), pages 1577–1583. IEEE, 2016.
* [54] Peter Mehlitz, Nastaran Shafiei, Oksana Tkachuk, and Misty Davies. Race: Building airspace simulations faster and better with actors. In IEEE/AIAA Digital Avionics Systems Conference (DASC), 2016, 35. , pages 1–9. IEEE, 2016.
* [55] Přemysl Volf. Nas-wide simulation of air traffic with atc behavior model. In Integrated Communication, Navigation, and Surveillance Conference (ICNS), 2015., pages O3–1. IEEE, 2015.
* [56] Jacco M Hoekstra and Joost Ellerbroek. Bluesky atc simulator project: an open data and open source approach. In International Conference on Research in Air Transportation, 7. , 2016.
* [57] Martijn Tra, Emmanuel Sunil, Joost Ellerbroek, and Jacco Hoekstra. Modeling the intrinsic safety of unstructured and layered airspace designs. In USA/Europe Air Traffic Management Research and Development Seminar (ATM2017), 2017, 12. , 2017.
* [58] Sameer Alam, Hussein A Abbass, and Michael Barlow. Atoms: Air traffic operations and management simulator. IEEE Transactions on intelligent transportation systems, 9(2):209–225, 2008.
* [59] M Pavlinović, B Juričić, and Bruno Antulov-Fantulin. Air traffic controllers’ practical part of basic training on computer based simulation device. In 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pages 920–925. IEEE, 2017.
* [60] Jessica E Young, Jessica E Young, Andrew Crowell, and Andrew Fabian. Modeling weather in simulation and analysis. In IEEE Integrated Communications, Navigation and Surveillance Conference (ICNS), 2013. , pages 1–11, 2013.
* [61] Seamus M McGovern and Amanda Kalish. Stochastic airspace simulation tool development. In IEEE/AIAA Digital Avionics Systems Conference (DASC), 2009, 28., pages 2–D. IEEE, 2009.
* [62] Jeffrey Homola, Thomas Prevot, Joey Mercer, Nancy Bienert, and Conrad Gabriel. Uas traffic management (utm) simulation capabilities and laboratory environment. In Digital Avionics Systems Conference (DASC), 2016 IEEE/AIAA, 35., pages 1–7. IEEE, 2016.
* [63] Xiong Li, Dongxuan Wei, Dongbin Li, and Xiaoqing Chen. Utilization pattern of closely spaced parallel runways and simulation of operational efficiency. In IEEE International Conference on Progress in Informatics and Computing (PIC), 2015., pages 158–162, 2015.
* [64] Kun Sun, Xuejun Zhang, and Kaiquan Cai. A new method of 4d trajectory generation in the airspace simulation system. In International Conference on Electronic Measurement & Instruments (ICEMI), 2009, 9., pages 4–472. IEEE, 2009.
* [65] Michael A Bucceroni, George D Lecakes, Mira Lalovic-Hand, and Shreekanth Mandayam. A multi-perspective virtual reality visualization of unmanned aerial systems in the us national airspace. In Sensors Applications Symposium (SAS), 2016 IEEE., pages 1–4. IEEE, 2016.
* [66] Sherry S Borener, Derek Hufty, Vitaly S Guzhva, Kenny Martin, and Rafael Fraga. Modeling emergent risks in complex airspace: Uas operations in a metroplex environment. In Digital Avionics Systems Conference (DASC), 2015 IEEE/AIAA, 34. , pages 5B3–1. IEEE, 2015.
* [67] Parimal H Kopardekar, Albert Schwartz, Sherri Magyarits, and Jessica Rhodes. Airspace complexity measurement: An air traffic control simulation analysis. International Journal of Industrial Engineering: Theory, Applications and Practice, 16(1):61–70, 2009.
* [68] Brant Horio, Anthony DeCicco, Robert Hemm, and Virginia Stouffer. Safety risk assessment case study using airspace conflict analysis simulation. In Integrated Communications, Navigation and Surveillance Conference (ICNS), 2012. , pages D2–1, 2012.
* [69] Youkyung Hong, Byunghun Choi, Keumjin Lee, and Youdan Kim. Conflict management considering a smooth transition of aircraft into adjacent airspace. IEEE Transactions on Intelligent Transportation Systems, 17(9):2490–2501, 2016.
* [70] Antonio Alonso-Ayuso, Antonio Alonso-Ayuso, Laureano F Escudero, and F Javier Martín-Campo. Multiobjective optimization for aircraft conflict resolution. a metaheuristic approach. European Journal of Operational Research, 248(2):691–702, 2016\.
* [71] Yuan Gao, Xuejun Zhang, and Xiangmin Guan. Cooperative multi-aircraft conflict resolution based on co-evolution. In Instrumentation & Measurement, Sensor Network and Automation (IMSNA), 2012 International Symposium., volume 1, pages 310–313. IEEE, 2012.
* [72] Marcella Samà, Andrea D’Ariano, Paolo D’Ariano, and Dario Pacciarelli. Scheduling models for optimal aircraft traffic control at busy airports: tardiness, priorities, equity and violations considerations. Omega, 67:81–98, 2017.
* [73] Marcella Samà, Andrea D’Ariano, Dario Pacciarelli, Konstantin Palagachev, and Matthias Gerdts. Optimal aircraft scheduling and flight trajectory in terminal control areas. In IEEE International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), 2017, 5., pages 285–290. IEEE, 2017.
* [74] Marcella Samà, Andrea D’Ariano, Francesco Corman, and Dario Pacciarelli. Metaheuristics for efficient aircraft scheduling and re-routing at busy terminal control areas. Transportation Research Part C: Emerging Technologies, 80:485–511, 2017.
* [75] Jesimar Silva Arantes, Márcio da Silva Arantes, Claudio Fabiano Motta Toledo, Onofre Trindade Júnior, and Brian Charles Williams. Heuristic and genetic algorithm approaches for uav path planning under critical situation. International Journal on Artificial Intelligence Tools, 26(01):1760008, 2017.
# Scalar Invariant Networks with Zero Bias
Chuqin Geng, Xiaojie Xu, Haolin Ye, Xujie Si
McGill University
{chuqin.geng, xiaojie.xu<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Just like weights, bias terms are the learnable parameters of many popular
machine learning models, including neural networks. Biases are believed to
effectively increase the representational power of neural networks to solve a
wide range of tasks in computer vision. However, we argue that if we consider
the intrinsic distribution of images in the input space, as well as some
desired properties a model should have from first principles, biases can be
completely ignored in many image-related tasks, such as image classification.
Our observations indicate that zero-bias neural networks can perform
comparably to neural networks with bias, at least on practical image
classification tasks. In addition, we prove that zero-bias neural networks
possess a nice property called scalar (multiplication) invariance, which has
great potential in learning and understanding images captured under poor
illumination conditions. We then extend scalar invariance to more general
cases that allow us to formally verify certain convex regions of the input
space. Our experimental results show that zero-bias models can outperform
state-of-the-art models by a very large margin (over 60%) when predicting
images under a low illumination condition (multiplying the input by a scalar
of 0.01), while achieving the same level of performance as normal models on
the original images.
## 1 Introduction
Using bias terms in neural networks is a common practice. Its theoretical
foundation goes back to the invention of artificial neural networks, which are
loosely inspired by biological neurons. Biological neurons have some
thresholds to determine whether they should “fire” (produce an output that
goes to other neurons) [23, 45, 15]. These thresholds are essentially the same
thing as bias terms. From the representation learning perspective, the bias
term is widely believed to increase the representational power of neural
networks and thus is always needed when designing neural networks to solve a
broad array of tasks in computer vision [43, 33, 2].
In this work, we challenge the commonly held belief that bias terms are
necessary in neural networks for solving computer vision tasks. Our
geometric observations suggest that the intrinsic distribution of images should
incorporate both _locality_ and _directionality_. With these two properties
holding, bias terms should not affect models’ representational power and
performance, even for large modern CNN models such as ResNets [16]. Our
thorough experimental results also support this argument.
Figure 1: Scalar invariant networks (without bias) and their counterparts with
bias share similar accuracies on CIFAR-100 when the multiplying scalar is 1,
i.e., the original images. As the scalar diminishes, the accuracies of normal
models drop quickly, whereas those of models without bias remain invariant.
In addition, we show that neural networks possess an intriguing property,
scalar (multiplication) invariance, after dropping bias terms. We then extend
scalar invariance to CNNs as well as ResNets. While removing biases may cause
gradient vanishing/exploding, which hinders learning, we mitigate this issue
by leveraging recent advances in normalization-free methods, including
Fixup [47] and NF-ResNets [5, 6]. This property allows us to make robust
predictions on low-illumination images without any data pre-processing or
augmentation techniques, which normal neural networks (with biases) usually
fail to do, as illustrated in Figure 1. Based on the scalar invariance
property, we further derive more general robustness guarantees that can
verify even certain convex regions of the input space. In contrast, such
guarantees hardly exist for normal neural networks due to their highly
combinatorial nature.
We summarize our contributions as follows: (1) We show that the basic building
blocks of neural networks are scalar multiplication associative when bias is
ignored, which, in turn, ensures the scalar invariant property of
convolutional neural networks; by adapting batch normalization-free methods,
we extend scalar invariance to ResNets; (2) Derived from the scalar invariant
property, we propose two further robustness guarantees that can verify inputs
on certain lines and in convex regions of the input space; (3) Our geometric
observations suggest that the intrinsic distribution of images should
incorporate both _locality_ and
_directionality_. Under these two properties, scalar invariant neural networks
(we use the terms _scalar invariant_, _zero-bias_, and _without bias_
interchangeably to describe the same variant of neural network) should have
the same representational power as normal neural networks, thus delivering
comparable performance; (4) Our experiments suggest that scalar invariant
neural networks can outperform normal models, including state-of-the-art
models, by over 60% when predicting images 100 times darker. In addition, we show that scalar
invariant networks share the same bias as humans when predicting the zero
image, i.e., an image with all pixel values being zero. We also empirically
validate the robustness merit of scalar invariant networks using visual
examples.
## 2 Related Work
### 2.1 Invariance in Neural Networks
Studying invariance in machine learning as well as neural networks has
attracted much attention as real-world data such as images often exhibit rich
invariant structures. Incorporating such invariance properties as prior
knowledge (inductive bias) could expand the expressive power of the network
without much increase in the number of parameters, which usually leads to
better performance. For instance, Convolutional Neural Networks have a
stronger geometric prior - translation invariance [7, 4]. In addition, Group
equivariant Convolutional Neural Networks (G-CNNs) adapt group convolution
layers to achieve great results on images generated by translations,
reflections, and rotations [9]. Similar work also focuses on studying the
invariance of neural network’s outputs under group actions on its inputs [27,
31, 3].
Given the scale invariant nature of images, there is also a line of work
studying how to improve the consistency of models’ predictions on images of
varying scale [44, 12, 34, 48]. However, the invariance most related to our
work is illumination invariance, which has a great impact on many real-world
applications. For example, Ramaiah et al. use convolutional neural networks
for face recognition under non-uniform illumination [35]. Maddern et al.
study illumination invariant transforms to improve visual localization,
mapping, and scene classification for autonomous road vehicles [32]. Huang et
al. leverage a Retinex Decomposition Net and bottom-up attention for person
re-identification [21]. Despite absolute invariance being considered hard to
achieve, and most works failing to guarantee it, our work shows
that absolute invariance under scalar multiplication can be achieved with
zero-bias neural networks.
### 2.2 Zero-bias neural networks
Although zero-bias neural networks do not appear as much as normal neural
networks in the machine-learning literature due to potential reductions in
models’ expressive capability, they have been used in some real-world
applications such as facial expression recognition[26], abnormal event
detection in IoT[29], identification of Internet-of-Things devices[30], RF
signal surveillance[28], and anomaly data detection[46]. There are several
reasons for choosing zero-bias neural networks over normal neural networks:
(1) Their incremental learning fashion and better decision fairness; (2)
Better interpretability without losing accuracy, which challenges the common
first impression of the weaker expressive capability of zero-bias models; (3)
More reliable and robust performance. Although these works achieve some
success with zero-bias neural networks, none of them dive deeper to analyze
these advantages formally. Our work explores zero-bias from an invariant
perspective for the first time, to the best of our knowledge, identifying
scalar multiplication invariance in zero-bias models, proving some rigorous
robustness guarantees, and explaining their comparable accuracy based on
geometric insights into the image distribution.
## 3 Scalar invariant neural networks
### 3.1 Preliminary
A neural network consists of an input layer, hidden layers, and an output
layer. For convolutional neural networks, some of the hidden layers are called
convolution layers which perform convolution operations on their input tensors
with convolution kernels. The outputted tensors are passed to an activation
function, commonly ReLU, before downsampling through pooling layers. After
that, the input tensor is flattened out so that a fully connected network can
process it and calculate the final prediction. For classification tasks, the
final prediction is represented by a probability distribution over all classes
using some activation functions such as Softmax. To further investigate the
scalar invariant property, we formally denote the input tensor as $X$ and a
convolutional neural network as $\mathcal{N}$. Then $\mathcal{N}$ is composed
of convolutional layers $\mathcal{F}_{i}$, pooling layers $\mathcal{P}_{i}$,
and fully connected layers $\mathcal{L}_{j}$, where $i,j\in\mathbb{N}$. And we
denote the final activation function as $\mathcal{A}$ and ReLU as
$\mathcal{R}$. Thinking of layers and activation functions as transformations
on the input $X$, the output of the network before the final activation
function $\mathcal{A}$ is represented by:
$\mathcal{O}(X)=\underbrace{\mathcal{L}_{j}\circ\mathcal{R}\circ\dots\circ\mathcal{L}_{1}}_{j\text{ linear layers}}\circ\underbrace{\mathcal{P}_{i}\circ\mathcal{R}\circ\mathcal{F}_{i}\circ\dots\circ\mathcal{P}_{1}\circ\mathcal{R}\circ\mathcal{F}_{1}}_{i\text{ convolutional layers}}(X)$
The final prediction class is the one with the highest probability over all
classes $\mathcal{C}$, that is:
$\mathcal{N}(X)=\operatorname*{argmax}_{c\in\mathcal{C}}\{\mathcal{A}\circ\mathcal{O}(X)\}$
### 3.2 Scalar associative transformations
We consider the operation inside a convolution layer $\mathcal{F}$ with a
kernel $\mathcal{K}$; it is easy to show that the associative property with
respect to scalar multiplication holds for convolution operations. More
formally, let $s$ be a positive scalar, $s\in\mathbb{R}^{+}$; then we have:
$\mathcal{F}\circ(sX)=\sum_{m}\sum_{n}sX(i+m,j+n)\,\mathcal{K}(m,n)=s\sum_{m}\sum_{n}X(i+m,j+n)\,\mathcal{K}(m,n)=s(\mathcal{F}\circ X)$
In addition, the above property also holds for pooling layers $\mathcal{P}$,
including max pooling and average pooling, since both the max and the average
operation commute with multiplication by a positive scalar. The same argument
also applies to the ReLU function. So we have:
$\mathcal{P}\circ(sX)=s(\mathcal{P}\circ X)$
$\mathcal{R}\circ(sX)=s(\mathcal{R}\circ X)$
Finally, passing the input $X$ to a fully connected layer $\mathcal{L}$ can be
thought of as applying an affine transformation $(\mathcal{W},\mathcal{B})$ to
$X$. If we set the bias term $\mathcal{B}$ to $\mathbf{0}$, we obtain the
scalar associative property:
$\mathcal{L}\circ(sX)=(sX)\mathcal{W}^{T}=s\,X\mathcal{W}^{T}=s(\mathcal{L}\circ X)$
Note that our proofs also use the commutative property, which generally holds
for matrix and vector multiplication with a scalar. Put together, by setting
biases to zero, the scalar (multiplication) associative property holds for the
output function, i.e., $\mathcal{O}(sX)=s\,\mathcal{O}(X)$.
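To make the layer-wise argument concrete, here is a minimal numerical sketch
(assuming PyTorch; the architecture and shapes are illustrative, not the
models used in our experiments) that checks $\mathcal{O}(sX)=s\,\mathcal{O}(X)$
for a small bias-free convolutional stack:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small bias-free conv stack: conv -> ReLU -> max-pool -> flatten -> linear.
# Every bias term is disabled, so each layer is scalar associative.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, bias=False),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 13 * 13, 10, bias=False),
)

x = torch.randn(1, 1, 28, 28)
s = 0.01  # positive scalar, e.g. a strongly darkened image

with torch.no_grad():
    out_scaled = model(s * x)  # O(sX)
    out_assoc = s * model(x)   # s * O(X)

# The two outputs agree up to floating-point error.
print(torch.allclose(out_scaled, out_assoc, atol=1e-6))  # True
```

With bias terms enabled, the same check fails, since the additive constants
are not rescaled together with the input.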
### 3.3 Scalar invariant convolutional neural networks
Now we consider how to calculate the final prediction of the network
$\mathcal{N}$. For classification tasks, the last activation function
$\mathcal{A}$ is usually Softmax. If we multiply the input $X$ by a scalar
$s\in\mathbb{R}^{+}$, then by the scalar associativity above the logits passed
to Softmax are scaled by $s$, which is equivalent to changing the temperature
of the distribution. Note that the ranking of candidate classes remains the
same despite the change in the shape of the distribution. In other words, the
class predicted by the network $\mathcal{N}$ is scalar (multiplication)
invariant:
$\operatorname*{argmax}_{c}\frac{e^{s\mathcal{O}(X)_{c}}}{\displaystyle\sum_{c\in\mathcal{C}}e^{s\mathcal{O}(X)_{c}}}=\operatorname*{argmax}_{c}\frac{e^{\mathcal{O}(X)_{c}}}{\displaystyle\sum_{c\in\mathcal{C}}e^{\mathcal{O}(X)_{c}}}$
Put together with the scalar associative property of the output function
$\mathcal{O}(\cdot)$, we have a scalar invariant neural network:
$\mathcal{N}(sX)=\operatorname*{argmax}_{c}\{\mathcal{A}\circ\mathcal{O}(sX)\}=\operatorname*{argmax}_{c}\{\mathcal{A}\circ\mathcal{O}(X)\}=\mathcal{N}(X)$
The concept of scalar invariant neural networks generalizes beyond just
convolutional neural networks. In fact, as long as hidden layers perform
scalar associative (and commutative) transformations and the last activation
function preserves the most probable candidate under scalar multiplication,
the neural network will be scalar invariant.
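The argmax step is equally easy to illustrate with made-up logits (a sketch;
the values are arbitrary): scaling the logits only changes the softmax
temperature, not the ranking.

```python
import torch

logits = torch.tensor([2.0, -1.0, 0.5, 3.5])  # O(X) for four classes
s = 0.01  # O(sX) = s * O(X) for a zero-bias network

p = torch.softmax(logits, dim=0)
p_scaled = torch.softmax(s * logits, dim=0)

# The distributions differ (a temperature change) ...
print(p)         # sharply peaked at the last class
print(p_scaled)  # much flatter, still peaked at the last class
# ... but the predicted class is identical.
assert p.argmax() == p_scaled.argmax()
```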
### 3.4 Scalar invariant ResNet
We briefly discussed the simplest architecture of convolutional neural
networks in the previous section. However, in addition to the basic layers
mentioned before, modern powerful CNNs also employ extra layers and
techniques to address over-fitting and gradient exploding/vanishing issues.
For example, ResNet adopts _Dropout_ [38], _additive skip connections_ [16],
and _Batch Normalization_ [22], which contribute enormously to its success.
First, as dropout layers are disabled during the inference phase, they have no
impact on the scalar invariant property. Second, it is trivial to show that
the skip connection is also scalar multiplication associative if the
corresponding residual branch $\mathcal{G}$ is:
$sX+\mathcal{G}(sX)=s\,(X+\mathcal{G}(X))\quad\forall s\in\mathbb{R}^{+}$
Lastly, we consider batch normalization, which is performed through a
normalization transformation that fixes the means and variances of the inputs
to each layer. Let us use $X_{\mathcal{B}}$ to denote a mini-batch of the
entire training set. Then the batch normalization transformation is:
$\mathcal{Y}=\gamma{\hat{X}_{\mathcal{B}}}+\beta$
where $\gamma$ and $\beta$ are learnable parameters and $\hat{X}_{\mathcal{B}}$
is the normalized input,
${\hat{X}_{\mathcal{B}}}={\frac{X_{\mathcal{B}}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+\epsilon}}}$,
where $\epsilon$ is a small constant added for numerical stability. Clearly, we observe that the
scalar associative/invariant property doesn’t hold for the normalization step,
because:
${(\widehat{sX})_{\mathcal{B}}}={\frac{(sX)_{\mathcal{B}}-s\mu_{\mathcal{B}}}{s\sqrt{\left(\sigma_{\mathcal{B}}\right)^{2}+\epsilon}}}={\hat{X}_{\mathcal{B}}}$
$\gamma{(\widehat{sX})_{\mathcal{B}}}+\beta=\gamma{\hat{X}_{\mathcal{B}}}+\beta\neq
s(\gamma{\hat{X}_{\mathcal{B}}}+\beta)$
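This failure is easy to reproduce numerically; the following sketch (assuming
PyTorch's `BatchNorm1d` in training mode, so that batch statistics are
recomputed exactly as in the equations above) shows that the normalization
step breaks scalar associativity:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4).train()  # batch statistics recomputed per forward pass

x = torch.randn(8, 4)
s = 0.01

# Normalizing sX cancels the scalar (mean and std both scale by s), so
# bn(s * x) equals gamma * x_hat + beta, which is not s * bn(x).
y_scaled = bn(s * x)
y_assoc = s * bn(x)
print(torch.allclose(y_scaled, y_assoc))  # False
```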
Thus, in order to achieve scalar invariance, we need to consider some
alternatives to batch normalization. We mainly discuss two previous works on
exploring reliable and efficient residual learning without normalization:
Fixup and NF-ResNets. Both methods achieve state-of-the-art performance on a wide
collection of benchmarks.
Fixup enables training deep residual networks with comparable performance in
terms of convergence, generalization, etc., without normalization. More
specifically, this method rescales the standard initialization of residual
branches by taking the network architecture into account. The key steps of
Fixup initialization are described as follows:
1. 1.
Initialize the last layer of each residual branch and the classification layer
to 0.
2. 2.
Initialize other layers using a standard method [17], and scale only the
weight layers inside residual branches by $L^{-\frac{1}{2m-2}}$, where $L$ and
$m$ are the numbers of residual blocks and layers inside a residual branch
respectively.
3. 3.
Add a scalar multiplier before each convolution, linear, and element-wise
activation layer in each residual branch; the multiplier is initialized at
$1$. (We intentionally ignore the scalar bias, initialized at $0$, present in
the original paper, in order to ensure scalar invariance. This, however,
significantly reduces the training performance, as we will show in Section 4.)
The above initialization steps operate on the weights of the network rather
than on the input, and the scalar multiplier itself is scalar associative,
which ensures that the trained ResNet is scalar invariant; a sketch follows
below.
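Below is a condensed sketch of the recipe for a toy two-layer residual branch
(a simplification: we attach a single multiplier to the branch output rather
than one per layer, and all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class FixupBranch(nn.Module):
    """A two-layer residual branch with Fixup-style, bias-free initialization."""
    def __init__(self, channels, num_blocks):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # Step 3: a scalar multiplier initialized at 1 (the scalar bias of the
        # original recipe is dropped here to preserve scalar invariance).
        self.scale = nn.Parameter(torch.ones(1))
        m = 2  # number of layers inside this branch
        # Step 2: standard (He) initialization, rescaled by L^{-1/(2m-2)}.
        nn.init.kaiming_normal_(self.conv1.weight)
        with torch.no_grad():
            self.conv1.weight.mul_(num_blocks ** (-1.0 / (2 * m - 2)))
        # Step 1: the last layer of the residual branch is initialized to zero.
        nn.init.zeros_(self.conv2.weight)

    def forward(self, x):
        out = torch.relu(self.conv1(x))
        return x + self.scale * self.conv2(out)  # additive skip connection
```

Since every operation in the branch is bias-free and the multiplier is a plain
scalar, the whole block remains scalar multiplication associative.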
NF-ResNets aim to overcome the same challenge of developing ResNet variants
without normalization layers that are nevertheless comparable to
batch-normalized ResNets in many aspects. The effect of the standard batch
normalization operation within each residual block can be summarized as
follows: 1) it downscales the input by a factor proportional to its standard
deviation; 2) it increases the variance of the input signal by an
approximately constant factor. By mimicking this effect, the residual blocks
can be written in the form $X_{l+1}=X_{l}+\alpha\,\mathcal{G}_{l}(X_{l}/\beta_{l})$,
where $X_{l}$ denotes the input to the $l^{\text{th}}$ residual block and
$\mathcal{G}_{l}(\cdot)$ denotes the $l^{\text{th}}$ residual branch. Moreover,
the network should be designed such that:
* •
$\mathcal{G}_{l}(\cdot)$ is parameterized to be able to preserve variance at
initialization, i.e., $Var(\mathcal{G}_{l}(z))=Var(z)$ for all $l$.
* •
$\beta_{l}$ is a fixed scalar, set to $\sqrt{\mathrm{Var}(X_{l})}$, the expected
empirical standard deviation of $X_{l}$ at initialization.
* •
$\alpha$ is a hyperparameter that controls the growth rate of variance between
blocks.
Since both $\alpha$ and $\beta_{l}$ are fixed scalars during the inference
phase, the modified residual blocks are scalar associative:
$sX_{l}+\alpha\mathcal{G}_{l}(sX_{l}/\beta_{l})=s\,(X_{l}+\alpha\mathcal{G}_{l}(X_{l}/\beta_{l}))$.
We conclude that the NF-ResNets method also ensures scalar invariance.
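A minimal sketch of such a normalizer-free block (with $\alpha$ and
$\beta_{l}$ treated as fixed constants and a bias-free branch; the
variance-preserving parameterization of $\mathcal{G}_{l}$ is omitted for
brevity):

```python
import torch
import torch.nn as nn

class NFBlock(nn.Module):
    """Residual block of the form X_{l+1} = X_l + alpha * G_l(X_l / beta_l)."""
    def __init__(self, channels, alpha, beta):
        super().__init__()
        self.alpha = alpha  # fixed growth-rate hyperparameter
        self.beta = beta    # fixed: expected std of the block input at init
        self.branch = nn.Sequential(  # G_l: bias-free, hence scalar associative
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.alpha * self.branch(x / self.beta)

# Scalar associativity check: block(s * x) == s * block(x).
block = NFBlock(4, alpha=0.2, beta=1.0)
x, s = torch.randn(1, 4, 8, 8), 0.01
print(torch.allclose(block(s * x), s * block(x), atol=1e-6))  # True
```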
### 3.5 Robustness guarantees
Despite achieving great success in a wide range of tasks, neural networks have
proven not to be robust under even small perturbations of the input [8, 1],
which has accelerated the study of neural network verification and adversarial
attacks. There are also many attempts to improve the robustness of neural
networks, such as data augmentation [37, 36, 41]; however, these methods
mostly lack theoretical guarantees. In contrast, we show that scalar invariant
neural networks possess the robustness guarantees illustrated in Figure 2
without any augmentation techniques.
Figure 2: Derived from the scalar invariant property, it is straightforward to
show two more robustness guarantees that can verify inputs on certain lines
and convex regions of the input space.
Direction robustness property An input $X$ specifies a direction in the input
space. From the origin, there are infinitely many points residing along that
direction, i.e., $\{sX \mid s\in\mathbb{R}^{+}\}$. From this point of view, the
scalar invariant property can be restated as the direction robustness
property:
$\mathcal{N}(sX)=\mathcal{N}(X)\quad\forall s\in\mathbb{R}^{+}$
With this property, we are able to verify any inputs along the direction
specified by the input $X$.
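Restated this way, the property yields a one-line verification procedure along
any ray; a small self-contained sketch (with an illustrative bias-free
network):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(16, 32, bias=False), nn.ReLU(),
                    nn.Linear(32, 5, bias=False))

x = torch.randn(1, 16)
pred = net(x).argmax()

# Every point s*x on the ray through x receives the same prediction
# (up to floating-point ties, which are vanishingly unlikely here).
for s in [1e-4, 1e-2, 1.0, 1e2, 1e4]:
    assert net(s * x).argmax() == pred
```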
Interpolation robustness property We first introduce the notion of neural
activation pattern [11]. A _Neural Activation Pattern (NAP)_ of a neural
network $\mathcal{N}$ is a tuple
$\mathcal{P}_{\mathcal{N}}\mathbin{:=}(\mathcal{A},\mathcal{D})$, where
$\mathcal{A}$ and $\mathcal{D}$ are two disjoint sets of neurons and
$\mathcal{A}\cup\mathcal{D}$ is the set of all neurons. The notion of neural activation
patterns can be relaxed to consider only a subset of neurons, but that is
beyond the scope of our current discussion. We say that an input $X$ _follows_
a NAP $\mathcal{P}_{\mathcal{N}}$ if after computing $\mathcal{N}(X)$, the
neurons in $\mathcal{A}$ are all activated, and neurons in $\mathcal{D}$ are
all deactivated. We denote this as $\mathcal{P}_{\mathcal{N}}(X)=True$.
Now we consider any two inputs $X,Y$ that follow the same NAP
$\mathcal{P}_{\mathcal{N}}$ and output the same prediction by the
corresponding neural network $\mathcal{N}$, i.e.,
$\mathcal{P}_{\mathcal{N}}(X)=\mathcal{P}_{\mathcal{N}}(Y)=True$ and
$\mathcal{N}(X)=\mathcal{N}(Y)$. Then for $\lambda$ s.t. $\lambda\in[0,1]$, we
have:
$\mathcal{N}(\lambda X+(1-\lambda)Y)=\mathcal{N}(X)=\mathcal{N}(Y)$
The above property can be easily proved using the zero-bias nature of the
network: within a shared NAP, the network acts as a single linear map, so the
logits of any interpolant are the corresponding interpolation of the logits of
$X$ and $Y$. With this property, we are able to verify any point interpolated
between two reference points $X,Y$ within the same neural activation pattern
(see the sketch below).
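The interpolation check can be prototyped directly; in the sketch below (a
small bias-free network; the helper `activation_pattern` is ours), the ReLU
on/off pattern plays the role of the NAP:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8, bias=False), nn.ReLU(),
                    nn.Linear(8, 3, bias=False))

def activation_pattern(x):
    """ReLU on/off pattern of the hidden layer: True = activated (in A)."""
    return net[0](x) > 0

x, y = torch.randn(4), torch.randn(4)
same_nap = torch.equal(activation_pattern(x), activation_pattern(y))
same_pred = net(x).argmax() == net(y).argmax()

if same_nap and same_pred:
    # Every interpolant lambda*x + (1-lambda)*y keeps the prediction.
    for lam in torch.linspace(0, 1, 11):
        z = lam * x + (1 - lam) * y
        assert net(z).argmax() == net(x).argmax()
```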
Convex region robustness property We can further extend the above property to
the setting of multiple reference points. Suppose we have a collection of
inputs $\{X_{i}\mid i\in\{1,2,\dots,n\}\}$ such that they follow the same NAP
$\mathcal{P}_{\mathcal{N}}$ and receive the same prediction from the
corresponding neural network $\mathcal{N}$. Let $\mathcal{M}$ be a convex
polytope whose vertices are the $\{X_{i}\}$; then for any point $m$ lying
inside $\mathcal{M}$, we have:
$\mathcal{N}(m)=\mathcal{N}(X_{i})\quad\forall m\in\mathcal{M}\ \text{and}\ \forall i\in\{1,2,\dots,n\}$
As $m$ can always be represented as a convex combination of the vertices
$\{X_{i}\}$, the convex region robustness property follows directly from the
interpolation robustness property. With this property, we are able to verify a
whole region of the input space, which has proven challenging in the field of
neural network verification; a sketch of a numerical check follows.
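The multi-point version samples convex combinations of the reference inputs; a
self-contained sketch (the reference points are taken as small perturbations
of a base point so that a shared NAP is plausible):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8, bias=False), nn.ReLU(),
                    nn.Linear(8, 3, bias=False))
nap = lambda x: net[0](x) > 0  # hidden-layer ReLU on/off pattern

# Reference points: small perturbations of one base point.
X = torch.randn(4) + 0.01 * torch.randn(5, 4)

if all(torch.equal(nap(X[0]), nap(X[i])) for i in range(5)) and \
   all(net(X[0]).argmax() == net(X[i]).argmax() for i in range(5)):
    # Any convex combination of the X_i keeps the prediction.
    w = torch.distributions.Dirichlet(torch.ones(5)).sample()
    m = (w[:, None] * X).sum(dim=0)
    assert net(m).argmax() == net(X[0]).argmax()
```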
Figure 3: The intrinsic distribution of images incorporates both locality (The
neighborhood of a sample point should belong to the same class.) and
directionality (Any points along the same direction should belong to the same
class.)
(a) Linear activation regions of a simple scalar invariant neural network are
cones.
(b) Linear activation regions of a simple neural network are polytopes.
Figure 4: Scalar invariant neural networks perform poorly along direction 1
(only locality holds) compared to normal neural networks. They perform
comparably along direction 2 (both locality and directionality hold).
### 3.6 Geometric insights
Although we have shown that scalar invariant neural networks demonstrate some
nice properties in terms of robustness, one major concern that emerges along
with eliminating bias is the reduction in the representational power of neural
networks. Such a reduction may hurt neural networks’ performance in certain
categories of tasks, yet we argue that image classification does not seem to
be one of them. Let us consider the intrinsic distribution of images in the
high-dimensional input space, illustrated in Figure 3. This leads to two key
observations:
* •
Locality: The neighborhood with a certain radius of a sample point should
belong to the same class.
* •
Directionality: Any points along the direction specified by a sample point
should belong to the same class.
Since a neural network can be thought of as a piece-wise (linear) function
defined over many convex polytopes [13, 14], we plot linear regions of a
simple scalar invariant neural network and a simple normal neural network
trained on a simple 2D dataset to study their representational power in Figure
4. We consider two easy learning tasks whose input distributions are
characterized by directions 1 and 2. Since locality is implicitly embedded in
input distributions of almost every task, including directions 1 and 2 cases
(otherwise, generalization is impossible), we mainly discuss the impact of
directionality on learning outcomes. First, in the absence of directionality,
e.g., in the case of data labeled differently along direction 1, the scalar
invariant neural network may not fit data well using only linear function
within the corresponding cone, as shown in Figure 4(a). Whereas the normal
network could overcome this using a piece-wise linear function across multiple
convex regions along direction 1, illustrated in Figure 4(b).
However, direction 1 is not a serious concern as we observe that the intrinsic
distribution of images should incorporate directionality, portrayed by
direction 2. We investigate the possible gap in the representational power
between the two types of neural networks in the direction 2 case. Given the
directionality holds, the scalar invariant neural network can fit the data
along direction 2 using a linear function (within a cone), on par with the
piece-wise linear functions used by normal neural networks. This suggests that
they should deliver comparable performance, and we will provide more
experimental evidence to support this claim in Section 4. Following our above
discussion, we believe directionality can be used as a strong geometric prior
in image-related tasks, in the spirit of the translational invariance prior
introduced by CNNs. Not only could scalar invariant neural
networks perform comparably to normal neural networks, but also they possess
some rigorous robustness properties. For tasks whose underlying distribution
does not satisfy directionality, we may assume that directionality holds after
applying suitable pre-processing/transformations. In this way, we can obtain
the stronger robustness guarantees enabled by scalar invariant neural networks
without much concern about performance.
| Dataset | Model | Bias | s=1 | s=0.25 | s=0.15 | s=0.125 | s=0.1 | s=0.075 | s=0.05 | s=0.025 | s=0.01 | s=0.001 | s=0.0001 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | FCN | w/ bias | 88.12 | 87.07 | 84.46 | 82.57 | 79.52 | 74.76 | 65.82 | 42.84 | 16.34 | 10.28 | 10.28 |
| MNIST | FCN | w/o bias | 88.27 | 88.27 | 88.26 | 88.27 | 88.27 | 88.26 | 88.27 | 88.27 | 88.27 | 88.27 | 88.27 |
| Fashion-MNIST | CNN | w/ bias | 89.10 | 67.10 | 40.12 | 32.52 | 24.16 | 17.91 | 12.46 | 10.12 | 10.00 | 10.00 | 10.00 |
| Fashion-MNIST | CNN | w/o bias | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 | 89.02 |
| Imagenette [18] | Fixup_ResNet32 | w/ bias | 64.87 | 19.31 | 11.80 | 10.83 | 10.32 | 10.19 | 10.17 | 10.17 | 10.17 | 10.17 | 10.17 |
| Imagenette [18] | Fixup_ResNet32 | w/o bias | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 | 59.62 |
| Imagenette [18] | NF_ResNet34 | w/ bias | 75.77 | 72.59 | 72.19 | 70.52 | 65.52 | 60.36 | 54.65 | 33.10 | 20.20 | 9.94 | 9.94 |
| Imagenette [18] | NF_ResNet34 | w/o bias | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 | 78.45 |
| CIFAR-100 | Fixup_ResNet110 | w/ bias | 63.14 | 25.05 | 12.85 | 9.93 | 7.42 | 4.6 | 2.32 | 1.16 | 1.0 | 1.0 | 1.0 |
| CIFAR-100 | Fixup_ResNet110 | w/o bias | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 | 53.75 |
| CIFAR-100 | NF_ResNet101 | w/ bias | 61.44 | 56.21 | 44.77 | 39.10 | 31.00 | 20.68 | 9.56 | 2.79 | 1.03 | 1.00 | 1.00 |
| CIFAR-100 | NF_ResNet101 | w/o bias | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 | 62.51 |
| CIFAR-100 | EfficientNet [40] | w/ bias | 81.94 | 62.44 | 40.42 | 31.81 | 22.68 | 13.22 | 5.39 | 1.25 | 1.00 | 1.00 | 1.00 |
| CIFAR-100 | ViT [10] | w/ bias | 91.48 | 81.58 | 66.11 | 58.55 | 47.98 | 34.11 | 18.49 | 4.54 | 1.08 | 1.00 | 1.00 |
Table 1: Normal neural networks are generally not robust against scalar
multiplication (with the input image), whereas their scalar invariant
counterparts achieve absolute invariance as we expected.
## 4 Experiments
(a) Learning curves of FCNs with and without bias on MNIST.
(b) Learning curves of CNNs with and without bias on Fashion-MNIST.
(c) Learning curves of ResNet34 with and without bias on Imagenette.
(d) Learning curves of ResNet101 with and without bias on CIFAR-100.
Figure 5: Learning curves of normally trained neural networks, and their
scalar invariant counterparts are almost identical, which supports our
argument that removing bias doesn’t impact the expressive capability of models
on image classification tasks.
### 4.1 Scalar invariance evaluation
In this section, we conduct several experiments to investigate the robustness
of normal neural networks and their scalar invariant counterparts under scalar
multiplication with the input. We train the two types of neural networks using
the same configuration except for the option of using bias on some popular
image classification benchmarks. To further show the scalar invariant effect,
we choose the scalar varies in a wide range, from $1$ to $0.0001$; the results
are reported in Table 1. Normal neural networks are generally not robust
against scalar multiplication with the input image. We also notice that models
trained using Fixup are more vulnerable than those trained using NF-ResNets as
the scalar decreases. This may be due to scaling operations on weights
performed by Fixup, which causes the weights to be sensitive to the scale of
the input. Moreover, in the presence of bias, most models could only achieve a
fraction of their original accuracies when the multiplying scalar is $0.01$.
For instance, the state-of-the-art ViT model achieves only $1.00\%$ when the
input is multiplied by $0.01$, an enormous decline from $91.48\%$, its
original performance on CIFAR-100, whereas the scalar invariant counterparts
achieve absolute invariance, as expected. At a scalar of $0.01$, scalar
invariant models outperform SOTA models by around $60\%$, and the gap could be
further extended by fine-tuning or other improvements in training. We plot
some visual examples in Figure 6. We observe that the prediction of normally
trained neural networks changes constantly as the scalar diminishes, whereas
that of scalar invariant networks remains unchanged despite the corresponding
probability also decreasing.
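The sweep behind Table 1 simply re-evaluates the same test set under each
scalar; a minimal sketch (the `model` and `test_loader` objects are
placeholders):

```python
import torch

scalars = [1, 0.25, 0.15, 0.125, 0.1, 0.075, 0.05, 0.025, 0.01, 0.001, 0.0001]

@torch.no_grad()
def accuracy_under_scaling(model, loader, s, device="cpu"):
    """Top-1 accuracy when every test image is multiplied by the scalar s."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(s * images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# for s in scalars:
#     print(s, accuracy_under_scaling(model, test_loader, s))
```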
We report the learning curves of both scalar invariant networks (excluding
Fixup models, due to training issues when the scalar bias is removed) and
their normally trained counterparts in Figure 5. The highly overlapping
training curves indicate that both types of models have comparable expressive
capabilities. This supports our argument that bias can be ignored as a
consequence of the directionality of the image distribution.
Figure 6: W/ bias and w/o bias stand for the predictions of normal and scalar
invariant models, respectively. The prediction of normally trained neural
networks changes constantly as the scalar decreases, whereas that of scalar
invariant networks remains unchanged, even though the probability of the
corresponding class diminishes: the models inherit scalar invariance from
removing bias.
### 4.2 Model bias on the zero image
In this section, we study the models’ bias when predicting the zero image,
i.e., an image with all pixel values equal to zero. From a human’s
perspective, the zero image contains no information, thus maximizing the
information entropy. To be more specific, the zero image could equally likely
be an instance of any class, i.e., it follows a uniform distribution. It is
trivial to show that scalar invariant neural networks share the same bias as
humans, because:
(a) The predicted probability of ResNets and scalar invariant models on the
zero image.
(b) The predicted probability of Inception-v4[39] and scalar invariant models
on the zero image.
(c) The predicted probability of EfficientNet and scalar invariant models on
the zero image.
(d) The predicted probability of ViT and scalar invariant models on the zero
image.
Figure 7: SOTA models are biased when predicting the zero image, whereas
scalar invariant neural networks are unbiased like humans.
$\mathcal{N}(\mathbf{0})=\operatorname*{argmax}_{c}\{\mathcal{A}\circ\mathcal{O}(\mathbf{0})\}=\operatorname*{argmax}_{c}\frac{e^{0}}{\sum_{c\in\mathcal{C}}e^{0}}=\operatorname*{argmax}_{c}\frac{1}{|\mathcal{C}|}$
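This is straightforward to verify numerically: for a bias-free network, the
logits on the zero image are all zero, so Softmax returns the uniform
distribution (a sketch with an illustrative bias-free FCN):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(),
                    nn.Linear(28 * 28, 64, bias=False), nn.ReLU(),
                    nn.Linear(64, 10, bias=False))

zero_image = torch.zeros(1, 1, 28, 28)
logits = net(zero_image)          # all zeros: O(0) = 0
probs = torch.softmax(logits, dim=1)

print(logits.abs().max())         # tensor(0.)
print(probs)                      # uniform: every entry is 1/10
```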
However, for neural networks trained with bias, including most
state-of-the-art models, this property may not hold, as the model may be
biased toward certain classes. Figure 7 reports selected results on models’
bias when predicting the zero image. It is interesting to see that all models
have some degree of bias toward certain classes, and deeper ResNets such as
ResNet-152 are less biased than shallower ones such as ResNet-34, suggesting
that larger models can better adjust their posterior belief (distribution)
after observing data than smaller models. On the other hand, scalar invariant
neural networks have no bias toward any class, as expected. Therefore, one may
consider incorporating zero bias as a strong inductive bias in model
design/selection.
### 4.3 Robustness evaluation
We use visual examples to demonstrate the robustness merit of scalar invariant
neural networks. We mainly investigate interpolation robustness, i.e., an
interpolation of two images of the same class still belongs to that class. We
show that scalar invariant neural networks can correctly predict some
synthesized inputs (interpolations of images) on which normal models fail, as
reported in Figure 8.
Figure 8: The left and right images are from the original dataset, whereas
synthesized/interpolated images are in the middle. For instance, the middle
image in the first row is generated by adding $(\alpha=0.5)$ times the left
image to $(1-\alpha)$ times the right image.
The semantics of these synthesized images are clear to humans, yet they are
labeled incorrectly by a normally trained model. We observe that scalar
invariant neural networks generally perform better on interpolated images than
normal models do; this can be explained by the robustness properties
introduced in Section 3. However, visual examples are not sufficient to
formally prove the models’ robustness on interpolations or convex regions of
the input space. To address this, we need to resort to neural network
verification tools that generate machine-checkable proofs [24, 25, 42].
However, current works mainly focus on verifying models’ robustness under
perturbations of the input with a specific norm (usually $\ell_{\infty}$)
[20, 19], which cannot be easily applied to our case. Thus, we plan to explore
more flexible neural network verification specifications and methods in future
work.
## 5 Conclusion
In this paper, we study scalar multiplication invariance in neural networks.
We prove that, by simply dropping bias terms, the prediction of a neural
network achieves absolute invariance under scalar multiplication of the input
image. Moreover, scalar invariant neural networks can outperform
state-of-the-art models by a large margin, e.g., above $60\%$ on CIFAR-100
under low illumination. Although it is commonly believed that bias improves
models’ performance and thus is always needed, we show that it can be
completely ignored in many image-related tasks such as image classification.
This is because the intrinsic distribution of images incorporates
directionality, which favors zero bias as a strong inductive prior in model
selection. Our experimental results on both small networks, such as simple
FCNs and CNNs, and large models, such as ResNet-101, confirm this argument.
In addition to comparable performance, we show that scalar invariant neural
networks tend to have stronger robustness guarantees than normal neural
networks. This is because scalar invariance allows us to formally verify
certain lines and convex regions of the input space, which is usually
impractical for normal neural networks. We then support this argument using
some visual examples. Driven by their simplicity and effectiveness, we
hypothesize that there exist more interesting properties of scalar invariant
networks, such as interpretability, and plan to investigate them in future
work.
## References
* [1] Naveed Akhtar and Ajmal S. Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410–14430, 2018.
* [2] Laith Alzubaidi, Jinglan Zhang, Amjad J. Humaidi, Ayad Al-dujaili, Ye Duan, Omran Al-Shamma, Jesus Santamaría, Mohammed Abdulraheem Fadhel, Muthana Al-Amidie, and Laith Farhan. Review of deep learning: concepts, cnn architectures, challenges, applications, future directions. Journal of Big Data, 8, 2021.
* [3] Benjamin Bloem-Reddy and Yee Whye Teh. Probabilistic symmetries and invariant neural networks. J. Mach. Learn. Res., 21:90:1–90:61, 2020.
* [4] Jake Bouvrie. Notes on convolutional neural networks. CoRR, 2006.
* [5] Andrew Brock, Soham De, and Samuel L. Smith. Characterizing signal propagation to close the performance gap in unnormalized resnets. In ICLR. OpenReview.net, 2021.
* [6] Andy Brock, Soham De, Samuel L. Smith, and Karen Simonyan. High-performance large-scale image recognition without normalization. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 1059–1071. PMLR, 2021.
* [7] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velickovic. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. CoRR, abs/2104.13478, 2021.
* [8] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. CoRR, abs/1902.06705, 2019.
* [9] Taco Cohen and Max Welling. Group equivariant convolutional networks. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2990–2999, New York, New York, USA, 20–22 Jun 2016. PMLR.
S. Dasgupta: Bredesen Center, University of Tennessee, TN, USA; Quantum Computational Science Group, Oak Ridge National Laboratory, TN, USA (email: <EMAIL_ADDRESS>)
K. E. Hamilton: Quantum Computational Science Group, Oak Ridge National Laboratory, TN, USA (email: <EMAIL_ADDRESS>)
A. Banerjee: Department of Physics and Astronomy, Purdue University, IN, USA (email: <EMAIL_ADDRESS>)
# Designing a NISQ reservoir with maximal memory capacity for volatility forecasting

Samudra Dasgupta, Kathleen E. Hamilton, Arnab Banerjee

Notice: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
###### Abstract
Forecasting the CBOE volatility index (VIX) is a highly non-linear and memory-
intensive task. In this paper, we use quantum reservoir computing to forecast
the VIX using S&P500 (SPX) time-series. Our reservoir is a hybrid quantum-
classical system executed on IBM’s 53 qubit Rochester chip. We encode the SPX
values in the rotation angles and linearly combine the average spin of the
six-qubit register to predict the value of VIX at the next time step. Our results
demonstrate a potential application of noisy intermediate-scale quantum (NISQ)
devices to complex, real-world applications.
###### Keywords:
Quantum Reservoir Computing · Memory Capacity · NISQ · Financial Risk Management · Volatility
## 1 Introduction
Accurate forecasting of financial data is a difficult task: financial data is
massive and contains many correlated dimensions. Risk estimation needs to
strike a careful balance between avoiding catastrophic crises and avoiding
risk altogether. Risk in finance is typically measured in terms of volatility
of returns or close analogues like Value at Risk (McNeil et al (2015)). Risk
can be unconditional, for example the 30-day rolling standard deviation of the
S&P 500 Index (SPX) returns. It can also be conditional, for example Expected
Shortfall which is defined as the average loss given the loss has crossed a
certain threshold. The observed price of options in the markets can help
impute the implied volatility. Developing useful machine learning based models
for financial forecasting tasks requires memory characteristics that balance
long-term and short-term risk.
The field of reservoir computing (RC) (Gerstner et al (2014)) provides a
detailed but flexible road map towards using signal-driven dynamical systems
to process information with non-von Neumann architectures. RC models provide
alternatives to deep learning that can deliver comparable performance while
remaining low-energy and computationally simple. They are capable
of both one-shot and continuous real-time learning and excel at non-linear
function approximation tasks. RC systems have been utilized in many different
applications and can be constructed from many different dynamical systems (see
recent reviews in (Dambre et al (2012)) and (Tanaka et al (2019a))).
Quantum reservoir computing (QRC) uses quantum ensembles for information
processing. In a recent work (Nakajima et al (2019)), quantum spin systems
were used to construct a quantum reservoir and used for predicting non-linear
time series. Reservoirs built using superconducting qubits are demonstrated in
(Chen and Nurdin (2019); Chen et al (2020)); these studies developed the
theoretical underpinning for using dissipative quantum systems as the quantum
counterpart of classical dynamical systems for approximating non-linear
input-output maps.
### 1.1 Related Works
Understanding the computational capacity of quantum reservoirs is an open
question. There have been several approaches to quantum reservoir designs and
numerical experiments show that quantum systems consisting of 5–7 qubits
possess computational capabilities comparable to conventional recurrent neural
networks of 100 to 500 nodes (Fujii and Nakajima (2017)). Additionally, small
quantum systems also demonstrate significant computational capacities (Govia
et al (2020)). A recent study (Kutvonen et al (2020)) has also focused on
optimizing quantum reservoirs for time series forecasting for financial data
(the S&P 500 index).
Our methods are comparable to those of Chen and Nurdin (2019) and Chen et al
(2020), with several significant differences:
* •
We are focused on hybrid quantum-classical reservoirs (which we refer to as
NISQ reservoirs) that incorporate quantum circuits and classical feedback
elements.
* •
We implement systematic design considerations for these NISQ reservoirs as a
computing engine, which should be useful for practitioners.
* •
We address the question of evaluating the memory capacity of various reservoir
topologies and how to select the optimal one.
* •
We handle the case of a ‘real-life signal’ that cannot be expressed by an
analytical deterministic equation. VIX (see Section 3.1) is intrinsically
related to market fluctuations and trader psychology.
### 1.2 Organization and contribution
In this paper we focus on the task of VIX forecasting, using the SPX return as
the independent variable. Given that $\Delta\mathrm{SPX}$ explains less than
$75\%$ of the variance of $\Delta$VIX, we fully acknowledge that a more
sophisticated implementation would use additional economic indicators such as
the unemployment rate, gross domestic product and the federal funds rate.
However, the focus of this paper is demonstrating the design and use of a NISQ
reservoir for forecasting, not pushing the envelope on forecasting accuracy.
We characterize the memory capacity of a six-qubit NISQ reservoir in Section
2. This characterization determines the reservoir design used in Section 3 to
forecast the VIX index. In Section 3, we discuss the relevant properties of
the VIX index, the input encoding methodology, the NISQ reservoir circuit
construction, the use of post-processing and feedback and finally the results
of the forecasting task. Section 4 concludes with a summary of the
contributions of this paper.
## 2 Memory Capacity
Memory capacity (MC) quantifies the ability of the reservoir to forecast at
different time-scales. Before we can design our reservoir, we characterize the
MC of different possible configurations of the reservoir, following the
approach given in (Nakajima et al (2019)). The configuration with the highest
MC will then be used for the time-series prediction task in Section 3.
Let $u_{k}$ be the time-series one is trying to forecast (where $k$ denotes
the time index). Let $\hat{u}_{k-\tau}$ denote the forecast of $u_{k}$ made
using information up to time-step $k-\tau$. The correlation $r_{\tau}$ between
$\hat{u}_{k-\tau}$ and $u_{k}$ measures how well the system performs a
$\tau$-step look-ahead prediction:
$r_{\tau}^{2}=\frac{\mathrm{COV}^{2}(u_{k-\tau},\hat{u}_{k-\tau})}{\sigma^{2}(u_{k-\tau})\,\sigma^{2}(\hat{u}_{k-\tau})},$
(1)
where $\mathrm{COV}(x,y)$ denotes the covariance between $x$ and $y$ and
$\sigma(x)$ denotes the standard deviation of $x$. Intuitively, one expects
$r_{\tau}$ to decrease as $\tau$ grows, since a larger $\tau$ means more of the
recent data is ignored.
The MC is the sum of $r_{\tau}^{2}$ over different values of $\tau$:
$MC=\sum\limits_{\tau=1}^{\tau_{max}}r_{\tau}^{2}.$ (2)
As in Nakajima et al (2019), we use a random sequence $\in[0,1]$ for $u_{k}$
(where $k$ denotes the time index) and fix the maximum value of $\tau$ to be
$\tau_{max}=120$. This is done to ensure that the MC benchmark does not depend
on a specific time-lag or a specific signal structure.
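For concreteness, Eqs. (1) and (2) can be evaluated with a few lines of Python; the following is a minimal sketch (the function and variable names are ours, not taken from the paper's code), given the target series and the per-lag forecasts:

```python
import numpy as np

def memory_capacity(u, u_hat, tau_max=120):
    """Evaluate MC per Eqs. (1)-(2): the sum over lags tau of the squared
    correlation between the target u[k] and its forecast made at k - tau.
    u_hat[tau][k] holds the forecast of u[k] produced tau steps earlier."""
    mc = 0.0
    for tau in range(1, tau_max + 1):
        target, forecast = u[tau:], u_hat[tau][tau:]
        cov = np.cov(target, forecast)[0, 1]
        mc += cov**2 / (np.var(target, ddof=1) * np.var(forecast, ddof=1))
    return mc

# Benchmark input: a uniform random sequence in [0, 1], as in Nakajima et al (2019).
u = np.random.rand(5000)
```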
Figure 1: Schematic of the hybrid quantum-classical reservoir (NISQ reservoir)
system which consists of classical inputs and outputs (grey boxes), classical
computational layers (grey cylinders) and quantum computational layers (white
cylinder).
The NISQ reservoirs used in this study are hybrid quantum-classical systems.
The demarcation between classical and quantum resources is shown in Fig. 1.
The first classical layer transforms the input into a qubit angle encoding.
The quantum layer is used to generate an array of N-qubit spin values. The
final classical layer is used to compute the forecast, and the forecast error.
Both the forecast error and spin values are fed back into the first classical
layer.
We characterize the MC of a 6-qubit NISQ reservoir as a function of recurrent
connections using a sequence of $1+N+\frac{N(N-1)}{2}$ graphs in increasing
order of network connectivity (and hence complexity). The first term in the
sequence is an empty graph on $N$ vertices. The next $N$ terms in the sequence
are sequentially constructed by adding self-loops to each vertex. The next $N$
terms are sequentially constructed by connecting the $N$ vertices into a
simple cycle. Finally, the remaining ($\frac{N(N-1)}{2}-N$) terms of the sequence
are constructed by sequentially connecting vertices until the final circuit is
a fully connected graph with $N$ self-loops. Note that an edge can be realized
between any two nodes of the reservoir if a two-qubit gate is placed between
the qubits in the quantum layer; or if the output of one qubit is fed to
another qubit during the classical pre-processing layer.
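For illustration, the construction of this graph sequence can be sketched in Python with adjacency matrices (the function name and representation are ours):

```python
import numpy as np

def reservoir_graph_sequence(n):
    """Yield adjacency matrices in increasing order of connectivity: the empty
    graph, then n self-loops, then an n-cycle, then the remaining edges until
    the graph is fully connected (1 + n + n(n-1)/2 graphs in total)."""
    adj = np.zeros((n, n), dtype=int)
    yield adj.copy()                                  # empty graph
    for i in range(n):                                # add self-loops one at a time
        adj[i, i] = 1
        yield adj.copy()
    for i in range(n):                                # connect vertices into a simple cycle
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
        yield adj.copy()
    for i in range(n):                                # remaining edges -> fully connected
        for j in range(i + 1, n):
            if adj[i, j] == 0:
                adj[i, j] = adj[j, i] = 1
                yield adj.copy()

assert sum(1 for _ in reservoir_graph_sequence(6)) == 22  # 22 configurations for 6 qubits
```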
Figure 2: Sequence of reservoir complexity circuits: (a) The first term is
always an empty graph on $N$ qubits, (b) The first ($N$) circuits are
generated by adding self-loops, (c) The next ($N$) circuits are generated by
connecting the qubits into a simple cycle, (d) The remaining circuits are
generated by adding edges to fully connect all $N$ qubits.
For a 6-qubit system, 22 configurations are possible. This sequence is shown
in Fig. 2.
Figure 3: MC as a function of reservoir complexity for a 6-qubit reservoir
executed on ibmq_rochester. [Inset] The optimal reservoir topology with self-
loops on 5 qubits.
The MC of each reservoir was evaluated using IBM’s $53$ superconducting qubit
platform (ibmq_rochester) and is shown in Fig. 3. A peak in the MC (within the
bounds of statistical significance of the MC) is observed for reservoirs with
5 self-loops. This reservoir design is then chosen for the information
processing in Section 3.
Figure 4: MC as a function of reservoir complexity for a 6-qubit reservoir
simulated with noiseless qubits.
The same sequence of reservoir topologies were also simulated in IBM Qiskit
(Abraham et al (2019)). The results of the noiseless simulation are shown in
Fig. 4. Comparison with Fig. 3 reveals that the hardware noisiness translates
into higher MC (within the bounds of statistical significance) for circuits
with higher connectivity (leading to higher degree of non-linear dynamics). We
also observe a slower decay in MC for the NISQ reservoir with hardware noise.
This points to a beneficial impact of the noise in today’s NISQ devices.
## 3 VIX forecasting
In the previous section we found the optimal design of the NISQ reservoir
(based on maximal MC value). In this section we will first give more
background for the economic indicator that we are trying to predict. Then, we
will discuss the components of the NISQ reservoir as shown in Fig. 1, tailored
to the VIX forecasting task: (a) input encoding (Section 3.2), (b) a quantum
circuit (Section 3.3) , and (c) forecast and feedback generation (Section
3.4).
In Fig. 5 we show the computational graph associated with this design,
tailored for the VIX forecasting task. The input encoding consists of the
transformation of $\Delta r(t)\rightarrow u(t)$, the quantum circuit generates
the spin values $s_{i}(t)$ and the forecast is generated by the combination of
$s_{i}(t)$.
Figure 5: Computational graph of the 6 qubit reservoir with 5 self-loops:
$\Delta r(t)=\mathrm{SPX}(t)-\mathrm{SPX}(t-1)$, $u(t)$ is the incoming signal
post application of a non-linear transformation, $s_{i}(t)$ is the average
spin state of qubit [i], delta_v(t+1) is the actual value while
pred_delta_v(t+1) is the predicted value. The error residual is denoted by
err(t+1). The residual from time step $t$ is used as feedback to the
reservoir.
### 3.1 VIX index forecasting
The VIX index represents the market’s expectation of volatility in the near
future as implied by SPX index options data. It is disseminated by CBOE on a
real-time basis and modern finance practitioners prefer using VIX for risk
estimation. Its value denotes the expected annualized change in the S&P 500
index over the following 30 days; the methodology is detailed in (CBOE
(2019a)). In short, it is calculated using the CBOE-traded SPX options (which
have a non-zero bid-ask spread) whose expiration falls within the next 23 to 37
days. The classical Black-Scholes model assumes a time-independent
(constant) volatility. However, economists have confirmed that volatility
varies with time (hence the name Stochastic Volatility). Stochastic models
(like GARCH) significantly improve the prediction accuracy against values
observed in the market and are thus valuable in asset pricing (for traders)
and asset management (for risk managers).
Figure 6: (Top): The VIX Index plotted as a function of time from January 2,
1990 through March 24, 2020. The data corresponding to the 2008 recession is
highlighted in the grey shaded region.(Bottom): The SPX returns ($r(t)$)
plotted as a function of time.
In this study we develop our NISQ reservoir to forecast the VIX index using
the SPX index ($\\{r_{t}\\}$) as the independent variable. The entire dataset
spans January 2, 1990 through March 24, 2020 (see Fig. 6). The initial one-
third of the data (from January 1, 1990 to December 31, 1997) was flushed out
to allow the system to stabilize. In Fig. 7 we plot
($\Delta\mathrm{VIX}_{t}=\mathrm{VIX}_{t}-\mathrm{VIX}_{t-1}$) versus
($\Delta\mathrm{SPX}_{t}=\mathrm{SPX}_{t}-\mathrm{SPX}_{t-1}$).
Figure 7: Scatter plot between the daily percentage change in SPX and daily
percentage change in VIX. It should be evident that change in SPX is
correlated (negatively) with change in VIX. This is why we use SPX as the main
input to the reservoir for VIX forecasting in Section 3.1.
These are the relevant data properties, as shown in Figs. 6,7:
* •
VIX is always positive. It is derived from option implied volatility which can
never go negative.
* •
The mean value of the VIX series is approximately 19. It hit an all-time peak
of 82.69 on March 16, 2020. The previous maximum value of 80.86 was reached on
Nov 20, 2008, at the peak of the mortgage crisis (about eight weeks after the
collapse of Lehman Brothers).
* •
The change in VIX is highly correlated with the change in SPX. The correlation
coefficient is approximately $-0.74$ over the entire date range (though it is
much higher during times of crisis). See (CBOE (2019b)) and (Robinson (2018))
for details on why SPX is the primary driver of VIX.
* •
VIX spikes more when SPX suffers a high negative shock compared to a positive
shock of same magnitude. This is referred to as asymmetric volatility in
literature and is driven by behavioral psychology.
* •
VIX exhibits volatility clustering, i.e., volatility is persistently high during
times of high uncertainty and persistently low during times of more certainty.
### 3.2 Input encoding
The reservoir predicts a value for VIX at time (t+1) using SPX data for the
last seven days $(r(t-6)\cdots r(t))$. Our forecasting task uses
($\\{r_{t}\\}$), the sequence of time-dependent S&P500 (SPX) log return values
(Hudson and Gregoriou (2015)):
$r_{t}=\log{\frac{\mathrm{SPX}_{t}}{\mathrm{SPX}_{t-1}}}.$ (3)
In the classical pre-processing layer, these SPX return values are converted
into a vector of rotation angles $\theta(t)$ which will be implemented in the
quantum circuit.
First, the SPX log return values $\\{r_{t}\\}$ are used to construct a
sequence of time difference values:
$\Delta r_{t}=r_{t}-r_{t-1}.$ (4)
A non-linear transformation is applied to $\\{\Delta r_{t}\\}$ to define
$u(t)=1-e^{-(a_{0}+a_{1}I_{t}\Delta r_{t})},$ (5)
where $I_{t}$ is an indicator function
$I_{t}=\begin{cases}1&\Delta r_{t}<0\\ 0&\Delta r_{t}\geq 0.\end{cases}$
The non-linear transformation (Eq. 5) captures the empirical observation that
when returns go negative, volatility spikes more than when they are positive.
This transformation is shown in Fig. 8.
Figure 8: Transformation applied to $\Delta r$ to account for volatility
asymmetry.
The full encoding of the input signal ($u(t)$) into a vector of rotation
angles $\theta_{m}(t)$ uses a heuristic that depends on the transformed input
($u_{m}(t)$), the prediction error $e_{t}$, the qubit register element $m$, and
the average qubit spin $s_{m}(t)$ (see the following section). The values of
$\theta_{m}(t)$ are constrained to the range $[0,\pi/2]$.
$\theta_{m}(t+1)=\begin{cases}\frac{\pi}{2}\left(\alpha
u_{m}(t)+\beta\frac{s_{m}(t)+1}{2}+\gamma e_{t}\right)&m\in[0,4]\\\
\frac{\pi}{2}(\alpha^{\prime}u_{m}(t)+\gamma^{\prime}e_{t})&m=5.\end{cases}$
(6)
For the $6$-qubit reservoir, the parameters in Eq. 6 are:
$\alpha=0.3,\beta=0.3,\gamma=0.4,\alpha^{\prime}=0.6,\gamma^{\prime}=0.4$.
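A minimal Python sketch of this pre-processing layer (Eqs. 3-6) follows; the function name, the clipping implementation, and the values of $a_{0}$ and $a_{1}$ (not specified in the text) are our own:

```python
import numpy as np

def encode(r, s, e_t, a0=1.0, a1=1.0,
           alpha=0.3, beta=0.3, gamma=0.4, alpha_p=0.6, gamma_p=0.4):
    """Map the last 7 SPX log returns r[-7:], the previous average spins s
    (length 6), and the previous forecast error e_t to rotation angles (Eq. 6).
    a0 and a1 (Eq. 5) are illustrative placeholders."""
    dr = np.diff(r[-7:])                              # Eq. 4: the 6 most recent Delta r values
    u = 1.0 - np.exp(-(a0 + a1 * (dr < 0) * dr))      # Eq. 5; (dr < 0) implements I_t
    theta = np.empty(6)
    theta[:5] = (np.pi / 2) * (alpha * u[:5] + beta * (s[:5] + 1) / 2 + gamma * e_t)
    theta[5] = (np.pi / 2) * (alpha_p * u[5] + gamma_p * e_t)
    return np.clip(theta, 0.0, np.pi / 2)             # constrain to [0, pi/2]
```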
### 3.3 Reservoir circuit
Our NISQ reservoir system consists of a quantum circuit with classical
feedback loops. In a classical reservoir the connections between oscillators
are not trained, likewise in our NISQ reservoir the connections between qubits
are not trained.
Figure 9: The 6 qubit quantum circuit executed on ibmq_rochester with
arbitrary rotation angles. The RY gates are shown as
$U3(\theta,\phi=0,\lambda=0)$ rotation gates.
The quantum circuit is shown in Fig. 9. It is constructed using only single
qubit gates and was executed on ibmq_rochester, IBM’s $53$ superconducting
qubit platform (retired October 31, 2020). The six-qubit register was
executed on a subset of hardware qubits selected based on the lowest error
rates at the time of job execution. Each circuit was sampled using $8192$
shots.
Using the vector of angles found from the classical pre-processing (Section
3.2), the vector element $\theta(t)[i]$ is passed as the argument to the RY
gate on qubit $[i]$. The reservoir does not include any two-qubit gates. When
deployed on a NISQ device any interactions between the reservoir nodes are
induced by hardware noise (for example: shifts in the implemented angles,
cross-talk, and readout noise) and feedback of previous output signals as
input.
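A minimal sketch of this quantum layer, assuming the Qiskit API of the paper's era and our own $|0\rangle\to+1$, $|1\rangle\to-1$ spin convention, is:

```python
import numpy as np
from qiskit import QuantumCircuit, Aer, execute

def reservoir_spins(thetas, shots=8192):
    """Apply RY(theta_i) to each qubit (no two-qubit gates), measure all
    qubits, and return the vector of average spins <Z_i>."""
    n = len(thetas)
    qc = QuantumCircuit(n, n)
    for i, theta in enumerate(thetas):
        qc.ry(theta, i)
    qc.measure(range(n), range(n))
    counts = execute(qc, Aer.get_backend("qasm_simulator"),
                     shots=shots).result().get_counts()
    spins = np.zeros(n)
    for bits, c in counts.items():
        for i, b in enumerate(reversed(bits)):        # qubit 0 is the rightmost bit
            spins[i] += c if b == "0" else -c         # |0> -> +1, |1> -> -1 (assumed)
    return spins / shots
```

On hardware, the `Aer` backend would be replaced by the ibmq_rochester provider backend; the classical post-processing is unchanged.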
The output of the reservoir at time t is a vector of average spin values of
each qubit $\mathbf{s}(t)=[s_{0}(t),\cdots,s_{5}(t)]$. Fig. 10 shows the
steady state view of the average spin of the 6 qubits in the register.
Figure 10: Steady state view of the average spin of the 6 qubits in the
register. These signals are linearly combined by an optimized weight vector to
produce the forecast.
### 3.4 Post-processing
These six spin values are linearly combined in a classical post-processing
layer using a six-dimensional, real-valued weight vector ($\mathbf{w}(t)$) to
produce the VIX forecast.
The optimal readout weights are determined by minimizing the mean-square error
(MSE) of the predicted VIX values. Let $\sigma_{t+1}$ represent the actual
value of the VIX at time $t+1$ and $\hat{\sigma}_{t+1}$ the value predicted by
the NISQ reservoir. The forecast, residual, and MSE are given by:
$\begin{split}\hat{\sigma}_{t+1}&=\mathbf{w}(t)\cdot\mathbf{s}(t),\\\
\varepsilon_{t+1}&=\sigma_{t+1}-\hat{\sigma}_{t+1},\\\
\mathrm{MSE}&=\frac{1}{T}\sum\limits_{t=1}^{T}\varepsilon_{t}^{2}.\end{split}$
(7)
The histogram of residual values is shown in Fig. 11; the residuals exhibit
very little bias.
Figure 11: Histogram of the forecasting error. Note that it shows very little
bias i.e. it is centered around zero.
At each time step, ($\mathbf{w}(t)$) is updated using newly available
information. In other words, we find at each time step the $\mathbf{w}(t)$
that gives the closest approximation for the VIX forecast using the measured
spin values.
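As an illustration, one way to realize this update is an ordinary least-squares solve over the history available at time $t$ (a sketch; the paper does not specify the solver, and the small ridge term is our addition for numerical stability):

```python
import numpy as np

def update_weights(S_hist, sigma_hist, ridge=1e-6):
    """Find w(t) minimizing the MSE of Eq. (7) over all (spin vector,
    realized VIX) pairs seen so far. S_hist: (T, 6) array of spin vectors;
    sigma_hist: (T,) array of realized VIX values."""
    A = S_hist.T @ S_hist + ridge * np.eye(S_hist.shape[1])
    return np.linalg.solve(A, S_hist.T @ sigma_hist)

# One-step forecast: sigma_hat = update_weights(S_hist, sigma_hist) @ s_t
```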
As noted in Eq. 6, the residual error at time-step $t$ is fed back into the
reservoir and used to determine the qubit rotation angles at the next
time-step. This provides a negative feedback to our spin-based dynamical
system that helps minimize the error in the output.
### 3.5 Results
In Fig. 13 we plot the one-step-ahead forecasts of the VIX index during the
2008 recession. We also plot the change in VIX in Fig. 12, because for
effective risk management what matters most is the change in volatility.
Figure 12: One step ahead predictions for $\Delta$VIX during the 2008
recession using the NISQ reservoir (red, dashed) compared to the actual values
(black, solid). Figure 13: One step ahead predictions of the VIX index value
during the 2008 recession data. Values generated by the quantum reservoir
(red) and the actual VIX (blue).
## 4 Conclusion
NISQ devices are noisy by definition. Examples of noise sources are qubit
decoherence, gate errors, and readout errors. Such noise can be beneficial in
machine learning related information processing tasks akin to regularization
(Noh et al (2017)). Noise induced regularization helps NISQ reservoirs to be
‘well-behaved’ and avoid taking extreme values in forecasting related tasks.
In this work we are interested in understanding how hardware noise can affect
NISQ reservoir performance. The circuit design is shallow and uses only single
qubit rotation gates. Thus, any interaction between qubits must be mediated by
noise (i.e. cross-talk) or errors induced by the measurement gate. To reliably
utilize noise-induced correlations, the interactions must be significant and
also long-lived in time. Recent studies (Dasgupta and Humble (2020); Hamilton
et al (2020)) have begun to quantify these properties of near-term quantum
devices.
In this study we developed a NISQ reservoir for the task of stochastic
volatility forecasting in finance - a highly non-linear and memory intensive
temporal information processing task which is well-suited for RC (Tanaka et al
(2019b)). Our results show that quantum reservoirs implemented with
shallow circuits can be used for regression-type analysis in empirical finance
and are also adaptable to near-term quantum processors.
Promising avenues of future work include analyzing the performance of the
$\tau$-step look-ahead predictor for $\tau>1$, tuning the MC of the
reservoir to remember historical signal patterns based on a user-defined
appetite (which will lead to a trade-off with forecast accuracy), evaluating
the efficacy of the reservoir in predicting other financial time-series data
and modeling the noisy quantum dynamics accurately to understand the sources
of non-linearity.
## 5 Acknowledgements
This research used quantum computing resources of the Oak Ridge Leadership
Computing Facility, which is a DOE Office of Science User Facility supported
under Contract DE-AC05-00OR22725. This work was partially supported as part of
the ASCR QCAT Program at Oak Ridge National Laboratory under FWP #ERKJ347.
Part of the support for SD and AB came from College of Science, Purdue
University.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* Abraham et al (2019) Abraham H, Akhalwaya IY, Aleksandrowicz G, Alexander T, et al (2019) Qiskit: An open-source framework for quantum computing. DOI 10.5281/zenodo.2562110
* CBOE (2019a) CBOE CGM (2019a) CBOE VIX whitepaper. Tech. rep., CBOE, Chicago, Illinois, URL https://www.cboe.com/micro/vix/vixwhite.pdf, accessed Feb 21, 2020
* CBOE (2019b) CBOE CGM (2019b) The relationship of the SPX and the VIX index. URL www.cboe.com/products/vix-index-volatility/vix-options-and-futures/vix-index/, accessed Mar 27, 2020
* Chen and Nurdin (2019) Chen J, Nurdin HI (2019) Learning nonlinear input–output maps with dissipative quantum systems. Quantum Information Processing 18(7):198
* Chen et al (2020) Chen J, Nurdin HI, Yamamoto N (2020) Temporal information processing on noisy quantum computers. arXiv preprint arXiv:200109498
* Dambre et al (2012) Dambre J, Verstraeten D, Schrauwen B, Massar S (2012) Information processing capacity of dynamical systems. Scientific reports 2(1):1–7
* Dasgupta and Humble (2020) Dasgupta S, Humble TS (2020) Characterizing the stability of nisq devices. 2008.09612
* Farkaš et al (2016) Farkaš I, Bosák R, Gergel’ P (2016) Computational analysis of memory capacity in echo state networks. Neural Networks 83:109–120
* Fujii and Nakajima (2017) Fujii K, Nakajima K (2017) Harnessing disordered-ensemble quantum dynamics for machine learning. Physical Review Applied 8(2):024030
* Gerstner et al (2014) Gerstner W, Kistler WM, Naud R, Paninski L (2014) Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press
* Ghosh et al (2019) Ghosh S, Opala A, Matuszewski M, Paterek T, Liew TC (2019) Quantum reservoir processing. npj Quantum Information 5(1):1–6
* Govia et al (2020) Govia L, Ribeill G, Rowlands G, Krovi H, Ohki T (2020) Quantum reservoir computing with a single nonlinear oscillator. arXiv preprint arXiv:200414965
* Hamilton et al (2020) Hamilton KE, Kharazi T, Morris T, McCaskey AJ, Bennink RS, Pooser RC (2020) Scalable quantum processor noise characterization. arXiv preprint arXiv:200601805
* Hudson and Gregoriou (2015) Hudson RS, Gregoriou A (2015) Calculating and comparing security returns is harder than you think: A comparison between logarithmic and simple returns. International Review of Financial Analysis 38:151–162
* Inubushi and Yoshimura (2017) Inubushi M, Yoshimura K (2017) Reservoir computing beyond memory-nonlinearity trade-off. Scientific reports 7(1):1–10
* Kia et al (2017) Kia B, Lindner JF, Ditto WL (2017) Nonlinear dynamics as an engine of computation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375(2088):20160222
* Kutvonen et al (2020) Kutvonen A, Fujii K, Sagawa T (2020) Optimizing a quantum reservoir computer for time series prediction. Scientific reports 10(1):1–7
* McNeil et al (2015) McNeil AJ, Frey R, Embrechts P (2015) Quantitative risk management: concepts, techniques and tools-revised edition. Princeton University Press
* Nakajima et al (2019) Nakajima K, Fujii K, Negoro M, Mitarai K, Kitagawa M (2019) Boosting computational power through spatial multiplexing in quantum reservoir computing. Physical Review Applied 11(3):034021
* Noh et al (2017) Noh H, You T, Mun J, Han B (2017) Regularizing deep neural networks by noise: Its interpretation and optimization. In: Advances in Neural Information Processing Systems, pp 5109–5118
* Robinson (2018) Robinson P (2018) A guide to SP500 VIX index. URL www.dailyfx.com/sp-500/guide-to-sp-500-vix-index.html, accessed Mar 27, 2020
* Tanaka et al (2019a) Tanaka G, Yamane T, Héroux JB, Nakane R, Kanazawa N, Takeda S, Numata H, Nakano D, Hirose A (2019a) Recent advances in physical reservoir computing: A review. Neural Networks 115:100–123
* Tanaka et al (2019b) Tanaka G, Yamane T, Héroux JB, Nakane R, Kanazawa N, Takeda S, Numata H, Nakano D, Hirose A (2019b) Recent advances in physical reservoir computing: A review. Neural Networks 115:100 – 123, DOI https://doi.org/10.1016/j.neunet.2019.03.005, URL http://www.sciencedirect.com/science/article/pii/S0893608019300784
## Appendix A Appendix
### A.1 High level overview of reservoir computing
Classical reservoir computing (RC) relies on a reservoir of randomly connected
oscillators. The connections between the oscillators in the reservoir are not
trained. In this computational framework, inputs are mapped to a high
dimensional space and the output from the high dimensional state is trained to
predict the desired function using a simple method like linear regression. RC
using a simple readout is suited to low-cost real-time computing history
dependent dynamical responses to external inputs. Let $\mathbf{x}(n)$ denote
the reservoir state vector:
$\mathbf{x}(n)=\begin{bmatrix}x_{0}(n)\\\ x_{1}(n)\\\ \vdots\\\
x_{N-1}(n)\end{bmatrix}$ (8)
Here each $x_{i}$ represents the state of a node in the reservoir. This state
vector undergoes a non-linear evolution in time.
Quantum Reservoir Computing (QRC) is a new, alternative paradigm for
information processing using quantum physics. It exploits natural quantum
dynamics of ensemble systems for machine learning. The key is to find an
appropriate form of physics that exhibits rich dynamics, thereby allowing us
to outsource a part of the computation. There have been several applications
of QRC, most notably time-dependent signal processing, speech recognition, NLP,
sequential motor control of robots, and stock market prediction. QRC does not
require sophisticated quantum gates (the natural dynamics suffices) and is thus
highly feasible on near-term hardware. Numerical experiments show that quantum systems
consisting of 5–7 qubits possess computational capabilities comparable to
conventional recurrent neural networks of 100 to 500 nodes (Fujii and Nakajima
(2017)).
What are the sufficient criteria for non-von Neumann architectures like
brain-inspired reservoir computers? We do not know yet. Unlike traditional
neural networks, we do not understand the guiding principles of reservoir
design for high-performance information processing. Leveraging the work of
several researchers in this field, we give a brief overview here of the
considerations which seem to matter the most when using a reservoir computer
for time-series forecasting.
1. 1.
Common Signal Induced Synchronization: If the reservoir has two different
initial states $s(t_{0})$ and $\hat{s}(t_{0})$ and is provided with the same
input stimuli $\\{u(t)\\}_{t\geq t_{0}}$, it must satisfy
$||s(t)-\hat{s}(t)||\rightarrow 0\textrm{ as }t\rightarrow\infty.$ (9)
Another way of stating this is that the reservoir must have fading memory
(also known as the echo state property in the literature): the outputs of the dynamical
system should stay close if the corresponding inputs are close in recent times
(Inubushi and Yoshimura (2017)). This can be viewed as a consistency or
convergence criterion, it ensures that any computation performed by the
reservoir is independent of its initial condition.
2. 2.
Reservoir Dimensionality: A reservoir should have adequate (preferably
exponential in number of nodes) linearly independent internal variables. The
number of linearly independent variables of the NISQ reservoir (the Hilbert
space dimension) gives an upper limit on the computational capacity. As noted
in (Ghosh et al (2019)) prediction accuracy improves as you increase the
number of nodes in the system.
3. 3.
Adequate Memory: A reservoir can have memory of past inputs (Farkaš et al
(2016)). Using a one qubit reservoir for simplicity, let’s understand how
memory manifests in a dynamical system. Suppose $u(t)$ and $\hat{u}(t)$ are
two identical time series, except for a small perturbation at $t=t_{0}-1$:
$\begin{split}&\hat{u}(t_{0}-1)=u(t_{0}-1)+\Delta\textrm{, for }t=t_{0}-1,\\\
&\hat{u}(t)=u(t)\textrm{, for all }t\neq t_{0}-1.\end{split}$
When we feed $u(t)$ or $\hat{u}(t)$ into the quantum circuit, we get the spin
time series $\\{s(t)\\}$ and $\\{\hat{s}(t)\\}$ respectively. If $\delta
s(t)=s(t)-\hat{s}(t)$ denotes the difference between the outputs $s(t)$ and
$\hat{s}(t)$, then we say the reservoir has memory when $\delta s(t)$ and
$\delta s(0)$ are related (i.e. $\delta s(t)$ can provide information about
$\delta s(0)$). Higher mutual information between $\delta s(t)$ and $\delta
s(0)$ implies higher MC. A formal proof is given in (Inubushi and Yoshimura
(2017)). A linear circuit has higher MC as $\delta s(t)$ is strongly
correlated with $\delta s(0)$. Thus high degree of linearity is more suitable
for forecasting tasks which need to recall historical patterns. This implies
that to introduce linear elements in the NISQ reservoir we will need to
introduce ‘self-loops’ in the spin-system.
4. 4.
Response Separability: The separation property is the reservoir’s capability
to generate dynamics sufficiently rich to distinguish between any
two different input sequences. This is important because it is not enough that
the reservoir is excitable by the input sequence you care about. It should be
excitable by any distinguishable inputs and the (input history dependent)
response should be adequately distinguishable (Tanaka et al (2019b)).
5. 5.
Adequate Non-linearity: Non-linearity is required for effective functioning of
reservoir computers to address the ‘linearly inseparable problem’ (Kia et al
(2017)). A non-linear transformation is mandatory for tasks such as
classification by support vector machines. This property turns out to be
crucial for achieving universal computing. However, non-linearity also
degrades memory. Thus a careful trade-off is required between the linear and
non-linear elements of the circuit.
6. 6.
Edge Density: Edge density is a system-level metric (as opposed to a node-level
metric) that is an important driver of the predictive power achieved by a
hybrid reservoir. We quantitatively define edge density as the ratio of the
total number of edges present in the reservoir configuration to the total
number of possible edges. A discussion on how heightened non-linearity in the
system due to increased connectivity leads to MC degradation can be found in
(Inubushi and Yoshimura (2017)).
7. 7.
Feedback Strength: To be an effective forecasting engine, the reservoir has to
strike a balance between two competing aims: memorizing past patterns (which
is related to over-fit reduction) and reducing mean square error (which is
related to fit accuracy). The former requirement asks for the ‘state signal’
to play a dominant role (as the reservoir memorizes through the time evolution
of its quantum spin state) while the latter pushes the ‘incoming signal
pattern’ to have more weighting. This tunable parameter can be used in the
system evolution specification.
8. 8.
Noise induced regularization: It is well-known that it is possible to use
dissipative quantum systems as universal function approximators for temporal
information processing even in the presence of noise. Such noise can be
beneficial in machine learning related information processing tasks. It plays
a role akin to regularization (Noh et al (2017)). The phrase ‘to regularize’
means ‘to make more acceptable’. Function approximators become more acceptable
when they ‘train’ on ’noisy’ data and thereby avoid over-fitting. Thus noise
induced regularization helps NISQ reservoirs to be ‘well-behaved’ and avoid
taking extreme values in forecasting related tasks.
### A.2 Results for NARMA benchmarking
The Non-linear Auto-regressive Moving Average (NARMA) series is a forecasting
task that is commonly employed as a performance benchmark. It has a high
degree of non-linearity and dependence on long time lags, leading to
significant memory requirements in the forecasting model. We use one step
ahead forecasting of the NARMA5 series to benchmark the performance of our
quantum reservoir construction. This benchmark was executed using simulated
noisy qubits with the noise modeling capabilities available in Qiskit (Abraham
et al (2019)). The NARMA5 series is a temporal sequence defined by:
$\begin{split}v_{t+1}=&\,\alpha v_{t}+\beta v_{t}(v_{t}+v_{t-1}+v_{t-2}+v_{t-3}+v_{t-4})+\gamma s_{t-4}s_{t}+\delta,\\ s_{t}=&\,\mu\left[\sin\frac{2\pi f_{0}t}{T}\sin\frac{2\pi f_{1}t}{T}\sin\frac{2\pi f_{2}t}{T}+1\right].\end{split}$ (10)
The parameters in Eq. 10 are:
$\alpha=0.30,\beta=0.05,\gamma=1.50,\delta=0.10,\mu=0.10$, and
$f_{0}=2.11,f_{1}=3.73,f_{2}=4.11,T=100$. These values were originally used in
Fujii and Nakajima (2017) to benchmark quantum reservoirs.
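A minimal Python sketch generating the NARMA5 series from Eq. 10 with these parameters (the function name is ours):

```python
import numpy as np

def narma5(n_steps=5000, alpha=0.30, beta=0.05, gamma=1.50, delta=0.10,
           mu=0.10, f0=2.11, f1=3.73, f2=4.11, T=100):
    """Generate the driving input s and the NARMA5 target series v (Eq. 10)."""
    t = np.arange(n_steps)
    s = mu * (np.sin(2 * np.pi * f0 * t / T) * np.sin(2 * np.pi * f1 * t / T)
              * np.sin(2 * np.pi * f2 * t / T) + 1)
    v = np.zeros(n_steps)
    for k in range(4, n_steps - 1):
        v[k + 1] = (alpha * v[k]
                    + beta * v[k] * v[k - 4:k + 1].sum()   # v_t + ... + v_{t-4}
                    + gamma * s[k - 4] * s[k] + delta)
    return s, v
```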
Figure 14: One-step ahead predictions for the NARMA-5 time-series with the
quantum reservoir executed with noisy simulation in Qiskit. Figure 15:
Histogram of normalized mean square error for the NARMA5 prediction task.
Fig. 14 shows the comparison of realized vs predicted time-series for the
NARMA5 task. Only a zoomed-in snapshot of the 5000-point-long sequence is
shown. The initial one-third of the data was flushed out to allow the
system to stabilize. The same optimal configuration that was utilized for VIX
forecasting (as discussed in the main text) was also employed here. Our
hybrid reservoir achieved an NMSE of $6\times 10^{-4}$. One can compare this
to the NMSE obtained in (Fujii and Nakajima (2017)) which lied in the range
$[3\times 10^{-3},7.6\times 10^{-6}]$. Thus, the benchmark performance of our
hybrid reservoir is comparable to the benchmark performance found in (Fujii
and Nakajima (2017)). As in the VIX prediction task, we observe low bias in
the prediction error (see Fig. 15).
### A.3 Memory capacity of larger reservoirs
In the main text we focused on reservoirs with $6$ qubits. We also tested the
performance for quantum registers of different sizes. As an example, the
memory capacity (MC) characterization described in Section 2 is repeated for
an 8-qubit hybrid reservoir. The edge densities follow the same
sequence as shown in Fig. 2, but for an 8-qubit reservoir there are now $37$
graphs. In Fig. 16 we again observe a peak in the MC that occurs for the
reservoir with $n-1=7$ self-loops.
Figure 16: Variation of Memory Capacity with reservoir complexity for an
8-qubit quantum register on ibmq_rochester.
11institutetext: S. Azimi 22institutetext: Department of Chemistry, Brooklyn
College of the City University of New York
PhD Program in Biochemistry, Graduate Center of the City University of New
York 33institutetext: J. Z. Wu 44institutetext: Department of Chemistry,
Brooklyn College of the City University of New York
PhD Program in Chemistry, Graduate Center of the City University of New York
55institutetext: S. Khuttan 66institutetext: Department of Chemistry, Brooklyn
College of the City University of New York
PhD Program in Biochemistry, Graduate Center of the City University of New
York 77institutetext: T. Kurtzman 88institutetext: Department of Chemistry,
Lehman College of the City University of New York
PhD Program in Chemistry, Graduate Center of the City University of New York
PhD Program in Biochemistry, Graduate Center of the City University of New
York 99institutetext: N. Deng 1010institutetext: Department of Chemistry and
Physical Sciences, Pace University, New York, New York 1111institutetext: E.
Gallicchio 1212institutetext: Department of Chemistry, Brooklyn College of the
City University of New York
PhD Program in Chemistry, Graduate Center of the City University of New York
PhD Program in Biochemistry, Graduate Center of the City University of New
York
1212email<EMAIL_ADDRESS>
# Application of the Alchemical Transfer and Potential of Mean Force Methods
to the SAMPL8 Host-Guest Blinded Challenge
Solmaz Azimi Joe Z. Wu Sheenam Khuttan Tom Kurtzman Nanjie Deng Emilio
Gallicchio
###### Abstract
We report the results of our participation in the SAMPL8 GDCC Blind Challenge
for host-guest binding affinity predictions. Absolute binding affinity
prediction is of central importance to the biophysics of molecular association
and pharmaceutical discovery. The blinded SAMPL series have provided an
important forum for assessing the reliability of binding free energy methods
in an objective way. In this blinded challenge, we employed two binding free
energy methods, the newly developed alchemical transfer method (ATM) and the
well established potential of mean force (PMF) physical pathway method, using
the same setup and force field model. The calculated binding free energies
from the two methods are in excellent quantitative agreement. Importantly, the
results from the two methods were also found to agree well with the
experimental binding affinities released subsequently, with an $R^{2}$ of 0.89
(ATM) and 0.83 (PMF). Given that the two free energy methods are based on
entirely different thermodynamic pathways, the close agreement between the
results from the two methods and their general agreement with the experimental
binding free energies are a testament to the high quality achieved by
theory and methods. The study provides further validation of the novel ATM
binding free energy estimation protocol and paves the way to further
extensions of the method to more complex systems.
## 1 Introduction
The Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) series
of community challenges [geballe2010sampl2; mobley2014blind; amezcua2021sampl7]
have been organized to validate computational methods of molecular solvation
and binding in an unbiased way. SAMPL participants are asked to quantitatively
predict experimental measurements that are publicly disclosed only after the
predictions are submitted. The format of the challenges allows the robust
assessment of computational methods and has significantly contributed to
their advancement [mobley2017predicting]. As computational models of small
molecule binding to protein receptors increasingly emerge as important
elements of structure-based drug discovery [Jorgensen2009; armacost2020novel],
it is critical that the reliability of these models is independently assessed
and validated. We have contributed to several editions of the SAMPL challenges
to validate the ability of our computational models to accurately predict
host-guest and protein-ligand binding affinities [Gallicchio2012a;
Gallicchio2014octacid; GallicchioSAMPL4; deng2016large; pal2016SAMPL5].
In this work, we apply two conceptually orthogonal yet equivalent binding free
energy estimation methods, the Alchemical Transfer Method
(ATM) [wu2021alchemical] and the Potential of Mean Force (PMF) method
[deng2018comparing], to the SAMPL8 GDCC challenge set
(github.com/samplchallenges/SAMPL8/tree/master/host_guest/GDCC). The
modeled predictions are tested against each other, as well as against the
blinded experimental binding free energies measured by the Gibb Group
[suating2020proximal]
(github.com/samplchallenges/SAMPL8/blob/master/host_guest/Analysis/ExperimentalMeasurements/Final-Data-Table-031621-SAMPL8.docx).
In principle, computational models should yield equivalent binding free energy
predictions as long as they are based on the same chemical model and physical
description of inter-atomic interactions. By ensuring consistency between two
independent computational estimates, we can achieve an increased level of
confidence in the theoretical accuracy of the models and in the correctness of
their implementation. Furthermore, by comparing the computational predictions
to the experimental measurements in a blinded, unbiased fashion, we can assess
the predictive capability that can be expected of the models in actual
chemical applications.
While a variety of empirical methods are commonly used to model the binding
affinities of molecular complexes [sledz2018protein; seidel2020applications],
here we are concerned with methods based on physical models of inter-atomic
interactions and a rigorous statistical mechanics theory of the free energy of
molecular binding [Gilson:Given:Bush:McCammon:97; Gallicchio2011adv;
cournia2020rigorous]. Binding free energy methods are classified as physical or
alchemical depending on the nature of the thermodynamic path employed to
connect the unbound to the bound states of the molecular complex for computing
the reversible work of binding [Gallicchio2021binding]. Physical pathway methods
define a physical path in coordinate space in which the reversible work for
bringing the two molecules together is calculated. Conversely, alchemical
methods connect the bound and unbound states by a series of artificial
intermediate states in which the ligand is progressively decoupled from the
solution environment and coupled to the receptor.
In this work, we compare the results of the PMF method [deng2018comparing], a
physical pathway method, to those of the ATM alchemical method [wu2021alchemical]
on identically prepared molecular systems. Because free energy is a
thermodynamic state function, binding free energy estimates should be
independent of the specific path employed, whether physical or alchemical.
Obtaining statistically equivalent estimates of the binding free energies
using these two very different thermodynamic paths constitutes a robust
validation of both methods. The very recently developed ATM, in particular,
benefits from the backing of the more established PMF method in this
application.
This paper is organized as follows. We first review the PMF and ATM methods,
describe the host-guest systems included in the SAMPL8 GDCC challenge, and
provide the system setup and simulation details of our free energy
calculations. We then present the binding free energy estimates we obtained
with the PMF and ATM approaches and compare them to each other and with the
experimental measurements that were disclosed only after the predictions were
submitted to the SAMPL8 organizers. Overall, the work shows that the ATM and
PMF methods provide consistent binding free energy estimates that, in
conjunction with the force field model employed here, are in statistical
agreement with experimental observations.
## 2 Theory and Methods
### 2.1 The Potential of Mean Force Method
The Potential of Mean Force method, hereon PMF, employed in this work is a
physical binding pathway approach fully described in reference 13. Here, we
briefly summarize the statistical mechanics basis of the method.
Implementation details specific to this work are described in the
Computational Details section.
The PMF method estimates the standard free energy of binding as the sum of the
free energy changes of the following processes:
1. 1.
The transfer of one ligand molecule from an ideal solution at the standard
concentration $C^{\circ}=1M$ to a region in the solvent bulk of volume equal
to the volume of the receptor binding site, followed by the imposition of
harmonic restraints that keep the ligand in a chosen reference binding
orientation. The free energy term corresponding to this process, denoted as
$\Delta G^{\rm bulk}_{\rm restr}$, is evaluated analytically.
2. 2.
The transfer of the ligand molecule from the solvent bulk to the receptor
binding site along a suitable physical pathway (see Computational Details).
The free energy change along this pathway is described by a potential of mean
force parameterized by the distance between two reference atoms of the ligand
and the receptor (Figure 1). The free energy change for this process, denoted
by $w(r_{\rm min})-w(r^{\ast})$, is given by the value at the minimum of the
potential of mean force relative to the value in the bulk.
3. 3.
$\Delta G_{\rm vibr}$ is related to the ratio of the configurational partition
functions of the ligand within the binding site of the receptor vs. when it is
harmonically restrained at the bulk location $r^{\ast}$.
4. 4.
The release of the harmonic restraints while the ligand is bound to the
receptor. The free energy change for this process, denoted by $-\Delta G_{\rm
restr}^{\rm bound}$, is evaluated by Bennett’s Acceptance Ratio method (BAR).
Hence, the PMF estimate of the free energy of binding is given by
$\Delta G^{\circ}_{b}=\Delta G^{\rm bulk}_{\rm restr}+[w(r_{\rm
min})-w(r^{\ast})]+\Delta G_{\rm vibr}-\Delta G_{\rm restr}^{\rm bound}$ (1)
Additional computational details and parameters used in this work to implement
the PMF calculations are described in the Computational Details section.
Figure 1: Schematic of Potential of Mean Force (PMF) method. From left to
right, the figure represents the physical pathway that the ligand undergoes
from the bound to unbound state. Shown above is a sequence of 3 snapshots
representing 3 of the 20 umbrella windows, where the ligand gets pulled at
varying distances along the physical pathway away from the host (through the
use of reference atoms assigned to both the ligand and host). The red dots
represent the oxygen atoms of water molecules. The big bulky molecule
represents the TEMOA host, while the small molecule represents the G1 guest.
### 2.2 The Alchemical Transfer Method
The Alchemical Transfer Method, hereon ATM, is a recently-developed method to
compute the absolute binding free energy of molecular complexes. The method is
fully described in reference 12. Here, we give only a brief overview of ATM,
particularly focusing on the aspects specific to this work. Further
implementation details are described in the Computational Details section.
The standard free energy of binding $\Delta G^{\circ}_{b}$, defined as
the difference in free energy between the bound complex and the unbound
components, is decomposed as $\Delta G^{\circ}_{\rm b}=\Delta G^{\circ}_{\rm
site}+\Delta G^{\ast}_{b}$. ATM computes the excess component of the binding free energy,
$\Delta G^{\ast}_{b}$, defined as the reversible work for transferring the
ligand from a region of volume $V_{\rm site}$ in the solvent bulk to a region
of the same volume in the receptor binding site [Gallicchio2011adv]. The
standard free energy of binding is given by the excess component plus the
ideal component, $\Delta G^{\circ}_{\rm site}=-k_{B}T\ln C^{\circ}V_{\rm
site}$, which corresponds to the free energy change of transferring one ligand
molecule from an ideal solution at the standard concentration $C^{\circ}=1M$
to a region in the solvent bulk of volume that is equal to the volume of the
receptor binding site, $V_{\rm site}$ [Gilson:Given:Bush:McCammon:97]. The
concentration-dependent ideal term is computed analytically and the excess
component is computed by ATM using numerical molecular simulations described
in Computational Details and below.
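To make the ideal term concrete, the following minimal Python sketch (ours, not the authors' code) evaluates $\Delta G^{\circ}_{\rm site}$ for a spherical site of radius 4.5 Å at 300 K, the values quoted later in Computational Details:

```python
# Sketch of the analytical ideal term, Delta G_site = -kB*T*ln(C0 * V_site).
import math

kB = 0.0019872  # Boltzmann constant, kcal/(mol K)
T = 300.0       # temperature, K
C0 = 1.0 / 1660.5                            # 1 M standard concentration, molecules per A^3
V_site = (4.0 / 3.0) * math.pi * 4.5 ** 3    # volume of a 4.5 A radius sphere, A^3

dG_site = -kB * T * math.log(C0 * V_site)
print(f"Delta G_site = {dG_site:.2f} kcal/mol")  # ~0.88, matching the 0.87 kcal/mol
                                                 # quoted below up to rounding of constants
```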
In ATM, the transfer of the ligand from the solvent bulk to the receptor
binding site is carried out in two alchemical steps that connect the bound and
unbound end states to one alchemical intermediate (Figure 2), in which the
ligand molecule interacts equally with both the receptor and the solvent bulk
at half strength. The potential energy function of the alchemical intermediate
is defined as
$U_{1/2}(x_{S},x_{L})=\frac{1}{2}U(x_{S},x_{L})+\frac{1}{2}U(x_{S},x_{L}+h)\,,$
(2)
where $x_{S}$ denotes the coordinates of the atoms of the receptor and of the
solvent, $x_{L}$ denotes the coordinates of the atoms of the ligand while in
the receptor binding site, and $h$ is the constant displacement vector that
brings the atoms of the ligand from the receptor site to the solvent bulk
site. In this scheme, $U(x_{S},x_{L})$ is the potential energy of the system
when the ligand is in the binding site, $U(x_{S},x_{L}+h)$ is the potential
energy after translating the ligand rigidly into the solvent bulk, and
$U_{1/2}(x_{S},x_{L})$ is the hybrid alchemical potential given by the average
of the two. In the alchemical intermediate state, receptor atoms and solvent
molecules interact with the ligand at half strength but at both ligand
locations. Similarly, the forces between the ligand atoms and the receptor
atoms and solvent molecules at the intermediate state are averages of the
forces exerted at the two distinct ligand locations. As discussed in reference
12, the ATM alchemical intermediate plays a role analogous to the vacuum
intermediate state in the conventional double-decoupling method (17), but
without fully dehydrating the ligand.
Figure 2: The Alchemical Transfer Method (ATM) involves two simulation legs,
which, in total, transfer the ligand from the solvent bulk to the binding site
of the receptor. The two legs connect the bound and unbound end states through
an alchemical intermediate that involves the ligand molecule interacting
equally with both the receptor and the solvent bulk at half strength. Here,
the receptor is the TEMOA host and the ligand is the G4 guest. The green box
represents the solvent box with water molecules designated in blue. In the
TEMOA structure, carbon atoms are represented in cyan and oxygen atoms in red.
The bound and unbound states of the complex are connected to the common
intermediate by means of alchemical potentials of the form
$U_{\lambda}(x)=U_{0}(x)+\lambda u_{\rm sc}[u(x)],$ (3)
where $U_{0}(x)$ denotes the potential energy function of the initial state,
which is either $U(x_{S},x_{L})$, corresponding to the bound complex in Leg 1
(Figure 2), or $U(x_{S},x_{L}+h)$, corresponding to Leg 2 (Figure 2),
$\lambda$ is a progress parameter that goes from $0$ to $1/2$,
$u(x)=U_{1}(x)-U_{0}(x)$ (4)
is the binding energy function (21). In Equation 4, $U_{1}(x)$
denotes the potential energy function of the end state which is either
$U(x_{S},x_{L}+h)$, corresponding to the unbound complex in Leg 1 of Figure 2,
or $U(x_{S},x_{L})$, corresponding to the bound complex in Leg 2 (Figure 2).
Finally,
$u_{\rm sc}(u)=u\,;\quad u\leq u_{c}$ (5)
$u_{\rm sc}(u)=(u_{\rm max}-u_{c})f_{\rm sc}\left[\frac{u-u_{c}}{u_{\rm max}-u_{c}}\right]+u_{c}\,;\quad u>u_{c}$ (6)
with
$f_{\rm sc}(y)=\frac{z(y)^{a}-1}{z(y)^{a}+1}\,,$ (7)
and
$z(y)=1+2y/a+2(y/a)^{2}$ (8)
is a soft-core perturbation function that avoids singularities near the
initial states of each leg ($\lambda=0$). The parameters of the soft-core
function, $u_{\rm max}$, $u_{c}$, and $a$ used in this work are listed in
Computational Details.
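The soft-core cap is straightforward to evaluate; the sketch below (ours, written directly from Eqs. (3)-(8) rather than taken from the ATM implementation) uses the parameter values quoted in the text:

```python
# Soft-core perturbation function of Eqs. (5)-(8) and the alchemical
# potential of Eq. (3); all energies in kcal/mol.
def u_sc(u, u_max=300.0, u_c=100.0, a=1.0 / 16.0):
    if u <= u_c:                                    # Eq. (5): unchanged below the cutoff
        return u
    y = (u - u_c) / (u_max - u_c)
    z = 1.0 + 2.0 * y / a + 2.0 * (y / a) ** 2      # Eq. (8)
    f_sc = (z ** a - 1.0) / (z ** a + 1.0)          # Eq. (7)
    return (u_max - u_c) * f_sc + u_c               # Eq. (6)

def U_lambda(U0, U1, lam):
    # Eq. (3), with the binding energy function u = U1 - U0 of Eq. (4)
    return U0 + lam * u_sc(U1 - U0)

print(u_sc(50.0))   # 50.0: below u_c the perturbation energy is unchanged
print(u_sc(1e6))    # ~224: large repulsive energies are smoothly damped,
                    # approaching u_max = 300 only asymptotically
```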
The free energy change for each leg is obtained by multi-state thermodynamic
reweighting (22) using the perturbation energies $u_{\rm sc}[u(x)]$
collected during the molecular dynamics runs at various values of $\lambda$.
As illustrated by the thermodynamic cycle in Figure 2, the excess component of
the binding free energy is obtained by the difference of the free energies of
the two legs:
$\Delta G^{\ast}_{b}=\Delta G_{2}-\Delta G_{1}\,.$ (9)
Because the end states of ATM are similar to those of the PMF method summarized
above, the two methods compute the same free energy of binding. However, each
employs a different thermodynamic path. The PMF method progressively displaces
the ligand from the binding site to the bulk along a physical path, whereas
ATM employs an unphysical alchemical path, in which the ligand is displaced
directly from the binding site to the solvent bulk.
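As a concrete illustration of this bookkeeping (a sketch using the TEMOA-G1 leg free energies reported later in Table 3), the standard binding free energy follows from Eq. (9) plus the ideal term:

```python
# Combining the two ATM leg free energies (Eq. 9) with the ideal term;
# TEMOA-G1 values from Table 3, in kcal/mol.
dG_1, dG_2, dG_site = 53.27, 45.69, 0.87

dG_excess = dG_2 - dG_1              # Eq. (9): -7.58
dG_standard = dG_site + dG_excess    # -6.71, the value reported in Table 3
print(dG_excess, dG_standard)
```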
### 2.3 SAMPL8 Systems
The chemical structures of the two hosts and five guest molecules are shown in
Fig. 3. Both hosts, TEETOA and TEMOA, are octa-acids that carry a net charge
of -8 at the pH value of 11.5 used in the experiment. With the exception of G2
(4-bromophenol), the guests are carboxylate derivatives; all guests except the
protonated form of G2 (namely G2P) are negatively charged at the same pH. The computational
calculations employed the initial host and guest structure files provided in
the SAMPL8 dataset found at
https://github.com/samplchallenges/SAMPL8/tree/master/host_guest/GDCC.
Figure 3: Superimposed benchmark systems in this study. The two hosts,
tetramethyl octa acid (TEMOA) and tetraethyl octa acid (TEETOA), are shown in
licorice representation, with light gray corresponding to TEETOA and dark gray
to TEMOA. Both light and dark gray represent carbon atoms and red, oxygen
atoms. The six guests that are bound to the hosts are shown in ball-and-stick
(CPK) representation, for which the color of the structure corresponds to the
label of the guest. G2D designates deprotonated G2 and G2P, protonated G2.
Note that the ball-and-stick representation does not depict the aromaticity of the six-
membered ring. For the guests, green corresponds to carbon atoms, red oxygen
atoms, and white hydrogen atoms.
### 2.4 Computational Details
The guests were manually docked to each host using Maestro (Schrödinger, Inc.)
to render a set of host-guest molecular complexes that were then used to
derive forcefield parameters with AmberTools. The complexes were assigned
GAFF2/AM1-BCC parameters and solvated in a water box with a 12 Angstrom
solvent buffer and sodium counterions to balance the negative charge. The
position and orientation of the host in each complex were restrained near the
center of the box and along its diagonal with a flat-bottom harmonic potential
(force constant 25.0 kcal/(mol Å²), tolerance 1.5 Å) applied to the heavy
atoms of the lower cup of the host (the first 40 atoms of the host
as listed in the provided files). The systems were energy minimized and
thermalized at 300 K prior to proceeding with the ATM and PMF calculations.
#### 2.4.1 PMF setup
The computation of the standard binding free energies using the PMF method
involves the following steps (13): (1) applying a harmonic
restraint on the three Euler angles of the guest in the bound state to
restrain the guest orientation; (2) applying a harmonic restraint on the polar
and azimuthal angles in spherical coordinates to restrain the guest center
along a fixed axis as it binds/unbinds; (3) reversibly extracting the guest
from the binding pocket along the chosen axis until it reaches the bulk
region; and (4) releasing the restraints on the guest center and guest
orientation, which allows the guest to occupy the standard volume and rotate
freely in the bulk solvent.
The standard binding free energy is then obtained by summing up the reversible
work associated with each of the above steps using Eq. (1).
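The summation is simple arithmetic; a minimal sketch (ours, using the TEMOA-G1 components reported later in Table 4, not the actual analysis script) is:

```python
# Assembling the standard binding free energy from the PMF components of
# Eq. (1); TEMOA-G1 values from Table 4, in kcal/mol.
minus_dG_restr_bound = -4.09   # -Delta G_restr^bound, from BAR
w_min_minus_w_star = -12.27    # w(r_min) - w(r*), from umbrella sampling / WHAM
dG_vibr = 0.24                 # Delta G_vibr
dG_restr_bulk = 9.69           # Delta G_restr^bulk, analytical

dG_b = dG_restr_bulk + w_min_minus_w_star + dG_vibr + minus_dG_restr_bound
print(f"{dG_b:.2f} kcal/mol")  # -6.43, the value reported in Table 4
```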
The position and orientation of the guest relative to the host were controlled
using a coordinate system consisting of three reference atoms of the host (P1,
P2, and P3) and three reference atoms of the guest (L1, L2, and L3) (23). For
both hosts, P1 was chosen as the center of the bottom ring of the host and L1
as the center of the guest molecule, which lies approximately 4 Angstroms away
from P1. The PMF was calculated along the
P1-L1 distance using umbrella sampling with biasing potentials having a force
constant of 1000 kJ/(mol nm²). The three Euler angles and the polar and
azimuthal angles were restrained using harmonic potentials with a force
constant of 1000 kJ/(mol rad²), centered on the angles of the thermalized
structures, such that the guest is pulled straight out of the pocket of the
host while minimizing collisions with the sidechains of its rim. Note that the
PMF method requires an unobstructed path along the guest's pull axis.
Equilibration (1.2 ns) and production (20 ns) umbrella sampling runs were then
carried out over 20 umbrella windows covering distances of 4.0 to 18.0
Angstroms, i.e. from within the binding region to the bulk along the P1-L1
axis. WHAM analysis was used to generate the PMF, with the corresponding
uncertainties estimated by bootstrapping. The free energies of releasing the
angular restraints in the bulk and in the bound state were computed using BAR
as implemented in GROMACS (24).
#### 2.4.2 ATM Setup
Each of the Cartesian components of the translation vector $h$ was set to
approximately half of the longest diagonal of the simulation box, placing the
ligand near the corner of the solvent box farthest away from the host and its
periodic images (Fig. 2). Beginning at the bound state at $\lambda=0$, the
systems were then progressively annealed to the symmetric alchemical
intermediate at $\lambda=1/2$ during a $250$ ps run using the ATM alchemical
potential energy function for Leg 1 [Eq. (2)]. This step yields a suitable
initial configuration of the system without severe unfavorable repulsive
interactions at either end state of the alchemical path so that molecular
dynamics replica exchange alchemical simulation can be conducted for each leg
as described below.
In order to prevent large attractive interactions between opposite charges at
small distances in nearly uncoupled states, polar hydrogen atoms with zero
Lennard-Jones parameters were modified to $\sigma_{\rm LJ}=0.1$ Å and
$\epsilon_{\rm LJ}=10^{-4}$ kcal/mol (25). We established that
the change in potential energy of the system in the unbound, bound, and
symmetric intermediate states due to this modification of the Lennard-Jones
parameters is below single floating point precision. Alchemical MD
calculations were conducted with the OpenMM 7.3 (26) MD engine
and the SDM integrator plugin (github.com/Gallicchio-
Lab/openmm_sdm_plugin.git) using the OpenCL platform. In order to maintain the
temperature at 300 K, a Langevin thermostat with a time constant of 2 ps was
implemented. For each ATM leg, Hamiltonian Replica Exchange in $\lambda$ space
was conducted every 5 ps with the ASyncRE software (27), which is customized
for OpenMM and SDM (github.com/Gallicchio-Lab/async_re-openmm.git). Each leg
employed 11 $\lambda$ states uniformly distributed
between $\lambda=0$ and $\lambda=1/2$. All ATM calculations employed the soft-
core perturbation energy with parameters $u_{\rm max}=300$ kcal/mol,
$u_{c}=100$ kcal/mol, and $a=1/16$. A flat-bottom harmonic potential between
the centers of mass of the host and the guest, with a force constant of 25
kcal/(mol Å²) acting at distances greater than 4.5 Å, was applied to define
the binding site region ($V_{\rm site}$). The concentration-dependent term,
$\Delta G^{\circ}_{\rm site}=-k_{B}T\ln C^{\circ}V_{\rm site}=0.87$ kcal/mol,
corresponding to a temperature of 300 K and the volume $V_{\rm site}$ of a
sphere with a radius of 4.5 Å, was added to yield the final free energy estimate.
Perturbation energy samples and trajectory frames were collected every 5 ps.
Every replica was simulated for a minimum of 10 ns. For ATM, UWHAM was used to
compute binding free energies and the corresponding uncertainties with the
first 5 ns of the trajectory discarded.
#### 2.4.3 Free Energy of Binding for Ligands in Multiple Protonation States
When multiple chemical species contribute to binding, we use the free energy
combination formula (18)
$\Delta G_{b}^{\circ}=-k_{B}T\ln\sum_{i}P_{0}(i)e^{-\beta\Delta
G_{b}^{\circ}(i)},$ (10)
where $\Delta G_{b}^{\circ}(i)$ is the standard binding free energy for
species $i$ and $P_{0}(i)$ is the population of that species in the unbound
state. In the case of an acid/base equilibrium with acidity constant
$K_{a}=\frac{[A^{-}][H^{+}]}{[HA]}=\frac{[A^{-}]}{[HA]}10^{-pH}=\alpha
10^{-pH},$ (11)
where $[\ldots]$ are concentrations in molar units,
$\alpha=10^{pH-pKa},$ (12)
is the concentration ratio of the deprotonated and protonated forms, the
population fraction of the deprotonated species is
$P_{0}(A^{-})=\frac{[A^{-}]}{[HA]+[A^{-}]}=\frac{\alpha}{1+\alpha}$ (13)
and the population fraction of the protonated species is
$P_{0}(HA)=\frac{[HA]}{[HA]+[A^{-}]}=1-P_{0}(A^{-})=\frac{1}{1+\alpha}.$ (14)
The populations of each protonation state of the ligands and the corresponding
standard binding free energies $\Delta G_{b}^{\circ}(A^{-})$ and $\Delta
G_{b}^{\circ}(HA)$ are combined using Eq. (10) to obtain an estimate of the
observed free energy of binding.
This strategy was employed for the guest G2, 4-bromophenol, which exists in
two protonation states. A pH of 11.5, as indicated on the SAMPL8 GitHub site,
and a pKa of 9.17 (pubchem.ncbi.nlm.nih.gov/compound/4-bromophenol) were used
to calculate the concentrations of the protonation states, which were combined
with the calculated binding free energies to yield a binding free energy
estimate for G2 (see Table 5).
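The combination is easy to reproduce; the following sketch (ours) applies Eqs. (10)-(14) to the TEMOA-G2/ATM entries of Table 5:

```python
# Combining the binding free energies of two protonation states,
# Eqs. (10)-(14); energies in kcal/mol, k_B*T evaluated at 300 K.
import math

kB_T = 0.59616
pH, pKa = 11.5, 9.17
alpha = 10.0 ** (pH - pKa)     # Eq. (12): [A-]/[HA]
P0_A = alpha / (1.0 + alpha)   # Eq. (13): ~0.995
P0_HA = 1.0 - P0_A             # Eq. (14): ~4.66e-3

dG_HA, dG_A = -13.10, -6.02    # species free energies, Table 5 (TEMOA-G2/ATM)
total = P0_HA * math.exp(-dG_HA / kB_T) + P0_A * math.exp(-dG_A / kB_T)
dG_combined = -kB_T * math.log(total)  # Eq. (10)
print(f"{dG_combined:.2f} kcal/mol")   # ~-9.90, the value reported in Table 5
```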
## 3 Results
The results are presented as follows. Table 1 summarizes the absolute binding
free energy predictions from ATM and PMF submitted to the SAMPL8 organizers,
compared to the experimental values which were disclosed to us only after
submission. The results of the constituent calculations for each method that
led to the binding free energy predictions are listed in Tables 3 and 4 for
the ATM and PMF methods, respectively. These tables report the values of the
free energy changes for each leg of the ATM calculations and the components of
the PMF estimates, including those of the vibrational free energy and the
restraint free energy that contribute to the overall PMF process. The free
energy analysis for the protonated and deprotonated species implicated in the
complexes of the G2 guest is illustrated in Table 5.
### 3.1 Absolute Binding Free Energy Estimates by ATM and PMF
Table 1: PMF and ATM standard binding free energy predictions compared to the
experimental values.
Complex | Experimenta | ATMa | PMFa
---|---|---|---
TEMOA-G1 | $-6.96\pm 0.2$ | $-6.71\pm 0.3$ | $-6.43\pm 0.4$
TEMOA-G2 | $-8.41\pm 0.1$ | $-9.90\pm 0.8$ | $-9.37\pm 0.8$
TEMOA-G3 | $-5.78\pm 0.1$ | $-8.26\pm 0.3$ | $-8.71\pm 0.4$
TEMOA-G4 | $-7.72\pm 0.1$ | $-8.63\pm 0.3$ | $-8.79\pm 0.6$
TEMOA-G5 | $-6.67\pm 0.1$ | $-7.70\pm 0.3$ | $-8.15\pm 0.8$
TEETOA-G1 | $-4.49\pm 0.2$ | $-1.07\pm 0.3$ | $-1.38\pm 0.8$
TEETOA-G2 | $-5.16\pm 0.1$ | $-4.76\pm 0.3$ | $-6.22\pm 1.8$
TEETOA-G3 | NB | $-1.65\pm 0.3$ | $-1.42\pm 0.8$
TEETOA-G4 | $-4.47\pm 0.2$ | $-2.51\pm 0.3$ | $-2.25\pm 0.8$
TEETOA-G5 | $-3.32\pm 0.1$ | $-2.82\pm 0.3$ | $-3.36\pm 1.9$
a In kcal/mol.
Table 2: Agreement metrics (root mean square error, RMSE, correlation
coefficient of determination, $R^{2}$, slope of the linear regression, $m$,
and Kendall rank order correlation coefficient, $\tau$) between the computed
binding free energies and the experimental measurements.
| RMSE | $R^{2}$ | m | $\tau$
---|---|---|---|---
ATM/PMF | 0.60 | 0.99 | 1.05 | 1.00
Exp./ATM | 1.71 | 0.89 | 1.65 | 0.69a
Exp./PMF | 1.79 | 0.83 | 1.50 | 0.69a
a TEETOA-G3, a non-binder experimentally, was included in the $\tau$
calculation as the weakest complex.
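For reference, the agreement metrics of Table 2 can be computed from paired lists of calculated and experimental values with standard tools; the following is a sketch, not the analysis script used for this work:

```python
# Agreement metrics: RMSE, coefficient of determination R^2,
# linear regression slope m, and Kendall rank correlation tau.
import numpy as np
from scipy import stats

def agreement_metrics(computed, experimental):
    computed, experimental = np.asarray(computed), np.asarray(experimental)
    rmse = np.sqrt(np.mean((computed - experimental) ** 2))
    reg = stats.linregress(experimental, computed)    # slope and Pearson r
    tau, _ = stats.kendalltau(experimental, computed)
    return {"RMSE": rmse, "R2": reg.rvalue ** 2, "m": reg.slope, "tau": tau}
```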
The binding free energy estimates obtained from the two complementary
computational methods, ATM and PMF, are in very good agreement, with an
$R^{2}$ value of 0.99 and an RMSE value of 0.60 kcal/mol (Table 2). In
addition, the
ranking of the binding free energies of the complexes between the ATM and PMF
datasets is in perfect agreement. Both methods consistently estimated the
complex with the most favorable binding free energy to be TEMOA-G2, with a
free energy value of -9.90 kcal/mol predicted by ATM and -9.37 kcal/mol by
PMF. The least favorable binding free energy was predicted for the complex
TEETOA-G1 by both methods, -1.07 kcal/mol by ATM and -1.38 kcal/mol by PMF.
Both methods predict that all of the guests bind TEMOA more favorably than
TEETOA.
All of the carboxylic acid guests were modeled as ionic. We modeled both
protonation states of the G2 guest (Tables 3 and 4) and combined the
corresponding binding free energies using the experimental pKa of the guest
(Table 5). With a discrepancy of 2.77 kcal/mol, the deprotonated G2 molecule
(hereon G2D) yielded the most divergent binding free energy estimate between
the ATM and PMF datasets. Nevertheless, since this protonation state is found
to contribute little to binding (Table 5), the observed discrepancy did not
affect significantly the correspondence between the two sets of SAMPL8 binding
free energy predictions.
The molecular dynamics trajectories consistently yielded the expected binding
mode of the guests to the TEMOA and TEETOA hosts. The polar/ionic end of the
guests is oriented towards the water solvent while the more non-polar end of
the molecule is inserted into the binding cavity of the hosts (Figure 3). In
the complexes, the ethyl sidechains of the TEETOA host point outward, further
extending the host binding cavity and the surface of contact between the guests
and the hosts. In the apo state, however, the ethyl sidechains are observed to
be mostly folded into the TEETOA cavity (not shown). We hypothesize that the
conformational reorganization of TEETOA, the lack of favorable water
expulsion, and the poorer hydration of the bound guests are responsible for
the weaker binding capacity of TEETOA relative to TEMOA. We intend to
investigate further these aspects of the binding mechanism in future work.
ATM and PMF both predict that G2D is one of the weakest binders for TEMOA and
TEETOA (Tables 3 and 4). G2D is expected to be frustrated in the bound state
because the bromine atom prefers to be in the cavity of the host, whereas the
phenolate group strongly prefers to remain hydrated (Figure 3). The side chains of
both hosts prevent the hydration of the negative oxygen atom. This steric
hindrance is especially evident in TEETOA, which possesses four ethyl groups
on its outer ring. Due to its poor binding affinity, the deprotonated G2D is
not predicted to contribute significantly to binding despite its higher
concentration in solution at the experimental pH. Conversely, due to its
smaller desolvation penalty, both the PMF and ATM methods indicate that
protonated G2 (hereon G2P) is the strongest binder in the set for both TEMOA
and TEETOA (Tables 3 and 4). G2P is in fact predicted to be the dominant
species for binding even after factoring in the protonation penalty at the
experimental pH of 11.5.
The ATM free energy components $\Delta G_{1}$ and $\Delta G_{2}$ for each leg
of the complexes with ionic guests (Table 3), being in the 40 to 50 kcal/mol range, are
significantly larger in magnitude than the resulting binding free energies.
These free energies correspond to the reversible work to reach the alchemical
intermediate state in which the guest interacts with both the receptor and the
solvent bulk. The high free energy of the alchemical
intermediate relative to the bound and solvated states suggests that the ionic
group cannot be properly accommodated to simultaneously interact effectively
with both environments. This hypothesis is confirmed by the much smaller ATM
leg free energies for the neutral G2P guest. While large, the ATM leg free
energies of the ionic guests are expected to be significantly smaller than
those that would have been obtained in a double-decoupling
calculation (13), which would involve displacing the guests into
vacuum where hydration interactions are completely removed. The statistical
uncertainties of the ATM binding free energy estimates, generally around $1/3$
of a kcal/mol, are relatively small.
The PMF binding free energy estimates (Table 4) come with somewhat larger,
though still moderate, uncertainties than the ATM estimates. The source of uncertainty is
approximately equally split between the reversible work of releasing the
restraints (2nd column) and work of ligand extraction (3rd column). However,
in some cases (TEETOA-G2 and TEETOA-G5) the uncertainty of the work of
extraction is particularly large and probably indicative of sampling
bottlenecks at intermediate stages of the extraction process for this host.
Table 3: ATM absolute binding free energy estimates for the TEMOA and TEETOA
complexes.
Complex | $\Delta G_{1}$a | $\Delta G_{2}$a | $\Delta G^{\circ}_{\rm site}$a | $\Delta G^{\circ}_{b}$a
---|---|---|---|---
TEMOA-G1 | $53.27\pm 0.21$ | $45.69\pm 0.21$ | $0.87$ | $-6.71\pm 0.30$
TEMOA-G2D | $42.37\pm 0.18$ | $35.48\pm 0.21$ | $0.87$ | $-6.02\pm 0.28$
TEMOA-G2P | $22.57\pm 0.27$ | $8.60\pm 0.78$ | $0.87$ | $-13.10\pm 0.83$
TEMOA-G3 | $56.42\pm 0.18$ | $47.29\pm 0.18$ | $0.87$ | $-8.26\pm 0.25$
TEMOA-G4 | $53.13\pm 0.24$ | $43.63\pm 0.18$ | $0.87$ | $-8.63\pm 0.30$
TEMOA-G5 | $53.49\pm 0.24$ | $44.92\pm 0.18$ | $0.87$ | $-7.70\pm 0.30$
TEETOA-G1 | $51.65\pm 0.27$ | $49.71\pm 0.21$ | $0.87$ | $-1.07\pm 0.34$
TEETOA-G2D | $42.26\pm 0.24$ | $39.83\pm 0.27$ | $0.87$ | $-1.57\pm 0.36$
TEETOA-G2P | $22.31\pm 0.24$ | $13.48\pm 0.15$ | $0.87$ | $-7.95\pm 0.28$
TEETOA-G3 | $55.31\pm 0.24$ | $52.79\pm 0.18$ | $0.87$ | $-1.65\pm 0.30$
TEETOA-G4 | $52.28\pm 0.24$ | $48.90\pm 0.18$ | $0.87$ | $-2.51\pm 0.30$
TEETOA-G5 | $53.58\pm 0.21$ | $49.89\pm 0.18$ | $0.87$ | $-2.82\pm 0.28$
a In kcal/mol.
Table 4: PMF absolute free energy estimates for TEMOA and TEETOA complexes.
Complex | $-\Delta G_{\rm restr}^{\rm bound}$a | $[w(r_{\rm min})-w(r^{\ast})]$a | $\Delta G_{\rm vibr}$a | $\Delta G_{\rm restr}^{\rm bulk}$a | $\Delta G^{\circ}_{b}$a
---|---|---|---|---|---
TEMOA-G1 | $-4.09\pm 0.23$ | $-12.27\pm 0.36$ | $0.24$ | $9.69$ | $-6.43\pm 0.43$
TEMOA-G2D | $-2.05\pm 0.33$ | $-11.01\pm 0.18$ | $0.12$ | $9.69$ | $-3.25\pm 0.38$
TEMOA-G2P | $-5.31\pm 0.78$ | $-17.12\pm 0.21$ | $0.17$ | $9.69$ | $-12.57\pm 0.81$
TEMOA-G3 | $-5.61\pm 0.30$ | $-12.83\pm 0.30$ | $0.04$ | $9.69$ | $-8.71\pm 0.42$
TEMOA-G4b | $-5.00\pm 0.47$ | $-13.72\pm 0.36$ | $0.24$ | $9.69$ | $-8.79\pm 0.59$
TEMOA-G5 | $-5.36\pm 0.81$ | $-12.74\pm 0.15$ | $0.26$ | $9.69$ | $-8.15\pm 0.82$
TEETOA-G1 | $-3.76\pm 0.60$ | $-7.60\pm 0.54$ | $0.28$ | $9.69$ | $-1.38\pm 0.81$
TEETOA-G2D | $-5.50\pm 0.84$ | $-5.25\pm 2.73$ | $0.20$ | $9.69$ | $-0.86\pm 2.86$
TEETOA-G2P | $-4.85\pm 0.57$ | $-14.51\pm 1.68$ | $0.25$ | $9.69$ | $-9.42\pm 1.77$
TEETOA-G3 | $-3.70\pm 0.24$ | $-7.36\pm 0.81$ | $-0.05$ | $9.69$ | $-1.42\pm 0.84$
TEETOA-G4 | $-3.77\pm 0.12$ | $-8.39\pm 0.75$ | $0.22$ | $9.69$ | $-2.25\pm 0.76$
TEETOA-G5 | $-4.47\pm 0.06$ | $-8.81\pm 1.89$ | $0.23$ | $9.69$ | $-3.36\pm 1.89$
a In kcal/mol.
### 3.2 Calculated Free Energy Estimates Relative to Experimental
Measurements
The two computational methods employed in this work reproduced the
experimental binding free energies relatively well, more so for the TEMOA host
than for the TEETOA host (Table 1). Both methods
correctly predict TEMOA-G2 as the highest affinity complex in the set with
good quantitative accuracy in the binding free energy predictions ($-8.41$
kcal/mol experimentally compared to calculated $-9.90$ and $-9.37$ kcal/mol
from ATM and PMF, respectively). Concomitantly, both methods correctly predict
relatively weak absolute binding free energies of -1.65 kcal/mol and -1.42
kcal/mol, respectively, for TEETOA-G3, which is an experimental non-binder.
Excluding TEETOA-G3, the least favorable binding affinity measurement was
obtained for TEETOA-G5, which is correctly scored as one of the weakest
complexes by both computational methods. Overall, despite the narrow range of
the moderate binding free energies, the computational rankings are in good
agreement with the experimental rankings, with a Kendall rank-order
correlation coefficient of 0.69 (Table 2).
As illustrated in Figure 4, the calculated binding free energies are highly
correlated with the experimental values, with Pearson $R^{2}$ correlation
coefficients of 89% and 83% for ATM and PMF, respectively (Table 2). The
calculations are also in reasonable quantitative agreement with the
experimental measurements with RMSE deviations of $1.71$ kcal/mol for ATM and
$1.79$ kcal/mol for PMF. Interestingly, the computational models tend to
overestimate the binding affinity of the TEMOA complexes and to underestimate
those of the complexes with TEETOA. The largest deviation occurs for TEETOA-G1
whose moderate observed binding free energy of $-4.49$ kcal/mol is
substantially underestimated by the computational predictions of around $-1$
kcal/mol. A large deviation, but in the opposite direction, is observed for
TEMOA-G3 ($-5.78$ kcal/mol experimentally compared to $-8.26$ and $-8.71$
kcal/mol computationally) (Table 1). A poor prediction for this complex was expected
based on previous efforts with the GAFF/AM1-BCC force field and TIP3P
solvation used here (28).
In summary, the blinded predictions reported here were scored as among the
best of the SAMPL8 GDCC challenge, second only to those obtained with the more
accurate AMOEBA force field (29)
(github.com/samplchallenges/SAMPL8/blob/master/host_guest/Analysis/Ranked_Accuracy).
Table 5: Binding free energy contributions of the protonated and deprotonated
G2 complexes to the ATM and PMF binding free energy estimates.
| TEMOA-G2/ATM | TEMOA-G2/PMF | TEETOA-G2/ATM | TEETOA-G2/PMF
---|---|---|---|---
$\Delta G_{b}^{\circ}$(HA)a | $-13.10\pm 0.83$ | $-12.57\pm 0.81$ | $-7.95\pm 0.28$ | $-9.42\pm 1.77$
$P_{0}({\rm HA})$ | $4.66\times 10^{-3}$ | $4.66\times 10^{-3}$ | $4.66\times 10^{-3}$ | $4.66\times 10^{-3}$
$\displaystyle P_{0}(HA)e^{-\beta\Delta G_{b}^{\circ}(HA)}$ | $1.65\times 10^{7}$ | $6.77\times 10^{7}$ | $2.92\times 10^{3}$ | $3.42\times 10^{4}$
$\Delta G_{b}^{\circ}({\rm A}^{-})$a | $-6.02\pm 0.28$ | $-3.25\pm 0.38$ | $-1.57\pm 0.36$ | $-0.86\pm 2.86$
$P_{0}({\rm A}^{-})$ | $0.995$ | $0.995$ | $0.995$ | $0.995$
$\displaystyle P_{0}(A^{-})e^{-\beta\Delta G_{b}^{\circ}(A^{-})}$ | $2.43\times 10^{4}$ | $232$ | $13.6$ | $4.22$
$\Delta G_{b}^{\circ}$a | $-9.90\pm 0.83$ | $-9.37\pm 0.81$ | $-4.76\pm 0.28$ | $-6.22\pm 1.8$
a In kcal/mol.
Figure 4: Linear regression of combined TEMOA and TEETOA predictions with ATM
and PMF.
## 4 Discussion and Conclusions
In this study, we employed two independent binding free energy approaches, the
newly developed alchemical transfer method (ATM) (12, 25) and the
well-established PMF physical pathway method (13), to blindly predict the
absolute binding affinities of the host-guest systems as part of the SAMPL8
GDCC blind challenge. The SAMPL series of community challenges has
consistently yielded high-quality datasets to test computational models of
binding (1, 2, 3, 9, 10, 11), and we
decided to use it here to stringently validate the ATM and PMF methods in an
unbiased fashion.
Despite their radical differences in spirit and in practice, we find that the
calculated binding affinities from the two methods are in remarkable
quantitative agreement with an RMSE of only 0.6 kcal/mol and an $R^{2}$ of
$99$%. This level of agreement, well within statistical fluctuations, gives
high confidence in the theoretical foundations and in the correctness of
implementation of each approach. The level of consistency of the computational
methods also adds confidence that their predictions are unbiased and primarily
reflective of the force field model.
We find that the standard GAFF/AM1-BCC/TIP3P model employed here tends to
overestimate the binding free energies of strongly bound complexes while it
tends to underestimate those of more weakly bound complexes, as also indicated
by the larger-than-one slopes of the linear regressions (Tables 1, 2). While it
may be a result, in this case, of specific aspects of the TEMOA and TEETOA
hosts, this trend has been generally observed with this force field
combination (28). The more accurate AMOEBA force
field (29) appears to correctly predict these trends
(github.com/samplchallenges/SAMPL8/blob/master/host_guest/Analysis/Ranked_Accuracy).
The stringent blinded test conducted in this work is a further validation of
the ATM binding free energy method that we have recently
proposed (12). ATM, implemented on top of the versatile OpenMM
molecular dynamics engine (26), promises to provide an accurate
and streamlined route to absolute (12) and relative (30) binding free energy
calculations. While alchemical, ATM, similar to the PMF pathway
method (13), makes use of a single simulation system, and it
avoids problematic vacuum intermediates and the splitting of the alchemical
path into electrostatic and non-electrostatic transformations. ATM also does
not require soft-core pair potentials and modifications of energy routines,
and can be easily implemented as a controlling routine on top of existing
force routines of MD engines.
In summary, this work provides a rare blinded and stringent test of binding
free energy models. It shows that the application of sound statistical
mechanics theories of binding and careful modeling of chemical systems can
lead to reliable predictions limited only by the quality of the force field
model.
## 5 Acknowledgements
We acknowledge support from the National Science Foundation (NSF CAREER
1750511 to E.G.). Molecular simulations were conducted on the Comet and
Expanse GPU clusters at the San Diego Supercomputing Center supported by NSF
XSEDE award TG-MCB150001. We thank the National Institutes of Health for
its support of the SAMPL project via R01GM124270 to David L. Mobley.
## References
* (1) Matthew T Geballe, A Geoffrey Skillman, Anthony Nicholls, J Peter Guthrie, and Peter J Taylor. The SAMPL2 blind prediction challenge: introduction and overview. J. Comp. Aided Mol. Des., 24(4):259–279, 2010.
* (2) David L Mobley, Shuai Liu, Nathan M Lim, Karisa L Wymer, Alexander L Perryman, Stefano Forli, Nanjie Deng, Justin Su, Kim Branson, and Arthur J Olson. Blind prediction of HIV integrase binding from the SAMPL4 challenge. J. Comp. Aided Mol. Des., pages 1–19, 2014.
* (3) Martin Amezcua, Léa El Khoury, and David L Mobley. SAMPL7 host–guest challenge overview: assessing the reliability of polarizable and non-polarizable methods for binding free energy calculations. J. Comp.-Aid. Mol. Des., 35(1):1–35, 2021.
* (4) David L Mobley and Michael K Gilson. Predicting binding free energies: frontiers and benchmarks. Ann. Rev. Bioph., 46:531–558, 2017.
* (5) William L Jorgensen. Efficient drug lead discovery and optimization. Acc Chem Res, 42:724–733, 2009.
* (6) Kira A Armacost, Sereina Riniker, and Zoe Cournia. Novel directions in free energy methods and applications, 2020.
* (7) E. Gallicchio and R. M. Levy. Prediction of SAMPL3 host-guest affinities with the binding energy distribution analysis method (BEDAM). J. Comp. Aided Mol. Design., 25:505–516, 2012.
* (8) Emilio Gallicchio, Haoyuan Chen, He Chen, Michael Fitzgerald, Yang Gao, Peng He, Malathi Kalyanikar, Chuan Kao, Beidi Lu, Yijie Niu, Manasi Pethe, Jie Zhu, and Ronald M Levy. BEDAM binding free energy predictions for the SAMPL4 octa-acid host challenge. J. Comp. Aided Mol. Des., 29:315–325, 2015.
* (9) Emilio Gallicchio, Nanjie Deng, Peng He, Alexander L. Perryman, Daniel N. Santiago, Stefano Forli, Arthur J. Olson, and Ronald M. Levy. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge. J. Comp. Aided Mol. Des., 28:475–490, 2014.
* (10) Nanjie Deng, William F Flynn, Junchao Xia, RSK Vijayan, Baofeng Zhang, Peng He, Ahmet Mentes, Emilio Gallicchio, and Ronald M Levy. Large scale free energy calculations for blind predictions of protein–ligand binding: the d3r grand challenge 2015. J. Comp.-Aided Mol. Des., 30(9):743–751, 2016.
* (11) Rajat Kumar Pal, Kamran Haider, Divya Kaur, William Flynn, Junchao Xia, Ronald M. Levy, Tetiana Taran, Lauren Wickstrom, Tom Kurtzman, and Emilio Gallicchio. A combined treatment of hydration and dynamical effects for the modeling of host-guest binding thermodynamics: The SAMPL5 blinded challenge. J. Comp. Aided Mol. Des., 31:29–44, 2016.
* (12) Joe Z Wu, Solmaz Azimi, Sheenam Khuttan, Nanjie Deng, and Emilio Gallicchio. Alchemical transfer approach to absolute binding free energy estimation. J. Chem. Theory Comput., 17:3309, 2021.
* (13) Nanjie Deng, Di Cui, Bin W Zhang, Junchao Xia, Jeffrey Cruz, and Ronald Levy. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands. Phys. Chem. Chem. Phys., 20(25):17081–17092, 2018.
* (14) Paolo Suating, Thong T Nguyen, Nicholas E Ernst, Yang Wang, Jacobs H Jordan, Corinne LD Gibb, Henry S Ashbaugh, and Bruce C Gibb. Proximal charge effects on guest binding to a non-polar pocket. Chemical Science, 11(14):3656–3663, 2020.
* (15) Paweł Śledź and Amedeo Caflisch. Protein structure-based drug design: from docking to molecular dynamics. Curr. Op. Struct. Biol., 48:93–102, 2018.
* (16) Thomas Seidel, Oliver Wieder, Arthur Garon, and Thierry Langer. Applications of the pharmacophore concept in natural product inspired drug design. Molecular Informatics, 39(11):2000059, 2020.
* (17) M. K. Gilson, J. A. Given, B. L. Bush, and J. A. McCammon. The statistical-thermodynamic basis for computation of binding affinities: A critical review. Biophys. J., 72:1047–1069, 1997.
* (18) Emilio Gallicchio and Ronald M Levy. Recent theoretical and computational advances for modeling protein-ligand binding affinities. Adv. Prot. Chem. Struct. Biol., 85:27–80, 2011.
* (19) Zoe Cournia, Bryce K Allen, Thijs Beuming, David A Pearlman, Brian K Radak, and Woody Sherman. Rigorous free energy simulations in virtual screening. Journal of Chemical Information and Modeling, 2020.
* (20) Emilio Gallicchio. Free energy-based computational methods for the study of protein-peptide binding equilibria. In Thomas Simonson, editor, Computational Peptide Science: Methods and Protocols, Methods in Molecular Biology. Springer Nature, 2021.
* (21) Emilio Gallicchio, Mauro Lapelosa, and Ronald M. Levy. Binding energy distribution analysis method (BEDAM) for estimation of protein-ligand binding affinities. J. Chem. Theory Comput., 6:2961–2977, 2010.
* (22) Zhiqiang Tan, Emilio Gallicchio, Mauro Lapelosa, and Ronald M. Levy. Theory of binless multi-state free energy estimation with applications to protein-ligand binding. J. Chem. Phys., 136:144102, 2012.
* (23) S Boresch, F Tettinger, M Leitgeb, and M Karplus. Absolute binding free energies: A quantitative approach for their calculation. J. Phys. Chem. B, 107:9535–9551, 2003.
* (24) Sander Pronk, Szilárd Páll, Roland Schulz, Per Larsson, Pär Bjelkmar, Rossen Apostolov, Michael R Shirts, Jeremy C Smith, Peter M Kasson, David van der Spoel, Berk Hess, and Erik Lindahl. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. Bioinformatics, 29:845–854, 2013.
* (25) S Khuttan, Solmaz Azimi, Joe Z Wu, and E Gallicchio. Alchemical transformations for concerted hydration free energy estimation with explicit solvation. J. Chem. Phys, 154:054103, 2021.
* (26) Peter Eastman, Jason Swails, John D Chodera, Robert T McGibbon, Yutong Zhao, Kyle A Beauchamp, Lee-Ping Wang, Andrew C Simmonett, Matthew P Harrigan, Chaya D Stern, et al. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS Comp. Bio., 13(7):e1005659, 2017.
* (27) Emilio Gallicchio, Junchao Xia, William F Flynn, Baofeng Zhang, Sade Samlalsingh, Ahmet Mentes, and Ronald M Levy. Asynchronous replica exchange software for grid and heterogeneous computing. Computer Physics Communications, 196:236–246, 2015.
* (28) Andrea Rizzi, Steven Murkli, John N McNeill, Wei Yao, Matthew Sullivan, Michael K Gilson, Michael W Chiu, Lyle Isaacs, Bruce C Gibb, David L Mobley, et al. Overview of the SAMPL6 host–guest binding affinity prediction challenge. J. Comp.-Aid. Mol. Des., 32(10):937–963, 2018.
* (29) Yuanjun Shi, Marie L Laury, Zhi Wang, and Jay W Ponder. AMOEBA binding free energies for the SAMPL7 TrimerTrip host–guest challenge. J. Comp.-Aid. Mol. Des., 35(1):79–93, 2021.
* (30) Solmaz Azimi, Sheenam Khuttan, Joe Z. Wu, Rajat Pal, and Emilio Gallicchio. Relative binding free energy calculations for ligands with diverse scaffolds with the alchemical transfer method. ArXiv Preprint, XXX:XXX–XXX, 2021.
# Learning to Explicitate Connectives with Seq2Seq Network
for Implicit Discourse Relation Classification
Wei Shi† and Vera Demberg†,‡
†Dept. of Language Science and Technology
‡Dept. of Mathematics and Computer Science, Saarland University
Saarland Informatics Campus, 66123 Saarbrücken, Germany
###### Abstract
Implicit discourse relation classification is one of the most difficult steps
in discourse parsing. The difficulty stems from the fact that the coherence
relation must be inferred based on the content of the discourse relational
arguments. Therefore, an effective encoding of the relational arguments is of
crucial importance. We here propose a new model for implicit discourse
relation classification, which consists of a classifier, and a sequence-to-
sequence model which is trained to generate a representation of the discourse
relational arguments by trying to predict the relational arguments including a
suitable implicit connective. Training is possible because such implicit
connectives have been annotated as part of the PDTB corpus. Combined with a
memory network, our model generates more refined representations for the task.
On the now-standard 11-way classification, our method outperforms previous
state-of-the-art systems on the PDTB benchmark in multiple settings, including
cross validation.
## 1 Introduction
Discourse relations describe the logical relation between two
sentences/clauses. When understanding a text, humans infer discourse relations
between text segments. These relations reveal the structural organization of text,
and allow for additional inferences. Many natural language processing tasks,
such as machine translation, question-answering, automatic summarization,
sentiment analysis, and sentence embedding learning, can also profit from
having access to discourse relation information. Recent years have seen a
growing number of works on this topic, including two CoNLL shared tasks (Xue
et al., 2015, 2016).
The Penn Discourse Tree Bank (PDTB; Prasad et al., 2008) provides lexically
grounded annotations of discourse relations and their two discourse relational
arguments (i.e., two text spans). Discourse relations are sometimes signaled
by explicit discourse markers (e.g., because, but). Example 1 shows an
explicit discourse relation marked by “because”; the presence of the
connective makes it possible to classify the discourse relation with high
reliability: Miltsakaki et al. (2005) reported an accuracy of 93.09% for 4-way
classification of explicits.
Discourse relations are however not always marked by an explicit connective.
In fact, implicit discourse relations (i.e. relations not marked by an
explicit discourse cue) outnumber explicit discourse relations in naturally
occurring text. Readers can still infer these implicit relations, but
automatic classification becomes a lot more difficult in these cases, and
represents the main bottleneck in discourse parsing today. Example 2 shows an
implicit contrastive relation which can be inferred from the two text spans
that have been marked Arg1 and Arg2. When annotating implicit relations in the
PDTB, annotators were asked to first insert a connective which expresses the
relation, and then annotate the relation label. This procedure was introduced
to achieve higher inter-annotator agreement for implicit relations between
human annotators. In the approach taken in this paper, our model mimics this
procedure by being trained to explicitate the discourse relation, i.e. to
insert a connective as a secondary task.
1. [I refused to pay the cobbler the full $95]Arg1 because [He did poor work.]Arg2
— Explicit, Contingency.Cause
2. [In the energy mix of the future, bio-energy will also have a key role to play in boosting rural employment and the rural economy in Europe.]Arg1 (Implicit = However) [At the same time, the promotion of bio-energy must not lead to distortions of competition.]Arg2
— Implicit, Comparison.Contrast
The key in implicit discourse relation classification lies in extracting
relevant information for the relation label from (the combination of) the
discourse relational arguments. Informative signals can consist of surface
cues, as well as the semantics of the relational arguments. Statistical
approaches have typically relied on linguistically informed features which
capture both of these aspects, like temporal markers, polarity tags, Levin
verb classes and sentiment lexicons, as well as the Cartesian products of the
word tokens in the two arguments (Lin et al., 2009). More recent efforts use
distributed representations with neural network architectures (Qin et al.,
2016a).
The main question in designing neural networks for discourse relation
classification is how to get the neural networks to effectively encode the
discourse relational arguments such that all of the aspects relevant to the
classification of the relation are represented, in particular in the face of
very limited amounts of annotated training data, see e.g. Rutherford et al.
(2017). The crucial intuition in the present paper is to make use of the
annotated implicit connectives in the PDTB: in addition to the typical
relation label classification task, we also train the model to encode and
decode the discourse relational arguments, and at the same time predict the
implicit connective. This novel secondary task forces the internal
representation to more completely encode the semantics of the relational
arguments (in order to allow the model to decode later), and to make a more
fine-grained classification (predicting the implicit connective) than is
necessary for the overall task. This more fine-grained task thus aims to force
the model to represent the discourse relational arguments in a way that allows
the model to also predict a suitable connective. Our overall discourse
relation classifier combines representations from the relational arguments as
well as the hidden representations generated as part of the encoder-decoder
architecture to predict relation labels. What’s more, with an explicit memory
network, the network also has access to history representations and acquire
more explicit context knowledge. We show that our method outperforms previous
approaches on the 11-way classification on the PDTB 2.0 benchmark.
The remainder of the paper is organized as follows: Section 2 discusses
related work; Section 3 describes our proposed method; Section 4 gives the
training details and experimental results; and Section 5 concludes with
directions for future work.
## 2 Related Work
### 2.1 Implicit Discourse Relation Classification
Implicit discourse relation recognition is one of the most important
components in discourse parsing. With the release of PDTB (Prasad et al.,
2008), the largest available corpus which annotates implicit examples with
discourse relation labels and implicit connectives, many previous works
focused on statistical machine learning solutions with manually crafted sparse
features (Rutherford and Xue, 2014).
Recently, neural networks have shown an advantage in dealing with the data
sparsity problem, and many deep learning methods have been proposed for
discourse parsing, including convolutional (Zhang et al., 2015), recurrent (Ji
et al., 2016), character-based (Qin et al., 2016a), adversarial (Qin et al.,
2017) neural networks, and pair-aware neural sentence modeling (Cai and Zhao,
2017). Multi-task learning has also been shown to be beneficial on this task
(Lan et al., 2017).
However, most neural-based methods suffer from insufficient annotated data.
Wu et al. (2016) extracted bilingually-constrained synthetic implicit data
from a sentence-aligned English-Chinese corpus. Shi et al. (2017, 2018) proposed to
acquire additional training data by exploiting explicitation of connectives
during translation. Explicitation refers to the fact that translators
sometimes add connectives into the text in the target language which were not
originally present in the source language. They used explicitated connectives
as a source of weak supervision to obtain additional labeled instances, and
showed that this extension of the training data leads to substantial
performance improvements.
The huge gap between explicit and implicit relation recognition (namely, 50%
vs. 90% accuracy in 4-way classification) also motivates incorporating
connective information to guide the reasoning process. Zhou et al. (2010) used a language
model to automatically insert discourse connectives and leverage the
information of these predicted connectives. The approach which is most similar
in spirit to ours, Qin et al. (2017), proposed a neural method that
incorporates implicit connectives in an adversarial framework to make the
argument representation as similar as possible to the connective-augmented
one, and showed that the inclusion of implicit connectives could help to
improve classifier performance.
### 2.2 Sequence-to-sequence Neural Networks
The sequence-to-sequence model, first proposed by Sutskever et al. (2014), is
a general end-to-end approach to sequence learning that makes minimal
assumptions about the sequence structure. It uses multi-layered Long
Short-Term Memory (LSTM) or Gated Recurrent Units (GRU) to map the input
sequence to a vector of fixed dimensionality, and then decodes the target
sequence from the vector with another LSTM / GRU layer.
Sequence-to-sequence models allow for flexible input/output dynamics, have
enjoyed great success in machine translation, and have been broadly used in a
variety of sequence-related tasks such as question answering and named entity
recognition (NER) / part-of-speech (POS) tagging.
If the source and target of a sequence-to-sequence model are exactly the same,
it is also called an auto-encoder. Dai and Le (2015) used a sequence
auto-encoder to better represent sentences in an unsupervised way and showed
impressive performance on different tasks. The main difference between our
model and theirs is that we have different input and output (the output
contains a connective while the input does not). In this way, the model is
forced to explicitate the implicit relation, to learn the latent patterns and
discourse relations between implicit arguments and connectives, and thereby to
generate more discriminative representations.
Figure 1: The Architecture of Proposed Model.
## 3 Methodology
Our model is based on the sequence-to-sequence model used for machine
translation (Luong et al., 2015), an adaptation of an LSTM (Hochreiter and
Schmidhuber, 1997) that encodes a variable-length input as a fixed-length
vector, then decodes it into a variable-length output sequence. As illustrated in
Figure 1, our model consists of three components: Encoder, Decoder and
Discourse Relation Classifier. We here use different LSTMs for the encoding
and decoding tasks to help keep the independence between those two parts.
The task of implicit discourse relation recognition is to recognize the senses
of the implicit relations, given the two arguments. For each discourse
relation instance, The Penn Discourse Tree Bank (PDTB) provides two arguments
($Arg_{1}$, $Arg_{2}$) along with the discourse relation (Rel) and manually
inserted implicit discourse connective ($Conn_{i}$). Here is an implicit
example from section 0 in PDTB:
3.
$\mathbf{Arg_{1}}$: This is an old story.
$\mathbf{Arg_{2}}$: We’re talking about years ago before anyone heard of
asbestos having any questionable properties.
$\mathbf{Conn_{i}}$: in fact
$\mathbf{Rel}$: Expansion.Restatement
During training, the input and target sentences for the sequence-to-sequence
neural network are $\left[\textit{$Arg_{1}$};\textit{$Arg_{2}$}\right]$ and
$\left[\textit{$Arg_{1}$};\textit{$Conn_{i}$};\textit{$Arg_{2}$}\right]$
respectively, where “;” denotes concatenation.
### 3.1 Model Architecture
#### 3.1.1 Encoder
Given a sequence of words, an encoder computes a joint representation of the
whole sequence.
After mapping tokens to Word2Vec embedding vectors (Mikolov et al., 2013), a
LSTM recurrent neural network processes a variable-length sequence
$x=(x_{1},x_{2},...,x_{n})$. At time step $t$, the state of memory cell
$c_{t}$ and hidden $h_{t}$ are calculated with the Equations 1:
$\small\begin{gathered}\left[\begin{array}{c}i_{t}\\ f_{t}\\ o_{t}\\ \hat{c}_{t}\end{array}\right]=\left[\begin{array}{c}\sigma\\ \sigma\\ \sigma\\ \tanh\end{array}\right]W\cdot[h_{t-1},x_{t}]\\ c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\hat{c}_{t}\\ h_{t}=o_{t}\odot\tanh(c_{t})\end{gathered}$ (1)
where $x_{t}$ is the input at time step $t$, $i$, $f$ and $o$ are the input,
forget and output gate activations respectively. $\hat{c}_{t}$ denotes the
candidate cell state, $\sigma$ is the logistic sigmoid function and $\odot$
denotes element-wise multiplication. The LSTM separates the memory $c$ from
the hidden state $h$, which allows for more flexibility in combining new inputs
and previous context.
For the sequence modeling tasks, it is beneficial to have access to the past
context as well as the future context. Therefore, we chose a bidirectional
LSTM as the encoder; the output for the word at time-step $t$ is given in
Equation 2, where element-wise sum is used to combine the forward and
backward pass outputs.
$\small h_{t}=\left[\overrightarrow{h_{t}}\oplus\overleftarrow{h_{t}}\right]$
(2)
Thus we get the output of encoder:
$\small h_{e}=\left[h^{e}_{1},h^{e}_{2},...,h^{e}_{n}\right]$ (3)
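A minimal PyTorch sketch of this encoder (ours, for illustration; not the authors' released code) is:

```python
# Bidirectional LSTM encoder of Eqs. (1)-(3); the forward and backward
# hidden states are combined by element-wise sum, as in Eq. (2).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)  # initialized from Word2Vec in the paper
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        emb = self.embedding(tokens)      # (batch, seq_len, emb_dim)
        out, _ = self.lstm(emb)           # (batch, seq_len, 2 * hidden_dim)
        fwd, bwd = out.chunk(2, dim=-1)   # split the two directions
        return fwd + bwd                  # Eq. (2): h_e, (batch, seq_len, hidden_dim)
```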
#### 3.1.2 Decoder
With the representation from the encoder, the decoder tries to map it back to
the target space and predict the next words.
Here we used a separate LSTM recurrent network to predict the target words.
During training, target words are fed into the LSTM incrementally and we get
the outputs from the decoder LSTM:
$\small h_{d}=\left[h^{d}_{1},h^{d}_{2},...,h^{d}_{n}\right]$ (4)
#### Global Attention
At each time-step in decoding, it is beneficial to consider all the hidden
states of the encoder so as to give the decoder a full view of the source
context. We therefore adopted the global attention mechanism proposed in Luong
et al. (2015). For time step $t$ in decoding, the context vector $c_{t}$ is
the weighted average of $h_{e}$; the weights are calculated from $h_{t}^{d}$
and $h_{e}$ as illustrated below:
$\small\alpha_{t}=\frac{\exp({h_{t}^{d}}^{\top}\mathbf{W_{\alpha}}h_{e})}{\sum\limits_{t=1}^{n}\exp({h_{t}^{d}}^{\top}\mathbf{W_{\alpha}}h_{e})}$ (5)
$\small c_{t}=\alpha_{t}h_{e}$ (6)
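A sketch (ours) of this attention computation in PyTorch; the resulting context vectors are combined with the decoder states as described in the next paragraph:

```python
# Global attention of Eqs. (5)-(6): scores between each decoder state and
# all encoder states are normalized into weights, and the context is the
# weighted average of the encoder states.
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_alpha = nn.Linear(dim, dim, bias=False)

    def forward(self, h_d, h_e):
        # h_d: (batch, tgt_len, dim) decoder states; h_e: (batch, src_len, dim)
        scores = torch.bmm(self.W_alpha(h_d), h_e.transpose(1, 2))  # (batch, tgt, src)
        alpha = torch.softmax(scores, dim=-1)                       # Eq. (5)
        return torch.bmm(alpha, h_e)                                # Eq. (6): c_t
```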
#### Word Prediction
The context vector $c_{t}$ captures the relevant source-side information to
help predict the current target word $y_{t}$. We employ a concatenation layer
with $\tanh$ activation to combine the context vector $c_{t}$ and the decoder
hidden state $h^{d}_{t}$ at time-step $t$ as follows:
$\small\hat{h^{d}_{t}}=\tanh(\mathbf{W_{c}}\left[c_{t};h^{d}_{t}\right])$ (7)
Then the predictive vector is fed into the softmax layer to get the predicted
distribution $\hat{p}(y_{t}|s)$ of the current target word.
$\small\begin{gathered}\hat{p}(y_{t}|s)={\rm softmax}(\mathbf{W}_{s}\hat{h}^{d}_{t}+\mathbf{b}_{s})\\ \hat{y}_{t}=\arg\max_{y}\hat{p}(y_{t}|s)\end{gathered}$ (8)
After decoding, we obtain the predictive vectors for the whole target
sequence, $\hat{h}_{d}=\left[\hat{h}^{d}_{1},\hat{h}^{d}_{2},...,\hat{h}^{d}_{n}\right]$.
Ideally, they contain the information of the exposed implicit connectives.
#### Gated Interaction
In order to predict the coherent discourse relation of the input sequence, we
take both the encoder outputs $h_{e}$ and the predictive word vectors
$\hat{h}_{d}$ into account. K-max pooling can “draw together” features that
are the most discriminative yet lie many positions apart in the sentences,
which is especially relevant for the two relational arguments in our task;
this method has proven effective for choosing active features in sentence
modeling (Kalchbrenner et al., 2014). We employ an average k-max pooling layer
which takes the average of the top k values over all time-steps, as in
Equations 9 and 10:
$\small\bar{h}_{e}=\frac{1}{k}\sum\limits^{k}_{i=1}topk(h_{e})$ (9)
$\small\bar{h}_{d}=\frac{1}{k}\sum\limits^{k}_{i=1}topk(\hat{h^{d}})$ (10)
$\bar{h}_{e}$ and $\bar{h}_{d}$ are then combined using a linear layer (Lan et
al., 2017). As illustrated in Equation 11, the linear layer acts as a gate to
determine how much information from the sequence-to-sequence network should be
mixed into the original sentence representations from the encoder. Compared
with a bilinear layer, it also has fewer parameters and allows us to use
high-dimensional word vectors.
$\small
h^{*}=\bar{h}_{e}\oplus\sigma(\mathbf{W}_{i}\bar{h}_{d}+\mathbf{b}_{i})$ (11)
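A sketch (ours) of the pooling and gating steps; the value of $k$ is a hyper-parameter whose value we assume here for illustration:

```python
# Average k-max pooling (Eqs. 9-10) and gated interaction (Eq. 11);
# the element-wise sum follows the use of the ⊕ operator in Eq. (2).
import torch
import torch.nn as nn

def avg_kmax_pool(h, k):
    topk, _ = h.topk(k, dim=1)   # top-k values along the time dimension
    return topk.mean(dim=1)      # (batch, dim)

class GatedInteraction(nn.Module):
    def __init__(self, dim, k=5):        # k = 5 is an assumed value
        super().__init__()
        self.k = k
        self.W_i = nn.Linear(dim, dim)   # gate parameters W_i, b_i

    def forward(self, h_e, h_d_hat):     # both: (batch, seq_len, dim)
        he_bar = avg_kmax_pool(h_e, self.k)              # Eq. (9)
        hd_bar = avg_kmax_pool(h_d_hat, self.k)          # Eq. (10)
        return he_bar + torch.sigmoid(self.W_i(hd_bar))  # Eq. (11): h*
```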
#### Explicit Context Knowledge
To further capture common knowledge in contexts, we here employ a memory
network proposed in Liu et al. (2018) to obtain explicit context
representations of the training examples. We use a memory matrix $M\in
R^{K\times N}$, where $K$ and $N$ denote the hidden size and the number of
training instances, respectively.
During training, the memory matrix remembers the information of training
examples and then retrieves them when predicting labels.
Given a representation $h^{*}$ from the interaction layer, we generate a
knowledge vector by weighted memory reading:
$\small k=M\,{\rm softmax}(M^{T}h^{*})$ (12)
We here use dot-product attention, which is faster and more space-efficient
than additive attention, to calculate the score for each training instance.
The scores are normalized with a softmax layer and the final knowledge vector
is a weighted sum of the columns of the memory matrix $M$.
Afterwards, the model predicts the discourse relation using a softmax layer.
$\small\begin{gathered}\hat{p}(r|s)={\rm softmax}(\mathbf{W}_{r}[k;h^{*}]+\mathbf{b}_{r})\\ \hat{r}=\arg\max_{r}\hat{p}(r|s)\end{gathered}$ (13)
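A sketch (ours) of the memory read and the final relation prediction:

```python
# Memory read of Eq. (12) and relation prediction of Eq. (13);
# M has shape (K, N) = (hidden size, number of training instances).
import torch
import torch.nn as nn

class MemoryClassifier(nn.Module):
    def __init__(self, hidden_size, n_train, n_relations):
        super().__init__()
        self.M = nn.Parameter(torch.randn(hidden_size, n_train))
        self.W_r = nn.Linear(2 * hidden_size, n_relations)

    def forward(self, h_star):                            # h_star: (batch, K)
        weights = torch.softmax(h_star @ self.M, dim=-1)  # softmax(M^T h*), (batch, N)
        k = weights @ self.M.t()                          # Eq. (12): knowledge vector, (batch, K)
        p_r = torch.softmax(self.W_r(torch.cat([k, h_star], dim=-1)), dim=-1)  # Eq. (13)
        return p_r
```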
### 3.2 Multi-objectives
In our model, the decoder and the discourse relation classifier have different
objectives. For the decoder, the objective consists of predicting the target
word at each time-step. The loss function is calculated with masked cross
entropy with $\mathtt{L2}$ regularization, as follows:
$\small\mathit{Loss_{de}}=-\frac{1}{n}\sum\limits^{n}_{t=1}y_{t}\log(\hat{p_{y}})+\frac{\lambda}{2}\parallel\theta_{de}\parallel^{2}_{2}$
(14)
where $y_{t}$ is the one-hot representation of the ground-truth target word,
$\hat{p}_{y}$ are the probabilities estimated for each word in the vocabulary
by the softmax layer, and $n$ denotes the length of the target sentence.
$\lambda$ is a hyper-parameter of the $L2$ regularization and $\theta$ is the
parameter set.
The objective of the discourse relation classifier consists of predicting the
discourse relations. A reasonable training objective for multiple classes is
the categorical cross-entropy loss. The loss is formulated as:
$\small\mathit{Loss_{cl}}=-\frac{1}{m}\sum\limits^{m}_{i=1}r_{i}\log(\hat{p_{r}})+\frac{\lambda}{2}\parallel\theta_{cl}\parallel^{2}_{2}$
(15)
where $r_{i}$ is the one-hot representation of the ground-truth discourse
relation label, $\hat{p}_{r}$ denotes the probabilities predicted for each
relation class by the softmax layer, and $m$ is the number of target classes.
As above, $\lambda$ is a hyper-parameter of the $L2$ regularization.
For the overall loss of the whole model, we set another hyper-parameter $w$ to
give these two objective functions different weights. Larger $w$ means that
more importance is placed on the decoder task.
$\small\mathit{Loss}=\mathit{w}\cdot\mathit{Loss_{de}}+\mathit{(1-w)}\cdot\mathit{Loss_{cl}}$
(16)
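A sketch (ours) of the joint objective; for brevity, the $L2$ terms of Eqs. (14)-(15) are assumed to be handled by the optimizer's weight decay:

```python
# Weighted joint loss of Eq. (16): masked word-prediction cross entropy
# (Eq. 14) plus relation-classification cross entropy (Eq. 15).
import torch.nn.functional as F

def joint_loss(dec_logits, tgt_words, tgt_mask, rel_logits, rel_labels, w):
    # dec_logits: (batch, seq, vocab); tgt_words: (batch, seq) word indices;
    # tgt_mask: (batch, seq) float mask of non-padding positions.
    per_token = F.cross_entropy(dec_logits.flatten(0, 1), tgt_words.flatten(),
                                reduction="none")
    loss_de = (per_token * tgt_mask.flatten()).sum() / tgt_mask.sum()  # Eq. (14)
    loss_cl = F.cross_entropy(rel_logits, rel_labels)                  # Eq. (15)
    return w * loss_de + (1.0 - w) * loss_cl                           # Eq. (16)
```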
### 3.3 Model Training
To train our model, the training objective is defined by the loss functions
introduced above. We use Adaptive Moment Estimation (Adam) (Kingma and Ba,
2014) as our optimizer, with different learning rates for different parts of
the model. Dropout layers are applied after the embedding layer and on the top
feature vector before the softmax layer in the classifier. We also employ
$L_{2}$ regularization with a small $\lambda$ in our objective functions to
prevent over-fitting. The values of the hyper-parameters are provided in
Table 2. The model is first trained to minimize the loss in Equation 14 until
convergence, using scheduled sampling (Bengio et al., 2015) to avoid the
“teacher-forcing problem”, and is then trained to minimize the joint loss in
Equation 16 for the implicit discourse relation classifier.
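Concretely, the optimizer setup described above could be sketched as below, assuming `seq2seq` and `classifier` are the two sub-modules of the model (the names are ours) and using the values from Table 2:

```python
import torch

# Sketch of the optimizer setup described above (values from Table 2).
# `seq2seq` and `classifier` stand for the two sub-modules of the model;
# L2 regularization enters through weight_decay.
optimizer = torch.optim.Adam(
    [{"params": seq2seq.parameters(), "lr": 2.5e-3},    # encoder-decoder
     {"params": classifier.parameters(), "lr": 5e-3}],  # relation classifier
    weight_decay=5e-4)
```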
## 4 Experiments and Results
### 4.1 Experimental Setup
We evaluate our model on the PDTB. While early work only evaluated
classification performance between the four main PDTB relation classes, more
recent work, including the CoNLL 2015 and 2016 shared tasks on Shallow
Discourse Parsing (Xue et al., 2015, 2016), has set the standard to second-
level classification, which is more useful for most downstream tasks.
Following the works we directly compare to in our evaluation, we here use the
setting where AltLex, EntRel and NoRel tags are ignored. About 2.2% of the
implicit relation instances in the PDTB have been annotated with two
relations; these are treated as two training instances.
To allow for full comparability to earlier work, we here report results for
three different settings. The first one is denoted as PDTB-Lin (Lin et al.,
2009); it uses sections 2-21 for training, 22 as dev and section 23 as test
set. The second one is labeled PDTB-Ji (Ji and Eisenstein, 2015), and uses
sections 2-20 for training, 0-1 as dev and evaluates on sections 21-22. Our
third setting follows the recommendations of Shi and Demberg (2017), and
performs 10-fold cross validation on the whole corpus (sections 0-23). Table 1
shows the number of instances in train, development and test set in different
settings.
Settings | Train | Dev | Test
---|---|---|---
PDTB-Lin | 13351 | 515 | 766
PDTB-Ji | 12826 | 1165 | 1039
Cross valid. per fold avg. | 12085 | 1486 | 1486*
Table 1: Numbers of instances in the train, development and test sets in the
different settings for the 11-way classification task. Instances annotated with
two labels are double-counted and some relations with few instances have been
removed. (*Cross-validation allows us to test on all 15057 instances.)
The advantage of the cross-validation approach is that it addresses problems
related to the small corpus size, as it reports model performance across all
folds. This is important because the most frequently used test set (PDTB-Lin)
contains fewer than 800 instances; taken together with the community's tendency
not to report means and standard deviations over multiple runs of neural
networks (Reimers and Gurevych, 2018), the small size of the test set makes
reported results potentially unreliable.
#### Preprocessing
We first convert tokens in the PDTB to lowercase and normalize strings, which
removes special characters. The word embeddings used for initializing the word
representations are trained with the CBOW architecture in Word2Vec
(https://code.google.com/archive/p/word2vec/) (Mikolov et al., 2013) on the
PDTB training set. All the weights in the model are initialized uniformly at
random.
To better locate the connective position on the target side, we use two
position indicators ($\langle conn\rangle$, $\langle/conn\rangle$) that mark
the start and end of the connective (Zhou et al., 2016) and thereby also
indicate the spans of the discourse arguments, as illustrated in the sketch
below.
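A minimal sketch of this target-side format (the function name is ours; the indicator tokens follow the paper):

```python
def mark_connective(arg1_tokens, conn_tokens, arg2_tokens):
    """Wrap the (implicit) connective in position indicators; the
    markers also delimit the two discourse arguments."""
    return arg1_tokens + ["<conn>"] + conn_tokens + ["</conn>"] + arg2_tokens

# e.g. mark_connective(["it", "rained"], ["so"], ["we", "stayed", "home"])
```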
Since our main task here is not to generate arguments, it is better to have
representations generated from correct words rather than from wrongly
predicted ones. At test time, therefore, instead of using the predicted word
from the previous time-step as the current input, we use the source sentence as
the decoder’s input and target. As the implicit connective is not available at
test time, we use a random vector, denoted “impl_conn” in Figure 2, as a
placeholder to inform the sequence that the upcoming word should be a
connective.
#### Hyper-parameters
There are several hyper-parameters in our model: the dimension of the word
vectors $d$, the two dropout rates after the embedding layer ($q_{1}$) and
before the softmax layer ($q_{2}$), the two learning rates for the
encoder-decoder ($lr_{1}$) and for the classifier ($lr_{2}$), the top $k$ for
the k-max pooling layer, the weight $w$ for the losses in Equation (16), and
the coefficient $\lambda$ of the regularization term, which controls the
importance of regularization. Their values are shown in Table 2.
$d$ | $q_{1}$ | $q_{2}$ | ${lr}_{1}$ | ${lr}_{2}$ | $k$ | $w$ | $\lambda$
---|---|---|---|---|---|---|---
100 | 0.5 | 0.2 | $2.5e^{-3}$ | $5e^{-3}$ | 5 | 0.2 | $5e^{-4}$
Table 2: Hyper-parameter settings.
### 4.2 Experimental Results
We compare our model with six previous methods, as shown in Table 3. The
baselines include a feature-based method (Lin et al., 2009) and
state-of-the-art neural networks (Qin et al., 2016a; Cai and Zhao, 2017),
including the adversarial neural network that also exploits the annotated
implicit connectives (Qin et al., 2017), as well as the data-extension method
based on explicitated connectives obtained from translations into other
languages (Shi et al., 2017).
Additionally, we ablate our model by removing the prediction of the implicit
connective in the sequence-to-sequence model; the resulting model is labeled
Auto-Encoder in Table 3. We also evaluate the seq2seq network without the
knowledge memory, i.e. using the output of the gated interaction layer to
predict the label directly, denoted Seq2Seq w/o Mem Net.
Methods | PDTB-Lin | PDTB-Ji | Cross Validation
---|---|---|---
Majority class | 26.11 | 26.18 | 25.59
Lin et al. (2009) | 40.20 | - | -
Qin et al. (2016a) | 43.81 | 45.04 | -
Cai and Zhao (2017) | - | 45.81 | -
Qin et al. (2017) | 44.65 | 46.23 | -
Shi et al. (2017) (with extra data) | 45.50 | - | 37.84
Encoder only (Bi-LSTM) (Shi et al., 2017) | 34.32 | - | 30.01
Auto-Encoder | 43.86 | 45.43 | 39.50
Seq2Seq w/o Mem Net | 45.75 | 47.05 | 40.29
Proposed Method | 45.82 | 47.83 | 41.29
Table 3: Accuracy (%) of implicit discourse relations on PDTB-Lin, PDTB-Ji and
Cross Validation Settings for multi-class classification.
Our proposed model outperforms the other models in each of the settings.
Although we share with Qin et al. (2017) the idea of extracting highly
discriminative features by generating connective-augmented representations for
implicit discourse relations, our method improves by about 1.2% on the
PDTB-Lin setting and 1.6% on the PDTB-Ji setting. The importance of the
implicit connective is also illustrated by the fact that the “Auto-Encoder”
model, which is identical to our model except that it does not predict the
implicit connective, performs worse than the model which does. This confirms
our initial hypothesis that training with implicit connectives helps to expose
the latent discriminative features in the relational arguments and generates
more refined semantic representations. It also means that, to some extent,
purely increasing the number of tunable parameters is not always helpful in
this task, and that trying to predict implicit connectives in the decoder does
indeed help the model extract more discriminative features. Moreover,
performance is also worse without the memory network, which suggests that,
through the concatenated knowledge vector, a training instance can find
related instances and exploit their common knowledge for predicting implicit
relations. As Shi and Demberg (2017) argued that it is risky to draw
conclusions from testing on such a small test set, we also run cross-validation
on the whole PDTB. Table 3 supports the same conclusion about the
effectiveness of our method, which outperforms the baseline (Bi-LSTM) by more
than 11 percentage points, and Shi et al. (2017) by 3 points even though they
used a very large extra corpus.
To obtain a better intuition of how the global attention works in our model,
Figure 2 shows the attention weights at different time-steps of the decoder.
The weights show how much importance the decoder attaches to the source words
while predicting each target word. We can see that, even without the
connective on the target side at test time, the word filler still works as a
connective to help predict the upcoming words. For instance, the true
discourse relation for the right-hand example is Expansion.Alternative; at the
word filler’s time-step, the model attaches more importance to the negation
“don’t” and to “tastefully appointed”. This means the current representation
grasps the key information and focuses on the important words to help with the
task. We see plenty of room for adapting this model to the discourse
connective prediction task, which we leave to future work.
We also examine which instances’ representations are chosen from the memory
matrix during prediction. Table 4 shows two examples and the context instances
with the top-2 memory attention weights among the whole training set. In both
examples the memory attention attaches more importance to instances with the
same relation. This suggests that, with the Context Memory, the model
facilitates discourse relation prediction by selecting examples that share a
similar semantic representation and discourse relation.
Figure 2: Visualization of attention weights when predicting the target
sentence in training and test; the x-axis denotes the source sentence and the
y-axis the targets. The first two panels are examples from the training set
with implicit connectives inside, while the following one, in which the
implicit connective has been replaced by the word filler “impl_conn”, is from
the test set.

In recent years, U.S. steelmakers have supplied about 80% of the 100 million
tons of steel used annually by the nation. (in addition,) Of the remaining 20%
needed, the steel-quota negotiations allocate about 15% to foreign suppliers.
---
— Expansion.Conjunction
1\. The average debt of medical school graduates who borrowed to pay for their
education jumped 10% to $42,374 this year from $38,489 in 1988, says the
Association of American Medical Colleges. (furthermore) that’s 115% more than
in 1981
— Expansion.Conjunction
2\. … he rigged up an alarm system, including a portable beeper, to alert him
when Sventek came on the line. (and) Some nights he slept under his desk.
— Expansion.Conjunction
Prices for capital equipment rose a hefty 1.1% in September, while prices for
home electronic equipment fell 1.1%. (Meanwhile,) food prices declined 0.6%,
after climbing 0.3% in August.
— Comparison.Contrast
1\. Lloyd’s overblown bureaucracy also hampers efforts to update marketing
strategies. (Although) some underwriters have been pressing for years to tap
the low-margin business by selling some policies directly to consumers.
— Comparison.Contrast
2\. Valley National ”isn’t out of the woods yet. (Specifically), the key will
be whether Arizona real estate turns around or at least stabilizes
— Expansion.Restatement
Table 4: Examples of attention in the Context Knowledge Memory. The sentences
in italics are from the PDTB test set and the following 2 instances are the
ones with the top-2 attention weights from the training set.

Relation | Train | Dev | Test
---|---|---|---
Comparison | 1855 | 189 | 145
Contingency | 3235 | 281 | 273
Expansion | 6673 | 638 | 538
Temporal | 582 | 48 | 55
Total | 12345 | 1156 | 1011
Table 5: Distribution of top-level implicit discourse relations in the PDTB.

Methods | Four-ways $F_{1}$ | Four-ways Acc. | Comp. | Cont. | Expa. | Temp.
---|---|---|---|---|---|---
Rutherford and Xue (2014) | 38.40 | 55.50 | 39.70 | 54.42 | 70.23 | 28.69
Qin et al. (2016b) | - | - | 41.55 | 57.32 | 71.50 | 35.43
Liu et al. (2016) | 44.98 | 57.27 | 37.91 | 55.88 | 69.97 | 37.17
Ji et al. (2016) | 42.30 | 59.50 | - | - | - | -
Liu and Li (2016) | 46.29 | 57.17 | 36.70 | 54.48 | 70.43 | 38.84
Qin et al. (2017) | - | - | 40.87 | 54.46 | 72.38 | 36.20
Lan et al. (2017) | 47.80 | 57.39 | 40.73 | 58.96 | 72.47 | 38.50
Our method | 46.40 | 61.42 | 41.83 | 62.07 | 69.58 | 35.72
Table 6: Comparison of $F_{1}$ scores (%) and Accuracy (%) with the State-of-
the-art Approaches for four-ways and one-versus-all binary classification on
PDTB. Comp., Cont., Expa. and Temp. stand for Comparison, Contingency,
Expansion and Temporal respectively.
#### 4.2.1 Top-level Binary and 4-way Classification
Much recent work on PDTB relation recognition has focused on first-level
relations, for both binary and 4-way classification. We also report
performance on level-one relation classification for better comparison with
prior work. As described above, we follow the conventional experimental
settings (Rutherford and Xue, 2015; Liu and Li, 2016) as closely as possible.
Table 5 shows the distribution of top-level implicit discourse relations in
the PDTB; it is worth noting that there are only 55 instances of the Temporal
relation in the test set.
To make the results comparable with previous work, we report the $F_{1}$ score
for the four binary classifications and both $F_{1}$ and accuracy for 4-way
classification; the results can be found in Table 6. Our method outperforms
all alternatives on Comparison and Contingency, and obtains scores comparable
to the state of the art on the others. For 4-way classification, we achieve
the best accuracy and the second-best $F_{1}$, around 2% better than Ji et
al. (2016).
## 5 Conclusion and Future Work
We present in this paper a novel neural method that integrates implicit
connectives into the representation of implicit discourse relations through a
joint learning framework with a sequence-to-sequence network. We conduct
experiments with different settings on the PDTB benchmark; the results show
that our proposed method achieves state-of-the-art performance on recognizing
implicit discourse relations, and that the improvements are not brought merely
by the increased number of parameters. The model also has great potential for
implicit connective prediction in future work.
Our proposed method shares a similar spirit with the previous work of Zhou et
al. (2010), who also tried to leverage implicit connectives to help extract
discriminative features from implicit discourse instances. Compared with the
adversarial method proposed by Qin et al. (2017), our model more closely
mimics the human annotation process for implicit discourse relations and is
trained to directly explicitate the implicit relations before classification.
With the representation of the original implicit sentence, the explicitated
one from the decoder, and the explicit knowledge vector from the memory
network, the implicit relation can be classified with higher accuracy.
Although our method has not been trained as a generative model in our
experiments, we see potential for applying it to generative tasks. With more
annotated data, minor modifications and fine-tuned training, we believe our
proposed method could also be applied to tasks like implicit discourse
connective prediction or argument generation in the future.
## 6 Acknowledgments
This work was supported by German Research Foundation (DFG) as part of SFB
1102 “Information Density and Linguistic Encoding”. We would like to thank the
anonymous reviewers for their careful reading and insightful comments.
## References
* Bengio et al. (2015) Bengio, S., O. Vinyals, N. Jaitly, and N. Shazeer (2015). Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of NIPS, pp. 1171–1179.
* Cai and Zhao (2017) Cai, D. and H. Zhao (2017). Pair-aware neural sentence modeling for implicit discourse relation classification. In IEA-AIE, pp. 458–466. Springer.
* Dai and Le (2015) Dai, A. M. and Q. V. Le (2015). Semi-supervised sequence learning. In Proceedings of NIPS, pp. 3079–3087.
* Hochreiter and Schmidhuber (1997) Hochreiter, S. and J. Schmidhuber (1997). Long short-term memory. Neural computation 9(8), 1735–1780.
* Ji and Eisenstein (2015) Ji, Y. and J. Eisenstein (2015). One vector is not enough: Entity-augmented distributional semantics for discourse relations. TACL 3, 329–344.
* Ji et al. (2016) Ji, Y., G. Haffari, and J. Eisenstein (2016). A latent variable recurrent neural network for discourse relation language models. In Proceedings of NAACL, pp. 332–342.
* Kalchbrenner et al. (2014) Kalchbrenner, N., E. Grefenstette, and P. Blunsom (2014). A convolutional neural network for modelling sentences. In Proceedings of ACL.
* Kingma and Ba (2014) Kingma, D. and J. Ba (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Lan et al. (2017) Lan, M., J. Wang, Y. Wu, Z.-Y. Niu, and H. Wang (2017). Multi-task attention-based neural networks for implicit discourse relationship representation and identification. In Proceedings of EMNLP, pp. 1299–1308.
* Lin et al. (2009) Lin, Z., M.-Y. Kan, and H. T. Ng (2009). Recognizing implicit discourse relations in the penn discourse treebank. In Proceedings of EMNLP, pp. 343–351.
* Liu et al. (2018) Liu, Q., Y. Zhang, and J. Liu (2018). Learning domain representation for multi-domain sentiment classification. In Proceedings of NAACL, pp. 541–550.
* Liu and Li (2016) Liu, Y. and S. Li (2016). Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In Proceedings of EMNLP, pp. 1224–1233.
* Liu et al. (2016) Liu, Y., S. Li, X. Zhang, and Z. Sui (2016). Implicit discourse relation classification via multi-task neural networks. In Proceedings of AAAI, pp. 2750–2756.
* Luong et al. (2015) Luong, M.-T., H. Pham, and C. D. Manning (2015). Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pp. 1412–1421.
* Mikolov et al. (2013) Mikolov, T., I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013). Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pp. 3111–3119.
* Miltsakaki et al. (2005) Miltsakaki, E., N. Dinesh, R. Prasad, A. Joshi, and B. Webber (2005). Experiments on sense annotations and sense disambiguation of discourse connectives. In Proceedings of the Fourth Workshop TLT-2005.
* Prasad et al. (2008) Prasad, R., N. Dinesh, A. Lee, E. Miltsakaki, L. Robaldo, A. Joshi, and B. Webber (2008). The penn discourse treebank 2.0. In Proceedings of LREC.
* Qin et al. (2016a) Qin, L., Z. Zhang, and H. Zhao (2016a). Implicit discourse relation recognition with context-aware character-enhanced embeddings. In Proceedings of COLING.
* Qin et al. (2016b) Qin, L., Z. Zhang, and H. Zhao (2016b). A stacking gated neural architecture for implicit discourse relation classification. In Proceedings of EMNLP, pp. 2263–2270.
* Qin et al. (2017) Qin, L., Z. Zhang, H. Zhao, Z. Hu, and E. Xing (2017). Adversarial connective-exploiting networks for implicit discourse relation classification. In Proceedings of ACL, pp. 1006–1017.
* Reimers and Gurevych (2018) Reimers, N. and I. Gurevych (2018). Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. arXiv preprint arXiv:1803.09578.
* Rutherford et al. (2017) Rutherford, A., V. Demberg, and N. Xue (2017). A systematic study of neural discourse models for implicit discourse relation. In Proceedings of EACL, pp. 281–291.
* Rutherford and Xue (2014) Rutherford, A. and N. Xue (2014). Discovering implicit discourse relations through brown cluster pair representation and coreference patterns. In Proceedings of EACL, pp. 645–654.
* Rutherford and Xue (2015) Rutherford, A. and N. Xue (2015). Improving the inference of implicit discourse relations via classifying explicit discourse connectives. In Proceedings of NAACL, pp. 799–808.
* Shi and Demberg (2017) Shi, W. and V. Demberg (2017). On the need of cross validation for discourse relation classification. In Proceedings of EACL, pp. 150–156.
* Shi et al. (2018) Shi, W., F. Yung, and V. Demberg (2018). Acquiring annotated data with cross-lingual explicitation for implicit discourse relation classification. arXiv preprint arXiv:1808.10290.
* Shi et al. (2017) Shi, W., F. Yung, R. Rubino, and V. Demberg (2017). Using explicit discourse connectives in translation for implicit discourse relation classification. In Proceedings of IJCNLP, pp. 484–495.
* Sutskever et al. (2014) Sutskever, I., O. Vinyals, and Q. V. Le (2014). Sequence to sequence learning with neural networks. In Proceedings of NIPS, pp. 3104–3112.
* Wu et al. (2016) Wu, C., X. Shi, Y. Chen, Y. Huang, and J. Su (2016). Bilingually-constrained synthetic data for implicit discourse relation recognition. In Proceedings of EMNLP, pp. 2306–2312.
* Xue et al. (2015) Xue, N., H. T. Ng, S. Pradhan, R. Prasad, C. Bryant, and A. Rutherford (2015). The conll-2015 shared task on shallow discourse parsing. In Proceedings of CoNLL-15 Shared Task, pp. 1–16.
* Xue et al. (2016) Xue, N., H. T. Ng, A. Rutherford, B. Webber, C. Wang, and H. Wang (2016). Conll 2016 shared task on multilingual shallow discourse parsing. In Proceedings of CoNLL-16 shared task, pp. 1–19.
* Zhang et al. (2015) Zhang, B., J. Su, D. Xiong, Y. Lu, H. Duan, and J. Yao (2015). Shallow convolutional neural network for implicit discourse relation recognition. In Proceedings of EMNLP, pp. 2230–2235.
* Zhou et al. (2016) Zhou, P., W. Shi, J. Tian, Z. Qi, B. Li, H. Hao, and B. Xu (2016). Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of ACL, pp. 207–212.
* Zhou et al. (2010) Zhou, Z.-M., Y. Xu, Z.-Y. Niu, M. Lan, J. Su, and C. L. Tan (2010). Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of COLING, pp. 1507–1514.
# A Comparison of Methods for Neural Network Aggregation
John Pomerat, Aviv Segev
Department of Computer Science, University of South Alabama, Mobile, AL 36688, USA
e-mail:<EMAIL_ADDRESS>
###### Abstract
Deep learning has been successful on the theoretical side. For deep learning
to succeed in industry, we need algorithms capable of handling the many
inconsistencies that appear in real data, since these inconsistencies can have
large effects on the implementation of a deep learning algorithm. Artificial
intelligence is currently changing the medical industry. However, receiving
authorization to use medical data for training machine learning algorithms is
a major hurdle. A possible solution is sharing the data without sharing the
patient information. We propose a multi-party computation protocol for the
deep learning algorithm. The protocol preserves both the privacy and the
security of the training data. Three approaches to neural network aggregation
are analyzed: transfer learning, average ensemble learning, and series network
learning. The results are compared to approaches based on data-sharing in
different experiments. We analyze the security issues of the proposed
protocol. Although the analysis is based on medical data, the results on
multi-party computation of machine learning training are theoretical and can
be applied in multiple research areas.
## 1 Introduction
In recent years, the theoretical progress of machine learning promises to
revolutionize many domains of industry, from manufacturing[1], through
healthcare [2] and transportation[19], to education[20]. Although there have
been many successful implementations of learning algorithms, much of the
progress in machine learning remains theoretical [3]. One reason for the lack
of implementation, particularly in the healthcare domain, is the practicality,
resilience, and security of learning algorithms [2, 4]. A staple of machine
learning is data; as such, its shape, organization, quantity, and quality must
all be carefully considered for many real-world implementations [5]. As the
need for healthcare datasets rises, data-sharing [5] has been suggested as a
strategy to get data for healthcare models. In data-sharing, hospitals
reformat data into an agreed upon structure and anonymize contents so as not
to expose confidential patient data. We propose an alternative to data-sharing
using secure multi-party computation (MPC). Multi-party computation is a
branch of cryptography concerned with calculating functions on private, user-
held data. One motivating example considers two people who wish to determine
which of them has a higher salary without either party exposing their salary
to one-another. With MPC, there is an algorithm capable of solving this
problem and other, similar, problems. We propose a protocol for training
neural networks on private datasets then combining the neural networks such
that private data is not exposed, and the model’s final performance is
comparable to a model trained on the combined private datasets. This paper
considers three methods of neural network aggregation to combine networks
trained on distinct datasets sharing an underlying function. For all three
methods, the underlying network architecture and datasets were kept constant.
Additionally, hyperparameters, including the activation functions, optimizer,
and batch size, were held constant. Performance was measured by mean square
error, which was recorded to compare the three methods. The three methods are
transfer learning, average ensemble learning, and series network learning;
this paper explores each of them in depth.
## 2 Related Work
### 2.1 Security
In the healthcare domain, the importance of maintaining data privacy is clear.
As such, sensitive data should be anonymized as much as possible to prevent
any kind of data leakage. There are a number of attacks against learning
algorithms [6, 7, 8, 9]. In 2015, Goodfellow et al. proposed adversarial
attacks as a security vulnerability of neural networks [10]. Since then, there
has been more research into the security of neural networks and more attack
vectors have been discovered [6, 7, 8, 9]. Our problem, as we have defined it,
is not vulnerable to black box adversarial attacks. One attack vector that our
system is vulnerable to is training code provided by a malicious adversary
[11]. To protect against this attack, the code which specific parties
implement should be open-sourced and independently reviewed. Additionally,
there is an attack vector for generative models[12] which should be considered
for some implementations with generative models but is beyond the scope of
this paper. The primary attack vector of concern is the membership inference
attack [18]. The membership inference attack is a blackbox attack vector for a
trained neural network classifier. The attack is an algorithm to statistically
determine from a trained neural network whether an input tuple is a member of
the private training set or not[18]. To protect against the membership
inference attack, models should avoid overfitting. Additionally, adding
regularization, prediction vector restrictions, and increasing the entropy of
the prediction vector have value in preventing membership inference attacks
[18].
### 2.2 Transfer Learning
A well known method of neural network aggregation is transfer learning.
Recently, transfer learning has been shown to be useful and extremely
versatile, particularly with reinforcement learning and deep neural network
models[22, 23, 16, 24]. Additionally, transfer learning is also more versatile
than some of the methods explored in this paper since it is capable of working
with a wider variety of learning algorithms including convolutional neural
networks[22]. Furthermore, transfer learning has been shown to work well with
time series predictions and recurrent neural networks [25]. In addition,
research on transfer learning in the context of the healthcare domain already
shows promise [26, 27, 28]. One paper by Gupta et al.[26] leveraged transfer
learning to generalize models in the healthcare domain to similar tasks in the
same domain. Similarly, a result by Chen et al.[27] with wearable technology
used transfer learning on time series health data to improve performance and
increase personalization of the FedHealth model. The results for transfer
learning show promise for the viability of neural network aggregation for deep
learning in the healthcare domain.
Previous work in transfer learning shows great promise for neural network
aggregation as an alternative to data-sharing in big data healthcare
applications. These results, combined with some of the research conducted in
security and multi-party computation act as the motivating examples for this
paper.
## 3 Neural Network Aggregation
### 3.1 Problem Statement
The setup is as follows. Let $D_{1},D_{2},\cdots,D_{k}$ be subsets of
$\mathbb{R}^{n}$ representing datasets. Then, let
$G:\mathbb{R}^{n}\to\mathbb{R}^{m}$ be a differentiable function represented
as a multilayer perceptron with parameters $\theta_{g}$. We are concerned with
methods of producing $\theta_{g}$ from the $D_{i}$ such that the loss of $G$
is comparable to obtaining $\theta_{g}$ from $\bigcup_{i=1}^{k}D_{i}$ and
it is not computationally feasible to extract information about the members of
the $D_{i}$ from $G$. This process of training a neural network from
multiple, disjoint datasets is called neural network aggregation. The three
methods of neural network aggregation are series network learning, average
ensemble learning, and transfer learning.
### 3.2 Series Network Learning
The first method, called series network learning, works by training a
neural network with a pretrained “expert” neural network as an additional
input. For our experiment we consider a single neural network trained on the
first dataset. The neural network’s performance on the testing set is
recorded. The network then generates a prediction for every entry in the
second dataset, and a new neural network is created for the second dataset
with the prediction array as an additional input. This neural network is then
trained on the second dataset and the mean square error is recorded.
Intuitively, the second neural network will likely improve in mean square
error, as it will “learn” when to trust the first network’s prediction and
when to rely instead on its own calculations, Fig. 1.
Figure 1: Neural network as an input to assist training a second network.
for _For all parties except the last_ do
train network on parties data;
end for
take network and append each output of the trained networks as a new input
neuron then train resulting network on the final parties data;
Algorithm 1 Series Networks
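A minimal scikit-learn sketch of Algorithm 1 could look as follows; the architecture and iteration counts are illustrative, not the paper's exact configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def series_networks(datasets):
    """Sketch of Algorithm 1 (series network learning): train one
    network per party except the last, then train a final network on
    the last party's data with each expert's prediction appended as
    an extra input feature. datasets: list of (X, y) pairs."""
    experts = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
               for X, y in datasets[:-1]]
    X, y = datasets[-1]
    # append each expert's output as a new input neuron
    X_aug = np.hstack([X] + [e.predict(X).reshape(-1, 1) for e in experts])
    final = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X_aug, y)
    return experts, final  # inference also needs the experts' predictions
```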
### 3.3 Average Ensemble Learning
The second method considers two neural networks, $N_{1},N_{2}$ of the exact
same architecture, with the same activation functions, optimizer, number of
hidden layers, and number of neurons. Each network is then trained on a
different dataset of identical structure, and the mean square error on the
testing set is recorded. Then, the two neural networks are combined to form a third
network of the exact same structure $N_{3}$ (Fig. 2). The weights and biases
of $N_{3}$ are the average of the corresponding weights and biases in $N_{1}$
and $N_{2}$. More specifically, if $n$ is the total number of weights and
biases in $N_{3}$, and $N_{3}(i)$ refers to the $i$-th weight or bias in
$N_{3}$, then for all $0<i\leq n$,
$N_{3}(i)=\frac{N_{1}(i)+N_{2}(i)}{2}$
$N_{3}$ is then measured on the testing set and its performance is compared to
the performance of both $N_{1}$ and $N_{2}$. In addition to a pure average,
other strategies are considered. Initially, a weighted average may be
performed with weights proportional to the size of the dataset to guarantee
that a model trained on significantly more data is not treated the same as a
model trained on a much smaller set of data. Another option is to use a
weighted average not only with the size of the dataset, but also the ratio of
positives and negatives for disease prediction cases. This is done to ensure
that a larger dataset, which is not highly informative, will not overpower a
smaller dataset containing more information.
Figure 2: Averaging the corresponding weights and biases of a neural network
for _For all parties_ do
train identical network on parties data;
end for
initialize new model identical to the others;
for _every weight and bias in the network_ do
for _every trained network_ do
sum values of corresponding weight or bias;
end for
weight or bias in new network is that sum divided by the number of parties;
end for
Algorithm 2 Average Ensemble Learning
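A minimal scikit-learn sketch of Algorithm 2 for the pure (unweighted) average could be:

```python
import numpy as np

def average_ensemble(models):
    """Sketch of Algorithm 2: element-wise average of the weights and
    biases of identically structured scikit-learn MLPs. Note that the
    first model is reused as the container for the averaged network."""
    avg = models[0]
    avg.coefs_ = [np.mean([m.coefs_[l] for m in models], axis=0)
                  for l in range(len(avg.coefs_))]
    avg.intercepts_ = [np.mean([m.intercepts_[l] for m in models], axis=0)
                       for l in range(len(avg.intercepts_))]
    return avg
```

The weighted variants described above would replace `np.mean` with `np.average` and weights proportional to dataset size or class balance.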
### 3.4 Transfer Learning
The third method is transfer learning. Instead of combining two neural
networks, transfer learning works by training a single neural network on
additional datasets without weight reinitialization [16, 17]. Our experiment
considers a single neural network with randomly initialized weights trained on
the first dataset. The mean squared error on the testing set is recorded.
Then, the neural network is trained on the second dataset without
reinitializing the weights and the mean squared error is recorded again. This
process is then repeated by training on the second dataset first and then the
first, with the mean square error recorded throughout.
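A sketch of this procedure with scikit-learn, where `warm_start` provides the training-without-reinitialization behavior:

```python
from sklearn.neural_network import MLPRegressor

def transfer_learning(datasets):
    """Sketch of Section 3.4: keep training the same network on each
    party's data in turn; warm_start reuses the previous weights
    instead of reinitializing them on each call to fit."""
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         warm_start=True)
    for X, y in datasets:
        model.fit(X, y)  # weights carry over between datasets
    return model
```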
## 4 Experiments
To compare the proposed methods of neural network aggregation, we ran two
experiments, one with artificially generated polynomial data, and the other on
the University of Wisconsin Madison Hospital’s breast cancer dataset[21]. The
motivation of these tests was to get an initial performance comparison between
the proposed methods and a neural network trained on all of the data
simultaneously representing data-sharing.
### 4.1 Data
The neural networks in this paper were trained on both real and artificially
generated data. The artificially generated data was created as follows. A
random normal distribution was employed to create 2 dimensional arrays
populated with random rational numbers in a specified range. The rows of the
array consisted of 7 random rational numbers representing data features.
Multiple datasets were created for the experiment. Arrays of size 3200, 1600,
800, and 400 were created. After the arrays were generated, a multivariate
polynomial of degree $n$ under lexicographic term ordering was created with
coefficients randomly chosen from a normal distribution.
$f(x_{1},x_{2},\ldots,x_{7})$ (1)
Next, for each set of $7$ values in the generated data, $\gamma$, $f(\gamma)$
was calculated by plugging the values from the generated data into the
polynomial; Fig. 3 illustrates this in $2$ dimensions as opposed to $7$.
Figure 3: Random x values (red) and calculated y values (green) on the
generated polynomial function (blue).
After $f(\gamma)$ has been calculated for all the tuples in each array, the
values were combined with the generated data to form a dataset such that each
row contains 8 values, 7 random rational numbers, and the calculated y-value
according to the generated function. Thus, the networks in the experiments
will be trained on the 7 rational numbers to learn the underlying polynomial
function. These datasets were then divided into two training sets and a
testing set containing 80% and 20% of the entries respectively. The training
set was then divided again into two training sets of equal size.
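A sketch of this generation procedure is given below; the polynomial construction is a simplified stand-in, since the exact term ordering and coefficient ranges are not fully specified above, and the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_polynomial_dataset(n_rows=3200, n_features=7, degree=2, noise=0.0):
    """Sketch of the Section 4.1 data generation: random features and a
    random polynomial target, plus the n*d*r noise term described later."""
    X = rng.normal(size=(n_rows, n_features))
    coeffs = rng.normal(size=(degree + 1, n_features))
    # sum over features of c_d * x^d: a simple stand-in for the
    # lexicographically ordered multivariate polynomial f
    y = sum((coeffs[d] * X**d).sum(axis=1) for d in range(degree + 1))
    # n*d*r noise (r drawn from a plain normal here, untruncated)
    y += noise * degree * rng.normal(size=n_rows)
    return X, y
```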
The real data used in this paper comes from the University of Wisconsin
Madison Hospital’s breast cancer dataset[21]. This dataset was also divided
into two equally sized training sets and a single testing set. The breast
cancer dataset contains 569 rows and 32 features. This was split into two
training sets each with 256 training examples and a testing set with 57
examples. The features in the breast cancer data describe tumors. Some of the
features include clump thickness, uniformity of cell size and shape, marginal
adhesion, and others. Furthermore, each of these features was recorded in
three different ways in the data. For each feature, an average, a low, and a
high value were all available in the data. The data preprocessing used
consisted of normalization and minor feature manipulations to get the data in
the right shape to form proper training and testing sets.
### 4.2 Regression
For this experiment, the regression data (as defined above) was taken, then
split into a training set and a testing set with $80\%$ of the examples for
training and $20\%$ of the examples for testing. A neural network was trained
on the training set, then loss on testing set was recorded. Since all the data
was in one place, the resulting model represents a network trained on a
dataset created through data-sharing. Then, the training set was split into
two smaller training sets of equal size. Then, we perform each of the three
methods to train a neural network from the split datasets recording loss for
each. The neural network architecture was chosen to best fit the data and was
kept constant while the test was repeated many times under different
conditions. The conditions varied epochs from 10 to 200, noise in the
regression data from a logarithmic shift with coefficients varying between 1
and 3, size of the datasets from 400 to 32,000, and polynomial degree of the
underlying dataset from 2 to 5. The average loss for all methods, including
the loss of the data-sharing model (denoted “None”), across all tests can be
seen below in Fig. 4. Additionally, the average loss for tests with varying
degrees of added noise can be found in Table 1. The added noise in the data is
given by
$y=f(x_{1},x_{2},\ldots,x_{7})+n\,d\,r$
where $x_{1},x_{2}...x_{7}$ is a data point, $n$ a chosen noise value, $r$ is
a random real number selected from a random normal distribution between -2 and
2, and $f$ is a polynomial function with degree $d$.
Figure 4: Loss comparison of methods on regression data.

Method | Average MSE | Noise n
---|---|---
Average Ensemble | 0.015 | 0
Average Ensemble | 0.011 | 1
Average Ensemble | 0.011 | 2
Series Networks | 0.013 | 0
Series Networks | 0.010 | 1
Series Networks | 0.010 | 2
Transfer Learning | 0.011 | 0
Transfer Learning | 0.007 | 1
Transfer Learning | 0.009 | 2
None | 0.006 | 0
None | 0.008 | 1
None | 0.008 | 2
Table 1: Loss comparison for methods with added noise.
Preperformance is the loss measured on the testing set once the model had
learned from the first dataset. Similarly, postperformance is the loss
measured after the second dataset has been aggregated in. The purpose is to
watch each method converge to the performance of the model obtained through
data-sharing as multiple datasets are aggregated in. After training, all three
methods achieved aggregate performance comparable to the model trained on the
combined “shared” data (None). Here, series networks had the best performance
of the three methods and also the greatest performance increase after
aggregation.
### 4.3 Breast Cancer Classification
For this experiment, the goal is to train a classifier to determine whether a
tumor is benign or malignant. The breast cancer dataset contains 569 rows and
32 features. Similarly to the regression experiment, the data was split, a
neural network architecture was configured for the data, then accuracy values
for the three methods were computed. Additionally, the data-sharing equivalent
model was trained on the data before the training sets were bifurcated and the
accuracy was recorded. Tests were repeated with varied hyperparameters,
including batch size, epochs, and number of neurons. The accuracy values of
the test can be found in Fig. 5, the ROC curve is in Fig. 6, and precision,
recall, and F1 scores are in Table 2.
Here, all of the methods performed better than the equivalent model obtained
through data-sharing. Additionally, this example also provides evidence for
the viability of our method in the healthcare domain.
Figure 5: Accuracy comparison of methods on breast cancer data.
Figure 6: ROC curve for breast cancer data.

Method | Precision | Recall | F1 Score
---|---|---|---
Average Ensemble | 0.76 | 1.00 | 0.87
Series Networks | 1.00 | 0.93 | 0.96
Transfer Learning | 1.00 | 0.93 | 0.96
Table 2: Metrics for compared methods.
From the accuracy graph (Fig. 5) and the ROC curve (Fig. 6), transfer learning
and series networks performed the best, outperforming training on the combined
dataset. This is likely due to the fact that with smaller dataset size,
training on smaller subsets of the data grants more generalization.
## 5 Conclusion
In order for neural network aggregation to be fully recognized as a stronger
alternative to data-sharing, more tests need to be run. Additionally, future
work should examine the scaling of the proposed model, examining for model
convergence as the number of disjoint datasets increases. If transfer learning
or series network learning is able to converge to the same model acquired
through data-sharing by distributing training across many datasets, then the
method would be viable. Furthermore, more studies need to be conducted on
membership inference attacks to lower the security concerns. Since the
membership inference attack is strong against overfit models, it would be
interesting to see what the end behavior of series networks or transfer
learning is after training on many datasets. Despite this, both transfer learning
and series network learning seem to be promising methods for distributing
training on private datasets. Thus, while more tests need to be conducted to
prove the viability of neural network aggregation, this paper establishes an
initial claim for neural network aggregation as a functional alternative to
data-sharing.
## References
* [1] L. Columbus. 10 Ways Machine Learning is Revolutionizing Manufacturing In 2019. Forbes Journal, 2019.
* [2] T. Davenport and R. Kalakota. The Potential for Artificial Intelligence in Healthcare. Future healthcare journal vol. 6,2, 2019.
* [3] B. Bergstein. This is why AI has yet to Reshape Most Businesses. MIT Technology Review, 2019.
* [4] J. He et al. The Practical Implementation of Artificial Intelligence Technologies in Medicine. Nature medicine vol. 25,1, 2019.
* [5] K. Benke and G. Benke. Artificial Intelligence and Big Data in Public Health. International journal of environmental research and public health vol. 15,12 2796. 10, 2018.
* [6] N. Papernot et al. The Limitations of Deep Learning in Adversarial Settings. arXiv preprint arXiv:1511.07528v1, 2015.
* [7] N. Carlini and D. Wagner. Towards Evaluating the Robustness of Neural Networks. arXiv preprint arXiv:1608.04644v2, 2017.
* [8] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial Machine Learning at Scale. arXiv preprint arXiv:1611.01236v2, 2017.
* [9] F. Tramer et al. Ensemble Adversarial Training Attacks and Defenses. arXiv preprint arXiv:1705.07204v5, 2020.
* [10] I. Goodfellow, J. Shlens, and C. Szegedy Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572v3, 2015.
* [11] C. Song, T. Ristenpart, and V. Shmatikov Machine Learning Models that Remember Too Much. arXiv preprint arXiv:1709.07886v1, 2017.
* [12] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. arXiv preprint arXiv:1802.08232v3, 2019.
* [13] C. Clifton, M. Kantarcioglu, J. Vaidya, X. Lin, and M. Y. Zhu. Tools for Privacy Preserving Distributed Data Mining. ACM SIGKDD Special Interest Group on Knowledge Discovery and Data Mining Explorations Newsletter, 4, 2002.
* [14] D. Bogdanov, M. Niitsoo, T. Toft, and J. Willemson. High-performance Secure Multi-Party Computation for Data Mining Applications. IJIS International Journal of Information Security, 11, 403–418, 2012.
* [15] W. Du, Y. S. Han, and S. Chen. Privacy-Preserving Multivariate Statistical Analysis: Linear Regression and Classification. SIAM International Conference on Data Mining, Proceedings, 2004.
* [16] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu. A Survey on Deep Transfer Learning, arXiv preprint arXiv:1808.01974, 2018.
* [17] K. Bonawitz et al. Practical Secure Aggregation for Federated Learning on User-Held Data, arXiv preprint arXiv:1611.04482, 2016.
* [18] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership Inference Attacks Against Machine Learning Models, arXiv preprint arXiv:1610.05820v2, 2017.
* [19] B. Keshav. Autonomous Cars: Past, Present and Future - A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology, ICINCO - 12th International Conference on Informatics in Control, Automation and Robotics, Proceedings, 2015.
* [20] O. Zawacki-Richter, V. Marin, M. Bond, and F. Gouverneur. Systematic Review of Research on Artificial Intelligence Applications in Higher Education – Where are the Educators? International Journal of Education Technology in Higher Education 16, 39, 2019.
* [21] D. Dheeru and G. Casey. UCI Machine Learning Repository University of California, Irvine, School of Information and Computer Sciences, 2017.
* [22] H. Mahbub, B. Jordan, and F. Diego. A Study on CNN Transfer Learning for Image Classification 18th Annual UK Workshop on Computational Intelligence, At Nottingham, Proceedings, 2018.
* [23] K. Weiss, T. Khoshgoftaar, and D. Wang. A Survey of Transfer Learning Journal of Big Data 3, 9, 2016.
* [24] M. Taylor and P. Stone. Transfer Learning for Reinforcement Learning Domains: A Survey Journal of Machine Learning Research 10, 1633-1685, 2009.
* [25] A. Giel and R. Diaz. Recurrent Neural Networks and Transfer Learning for Action Recognition, 2015.
* [26] P. Gupta, P. Malhotra, J. Narwariya, L. Vig, and G. Shroff. Transfer Learning for Clinical Time Series Analysis using Deep Neural Networks arXiv preprint arXiv:1904.00655, 2019.
* [27] Y. Chen, J. Wang, C. Yu, W. Gao, and X. Qin. FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare arXiv preprint arXiv:1907.09173, 2019.
* [28] M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio. Transfusion: Understanding Transfer Learning for Medical Imaging arXiv preprint arXiv:1902.07208, 2019.
# Rare Kaon Decay to Missing Energy: Implications of the NA62 Result for a
$Z^{\prime}$ Model
Téssio B. de Melo1<EMAIL_ADDRESS>, Sergey Kovalenko2<EMAIL_ADDRESS>,
Farinaldo S. Queiroz1,3<EMAIL_ADDRESS>, C. Siqueira1,4<EMAIL_ADDRESS>,
Yoxara S. Villamizar1,3<EMAIL_ADDRESS>

1 International Institute of Physics, Universidade Federal do Rio Grande do Norte, Campus Universitario, Lagoa Nova, Natal-RN 59078-970, Brazil
2 Departamento de Ciencias Físicas, Universidad Andres Bello, Sazie 2212, Santiago, Chile
3 Departamento de Física, Universidade Federal do Rio Grande do Norte, 59078-970, Natal, RN, Brasil
4 Instituto de Física de São Carlos, Universidade de São Paulo, Av. Trabalhador São-carlense 400, São Carlos, Brasil.
###### Abstract
Meson decays offer a good opportunity to probe new physics. The rare kaon
decay $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ is one of the cleanest of them
and, for this reason, is rather sensitive to new physics, in particular,
vector mediators. The NA62 collaboration, running a fixed-target experiment at
CERN, recently reported an unprecedented sensitivity to this decay, namely a
branching fraction of
$BR(K^{+}\rightarrow\pi^{+}\nu\bar{\nu})=(11^{+4.0}_{-3.5})\times 10^{-11}$ at
68% C.L. Vector mediators that couple to neutrinos may yield a sizeable
contribution to this decay. Motivated by the new measurement, we interpret
this result in the context of a concrete $Z^{\prime}$ model, and put our
findings into perspective with the correlated
$K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ decay measured by KOTO collaboration,
current, and future colliders, namely the High-Luminosity and High-Energy LHC.
## I Introduction
Mesons have played a key role in the understanding of the properties of
elementary particles. The introduction of strangeness along with isospin led
us to the eight-fold way, based on the SU(3) flavor symmetry. These
theoretical insights contributed to the discovery of quantum chromodynamics as
we know it today. Another good example is the famous $\theta-\tau$ puzzle: two
different decays were found for charged strange mesons known at the time as
$\theta^{+}$ and $\tau^{+}$. The decay modes had different parities, but the
particles were shown to have the same mass and lifetime. It was indeed a
puzzling observation. Later, it was realized that weak interactions violate
parity and that these two particles were actually the same $K^{+}$-meson.
Additionally, the Glashow-Iliopoulos-Maiani (GIM) mechanism and the charm
quark surfaced as an explanation of the absence of weak flavor-changing
neutral currents in processes such as
$K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$. The discovery of CP
violation in the $K^{0}-\bar{K}^{0}$ system further proved the importance of
meson physics for our understanding of nature. Furthermore, meson systems are
able to access possible new sources of CP violation that are of paramount
importance for explaining the observed baryon-antibaryon asymmetry in our
universe [1]. Lastly, the rare $K^{+}$ decay into neutrinos can efficiently
probe the presence of heavy vector mediators beyond the Standard Model (SM)
[2, 3, 4, 5, 6, 7, 8, 9] via the Feynman diagrams displayed in Fig. 1.
Figure 1: Feynman diagrams that illustrate how vector mediators can contribute
to the $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ decay. The first diagram requires
$Z^{\prime}$ coupling to neutrinos, whereas the second further requires
couplings to the top quark.
The meson decay $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ is a flavor changing
neutral current process that occurs in the SM via the box and penguin
diagrams, with the latter being dominated by the top quark contribution. Due
to the GIM and loop suppression, the SM contribution to this decay is very
small, reading ${\rm BR}(K^{+}\rightarrow\pi^{+}\nu\bar{\nu})=(8.4\pm
1.0)\times 10^{-11}$ [10], while NA62 currently reports ${\rm
BR}(K^{+}\rightarrow\pi^{+}\nu\bar{\nu})=11.0^{+4.0}_{-3.5}\times 10^{-11}$
[11, 12] (results collected in 2016, 2017 and 2018). Therefore, one can
clearly see that the current sensitivity achieved by NA62 is not far from
the SM prediction. Having said that, the NA62 collaboration has been
continuously searching for the rare kaon decay [12]. The KOTO collaboration
has conducted searches as well, but offering weaker constraints [13]. (KOTO
has also recently reported the observation of three anomalous events in
$K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$. This anomaly requires the branching
ratio for this decay mode to be about two orders of magnitude larger than the
SM one [4], indicating the presence of a new light and long-lived particle
with a mass of the order of $100$ MeV. There is no such light particle in our
model, hence the KOTO anomaly will be regarded as a statistical fluke.) To
concretely show the relevance of the recent NA62 result, we will put it into
perspective with other existing limits in a model featuring a heavy vector
mediator. The model is based on the $SU(3)_{C}\times SU(3)_{L}\times
U(1)_{N}$ gauge group, known as the 3-3-1 model. It is well motivated by its
ability to naturally explain the observed replication of the generations and
by nicely hosting dark matter candidates [14, 15, 16, 17, 18, 19, 20, 21] and
addressing neutrino masses [22, 23, 24, 25], among other interesting phenomena
[26, 26, 27, 3, 28, 29, 30, 31, 32, 33]. As a result of the enlarged gauge
group, the 3-3-1 model has several new gauge bosons, such as the $W^{\prime}$
and $Z^{\prime}$ bosons, which are subject to restrictive constraints arising
from collider physics [34, 35, 36, 37], the muon anomalous magnetic moment
[38, 39, 40, 41], and lepton flavor violation [42, 43]. We will investigate
the role of the $Z^{\prime}$ gauge boson in the rare $K^{+}$ decay and then
use it to draw bounds on the $Z^{\prime}$ mass from the
$K_{L}\rightarrow\pi^{0}\bar{\nu}\nu$ and $K^{+}\rightarrow\pi^{+}\bar{\nu}\nu$
decays.
Our work is structured as follows: in Section II, we review the 3-3-1 model
and compute the $Z^{\prime}$ couplings necessary for our analyses; in Section
III, we discuss the computation of the branching fractions of the kaons in our
model; in Section IV, we discuss our results; and finally, in Section V, we
draw our conclusions.
## II The Model
The 3-3-1 models are extensions of the Standard Model based on the local
symmetry group
$\textbf{SU(3)}_{\textbf{C}}\times\textbf{SU(3)}_{\textbf{L}}\times\textbf{U(1)}_{\textbf{N}}$,
where C corresponds to the color charge, L denotes left-handed fermions, and N
is the quantum number of the U(1) group. The general expression for the
electric charge operator in these models is written as,
$\frac{Q}{e}=\frac{1}{2}\left(\lambda_{3}+\beta\lambda_{8}\right)+\text{N}\,\text{I}=\left(\begin{array}[]{c}\frac{1}{3}+\text{N}\\ -\frac{2}{3}+\text{N}\\ \frac{1}{3}+\text{N}\end{array}\right),$ (1)
where $\lambda_{3}=\operatorname{diag}(1,-1,0)$,
$\lambda_{8}=\operatorname{diag}(1,1,-2)/\sqrt{3}$, and I are the diagonal
Gell-Mann matrices and the identity matrix, respectively. We took
$\beta=-\frac{1}{\sqrt{3}}$ because in our work we choose the model known as
3-3-1 with right-handed neutrinos (3-3-1 r.h.n) [44, 45]. However, we
highlight that our conclusions are also applicable to the 3-3-1 model with
neutral fermions proposed in [14]. The hypercharge in this model is given as,
$Y=2Q-\lambda_{3}=2N-\frac{\sqrt{3}\lambda_{8}}{3},$ (2)
which is identical to the standard model one. The left (L) and right
(R)-handed fermionic fields of this model are represented as follows,
$f_{L}^{a}=\left(\begin{array}[]{c}\nu_{L}^{a}\\ \ell_{L}^{a}\\ \left(\nu_{R}^{c}\right)^{a}\end{array}\right)\sim(1,3,-1/3),\qquad\ell_{R}^{a}\sim(1,1,-1),$ (3)
$\begin{gathered}Q_{iL}=\left(\begin{array}[]{c}d_{iL}\\ -u_{iL}\\ d_{iL}^{\prime}\end{array}\right)\sim(3,\overline{3},0),\qquad Q_{3L}=\left(\begin{array}[]{c}u_{3L}\\ d_{3L}\\ T_{L}\end{array}\right)\sim(3,3,1/3),\\ u_{iR}\sim(3,1,2/3),\quad d_{iR}\sim(3,1,-1/3),\quad d_{iR}^{\prime}\sim(3,1,-1/3),\\ u_{3R}\sim(3,1,2/3),\quad d_{3R}\sim(3,1,-1/3),\quad T_{R}\sim(3,1,2/3),\end{gathered}$ (4)
where $a=1,2,3$ and $i=1,2$ are the generation indexes, $f^{a}_{L}$ and
$Q_{iL}$, $Q_{3L}$ represent the left-handed leptonic and quark triplets,
respectively. These fields encompass the SM spectrum like neutrinos
($\nu^{a}=\nu_{e},\nu_{\mu},\nu_{\tau}$), charged leptons
$\ell^{a}=e,\mu,\tau$, and quarks $u_{i}=\overline{u},\overline{c}$,
$d_{i}=\overline{d},\overline{s}$, $u_{3}=t$ and $d_{3}=b$. Besides, there are
particles additional to the SM: the right-handed neutrinos
$\left(\nu_{R}^{c}\right)^{a}$ and three new heavy exotic quarks
$d_{iL}^{\prime}$ and $T_{L}$. In Eqs. (3), (4), we have specified the field
assignments indicating how they transform under the symmetries
$\left(\mathrm{SU}(3)_{c},\mathrm{SU}(3)_{L},\mathrm{U}(1)_{N}\right)$,
respectively. The values of their electric charge and hypercharge can be found
from Eqs. (1) and (2).
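As a quick numerical check of Eq. (1), the lepton triplet with $\text{N}=-1/3$ indeed yields electric charges $(0,-1,0)$ for the neutrino, the charged lepton and the right-handed neutrino; a sketch:

```python
import numpy as np

# Numerical check of Eq. (1) with beta = -1/sqrt(3) and N = -1/3
lam3 = np.diag([1, -1, 0])
lam8 = np.diag([1, 1, -2]) / np.sqrt(3)
beta, N = -1 / np.sqrt(3), -1 / 3
Q = 0.5 * (lam3 + beta * lam8) + N * np.eye(3)
print(np.diag(Q))  # [ 0. -1.  0.]
```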
Furthermore, the 3-3-1 r.h.n model contains three scalar fields $\chi$, $\eta$
and $\rho$ in the following representations
$\displaystyle\chi=\left(\begin{array}[]{c}\chi^{0}\\ \chi^{-}\\ \chi^{\prime 0}\end{array}\right)\sim(1,3,-1/3),\quad\rho=\left(\begin{array}[]{c}\rho^{+}\\ \rho^{0}\\ \rho^{\prime+}\end{array}\right)\sim(1,3,2/3),\quad\eta=\left(\begin{array}[]{c}\eta^{0}\\ \eta^{-}\\ \eta^{\prime 0}\end{array}\right)\sim(1,3,-1/3).$ (16)
These scalar triplets in Eq. (16) are responsible for the spontaneous symmetry
breaking (SSB) in the model, with the following vacuum expectation value (vev)
structure,
$\langle\chi\rangle=\left(\begin{array}[]{c}0\\ 0\\ v_{\chi}\end{array}\right),\quad\langle\rho\rangle=\left(\begin{array}[]{c}0\\ v_{\rho}\\ 0\end{array}\right),\quad\langle\eta\rangle=\left(\begin{array}[]{c}v_{\eta}\\ 0\\ 0\end{array}\right).$ (17)
We will assume that $v_{\chi}\gg v_{\eta},v_{\rho}$, leading to the two-step
SSB,
$\displaystyle\textbf{SU(3)}_{\textbf{L}}\times\textbf{U(1)}_{\textbf{N}}\xrightarrow{\quad\langle\chi\rangle\quad}\textbf{SU(2)}_{\textbf{L}}\times\textbf{U(1)}_{\textbf{Y}}\xrightarrow{\langle\eta\rangle,\langle\rho\rangle}\textbf{U(1)}_{\textbf{Q}}$
with $U(1)_{Q}$ being the $U(1)$ of electrodynamics.
The fermion masses arise from the Yukawa Lagrangian, which reads,
$\displaystyle\mathcal{L}_{Yuk}$
$\displaystyle=\lambda_{1a}\bar{Q}_{1L}d_{aR}\rho+\lambda_{2ia}\bar{Q}_{iL}u_{aR}\rho^{*}+G_{ab}^{\prime}\bar{f}_{L}^{a}e_{R}^{b}\rho$
(18)
$\displaystyle+G_{ab}\varepsilon^{ijk}\left(\bar{f}_{L}^{a}\right)_{i}\left(f_{L}^{b}\right)_{j}^{c}\left(\rho^{*}\right)_{k}+\lambda_{3a}\bar{Q}_{1L}u_{aR}\eta$
$\displaystyle+\lambda_{4ia}\bar{Q}_{iL}d_{aR}\eta^{*}+\lambda_{1}\bar{Q}_{3L}T_{R}\chi+\lambda_{2ij}\bar{Q}_{iL}d_{jR}^{\prime}\chi^{*}+H.c.$
The quark and charged lepton masses are proportional to $v=246$ GeV, where
$v^{2}=v_{\rho}^{2}+v_{\eta}^{2}$ similarly to the SM. The fourth term leads
to a $3\times 3$ antisymmetric neutrino mass matrix [45], which means that the
model has one massless and two degenerate neutrino mass eigenstates. Moreover,
the gauge bosons $W$ and $Z$ acquire mass terms identical to the SM ones. In
addition to the SM fields, there are new massive gauge bosons predicted by the
model as result of the enlarged gauge symmetry, denoted as
$Z^{\prime},V^{\pm}$ and $U^{0},U^{0\dagger}$. The masses of these fields are,
$\displaystyle M_{W^{\pm}}^{2}=\frac{1}{4}g^{2}v^{2},\qquad M_{Z}^{2}=\frac{M_{W^{\pm}}^{2}}{C_{W}^{2}},$ (19)
$\displaystyle M_{Z^{\prime}}^{2}=\frac{g^{2}}{4\left(3-4S_{W}^{2}\right)}\left[4C_{W}^{2}v_{\chi}^{2}+\frac{v^{2}}{C_{W}^{2}}+\frac{v^{2}\left(1-2S_{W}^{2}\right)^{2}}{C_{W}^{2}}\right],$
$\displaystyle M_{V^{\pm}}^{2}=\frac{1}{4}g^{2}\left(v_{\chi}^{2}+v^{2}\right),\qquad M_{U^{0}}^{2}=\frac{1}{4}g^{2}\left(v_{\chi}^{2}+v^{2}\right),$
where $M_{W}\ll M_{U},M_{V}$, $S_{W}=\sin\theta_{W}$, $C_{W}=\cos\theta_{W}$, and $\theta_{W}$ is the Weinberg angle.
The charged (CC) and neutral (NC) currents are found to be,
$\displaystyle\mathcal{L}^{CC}=$ $\displaystyle-\frac{g}{\sqrt{2}}\left[\bar{\nu}_{L}^{a}\gamma^{\mu}e_{L}^{a}W_{\mu}^{+}+\overline{\left(\nu_{R}^{c}\right)^{a}}\gamma^{\mu}e_{L}^{a}V_{\mu}^{+}+\bar{\nu}_{L}^{a}\gamma^{\mu}\left(\nu_{R}^{c}\right)^{a}U_{\mu}^{0}\right]$ (20)
$\displaystyle-\frac{g}{\sqrt{2}}\left[\left(\bar{u}_{3L}\gamma^{\mu}d_{3L}+\bar{u}_{iL}\gamma^{\mu}d_{iL}\right)W_{\mu}^{+}+\left(\bar{T}_{L}\gamma^{\mu}d_{3L}+\bar{u}_{iL}\gamma^{\mu}d_{iL}^{\prime}\right)V_{\mu}^{+}\right.$
$\displaystyle\left.+\left(\bar{u}_{3L}\gamma^{\mu}T_{L}-\bar{d}_{iL}^{\prime}\gamma^{\mu}d_{iL}\right)U_{\mu}^{0}+\mathrm{h.c.}\right],$
$\displaystyle\mathcal{L}^{NC}=$ $\displaystyle\frac{g}{2c_{W}}\left\{\bar{f}\gamma^{\mu}\left[a_{1L}(f)\left(1-\gamma_{5}\right)+a_{1R}(f)\left(1+\gamma_{5}\right)\right]fZ_{\mu}^{1}\right.$ (21)
$\displaystyle\left.+\bar{f}\gamma^{\mu}\left[a_{2L}(f)\left(1-\gamma_{5}\right)+a_{2R}(f)\left(1+\gamma_{5}\right)\right]fZ_{\mu}^{2}\right\}.$
The second and third terms in Eq. (20) violate lepton number and weak isospin [45]. $Z^{1}$ and $Z^{2}$ are the neutral physical gauge bosons, which arise from the mixing of the $Z$ and $Z^{\prime}$ gauge bosons. $a_{1R}(f)$, $a_{1L}(f)$, $a_{2R}(f)$ and $a_{2L}(f)$ are the couplings of the fermions to the $Z^{1}$ and $Z^{2}$ bosons. The mixing angle of these bosons is commonly denoted as $\phi$; when $\phi=0$, the couplings of $Z^{1}$ to the leptons and quarks are the same as those of the $Z$ boson in the SM. Likewise, in this limiting case the couplings of $Z^{2}$ are the same as those of $Z^{\prime}$ [45]. The couplings for the vertices $Z^{\prime}-\nu-\overline{\nu}$, $Z^{\prime}-\overline{d_{i}}-d_{i}$ and $Z^{\prime}-\overline{b}-b$ are shown in Table 1.
Vertex | $Z^{\prime}-\nu-\overline{\nu}$ | $Z^{\prime}-\overline{d_{i}}-d_{i}$ | $Z^{\prime}-\overline{b}-b$
---|---|---|---
Coupling constant | $\frac{1-2S_{W}^{2}}{2\sqrt{3-4S_{W}^{2}}}$ | $-\frac{\sqrt{3-4S_{W}^{2}}}{6}$ | $\frac{3-2S_{W}^{2}}{6\sqrt{3-4S_{W}^{2}}}$
Table 1: $Z^{\prime}$ couplings to neutrinos and the left-handed down-type
quarks, considering $\phi=0$.
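As a quick numerical orientation (our own sketch; the value of $S_{W}^{2}$ is an assumed PDG-like input, not an output of the model), the couplings in Table 1 evaluate to:

```python
import numpy as np

s2w = 0.231                         # sin^2(theta_W), assumed input value
root = np.sqrt(3 - 4 * s2w)

c_nu = (1 - 2 * s2w) / (2 * root)   # Z' - nu - nubar coupling
c_di = -root / 6                    # Z' - d_i - d_i coupling (i = 1, 2)
c_b = (3 - 2 * s2w) / (6 * root)    # Z' - b - b coupling
print(f"nu: {c_nu:.3f}  d_i: {c_di:.3f}  b: {c_b:.3f}")
# nu: 0.187  d_i: -0.240  b: 0.294
```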
In order to study meson physics in our model and investigate its connection to the $Z^{\prime}$ boson, we need to extract the flavor changing neutral currents. To do so, we start by reviewing how the flavor changing terms arise. The quark fields undergo the following standard rotation,
$\left(\begin{array}{c}u^{\prime}\\ c^{\prime}\\ t^{\prime}\end{array}\right)_{L}=V_{L}^{u}\left(\begin{array}{c}u\\ c\\ t\end{array}\right)_{L},\qquad\left(\begin{array}{c}d^{\prime}\\ s^{\prime}\\ b^{\prime}\end{array}\right)_{L}=V_{L}^{d}\left(\begin{array}{c}d\\ s\\ b\end{array}\right)_{L},$
where $V_{L}^{u}$ and $V_{L}^{d}$ are $3\times 3$ unitary matrices relating the interaction (primed) to the mass (unprimed) eigenstates, such that the Cabibbo-Kobayashi-Maskawa (CKM) matrix is $V_{\mathrm{CKM}}\equiv V_{L}^{u\dagger}V_{L}^{d}$. Note that only the left-chiral terms of the Lagrangian (21) lead to quark flavor violation. The right-chiral quark couplings to $Z^{\prime}$ in Eq. (21) are independent of flavor and therefore are flavor-diagonal in the mass eigenstate basis. We can write the left-chiral terms in the form,
$\mathcal{L}_{Z^{\prime}}\supset\frac{g}{C_{W}}(\overline{D_{L}^{\prime}}\gamma^{\mu}Y^{D}_{L}D_{L}^{\prime})Z^{\prime}_{\mu},$
(22)
with $D_{L}^{\prime}=(d^{\prime},s^{\prime},b^{\prime})^{T}$, and
$Y^{D}_{L}=\frac{1}{6\sqrt{3-4S_{W}^{2}}}\,\mathrm{diag}\left(-3+4S_{W}^{2},\;-3+4S_{W}^{2},\;3-2S_{W}^{2}\right).$ (23)
Rotating to the mass eigenstate basis, we get,
$\mathcal{L}_{Z^{\prime}}\supset\frac{g}{C_{W}}(\overline{D_{L}}\gamma^{\mu}Y^{D\prime}_{L}D_{L})Z^{\prime}_{\mu}=\Delta^{sd}_{L}(\overline{s_{L}}\gamma^{\mu}d_{L})Z^{\prime}_{\mu}+...,$
(24)
where $D_{L}^{\prime}=V_{L}^{d}D_{L}$ and $Y^{D\prime}_{L}=V_{L}^{d\dagger}Y^{D}_{L}V_{L}^{d}$. Since the first two diagonal entries of $Y^{D}_{L}$ are equal, the unitarity of $V_{L}^{d}$ implies that only the third row of $V_{L}^{d}$ generates flavor violation, and the coupling between the quarks $d$ and $s$ is found to be
$\Delta_{L}^{sd}=\frac{gC_{W}}{\sqrt{3-4S_{W}^{2}}}\,V_{L32}^{*}V_{L31}.$ (25)
Analogously for the neutrino-$Z^{\prime}$ coupling we have,
$\Delta_{L}^{\nu\bar{\nu}}=\frac{g}{2C_{W}}\frac{1-2S_{W}^{2}}{\sqrt{3-4S_{W}^{2}}}.$
(26)
In principle, we can vary the entries of the matrix $V_{L}^{d}$ freely, since the CKM matrix constrains only the combination $V_{L}^{u\dagger}V_{L}^{d}$. We therefore choose the following parametrization of the $V_{L}^{d}$ matrix [46],
$V_{L}^{d}=\left(\begin{array}{ccc}\tilde{c}_{12}\tilde{c}_{13}&\tilde{s}_{12}\tilde{c}_{23}e^{i\delta_{3}}-\tilde{c}_{12}\tilde{s}_{13}\tilde{s}_{23}e^{i(\delta_{1}-\delta_{2})}&\tilde{c}_{12}\tilde{c}_{23}\tilde{s}_{13}e^{i\delta_{1}}+\tilde{s}_{12}\tilde{s}_{23}e^{i(\delta_{2}+\delta_{3})}\\ -\tilde{c}_{13}\tilde{s}_{12}e^{-i\delta_{3}}&\tilde{c}_{12}\tilde{c}_{23}+\tilde{s}_{12}\tilde{s}_{13}\tilde{s}_{23}e^{i(\delta_{1}-\delta_{2}-\delta_{3})}&-\tilde{s}_{12}\tilde{s}_{13}\tilde{c}_{23}e^{i(\delta_{1}-\delta_{3})}-\tilde{c}_{12}\tilde{s}_{23}e^{i\delta_{2}}\\ -\tilde{s}_{13}e^{-i\delta_{1}}&-\tilde{c}_{13}\tilde{s}_{23}e^{-i\delta_{2}}&\tilde{c}_{13}\tilde{c}_{23}\end{array}\right).$ (27)
where $\tilde{s}_{ij}=\sin{\tilde{\theta}_{ij}}$,
$\tilde{c}_{ij}=\cos{\tilde{\theta}_{ij}}$ and $\delta_{i}$ are the phases,
with $i,j=1,2,3$. For our purposes, the following entries will be important,
$\displaystyle V_{L31}^{d}=-\tilde{s}_{13}e^{-i\delta_{1}},$ (28)
$\displaystyle V_{L32}^{d}=-\tilde{c}_{13}\tilde{s}_{23}e^{-i\delta_{2}},$ (29)
and the product that appears in the $\Delta_{L}^{sd}$ coupling is
$V_{L31}^{d}V_{L32}^{d*}=\tilde{s}_{13}\tilde{c}_{13}\tilde{s}_{23}e^{-i(\delta_{1}-\delta_{2})}\equiv|V_{L32}^{d*}V_{L31}^{d}|e^{-i\delta},$ (30)
where we leave the product $|V_{L32}^{d*}V_{L31}^{d}|$ and the phase $\delta$
as free parameters.
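As a quick consistency check of Eqs. (28)–(30), the following sketch (with hypothetical mixing angles and phases chosen purely for illustration) verifies numerically that the modulus and phase of $V_{L32}^{d*}V_{L31}^{d}$ come out as $\tilde{s}_{13}\tilde{c}_{13}\tilde{s}_{23}$ and $\delta=\delta_{1}-\delta_{2}$:

```python
import numpy as np

# Hypothetical inputs, chosen only to illustrate the parametrization
th13, th23, d1, d2 = 0.05, 0.02, 0.7, 0.3

V31 = -np.sin(th13) * np.exp(-1j * d1)                  # Eq. (28)
V32 = -np.cos(th13) * np.sin(th23) * np.exp(-1j * d2)   # Eq. (29)

prod = np.conj(V32) * V31                               # enters Eq. (25)
print(abs(prod), np.sin(th13) * np.cos(th13) * np.sin(th23))  # equal moduli
print(-np.angle(prod), d1 - d2)                         # delta = delta1 - delta2
```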
## III Kaon decays
The rare Kaon decay modes $K^{+}\to\pi^{+}\nu\bar{\nu}$ and
$K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ are considered golden modes in flavor
physics, as they are very well understood theoretically and are sensitive to
new physics contributions. In the SM both decays occur only at loop level and
are dominated by $Z$ penguin and box diagrams. The corresponding branching
ratios have been calculated to high precision, including NNLO QCD and electroweak corrections as well as non-perturbative and isospin breaking effects [47, 48, 49, 50, 51].
The decay $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ is CP conserving, whereas
$K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ is CP violating. In the 331 model, the
new sources of flavor and CP violation which contribute to these decays come
from the $Z^{\prime}$ interactions with ordinary quarks and leptons, as
discussed above. Although these couplings induce the transitions at tree
level, they are suppressed by the large $Z^{\prime}$ mass.
Following the notation of Ref. [52], we can write the branching ratios for the
Kaon decay modes $K\to\pi\nu\bar{\nu}$ as,
$BR(K^{+}\rightarrow\pi^{+}\nu\bar{\nu})=\kappa_{+}\left[\left(\frac{\text{Im}\,X_{\text{eff}}}{\lambda^{5}}\right)^{2}+\left(\frac{\text{Re}\,X_{\text{eff}}}{\lambda^{5}}-\bar{P}_{c}(X)\right)^{2}\right],$ (31)
and
$BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})=\kappa_{L}\left(\frac{\text{Im}X_{eff}}{\lambda^{5}}\right)^{2}.$
(32)
In these expressions $\lambda$ denotes the Cabibbo angle, and $\kappa_{+}$ and $\kappa_{L}$ are given by
$\kappa_{+}=(5.21\pm 0.025)\cdot
10^{-11}\left(\frac{\lambda}{0.2252}\right)^{8},$ $\kappa_{L}=(2.25\pm
0.01)\cdot 10^{-10}\left(\frac{\lambda}{0.2252}\right)^{8},$
and the charm contribution is summarized by
$\bar{P}_{c}(X)=\left(1-\frac{\lambda^{2}}{2}\right)P_{c}(X),$
with
$P_{c}(X)=(0.42\pm 0.03)\left(\frac{0.2252}{\lambda}\right)^{4}.$
$X_{\text{eff}}$ describes the contribution of short distance physics,
$X_{\text{eff}}=V_{ts}^{*}V_{td}X_{L}(K),$
where,
$X_{L}(K)=\eta_{X}X_{0}(x_{t})+\frac{\Delta_{L}^{\nu\bar{\nu}}}{g_{SM}^{2}m_{Z^{\prime}}^{2}}\frac{\Delta_{L}^{sd}}{V_{ts}^{*}V_{td}},$
(33)
and,
$g_{SM}^{2}=4\frac{m_{W}^{2}G_{F}^{2}}{2\pi^{2}}=1.78137\times 10^{-7}\text{\
GeV}^{-2}.$
The first term in Eq. (33) represents the SM contribution, which is dominated
by $Z$-penguin and box diagrams, and includes QCD and electroweak corrections.
The factor $\eta_{X}$ is close to unity, $\eta_{X}=0.994$, and
$X_{0}(x_{t})=\frac{x_{t}}{8}\left[\frac{x_{t}+2}{x_{t}-1}+\frac{3x_{t}-6}{(x_{t}-1)^{2}}\ln
x_{t}\right],$
with $x_{t}=m_{t}^{2}/m_{W}^{2}$. The 331 contribution is contained in the second term of Eq. (33), with $\Delta_{L}^{sd}$ and $\Delta_{L}^{\nu\bar{\nu}}$ given in Eqs. (25) and (26), respectively. In the absence of the $Z^{\prime}$ we have $\Delta_{L}^{\nu\bar{\nu}}=\Delta_{L}^{sd}=0$, and the SM result is recovered.
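To make Eqs. (31)–(33) concrete, here is a minimal Python sketch of the branching-ratio calculation. The CKM entries, $\sin^{2}\theta_{W}$, the SU(2)$_{L}$ coupling and the top mass below are illustrative PDG-like placeholders that we supply for this sketch (they are not fixed by the paper), so the printed numbers are indicative only:

```python
import numpy as np

# Illustrative, assumed inputs (PDG-like placeholders, not model outputs)
s2w = 0.231                                 # sin^2(theta_W)
g, cw = 0.652, np.sqrt(1 - 0.231)           # SU(2)_L coupling, cos(theta_W)
lam = 0.2252                                # Cabibbo angle
Vts = -0.0405                               # illustrative CKM entries
Vtd = 0.0087 * np.exp(-1j * 0.39)
xt = (163.0 / 80.4) ** 2                    # m_t^2 / m_W^2
gSM2 = 1.78137e-7                           # GeV^-2
kap_p, kap_L, etaX = 5.21e-11, 2.25e-10, 0.994
Pc_bar = (1 - lam ** 2 / 2) * 0.42          # charm contribution

def X0(x):
    # SM loop function quoted above Eq. (33)
    return x / 8 * ((x + 2) / (x - 1) + (3 * x - 6) / (x - 1) ** 2 * np.log(x))

def branching_ratios(mZp, delta, VV=1e-3):
    """BR(K+ -> pi+ nu nubar) and BR(KL -> pi0 nu nubar) via Eqs. (31)-(33)."""
    D_nn = g / (2 * cw) * (1 - 2 * s2w) / np.sqrt(3 - 4 * s2w)        # Eq. (26)
    D_sd = g * cw / np.sqrt(3 - 4 * s2w) * VV * np.exp(-1j * delta)   # Eqs. (25), (30)
    XL = etaX * X0(xt) + D_nn * D_sd / (gSM2 * mZp ** 2 * np.conj(Vts) * Vtd)
    Xeff = np.conj(Vts) * Vtd * XL
    BRp = kap_p * ((Xeff.imag / lam ** 5) ** 2
                   + (Xeff.real / lam ** 5 - Pc_bar) ** 2)            # Eq. (31)
    BRL = kap_L * (Xeff.imag / lam ** 5) ** 2                         # Eq. (32)
    return BRp, BRL

print(branching_ratios(1e9, 0.0))  # SM-like limit: roughly (8.6e-11, 2.6e-11)
print(branching_ratios(3e3, 0.0))  # 3 TeV Z': BR(K+) shifts down (destructive
                                   # interference); BR(KL) barely moves at delta=0
```

The behavior at $\delta=0$ (a shifted $K^{+}$ rate with an essentially unchanged $K_{L}$ rate) previews the pattern discussed in Section IV.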
The decays $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ and
$K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ are related to each other in the SM, via
isospin symmetry, since both transitions are ruled by the same short distance
operator. This interdependence leads to the Grossman-Nir limit [53],
$BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})\leq
4.3BR(K^{+}\rightarrow\pi^{+}\nu\bar{\nu}).$ (34)
For a fixed value of $BR(K^{+}\rightarrow\pi^{+}\nu\bar{\nu})$, this
theoretical bound provides an upper limit for
$BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})$, which is typically still stronger
than the current experimental limits. The bound remains valid in SM extensions
in which the new physics is much heavier than the Kaon mass. In particular,
the 331 model obeys the bound, as we will show in the next section.
Having presented the formulas that summarize the predictions of the 331 model for meson FCNC processes, we can now discuss the implications of the experimental searches for flavor violating kaon decays on the parameter space of the model.
Figure 2: Branching ratio versus $Z^{\prime}$ mass for the $K_{L}$, BR($K_{L}\to\pi^{0}\nu\bar{\nu}$) (left panel), and for the $K^{+}$, BR($K^{+}\to\pi^{+}\nu\bar{\nu}$) (right panel). The gray band is the region excluded by the KOTO experiment, and the blue region is the one allowed by the NA62 experiment. The color bar represents the variation of the phase $\delta$; for details see the text.
## IV Results
In this section, we present our results for the FCNC processes using Eqs. (31) and (32) for the 331 r.h.n model, but we emphasize that our findings are also applicable to the 3-3-1LHN [14], which is a 3-3-1 version that features heavy neutral leptons instead of right-handed neutrinos. We compare our results with the latest limits from KOTO [54] and from NA62 Run 1 (2016+2017+2018) [11, 12] (see also https://indico.cern.ch/event/868940/contributions/3815641/attachments/2080353/3496097/RadoslavMarchevski_ICHEP_2020.pdf). In Fig. 2, we display the branching ratios for $K_{L}$ (left panel) and $K^{+}$ (right panel) versus the $Z^{\prime}$ mass for the 331 r.h.n model, combining the SM contribution and the new one provided by the $Z^{\prime}$ boson, as discussed above. We compute both branching ratios varying the $Z^{\prime}$ mass and the phase $\delta$ (color bar), with the product fixed at $|V_{L32}^{d*}V_{L31}^{d}|=10^{-3}$. From the left panel of Fig. 2 one
can conclude that the KOTO experiment excludes at most $Z^{\prime}$ masses around $400$ GeV, which occurs for $\delta>1.4$. However, from the right panel of Fig. 2, we find that $Z^{\prime}$ masses below $3\times 10^{3}$ GeV might be excluded depending on the value adopted for the phase $\delta$. For $\delta\rightarrow\pi/2$, the NA62 sensitivity to heavy $Z^{\prime}$ mediators severely weakens. Hence, NA62 yields limits complementary to other existing probes [55, 56, 35]. We can also see the presence of a dip due to destructive interference between the $Z^{\prime}$ and the SM contributions, where the branching ratio lies below the SM predicted value. In the plot it occurs for $Z^{\prime}$ masses in the $600-900$ GeV range, but in general its depth and location depend on the combination of $|V_{L32}^{d*}V_{L31}^{d}|$ and $\delta$. Notice that there is no dip when $\delta=\pi/2$, while it reaches its maximum depth for $\delta=0$.
Figure 3: Excluded regions of the 331 r.h.n parameter space, in the plane $|V_{L32}^{d*}V_{L31}^{d}|$ versus $m_{Z^{\prime}}$, for $\delta=0$ (left panel) and $\delta=\pi/2$ (right panel). The colored regions represent the limits from NA62 (cyan), KOTO (blue) and the current LHC (gray), together with the HL-LHC and HE-LHC prospects (dashed lines).
We have drawn quantitative conclusions based on a particular value of $V_{L32}^{d*}V_{L31}^{d}$. To assess how our bounds change for different choices of this product, we examine the range $10^{-4}<|V_{L32}^{d*}V_{L31}^{d}|<10^{-1}$. In Fig. 3 we show exclusion plots in the plane $|V_{L32}^{d*}V_{L31}^{d}|$ versus $m_{Z^{\prime}}$, for fixed $\delta=0$ (left panel) and $\delta=\pi/2$ (right panel). Here we combine limits from NA62, KOTO, the LHC, the High-Luminosity LHC (HL-LHC) and the High-Energy LHC (HE-LHC). The strongest collider bounds on the $Z^{\prime}$ mass stem from the resonant production of a $Z^{\prime}$ decaying into dileptons [40].
There is an interesting interplay between collider and flavor physics. While collider bounds rely mostly on the interaction strength between the $Z^{\prime}$ boson and fermions, which is fixed in 3-3-1 models, the flavor bounds weaken as $|V_{L32}^{d*}V_{L31}^{d}|$ decreases. The collider bounds displayed in Fig. 3 are conservative, as they take into account the presence of $Z^{\prime}$ decays into right-handed neutrinos and exotic quarks, which can be light enough that these decays are not kinematically forbidden. The original lower mass bound reads $m_{Z^{\prime}}>4$ TeV [55], but those exotic decays were ignored in [55]; if more decay channels are open, the bound weakens. To account for this uncertainty in the $Z^{\prime}$ decay modes, we conservatively assume a branching fraction of 50% into charged leptons, which leads to the gray region in Fig. 3, in agreement with [41]. We also show the prospects for the HL-LHC, with $3\,\mathrm{ab}^{-1}$ of integrated luminosity, and for the HE-LHC, which corresponds to an integrated luminosity of $15\,\mathrm{ab}^{-1}$ at a center-of-mass energy of $27$ TeV. These projected collider limits were obtained using the code described in [57], which forecasts lower mass limits for resonance searches, precisely our case.
We found that the NA62 bounds from the decay $K^{+}\rightarrow\pi^{+}\nu\bar{\nu}$ are rather restrictive, providing stronger limits than those from colliders over a significant region of the parameter space. Looking at the left panel of Fig. 3, for instance, where $\delta=0$, NA62 can exclude $Z^{\prime}$ masses as high as $10$ TeV if $|V_{L32}^{d*}V_{L31}^{d}|$ is of order $10^{-2}$ or larger. In the same vein, NA62 forces the product $|V_{L32}^{d*}V_{L31}^{d}|$ to remain below $\sim 10^{-3}$ when $m_{Z^{\prime}}\sim 2$ TeV.
On the other hand, these parameters are completely unconstrained by KOTO when $\delta=0$, since the contribution from the $Z^{\prime}$ to the decay $K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ vanishes in this case. In the absence of new CP violating sources, $BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})$ takes the same value as in the SM, since this process then occurs only through the SM contribution to CP violation via the CKM matrix. This can be easily understood from Eq. (32), which shows that $BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})$ depends only on the imaginary part of $X_{\text{eff}}$, i.e. on $\mathrm{Im}(V_{ts}^{*}V_{td})$ when $\delta=0$. Still in the left panel of Fig. 3, we notice that the Grossman-Nir limit does not appear in the range shown, because the suppressed values of $BR(K_{L}\rightarrow\pi^{0}\nu\bar{\nu})$ make this bound easily satisfied for practically any reasonable value of $|V_{L32}^{d*}V_{L31}^{d}|$. However, as $\delta$ increases from $0$ to $\pi/2$, this bound becomes more relevant, and likewise the KOTO exclusion region gradually grows, reflecting the enhancement of the $K_{L}\rightarrow\pi^{0}\nu\bar{\nu}$ decay, while the NA62 exclusion region slightly shrinks. Nevertheless, even with the maximum enhancement at $\delta=\pi/2$ (right panel), the KOTO bounds remain less constraining than NA62 and the Grossman-Nir limit.
## V Discussion
Our conclusions relied on the presence of flavor changing interactions involving the $Z^{\prime}$ boson. Since the model features a large scalar sector, there could be additional sources of flavor changing interactions arising from the heavy scalar fields; however, these contributions have been shown to be subdominant and can thus be safely ignored. Moreover, the entire reasoning was based on the 3-3-1 model with right-handed neutrinos, but our results are also applicable to the 3-3-1 model where the right-handed neutrinos are replaced by heavy neutral leptons, because these models share the same neutral current. In summary, our findings are relevant for two different models and robust with respect to the details of the scalar sector.
## VI Conclusion
In this work we explored the FCNC processes mediated by a $Z^{\prime}$ gauge boson featured in both the 3-3-1 r.h.n and 3-3-1LHN models. We computed the $K^{+}$ and $K_{L}$ decay rates to missing energy, considering the extra contributions from the $Z^{\prime}$ in addition to the SM contribution, leaving the quark mixing matrix and the $Z^{\prime}$ mass as free parameters. We performed a complementary analysis using the results from NA62, KOTO, and the LHC (current data and projections) to set bounds on the 331 r.h.n parameters. We found that the latest result of the NA62 experiment constrains a large region of the parameter space, setting lower limits on the $Z^{\prime}$ mass that can be more stringent than those from dilepton searches at the LHC. For example, we can impose $m_{Z^{\prime}}>10$ TeV for $|V_{L32}^{*}V_{L31}|\sim 10^{-1}$, while $|V_{L32}^{*}V_{L31}|\lesssim 2\times 10^{-3}$ for $m_{Z^{\prime}}=3$ TeV. These results apply for $\delta=0$, where the sensitivity of NA62 is maximal; when the new CP violating effects are large, our constraints weaken.
###### Acknowledgements.
The authors thank Antonio Santos and Diego Cogollo for discussions. TM, CS and FSQ thank UFRN and MEC for financial support. FSQ also acknowledges the
CNPq grants 303817/2018-6 and 421952/2018-0, the financial support from ICTP-
SAIFR FAPESP grant 2016/01343-7, and the Serrapilheira Institute (grant number
Serra-1912-31613). FSQ and CS have been supported by the São Paulo Research
Foundation (FAPESP) through Grant No 2015/15897-1. CS is supported by grant
2020/00320-9, São Paulo Research Foundation (FAPESP). SK acknowledges the
support of the FONDECYT (Chile) grant No 1190845. Y. S. Villamizar
acknowledges the financial support from CAPES under grants
88882.375870/2019-01. This work was supported by the Serrapilheira Institute
(grant number Serra-1912-31613). We thank the High Performance Computing
Center (NPAD) at UFRN for providing computational resources.
## References
* Dine and Kusenko [2003] M. Dine and A. Kusenko, The Origin of the matter - antimatter asymmetry, Rev. Mod. Phys. 76, 1 (2003), arXiv:hep-ph/0303065 .
* Cogollo _et al._ [2012] D. Cogollo, A. de Andrade, F. Queiroz, and P. Rebello Teles, Novel sources of Flavor Changed Neutral Currents in the $331_{RHN}$ model, Eur. Phys. J. C 72, 2029 (2012), arXiv:1201.1268 [hep-ph] .
* Queiroz _et al._ [2016] F. S. Queiroz, C. Siqueira, and J. W. F. Valle, Constraining Flavor Changing Interactions from LHC Run-2 Dilepton Bounds with Vector Mediators, Phys. Lett. B 763, 269 (2016), arXiv:1608.07295 [hep-ph] .
* Kitahara _et al._ [2020] T. Kitahara, T. Okui, G. Perez, Y. Soreq, and K. Tobioka, New physics implications of recent search for $K_{L}\to\pi^{0}\nu\bar{\nu}$ at KOTO, Phys. Rev. Lett. 124, 071801 (2020), arXiv:1909.11111 [hep-ph] .
* Borah _et al._ [2020] D. Borah, L. Mukherjee, and S. Nandi, Low scale U(1)X gauge symmetry as an origin of dark matter, neutrino mass and flavour anomalies, JHEP 12, 052, arXiv:2007.13778 [hep-ph] .
* Dutta _et al._ [2020] B. Dutta, S. Ghosh, and T. Li, Explaining $(g-2)_{\mu,e}$, the KOTO anomaly and the MiniBooNE excess in an extended Higgs model with sterile neutrinos, Phys. Rev. D 102, 055017 (2020), arXiv:2006.01319 [hep-ph] .
* Jho _et al._ [2020] Y. Jho, S. M. Lee, S. C. Park, Y. Park, and P.-Y. Tseng, Light gauge boson interpretation for (g $-$ 2)μ and the KL→ $\pi^{0}$ \+ (invisible) anomaly at the J-PARC KOTO experiment, JHEP 04, 086, arXiv:2001.06572 [hep-ph] .
* Aebischer _et al._ [2020] J. Aebischer, A. J. Buras, and J. Kumar, Another SMEFT Story: $Z^{\prime}$ Facing New Results on $\varepsilon^{\prime}/\varepsilon$, $\Delta M_{K}$ and $K\to\pi\nu\bar{\nu}$, JHEP 12, 097, arXiv:2006.01138 [hep-ph] .
* Kang and Shigekami [2020] Z. Kang and Y. Shigekami, $(g-2)_{\mu}$ Versus $K\to\pi+E_{miss}$ Induced by the $(B-L)_{23}$ Boson, (2020), arXiv:2008.09793 [hep-ph] .
* Buras _et al._ [2015] A. J. Buras, D. Buttazzo, J. Girrbach-Noe, and R. Knegjens, ${K}^{+}\to{\pi}^{+}\nu\overline{\nu}$ and ${K}_{L}\to{\pi}^{0}\nu\overline{\nu}$ in the Standard Model: status and perspectives, JHEP 11, 033, arXiv:1503.02693 [hep-ph] .
* Marchevski [2020] R. Marchevski, New result on the search for the $K^{+}\to\pi^{+}\nu\bar{\nu}$ decay at the NA62 experiment at CERN, in _40th International Conference on High Energy Physics_ (https://pos.sissa.it/390/398/pdf, 2020).
* Cortina Gil _et al._ [2020] E. Cortina Gil _et al._ (NA62), An investigation of the very rare ${K}^{+}\to{\pi}^{+}\nu\overline{\nu}$ decay, JHEP 11, 042, arXiv:2007.08218 [hep-ex] .
* Ahn _et al._ [2020] J. Ahn _et al._ (KOTO), Study of the $K_{L}\\!\to\\!\pi^{0}\nu\overline{\nu}$ decay at the J-PARC KOTO experiment, (2020), arXiv:2012.07571 [hep-ex] .
* Mizukoshi _et al._ [2011] J. Mizukoshi, C. de S.Pires, F. Queiroz, and P. Rodrigues da Silva, WIMPs in a 3-3-1 model with heavy Sterile neutrinos, Phys. Rev. D 83, 065024 (2011), arXiv:1010.4097 [hep-ph] .
* Kelso _et al._ [2014a] C. Kelso, C. A. de S. Pires, S. Profumo, F. S. Queiroz, and P. S. Rodrigues da Silva, A 331 WIMPy Dark Radiation Model, Eur. Phys. J. C 74, 2797 (2014a), arXiv:1308.6630 [hep-ph] .
* Profumo and Queiroz [2014] S. Profumo and F. S. Queiroz, Constraining the $Z^{\prime}$ mass in 331 models using direct dark matter detection, Eur. Phys. J. C 74, 2960 (2014), arXiv:1307.7802 [hep-ph] .
* Cogollo _et al._ [2014] D. Cogollo, A. X. Gonzalez-Morales, F. S. Queiroz, and P. R. Teles, Excluding the Light Dark Matter Window of a 331 Model Using LHC and Direct Dark Matter Detection Data, JCAP 11, 002, arXiv:1402.3271 [hep-ph] .
* Montero _et al._ [2018] J. Montero, A. Romero, and B. Sánchez-Vega, Axion dark matter in a $3-3-1$ model, Phys. Rev. D 97, 063015 (2018), arXiv:1709.04535 [hep-ph] .
* Carvajal _et al._ [2017] C. Carvajal, B. Sánchez-Vega, and O. Zapata, Linking axionlike dark matter to neutrino masses, Phys. Rev. D 96, 115035 (2017), arXiv:1704.08340 [hep-ph] .
* Abdalla _et al._ [2019] E. Abdalla _et al._ , Brazilian Community Report on Dark Matter, (2019), arXiv:1912.10076 [hep-ph] .
* Leite _et al._ [2020] J. Leite, A. Morales, J. W. Valle, and C. A. Vaquera-Araujo, Dark matter stability from Dirac neutrinos in scotogenic 3-3-1-1 theory, Phys. Rev. D 102, 015022 (2020), arXiv:2005.03600 [hep-ph] .
* Vien _et al._ [2019] V. Vien, H. Long, and A. Cárcamo Hernández, Lepton masses and mixings in a $T^{\prime}$ flavoured 3-3-1 model with type I and II seesaw mechanisms, Mod. Phys. Lett. A 34, 1950005 (2019), arXiv:1812.07263 [hep-ph] .
* Nguyen _et al._ [2018] T. Nguyen, T. T. Le, T. Hong, and L. Hue, Decay of standard model-like Higgs boson $h\rightarrow\mu\tau$ in a 3-3-1 model with inverse seesaw neutrino masses, Phys. Rev. D 97, 073003 (2018), arXiv:1802.00429 [hep-ph] .
* Cárcamo Hernández _et al._ [2019a] A. Cárcamo Hernández, N. A. Pérez-Julve, and Y. Hidalgo Velásquez, Fermion masses and mixings and some phenomenological aspects of a 3-3-1 model with linear seesaw mechanism, Phys. Rev. D 100, 095025 (2019a), arXiv:1907.13083 [hep-ph] .
* Cárcamo Hernández _et al._ [2019b] A. Cárcamo Hernández, Y. Hidalgo Velásquez, and N. A. Pérez-Julve, A 3-3-1 model with low scale seesaw mechanisms, Eur. Phys. J. C 79, 828 (2019b), arXiv:1905.02323 [hep-ph] .
* Van Dong _et al._ [2019] P. Van Dong, N. Ngan, T. Tham, L. Thien, and N. Thuy, Phenomenology of the simple 3-3-1 model with inert scalars, Phys. Rev. D 99, 095031 (2019), arXiv:1512.09073 [hep-ph] .
* Alves _et al._ [2017] A. Alves, G. Arcadi, P. Dong, L. Duarte, F. S. Queiroz, and J. W. F. Valle, Matter-parity as a residual gauge symmetry: Probing a theory of cosmological dark matter, Phys. Lett. B 772, 825 (2017), arXiv:1612.04383 [hep-ph] .
* Dong _et al._ [2018] P. Dong, D. Huong, F. S. Queiroz, J. W. F. Valle, and C. Vaquera-Araujo, The Dark Side of Flipped Trinification, JHEP 04, 143, arXiv:1710.06951 [hep-ph] .
* Huong _et al._ [2019] D. Huong, D. Dinh, L. Thien, and P. Van Dong, Dark matter and flavor changing in the flipped 3-3-1 model, JHEP 08, 051, arXiv:1906.05240 [hep-ph] .
* Arcadi _et al._ [2020] G. Arcadi, M. Lindner, J. Martins, and F. S. Queiroz, New physics probes: Atomic parity violation, polarized electron scattering and neutrino-nucleus coherent scattering, Nucl. Phys. B 959, 115158 (2020), arXiv:1906.04755 [hep-ph] .
* Van Loi _et al._ [2020] D. Van Loi, C. H. Nam, and P. Van Dong, Dark matter in the fully flipped 3-3-1-1 model, (2020), arXiv:2012.10979 [hep-ph] .
* Duy _et al._ [2020] N. Duy, T. Inami, and D. Huong, Physical constraints derived from FCNC in the 3-3-1-1 model, (2020), arXiv:2009.09698 [hep-ph] .
* Dias _et al._ [2020] A. G. Dias, J. Leite, B. Sánchez-Vega, and W. C. Vieira, Dynamical symmetry breaking and fermion mass hierarchy in the scale-invariant 3-3-1 model, Phys. Rev. D 102, 015021 (2020), arXiv:2005.00556 [hep-ph] .
* Cao and Zhang [2016] Q.-H. Cao and D.-M. Zhang, Collider Phenomenology of the 3-3-1 Model, (2016), arXiv:1611.09337 [hep-ph] .
* Nepomuceno and Meirose [2020] A. Nepomuceno and B. Meirose, Limits on 331 vector bosons from LHC proton collision data, Phys. Rev. D 101, 035017 (2020), arXiv:1911.12783 [hep-ph] .
* Nepomuceno _et al._ [2020] A. Nepomuceno, B. Meirose, G. Marvila, and M. Viera, Exclusion Limits on Neutral, Singly and Doubly Charged Vector Bosons at LHC, PoS EPS-HEP2019, 553 (2020).
* Liu _et al._ [2011] Y.-B. Liu, A.-Q. An, and H.-M. Han, The 3-3-1 model with RH neutrinos and associated Z H production at high-energy e+ e- collider, Braz. J. Phys. 41, 66 (2011).
* Kelso _et al._ [2014b] C. Kelso, P. Pinheiro, F. S. Queiroz, and W. Shepherd, The Muon Anomalous Magnetic Moment in the Reduced Minimal 3-3-1 Model, Eur. Phys. J. C 74, 2808 (2014b), arXiv:1312.0051 [hep-ph] .
* Kelso _et al._ [2014c] C. Kelso, H. Long, R. Martinez, and F. S. Queiroz, Connection of $g-2_{\mu}$, electroweak, dark matter, and collider constraints on 331 models, Phys. Rev. D 90, 113011 (2014c), arXiv:1408.6203 [hep-ph] .
* de Jesus _et al._ [2020] A. S. de Jesus, S. Kovalenko, C. A. de S. Pires, F. S. Queiroz, and Y. S. Villamizar, Dead or alive? Implications of the muon anomalous magnetic moment for 3-3-1 models, Phys. Lett. B 809, 135689 (2020), arXiv:2003.06440 [hep-ph] .
* De Jesus _et al._ [2020] A. De Jesus, S. Kovalenko, F. Queiroz, C. Siqueira, and K. Sinha, Vectorlike leptons and inert scalar triplet: Lepton flavor violation, $g-2$, and collider searches, Phys. Rev. D 102, 035004 (2020), arXiv:2004.01200 [hep-ph] .
* Arcadi _et al._ [2018] G. Arcadi, C. Ferreira, F. Goertz, M. Guzzo, F. S. Queiroz, and A. Santos, Lepton Flavor Violation Induced by Dark Matter, Phys. Rev. D 97, 075022 (2018), arXiv:1712.02373 [hep-ph] .
* Ferreira _et al._ [2019] M. M. Ferreira, T. B. de Melo, S. Kovalenko, P. R. Pinheiro, and F. S. Queiroz, Lepton Flavor Violation and Collider Searches in a Type I + II Seesaw Model, Eur. Phys. J. C 79, 955 (2019), arXiv:1903.07634 [hep-ph] .
* Long [1996a] H. N. Long, SU(3)-L x U(1)-N model for right-handed neutrino neutral currents, Phys. Rev. D54, 4691 (1996a), arXiv:hep-ph/9607439 [hep-ph] .
* Long [1996b] H. N. Long, The 331 model with right handed neutrinos, Phys. Rev. D53, 437 (1996b), arXiv:hep-ph/9504274 [hep-ph] .
* Buras _et al._ [2013] A. J. Buras, F. De Fazio, J. Girrbach, and M. V. Carlucci, The Anatomy of Quark Flavour Observables in 331 Models in the Flavour Precision Era, JHEP 02, 023, arXiv:1211.1237 [hep-ph] .
* Buchalla and Buras [1998] G. Buchalla and A. J. Buras, Two loop large $m_{t}$ electroweak corrections to $K\to\pi\nu\bar{\nu}$ for arbitrary Higgs boson mass, Phys. Rev. D 57, 216 (1998), arXiv:hep-ph/9707243 .
* Brod _et al._ [2011] J. Brod, M. Gorbahn, and E. Stamou, Two-Loop Electroweak Corrections for the $K\to\pi\nu\bar{\nu}$ Decays, Phys. Rev. D 83, 034030 (2011), arXiv:1009.0947 [hep-ph] .
* Buras _et al._ [2005] A. Buras, M. Gorbahn, U. Haisch, and U. Nierste, The rare decay $K^{+}\to\pi^{+}\nu\bar{\nu}$ at the next-to-next-to-leading order in QCD, Phys. Rev. Lett. 95, 261805 (2005), arXiv:hep-ph/0508165 .
* Buras _et al._ [2006] A. J. Buras, M. Gorbahn, U. Haisch, and U. Nierste, Charm quark contribution to $K^{+}\to\pi^{+}\nu\bar{\nu}$ at next-to-next-to-leading order, JHEP 11, 002, [Erratum: JHEP 11, 167 (2012)], arXiv:hep-ph/0603079 .
* Brod and Gorbahn [2008] J. Brod and M. Gorbahn, Electroweak corrections to the charm quark contribution to $K^{+}\to\pi^{+}\nu\bar{\nu}$, Phys. Rev. D 78, 034006 (2008), arXiv:0805.4119 [hep-ph] .
* Buras and Girrbach [2014] A. J. Buras and J. Girrbach, Towards the Identification of New Physics through Quark Flavour Violating Processes, Rept. Prog. Phys. 77, 086201 (2014), arXiv:1306.3775 [hep-ph] .
* Grossman and Nir [1997] Y. Grossman and Y. Nir, $K_{L}\to\pi^{0}\nu\bar{\nu}$ beyond the standard model, Phys. Lett. B 398, 163 (1997), arXiv:hep-ph/9701313 .
* Ahn _et al._ [2019] J. Ahn _et al._ (KOTO), Search for the $K_{L}\\!\to\\!\pi^{0}\nu\overline{\nu}$ and $K_{L}\\!\to\\!\pi^{0}X^{0}$ decays at the J-PARC KOTO experiment, Phys. Rev. Lett. 122, 021802 (2019), arXiv:1810.09655 [hep-ex] .
* Lindner _et al._ [2018] M. Lindner, M. Platscher, and F. S. Queiroz, A Call for New Physics : The Muon Anomalous Magnetic Moment and Lepton Flavor Violation, Phys. Rept. 731, 1 (2018), arXiv:1610.06587 [hep-ph] .
* Santos and Vasconcelos [2018] A. C. O. Santos and P. Vasconcelos, Lower Mass Bound on the $W^{\prime}$ mass via Neutrinoless Double Beta Decay in a 3-3-1 Model, Adv. High Energy Phys. 2018, 9132381 (2018), arXiv:1708.03955 [hep-ph] .
* Thamm _et al._ [2015] A. Thamm, R. Torre, and A. Wulzer, Future tests of Higgs compositeness: direct vs indirect, JHEP 07, 100, arXiv:1502.01701 [hep-ph] .
Tesaro, 1000 Winter Street, Waltham, MA 02451, USA
E-mail: <EMAIL_ADDRESS>
# Sample size calculation for the Andersen-Gill model comparing rates of
recurrent events
Yongqiang Tang Ronan Fitzpatrick Tesaro, 1000 Winter Street, Waltham, MA
02451, USA
Statistical Solutions Ltd., 4500 Avenue 4000, Cork Airport Business Park,
Cork, T12 NX7D, Ireland
(28 November 2018; 11 June 2019; 25 June 2019)
###### Abstract
Recurrent events arise frequently in biomedical research, where a subject may experience the same type of event more than once. The Andersen-Gill (AG) model has become increasingly popular in the analysis of recurrent events, particularly when the event rate is not constant over time. We propose a procedure for calculating the power and sample size for the robust Wald test from the AG model in superiority, noninferiority and equivalence clinical trials. Its performance is demonstrated by numerical examples. Sample SAS code is provided in the supplementary material.
###### keywords:
Mixed Poisson process; Noninferiority and equivalence trials; Overdispersion;
Proportional rates/means model; Sandwich variance
††articletype: Research Article
## 1 Introduction
Recurrent events are frequently encountered in biomedical research, where the
subject may experience the same type of events more than once. Examples
include attacks in hereditary angioedema, exacerbations in chronic obstructive
pulmonary disease, bleeds in hemophilia, relapses in multiple sclerosis, and
infections in chronic granulomatous disease (CGD). In clinical trials, the
recurrent events are commonly analyzed by the negative binomial (NB)
regression 1, 2, 3. The NB regression assumes constant event rates over time,
which may fail to hold in some applications 4, 5, 6. The Andersen-Gill (AG)
model 7 provides a popular alternative tool for the analysis of recurrent
events, and it allows arbitrary event rate functions. The AG model often
yields similar treatment effect estimates (i.e. ratio of event rates between
groups) to the NB regression in empirical studies when the event rate is
roughly constant over time 1.
††The paper was published in Statistics in Medicine 2019 (Volume 38, Issue 24,
Pages 4819 - 4827). There was an error in Equation A4 in the appendix. It does
not affect design 1, but appears to slightly overestimate the sample size for
design 2 with staggered entry. The result becomes better after the correction
in the sense that the nominal power generally becomes closer to the simulated
power for design 2. The corrected contents were highlighted in red.
Sample size calculation is critical in designing a clinical trial to ensure sufficient power to detect an important treatment effect. Sample size methodology has been well developed for the NB regression; see Tang 8 and references therein. Matsui 4 and Song et al 9 derive sample size formulae for the robust log-rank test 10, which is a nonparametric test suitable only for superiority trials. In this paper, we propose a power and sample size calculation procedure for the robust Wald test from the AG model 11. It is applicable to superiority, noninferiority (NI) and equivalence trials. Two designs are considered. In one design, the planned treatment duration is the same for all subjects. In the other design, subjects are enrolled at different calendar times, but administratively censored at the same calendar time. We introduce the sample size procedure in Section 2, and assess its performance numerically in Section 3.
## 2 Power and sample size formulae
Andersen and Gill 7 provide a simple extension of the Cox proportional hazards model to the analysis of recurrent events. Suppose $n$ subjects are randomized to either the active ($x_{i}=1$) or control ($x_{i}=0$) treatment in a clinical trial. Let $T_{i}$ be the follow-up time for subject $i$, $Y_{i}(t)=I(T_{i}\geq t)$ the indicator that subject $i$ is still under observation at time $t$, and $N_{i}(t)$ the number of events experienced by subject $i$ by time $t$. Inference for the event rate ratio $\exp(\beta)$ between treatment groups is based on the partial likelihood
$PL(\beta)=\prod_{i=1}^{n}\prod_{\{t:Y_{i}(t)=1\}}\left[\frac{\exp(\beta x_{i})}{\sum_{j=1}^{n}Y_{j}(t)\exp(\beta x_{j})}\right]^{dN_{i}(t)}.$ (1)
An attractive feature of the AG model is that the baseline event rate function
can be of arbitrary shape. We assume a constant event rate ratio over time,
but the AG model can handle time-varying treatment effects.
To obtain the maximum likelihood estimate (MLE) $\hat{\beta}$, we solve the score equation
$U(\beta)=\frac{\partial\log[PL(\beta)]}{\partial\beta}=\sum_{i=1}^{n}\int_{0}^{\tau}[x_{i}-\bar{x}(\beta,t)]dN_{i}(t)=0,$
where $S^{(k)}(\beta,t)=n^{-1}\sum_{i=1}^{n}Y_{i}(t)x_{i}^{k}\exp(\beta x_{i})$, $\bar{x}(\beta,t)=\frac{S^{(1)}(\beta,t)}{S^{(0)}(\beta,t)}$, and $\tau$ is the maximum treatment duration in the trial.
If all covariates are time invariant (covariates measured after randomization are rarely used to assess the treatment effect in clinical trials since they may be affected by the treatment), the AG model assumes that the increments of the event process are independent, as in a Poisson process, whereas in practice the recurrent events are generally dependent within a subject 1. The Poisson-type assumption can be relaxed by using the sandwich variance estimator, and the validity of this robust approach is justified by Lin et al 11 for arbitrary dependence structures among recurrent events provided the proportional rate or mean assumption is met. For this reason, the robust approach is also called the proportional rates/means model. The sandwich variance estimate 11 for $\hat{\beta}$ is $n^{-1}\hat{V}_{\beta}=n^{-1}\hat{I}_{\beta}^{-1}\hat{\Sigma}_{\beta}\hat{I}_{\beta}^{-1}$, where $\hat{\Lambda}_{0}(t)=\sum_{i=1}^{n}\int_{0}^{t}[nS^{(0)}(\hat{\beta},s)]^{-1}dN_{i}(s)$, $d\hat{M}_{i}(t)=dN_{i}(t)-Y_{i}(t)\exp(\hat{\beta}x_{i})d\hat{\Lambda}_{0}(t)$, $\hat{U}_{i}=\int_{0}^{\tau}[x_{i}-\bar{x}(\hat{\beta},t)]d\hat{M}_{i}(t)$, $\hat{\Sigma}_{\beta}=n^{-1}\sum_{i=1}^{n}\hat{U}_{i}^{2}$, and $\hat{I}_{\beta}=n^{-1}\sum_{i=1}^{n}\int_{0}^{\tau}[x_{i}-\bar{x}(\hat{\beta},t)]^{2}Y_{i}(t)\exp(\hat{\beta}x_{i})d\hat{\Lambda}_{0}(t)=n^{-1}\sum_{i=1}^{n}\int_{0}^{\tau}[\bar{x}(\hat{\beta},t)-\bar{x}^{2}(\hat{\beta},t)]dN_{i}(t)$.
The two-sided $100(1-\alpha)\%$ confidence interval (CI) for $\beta$ is
$[c_{l},c_{u}]=[\hat{\beta}-z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}},\hat{\beta}+z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}],$
where $z_{p}$ is the $p$-th percentile of the standard normal distribution
$N(0,1)$.
In the sample size determination, we assume a mixed Poisson process (MPP)
model 4, 9, 12 for the event process. Let
$\Lambda_{g}(t)=\text{E}[N_{i}(t)|x_{i}=g]$ be the mean event function for
group $g$. The MPP introduces a random effect $\epsilon_{i}$ with mean $1$ and
variance $\kappa_{g}$ for each subject. Given $\epsilon_{i}$, the subject in
group $g$ follows a Poisson process with mean function
$\epsilon_{i}\Lambda_{g}(t)$. Subjects with $\epsilon_{i}>1$
($\epsilon_{i}<1$) tend to experience more (less) events than the average in
the population. The dispersion parameter $\kappa_{g}$ measures the between-
subject heterogeneity. Inclusion of important risk factors in the model may
reduce heterogeneity 12. The MPP provides a natural way to handle
overdispersion in recurrent events in that the variance of $N_{i}(t)$ is
larger than its mean 12. The mixing distribution for the random effect
$\epsilon_{i}$ is unspecified in the AG model. The NB regression uses a gamma
mixing distribution, and the event count $N_{i}(t)$ follows the NB
distribution 3.
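As a small illustration (our own sketch, not part of the paper's derivations), the following Python snippet simulates counts from an MPP with a gamma mixing distribution; by the law of total variance the count variance is $\Lambda_{g}(t)+\kappa_{g}\Lambda_{g}^{2}(t)$, which the simulation recovers:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mpp_counts(n, Lambda_t, kappa):
    """Event counts N_i(t) from a mixed Poisson process: given a gamma frailty
    eps_i with mean 1 and variance kappa, N_i(t) | eps_i ~ Poisson(eps_i * Lambda_t)."""
    eps = rng.gamma(shape=1.0 / kappa, scale=kappa, size=n)
    return rng.poisson(eps * Lambda_t)

N = simulate_mpp_counts(n=200_000, Lambda_t=1.3, kappa=0.8)
print(N.mean())  # ~ 1.3  (= Lambda_t)
print(N.var())   # ~ 1.3 + 0.8 * 1.3**2 = 2.652  (overdispersion)
```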
In Appendix A.1, we show that $\hat{V}_{\beta}$ converges in probability to
$V_{\beta}$
$V_{\beta}=\frac{p_{1}[A_{1}+\kappa_{1}B_{1}]+p_{0}[A_{0}+\kappa_{0}B_{0}]}{\left(\int_{0}^{\tau}\frac{[p_{0}\pi_{0}(t)]\,[p_{1}\pi_{1}(t)\exp(\beta)]}{p_{0}\pi_{0}(t)+p_{1}\pi_{1}(t)\exp(\beta)}d\Lambda_{0}\right)^{2}},$
(2)
where $p_{g}$ is the proportion of subjects randomized to treatment group $g$,
$\pi_{g}(t)$ is the probability that a subject in group $g$ remains in the
study at time $t$,
$\omega_{0}(t)=\frac{p_{1}\pi_{1}(t)\exp(\beta)}{p_{1}\pi_{1}(t)\exp(\beta)+p_{0}\pi_{0}(t)}$,
$\omega_{1}(t)=1-\omega_{0}(t)$,
$A_{g}=\int_{t=0}^{\tau}\omega_{g}^{2}(t)\pi_{g}(t)d\Lambda_{g}(t)$, and
$B_{g}=2\int_{t=0}^{\tau}\left[\int_{s=0}^{t}\omega_{g}(s)d\Lambda_{g}(s)\right]\pi_{g}(t)\omega_{g}(t)d\Lambda_{g}(t)$.
We allow the loss to follow-up distribution $G_{g}(t)=1-\pi_{g}(t)$ and the
dispersion parameter $\kappa_{g}$ to differ between the two treatment groups.
At the design stage, it is often reasonable to assume the same dropout
distribution in the two treatment groups (i.e. $\pi_{1}(t)=\pi_{0}(t)$ for all
$t$), and $V_{\beta}$ reduces to
$V_{\beta}=\frac{1}{p_{1}E_{1}}+\frac{1}{p_{0}E_{0}}+2\left(\frac{\kappa_{1}}{p_{1}}\frac{F_{1}}{E_{1}^{2}}+\frac{\kappa_{0}}{p_{0}}\frac{F_{0}}{E_{0}^{2}}\right)=\left[\frac{1}{p_{1}\exp(\beta)}+\frac{1}{p_{0}}\right]\frac{1}{E_{0}}+\left[\frac{\kappa_{1}}{p_{1}}+\frac{\kappa_{0}}{p_{0}}\right]\frac{2F_{0}}{E_{0}^{2}},$
(3)
where $E_{g}=\int\pi_{g}(t)d\Lambda_{g}(t)$ and
$F_{g}=\int_{t=0}^{\tau}\pi_{g}(t)\Lambda_{g}(t)d\Lambda_{g}(t)$. In general,
formula (2) can be well approximated by the middle expression in formula (3) (the term between the two equal signs) even when the dropout distribution differs between the two groups.
In Appendix A.2, we provide analytic expressions of $E_{g}$ and $F_{g}$ for
the Weibull and piecewise constant event rate functions when the dropout
pattern is identical in the two groups in two types of clinical trial designs.
In practical applications, almost any event rate function can be approximated
reasonably well by the piecewise constant function.
### 2.1 Superiority and NI trials
Suppose a lower event rate is desirable. In both superiority and NI trials,
the hypothesis can be written as
$H_{0}:\exp(\beta)\geq M_{0}\text{ or }\beta\geq\log(M_{0})\text{ \it versus
}H_{1}:\exp(\beta)<M_{0}\text{ or }\beta<\log(M_{0}).$ (4)
In a superiority trial, the objective is to demonstrate that the experimental treatment can lower the event rate, and we set $M_{0}=1$. The NI trial aims to show that the experimental treatment is not worse than the standard control treatment by more than $M_{0}$, where $M_{0}>1$ is the prespecified NI margin on the rate ratio.
The power for test (4) is given by
$\Pr\left(c_{u}<\log(M_{0})\right)=\Pr\left[Z<\frac{-z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}-\beta+\log(M_{0})}{\sqrt{n^{-1}V_{\beta}}}\right]\approx\Phi\left[\frac{\sqrt{n}|\log(M_{0})-\beta|}{\sqrt{V_{\beta}}}-z_{1-\alpha/2}\right],$ (5)
where $Z=(\hat{\beta}-\beta)/\sqrt{n^{-1}V_{\beta}}$ is asymptotically
distributed as $N(0,1)$. The required sample size is
$n=\frac{(z_{1-\alpha/2}+z_{P})^{2}V_{\beta}}{[\log(M_{0})-\beta]^{2}}.$ (6)
As mentioned in Tang 3, Equation (6) is identical to the upper size bound of
Tang 3, 13 for the NB regression (the dispersion parameter may differ between
the two groups in Tang 13) under the assumption of constant event rates if the
dropout pattern is the same in the two groups since
$F_{0}=\lambda_{0}^{2}\text{E}(T_{i}^{2})/2$,
$E_{0}=\lambda_{0}\text{E}(T_{i})$, and
$V_{\beta}=\left[\frac{1}{p_{1}\exp(\beta)}+\frac{1}{p_{0}}\right]\frac{1}{\lambda_{0}\text{E}(T_{i})}+\left[\frac{\kappa_{1}}{p_{1}}+\frac{\kappa_{0}}{p_{0}}\right]\frac{\text{E}(T_{i}^{2})}{\text{E}^{2}(T_{i})}.$
In this special situation, the AG model is almost as powerful as the NB
regression when the variation in the patients’ follow-up time $T_{i}$ is
small, and the two models yield the same power if all subjects have the same
follow-up time $T_{1}=\ldots=T_{n}$. However, the AG model does not require
specifying the mixing distribution.
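As a minimal sketch of Eq. (6) in this constant-rate special case (our own illustration, with hypothetical inputs), the following Python function evaluates $V_{\beta}$ from the expression above and returns the required total sample size:

```python
from math import log, exp, ceil
from scipy.stats import norm

def n_superiority(beta, lam0, ET, ET2, kappa1, kappa0,
                  p1=0.5, M0=1.0, power=0.9, alpha=0.05):
    """Total sample size from Eq. (6), with V_beta evaluated under a constant
    control event rate lam0 and follow-up moments ET = E(T_i), ET2 = E(T_i^2)."""
    p0 = 1 - p1
    V = ((1 / (p1 * exp(beta)) + 1 / p0) / (lam0 * ET)
         + (kappa1 / p1 + kappa0 / p0) * ET2 / ET ** 2)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 * V / (log(M0) - beta) ** 2)

# Hypothetical inputs: rate ratio 0.6, control rate 1.2/year, common 1-year
# follow-up (so ET2 = ET^2 = 1), kappa = 0.8 in both arms:
print(n_superiority(beta=log(0.6), lam0=1.2, ET=1.0, ET2=1.0,
                    kappa1=0.8, kappa0=0.8))   # -> 308
```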
The NI test is one-sided, and the actual type I error is $\alpha/2$. In
superiority trials, a two-sided test (i.e $H_{0}:\exp(\beta)=1$ vs
$H_{1}:\exp(\beta)\neq 1$) is often used in practice. Formulae (5) and (6) can
be used for the two-sided test since there is little chance that the observed
outcomes will be significantly better in the control group than in the
experimental group if the experimental treatment is truly more effective than
the control treatment 14. The power and sample size formulae (5) and (6)
remain the same if higher event rates indicate better health ($M_{0}\leq 1$)
and the experimental treatment is truly superior or clinically noninferior to
the control treatment in improving the event rate.
### 2.2 Equivalence trials
In an equivalence trial, the objective is to demonstrate that the experimental
treatment is neither superior nor inferior to the standard control treatment.
If the $100(1-\alpha)\%$ CI for $\exp(\beta)$ lies completely within the
interval $[M_{l},M_{u}]$, we can claim clinical equivalence of the two
treatments, where $M_{l}<1$ and $M_{u}>1$ are the prespecified margins. The
hypothesis is
$H_{0}:\exp(\beta)\geq M_{u}\text{ or }\exp(\beta)\leq M_{l}\text{ \it versus
}H_{1}:M_{l}<\exp(\beta)<M_{u}.$
The equivalence test can be viewed as two one-sided tests, and the type I error is $\alpha/2$. The power is given by
$\displaystyle\begin{aligned} P&=\Pr\left(\hat{\beta}+z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}<\log(M_{u})\text{ and }\hat{\beta}-z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}>\log(M_{l})\right)\\ &\approx\Phi\left(\frac{\sqrt{n}[\log(M_{u})-\beta]}{\sqrt{V_{\beta}}}-z_{1-\alpha/2}\right)-\Phi\left(\frac{\sqrt{n}[\log(M_{l})-\beta]}{\sqrt{V_{\beta}}}+z_{1-\alpha/2}\right).\end{aligned}$ (7)
Formula (7) implicitly assumes that $z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}+\log(M_{l})<\log(M_{u})-z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}$, i.e. $2z_{1-\alpha/2}\sqrt{n^{-1}\hat{V}_{\beta}}<\log(M_{u}/M_{l})$, which fails to hold with positive probability. Formula (7) works well in large samples or when the estimated power is large, but generally underestimates the power in small samples. The argument is the same as that for continuous outcomes 15, 14.
The required sample size can be obtained by numerical inversion of the power
formula (7). In the special case when
$\Delta=\log(M_{u})-\beta=\beta-\log(M_{l})=\log(M_{u}/M_{l})/2$, the sample
size is given by
$n=\frac{(z_{1-\alpha/2}+z_{(1+P)/2})^{2}V_{\beta}}{\Delta^{2}}.$ (8)
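A small sketch of the numerical inversion (our own illustration with a hypothetical $V_{\beta}$): in the symmetric case $\beta=0$ and $M_{u}=1/M_{l}$, inverting the power formula (7) and applying the closed form (8) should return essentially the same sample size.

```python
from math import log, sqrt, ceil
from scipy.stats import norm

def equivalence_power(n, beta, V, Ml, Mu, alpha=0.05):
    """Approximate power of the equivalence test, Eq. (7)."""
    z = norm.ppf(1 - alpha / 2)
    return (norm.cdf(sqrt(n) * (log(Mu) - beta) / sqrt(V) - z)
            - norm.cdf(sqrt(n) * (log(Ml) - beta) / sqrt(V) + z))

def equivalence_n(beta, V, Ml, Mu, power=0.8, alpha=0.05):
    """Numerically invert Eq. (7) for the required sample size."""
    n = 2
    while equivalence_power(n, beta, V, Ml, Mu, alpha) < power:
        n += 1
    return n

V = 7.5                      # hypothetical variance V_beta
Delta = log(1 / 0.75)        # symmetric margins (0.75, 1/0.75), beta = 0
n8 = ceil((norm.ppf(0.975) + norm.ppf(0.9)) ** 2 * V / Delta ** 2)  # Eq. (8)
print(n8, equivalence_n(0.0, V, 0.75, 1 / 0.75))  # both 953
```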
## 3 Numerical examples
### 3.1 Example 1
We illustrate the sample size calculation for superiority trials through the analysis of a CGD trial 11, 4. CGD is a rare immune system disorder characterized by recurrent pyogenic infections. A total of $128$ patients were randomized to gamma interferon or placebo. The trial was terminated early for efficacy on the basis of an interim analysis of the time to the first infection. In the trial, $14$ ($22.2\%$) out of $63$ treated patients and $30$ ($46.2\%$) out of $65$ patients on placebo had at least one infection. Furthermore, $9$ placebo patients and $4$ treated patients experienced at least $2$ infections. One objective is to estimate the infection rate ratio between the two treatments. The NB regression gives an estimate of $0.3566$ ($95\%$ CI: $[0.1934,0.6575]$), while the AG model yields an estimate of $0.3338$ ($95\%$ CI: $[0.1814,0.6143]$). As evidenced by the exploratory analysis of Matsui 4, the rate of infections may not be constant over time. For this reason, the AG model is more appropriate for analyzing the CGD trial since it allows an arbitrary event rate function.
Suppose we want to design a new trial to assess the effect of a new experimental product on the infection rate. We assume the event rate function is of Weibull form $\lambda_{0}(t)=\psi\nu t^{\nu-1}$ in the placebo arm, and that the event rate ratio between the two treatments is constant over time, $\lambda_{1}(t)/\lambda_{0}(t)=\exp(\beta)=0.6$. We get the MLE $(\hat{\psi},\hat{\nu},\hat{\kappa})=(1.097^{1.221},1.221,0.871)$ by fitting an NB process model 12 to the data using the SAS NLMIXED procedure on the basis of the likelihood function given in Equation (20) of Dean and Balshaw 16. Matsui 4 obtained similar point estimates based on the generalized estimating equations (GEE) for the MPP 16.
To determine the sample size, we assume a common dispersion parameter and
identical dropout pattern in the two groups. We set $\psi=1.1$, $\nu=1.2$,
$\kappa=0.8$, which are close to the MLE. The treatment allocation ratio is
$p_{1}:p_{0}=1:1$ or $2:1$. We also perform sensitivity analyses to calculate
the sample sizes at alternative parameter values $\kappa=0.4,1.2$, $\psi=1.5$,
$\nu=0.9$. Both design $1$ (planned treatment duration $\tau_{c}=1$ year for
all patients) and design $2$ (accrual period $\tau_{a}=0.5$ year, additional
treatment duration $\tau_{c}=1$ year, constant enrollment rate $\eta=0$) are
considered (please refer to Appendix A.2 for details). In both designs, the
loss to follow-up distribution is exponential with mean $1/\delta=4$ years.
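As an illustration of how these ingredients combine (a sketch we add here, not the authors' SAS code), the following Python snippet evaluates $E_{0}$ and $F_{0}$ by numerical integration for design 1 and reproduces the balanced design-1 entry of Table 1 for $(\kappa,\psi,\nu)=(0.8,1.1,1.2)$:

```python
from math import exp, log, ceil
from scipy.integrate import quad
from scipy.stats import norm

# Design 1: Weibull control rate lam0(t) = psi*nu*t**(nu-1), exponential loss
# to follow-up surv(t) = exp(-delta*t), planned treatment duration tau_c = 1.
psi, nu, kappa, delta, tau = 1.1, 1.2, 0.8, 0.25, 1.0
beta = log(0.6)
p1 = p0 = 0.5

lam0 = lambda t: psi * nu * t ** (nu - 1)   # event rate function
Lam0 = lambda t: psi * t ** nu              # mean event function
surv = lambda t: exp(-delta * t)            # pi_g(t), still-in-study probability

E0 = quad(lambda t: surv(t) * lam0(t), 0, tau)[0]
F0 = quad(lambda t: surv(t) * Lam0(t) * lam0(t), 0, tau)[0]

# V_beta from Eq. (3), then the sample size from Eq. (6) with M0 = 1:
V = (1 / (p1 * exp(beta)) + 1 / p0) / E0 + (kappa / p1 + kappa / p0) * 2 * F0 / E0 ** 2
z = norm.ppf(0.975) + norm.ppf(0.90)
print(ceil(z ** 2 * V / beta ** 2))   # 365: the design-1 balanced entry of Table 1
```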
Table 1 reports the sample size and power estimates at the target $90\%$ power
and one-sided type I error $\alpha/2=0.025$. The empirical power is evaluated
based on $40,000$ trials. The data are simulated using Algorithm $2$ of Tang
12 and analyzed using the SAS PHREG procedure. There is more than $95\%$
chance that the simulated power lies within $2\sqrt{0.9*0.1/40000}=0.3\%$ of the true power. In design $1$, the simulated power is within $1\%$ of the nominal power in nearly all cases. The performance slightly deteriorates in design $2$, possibly because of the larger variation in the follow-up time and the higher overall dropout rate.
$\kappa$ | $\psi$ | $\nu$ | Design 1 balanced: total size | nominal power (%) | SIM power (%) | Design 1 unbalanced: total size | nominal power (%) | SIM power (%) | Design 2 balanced: total size | nominal power (%) | SIM power (%) | Design 2 unbalanced: total size | nominal power (%) | SIM power (%)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$0.4$ | $1.1$ | $0.9$ | $289$ | $90.05$ | $91.03$ | $304$ | $90.00$ | $89.05$ | ${\color[rgb]{1,0,0}256}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}90.67}$ | ${\color[rgb]{1,0,0}271}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}89.38}$
| | $1.2$ | $294$ | $90.01$ | $90.68$ | $310$ | $90.03$ | $89.33$ | ${\color[rgb]{1,0,0}251}$ | ${\color[rgb]{1,0,0}90.10}$ | ${\color[rgb]{1,0,0}90.75}$ | ${\color[rgb]{1,0,0}265}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}89.34}$
| $1.5$ | $0.9$ | $231$ | $90.12$ | $90.78$ | $244$ | $90.03$ | $89.13$ | ${\color[rgb]{1,0,0}207}$ | ${\color[rgb]{1,0,0}90.05}$ | ${\color[rgb]{1,0,0}90.63}$ | ${\color[rgb]{1,0,0}220}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}89.11}$
| | $1.2$ | $235$ | $90.07$ | $90.45$ | $249$ | $90.07$ | $88.90$ | ${\color[rgb]{1,0,0}204}$ | ${\color[rgb]{1,0,0}90.13}$ | ${\color[rgb]{1,0,0}90.54}$ | ${\color[rgb]{1,0,0}217}$ | ${\color[rgb]{1,0,0}90.09}$ | ${\color[rgb]{1,0,0}89.30}$
$0.8$ | $1.1$ | $0.9$ | $358$ | $90.02$ | $90.77$ | $382$ | $90.01$ | $89.15$ | ${\color[rgb]{1,0,0}328}$ | ${\color[rgb]{1,0,0}90.08}$ | ${\color[rgb]{1,0,0}90.67}$ | ${\color[rgb]{1,0,0}351}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}89.35}$
| | $1.2$ | $365$ | $90.03$ | $90.77$ | $390$ | $90.05$ | $89.23$ | ${\color[rgb]{1,0,0}324}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}90.48}$ | ${\color[rgb]{1,0,0}348}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}89.34}$
| $1.5$ | $0.9$ | $300$ | $90.07$ | $90.39$ | $322$ | $90.03$ | $89.05$ | ${\color[rgb]{1,0,0}278}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}90.52}$ | ${\color[rgb]{1,0,0}300}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}89.59}$
| | $1.2$ | $306$ | $90.08$ | $90.74$ | $328$ | $90.01$ | $89.34$ | ${\color[rgb]{1,0,0}277}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}90.60}$ | ${\color[rgb]{1,0,0}300}$ | ${\color[rgb]{1,0,0}90.09}$ | ${\color[rgb]{1,0,0}89.42}$
$1.2$ | $1.1$ | $0.9$ | $428$ | $90.06$ | $90.74$ | $460$ | $90.01$ | $89.49$ | ${\color[rgb]{1,0,0}399}$ | ${\color[rgb]{1,0,0}90.05}$ | ${\color[rgb]{1,0,0}90.38}$ | ${\color[rgb]{1,0,0}431}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}88.97}$
| | $1.2$ | $436$ | $90.04$ | $90.59$ | $469$ | $90.01$ | $89.55$ | ${\color[rgb]{1,0,0}398}$ | ${\color[rgb]{1,0,0}90.06}$ | ${\color[rgb]{1,0,0}90.24}$ | ${\color[rgb]{1,0,0}431}$ | ${\color[rgb]{1,0,0}90.05}$ | ${\color[rgb]{1,0,0}89.38}$
| $1.5$ | $0.9$ | $369$ | $90.03$ | $90.13$ | $400$ | $90.03$ | $89.17$ | ${\color[rgb]{1,0,0}349}$ | ${\color[rgb]{1,0,0}90.00}$ | ${\color[rgb]{1,0,0}90.56}$ | ${\color[rgb]{1,0,0}380}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}89.51}$
| | $1.2$ | $376$ | $90.02$ | $90.24$ | $408$ | $90.04$ | $89.28$ | ${\color[rgb]{1,0,0}351}$ | ${\color[rgb]{1,0,0}90.07}$ | ${\color[rgb]{1,0,0}90.32}$ | ${\color[rgb]{1,0,0}382}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}89.31}$
Table 1: Estimated sample size at the nominal $90\%$ power and simulated power (SIM) at the calculated size in designing a new CGD superiority trial
[1] SIM is evaluated using $40,000$ simulated trials.
[2] Losses to follow-up are exponentially distributed with mean $1/\delta=4$ years (annual dropout rate $22.1\%$) in both arms.
| | Unequal dispersion(a) | Unequal dropout(b)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\psi$ | $\nu$ | $\kappa_{0}$ | $\kappa_{1}$ | design 1 total | nominal (%) | SIM (%) | design 2 total | nominal (%) | SIM (%) | $\kappa_{0}$ | $\kappa_{1}$ | design 1 total | nominal (%) | SIM (%) | design 2 total | nominal (%) | SIM (%)
1.1 | 0.9 | 0.4 | 0.8 | $324$ | $90.08$ | $91.12$ | ${\color[rgb]{1,0,0}292}$ | ${\color[rgb]{1,0,0}90.05}$ | ${\color[rgb]{1,0,0}90.95}$ | 0.4 | 0.4 | $287$ | $90.06$ | $90.89$ | ${\color[rgb]{1,0,0}254}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}90.57}$
| | 0.4 | 1.2 | $358$ | $90.02$ | $91.07$ | ${\color[rgb]{1,0,0}328}$ | ${\color[rgb]{1,0,0}90.08}$ | ${\color[rgb]{1,0,0}91.10}$ | 0.8 | 0.8 | $356$ | $90.02$ | $90.55$ | ${\color[rgb]{1,0,0}326}$ | ${\color[rgb]{1,0,0}90.07}$ | ${\color[rgb]{1,0,0}90.65}$
| | 0.8 | 1.2 | $393$ | $90.04$ | $90.84$ | ${\color[rgb]{1,0,0}363}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}90.80}$ | 1.2 | 1.2 | $426$ | $90.06$ | $90.46$ | ${\color[rgb]{1,0,0}397}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}90.26}$
1.1 | 1.2 | 0.4 | 0.8 | $330$ | $90.06$ | $90.79$ | ${\color[rgb]{1,0,0}287}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}90.69}$ | 0.4 | 0.4 | $292$ | $90.05$ | $90.37$ | ${\color[rgb]{1,0,0}248}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}90.36}$
| | 0.4 | 1.2 | $365$ | $90.03$ | $91.02$ | ${\color[rgb]{1,0,0}324}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}90.99}$ | 0.8 | 0.8 | $363$ | $90.06$ | $90.58$ | ${\color[rgb]{1,0,0}322}$ | ${\color[rgb]{1,0,0}90.03}$ | ${\color[rgb]{1,0,0}90.66}$
| | 0.8 | 1.2 | $400$ | $90.00$ | $90.75$ | ${\color[rgb]{1,0,0}361}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}90.69}$ | 1.2 | 1.2 | $434$ | $90.06$ | $90.29$ | ${\color[rgb]{1,0,0}396}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}90.33}$
1.5 | 0.9 | 0.4 | 0.8 | $265$ | $90.04$ | $90.82$ | ${\color[rgb]{1,0,0}243}$ | ${\color[rgb]{1,0,0}90.09}$ | ${\color[rgb]{1,0,0}90.94}$ | 0.4 | 0.4 | $229$ | $90.06$ | $90.55$ | ${\color[rgb]{1,0,0}206}$ | ${\color[rgb]{1,0,0}90.11}$ | ${\color[rgb]{1,0,0}90.59}$
| | 0.4 | 1.2 | $300$ | $90.07$ | $91.26$ | ${\color[rgb]{1,0,0}278}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}91.12}$ | 0.8 | 0.8 | $298$ | $90.01$ | $90.36$ | ${\color[rgb]{1,0,0}277}$ | ${\color[rgb]{1,0,0}90.05}$ | ${\color[rgb]{1,0,0}90.47}$
| | 0.8 | 1.2 | $334$ | $90.00$ | $90.70$ | ${\color[rgb]{1,0,0}314}$ | ${\color[rgb]{1,0,0}90.06}$ | ${\color[rgb]{1,0,0}90.89}$ | 1.2 | 1.2 | $368$ | $90.06$ | $90.33$ | ${\color[rgb]{1,0,0}348}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}90.26}$
1.5 | 1.2 | 0.4 | 0.8 | $270$ | $90.03$ | $90.63$ | ${\color[rgb]{1,0,0}240}$ | ${\color[rgb]{1,0,0}90.02}$ | ${\color[rgb]{1,0,0}90.62}$ | 0.4 | 0.4 | $233$ | $90.05$ | $90.51$ | ${\color[rgb]{1,0,0}202}$ | ${\color[rgb]{1,0,0}90.08}$ | ${\color[rgb]{1,0,0}90.07}$
| | 0.4 | 1.2 | $306$ | $90.08$ | $91.13$ | ${\color[rgb]{1,0,0}277}$ | ${\color[rgb]{1,0,0}90.04}$ | ${\color[rgb]{1,0,0}90.72}$ | 0.8 | 0.8 | $304$ | $90.05$ | $90.63$ | ${\color[rgb]{1,0,0}276}$ | ${\color[rgb]{1,0,0}90.08}$ | ${\color[rgb]{1,0,0}90.48}$
| | 0.8 | 1.2 | $341$ | $90.05$ | $90.67$ | ${\color[rgb]{1,0,0}314}$ | ${\color[rgb]{1,0,0}90.06}$ | ${\color[rgb]{1,0,0}90.79}$ | 1.2 | 1.2 | $375$ | $90.06$ | $90.37$ | ${\color[rgb]{1,0,0}349}$ | ${\color[rgb]{1,0,0}90.01}$ | ${\color[rgb]{1,0,0}90.45}$
Table 2: Estimated sample size and simulated power (SIM) at the nominal $90\%$
power in the presence of unequal dispersion or unequal dropout
[1] SIM is evaluated using $10,000$ simulated trials.
[2] The treatment allocation ratio is $1:1$.
(a) Losses to follow-up are exponentially distributed with mean $1/\delta=4$
years (annual dropout rate $22.1\%$) in both arms.
(b) Losses to follow-up are exponentially distributed with $\delta_{1}=0.15$
and $\delta_{0}=0.35$ (annual dropout rates $13.9\%$ and $29.5\%$) in the two
arms.
[3] The sample size and nominal power estimates are updated for design 2 with
staggered entry. The simulated power may differ from the previously reported
values after re-running the simulation for design 1.
### 3.2 Example 2
We conduct simulations to assess the performance of the proposed method in the
presence of unequal dispersion or differential dropout. Two scenarios are
considered. In the first, the dispersion parameters differ between the two
groups. In the second, the two groups have different loss-to-follow-up
distributions. The setup is otherwise similar to that of Example 1. The
parameter values and simulation results are presented in Table 2. The
performance of the power and sample size method is almost as good as that in
Example 1.
### 3.3 Example 3
Simulations are conducted to assess the proposed sample size method for NI and
equivalence trials. For illustration purposes, we assume a piecewise constant
event rate function for the control arm $\lambda_{0}(t)=1.0I(0\leq
t<0.4)+1.25I(0.4\leq t<0.8)+1.5I(0.8\leq t\leq 1)$, the event rate ratio
between the active and control arm is
$\exp(\beta)=\lambda_{1}(t)/\lambda_{0}(t)=0.9$ or $1.0$, and the dispersion
parameter is $\kappa=0.8$ or $1.2$. Only design $1$ is considered, and the
planned treatment duration is $\tau_{c}=1$ year for all patients. The
treatment allocation ratio is $1:1$. The loss to follow-up is exponentially
distributed with mean $1/\delta=4$ years. The margin is $M_{0}=1.25$ in the NI
trials, and $(M_{l},M_{u})=(0.75,1.25)$ in the equivalence trials.
Table 3 reports the sample size and power estimates at the target $80\%$ power
and one-sided type I error $\alpha/2=0.025$. The empirical power is evaluated
based on $10,000$ simulated trials. There is a more than $95\%$ chance that the
simulated power lies within $2\sqrt{0.8\times 0.2/10000}=0.8\%$ of the true power.
The simulated power at the calculated sample size is generally close to the
target $80\%$ power, indicating the accuracy of the proposed method.
NI trials (a) | Equivalence trials (b)
---|---
| | total | power ($\%$) | | | total | power ($\%$)
$\kappa$ | $\exp(\beta)$ | size | nominal | SIM | $\kappa$ | $\exp(\beta)$ | size | nominal | SIM
$0.8$ | $0.9$ | $547$ | $80.00$ | $80.26$ | $0.8$ | $0.9$ | $1781$ | $80.02$ | $79.49$
| $1.0$ | $1153$ | $80.03$ | $79.96$ | | $1.0$ | $1262$ | $80.02$ | $79.58$
$1.2$ | $0.9$ | $675$ | $80.04$ | $80.32$ | $1.2$ | $0.9$ | $2195$ | $80.01$ | $81.20$
| $1.0$ | $1429$ | $80.02$ | $80.31$ | | $1.0$ | $1564$ | $80.01$ | $80.47$
Table 3: Estimated sample size at the nominal $80\%$ power and simulated
power (SIM) at the calculated sample size based on $10,000$ NI or equivalence
trials
(a) NI margin is $M_{0}=1.25$
(b) Equivalence margin is $(M_{l},M_{u})=(0.75,1.25)$
## 4 Discussion
We derive the power and sample size formulae for comparing recurrent event
rates in superiority, NI and equivalence trials using the robust Wald test from
the AG model. The method allows the dispersion parameter, dropout rate, and/or
sample size to differ between treatment groups. Numerical examples demonstrate
the accuracy of the proposed method in moderate-to-large samples. It is
nevertheless recommended to run simulation studies to verify the power
calculation, particularly when the sample size is relatively small.
We calculate the variance $V_{\beta}$ and the sample size at a given event rate
function, dispersion parameter and dropout rate. These parameters may be
estimated from historical trials using parametric methods. It is
straightforward to adjust the parameter values and conduct sensitivity analyses
to examine how the sample size estimates vary with them; see Example 1 for an
illustration. It is also possible to estimate $V_{\beta}$ from historical
trials by nonparametric methods [9]. However, the nonparametric approach may
require that the new trial be sufficiently similar to the historical trial in
terms of the study population, treatment duration, dropout rates, etc.
The robust AG approach has several limitations. First, the AG model uses a
common baseline hazard function for all events, and assumes that the risk of
an event is unaffected by any earlier events within the same subject. The AG
model is therefore not suitable if the occurrence of early events increases
the risk of subsequent ones [1, 17]. The AG model provides a convenient way to
estimate an overall treatment effect, but it is difficult to estimate the
event-specific treatment effect [1, 17], which is useful for studying whether
the treatment effect diminishes after patients experience one or more events.
Second, when the sample size is small, the sandwich variance estimator tends
to underestimate the true variance and to have large sampling variability,
leading to an inflated type I error rate [18, 19]. In the GEE methodology,
bias-corrected sandwich variance estimators have been proposed for small-sample
inference [18, 19, 20]. It is possible to extend the bias-correction method to
the analysis of recurrent events. An alternative strategy for the analysis of
small trials is to use the robust score test instead of the robust Wald test
[20].
## Appendix A Appendix: Technical details
### A.1 A brief proof of equations (2) and (3)
By Lin et al. [11], $\hat{V}_{\beta}$ is a consistent estimator of $V_{\beta}$,
$V_{\beta}=\frac{\text{E}[\text{E}(U_{i}^{2}|x_{i})]}{\text{E}^{2}(I_{\beta})}=\frac{p_{1}\Sigma_{1}+p_{0}\Sigma_{0}}{\left(\int_{0}^{\tau}\omega_{1}(t)\omega_{0}(t)[p_{1}\pi_{1}(t)d\Lambda_{1}+p_{0}\pi_{0}(t)d\Lambda_{0}]\right)^{2}},$
(9)
where $d\,M_{i}(t)=dN_{i}(t)-Y_{i}(t)\exp(\beta x_{i})d\Lambda_{0}(t)$,
$U_{i}=\int_{0}^{\tau}[x_{i}-\bar{x}(\beta,t)]d\,{M}_{i}(t)$,
$\Sigma_{g}=\text{E}(U_{i}^{2}|x_{i}=g)$, and
$I_{\beta}=n^{-1}\sum_{i=1}^{n}\int_{0}^{\tau}[\bar{x}(\beta,t)-\bar{x}^{2}(\beta,t)]dN_{i}(t)$.
By Lemma $1$ in the web-based supplementary material of Song et al. [9], we get
$\Sigma_{g}=\text{E}\left[\int_{0}^{\tau}(g-\bar{x}(\beta,t))dM_{i}(t)\int_{0}^{\tau}(g-\bar{x}(\beta,s))dM_{i}(s)\right]=A_{g}+\kappa
B_{g}$ (10)
for subjects in group $g$, where
$\displaystyle\begin{aligned}
A_{g}&=\text{E}\left[\int_{0}^{\tau}Y_{i}(t)\omega_{g}^{2}(t)d\Lambda_{g}(t)\right]=\int_{0}^{\tau}\omega_{g}^{2}(t)\pi_{g}(t)d\Lambda_{g}(t),\\
B_{g}&=\text{E}\left[\int_{0}^{\tau}\int_{0}^{\tau}Y_{i}(t)Y_{i}(s)\omega_{g}(t)\omega_{g}(s)d\Lambda_{g}(t)d\Lambda_{g}(s)\right]=2\int_{s=0}^{\tau}\left[\int_{t=0}^{s}\omega_{g}(t)d\Lambda_{g}(t)\right]\pi_{g}(s)\omega_{g}(s)d\Lambda_{g}(s).\end{aligned}$
Inserting Equation (10) into Equation (9) yields Equation (2).
Equation (3) holds under equal dropout since $\omega_{0}(t)\equiv
p_{1}\exp(\beta)/D$, $\omega_{1}(t)\equiv p_{0}/D$,
$I_{\beta}=p_{0}\omega_{0}E_{0}=p_{1}\omega_{1}E_{1}$,
$A_{g}=\omega_{g}^{2}E_{g}$, $B_{g}=2\omega_{g}^{2}F_{g}$,
$E_{1}=E_{0}\exp(\beta)$, $F_{1}=F_{0}\exp(2\beta)$ and
$F_{1}/E_{1}^{2}=F_{0}/E_{0}^{2}$, where $D=p_{0}+p_{1}\exp(\beta)$.
### A.2 Asymptotic variance expressions in two designs under equal dropout
#### A.2.1 Design 1
The planned treatment duration is $\tau_{c}$ years for each subject (the
accrual period is irrelevant in the sample size calculation). The loss to
follow-up is exponentially distributed with mean $\delta^{-1}$. The
probability that a subject is in the trial at time $t$ after randomization is
$\pi(t)=\exp(-\delta t)$.
Weibull event rate function
Suppose the rate function is $\lambda_{0}(t)=\psi\nu t^{\nu-1}$ and the mean
function is $\Lambda_{0}(t)=\psi t^{\nu}$ for the recurrent event in the
control group, where $\psi$ is a scale parameter and $\nu$ is a shape
parameter. Let $\text{IG}(\nu,a)=\int_{t=0}^{a}t^{\nu-1}\exp(-t)dt$ be the
incomplete gamma function and
$\text{IG}(\nu,a,b)=\int_{t=b}^{a}t^{\nu-1}\exp(-t)dt$. We have
$E_{0}^{\text{I}}=\int_{0}^{\tau_{c}}\pi(t)d\Lambda_{0}(t)=\psi\nu\int_{0}^{\tau_{c}}\exp(-\delta t)t^{\nu-1}dt=\begin{cases}\frac{\psi\nu}{\delta^{\nu}}\text{IG}(\nu,\delta\tau_{c})&\text{ if }\delta\neq 0\\ \psi\tau_{c}^{\nu}&\text{ at }\delta=0,\end{cases}$
$F_{0}^{\text{I}}=\int_{0}^{\tau_{c}}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)=\psi^{2}\nu\int_{0}^{\tau_{c}}\exp(-\delta t)t^{2\nu-1}dt=\begin{cases}\frac{\psi^{2}\nu}{\delta^{2\nu}}\text{IG}(2\nu,\delta\tau_{c})&\text{ if }\delta\neq 0\\ \psi^{2}\tau_{c}^{2\nu}/2&\text{ at }\delta=0.\end{cases}$
Piecewise constant event rate function
Let $\lambda_{0}(t)=\sum_{k=1}^{d}\tilde{\lambda}_{k}I(l_{k-1}\leq t<l_{k})$,
where $l_{0}=0$, $l_{d}=\tau_{c}$. Then
$\Lambda_{0}(t)=\Lambda_{0}(l_{k-1})+\tilde{\lambda}_{k}(t-l_{k-1})$ when
$l_{k-1}\leq t<l_{k}$. Let $\Delta_{k}=l_{k}-l_{k-1}$ and
$G_{km}=\int_{l_{k-1}}^{l_{k}}\exp[-\delta(t-l_{k-1})](t-l_{k-1})^{m}dt=\int_{0}^{\Delta_{k}}\exp[-\delta
t]t^{m}dt$ for $m=0,1,2$. Then
$\begin{cases}G_{k0}=\frac{1-\exp(-\delta\Delta_{k})}{\delta},\,\,G_{k1}=\frac{1-(1+\delta\Delta_{k})\exp(-\delta\Delta_{k})}{\delta^{2}},\,\,G_{k2}=\frac{2-(\delta^{2}\Delta_{k}^{2}+2\delta\Delta_{k}+2)\exp(-\delta\Delta_{k})}{\delta^{3}}&\text{ if }\delta>0\\ G_{k0}=\Delta_{k},\,\,G_{k1}=\frac{\Delta_{k}^{2}}{2},\,\,G_{k2}=\frac{\Delta_{k}^{3}}{3}&\text{ at }\delta=0\end{cases}$ (11)
We have
$E_{0}^{\text{I}}=\int_{0}^{\tau_{c}}\pi(t)d\Lambda_{0}(t)=\sum_{k=1}^{d}\tilde{\lambda}_{k}\int_{l_{k-1}}^{l_{k}}\pi(t)dt=\begin{cases}\sum_{k=1}^{d}\tilde{\lambda}_{k}\exp[-\delta l_{k-1}]G_{k0}&\text{ if }\delta\neq 0\\ \Lambda_{0}(\tau_{c})&\text{ at }\delta=0,\end{cases}$
$F_{0}^{\text{I}}=\int_{0}^{\tau_{c}}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)=\sum_{k=1}^{d}\tilde{\lambda}_{k}\int_{l_{k-1}}^{l_{k}}\pi(t)\Lambda_{0}(t)dt=\begin{cases}\sum_{k=1}^{d}\tilde{\lambda}_{k}\exp[-\delta l_{k-1}][\Lambda_{0}(l_{k-1})G_{k0}+\tilde{\lambda}_{k}G_{k1}]&\text{ if }\delta\neq 0\\ \Lambda_{0}^{2}(\tau_{c})/2&\text{ at }\delta=0.\end{cases}$
#### A.2.2 Design 2
Subjects are enrolled during an accrual period of $\tau_{a}$ years, and
followed for an additional $\tau_{c}$ years after the closure of recruitment.
The total study duration is $\tau=\tau_{a}+\tau_{c}$ years. Suppose the entry
time for a subject is distributed with density function given by
$f(e_{i})=\frac{\eta\exp(-\eta e_{i})}{1-\exp(-\eta\tau_{a})},\text{ where
}0\leq e_{i}\leq\tau_{a}.$
The entry distribution is convex (faster patient entry at the beginning) if
$\eta>0$, concave (lagging patient entry) if $\eta<0$, and uniform
($f(e_{i})=1/\tau_{a}$) in the limit $\eta\rightarrow 0$. In terms of the sample size
calculation, design $1$ can be viewed as a special case of design $2$ by
setting $\tau_{a}=0$.
Given the entry time $e_{i}$, the maximum follow-up for an individual is
$\tau-e_{i}$. We assume the loss to follow-up is exponentially distributed
with mean $\delta^{-1}$. The probability that a subject is still in the trial
at time $t$ after randomization is
$\displaystyle\begin{aligned}
\pi(t)&=\Pr(T_{i}>t)=\Pr(T_{i}>t|e_{i}+t\leq\tau)\Pr(e_{i}+t\leq\tau)+\Pr(T_{i}>t|e_{i}+t>\tau)\Pr(e_{i}+t>\tau)\\
&=\begin{cases}\exp(-\delta t)&\text{ if }t\leq\tau_{c}\\ \exp(-\delta t)\frac{1-\exp[-\eta(\tau-t)]}{1-\exp(-\eta\tau_{a})}&\text{ if }\tau_{c}<t\leq\tau.\end{cases}\end{aligned}$ (12)
When $\eta\rightarrow 0$, we replace
$\frac{1-\exp[-\eta(\tau-t)]}{1-\exp(-\eta\tau_{a})}$ by
its limiting value $(\tau-t)/\tau_{a}$ in Equation (12).
In design 2, it is easy to see that
$\int_{0}^{\tau}\pi(t)d\Lambda_{0}(t)=E_{0}^{\text{I}}+\int_{\tau_{c}}^{\tau}\pi(t)d\Lambda_{0}(t)\text{
and
}\int_{0}^{\tau}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)=F_{0}^{\text{I}}+\int_{\tau_{c}}^{\tau}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t),$
where $E_{0}^{\text{I}}$ and $F_{0}^{\text{I}}$ are defined in Appendix A.2.1.
Below we give analytic expressions for
$\int_{\tau_{c}}^{\tau}\pi(t)d\Lambda_{0}(t)$ and
$\int_{\tau_{c}}^{\tau}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)$ at $\eta=0$. The
expressions for $\eta\neq 0$ are omitted for brevity.
Weibull event rate function
Suppose $\lambda_{0}(t)=\psi\nu t^{\nu-1}$. When $\eta=0$, we get
$\displaystyle\begin{aligned}
\int_{\tau_{c}}^{\tau}\pi(t)d\Lambda_{0}(t)&=\begin{cases}\frac{\tau\psi\nu}{\tau_{a}\delta^{\nu}}\text{IG}(\nu,\delta\tau,\delta\tau_{c})-\frac{\psi\nu}{\tau_{a}\delta^{\nu+1}}\text{IG}(\nu+1,\delta\tau,\delta\tau_{c})&\text{ if }\delta>0\\ \frac{\tau\psi}{\tau_{a}}[\tau^{\nu}-\tau_{c}^{\nu}]-\frac{\psi\nu}{\tau_{a}(\nu+1)}[\tau^{\nu+1}-\tau_{c}^{\nu+1}]&\text{ at }\delta=0\end{cases}\\
\int_{\tau_{c}}^{\tau}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)&=\begin{cases}\frac{\tau\psi^{2}\nu}{\tau_{a}\delta^{2\nu}}\text{IG}(2\nu,\delta\tau,\delta\tau_{c})-\frac{\psi^{2}\nu}{\tau_{a}\delta^{2\nu+1}}\text{IG}(2\nu+1,\delta\tau,\delta\tau_{c})&\text{ if }\delta>0\\ \frac{\tau\psi^{2}}{2\tau_{a}}[\tau^{2\nu}-\tau_{c}^{2\nu}]-\frac{\psi^{2}\nu}{\tau_{a}(2\nu+1)}[\tau^{2\nu+1}-\tau_{c}^{2\nu+1}]&\text{ at }\delta=0\end{cases}\end{aligned}$ (13)
Piecewise constant event rate function
Suppose $\lambda_{0}(t)=\sum_{k=1}^{d^{*}}\tilde{\lambda}_{k}I(l_{k-1}\leq
t<l_{k})$, where $l_{d^{*}}=\tau=\tau_{a}+\tau_{c}$ and $l_{d}=\tau_{c}$. For
notational convenience, if $\tau_{c}$ is not a knot, it can be added as a
knot. When $\eta=0$,
$\displaystyle\begin{aligned}
&\int_{\tau_{c}}^{\tau}\pi(t)d\Lambda_{0}(t)=\sum_{k=d+1}^{d^{*}}\int_{l_{k-1}}^{l_{k}}\tilde{\lambda}_{k}\exp(-\delta t)\frac{\tau-t}{\tau_{a}}dt=\sum_{k=d+1}^{d^{*}}\frac{\tilde{\lambda}_{k}}{\tau_{a}}\exp(-\delta l_{k-1})\left[(\tau-l_{k-1})G_{k0}-G_{k1}\right],\\
&\int_{\tau_{c}}^{\tau}\pi(t)\Lambda_{0}(t)d\Lambda_{0}(t)=\sum_{k=d+1}^{d^{*}}\frac{\tilde{\lambda}_{k}}{\tau_{a}}\exp(-\delta l_{k-1})\left\{\Lambda_{0}(l_{k-1})[(\tau-l_{k-1})G_{k0}-G_{k1}]+\tilde{\lambda}_{k}[(\tau-l_{k-1})G_{k1}-G_{k2}]\right\},\end{aligned}$
where $G_{k0}$, $G_{k1}$ and $G_{k2}$ are defined in Equation (11).
## References
* 1 Wang Y, Meyerson L, Tang Y, Qian N. Statistical methods for the analysis of relapse data in MS clinical trials. Journal of the Neurological Sciences 2009; 285: 206 - 11.
* 2 Aban IB, Cutter GR, Mavinga N. Inferences and power analysis concerning two negative binomial distributions with an application to MRI lesion counts data. Computational Statistics & Data Analysis 2009; 53: 820 -33.
* 3 Tang Y. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time. Journal of Biopharmaceutical Statistics 2015; 25: 1100 - 13.
* 4 Matsui S. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data. Statistics in Medicine 2005; 24: 1339 - 56.
* 5 Nicholas R, Straube S, Schmidli H, Schneider S, Friede T. Trends in annualized relapse rates in relapsing-remitting multiple sclerosis and consequences for clinical trial design. Multiple sclerosis 2011; 17: 1211 \- 17.
* 6 Inusah S, Sormani MP, Cofield SS, et al. Assessing changes in relapse rates in multiple sclerosis. Multiple sclerosis 2010; 16: 1414 - 21\.
* 7 Andersen PK, Gill RD. Cox’s regression model counting process: a large sample study. Annual of Statistics 1982; 10: 1100 - 20.
* 8 Tang Y. Negative Binomial Regression: Sample Size with Unequal Follow-Up Times. In: CRC Press; 2017.
* 9 Song R, Kosorok MR, Cai J. Robust covariate-adjusted log-rank statistics and corresponding sample size formula for recurrent events data. Biometrics 2008; 92: 741 -50.
* 10 Lawless JF, Nadeau C. Some Simple Robust Methods for the Analysis of Recurrent Events. Technometrics 1995; 37: 158 - 68.
* 11 Lin DY, Wei LJ, Yang I, Ying Z. Semiparametric Regression for the Mean and Rate Functions of Recurrent Events. Journal of the Royal Statistical Society B 2000; 62: 711 - 30.
* 12 Tang Y. Algorithms for imputing partially observed recurrent events with applications to multiple imputation in pattern mixture models. Journal of Biopharmaceutical Statistics 2018; 28: 518-533.
* 13 Tang Y. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up time. Journal of Biopharmaceutical Statistics 2018; 28: 475-491.
* 14 Tang Y. Exact and approximate power and sample size calculations for analysis of covariance in randomized clinical trials with or without stratification. Journal of Biopharmaceutical research 2018; 10: 274-286.
* 15 Tang Y. A Noniterative Sample Size Procedure for Tests Based on t Distributions. Statistics in Medicine 2018; 37: 3197 - 3213\.
* 16 Dean CB, Balshaw R. Efficiency Lost By Analyzing Counts Rather Than Event Times in Poisson and Overdispersed Poisson Regression Models. Journal of the American Statistical Association 1997; 92: 1387 - 98.
* 17 Amorim LD, Cai J. Modelling recurrent events: a tutorial for analysis in epidemiology. International journal of epidemiology 2015; 44: 1 - 10.
* 18 Kauermann G, Carroll RJ. A note on the efficiency of sandwich covariance matrix estimation. Journal of the American Statistical Association 2001; 96: 1387 - 96.
* 19 Lu B, Preisser JS, Qaqish BF, Suchindran C, Bangdiwala S, Wolfson M. A comparison of two bias-corrected covariance estimators for generalized estimating equations. Biometrics 2007; 63: 935-41.
* 20 Guo X, Pan W, Connett JE, Hannan PJ, French SA. Small-sample performance of the robust score test and its modifications in generalized estimating equations. Statistics in Medicine 2005; 24: 3479 - 95.
|
# Jet Parameters in the Black-Hole X-Ray Binary MAXI J1820+070
Andrzej A. Zdziarski (Nicolaus Copernicus Astronomical Center, Polish Academy
of Sciences, Bartycka 18, PL-00-716 Warszawa, Poland), Alexandra J. Tetarenko
(NASA Einstein Fellow; East Asian Observatory, 660 N. A’ohōkū Place, University
Park, Hilo, Hawaii 96720, USA; and Department of Physics and Astronomy, Texas
Tech University, Lubbock, Texas 79409-1051, USA), and Marek Sikora (Nicolaus
Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18,
PL-00-716 Warszawa, Poland)
###### Abstract
We study the jet in the hard state of the accreting black-hole binary MAXI
J1820+070. From the available radio-to-optical spectral and variability data,
we put strong constraints on the jet parameters. We find that, while it is not
possible to uniquely determine the jet Lorentz factor from the spectral and
variability properties alone, we can estimate the jet opening angle ($\approx
1.5\pm 1\arcdeg$), the distance at which the jet starts emitting synchrotron
radiation ($\sim 3\times 10^{10}$ cm), the magnetic field strength there
($\sim 10^{4}$ G), and the maximum Lorentz factor of the synchrotron-emitting
electrons ($\sim$110–150) with relatively low uncertainty, as these quantities
depend only weakly on the bulk Lorentz factor. We find that the breaks in the
variability power spectra from radio to sub-mm are consistent with variability
damping over a time scale equal to the travel time along the jet, at any
Lorentz factor. The Lorentz factor can still be constrained by the
electron-positron pair production rate within the jet base, which we calculate
based on the observed X-ray/soft gamma-ray spectrum, and by the jet power,
which is required to be less than the accretion power. The minimum ($\sim$1.5)
and maximum ($\sim$4.5) Lorentz factors correspond to the dominance of pairs
and ions, and to the minimum and maximum jet power, respectively. We estimate
the magnetic flux threading the black hole and find that the jet can be
powered by the Blandford-Znajek mechanism in a magnetically arrested accretion
flow. We point out the similarity of our derived formalism to that of core
shifts observed in extragalactic radio sources.
## 1 Introduction
Our knowledge of the structure of extragalactic radio jets is already quite
detailed, see, e.g., Blandford et al. (2019). While a number of aspects
remain to be determined, e.g., the jet lateral structure (Perlman et al.,
2019), radio maps provide us with the projected structures of the jets, in
particular their opening angles. Magnetic fields can be determined via core
shifts, which are angular displacements of the position of the radio core
between two frequencies (e.g., Lobanov 1998; Zamaninasab et al. 2014;
Zdziarski et al. 2015, hereafter Z15). Superluminal motion allows us to
estimate the jet bulk Lorentz factors, $\Gamma$ (e.g., Jorstad et al. 2001;
Kellermann et al. 2004; Lister et al. 2019). They can also be independently
estimated from radiative models of blazars (Ghisellini & Tavecchio, 2015) and
from the radio core luminosity function (Yuan et al., 2018). The jet power can
be estimated from calorimetry of radio lobes (Willott et al., 1999; Shabala &
Godfrey, 2013) and from core shifts (e.g., Pjanka et al. 2017). Finally, the e±
pair content can be obtained from a comparison of the observed jet powers with
theoretical predictions (Sikora et al. 2020 and references therein).
On the other hand, our knowledge of jets in accreting black-hole (BH)
binaries, which are the main class of microquasars, is much more rudimentary.
From the available radio maps, we can only set upper limits on the jet opening
angles (e.g., Stirling et al. 2001; Miller-Jones et al. 2006). If we know the
distance, we can constrain the Lorentz factors of ejected transient blobs, a
phenomenon associated with transitions from the hard spectral state to the
soft one, e.g., Atri et al. (2020); Wood et al. (2021). The Lorentz factors of
the steady compact jets commonly present in the hard spectral state are even
more difficult to constrain, with only rough estimates of $\Gamma\gtrsim
1.5$–2 (e.g., Stirling et al. 2001; Casella et al. 2010; Tetarenko et al.
2019). Therefore, an accurate determination of the jet parameters even for a
single source would be very important.
Here we study the jet in the transient BH X-ray binary MAXI J1820+070 during
its outburst in 2018. We use the observational data of outstanding quality for
that source presented by Tetarenko et al. (2021) (hereafter T21), which give
us the opportunity for such an accurate parameter determination. We interpret
these data in terms of the classical model of Blandford & Königl (1979) and
Königl (1981). In this model, flat radio spectra are interpreted as a
superposition of synchrotron self-absorbed and optically-thin spectra, spectra
above the break frequency are optically-thin synchrotron, and the electron
distribution and the magnetic field strength are parametrized by power laws.
The jet in the synchrotron-emitting part is assumed to be conical and to move
with a constant bulk velocity. We provide an updated analysis of those data,
making corrections to the similar model used in T21. In particular, T21
followed a formulation of the model that suffered from some errors related
to the transformation from the comoving frame to that of the observer. Also,
we properly connect the break frequencies in the radio/mm spectra with the
propagation time along the jet, and we correct the expressions for the jet
power. Furthermore, we link the dependencies of the energy densities on the
distance to the observed hard inverted spectral index, and we use additional
data from Rodi et al. (2021). This allows us to obtain constraints based on
the full radio-through-optical spectrum.
MAXI J1820+070 was discovered during its outburst in 2018 (Tucker et al.,
2018; Kawamuro et al., 2018). The source is relatively nearby, with a distance
of $D\approx 2.96\pm 0.33$ kpc, measured via radio parallax (Atri et
al., 2020). Then, Wood et al. (2021) determined $D\leq 3.11\pm 0.06$ kpc based
on the proper motion of the moving ejecta during the hard-to-soft state
transition. The inclination of the radio jet is $i\approx 64\arcdeg\pm
5\arcdeg$ (Wood et al., 2021), while the inclination of the binary is
constrained to $i_{\rm b}\approx 66\arcdeg$–$81\arcdeg$ (Torres et al., 2020).
The BH mass is given by $M\approx(5.95\pm 0.22){\rm M}_{\sun}/\sin^{3}i_{\rm
b}$ (Torres et al., 2020).
The data presented in T21 were obtained during a multiwavelength observational
campaign from radio to X-rays performed during a 7-h period during the hard
state on 2018 April 12 (MJD 58220). We also use the simultaneous IR and
optical data obtained by Rodi et al. (2021). During the campaign, the source
was in a part of the initial hard state which formed a plateau in the X-ray
hardness vs. flux diagram (Buisson et al., 2019).
Our theoretical model is presented in Section 2 and Appendix A. In Section 3,
we fit it to the data. In Section 4, we discuss various aspects of our
results, and show that our formalism based on time lags between different
frequencies of the flat spectrum is equivalent to the formalism based on core
shifts. We give our conclusions in Section 5.
## 2 Steady-state jets
### 2.1 Power-law dependencies
Following the theoretical interpretation in T21, we consider a continuous,
steady-state jet in the range of distances from the BH where both the bulk
Lorentz factor and the opening angle are constant, i.e., the jet is conical.
We consider its synchrotron emission and self-absorption, and assume that the
hard, partially self-absorbed part of the total spectrum results from a
superposition of spectra from different distances, with breaks corresponding
to unit self-absorption optical depth (Blandford & Königl, 1979). We use the
formulation of the model of Königl (1981) (which is an extension of the model
of Blandford & Königl 1979 to cases with the self-absorbed radio index
different from zero) as developed in Zdziarski et al. (2019), hereafter Z19.
In this model, the jet is assumed to be laterally uniform, which is a good
approximation for $i\gg\Theta$, where $\Theta$ is the jet (half) opening
angle. We denote the observed and comoving-frame photon frequencies as $\nu$
and $\nu^{\prime}$, respectively, and introduce the Doppler factor, $\delta$,
and the dimensionless distance, $\xi$,
$\nu^{\prime}=\frac{\nu(1+z_{\rm
r})}{\delta},\quad\delta\equiv\frac{1}{\Gamma(1-\beta\cos
i)},\quad\xi\equiv\frac{z}{z_{0}},$ (1)
where $z_{\rm r}$ is the cosmological redshift (equal to zero in our case),
$\Gamma$ and $\beta$ are the jet bulk Lorentz factor and the velocity in units
of the light speed, respectively, $z$ is the distance from the BH, and $z_{0}$
is the distance at which the jet becomes optically thin to self-absorption at
all considered frequencies. As in Königl (1981), we assume the electron
differential density distribution, $n(\gamma,\xi)$, and the magnetic field
strength, $B$, are parameterized by power-law dependencies,
$R(\xi)=z_{0}\xi\tan\Theta,\,n(\gamma,\xi)=n_{0}\xi^{-a}\gamma^{-p},\,B(\xi)=B_{0}\xi^{-b},$
(2)
where $R$ is the jet radius and $\gamma$ is the Lorentz factor of the emitting
electrons in the jet comoving frame, with $\gamma_{\rm
min}\leq\gamma\leq\gamma_{\rm max}$. The quantities $R$ and $z$ are measured
in the local observer’s frame, while $n$ and $B$ are given in the comoving
frame (for notational simplicity, we skip the primes). For a conserved
electron number along the jet and conserved magnetic energy flux dominated by
toroidal fields, we have $a=2$ and $b=1$, corresponding to the spectral index
of $\alpha=0$, independent of the value of $p$ (Blandford & Königl, 1979).
Here, we define $\alpha$ by the energy flux density of
$F_{\nu}\propto\nu^{\alpha}$. If either the electron or magnetic energy is
dissipated, $a>2$, $b>1$, respectively. Then, the emission weakens with the
distance and the synchrotron spectrum in the partially self-absorbed frequency
range becomes harder than in the conserved case, $\alpha>0$. The spectral
indices of partially self-absorbed and optically thin synchrotron emission are
given by
$\alpha=\frac{5a+3b+2(b-1)p-13}{2a-2+b(p+2)},\quad\alpha_{\rm
thin}=\frac{1-p}{2},$ (3)
respectively (Königl 1981; Z19). Using a delta-function approximation to the
single-electron synchrotron spectrum at $\gamma^{2}\gg 1$ (assumed hereafter),
the synchrotron frequency for a given $\gamma$ and $\xi$, and its range
emitted by the jet are
$\displaystyle\frac{h\nu^{\prime}}{m_{\rm e}c^{2}}=$
$\displaystyle\frac{B_{0}\xi^{-b}}{B_{\rm cr}}\gamma^{2},$ (4)
$\displaystyle\frac{h\nu^{\prime}_{\rm min}}{m_{\rm
e}c^{2}}=\frac{B_{0}\xi_{\rm M}^{-b}}{B_{\rm cr}}\gamma_{\rm min}^{2}$
$\displaystyle,\quad\frac{h\nu^{\prime}_{\rm max}}{m_{\rm
e}c^{2}}=\frac{B_{0}}{B_{\rm cr}}\gamma_{\rm max}^{2},$ (5)
respectively. Here $z_{0}\xi_{\rm M}$ is the distance at which the jet
terminates, $h$ is the Planck constant, $B_{\rm cr}={2\pi m_{\rm
e}^{2}c^{3}/(eh)}\approx 4.414\times 10^{13}$ G is the critical magnetic field
strength, and $m_{\rm e}$ and $e$ are the electron mass and charge,
respectively. The spectral density of the synchrotron emission for a single
jet parameterized by Equation (2) and for $\nu_{\rm min}\leq\nu\leq\nu_{\rm
max}$ is then given by (see equation A5 of Z19),
$\displaystyle F_{\nu}$ $\displaystyle\simeq F_{0}\left(\frac{\nu}{\nu_{0}}\right)^{\frac{5}{2}}\int_{\xi_{\rm min}}^{\xi_{\rm max}}{\rm d}\xi\,\xi^{1+b/2}\left\{1-\exp\left[-\tau\left(\frac{\nu}{\nu_{0}},\xi\right)\right]\right\},$ (6) $\displaystyle F_{0}$ $\displaystyle\equiv{(1+z_{\rm r})^{\frac{7}{2}}(m_{\rm e}h\delta)^{\frac{1}{2}}\pi C_{1}(p)z_{0}^{2}\nu_{0}^{\frac{5}{2}}\tan\Theta\sin i\over 6cC_{2}(p)(B_{0}/B_{\rm cr})^{\frac{1}{2}}D^{2}}.$ (7)
Here $F_{0}$ is a constant proportional to the bolometric flux,
$\tau(\nu/\nu_{0},\xi)$ is the synchrotron self-absorption optical depth,
$\nu_{0}$ is the break frequency, see Equation (13) below, and $C_{1}(p)$,
$C_{2}(p)$ are constants following from averaging the synchrotron emission and
absorption coefficients over the pitch angle,
$\displaystyle C_{1}(p)={3^{p+4\over 2}\Gamma_{\rm E}\left(3p-1\over 12\right)\Gamma_{\rm E}\left(3p+19\over 12\right)\Gamma_{\rm E}\left(p+1\over 4\right)\over 2^{5}\pi^{1\over 2}\Gamma_{\rm E}\left(p+7\over 4\right)},$ (8)
$\displaystyle C_{2}(p)={3^{p+3\over 2}\Gamma_{\rm E}\left(3p+2\over 12\right)\Gamma_{\rm E}\left(3p+22\over 12\right)\Gamma_{\rm E}\left(p+6\over 4\right)\over 2^{4}\pi^{\frac{1}{2}}\Gamma_{\rm E}\left(p+8\over 4\right)}$ (9)
(cf. Jones et al. 1974; Zdziarski et al. 2012), where $\Gamma_{\rm E}$ is the
Euler Gamma function. In the extragalactic case, $D$ is the luminosity
distance. The lower and upper limits of the integral (6) are,
$\displaystyle\xi_{\rm min}(\nu)=\max\left[1,\left(\frac{B_{0}m_{\rm
e}c^{2}\gamma_{\rm min}^{2}}{h\nu^{\prime}B_{\rm
cr}}\right)^{\frac{1}{b}}\right],$ (10) $\displaystyle\xi_{\rm
max}(\nu)=\min\left[\left(\frac{\nu_{\rm
max}}{\nu}\right)^{\frac{1}{b}},\xi_{\rm M}\right],$ (11)
respectively. Figure 1 shows an example of the spatial dependencies of the
emission along the jet at different frequencies for $\gamma_{\rm min}=10$ and
$\nu_{\rm max}=10^{7}$ GHz. For $\gamma_{\rm min}\gtrsim 30$, the emission at
all frequencies in this case would be in the optically-thin regime only, cf.
Equation (16) below. We note that above we have assumed the single-electron
synchrotron emission is isotropic in the plasma frame, which is strictly valid
for a tangled magnetic field.
Figure 1: An example of the spatial structure of the jet emission at different
frequencies for $\nu_{0}=2\times 10^{4}$ GHz, $F_{0}=300$ mJy, $b=1.1$,
$a=2b$, $p=2$. The red dots, blue dashes, magenta dots, cyan dashes, and green
dots correspond to $\nu=5.25$, 25.9, 343.5, $1.4\times 10^{5}$ GHz, and
$5\times 10^{6}$ GHz, respectively. The black solid curve corresponds to
$\nu=\nu_{0}$. The three lowest and two highest frequency curves end at
$\xi_{\rm min}>1$ and at $\xi_{\rm max}$, respectively, beyond which there is
no emission at those $\nu$. These values of $\xi_{\rm min}$, $\xi_{\rm max}$
were calculated for $\gamma_{\rm min}=10$, $\nu_{\rm max}=10^{7}$ GHz,
$i=64\arcdeg$, $\Gamma=3$ and $B_{0}=10^{4}$ G (which correspond to
$\gamma_{\rm max}\approx 793$).
Figure 2: An example of the jet synchrotron spectrum for $\nu_{0}=2\times
10^{4}$ GHz, $F_{0}=300$ mJy, $b=1.1$, $a=2b$, $p=2$, $\nu_{\rm max}=10^{7}$
GHz, $i=64\arcdeg$. This spectrum is virtually independent of $\xi_{\rm
min}(\nu)$ in the shown range of $\nu$, as long as $\xi_{\rm
min}(\nu)\ll\xi_{\nu}$. The blue curve shows the accurate spectrum of Equation
(6), and the red dashes show the approximation of Equation (12). The gradual
high-energy cutoff of the accurate spectrum is due to $\xi_{\rm max}$
decreasing with increasing $\nu$ and reaching unity for $\nu_{\rm max}$.
Then, power-law dependencies assuming $\xi_{\rm min}=1$, $\xi_{\rm
max}=\infty$ in the optically-thick and optically thin cases are (cf. equation
A11 in Z19)
$F_{\nu}\simeq 2F_{0}\begin{cases}\displaystyle{\Gamma_{\rm E}\left[\frac{2a-6+b(p+1)}{2a-2+b(p+2)}\right]\frac{(\nu/\nu_{0})^{\alpha}}{4+b}},&\nu\ll\nu_{0};\\ \displaystyle{\frac{(\nu/\nu_{0})^{\alpha_{\rm thin}}}{2a-6+b(p+1)}},&\nu_{0}\ll\nu\ll\nu_{\rm max}.\end{cases}$ (12)
Figure 2 shows an example comparison of the accurate spectrum of Equation (6)
with these power-law approximations. We see they are inaccurate around
$\nu_{0}$ as well as close to $\nu_{\rm max}$, where they fail to reproduce
the gradual cutoff of the accurate spectrum. While the power-law asymptotic
solutions intersect at a $\nu$ slightly different from $\nu_{0}$, that
frequency has no physical meaning since the actual spectrum in that range does
not follow the broken-power law form, see Figure 2. We can define a broken-
power law approximation by taking the minimum of the two branches in Equation
(12).
The optical depth along a line of sight crossing the jet spine can be written
as
$\tau(\nu/\nu_{0},\xi)=(\nu/\nu_{0})^{-(p+4)/2}\xi^{1-a-b(p+2)/2},$ (13)
where $\nu_{0}$ is defined by $\tau(\nu=\nu_{0},\xi=1)=1$. The place $\xi=1$,
or $z=z_{0}$, corresponds to the jet being optically thin for all
$\nu\geq\nu_{0}$. There is no synchrotron emission at $z<z_{0}$, and thus
$z_{0}$ corresponds to the onset of the jet emission. (We note that since the
partially optically-thick emission of a jet at $z>z_{0}$ would remain almost
unaffected if there were still emission following the scaling of Equation (2)
at $z<z_{0}$ (which would, however, decrease the actual value of $z_{0}$ and
increase $\nu_{0}$), it is also possible to formulate the structure of the
partially optically-thick part without invoking $z_{0}$ and $\nu_{0}$. Such a
formulation is presented in Equations (A1–A4) in Appendix A.) The relationship of
$\nu_{0}$ to the jet parameters is given by equation (A8) of Z19. We express
it here as a formula for the normalization of the electron distribution,
$n_{0}=\left(\frac{B_{\rm cr}}{B_{0}\delta}\right)^{1+\frac{p}{2}}\left[\frac{h\nu_{0}(1+z_{\rm r})}{m_{\rm e}c^{2}}\right]^{2+\frac{p}{2}}\frac{\alpha_{\rm f}\sin i}{C_{2}(p)\pi\sigma_{\rm T}z_{0}\tan\Theta},$ (14)
where $\alpha_{\rm f}$ is the fine-structure constant and $\sigma_{\rm T}$ is
the Thomson cross section. From Equation (13), the distance along the jet at
which $\tau(\nu,\xi_{\nu})=1$ at $\nu\lesssim\nu_{0}$ is
$\xi_{\nu}=\left(\frac{\nu}{\nu_{0}}\right)^{-q},\,\,q\equiv\frac{p+4}{2a+bp+2b-2},\,\,z_{\nu}=z_{0}\xi_{\nu}.$ (15)
For $a=2$ and $b=1$, we have $q=1$ at any $p$. This distance is very close to
that at which most of the flux at a given $\nu$ is emitted, which can be
defined by the maximum of ${\rm d}F_{\nu}(\xi)/{\rm d}\,\ln\xi$, see Figure 1,
and can be calculated using Equation (6). For example, at $a=2.2$, $b=1.1$,
and $p=2$, that maximum is at $\xi\approx 1.19\xi_{\nu}$. The emission around
the peak has a broad spatial distribution; the 50% values of the maximum flux
are reached at $\xi=0.65\xi_{\nu}$ and $3.64\xi_{\nu}$.
Then, the Lorentz factor responsible for the bulk of emission at $\xi_{\nu}$
is
$\gamma_{\nu}=\left(\frac{B_{\rm cr}}{B_{0}}\frac{h\nu_{0}}{\delta m_{\rm
e}c^{2}}\right)^{1/2}\left(\frac{\nu}{\nu_{0}}\right)^{(1-bq)/2},$ (16)
which is usually weakly dependent on $\nu$. While the integral spectrum of
Equation (6) is valid for any $\gamma_{\rm min}$, the asymptotic power laws of
Equation (12) require $\gamma_{\rm min}$ to be lower than $\gamma_{\nu}$ by a
factor of at least a few for the values of $\nu$ of interest (in the range
$<\nu_{0}$), and $\gamma_{\rm max}$ to be larger than $\gamma_{\nu}$ by a
factor of a few. If a high-energy cutoff is observed, an additional
constraint follows from it, see Equation (4).
If we know $\alpha$ and $\alpha_{\rm thin}$, we still cannot determine the
values of $a$ and $b$ separately. However, a likely possibility is that the
ratio between the electron and magnetic energy densities remains constant,
i.e., maintaining the same degree of equipartition along the jet, in which
case $a=2b$. We define an equipartition parameter as the ratio of the energy
densities,
$\beta_{\rm eq}\equiv{u_{\rm p}\over B^{2}/8\pi}={n_{0}m_{\rm e}c^{2}(1+k_{\rm
i})(f_{E}-f_{N})\over B_{0}^{2}/8\pi},$ (17)
where
$f_{E}\equiv\begin{cases}{\gamma_{\rm max}^{2-p}-\gamma_{\rm min}^{2-p}\over 2-p},&p\neq 2;\\ \ln{\gamma_{\rm max}\over\gamma_{\rm min}},&p=2,\end{cases}\quad f_{N}\equiv\frac{\gamma_{\rm min}^{1-p}-\gamma_{\rm max}^{1-p}}{p-1},$ (18)
the second equality in Equation (17) is at $z_{0}$, $u_{\rm p}$ is the
particle energy density, $k_{\rm i}$ accounts for the energy density in
particles other than the power-law electrons, in particular in ions (excluding
the rest energy), and $p>1$ has been assumed in the expression for $f_{N}$.
For $a=2b$, $\beta_{\rm eq}$ is constant along the jet (provided $k_{\rm i}$
is also constant) at $z\geq z_{0}$, which yields
$\alpha=\frac{(b-1)(13+2p)}{b(p+6)-2},\quad q=\frac{p+4}{b(p+6)-2}.$ (19)
Below, we use $\beta_{\rm eq}$ and $a=2b$ to constrain the jet parameters.
We note that the case of $a>2$ requires that either $\gamma_{\rm min}$
decreases or the electrons removed from their power-law distribution move to
some low energies below $\gamma_{\rm min}$ (with negligible emission). Since
we assume $\gamma_{\rm min}$ to be constant along the jet, the latter has to
be the case.
We next consider the difference between the arrival times of two photons. The
first photon, at $\nu_{1}$, is emitted toward the observer at $\xi_{\nu_{1}}$.
The second photon, with $\nu_{2}<\nu_{1}$, is emitted at $\xi_{\nu_{2}}$ by
the same comoving point of the jet after the time $\Delta t_{\rm e}$, which is
further downstream in the jet by $\beta c\Delta t_{\rm e}$. Since the jet
moves at an angle $i$ with respect to the line of sight, the distance of the
emitting point to the observer will become shorter during this time by $\beta
c\Delta t_{\rm e}\cos i$. For an observed difference in the arrival times of
$\Delta t_{\rm a}$, the intrinsic separation between the emission points
(measured in the local observer’s frame) will be
$z_{\nu_{2}}-z_{\nu_{1}}=z_{0}(\xi_{\nu_{2}}-\xi_{\nu_{1}})=\frac{\Delta
t_{\rm a}\beta c}{(1-\beta\cos i)(1+z_{\rm r})}.$ (20)
At frequencies $\leq\nu_{0}$, $\xi_{\nu}$ follows from Equation (15). Here, we
have also taken into account the redshift, making this expression correct also
for an extragalactic source. Given an observed $\Delta t_{\rm a}$, Equations
(15) and (20) imply
$\displaystyle z_{0}=\frac{\Delta t_{\rm a}\nu_{0}^{-q}\beta c}{(1-\beta\cos
i)\left(\nu_{2}^{-q}-\nu_{1}^{-q}\right)(1+z_{\rm
r})}=\frac{t_{0}c\beta\Gamma\delta}{1+z_{\rm r}},$ (21) $\displaystyle
t_{0}\equiv\frac{\Delta t_{\rm a}}{\Delta\xi}=\frac{\Delta t_{\rm
a}\nu_{0}^{-q}}{\nu_{2}^{-q}-\nu_{1}^{-q}},$ (22)
where $t_{0}$ can be obtained from time lag data if $\nu_{0}$ and $q$ are
known (from the spectrum).
Appendix A provides general solutions for $z_{\nu}$, $B$, $\Theta$ and $n$ to
the equations in this section assuming the validity of Equation (12) for
$\nu<\nu_{0}$ as functions of $b$ (assuming $a=2b$), $p$, $\nu_{0}$, $F_{0}$,
$t_{0}$, $i$ and $D$, as well as $\beta_{\rm eq}$, $\gamma_{\rm min}$ and
$\gamma_{\rm max}$.
### 2.2 The jet power
The jet power can be calculated using standard expressions. Note that it is
defined in terms of the proper enthalpy rather than the energy density, e.g.,
Levinson (2006), Zdziarski (2014). Then, the component of the jet power due to
both the relativistic electrons and magnetic fields (assuming they are
predominantly toroidal at $z_{0}$) including both the jet and counterjet for
$a=2b$ and $k_{\rm i}=0$ at $z\geq z_{0}$ is
$P_{B}+P_{\rm e}=\left(\frac{1}{2}+\frac{\beta_{\rm eq}}{3}\right)c\beta(B_{0}z_{0}\Gamma\tan\Theta)^{2}\xi^{2-2b}.$ (23)
The usable power in cold ions, at any $z$, but calculated at $z_{0}$, is
$P_{\rm i}=2\pi\mu_{\rm e}n_{0}f_{N}\left(1-\frac{2n_{+}}{n_{\rm e}}\right)m_{\rm p}c^{3}\beta\Gamma(\Gamma-1)(z_{0}\tan\Theta)^{2},$
(24)
where $n_{\rm e}$ and $n_{+}$ is the density of both electrons and positrons
(which ratio is assumed to be constant along the jet), and positrons only,
respectively, $\mu_{\rm e}=2/(1+X)$ is the electron mean molecular weight, $X$
($\approx 0.7$ for the cosmic composition) is the H mass fraction, $m_{\rm p}$
is the proton mass, and $f_{N}$ is given by Equation (18). This is the power
in ions that has to be supplied to the jet, and then it can be dissipated,
hence the factor $(\Gamma-1)$. Equation (24) neglects the possible presence of
background electrons being piled up at $\gamma<\gamma_{\rm min}$ already at
$z_{0}$. On the other hand, $a>2$ at a constant $\gamma_{\rm min}$ requires
that leptons removed from $n(\gamma,\xi)$ at $z>z_{0}$ do appear at some low
energies below $\gamma_{\rm min}$ (with negligible emission).
We note that in a steady state, $2n_{+}/n_{\rm e}=2\dot{N}_{+}/\dot{N}_{\rm
e}$, where $2\dot{N}_{+}$ is the total rate of advection upstream of e± pairs
produced at the jet base, and $\dot{N}_{\rm e}$ is the total lepton flow rate,
$\dot{N}_{\rm e}\approx 2\pi n_{0}f_{N}c\beta\Gamma(z_{0}\tan\Theta)^{2}.$ (25)
This implies
$P_{\rm i}=\mu_{\rm e}m_{\rm p}c^{2}(\Gamma-1)\left(\dot{N}_{\rm
e}-2\dot{N}_{+}\right)\geq 0.$ (26)
Since the pair production rate at the jet base, $\dot{N}_{+}$ (Section 2.3),
is independent of $\Gamma$, the condition of $P_{\rm i}\geq 0$ can give a
lower limit on $\Gamma$.
In any jet model, the total usable jet power is approximately constrained by
the accretion power,
$P_{\rm j}=P_{B}+P_{\rm e}+P_{\rm
i}\lesssim\dot{M}c^{2}=\frac{L}{\epsilon_{\rm eff}},$ (27)
where $\dot{M}$ is the mass accretion rate, $L$ is the bolometric luminosity
and $\epsilon_{\rm eff}\sim 0.1$ is the accretion efficiency. This then gives
an upper limit on $\Gamma$. The limit $\dot{M}c^{2}$ can be exceeded if the
rotation of the BH is tapped, but only by a factor $\lesssim$1.3 and for a
maximally rotating BH, see the references in Section 2.4 below.
Finally, we consider the power lost in synchrotron emission. It equals the
synchrotron luminosity emitted by both jets in all directions (which is
Lorentz invariant). Since $\nu_{0}$ depends on the direction and the partially
self-absorbed emission is not isotropic in the comoving frame, we neglect its
effect and assume the entire emission is optically thin and isotropic in that
frame, which is a good approximation for hard electron distributions with
$p\lesssim 2.5$ or so. This gives
$\displaystyle P_{\rm S}$ $\displaystyle\approx\frac{1}{3}(B_{0}\tan\Theta)^{2}\sigma_{\rm T}cz_{0}^{3}n_{0}\Gamma f_{E2}f_{\xi},$ $\displaystyle f_{E2}$ $\displaystyle\equiv\begin{cases}{\gamma_{\rm max}^{3-p}-\gamma_{\rm min}^{3-p}\over 3-p},&p\neq 3;\\ \ln\frac{\gamma_{\rm max}}{\gamma_{\rm min}},&p=3,\end{cases}$ (28) $\displaystyle f_{\xi}$ $\displaystyle\equiv\int_{1}^{\infty}{\rm d}\xi\,\xi^{2-2b-a}=\frac{1}{2b+a-3},$
where $2b+a>3$ is assumed. This $P_{\rm S}$ approximately equals the intrinsic
luminosity of both jets,
$L_{\rm jet}\approx 8\pi D^{2}\delta^{-3}\Gamma\int_{0}^{\nu_{\rm
max}(\delta)}{\rm d}\nu F_{\nu},$ (29)
where $F_{\nu}$ is for the approaching jet only, $\nu_{\rm max}$ is given by
Equation (4) and the transformation law is for a stationary jet emitting
isotropically in its comoving frame (Sikora et al., 1997). In addition to that
law, $\nu_{\rm max}$ depends on $\delta$. For self-consistency of our
equations, $P_{\rm S}\ll P_{\rm j}$ is required.
### 2.3 Pair production
As we see from Equation (26), the jet power in ions (given an observed
synchrotron spectrum) depends strongly on the abundance of e± pairs. In the
case of extragalactic jets, there are strong indications that pairs dominate
by number, though most of the rest mass is usually still in ions (Sikora et
al., 2020). In the case of jets in microquasars, this is uncertain. An
important issue here is the origin of the pairs. A likely mechanism is pair
production in photon-photon collisions by photons produced close to the BH.
Figure 3: A sketch of the pair-producing geometry based on fig. 3.9 of
Tchekhovskoy (2015), which shows the result of his 3D GRMHD simulation for
magnetically-arrested accretion on a BH with the spin parameter of
$a_{*}=0.99$. In our case the disk is hot in its inner part (up to the radius
$R_{\rm hot}$) and surrounded by a cool outer disk. We consider e± pair
production within the jet base (shown in green), which is devoid of matter,
with the wavy arrows representing pair-producing photons. We denote the
characteristic jet radius of the pair-producing region as $R_{\rm jet}$. In
addition, pairs are produced within the hot disk, but it is magnetically
shielded from the jet base. The black solid curves show the poloidal magnetic
field.
Pairs can be produced within the hot flow, e.g., Svensson (1987). Since the
Larmor radius of either a proton or an electron is orders of magnitude lower
than $R_{\rm g}$ (where $R_{\rm g}=GM/c^{2}$ is the gravitational radius), the
magnetic base of the jet is shielded from the hot plasma and it is unlikely
that pairs produced in the accretion flow enter the jet. On the other hand,
pairs can also be produced within the magnetic base of the jet, outside the
hot plasma (Sikora et al., 2020). There, photon-photon collisions will create
e± pairs in an environment devoid of background matter, thus strongly reducing
the rate of pair annihilation. A possible geometry (see also Henri & Pelletier
1991; Ferreira et al. 2006) is shown in Figure 3. From an observed hard X-ray
spectrum and a radius, $R_{\rm hot}$, of the emitting hot plasma (inferred,
e.g., from X-ray spectroscopy; Bambi et al. 2021), we can estimate the average
photon density within the jet base, which then gives us the rate of pair
production per unit volume, $\propto R_{\rm hot}^{-4}$. We approximate the
pair-producing volume as two cylinders with the height $R_{\rm hot}$ and the
characteristic radius of the jet, $R_{\rm jet}$, i.e., $V=2\pi R_{\rm
jet}^{2}R_{\rm hot}$. We can then write the total lepton production rate as
$2\dot{N}_{+}=A_{\gamma\gamma}R_{\rm hot}^{-3}R_{\rm jet}^{2},$ (30)
where the factor $A_{\gamma\gamma}$ would follow from detailed calculations.
Depending on the equilibrium density of the pairs, some of them would
annihilate, and some would be advected to the BH, reducing the effective
$2\dot{N}_{+}$. We address this issue for the case of MAXI J1820+070 in
Section 3.1.
### 2.4 The Blandford-Znajek mechanism
We can also estimate the jet power in the framework of the model with
extraction of the rotational power of the BH (Blandford & Znajek, 1977), as
illustrated in Figure 3. The jet power in this case depends on the magnetic
flux, $\Phi_{\rm BH}$, threading the BH (on one hemisphere), which can be
written as
$\Phi_{\rm BH}=\phi_{\rm BH}(\dot{M}c)^{1/2}R_{\rm g},$ (31)
where $\phi_{\rm BH}$ is a dimensionless magnetic flux. Its maximum value of
$\approx$50 is obtained in magnetically arrested disks (MAD; Narayan et al.
2003), as found in GRMHD simulations of MAD accretion (Tchekhovskoy et
al. 2011; McKinney et al. 2012; see a more accurate value in Davis &
Tchekhovskoy 2020). It has then been found that
$P_{\rm j}\approx 1.3\left(\frac{\phi_{\rm
BH}}{50}\right)^{2}h_{0.3}a_{*}^{2}\dot{M}c^{2},$ (32)
where $a_{*}$ is the BH spin parameter and $h_{0.3}$ is defined by the half-
thickness of the disk being $H_{\rm disk}=0.3h_{0.3}R_{\rm disk}$ (Davis &
Tchekhovskoy, 2020). This maximum differs from that of Equation (27) by the
factor $1.3h_{0.3}a_{*}^{2}$. In the spectrally hard state, the disk is most
likely hot, in which case $h_{0.3}\sim 1$.
Figure 4: MCMC fit results for the seven model-independent quantities, which
require only the assumptions of $a=2b$, see Section 3.1. Here and in Figure 8
below, the panels show the histograms of the one-dimensional posterior
distributions for the model parameters, and the two-parameter correlations,
with the best-fitting values of the parameters indicated by green
lines/squares. The best-fit results for fitted quantities are taken as the
medians of the resulting posterior distributions, and are shown by the middle
vertical dashed lines in the distribution panels. The surrounding vertical
dashed lines correspond approximately to a $1\sigma$ uncertainty.
We can estimate $\phi_{\rm BH}$ using the magnetic field strength measured far
from the BH by using the conservation of the magnetic flux. Specifically, we
use the expected equality between the poloidal and toroidal field components
at the Alfvén surface (in the observer’s frame), whose radius, for strongly
magnetized jets, approaches the light cylinder radius, $R_{\rm LC}$
(Lyubarsky, 2010). This implies $\Gamma\langle
B^{\prime}_{\phi}\rangle\approx(R/R_{\rm LC})B_{\rm p}$, where $B_{\rm p}$ is
the poloidal field (which has the same value in the comoving and BH frames)
and $\langle B^{\prime}_{\phi}\rangle$ is the average toroidal field strength
in the comoving frame, denoted by $B$ in the remainder of this paper. Then,
the toroidal field dominates at $z$ satisfying $R(z)\gg\Gamma R_{\rm LC}$,
and, presumably, at $z\geq z_{0}$. The magnetic flux using this method was
determined for a sample of radio loud active galactic nuclei in Zamaninasab et
al. (2014) and Z15. We use the resulting formula as derived in Z15,
$\Phi_{\rm j}={2^{3/2}\pi R_{\rm H}sz_{0}B_{0}(1+\sigma)^{1/2}\over\ell
a_{*}},$ (33)
which allows us to estimate $\phi_{\rm BH}$ for a given $a_{*}$ by setting
$\Phi_{\rm j}=\Phi_{\rm BH}$. Here $R_{\rm H}=[1+(1-a_{*}^{2})^{1/2}]R_{\rm
g}$ is the BH horizon radius, $\ell\lesssim 0.5$ is the ratio of the field and
BH angular frequencies, and $s$ is the scaling factor relating the jet
opening angle to the bulk Lorentz factor (Komissarov et al., 2009;
Tchekhovskoy et al., 2009), limited by causality to $\lesssim 1$,
$\Theta\approx s\sigma^{1/2}/\Gamma.$ (34)
Here, $\sigma$ is the magnetization parameter, which is defined as the ratio
of the proper magnetic enthalpy to that for particles including the rest
energy,
$\sigma\equiv{B^{2}/4\pi\over\eta u_{\rm p}+\rho c^{2}}=\frac{2}{\beta_{\rm
eq}}\left[\eta+\frac{\mu_{\rm e}m_{\rm p}(1-2n_{+}/n_{\rm e})f_{N}}{m_{\rm
e}(f_{E}-f_{N})(1+k_{\rm i})}\right]^{-1},$ (35)
where $\rho$ is the rest-mass density, and $4/3<\eta<5/3$ is the particle
adiabatic index. The second equality relates $\sigma$ to $\beta_{\rm eq}$
assuming that the only ions are those associated with the power law electrons
(i.e., neglecting the possible presence of ions associated with electrons with
$\gamma<\gamma_{\rm min}$, e.g., with a quasi-Maxwellian distribution). For
$p>2$ and a large enough $\gamma_{\rm max}$, $f_{E}/f_{N}\approx\gamma_{\rm
min}(p-1)/(p-2)$. Then, for $\beta_{\rm eq}(1-2n_{+}/n_{\rm e})/\gamma_{\rm
min}\gg m_{\rm e}/m_{\rm p}$, we have $\sigma\ll 1$.
## 3 Application to MAXI J1820+070
Here, we apply the model of Section 2 to the source. We use the VLA fluxes at
5.25, 7.45, 8.5, 11.0, 20.9 and 25.9 GHz and the ALMA flux at 343.5 GHz (from
table 1 in T21). We also use the IR flux at $1.5\times 10^{5}$ GHz from
VLT/HAWK-I and the optical flux at $3.9\times 10^{5}$ GHz from NTT/ULTRACAM
(T21), the 13 fluxes between $1.37\times 10^{5}$ and $7.00\times 10^{5}$ GHz
from the VLT X-shooter, and the INTEGRAL/OMC flux at $5.66\times 10^{5}$ GHz
(Rodi et al., 2021). All of the
IR/optical fluxes have been de-reddened with $E(B\!-\!V)=0.18$ (Tucker et
al., 2018), as assumed in T21. We use the time lags between 25.9 GHz and lower
frequencies, 11.0 GHz and lower frequencies, and between 343.5 GHz and lower
frequencies. The lags are given in tables 3 and 4 of T21. This gives us 23
spectral measurements and 14 time lags. We present analytical and numerical
estimates in Sections 3.1 and 3.2, respectively. We assume the ratio between
the electron and magnetic energy densities to be constant along the jet, i.e.,
$a=2b$.
In our fits below, we use the Markov-Chain Monte Carlo (hereafter MCMC)
technique with wide uniform priors, see T21 for details. We assume
$i=64\arcdeg\pm 5\arcdeg$ (Wood et al., 2021) with a Gaussian prior, and
$D=2.96\pm 0.33$ kpc (Atri et al., 2020) with a Gaussian prior, but truncated
at the upper limit of $D_{\rm max}=3.11$ kpc found by Wood et al. (2021).
We assume the observed spectrum is from the approaching jet only. At $i\approx
64\arcdeg$, this assumption is satisfied only roughly. The ratio of the jet-
to-counterjet fluxes in the optically-thick part of the spectrum is given by
$[(1+\beta\cos i)/(1-\beta\cos i)]^{(7+3p)/(4+p)}$ (which follows from
Zdziarski et al. 2012), which is $\approx$7 at the fitted $p\approx 2.21$ (see
below).
### 3.1 The initial fit and analytical estimates
Table 1: The basic parameters of the jet in MAXI J1820+070. $b$ | $p$ | $\nu_{0}$ | $F_{0}$ | $\nu_{\rm max}$ | $\alpha_{\rm disk}$ | $F_{\rm disk}$ | $t_{0}$
---|---|---|---|---|---|---|---
| | $10^{4}$ GHz | mJy | $10^{4}$ GHz | | mJy | s
$1.10^{+0.01}_{-0.01}$ | $2.21^{+0.22}_{-0.19}$ | $2.32^{+0.65}_{-0.60}$ | $298^{+31}_{-38}$ | $32.5^{+5.0}_{-4.4}$ | $0.54^{+0.13}_{-0.12}$ | $30.4_{-5.9}^{+6.0}$ | $0.79^{+0.30}_{-0.19}$
Figure 5: The radio-to-optical spectrum from T21 (VLA, ALMA, VLT/HAWK-I,
NTT/ULTRACAM; red error bars), the 339 MHz measurement (VLITE, magenta error
bar; Polisensky et al. 2018), and from the VLT/X-shooter (blue error bars) and
the INTEGRAL/OMC (cyan error bar) as obtained by Rodi et al. (2021), but with
the de-reddening correction for $E(B\!-\!V)=0.18$ (Tucker et al., 2018). The
error bars for the radio and sub-mm measurements of T21 are their statistical
and systematic errors added in quadrature, and 10% systematic errors are
assumed for the IR and optical measurements. The spectrum above 5
GHz is fitted by the jet model of Equation (6) using the best-fit parameters
shown in Figure 4 (cyan dashed curve) and a phenomenological power law
approximating the disk component with $\alpha_{\rm disk}=0.53$ (green solid
curve). The former is virtually independent of $\xi_{\rm min}$ within the
ranges obtained in the full fits (Equation 10; Section 3.2), so it can be
assumed to be unity. The sum is shown by the solid black curve. The
corresponding asymptotic optically thick and optically thin spectra of
Equation (12) are shown by the magenta dotted lines.
Figure 6: The time lags measured by T21 vs. the theoretically expected
distance in units of $z_{0}$ between the emission points for the partially-
self-absorbed part of the spectrum. The blue and red symbols correspond to the
lags between 343.5 GHz and radio frequencies (5.25–25.9 GHz), and within the
radio frequencies, respectively, where we assumed a constant ratio between the
electron and magnetic energy densities, $\xi_{\nu}=(\nu/\nu_{0})^{-q}$,
$q\approx 0.88$, and $\nu_{0}=2.32\times 10^{4}$ GHz. The diagonal line gives
the best-fit theoretical relationship between the two quantities corresponding
to $t_{0}=0.79$ s, see text.
We find that we can solve for $b$, $p$, $\nu_{0}$, $F_{0}$, $\nu_{\rm max}$
and $t_{0}$ by assuming only $a=2b$. From the measured fluxes, we obtain $b$,
$p$, $\nu_{0}$, $F_{0}$ and $\nu_{\rm max}$ using Equations (6–7). We find
that the fitted spectrum is very insensitive to $\xi_{\rm min}(\nu)$, Equation
(10), as long as it is low enough. We can use any sufficiently low value,
e.g., $\xi_{\rm min}=1$, and check the consistency of this choice a posteriori.
Similar to Rodi et al. (2021), we find the presence of an additional hard
component beyond $\nu_{\rm max}$, apparently due to the emission of an
accretion disk. Given the limited range of the fitted frequencies, we fit it
phenomenologically as a power law, $F_{\nu,\rm disk}=F_{\rm
disk}(\nu/10^{5}{\rm GHz})^{\alpha_{\rm disk}}$. Then, using the obtained
values of $b$, $p$ and $\nu_{0}$, we can fit the time lags using Equations
(19) and (22). However, with the MCMC technique, we fit the flux and time-lag
measurements simultaneously. The fitted parameters and the correlations
between them are shown in Figure 4, and the parameters are listed in Table 1.
The best-fitting values are given as the median of the resulting posterior
distributions, and the lower and upper uncertainties are reported as the range
between the median and the 15th percentile, and the 85th percentile and the
median, respectively. These uncertainties correspond approximately to
$1\sigma$ errors. We use these best-fit values as well as the best-fit values
of $D$ and $i$ in our estimates in this subsection.
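For illustration, the percentile summary described above can be computed from the posterior samples with a few lines of NumPy; this is a minimal sketch assuming the flattened MCMC chain is available as an array (the file and array names are our own placeholders, not from the paper).

```python
import numpy as np

# Hypothetical flattened chain of shape (n_samples, n_params);
# the parameter order follows Table 1.
names = ["b", "p", "nu0", "F0", "nu_max", "alpha_disk", "F_disk", "t0"]
samples = np.load("chain.npy")  # placeholder file name

for j, name in enumerate(names):
    lo, med, hi = np.percentile(samples[:, j], [15.0, 50.0, 85.0])
    # Median as the best-fit value; the 15th/85th percentiles give the
    # lower/upper uncertainties, corresponding approximately to 1 sigma.
    print(f"{name} = {med:.3g} (+{hi - med:.2g} / -{med - lo:.2g})")
```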
Figure 5 shows the observed average radio-to-optical spectrum fitted by the
above model. The best-fitting spectral indices in the optically thick and
optically thin regimes are then $\alpha\approx 0.25$ and $\alpha_{\rm
thin}\approx-0.61$, respectively. We show the theoretical spectrum calculated
by integrating Equation (6) and the asymptotic optically thick and thin
spectra of Equation (12) for this fit.
We then show the time lags in Figure 6, where we plot the values of the
measured $\Delta t_{\rm a}$ against the separation in the dimensionless units,
$\xi$, using Equations (15) and (22). At the best-fit values of $b$ and $p$,
$q\approx 0.883$, see Equation (19). The actual lags have to follow a single
dependence relating the physical separation between the emission points to
$\Delta t_{\rm a}$; this is shown by the diagonal line giving the linear
relationship between $\Delta t_{\rm a}$ and $\Delta\xi_{\nu}$ that corresponds
to the best-fit value of $t_{0}$, see Equation (22). We see a certain offset between
the points corresponding to the lags between the sub-mm frequency of 343.5 GHz
and 6 radio frequencies (blue error bars), and the lags measured between the
radio frequencies (red error bars). This may be related to the different
methods used in T21 to determine those. On the other hand, the offset is
significantly reduced for $q=0.8$; that value of $q$, however, is not
compatible with $\alpha\approx 0.25$. This may indicate that the jet is more
complex than we assume, e.g., that $\Gamma$, $\Theta$ or $b$ is not
constant at $z\geq z_{0}$.
Our formalism assumes the lags correspond to propagation of perturbations
between different values of $z_{\nu}$ at the jet speed, $\beta$. With this
assumption, we obtain $z_{0}$ as a function of $\Gamma$, see Equation (21),
$z_{0}\approx(2.37\times 10^{10}\,{\rm cm})(t_{0}/0.79\,{\rm
s})\beta\Gamma\delta.$ (36)
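For concreteness, the kinematics entering Equation (36) can be evaluated as in the minimal Python sketch below (our own illustrative code); it reproduces the values of $\delta$ and $z_{0}$ quoted later in this subsection for $i=64\arcdeg$.

```python
import numpy as np

def beta_delta(Gamma, i_deg):
    """Jet speed beta and Doppler factor delta = 1/[Gamma(1 - beta cos i)]."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))
    return beta, delta

t0 = 0.79  # s, best-fit lag normalization (Table 1)
for Gamma in (2.0, 3.0, 4.0):
    beta, delta = beta_delta(Gamma, 64.0)
    z0 = 2.37e10 * (t0 / 0.79) * beta * Gamma * delta  # Equation (36), cm
    print(f"Gamma={Gamma:.0f}: delta={delta:.2f}, z0={z0:.2e} cm")
# -> delta ~ 0.81, 0.57, 0.43 and z0 ~ 3.3, 3.8, 4.0 x 10^10 cm
```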
We then use the solutions obtained in Appendix A assuming $\gamma_{\rm
min}=3$, $k_{\rm i}=0$. However, we only know $\nu_{\rm max}$ rather than
$\gamma_{\rm max}$, see Equation (5). Since the solutions depend on
$\gamma_{\rm max}$ relatively weakly, we adopt here the best-fit values of
$B_{0}=10^{4}$ G and $\Gamma=2.2$ obtained in Section 3.2 for $\gamma_{\rm
min}=3$; these yield $\gamma_{\rm max}=125$, which we use hereafter in
this subsection. Using Equation (A5), we obtain at the best fit
$\Theta\approx\frac{2.32\arcdeg}{\left(\beta\Gamma\right)^{1.89}\delta^{2.57}\beta_{\rm
eq}^{0.11}}.$ (37)
At $\beta\approx 1$ and $\beta_{\rm eq}=1$, $\Theta\approx
0.53\arcdeg\Gamma^{0.67}$. Next, Equations (A6–A7) give at the best fit,
$\displaystyle B_{0}\approx\frac{8.2\times 10^{3}\,{\rm
G}(\beta\Gamma)^{0.22}}{\delta^{0.13}\beta_{\rm eq}^{0.22}},$ (38)
$\displaystyle n_{0}\approx\frac{1.8\times 10^{12}\,{\rm
cm}^{-3}(\beta\Gamma)^{0.43}\beta_{\rm eq}^{0.57}}{\delta^{0.26}},$ (39)
with $B_{0}\propto\Gamma^{0.35}$ and $n_{0}\propto\Gamma^{0.70}$ at
$\beta\approx 1$. Equation (37) shows that we cannot determine both $\Theta$
and $\delta$ even assuming a value of $\beta_{\rm eq}$ (on which $\Theta$
depends very weakly). We can also calculate the Thomson scattering optical
depth along the jet radius at $z\geq z_{0}$, which equals
$\tau_{\rm T}(\xi)=\sigma_{\rm
T}n_{0}f_{N}z_{0}\tan\Theta\xi^{1-2b}\approx\frac{2.5\times 10^{-4}\beta_{\rm
eq}^{0.46}}{(\beta\Gamma)^{0.46}\delta^{1.82}\xi^{1.2}}.$ (40)
At $i=64\arcdeg$ and $\Gamma=2$, 3, 4, we have $\delta\approx 0.81$, 0.57,
0.43, and, at $\beta_{\rm eq}=1$, $\Theta\approx 1.4\arcdeg$, $1.4\arcdeg$,
$1.5\arcdeg$, $B_{0}\approx 1.0,\,1.1,\,1.2\times 10^{4}$ G, $z_{0}\approx
3.3,\,3.8,\,4.0\times 10^{10}$ cm, and $\tau_{\rm T}(\xi=1)\approx
2.9,\,4.3,\,6.1\times 10^{-4}$, respectively. The values of $z_{0}$ correspond
to $\approx(2.8$–$3.3)\times 10^{4}R_{\rm g}$ at an assumed $M=8{\rm
M}_{\sun}$. We find that $\Theta$, $B_{0}$ and $z_{0}$ depend relatively
weakly on $\Gamma$ for $1.5\lesssim\Gamma\lesssim 5$.
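The numbers above follow directly from Equations (37)–(40); a short sketch (ours, assuming $\beta_{\rm eq}=1$ and $i=64\arcdeg$) that reproduces them:

```python
import numpy as np

def jet_parameters(Gamma, i_deg=64.0, beta_eq=1.0):
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))
    bg = beta * Gamma
    Theta = 2.32 / (bg**1.89 * delta**2.57 * beta_eq**0.11)     # Eq. (37), deg
    B0 = 8.2e3 * bg**0.22 / (delta**0.13 * beta_eq**0.22)       # Eq. (38), G
    n0 = 1.8e12 * bg**0.43 * beta_eq**0.57 / delta**0.26        # Eq. (39), cm^-3
    tau_T = 2.5e-4 * beta_eq**0.46 / (bg**0.46 * delta**1.82)   # Eq. (40), xi=1
    return Theta, B0, n0, tau_T

for Gamma in (2.0, 3.0, 4.0):
    Theta, B0, n0, tau_T = jet_parameters(Gamma)
    print(f"Gamma={Gamma:.0f}: Theta={Theta:.1f} deg, B0={B0:.2e} G, "
          f"n0={n0:.2e} cm^-3, tau_T(xi=1)={tau_T:.1e}")
# -> Theta ~ 1.4-1.5 deg, B0 ~ (1.0-1.2)e4 G, tau_T ~ (2.9-6.1)e-4
```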
We determine the typical Lorentz factors, $\gamma_{\nu}$, of relativistic
electrons giving rise to the emission at $\nu$, which in the partially self-
absorbed regime originates mostly from $z_{\nu}$, see Equation (16). We obtain
$\gamma_{\nu}\approx 32\beta_{\rm
eq}^{0.11}(\beta\Gamma)^{-0.11}\delta^{-0.43}(\nu/\nu_{0})^{0.014}.$ (41)
In order to obtain a power-law emission in that regime, we need $\gamma_{\rm
min}$ to be a factor of a few smaller. Thus, we require $\gamma_{\rm
min}\lesssim 10$ for the validity of the model. The maximum $\gamma$
corresponds to the fitted $\nu_{\rm max}$, Equation (5). From that, we obtain
$\gamma_{\rm max}$ ranging from $\approx$123 to 147 for $\Gamma$ increasing
from 2 to 4. Combining this with the values of $\tau_{\rm T}$ from Equation
(40), we find that the power in the synchrotron self-Compton component is
relatively similar to that in the synchrotron one, $P_{\rm
SSC}\lesssim\tau_{\rm T}\gamma_{\rm max}^{2}P_{\rm S}$.
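A quick numerical check of Equation (41) and of the SSC-to-synchrotron power bound, under the same assumptions as above (our own sketch):

```python
import numpy as np

Gamma, i_deg, beta_eq = 3.0, 64.0, 1.0
beta = np.sqrt(1.0 - 1.0 / Gamma**2)
delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))

# Equation (41) at nu = nu0; the (nu/nu0)^0.014 factor is nearly unity.
gamma_nu = 32.0 * beta_eq**0.11 * (beta * Gamma)**-0.11 * delta**-0.43
# Equation (40) at xi = 1, and a mid-range gamma_max.
tau_T = 2.5e-4 * beta_eq**0.46 / ((beta * Gamma)**0.46 * delta**1.82)
gamma_max = 135.0

print(f"gamma_nu ~ {gamma_nu:.0f}")                 # ~ 30-40
print(f"P_SSC/P_S <~ {tau_T * gamma_max**2:.1f}")   # of order a few
```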
Figure 7: The locations of the emission at the observed frequencies based on
the break in the power spectra as $z_{\rm b}=\beta c/f_{\rm break}$ for
$\Gamma=3$ and $i=64\arcdeg$, shown as their ratio to the locations based on
time lags and the slope of the partially self-absorbed spectrum,
$z_{\nu}\approx z_{0}(\nu/\nu_{0})^{-0.88}$.
We can then consider implications of the break frequencies, $f_{\rm b}$, in
the power spectra for different frequencies measured by T21. For those power
spectra, most of the variability power per $\ln f$ occurs at $f\leq f_{\rm
b}$, with the variability at higher frequencies strongly damped, see figs. 3
and 5 in T21. We define the distance, $z_{\rm b}$, as that covered by a jet
element moving with the jet velocity during the time $1/f_{\rm b}$,
$z_{\rm b}(\nu)\equiv\beta c/f_{\rm b}(\nu).$ (42)
[Footnote 2: T21 assumed $z_{\nu}=z_{\rm b}\equiv\beta c\delta/f_{\rm b}(\nu)$,
which they used as the final condition determining the jet parameters. Thus,
they transformed the observed variability frequency to the jet frame, $f_{\rm
b}/\delta$, and multiplied the resulting time scale, $\delta/f_{\rm b}$, by the
jet velocity in the observer’s frame, $\beta c$, which does not appear to be
correct. We note that in the present case we consider the light curve
originating from a fixed region of the jet around $z_{\nu}$. While the plasma
in that region is moving, two adjacent maxima in the observed light curve are
emitted from the same region in the frame connected to the BH, which is the
same frame as the observer’s one (in the absence of a redshift). Thus, a
frequency inferred from the variability power spectrum should not be
transformed.]
We can compare it to the distance along the jet from the BH up to the location
of the peak emission at $\nu$, i.e., $z_{\nu}$ (Equations 15, 21). In our
model, $z_{\nu}\approx z_{0}(\nu/\nu_{0})^{-0.88}$ with
$z_{0}\propto\beta/(1-\beta\cos i)$, giving $z_{\rm b}/z_{\nu}\propto
1-\beta\cos i$. Then, this ratio depends only weakly on $\beta$ (or $\Gamma$);
at $i=64\arcdeg$, $1-\beta\cos i$ changes only from 1 at $\beta\ll 1$ to 0.56
at $\beta\approx 1$. This implies that this correlation cannot be used to
determine the actual bulk Lorentz factor of the jet.
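The weakness of this dependence is easily checked (a trivial sketch of ours):

```python
import numpy as np

cos_i = np.cos(np.radians(64.0))
for Gamma in (1.05, 1.5, 2.0, 3.0, 5.0, 10.0):
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    # z_b/z_nu is proportional to this factor, which spans only ~1 to 0.56
    print(f"Gamma={Gamma:5.2f}: 1 - beta cos(i) = {1.0 - beta * cos_i:.2f}")
```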
Figure 7 shows $z_{\rm b}/z_{\nu}$ vs. $z_{\nu}$ for $\Gamma=3$. We see an
approximately constant ratio of $z_{\rm b}/z_{\nu}\approx 1.5$–2. Therefore,
$z_{\rm b}$ is proportional to, and close to, $z_{\nu}$ in all of the cases,
i.e., $1/f_{\rm b}$ is close to the travel time along $z_{\nu}$. A possible
explanation of the damping of the variability at frequencies $>c/z_{\nu}$
appears to be a superposition of the contributions to the emission from
different parts of the region dominating at a given $\nu$, which is $\propto
z_{\nu}$, as shown in Figure 1. The peak of ${\rm d}F_{\nu}/{\rm d}\,\ln z$ for
$p=2.21$ is at $\approx 1.15z_{\nu}$ and its width, defined by ${\rm
d}F_{\nu}/{\rm d}\,\ln z$ decreasing to 50% of the peak, is
$(0.65$–$3.16)z_{\nu}$. Thus, if different parts vary independently, the
variability will be damped at $f\gtrsim c/(2z_{\nu})$, as observed.
Alternatively, the observed radio/IR variability can be driven by the variable
power supplied from the vicinity of the BH with a wide range of frequencies
(Malzac, 2013, 2014) and then transferred upstream; the travel time can then
act as a low-pass filter, removing most of the variability at frequencies
$f>\beta c/z_{\nu}$. This can happen due to damping of perturbations along the
jet by some kind of internal viscosity, e.g., collisions between shells within
the jet moving with a range of velocities (Jamil et al., 2010). The process
would then be analogous to viscous damping in accretion disks, where
modulations with a period shorter than the signal travel time across the disk
are strongly damped (Zdziarski et al., 2009). This picture is also compatible
with the integrated fractional variability of the power spectra (RMS)
decreasing with the decreasing $\nu$ (as shown in fig. 5 of T21), i.e., the
damping increases with the distance travelled along the jet.
We note that the break frequencies in the power spectra of T21 have been
defined by choosing a specific, and not unique, algorithm, and the obtained
values of $f_{\rm b}$ are close to the minimum Fourier frequency at which the
power spectra are measured for $\nu<10$ GHz, which limits the accuracy of the
determination of those $f_{\rm b}$. Also, while the damping of variability
above $\beta c/z_{\nu}$ clearly occurs, details of the physics behind it
remain uncertain, and the damping could start at $f\sim\beta c/(2z_{\nu})$
instead of exactly $\beta c/z_{\nu}$. Summarizing, our results are completely
compatible with the variability damping at time scales shorter than the
light/jet travel time across $z_{\nu}$. However, unlike our previous estimates
from the observed spectrum and time lags, which are based on a relatively
rigorous and well-understood model, the detailed cause of the connection
between the break frequencies and the distance along the jet remains
uncertain.
We can also compare the predicted location of the bulk of the 15 GHz
emission, $z_{\nu}\approx z_{0}(\nu/\nu_{0})^{-0.88}\approx 2.5\times 10^{13}$
cm (at $\Gamma=3$, but weakly dependent on it), with the jet angular size at
this frequency from the VLBA observation on 2018 March 16 (MJD 58193),
reported in T21 as $0.52\pm 0.02$ mas; the corresponding deprojected size is
$(2.60\pm 0.10)\times 10^{13}$ cm. The total flux density at 15 GHz was measured as
$F_{\nu}\approx 20.0\pm 0.1$ mJy. However, the VLBA observation was 27 d
before the radio/sub-mm ones. On MJD 58220, our best-fit spectral model yields
$F_{\nu}\approx 56\pm 1$ mJy. Within the framework of the continuous conical
jet model, we have $z_{\nu}\propto F_{\nu}^{(p+6)/(2p+13)}$ (Equation A2;
Zdziarski et al. 2012). Thus, for $p=2.2$ we predict the size at 15 GHz on MJD
58220 to be $(56/20)^{0.47}\approx 1.6$ times larger than that on MJD 58193,
namely $\sim 4\times 10^{13}$ cm. While somewhat larger than the above
$z_{\nu}$, this size appears consistent with it since the peak of ${\rm
d}F_{\nu}/{\rm d}\,\ln z$ for $p=2.21$ is at $\approx 1.15z_{\nu}\approx
3.0\times 10^{13}$ cm, and that spatial distribution is broad and skewed
toward higher distances, see Figure 1 and the discussion of it above.
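The arithmetic of this comparison is simple (our own sketch):

```python
# Within the continuous conical jet model, z_nu scales with the flux as
# z_nu ~ F_nu^((p+6)/(2p+13)) (Equation A2; Zdziarski et al. 2012).
p = 2.2
exponent = (p + 6.0) / (2.0 * p + 13.0)       # ~0.47
flux_ratio = 56.0 / 20.0                      # MJD 58220 vs. MJD 58193 at 15 GHz
size_58193 = 2.60e13                          # deprojected VLBA size, cm
size_58220 = flux_ratio**exponent * size_58193
print(f"exponent = {exponent:.2f}, predicted size = {size_58220:.2e} cm")
# -> ~1.6 times larger, i.e., ~4e13 cm
```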
We then estimate the rate of pair production. For MAXI J1820+070, pair
production within the hot plasma was calculated by Zdziarski et al. (2021)
based on the spectrum observed by INTEGRAL in the hard state. That spectrum
was measured up to $\sim$2 MeV, well above the pair production threshold of
511 keV, and modelled by Comptonization. It was found that an appreciable pair
abundance can be obtained only provided the hard X-ray source size is as small
as several $R_{\rm g}$, while the spectroscopy based on the relativistic
broadening of the fluorescent Fe K$\alpha$ line indicates a size of $\gtrsim
20R_{\rm g}$. Then, the pair abundance within the Comptonizing plasma is very
low.
However, as discussed in Section 2.3, pair production within the jet base can
be much more efficient. To calculate it, we adapt the results of Zdziarski et
al. (2021). We modify their equation (1) to calculate the photon density above
the hot disk, dividing the total rate of the photon emission by $2\pi R_{\rm
hot}^{2}$ (including both sides). We then use this photon density in equation
(3) of that paper for the spectral parameters of the average spectrum (table 2 in
Zdziarski et al. 2021). This gives the pair production rate per unit volume.
With the assumptions as in Section 2.3, we have
$2\dot{N}_{+}\approx 4.65\times 10^{40}\,{\rm s}^{-1}\left(\frac{R_{\rm
hot}}{20R_{\rm g}}\right)^{-3}\left(\frac{R_{\rm jet}}{10R_{\rm
g}}\right)^{2}.$ (43)
This is then balanced by the sum of the rates of pair annihilation and pair
advection. Using formulae in Zdziarski et al. (2021), we have found that pair
annihilation can be neglected for the advection velocity of
$\beta_{\pm}\gtrsim 0.1$. It appears that such a velocity can be achieved due
to the net momentum component of the pair-producing photons along the $z$
axis, see Figure 3. Thus, while some of the produced pairs will annihilate
(and a small fraction will be advected to the BH), a major fraction of the
produced pairs will have a sufficient net bulk velocity to escape upstream.
Then, the lepton flow rate through the jet, Equation (25), for $\gamma_{\rm
min}=3$ is
$\dot{N}_{\rm e}\approx\frac{6.7\times 10^{40}{\rm s}^{-1}\beta_{\rm
eq}^{0.35}}{(\beta\Gamma)^{0.35}\delta^{3.39}}\propto\Gamma^{3.05},$ (44)
where Equations (36), (37), (39) have been used and the proportionality
assumes $\beta\approx 1$. Comparing with Equation (43), we find $\dot{N}_{\rm
e}>2\dot{N}_{+}$ at any $\Gamma$ for $R_{\rm hot}=20R_{\rm g}$, $R_{\rm
jet}=10R_{\rm g}$ and $\gamma_{\rm min}=3$. Thus, at these parameters the
synchrotron-emitting plasma is never composed of pure pairs. If we assume
either $R_{\rm jet}=15R_{\rm g}$ or $\gamma_{\rm min}=10$, we find
$\dot{N}_{\rm e}=2\dot{N}_{+}$ at $\Gamma\approx 2$, which thus represents the
minimum possible $\Gamma$ for these parameters. While the hot disk and jet
radii and $\gamma_{\rm min}$ are poorly constrained, we consider it highly
remarkable that the rates in Equations (43) and (44), obtained from completely
different physical considerations, are of the same order of magnitude; this
indicates that the two rates may indeed be similar in this source. Then, the
jet can contain a large fractional abundance of pairs, and they can dominate by
number over the ions.
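The two rates can be compared numerically as in the sketch below (ours; $\beta_{\rm eq}=1$, $i=64\arcdeg$):

```python
import numpy as np

def lepton_rate(Gamma, i_deg=64.0, beta_eq=1.0):
    """Lepton flow rate through the jet, Equation (44), in s^-1."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))
    return 6.7e40 * beta_eq**0.35 / ((beta * Gamma)**0.35 * delta**3.39)

def pair_rate(R_hot=20.0, R_jet=10.0):
    """Pair production rate 2*Ndot_+, Equation (43), in s^-1 (radii in R_g)."""
    return 4.65e40 * (R_hot / 20.0)**(-3) * (R_jet / 10.0)**2

for Gamma in (1.5, 2.0, 3.0, 4.0):
    print(f"Gamma={Gamma}: Ndot_e={lepton_rate(Gamma):.2e}, "
          f"2*Ndot_+={pair_rate():.2e}")
# Ndot_e exceeds 2*Ndot_+ at every Gamma for the default radii; with
# pair_rate(R_jet=15.0) ~ 1.0e41, the rates match at Gamma ~ 2.
```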
Figure 8: (a) The MCMC fit results for $\Gamma$, $\Theta$, $z_{0}$, $B_{0}$,
$P_{\rm j}$, $2n_{+}/n_{\rm e}$ and $\gamma_{\rm max}$ assuming $\gamma_{\rm
min}=3$ and $\epsilon_{\rm eff}=0.3$. The meaning of the panels and lines is
the same as in Figure 4. See Section 3.2 for details.
Figure 8: (b) The MCMC fit results for $\gamma_{\rm min}=10$ and $\epsilon_{\rm eff}=0.1$.

Table 2: The parameters of the jet in MAXI J1820+070 other than those given in Table 1 (‘f’ denotes a fixed parameter).

$\gamma_{\rm min}$ | $\epsilon_{\rm eff}$ | $\Gamma$ | $\Theta$ ($\arcdeg$) | $\log_{10}z_{0}$ (cm) | $B_{0}$ ($10^{4}$ G) | $\log_{10}P_{\rm j}$ (erg s$^{-1}$) | $\gamma_{\rm max}$
---|---|---|---|---|---|---|---
3f | 0.3f | $2.20^{+0.69}_{-0.46}$ | $1.04^{+0.48}_{-0.35}$ | $10.63^{+0.09}_{-0.08}$ | $0.99^{+0.22}_{-0.18}$ | $38.31^{+0.32}_{-0.60}$ | $120^{+8}_{-11}$
10f | 0.1f | $3.10^{+1.03}_{-0.85}$ | $1.41^{+0.47}_{-0.56}$ | $10.57^{+0.10}_{-0.13}$ | $1.21^{+0.29}_{-0.22}$ | $38.66^{+0.37}_{-0.59}$ | $124^{+21}_{-14}$
Next, we calculate the jet power. The power in the relativistic electrons and
magnetic fields, Equation (23), becomes at $z_{0}$
$P_{B}+P_{\rm e}\approx 1.9\times 10^{36}{\rm erg\,s}^{-1}\frac{3+2\beta_{\rm
eq}}{6\beta_{\rm eq}^{0.65}}\frac{\Gamma^{0.65}}{\beta^{0.35}\delta^{3.39}},$
(45)
which increases very fast with $\Gamma$, approximately as $\propto\Gamma^{4}$
at $\beta\approx 1$. At $\beta_{\rm eq}=1$, $\Gamma=3$, this power is $\approx
2.2\times 10^{37}$ erg s-1. We find from Equations (24), (26), the power
associated with the bulk motion of cold matter as
$\displaystyle P_{\rm i}\approx 1.2\times 10^{38}{\rm
erg\,s}^{-1}(\Gamma-1)\times$ (46) $\displaystyle\left[\frac{\beta_{\rm
eq}^{0.35}}{(\beta\Gamma)^{0.35}\delta^{3.39}}-0.7\left(\frac{R_{\rm
hot}}{20R_{\rm g}}\right)^{-3}\\!\\!\left(\frac{R_{\rm jet}}{10R_{\rm
g}}\right)^{2}\right]\\!.$
The first term is approximately $\propto\Gamma^{3}(\Gamma-1)$.
To constrain $P_{\rm j}$ by the accretion power, we use the estimate of the
hard-state bolometric flux of $F_{\rm bol}\approx 1.4\times 10^{-7}$ erg
cm$^{-2}$ s$^{-1}$ (Shidatsu et al., 2019). This yields $L\approx 1.5\times
10^{38}(D/2.96\,{\rm kpc})^{2}$ erg s$^{-1}$ and
$\dot{M}c^{2}\approx 1.5\times 10^{39}\left(D\over 2.96\,{\rm
kpc}\right)^{2}\left(\epsilon_{\rm eff}\over 0.1\right)^{-1}\,{\rm
erg\,s}^{-1}.$ (47)
For the default parameter values, $P_{\rm j}\lesssim\dot{M}c^{2}$ implies
$\Gamma\lesssim 3.3$. If pair production is efficient enough, we also have a
lower limit on $\Gamma$ from the requirement of $P_{\rm i}>0$. The allowed
range depends significantly on the assumed parameters, in particular
$\gamma_{\rm min}$, $R_{\rm hot}$ and $R_{\rm jet}$. E.g., at $\gamma_{\rm
min}=10$, $R_{\rm hot}=20R_{\rm g}$ and $R_{\rm jet}=10R_{\rm g}$,
$\Gamma\gtrsim 2.4$ is required.
We can then compare the total jet power, $P_{\rm j}$, with the synchrotron
power. At the low $\gamma_{\rm max}$ implied by the $\nu_{\rm max}$ fitted to
the spectrum, we find $P_{\rm S}\ll P_{\rm j}$ always. For example, $P_{\rm
S}\approx 0.009P_{\rm j}$ at the maximum allowed $\Gamma\approx 3.3$, and
$P_{\rm S}\approx 0.02P_{\rm j}$ at $\Gamma=2$. On the other hand, we have
found $P_{\rm S}\sim 0.5(P_{B}+P_{\rm e})(z_{0})$, weakly depending on either
$\Gamma$ or $\gamma_{\rm min}$. Thus, the synchrotron emission can be entirely
accounted for by the power in electrons and magnetic fields at $z_{0}$, and
most of the decline of $P_{B}+P_{\rm e}$ with the distance can be due to the
synchrotron losses. However, we may see that the decline of $(P_{B}+P_{\rm e})$
with $\xi$ is slower than that of the synchrotron power. If the former were due
solely to the synchrotron emission, we would have ${\rm d}(P_{B}+P_{\rm
e})/{\rm d}\xi+{\rm d}P_{\rm S}/{\rm d}\xi=0$, while the former and the latter
terms are $\propto-\xi^{1-2b}$ and $\propto\xi^{2-4b}$, respectively. This
implies either some electron re-acceleration at $z>z_{0}$ at the expense of
$P_{\rm i}$, or more complexity of the actual physical situation, with the
initial energy loss in the flow being faster and followed by a slower one.
In the framework of models with the jet dissipation mechanism being the
differential collimation of poloidal magnetic surfaces, the obtained
$\Theta\Gamma\ll 1$ indicates that the jet magnetization at $z\gtrsim z_{0}$ is low.
Using Equation (34), we have (at $\beta\approx 1$)
$\sigma=(\Theta\Gamma/s)^{2}\approx\frac{8.4\times
10^{-5}\Gamma^{3.35}}{\beta_{\rm eq}^{0.22}s^{2}}.$ (48)
At $\beta_{\rm eq}=1$ and assuming $s=0.6$ (as found as the average value for
a large sample of radio-loud AGNs by Pjanka et al. 2017), we obtain
$\sigma\approx 0.0093(\Gamma/3)^{3.35}$. This can be compared to $\sigma$ from
its definition, Equation (35), which equals,
$\sigma\approx\beta_{\rm eq}^{-1}\left[2/3+130(1-2n_{+}/n_{\rm
e})\right]^{-1}.$ (49)
In the absence of pairs, $\sigma\approx 0.0078/\beta_{\rm eq}$. Comparing the
two estimates of $\sigma$, we see that their agreement requires $\Gamma\gtrsim 3$
at $\beta_{\rm eq}=1$. However, the actual value of $s$ is uncertain, there could be ions
associated with background electrons piled up at $\gamma<\gamma_{\rm min}$,
and, importantly, $\beta_{\rm eq}$ could be $\gg 1$. Still, the low
magnetization implied by Equation (48) disfavors the case of strong pair
dominance, $(1-2n_{+}/n_{\rm e})\ll 1$.
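This consistency argument can be made explicit (a sketch of ours; pair-free case, $\beta_{\rm eq}=1$, $s=0.6$):

```python
import numpy as np
from scipy.optimize import brentq

def sigma_collimation(Gamma, s=0.6, beta_eq=1.0):
    """Magnetization implied by Theta*Gamma, Equation (48)."""
    return 8.4e-5 * Gamma**3.35 / (beta_eq**0.22 * s**2)

def sigma_definition(pair_frac=0.0, beta_eq=1.0):
    """Magnetization from its definition, Equation (49); pair_frac = 2n_+/n_e."""
    return 1.0 / (beta_eq * (2.0 / 3.0 + 130.0 * (1.0 - pair_frac)))

Gamma_eq = brentq(lambda G: sigma_collimation(G) - sigma_definition(), 1.5, 6.0)
print(f"sigma from Eq. (49): {sigma_definition():.4f}; "
      f"estimates agree at Gamma = {Gamma_eq:.1f}")
# -> sigma ~ 0.0078, agreement at Gamma ~ 2.8, i.e., Gamma >~ 3
```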
Using $\sigma\ll 1$, we can calculate the magnetic fluxes in the model with
extraction of the BH rotational power. The jet magnetic flux from Equation
(33) with $z_{0}B_{0}$ from Equations (36) and (38) is then
$\Phi_{\rm j}\approx(4.1\times 10^{21}\,{\rm
G\,cm}^{2})\frac{s[1+(1-a_{*}^{2})^{\frac{1}{2}}](\beta\Gamma)^{1.22}\delta^{0.87}}{(\ell/0.5)a_{*}\beta_{\rm
eq}^{0.22}},$ (50)
which is $\approx 5.3\times 10^{21}\,{\rm G\,cm}^{2}$ for $a_{*}=1$,
$\Gamma=3$, $\ell=0.5$, $s=0.6$, $\beta_{\rm eq}=1$. The flux threading the
BH, Equation (31) with $\dot{M}$ estimated as above from $L$, is
$\Phi_{\rm BH}\approx(1.3\times 10^{22}\,{\rm G\,cm}^{2})\frac{\phi_{\rm
BH}}{50}\frac{D}{3\,{\rm kpc}}\left(\frac{\epsilon_{\rm
eff}}{0.1}\right)^{-1/2},$ (51)
where $M=8{\rm M}_{\sun}$ was assumed for both ($\Phi\propto M$). At
$\phi_{\rm BH}=50$ and the assumed parameters, the two fluxes are
approximately equal for $a_{*}\approx 0.7$. Since the two estimates are based
on completely different physical considerations, we consider their close
agreement to be very remarkable. Thus, our results are consistent with the
jet being powered by the BH rotation and the accretion flow being magnetically
arrested. In this case, the jet power is constrained by Equation (32). Then, a
low value of $a_{*}$ would constrain $\Gamma$ to values lower than those
implied by $P_{\rm j}\lesssim\dot{M}c^{2}$.
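A sketch of this flux comparison (ours; $\Gamma=3$, $\ell=0.5$, $s=0.6$, $\beta_{\rm eq}=1$, $i=64\arcdeg$):

```python
import numpy as np
from scipy.optimize import brentq

def Phi_jet(a_star, Gamma=3.0, i_deg=64.0, s=0.6, ell=0.5, beta_eq=1.0):
    """Jet magnetic flux, Equation (50), in G cm^2."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    delta = 1.0 / (Gamma * (1.0 - beta * np.cos(np.radians(i_deg))))
    return (4.1e21 * s * (1.0 + np.sqrt(1.0 - a_star**2))
            * (beta * Gamma)**1.22 * delta**0.87
            / ((ell / 0.5) * a_star * beta_eq**0.22))

Phi_BH = 1.3e22  # Equation (51) at phi_BH=50, D=3 kpc, eps_eff=0.1; G cm^2

print(f"Phi_jet(a*=1) = {Phi_jet(1.0):.2e} G cm^2")        # ~5.3e21
a_eq = brentq(lambda a: Phi_jet(a) - Phi_BH, 0.3, 1.0)
print(f"the two fluxes are equal at a* = {a_eq:.2f}")      # ~0.7
```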
### 3.2 Numerical estimates
In order to solve directly for the physical jet parameters and their
uncertainties, we use again the MCMC method. In the fits shown in Figure 4, we
fitted $b$, $p$, $\nu_{0}$, $\nu_{\rm max}$, $F_{0}$, $t_{0}$, $F_{\rm disk}$
and $\alpha_{\rm disk}$ with the minimum assumption of $a=2b$, and, in
particular, without the need to specify the value of $\Gamma$. Now we fit for
all of the parameters. However, since the solution of Appendix A is
expressed in terms of $\gamma_{\rm max}$ rather than $\nu_{\rm max}$, we fit for
the former (which yields $\nu_{\rm max}$ given the values of $\Gamma$, $i$ and
$B_{0}$, see Equation 5).
In particular, we determine $\Theta$ from Equation (A5), $z_{0}$ from Equation
(21) and $B_{0}$ from Equation (A6). That requires specifying $\Gamma$ (which
is then a free parameter) and $\gamma_{\rm min}$. We fix $\beta_{\rm eq}=1$
and $k_{\rm i}=0$. However, in order to be able to constrain $\Gamma$ rather
than have it entirely free, we include further constraints, using the pair
production rate of Equation (43) and requiring $2\dot{N}_{+}/\dot{N}_{\rm
e}\leq 1$ in Equation (26) and from the maximum possible jet power, $P_{\rm
j}\leq\dot{M}c^{2}$, Equations (23–27). These constraints require specifying
$R_{\rm hot}$ and $R_{\rm jet}$, the bolometric luminosity, $L$, and the
accretion efficiency, $\epsilon_{\rm eff}$. We then solve simultaneously for
all of the parameters, including $b$, $p$, $\nu_{0}$, $F_{0}$, $t_{0}$,
$F_{\rm disk}$ and $\alpha_{\rm disk}$. Those parameters have now values
similar to those shown in Figure 4, and we thus do not show them again.
In the solution, we sample $D$ and $i$ as described at the beginning of
Section 3. We assume $L=1.5\times 10^{38}$ erg s-1, $X=0.7$, $R_{\rm
hot}=20R_{\rm g}$, $R_{\rm jet}=10R_{\rm g}$ (for $M=8{\rm M}_{\sun}$). We
show the resulting posterior distributions for two cases with ($\gamma_{\rm
min}=3$, $\epsilon_{\rm eff}=0.3$), and with ($\gamma_{\rm min}=10$,
$\epsilon_{\rm eff}=0.1$), in Figures 8(a), (b), respectively, and list the
fitted parameters in Table 2. We see that the obtained ranges of $\Gamma$ and
$\Theta$ depend on those two sets of assumptions, being larger for the
latter case. The allowed maximum jet power is $\propto\epsilon_{\rm
eff}^{-1}$, and then it is higher in case (b). On the other hand, the obtained
values of $z_{0}\approx 2$–$4\times 10^{10}$ cm and $B_{0}\approx 10^{4}$ G
depend relatively weakly on those assumptions. For the sake of brevity, we
have not shown the effect of changing the values of $R_{\rm hot}$ and $R_{\rm
jet}$. For example, for $R_{\rm hot}>20R_{\rm g}$, pair production will be
less efficient, which would in turn allow fewer leptons in the flow and lower
values of $\Gamma$, see Equations (43–44). Thus, we cannot conclusively rule
out values of $\Gamma\lesssim 1.5$. Then, values of $\Gamma$ higher than those
obtained above would be possible for $\epsilon_{\rm eff}<0.1$.
Figures 8(a–b) also show $\gamma_{\rm max}$ and the pair abundance,
$2n_{+}/n_{\rm e}$. The former is relatively tightly constrained to the
$\approx 110$–150 range. The latter is strongly anticorrelated with the jet
power, being low at the maximum $P_{\rm j}$ and close to unity at the minimum
$P_{\rm j}$, in agreement with our considerations in Section 3.1. We find the
synchrotron power, Equation (28), is typically $P_{\rm S}\sim 0.01P_{\rm j}$,
as in Section 3.1, and thus the jet radiative efficiency, $P_{\rm S}/P_{\rm
j}$, is low. In our fits, we have not used constraints from the break
frequencies in the power spectra and from the jet spatial extent measurement,
following our discussion in Section 3.1.
## 4 Discussion
### 4.1 Electron energy losses and re-acceleration
In our model, we parametrize the electron distribution as a power-law function
of the distance, and assume that the distribution keeps a constant shape. Such
a situation requires that the electron energy losses be moderate and satisfy
$\dot{\gamma}\propto\gamma$. We compare here the time scale for synchrotron
energy losses,
$t_{\rm syn}=\frac{6\pi m_{\rm e}c\xi^{2b}}{\sigma_{\rm T}B_{0}^{2}\gamma},$
(52)
with the adiabatic/advection time scale,
$t_{\rm ad}=\frac{3z_{0}\xi}{2\beta\Gamma c}$ (53)
(e.g., Z19). We consider the solution in Section 3.1 for $\Gamma=3$. At
$\gamma\approx 30$, which corresponds to the bulk of the partially self-
absorbed emission, $t_{\rm syn}$ is shorter than $t_{\rm ad}$ for $\xi\lesssim
3$, and it is $\approx$3 times shorter at $z_{0}$. This implies that electrons
responsible for the optically-thin part of the synchrotron emission have to be
re-accelerated above $z_{0}$.
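These two timescales are compared in the following sketch (ours; $\Gamma=3$, with the corresponding $B_{0}$, $z_{0}$ and $b$ from Section 3.1, at $\gamma=30$):

```python
import numpy as np

m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25  # CGS

Gamma, b, gamma_e = 3.0, 1.10, 30.0
beta = np.sqrt(1.0 - 1.0 / Gamma**2)
B0, z0 = 1.1e4, 3.8e10  # G, cm (Section 3.1 values for Gamma=3)

for xi in (1.0, 3.0, 10.0):
    t_syn = 6.0 * np.pi * m_e * c * xi**(2.0 * b) / (sigma_T * B0**2 * gamma_e)
    t_ad = 3.0 * z0 * xi / (2.0 * beta * Gamma * c)   # Equation (53)
    print(f"xi={xi:4.1f}: t_syn={t_syn:5.2f} s, t_ad={t_ad:5.2f} s, "
          f"t_ad/t_syn={t_ad / t_syn:.2f}")
# -> t_syn is ~3 times shorter than t_ad at xi=1, comparable at xi ~ 3
```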
Calculating the electron distribution self-consistently as a function of the
distance as well as accounting for the slope of the spectrum at $\nu<\nu_{0}$
is relatively complex, involving solving a kinetic equation with both losses
and spatial advection (e.g., Z19). This also requires taking into account
losses from Compton scattering of synchrotron photons as well as the reduction
of the electron energy loss rate due to self-absorption (Ghisellini et al.,
1988; Katarzyński et al., 2006). Such a model is beyond the scope of the
present work.
### 4.2 Comparison with other jet models of accreting black holes
The very long time lags found in T21 unambiguously show that the radio/sub-mm
emission originates at size scales several orders of magnitude higher than
$R_{\rm g}$. The time lags between $\nu_{1}$ and $\nu_{2}$ are found to be
approximately proportional to $\nu_{2}^{-1}-\nu_{1}^{-1}$. Knowing the break
frequency, $\nu_{0}$, above which the entire synchrotron emission is optically
thin, we can extrapolate this correlation and find the location corresponding
to $\nu_{0}$. This is found to be $z_{0}\sim 3\times 10^{4}R_{\rm g}$, with
the uncertainty of a factor of at most a few. This rules out jet models
predicting the onset of the synchrotron emission to be in the immediate
vicinity of the BH, for example that described in Giannios (2005) (based on the
model of Reig et al. 2003).
The main independent study of the hard-state jet of MAXI J1820+070 is that by
Rodi et al. (2021). They had at their disposal only the spectral data. They
assumed $R/z=0.1$, corresponding to $\Theta=5.7\arcdeg$, which is much larger
than that found by us. They assumed $\Gamma=2.2$ following the result of
Bright et al. (2020) for the ejection during the hard-to-soft transition, but
that of the hard-state jet can be different. The jet model of Rodi et al.
(2021) is also different from ours, and considers an initial acceleration
event followed by synchrotron cooling assuming no adiabatic losses. Still,
they obtain relatively similar values of the distance of onset of electron
acceleration, $z_{0}\approx 2.8\times 10^{10}$ cm, and the magnetic field
strength at that distance, $B_{0}\approx 1.8\times 10^{4}$ G.
### 4.3 Other constraints and caveats
Our model is based on that of Blandford & Königl (1979) and Königl (1981),
which assumes uniform scaling of the emission regions, through the
coefficients $a$ and $b$. As we see in Figure 5, this model does not account
for the observed flux at 339 MHz, measured by Polisensky et al. (2018). This
hints at the decline of the energy content in the relativistic electrons and
magnetic field being initially faster (responsible for the emission closer to
$z_{0}$) and then slower (responsible for the emission farther away from
$z_{0}$). This would introduce more complexity in the modelling, and is beyond
the scope of this work. On the other hand, the flux at 339 MHz could be due to
another component, in particular a pair of radio lobes at the jet ends. An
assumption of our model is that the bulk of the emission at a given distance
in the partially self-absorbed part of the spectrum occurs at a $\nu$
corresponding to $\tau\approx 1$. As we have found out, this corresponds to
the synchrotron emission by electrons with $\gamma\sim 30$. If the minimum
Lorentz factor of the electron distribution were higher, $\gamma_{\rm
min}>30$, then the emission at a given distance in that part of the spectrum
would be dominated by the electrons at $\gamma_{\rm min}$ instead, with no
contribution from self-absorption.
We assumed the jet is already fully accelerated at $z_{0}$ and then does not
decelerate. This may not be the case, and the available data do not exclude
such behavior. The jet model of Z19 allows for a variable $\Gamma$, and we could use
some parametrization of $\Gamma(z)$ and refit our data (as done in Zdziarski
2019 for another source). This would, however, introduce more free parameters,
and make the resulting fits less constrained than in the present case. We have
also considered the steady state, while variability has been observed.
However, the fractional variability was $\sim 0.3$ at the sub-mm range and
much less than that in the radio regime. Thus, the variability can be
considered as a small perturbation of the steady state.
We also use a $\delta$-function approximation to the synchrotron process,
which is a good approximation for power-law parts of the spectra, but becomes
less accurate at cutoffs, given the single-electron synchrotron spectrum is
quite broad (Ginzburg & Syrovatskii, 1965). We also assume the synchrotron
emission of a single electron is isotropic in the plasma frame, which is valid
for a tangled magnetic field, while we assume a toroidal field in some of our
equations. Furthermore, we assume a sharp cutoff in the electron distribution
at $\gamma_{\rm max}$. While this is not realistic, the actual form of the
cutoff depends on details of the acceleration process and is poorly
constrained. Thus, our determination of $\gamma_{\rm max}$ based on the
observed cutoff in the optical range is only approximate.
Then, we have used our self-consistent set of equations, in which the slope of
the partially self-absorbed part of the synchrotron spectrum is connected to
the rate of decline of the energy density along the jet. The latter determines
the relationship between the characteristic emitted frequency and the distance
(Equation 15), and thus the time-lag vs. frequency relation. A significant
discrepancy between the spectral slope and time lags vs. frequency was found
in Cyg X-1 (Tetarenko et al., 2019). In our case, the two are in an
approximate mutual agreement.
We have found that the break frequencies in the power spectra, $f_{\rm
b}(\nu)$, are compatible with the origin of the emission at $z_{\nu}$, with
$z_{\nu}$ roughly equal to $\beta c/f_{\rm b}(\nu)$ for $\nu<\nu_{0}$. However, an
increasing $f_{\rm b}$ with increasing $\nu$ is also observed for the IR and
optical data (see fig. 5 of T21), for which $\nu>\nu_{0}$. In our jet model,
the emission at $\nu>\nu_{0}$ is the optically-thin synchrotron from the
entire part of the jet at $z>z_{0}$, which implies $z_{\nu>\nu_{0}}=z_{0}$.
Thus, we expect that the above scaling of $f_{\rm b}\propto z_{\nu}^{-1}$ no
longer holds at $\nu>\nu_{0}$. Then, the IR/optical variability at high
Fourier frequencies may be mostly due to electron energy losses and the re-
acceleration (see Section 4.1) rather than due to propagation of some
disturbances from $z<z_{0}$.
As shown in fig. 8 of T21, the optical and IR light curves are tightly
correlated, with no measurable lag ($-18^{+30}_{-50}$ ms), in spite of a
relatively large disk contribution in the optical range (at $3.7\times 10^{5}$
GHz), as shown in Figure 5. This shows that the disk contribution is constant
on the studied time scales, which is consistent with the rms variability in
the optical range being reduced with respect to the IR one, see fig. 5 in T21. As
shown in T21, the upper limit on the lag is consistent with the synchrotron
energy losses at the magnetic field strength of $\sim 10^{4}$ G, which agrees
with our determination of $B_{0}$.
### 4.4 Relationship to core shifts
Time lags are closely related to core shifts, $\Delta\theta$, which are
angular displacements of the radio cores, observed at frequencies where the
synchrotron emission is partially self-absorbed. They are commonly found in
radio-loud active galactic nuclei (e.g., Pushkarev et al. 2012). The physical
cause of the displacement along the jet, $z_{\nu_{2}}-z_{\nu_{1}}$,
is the same for both the core shifts and time lags; only the methods to
determine it are different. Using equation (4) in Z15 and Equation (20), the
relationship of $\Delta\theta$ to $\Delta t_{\rm a}$ is
$\Delta\theta=\frac{\Delta t_{\rm a}\beta c(1+z_{\rm r})\sin i}{D(1-\beta\cos
i)}.$ (54)
We can then relate $\Delta t_{\rm a}$ to $z_{\nu}$, $z_{0}$ and $\nu_{0}$
using Equations (20–21) and (A2).
We can estimate $B_{0}$ using the core-shift method, but only assuming $a=2$,
$b=1$, the values assumed in published core-shift studies,
including Z15. Using equation (7) of Z15 and Equation (54), we obtain
$B_{0}\approx 1.0\times 10^{4}$ G at $p=2$, $\delta=1$ and $\beta_{\rm eq}=1$,
in good agreement with our estimate of Equation (38). We can also obtain
$B_{0}$ from equation (8) in Z15 without specifying $\beta_{\rm eq}$.
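As an illustration of Equation (54), a sketch (ours) converting a time lag into the corresponding core shift, with $z_{\rm r}=0$ for a Galactic source:

```python
import numpy as np

def core_shift_mas(dt_s, Gamma, i_deg=64.0, D_kpc=2.96, z_r=0.0):
    """Core shift in mas from a time lag dt_s in s, Equation (54)."""
    c, kpc = 2.998e10, 3.086e21
    rad_to_mas = np.degrees(1.0) * 3.6e6
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    i = np.radians(i_deg)
    dtheta = (dt_s * beta * c * (1.0 + z_r) * np.sin(i)
              / (D_kpc * kpc * (1.0 - beta * np.cos(i))))
    return dtheta * rad_to_mas

# e.g., a 1-s lag for Gamma=3 corresponds to a ~1e-3 mas core shift:
print(f"{core_shift_mas(1.0, 3.0):.1e} mas")
```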
## 5 Conclusions
We have based our study on the results of a multiwavelength campaign observing
MAXI J1820+070 when it was close to the peak of its luminous hard spectral
state, at $\sim$15% of the Eddington luminosity. We have used mostly the data
published in T21 as well as the IR/optical spectrum from Rodi et al. (2021).
Our main conclusions are as follows.
A major model-independent result of our study is the estimation of the
distances at which the jet emits below the observed break frequency, based on
the time lags between various frequencies measured by T21. These distances are
definitely several orders of magnitude higher than $R_{\rm g}$. By
extrapolating the observed approximate correlation of the time lags with the
differences between the photon wavelengths, the place where that emission
begins can be estimated to be at the distance of several tens of thousands of
$R_{\rm g}$ from the BH. That distance also agrees with the
corresponding finding of Rodi et al. (2021), based on spectral modelling
alone.
We then use the classical model of Blandford & Königl (1979), as formulated in
detail in later works, to determine the parameters of the jet emitting in the
radio-to-optical range. The model assumes the hard inverted spectrum is due to
the superposition of locally-emitted spectra that are self-absorbed up to some
frequency and then are optically thin. Apart from some details, this is the
same model as that used by T21. The values of the jet parameters obtained by
us update those of T21, which suffered from some errors (see Section 1 and
Appendix A). Our analysis is also broader than that of T21, utilizing
constraints from the break frequency and the optically thin part of the
spectrum.
By applying the model to the data, we find we cannot uniquely determine the
jet bulk Lorentz factor, $\Gamma$, which then needs to be specified as a free
parameter. However, it can be constrained from above by the requirement of the
jet power being less than the accretion power. It can also be constrained from
below by estimating the e± pair production rate in the base of the jet and
comparing it to the flux of e± required to account for the observed
synchrotron emission. We then use a Bayesian MCMC method to determine all of
the jet parameters, and find the most likely range of
$1.7\lesssim\Gamma\lesssim 4.1$. We find the jet half-opening angle, $\Theta$
constrained to $\approx$0.6–$2\arcdeg$. The onset of the emission is at
$z_{0}\approx 3$–$4\times 10^{10}$ cm, where the magnetic field strength is
$B_{0}\approx 0.8$–$1.4\times 10^{4}$ G. The total jet power is between
$P_{\rm j}\sim 10^{37}$ and $\sim 10^{39}$ erg s$^{-1}$. The jet composition is
strongly correlated with $P_{\rm j}$, being mostly pairs at the lower limit
and mostly e$^{-}$ and ions at the upper limit. The optical spectral data imply a
rather low value of the maximum electron Lorentz factor, of $\gamma_{\rm
max}\approx 110$–150.
In order to explain the possible presence of e± pairs in the jet, we calculate
the rate of pair production in the jet base in the immediate vicinity of the hot
accretion flow. We use the measurement of a power-law spectral component
extending at least to 2 MeV in the same state of MAXI J1820+070. This rate
depends on the geometry, see Figure 3, but we find it to be of the same order
as the rate of the electron flow through the synchrotron-emitting part of the
jet, both being $\sim 10^{40}$–$10^{41}$ s$^{-1}$. We find this coincidence to be a
strong argument for the presence of pairs in the hard-state jet of MAXI
J1820+070.
We also consider the possibility of the jet power being limited by the power
from the rotation of the BH in the presence of magnetically arrested accretion
flow. To test it, we calculate the magnetic flux of the jet in the emitting
region and that threading the BH. We find them to be very similar to each
other, of $\sim 10^{21}$ G cm$^{2}$, a remarkable coincidence that argues for that
scenario. Then, the jet is initially magnetic, Poynting-flux dominated, slow,
and not emitting, and accelerates and dissipates its energy only at large
distances, in agreement with our finding of the emission being far from the
BH.
We find that the synchrotron power is only a small fraction, $\sim 10^{-2}$, of
the total jet power. On the other hand, the synchrotron power is very similar
to either the electron or the magnetic power at the onset of the dissipation
($z_{0}$), showing that the decline of those powers with distance required to
explain the observations can be due to the synchrotron emission.
Finally, we show the correspondence between the methods to determine the jet
parameters based on time lags and radio core shifts. We give a formula
relating the lags and the angular displacements of the radio cores.
## Acknowledgments
We thank B. De Marco, M. Böttcher and P.-O. Petrucci for valuable comments,
and A. Tchekhovskoy for permission to use his plot of GRMHD simulations. We
acknowledge support from the Polish National Science Centre under the grants
2015/18/A/ST9/00746 and 2019/35/B/ST9/03944, and from the International Space
Science Institute (Bern). Support for this work was provided by NASA through
the NASA Hubble Fellowship grant #HST-HF2-51494.001 awarded by the Space
Telescope Science Institute, which is operated by the Association of
Universities for Research in Astronomy, Inc., for NASA, under contract
NAS5-26555. This paper makes use of the following ALMA data:
ADS/JAO.ALMA#2017.1.01103.T. ALMA is a partnership of ESO (representing its
member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST
and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the
Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and
NAOJ. The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc.
## Appendix A The general solution
We provide here the general solution to the equations describing the jet
structure, given in Section 2.1. We first give the solution parametrized by
the equipartition parameter, $\beta_{\rm eq}$, but without utilizing the time-
lag constraint. This solution is analogous to that given by equations (28–29)
in Zdziarski et al. (2012), which is valid for $a=2$, $b=1$ only. Here, we
assume that equipartition holds along the entire emitting jet, i.e., $a=2b$;
otherwise, it would be artificial to impose it only at $z_{0}$. We note that
the solutions below are functions of $\gamma_{\rm max}$ (through $f_{E}$ and
$f_{N}$), while observationally we determine $\nu_{\rm max}$. The relation
between the two involves $B_{0}$, $\Gamma$ and $i$, see Equations (1) and (5).
As a consequence, explicit solutions in terms of $\nu_{\rm max}$ would be
rather complicated, and we do not provide them.
We determine $B_{0}$ by setting $n_{0}$ from Equation (14) equal to that
following from Equation (17), and then use $z_{\nu}=z_{0}(\nu_{0}/\nu)^{q}$
from Equation (15), finding
$\displaystyle B(z_{\nu})=\frac{m_{\rm
e}c}{e}\left(\frac{\pi}{\delta}\right)^{\frac{2+p}{6+p}}\left[\frac{3c(1+k_{\rm
i})(f_{E}-f_{N})\sin i}{\beta_{\rm
eq}C_{2}(p)z_{\nu}\tan\Theta}\right]^{\frac{2}{6+p}}$
$\displaystyle\times\left[2\nu(1+z_{\rm r})\right]^{\frac{4+p}{6+p}}$ (A1)
for $z_{\nu}\geq z_{0}$, $\nu\leq\nu_{0}$. Then, $B_{0}=B(z_{0})$. We then
substitute $B(z_{\nu})$ of Equation (A1) in the formula for $F_{\nu}$ in the
optically-thick case, Equation (12), which yields for $z_{\nu}\geq z_{0}$
$\displaystyle
z_{\nu}=\frac{1}{\delta^{\frac{4+p}{13+2p}}\nu}\left[\frac{c(1+k_{\rm
i})(f_{E}-f_{N})}{2\pi\beta_{\rm eq}}\right]^{\frac{1}{13+2p}}\times$
$\displaystyle\left[\frac{C_{2}(p)}{\sin
i}\right]^{\frac{5+p}{13+2p}}\\!\\!\left[\frac{F_{\nu}D^{2}}{m_{\rm
e}C_{1}(p)g_{bp}}\right]^{\frac{6+p}{13+2p}}\times$ (A2)
$\displaystyle\left(\frac{3}{\pi\tan\Theta}\right)^{\frac{7+p}{13+2p}}(1+z_{\rm
r})^{-\frac{19+3p}{13+2p}},$
where $g_{bp}$ follows from Equation (12) for $a=2b$,
$g_{bp}\equiv\Gamma_{\rm E}\left[\frac{b(p+5)-6}{b(p+6)-2}\right]/(4+b).$ (A3)
Then, $z_{0}=z_{\nu_{0}}$, at which $F_{\nu_{0}}=2F_{0}g_{bp}$, see Equation
(12).
[Footnote 3: Equation (A2) also provides the correct form of equation (5) in
Heinz (2006) for $p=2$, $b=1$. His equation should be multiplied by a
$\delta^{1/2}$ factor, which is due to that factor missing in his equation (1),
which should have accounted for the frame transformation from $\nu^{\prime}$ to
$\nu$. That incorrect model formulation was used in T21.]
We can then substitute the above $z_{\nu}\,(\geq z_{0})$ into Equation (A1),
$\displaystyle B(\nu)=\nu\left[\frac{3C_{1}(p)g_{bp}(1+k_{\rm
i})^{2}(f_{E}-f_{N})^{2}\sin^{3}i}{C_{2}(p)^{3}\beta_{\rm
eq}^{2}D^{2}F_{\nu}\tan\Theta}\right]^{\frac{2}{13+2p}}$
$\displaystyle\times\frac{\pi^{\frac{7+2p}{13+2p}}2^{\frac{9+2p}{13+2p}}c^{\frac{17+2p}{13+2p}}m_{\rm
e}^{\frac{15+2p}{13+2p}}(1+z_{\rm
r})^{\frac{15+2p}{13+2p}}}{e\delta^{\frac{3+2p}{13+2p}}},$ (A4)
and $B_{0}=B(\nu_{0})$. We see that the above solutions are obtained without
specifying either $\nu_{0}$ or $z_{0}$. Also, the spatial index $b$ enters
only in the factor $g_{bp}$, and does not modify the functional dependencies.
Equations (A2) and (A4) are equivalent to equations (28–29) in Zdziarski et
al. (2012), which are for $a=2$, $b=1$, and differ only in the definition of
$\beta_{\rm eq}$ and by factors of the order of unity due to a slightly
different way of integrating the emission along the jet.
Next, we can use the independent determination of $z_{\nu}$ from the time
lags, $\Delta t_{\rm a}$. A single measured lag between the frequencies
$\nu_{2}$ and $\nu_{1}$ determines, via Equation (20),
$z_{\nu_{2}}-z_{\nu_{1}}$. This can be compared to the prediction using
$z_{\nu}$ of Equation (A2), which yields a constraint between $\Theta$ and
$\Gamma$. However, a single measurement of $\Delta t_{\rm a}$ has typically a
large error. We can combine them by fitting the relationship of $\Delta
t_{\rm a}$ vs. $z_{\nu_{2}}-z_{\nu_{1}}$. This can be done even when the break
frequency, $\nu_{0}$, is unknown. However, here it is known, and we find it
convenient to define $t_{0}$ by $\Delta t_{\rm
a}=t_{0}(z_{\nu_{2}}-z_{\nu_{1}})/z_{0}$, fitted to a number of measured lags.
This then implies $z_{0}=ct_{0}\beta\Gamma\delta$. We can set it equal to that
implied by Equation (A2), and solve for $\tan\Theta$ as a function of
$\Gamma$,
$\displaystyle\tan\Theta=\frac{3\left(\beta\Gamma\nu_{0}t_{0}\right)^{-\frac{13+2p}{7+p}}}{\pi^{\frac{8+p}{7+p}}\delta^{\frac{17+3p}{7+p}}}\\!\left[\frac{(1+k_{\rm
i})(f_{E}-f_{N})}{\beta_{\rm eq}}\right]^{\frac{1}{7+p}}\times$
$\displaystyle\left[\frac{2C_{2}(p)}{\sin
i}\right]^{\frac{5+p}{7+p}}\left[\frac{F_{0}D^{2}}{m_{\rm
e}c^{2}C_{1}(p)}\right]^{\frac{6+p}{7+p}}(1+z_{\rm
r})^{-\frac{19+3p}{7+p}}\\!.$ (A5)
Note a relatively strong dependence of $\Theta$ on $t_{0}$: from Equation (A5),
$\Theta\propto t_{0}^{-(13+2p)/(7+p)}\approx t_{0}^{-2}$. We can then insert this
$\tan\Theta$ into Equation (A4) to obtain
$\displaystyle B_{0}=\frac{2^{\frac{3+p}{7+p}}\pi^{\frac{5+p}{7+p}}(m_{\rm
e}\nu_{0})^{\frac{9+p}{7+p}}c^{\frac{11+p}{7+p}}}{e\delta^{\frac{p-1}{7+p}}}\times$
(A6) $\displaystyle\left[\frac{\beta\Gamma t_{0}C_{1}(p)(1+k_{\rm
i})(f_{E}-f_{N})\sin^{2}i}{F_{0}\beta_{\rm
eq}C_{2}(p)^{2}D^{2}}\right]^{\frac{2}{7+p}}\\!\\!(1+z_{\rm
r})^{\frac{11+p}{7+p}}\\!.$
We determine $n_{0}$ using the above $B_{0}$ in Equation (17),
$\displaystyle n_{0}=\frac{\nu_{0}^{\frac{18+2p}{7+p}}m_{\rm
e}^{\frac{11+p}{7+p}}}{2^{\frac{15+p}{7+p}}\delta^{\frac{2p-2}{7+p}}e^{2}}\left[\frac{c^{2}\beta\Gamma
t_{0}C_{1}(p)\sin^{2}i}{F_{0}C_{2}(p)^{2}D^{2}}\right]^{\frac{4}{7+p}}$
$\displaystyle\times\left[\frac{\pi\beta_{\rm eq}}{(1+k_{\rm
i})(f_{E}-f_{N})}\right]^{\frac{3+p}{7+p}}(1+z_{\rm
r})^{\frac{22+2p}{7+p}}\\!.$ (A7)
## References
* Atri et al. (2020) Atri, P., Miller-Jones, J. C. A., Bahramian, A., et al. 2020, MNRAS, 493, L81, doi: 10.1093/mnrasl/slaa010
* Bambi et al. (2021) Bambi, C., Brenneman, L. W., Dauser, T., et al. 2021, Space Sci. Rev., 217, 65, doi: 10.1007/s11214-021-00841-8
* Blandford et al. (2019) Blandford, R., Meier, D., & Readhead, A. 2019, ARA&A, 57, 467, doi: 10.1146/annurev-astro-081817-051948
* Blandford & Königl (1979) Blandford, R. D., & Königl, A. 1979, ApJ, 232, 34, doi: 10.1086/157262
* Blandford & Znajek (1977) Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433, doi: 10.1093/mnras/179.3.433
* Bright et al. (2020) Bright, J. S., Fender, R. P., Motta, S. E., et al. 2020, Nature Astronomy, 4, 697, doi: 10.1038/s41550-020-1023-5
* Buisson et al. (2019) Buisson, D. J. K., Fabian, A. C., Barret, D., et al. 2019, MNRAS, 490, 1350, doi: 10.1093/mnras/stz2681
* Casella et al. (2010) Casella, P., Maccarone, T. J., O’Brien, K., et al. 2010, MNRAS, 404, L21, doi: 10.1111/j.1745-3933.2010.00826.x
* Davis & Tchekhovskoy (2020) Davis, S. W., & Tchekhovskoy, A. 2020, ARA&A, 58, 407, doi: 10.1146/annurev-astro-081817-051905
* Ferreira et al. (2006) Ferreira, J., Petrucci, P. O., Henri, G., Saugé, L., & Pelletier, G. 2006, A&A, 447, 813, doi: 10.1051/0004-6361:20052689
* Ghisellini et al. (1988) Ghisellini, G., Guilbert, P. W., & Svensson, R. 1988, ApJ, 334, L5, doi: 10.1086/185300
* Ghisellini & Tavecchio (2015) Ghisellini, G., & Tavecchio, F. 2015, MNRAS, 448, 1060, doi: 10.1093/mnras/stv055
* Giannios (2005) Giannios, D. 2005, A&A, 437, 1007, doi: 10.1051/0004-6361:20041491
* Ginzburg & Syrovatskii (1965) Ginzburg, V. L., & Syrovatskii, S. I. 1965, ARA&A, 3, 297, doi: 10.1146/annurev.aa.03.090165.001501
* Heinz (2006) Heinz, S. 2006, ApJ, 636, 316, doi: 10.1086/497954
* Henri & Pelletier (1991) Henri, G., & Pelletier, G. 1991, ApJ, 383, L7, doi: 10.1086/186228
* Jamil et al. (2010) Jamil, O., Fender, R. P., & Kaiser, C. R. 2010, MNRAS, 401, 394, doi: 10.1111/j.1365-2966.2009.15652.x
* Jones et al. (1974) Jones, T. W., O’Dell, S. L., & Stein, W. A. 1974, ApJ, 188, 353, doi: 10.1086/152724
* Jorstad et al. (2001) Jorstad, S. G., Marscher, A. P., Mattox, J. R., et al. 2001, ApJS, 134, 181, doi: 10.1086/320858
* Katarzyński et al. (2006) Katarzyński, K., Ghisellini, G., Svensson, R., & Gracia, J. 2006, A&A, 451, 739, doi: 10.1051/0004-6361:20054346
* Kawamuro et al. (2018) Kawamuro, T., Negoro, H., Yoneyama, T., et al. 2018, Astron. Telegram, 11399, 1
* Kellermann et al. (2004) Kellermann, K. I., Lister, M. L., Homan, D. C., et al. 2004, ApJ, 609, 539, doi: 10.1086/421289
* Komissarov et al. (2009) Komissarov, S. S., Vlahakis, N., Königl, A., & Barkov, M. V. 2009, MNRAS, 394, 1182, doi: 10.1111/j.1365-2966.2009.14410.x
* Königl (1981) Königl, A. 1981, ApJ, 243, 700, doi: 10.1086/158638
* Levinson (2006) Levinson, A. 2006, International Journal of Modern Physics A, 21, 6015, doi: 10.1142/S0217751X06035063
* Lister et al. (2019) Lister, M. L., Homan, D. C., Hovatta, T., et al. 2019, ApJ, 874, 43, doi: 10.3847/1538-4357/ab08ee
* Lobanov (1998) Lobanov, A. P. 1998, A&A, 330, 79. https://arxiv.org/abs/astro-ph/9712132
* Lyubarsky (2010) Lyubarsky, Y. E. 2010, MNRAS, 402, 353, doi: 10.1111/j.1365-2966.2009.15877.x
* Malzac (2013) Malzac, J. 2013, MNRAS, 429, L20, doi: 10.1093/mnrasl/sls017
* Malzac (2014) —. 2014, MNRAS, 443, 299, doi: 10.1093/mnras/stu1144
* McKinney et al. (2012) McKinney, J. C., Tchekhovskoy, A., & Blandford, R. D. 2012, MNRAS, 423, 3083, doi: 10.1111/j.1365-2966.2012.21074.x
* Miller-Jones et al. (2006) Miller-Jones, J. C. A., Fender, R. P., & Nakar, E. 2006, MNRAS, 367, 1432, doi: 10.1111/j.1365-2966.2006.10092.x
* Narayan et al. (2003) Narayan, R., Igumenshchev, I. V., & Abramowicz, M. A. 2003, PASJ, 55, L69, doi: 10.1093/pasj/55.6.L69
* Perlman et al. (2019) Perlman, E., Meyer, E., Eilek, J., et al. 2019, BAAS, 51, 59. https://arxiv.org/abs/1903.03657
* Pjanka et al. (2017) Pjanka, P., Zdziarski, A. A., & Sikora, M. 2017, MNRAS, 465, 3506, doi: 10.1093/mnras/stw2960
* Polisensky et al. (2018) Polisensky, E., Giacintucci, S., Peters, W. M., Clarke, T. E., & Kassim, N. E. 2018, The Astronomer’s Telegram, 11540, 1
* Pushkarev et al. (2012) Pushkarev, A. B., Hovatta, T., Kovalev, Y. Y., et al. 2012, A&A, 545, A113, doi: 10.1051/0004-6361/201219173
* Reig et al. (2003) Reig, P., Kylafis, N. D., & Giannios, D. 2003, A&A, 403, L15, doi: 10.1051/0004-6361:20030449
* Rodi et al. (2021) Rodi, J., Tramacere, A., Onori, F., et al. 2021, ApJ, 910, 21, doi: 10.3847/1538-4357/abdfd0
* Shabala & Godfrey (2013) Shabala, S. S., & Godfrey, L. E. H. 2013, ApJ, 769, 129, doi: 10.1088/0004-637X/769/2/129
* Shidatsu et al. (2019) Shidatsu, M., Nakahira, S., Murata, K. L., et al. 2019, ApJ, 874, 183, doi: 10.3847/1538-4357/ab09ff
* Sikora et al. (1997) Sikora, M., Madejski, G., Moderski, R., & Poutanen, J. 1997, ApJ, 484, 108, doi: 10.1086/304305
* Sikora et al. (2020) Sikora, M., Nalewajko, K., & Madejski, G. M. 2020, MNRAS, 499, 3749, doi: 10.1093/mnras/staa3128
* Stirling et al. (2001) Stirling, A. M., Spencer, R. E., de la Force, C. J., et al. 2001, MNRAS, 327, 1273, doi: 10.1046/j.1365-8711.2001.04821.x
* Svensson (1987) Svensson, R. 1987, MNRAS, 227, 403
* Tchekhovskoy (2015) Tchekhovskoy, A. 2015, Launching of Active Galactic Nuclei Jets, ASSL, Vol. 414, 45, doi: 10.1007/978-3-319-10356-3_3
* Tchekhovskoy et al. (2009) Tchekhovskoy, A., McKinney, J. C., & Narayan, R. 2009, ApJ, 699, 1789, doi: 10.1088/0004-637X/699/2/1789
* Tchekhovskoy et al. (2011) Tchekhovskoy, A., Narayan, R., & McKinney, J. C. 2011, MNRAS, 418, L79, doi: 10.1111/j.1745-3933.2011.01147.x
* Tetarenko et al. (2019) Tetarenko, A. J., Casella, P., Miller-Jones, J. C. A., et al. 2019, MNRAS, 484, 2987, doi: 10.1093/mnras/stz165
* Tetarenko et al. (2021) —. 2021, MNRAS, 504, 3862, doi: 10.1093/mnras/stab820
* Torres et al. (2020) Torres, M. A. P., Casares, J., Jiménez-Ibarra, F., et al. 2020, ApJ, 893, L37, doi: 10.3847/2041-8213/ab863a
* Tucker et al. (2018) Tucker, M. A., Shappee, B. J., Holoien, T. W. S., et al. 2018, ApJ, 867, L9, doi: 10.3847/2041-8213/aae88a
* Willott et al. (1999) Willott, C. J., Rawlings, S., Blundell, K. M., & Lacy, M. 1999, MNRAS, 309, 1017, doi: 10.1046/j.1365-8711.1999.02907.x
* Wood et al. (2021) Wood, C. M., Miller-Jones, J. C. A., Homan, J., et al. 2021, MNRAS, 505, 3393, doi: 10.1093/mnras/stab1479
* Yuan et al. (2018) Yuan, Z., Wang, J., Worrall, D. M., Zhang, B.-B., & Mao, J. 2018, ApJS, 239, 33, doi: 10.3847/1538-4365/aaed3b
* Zamaninasab et al. (2014) Zamaninasab, M., Clausen-Brown, E., Savolainen, T., & Tchekhovskoy, A. 2014, Nature, 510, 126, doi: 10.1038/nature13399
* Zdziarski (2014) Zdziarski, A. A. 2014, MNRAS, 445, 1321, doi: 10.1093/mnras/stu1835
* Zdziarski (2019) —. 2019, MNRAS, 489, L58, doi: 10.1093/mnrasl/slz127
* Zdziarski et al. (2009) Zdziarski, A. A., Kawabata, R., & Mineshige, S. 2009, MNRAS, 399, 1633, doi: 10.1111/j.1365-2966.2009.15386.x
* Zdziarski et al. (2012) Zdziarski, A. A., Lubiński, P., & Sikora, M. 2012, MNRAS, 423, 663, doi: 10.1111/j.1365-2966.2012.20903.x
* Zdziarski et al. (2015) Zdziarski, A. A., Sikora, M., Pjanka, P., & Tchekhovskoy, A. 2015, MNRAS, 451, 927, doi: 10.1093/mnras/stv986
* Zdziarski et al. (2019) Zdziarski, A. A., Stawarz, Ł., & Sikora, M. 2019, MNRAS, 485, 1210, doi: 10.1093/mnras/stz475
* Zdziarski et al. (2021) Zdziarski, A. A., Jourdain, E., Lubiński, P., et al. 2021, ApJ, 914, L5, doi: 10.3847/2041-8213/ac0147
# Hilbert space fragmentation imposed real spectrum of a non-Hermitian system
Somsubhra Ghosh1, K. Sengupta1, and Indranil Paul2 1School of Physical
Sciences, Indian Association for the Cultivation of Science, Kolkata 700032,
India.
2Université Paris Cité, CNRS, Laboratoire Matériaux et Phénomènes Quantiques,
75205 Paris, France.
###### Abstract
We show that constraints imposed by strong Hilbert space fragmentation (HSF)
along with the presence of certain global symmetries can provide a sufficient
condition for the reality of eigenspectra of non-Hermitian quantum systems;
such a reality cannot be guaranteed by global symmetries alone. We demonstrate
this insight for an interacting finite fermionic Nelson-Hatano chain. We show
analytically that strong HSF and real spectrum are both consequences of the
same dynamical constraints in the limit of large interaction, provided the
system has sufficient global symmetries. The spectrum stays real for
interactions above a finite critical value, where the system encounters a
many-body exceptional point. We provide a method to detect this exceptional
point using a local equal-time correlation function.
_Introduction._ — Non-Hermitian many-body Hamiltonians are of great current
interest for their relevance to open quantum systems, and also for their novel
properties without Hermitian analog reviews_nonH ; nonhlit1 ; nonhlit2 ;
nonhlit3 ; nonhlit4 ; nonhlit5 ; nonhlit6 ; nonhlit7 ; nonhlit8 ; nonhlit9 ;
nonhlit10 ; nonhlit11 ; nonhlit12 ; nonhlit13 ; nhdyn1 ; nhdyn2 ; nhdyn3 ;
nhdyn4 ; nhdyn5 ; nhdyn6 . One such feature in the spectrum is an exceptional
point, where certain energy eigenvalues and eigenfunctions coalesce, while
across it eigenvalues can transform from being real to complex reviews_EP ;
eptop1 ; eptop2 ; eptop3 ; eptop4 ; eptop5 ; eptop6 .
The purpose of the current work is to investigate an important related
feature, namely, why the spectra of certain non-Hermitian Hamiltonians are
entirely real in some parameter regimes. Note, this question cannot be
addressed completely by invoking global symmetry properties. For example, a
property such as pseudo-Hermiticity only guarantees that complex eigenvalues,
if they appear, come in complex conjugate pairs Mostafazadeh2002a ;
Mostafazadeh2002b . Likewise, a so-called $\mathcal{P}\mathcal{T}$-symmetric
system has completely real eigenvalues only in the regime where all the energy
eigenfunctions are also simultaneously eigenfunctions of the
$\mathcal{P}\mathcal{T}$ operator bender2007 ; zyablovsky2014 ; ozdemir2019 ,
and the question remains as to what guarantees the latter.
In this work we study a canonical non-Hermitian system, namely the interacting
fermionic Hatano-Nelson model comprising of a finite chain of spinless
fermions with non-reciprocal hopping and with nearest-neighbor interaction
nonhlit11 , shown schematically in Fig. 1(a). It is well-known that this model
has a “phase” at large enough interaction in which all the many-body energy
eigenvalues are real zhang2022 . Our goal here is to examine microscopically
what gives rise to and protects this phase.
Our work is built upon earlier results which showed that, in the limit of infinitely large interaction, the Hermitian version of the above model shows strong Hilbert space fragmentation, where the Fock space breaks up into dynamically disjoint pieces whose number scales exponentially with the system size [khemani2020; sala2020]. This phenomenon is the focus of intense research at present since it leads to non-ergodicity and the ability to generate exotic non-equilibrium states which cannot be accessed in equilibrium [rakovszky2020; yang2020; tomasi2019; frey2022; hsf6–hsf10; ghosh2023].
Figure 1: (Color Online) (a) Schematic of a spinless fermionic chain with non-
reciprocal hopping, and nearest neighbor interaction. Filled sites are marked
red. (b), (c) show the maximum imaginary part and the real parts of the
spectrum, respectively, as a function of the interaction $V_{1}$, for $L=10$
and $\gamma=0.2J$. $V_{1}^{c}$ is the critical interaction where the system
encounters a many-body exceptional point. Below $V_{1}^{c}$ complex conjugate
pairs of eigenvalues first appear. In (c) the two eigenvalues which coalesce
at the exceptional point are delineated in blue. (d) $V_{1}^{c}$ diverges as
$\gamma\rightarrow J$, and also with system size.
Our main results are the following. First, we show analytically that,
precisely in the fragmented limit there is a many-body similarity
transformation that maps the non-Hermitian system to a Hermitian one, thereby
guaranteeing the reality of the spectrum in this limit. In fact, fragmentation
and real spectrum are both consequences of the same dynamical constraints
which emerge in the limit of infinitely large interaction. Second, we show that the spectrum remains real for arbitrarily large but finite interaction, provided the system has sufficient global symmetry protection.
These symmetries impose a hidden Hermiticity in a subspace where the reality
of the spectrum is not guaranteed by the fragmentation limit. These
observations imply that the spectrum is real for interactions above a finite
critical value, where the system encounters a many-body exceptional point.
Finally we compute a local equal-time density-density fermionic correlation
function and show that it can be used to detect the position of this
exceptional point. Overall, our work provides the first analysis of the role
of dynamical constraints in determining the spectrum of a non-Hermitian
system.
_Model._ — In terms of fermionic creation and annihilation operators
$(c^{\dagger}_{i},c_{i})$ at site $i$ the Hamiltonian is
$\mathcal{H}=\sum_{i=1}^{L}\left[(J-\gamma)c^{\dagger}_{i}c_{i+1}+(J+\gamma)c^{\dagger}_{i+1}c_{i}+V_{1}\hat{n}_{i}\hat{n}_{i+1}\right],$
(1)
where $\hat{n}_{i}\equiv c^{\dagger}_{i}c_{i}$ is the number operator at site
$i$, and $L$ is the system size, see Fig. 1(a). The non-Hermiticity of
$\mathcal{H}$ is due to the non-reciprocal hopping parameter $\gamma>0$. We
study the system at half-filling with $\sum_{i}n_{i}=L/2$, where $n_{i}$ is the fermion occupation at site $i$, and we impose periodic (antiperiodic) boundary conditions for odd (even) total particle number. This choice ensures that the system is translationally invariant, as discussed in the Supplementary information (SI) [SI]. For open boundary conditions and $\gamma<J$ the problem is trivial because there is a one-body similarity transformation which makes $\mathcal{H}$ Hermitian [eptop3]. As discussed in detail in the SI [SI], $\mathcal{H}$ is pseudo-Hermitian, while its global symmetries are $\mathcal{G}=(\mathcal{P}\mathcal{C},\mathcal{R})$ with $[\mathcal{H},\mathcal{G}]=0$, where $(\mathcal{P},\mathcal{C},\mathcal{R})$ are parity, charge conjugation and translation by one site, respectively. Furthermore, since integrability plays no role, our results remain valid in the presence of next-nearest neighbor interaction.
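As a concrete illustration (ours, not part of the original text), the minimal Python sketch below builds the many-body matrix of Eq. (1) in the occupation basis at half filling and scans $V_{1}$; the largest $|{\rm Im}\,\epsilon|$ drops to zero above a finite critical interaction, as in Fig. 1(b). The Jordan-Wigner string on the wrap-around hop equals $(-1)^{N-1}$, which combined with the periodic (antiperiodic) choice for odd (even) $N$ yields an overall $+1$ boundary sign; all function names are ours.

```python
import itertools
import numpy as np

def hatano_nelson_spectrum(L, J, gamma, V1):
    """Many-body spectrum of Eq. (1) at half filling, occupation basis.
    The Jordan-Wigner string on the wrap-around hop, (-1)^(N-1), combined
    with the periodic (antiperiodic) BC for odd (even) N gives a +1 sign."""
    states = [s for s in itertools.product([0, 1], repeat=L) if sum(s) == L // 2]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)), dtype=complex)
    for s in states:
        k = index[s]
        H[k, k] = V1 * sum(s[i] * s[(i + 1) % L] for i in range(L))
        for i in range(L):
            j = (i + 1) % L
            if s[j] == 1 and s[i] == 0:      # (J - gamma) c_i^dag c_{i+1}
                t = list(s); t[j], t[i] = 0, 1
                H[index[tuple(t)], k] += J - gamma
            if s[i] == 1 and s[j] == 0:      # (J + gamma) c_{i+1}^dag c_i
                t = list(s); t[i], t[j] = 0, 1
                H[index[tuple(t)], k] += J + gamma
    return np.linalg.eigvals(H)

# Largest |Im eps| versus V1, cf. Fig. 1(b): it vanishes above V1^c.
for V1 in [0.5, 1.0, 2.0, 4.0, 8.0]:
    eps = hatano_nelson_spectrum(L=8, J=1.0, gamma=0.2, V1=V1)
    print(f"V1 = {V1:3.1f}J   max|Im eps| = {np.abs(eps.imag).max():.4f}")
```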
The spectral properties of $\mathcal{H}$ are summarized in Fig. 1, (b) - (d).
Panels (b) and (c) show that, for $\gamma<J$, the spectrum is real for $V_{1}$
above a critical value $V_{1}^{c}$. At $V_{1}^{c}$ one pair (or two pairs) of
eigenvalues and eigenvectors coalesce at a many-body exceptional point, and
below $V_{1}^{c}$ they become complex conjugate pairs, as dictated by the
pseudo-Hermiticity of $\mathcal{H}$. This is shown in blue in (c). Panel (d)
shows that $V_{1}^{c}$ diverges as $\gamma\rightarrow J$, and also with system
size $L$. Thus, the regime with real spectrum is relevant only for finite size
systems. Note that while these features were reported in a recent work [zhang2022], the link of Hilbert space fragmentation and symmetry protection to the reality of the spectrum, which is the focus of this work, has not been explored earlier.
_Limit of fragmentation._ — We consider the limit of large interaction, keeping terms to linear order in $(J,\gamma)$ and ignoring those of order $(J,\gamma)^{2}/V_{1}$ and smaller. This gives [dias2000]
$\mathcal{H}\approx\mathcal{H}_{f}=\sum_{i=1}^{L}\left[\hat{P}_{i}\left((J-\gamma)c^{\dagger}_{i}c_{i+1}+(J+\gamma)c^{\dagger}_{i+1}c_{i}\right)\hat{P}_{i}+V_{1}\hat{n}_{i}\hat{n}_{i+1}\right],$ (2)
where the projector $\hat{P}_{i}\equiv 1-(\hat{n}_{i-1}-\hat{n}_{i+2})^{2}$
ensures that the hopping is constrained, and is allowed only if the process
does not change the total number of nearest neighbor occupations
$\hat{N}_{d}\equiv\sum_{i}\hat{n}_{i}\hat{n}_{i+1}$ . Thus, by suitably
increasing $V_{1}$, the spectra of $\mathcal{H}_{f}$ and $\mathcal{H}$ can be
made to coincide with arbitrary accuracy, as shown in the SI.
The Hermitian version of $\mathcal{H}_{f}$ has been shown to display strong Hilbert space fragmentation [tomasi2019; frey2022]. Since fragmentation does not depend on whether the hopping-mediated connectivity between the many-body Fock states is reciprocal or not, the non-Hermitian $\mathcal{H}_{f}$ shows the same property. Below we prove that the dynamical constraints that give rise to fragmentation also allow the existence of a many-body similarity transformation that maps $\mathcal{H}_{f}$ into a Hermitian form for $\gamma<J$.
_Similarity transformation._ — The first step of the proof is to label the many-body states. Traditionally, this is done using “spins and movers” [dias2000; tomasi2019; frey2022]. Here we take a different strategy, and we label them by “defects”. A “particle-defect” and a “hole-defect” are two occupied or two unoccupied nearest-neighbor sites, respectively. Due to half-filling, particle- and hole-defects appear in pairs, and their numbers are conserved, since $\left[\mathcal{H}_{f},\hat{N}_{d}\right]=0$. Thus, the Hilbert space factorizes into sectors with eigenvalue $N_{d}=0,1,\ldots,L/2-1$.
All dynamically frozen (i.e., zero connectivity) states have real energies.
This is the case for $N_{d}=0$, where the two defect-free wavefunctions with
occupations $|0,1,0,1,\ldots\rangle$ and $|1,0,1,0,\ldots\rangle$ have zero
energy. For $N_{d}\neq 0$ we label a defect position by the location of the
first of the two nearest-neighbor sites. Thus, any state with $N_{d}=1$ has
label $|(i)(j)\rangle$, where $i$ and $j$ are locations of the particle- and
hole-defect, respectively. Likewise, a state with $N_{d}=2$ is labeled by
$|(i_{1},i_{2})(j_{1},j_{2})\rangle$, and $N_{d}=n$ by $|(i_{1},i_{2},\ldots
i_{n})(j_{1},j_{2},\ldots j_{n})\rangle$. Since the fermions are
indistinguishable, permutations of $i$ and of $j$ imply the same state. Thus,
the states shown in Fig. 2(a) have labels $|(5)(7)\rangle$ and
$|(8)(4)\rangle$ with $N_{d}=1$, and $|(2,6)(4,8)\rangle$ and
$|(2,3)(5,8)\rangle$ with $N_{d}=2$.
Due to half-filling the defect locations follow certain rules. (a) If two
particle-defects at $i_{1}$ and $i_{2}$ are “adjacent”, then $(i_{1},i_{2})$
can only be (odd, even) or (even, odd). The same applies for two adjacent
hole-defects. Here, “adjacent” does not imply defects located right next to
one another. Two defects are adjacent if there is no third defect in between
the two while traversing either clockwise or counterclockwise. (b) If a
particle-defect at $i_{1}$ is adjacent to a hole-defect at $j_{1}$, then
$(i_{1},j_{1})$ can only be (even, even) or (odd, odd). One can verify that
the wavefunctions in Fig. 2(a) follow these rules.
Figure 2: (Color Online) (a) Examples of labeling many-body wavefunctions by
the location of “defects”. Two nearest neighbor sites form a particle- or a
hole-defect when they are both occupied or both unoccupied, respectively. The
defect position is the location of the first of the two sites from the left.
(b) and (c) are examples of connectivities for $L=10$, $N_{d}=2,1$,
respectively, see text. Solid (green) and dashed (brown) arrows denote
fermions hopping to the right and left, respectively. Reversing an arrow
direction also implies exchanging solid $\leftrightarrow$ dashed. (d) Three
examples of non-reciprocal hopping over four sites. (i) has open boundary,
while (ii) and (iii) have periodic boundary conditions. $1,r,r^{2}$, _et
cetera_ (in blue) in (b)-(d) are the scaling factors $\lambda$, such as in Eq.
3, of the wavefunctions next to them, which define the similarity
transformation, wherever possible. In (d, ii) one of the sites, indicated by a question mark, cannot be scaled consistently. For a closed loop a similarity transformation is only possible if there is an equal number of solid and dashed arrows while traversing the loop clockwise or anticlockwise, as in (d, iii).
The second step is to determine the defect dynamics which, due to the
constrained hopping, obey the following rules. (i) An allowed fermion hop
changes $i$ or $j$ by $\pm$2 modulo $L$. (ii) Since second nearest neighbor
hopping is absent, two defects cannot cross each other. It can be shown that, due to rules (i) and (ii), each sector of $N_{d}$ breaks into an exponentially large number of disjoint subsectors, i.e. fragments, whose number scales as $e^{N_{d}}$ [sala2020].
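This counting is easy to verify numerically. The sketch below (ours, a hedged illustration rather than the authors' code) enumerates half-filling Fock states, connects two states whenever a single nearest-neighbor hop preserves $\hat{N}_{d}$, which is exactly the content of the projectors $\hat{P}_{i}$ in Eq. (2), and counts the disjoint components:

```python
import itertools

def count_fragments(L):
    """Count the dynamically disjoint subsectors (fragments) of H_f, Eq. (2):
    a nearest-neighbor hop is allowed only if it preserves
    N_d = sum_i n_i n_{i+1} (periodic chain, half filling)."""
    def n_d(s):
        return sum(s[i] * s[(i + 1) % L] for i in range(L))
    states = [s for s in itertools.product([0, 1], repeat=L) if sum(s) == L // 2]
    seen, fragments = set(), 0
    for s0 in states:
        if s0 in seen:
            continue
        fragments += 1
        stack = [s0]                      # depth-first search over allowed hops
        seen.add(s0)
        while stack:
            s = stack.pop()
            for i in range(L):
                j = (i + 1) % L
                if s[i] != s[j]:          # a hop between i and j is possible
                    t = list(s); t[i], t[j] = t[j], t[i]
                    t = tuple(t)
                    if n_d(t) == n_d(s) and t not in seen:  # constraint P_i
                        seen.add(t)
                        stack.append(t)
    return len(states), fragments

for L in [6, 8, 10, 12]:
    dim, frag = count_fragments(L)
    print(f"L={L:2d}: {dim:4d} states, {frag:3d} fragments")
```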
The third step is to establish the constrained hopping induced connectivity
between the many-body wavefunctions within each non-trivial subsector. There
is no general pattern for these connectivities, and they need to be worked out
case by case, even though the proof below holds for all of them. As a few examples, Fig. 2(b) is the connectivity for $L=10$, $N_{d}=2$ with $(i_{1},i_{2})(j_{1},j_{2})$ (odd, odd)(odd, odd), while Fig. 2(c) is for $L=10$, $N_{d}=1$ with $(i)(j)$ (odd)(odd). The dashed and solid arrows denote fermions hopping to the left (with amplitude $J_{1}\equiv J-\gamma$) and right (with amplitude $J_{2}\equiv J+\gamma$), respectively. Reversing an arrow implies the exchange $J_{1}\leftrightarrow J_{2}$. A fermion hopping left can cause either a particle-defect to move left, i.e., $i\rightarrow(i-2)$ mod $L$, or a hole-defect to move right, i.e., $j\rightarrow(j+2)$ mod $L$. Thus,
the connectivity diagram of each sub-sector can be viewed as a single non-
interacting “particle” hopping in the abstract space of many-body
wavefunctions in a non-reciprocal manner.
The fourth and final step of the proof is to establish the existence of the similarity transformation in each sub-sector. For pedagogical reasons we first consider a few examples of non-reciprocal hopping of a single particle in a four-site system. Fig. 2(d, i) is a linear chain with open boundary conditions.
This can be mapped to a Hermitian form for $\gamma<J$ by the scaling
$|i\rangle\rightarrow\lambda_{i}|i\rangle,\quad\langle
i|\rightarrow(1/\lambda_{i})\langle i|,$ (3)
where $\lambda_{i}=1,r,r^{2},r^{3}$, for $i=1,\ldots,4$, respectively, and
$r\equiv\sqrt{J_{2}/J_{1}}$ [eptop3]. However, for periodic boundary conditions, as in Fig. 2(d, ii), the transformation does not work since the new link 4-1 cannot be made Hermitian. Thus, finding similarity transformations is non-trivial when the connections form closed loops, which is precisely our case, as seen in Fig. 2(b, c). Now consider Fig. 2(d, iii), which is also a closed loop, but where the hops are $J_{2},J_{2},J_{1},J_{1}$ moving clockwise. In this case, once again, a similarity mapping exists, with $\lambda_{i}=1,r,r^{2},r$, respectively. This example illustrates the principle that a closed loop which has an equal number of $J_{1}$ and $J_{2}$ hops while traversing it clockwise or anticlockwise can be mapped to a Hermitian form. This is because every $J_{2}$ link requires an additional scaling of $r$ for the second site compared to the first, while a $J_{1}$ link requires a scaling of $1/r$. This is exactly the case for the connectivities of Fig. 2(b, c), where the scalings associated with the wavefunctions are marked in blue. Additional examples of such scalings are discussed in the SI [SI]. We prove below that _all_ the connections of $\mathcal{H}_{f}$ are such that each and every possible loop has this property.
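The four-site examples of Fig. 2(d) can be checked directly. In the short sketch below (our construction, with hedged conventions: a directed link carries amplitude $J_{2}$ or $J_{1}$, and its reverse carries the other), the scaling of Eq. (3) symmetrizes the open chain (d, i) and the balanced loop (d, iii):

```python
import numpy as np

J, gamma = 1.0, 0.4
J1, J2 = J - gamma, J + gamma            # left / right hopping amplitudes
r = np.sqrt(J2 / J1)

def hop(dim, links):
    """Single-particle hopping matrix: each directed link (a, b, amp) gives
    amplitude amp for a -> b and the exchanged amplitude (J1 <-> J2) for b -> a."""
    H = np.zeros((dim, dim))
    for a, b, amp in links:
        H[b, a] = amp
        H[a, b] = J1 + J2 - amp          # swaps J1 and J2 on the reversed hop
    return H

def transform(H, lam):
    """S^{-1} H S with S = diag(lam), i.e. the scaling of Eq. (3)."""
    S = np.diag(lam)
    return np.linalg.inv(S) @ H @ S

# Fig. 2(d, i): open chain, scalings 1, r, r^2, r^3 -> Hermitian.
H_open = hop(4, [(0, 1, J2), (1, 2, J2), (2, 3, J2)])
Ht = transform(H_open, [1, r, r**2, r**3])
print(np.allclose(Ht, Ht.T))             # True

# Fig. 2(d, iii): loop with hops J2, J2, J1, J1 clockwise,
# scalings 1, r, r^2, r -> Hermitian; the all-J2 loop of (d, ii) fails.
H_loop = hop(4, [(0, 1, J2), (1, 2, J2), (2, 3, J1), (3, 0, J1)])
Ht = transform(H_loop, [1, r, r**2, r])
print(np.allclose(Ht, Ht.T))             # True
```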
Starting from any $|(i_{1},i_{2},\ldots i_{n})(j_{1},j_{2},\ldots
j_{n})\rangle$, a closed loop is obtained in three basic ways.
(1) If one or more of the site indices change as, say, $i\rightarrow
i^{\prime}\rightarrow i^{\prime\prime}$ and so on, and then reverse the path
to go back to $i$, while obeying the rules (i) and (ii). Since the reverse of
a $J_{1}$ hop is a $J_{2}$ hop, and vice versa, traversing the loop along one
direction will necessarily involve an equal number of $J_{1}$ and $J_{2}$ hops. The
loop (1)(7) $\rightarrow$ (1)(5) $\rightarrow$ (3)(5) $\rightarrow$ (3)(7)
$\rightarrow$ (1)(7) in Fig. 2(c) is an example.
(2) If a defect does not retrace its path, but moves across the chain,
traversing $L$ sites, and gets back to its original position using the
periodic boundary condition. However, according to rule (ii) this can happen
only if all other defects perform the same circular motion in the same
direction and regain their original positions, each having traversed $L$
sites. Since a particle-defect moving to the right is a $J_{2}$ hop, while a hole-defect moving to the right is a $J_{1}$ hop, and since there are equal numbers of particle- and hole-defects, this loop, too, will have an equal number of $J_{1}$ and $J_{2}$ hops. Starting from state (1)(3) on the left side of
Fig. 2(c) and ending again at (1)(3) on the right side of the figure is an
example of such a loop.
(3) In some cases, such as in Fig. 2(b), it is possible for the defects to
exchange positions such that $i_{1}\rightarrow i_{2}\rightarrow
i_{3}\ldots\rightarrow i_{n}\rightarrow i_{1}$, and $j_{1}\rightarrow
j_{2}\rightarrow j_{3}\ldots\rightarrow j_{n}\rightarrow j_{1}$. In this case
a loop is completed by permuting the indices, while the defects neither
retrace their paths nor complete the circle. Here the sum of the sites
traversed by all the particle-defects is $L$ and the same is true for all the
hole-defects, and they are along the same direction. Thus, here again the loop has an equal number of $J_{1}$ and $J_{2}$ hops.
This completes the proof that $\mathcal{H}_{f}$ can be mapped into a Hermitian form for $\gamma<J$; this feature guarantees the reality of the eigenspectrum of ${\mathcal{H}}_{f}$ in this limit.
_Finite $V_{1}^{c}$ and symmetry protection_.— The above conclusion, however,
is not sufficient to deduce that the spectrum remains real once $V_{1}$ is
finite. To understand why, consider two eigenstates of $\mathcal{H}_{f}$ from
the same sub-sector of $\hat{N}_{d}$. Measuring energies from the average
eigenvalue, the sub-system has the structure
$\mathcal{H}_{2}=\begin{pmatrix}l&m_{1}+m_{2}\\ m_{1}-m_{2}&-l\end{pmatrix},$
with eigenvalues $\pm\sqrt{l^{2}+m_{1}^{2}-m_{2}^{2}}$. Since
$m_{1,2}\sim\mathcal{O}((J,\gamma)^{2}/V_{1})$ or smaller, for finite $l$ the
reality of the eigenvalues is guaranteed for $V_{1}$ sufficiently large. But,
this argument fails when the two states are degenerate and $l=0$. However, in
this subspace the reality of the spectrum can still be protected by the global
symmetries $\mathcal{G}$, provided any two degenerate states are related by
$\mathcal{G}|\psi\rangle=|\phi\rangle$, since it implies that the off-diagonal
matrix elements
$\langle\langle\psi|\mathcal{H}|\phi\rangle=\langle\langle\phi|\mathcal{H}|\psi\rangle$.
Here $(|\psi\rangle,|\phi\rangle)$ and
$(|\psi\rangle\rangle,|\phi\rangle\rangle)$ are the right and left
eigenvectors, respectively, of $\mathcal{H}_{f}$ in the degenerate subspace.
Since these off-diagonal terms are also real (because it is possible to choose
the eigenvectors of $\mathcal{H}_{f}$ to be real), we conclude that in this
subspace there can be a hidden Hermiticity of $\mathcal{H}$ which is symmetry
protected. Thus, the dynamical constraints and the global symmetries together ensure that the spectrum stays real for $V_{1}$ greater than a finite value $V_{1}^{c}$.
As discussed in the SI, the above symmetry protection can be destroyed by a
suitable choice of boundary condition. In this case one can have complex
eigenvalues for any finite value of $V_{1}$, even though the spectrum of the corresponding $\mathcal{H}_{f}$ is real [SI].
The above argument also implies that the two states spanning the subspace $\mathcal{H}_{2}$ that eventually triggers the exceptional point at $V_{1}^{c}$ cannot be symmetry related. In this case $l\sim\sqrt{J^{2}-\gamma^{2}}/e^{cL}$ is the average
level spacing of the sub-sector, and the constant $c>0$ depends on the sub-
sector size. Empirically, we find that $m_{1}$ is at least one order of
magnitude smaller than $m_{2}$, while $m_{2}\propto
V_{1}^{-\alpha}(J^{2}-\gamma^{2})^{-\beta/2}$ where the exponents
$(\alpha,\beta)$ are $L$-dependent. This implies that $V_{1}^{c}\sim
Je^{cL/\alpha}/(1-(\gamma/J)^{2})^{(\beta+1)/(2\alpha)}$. Thus, $V_{1}^{c}$
diverges exponentially with the system size, and it diverges as a power-law
with a $L$-dependent exponent for $\gamma\rightarrow J$. These features are
illustrated in Fig. 1(d).
Note, in passing, that for certain values of $L$ the two coalescing levels at
$V_{1}^{c}$ are each doubly degenerate, so that below $V_{1}^{c}$ there are
two pairs of complex conjugate eigenvalues. This degeneracy is related to $\mathcal{P}\mathcal{C}$ invariance [SI].
Figure 3: (Color Online) (a)-(c) Time evolution of the correlation function $\chi(t)$ for $V_{1}=3J$, $4.285J$, and $6J$, respectively, for $L=14$ and $\gamma=0.2J$. In (b) the system is very close to the exceptional point at $V_{1}^{c}\approx 4.2863J$. (d) Variation of the relaxation timescale $\tau$ with $V_{1}$, showing a one-sided divergence as $V_{1}\rightarrow V_{1}^{c}$ from below.
_Correlation function._ — Finally, we study the time evolution of a
correlation function which, in principle, can be measured to identify the
location of the many-body exceptional point. One such example is
$\chi(t)=\langle\psi(t)|\hat{N}_{d}|\psi(t)\rangle/L$ starting from a random
Fock state $|\psi(0)\rangle=\sum_{m}c_{m}|\phi_{m}\rangle$, expanded in the
basis of the right eigenvectors $|\phi_{m}\rangle$ of $\mathcal{H}$. The time-
evolved wavefunction, suitably normalized to account for the non-Hermiticity
of $\mathcal{H}$, is
$|\psi(t)\rangle=\frac{e^{-i\mathcal{H}t/\hbar}|\psi(0)\rangle}{||e^{-i\mathcal{H}t/\hbar}|\psi(0)\rangle||}=\frac{\sum_{m}c_{m}(t)|\phi_{m}\rangle}{\sqrt{\sum_{m,n}c_{m}^{*}(t)c_{n}(t)\langle\phi_{m}|\phi_{n}\rangle}},$
where $c_{m}(t)=c_{m}e^{-i\epsilon_{m}t/\hbar}$.
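A hedged, self-contained sketch of this protocol (our code, with $\hbar=1$; it repeats the occupation-basis construction used above) evolves a random Fock state, renormalizes it at every time, and records $\chi(t)$:

```python
import itertools
import numpy as np
from scipy.linalg import expm

def build_H_and_Nd(L, J, gamma, V1):
    """H of Eq. (1) at half filling (boundary sign +1, as discussed above)
    together with the diagonal of the operator N_d."""
    states = [s for s in itertools.product([0, 1], repeat=L) if sum(s) == L // 2]
    idx = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)), dtype=complex)
    Nd = np.zeros(len(states))
    for s in states:
        k = idx[s]
        Nd[k] = sum(s[i] * s[(i + 1) % L] for i in range(L))
        H[k, k] = V1 * Nd[k]
        for i in range(L):
            j = (i + 1) % L
            if s[j] == 1 and s[i] == 0:      # (J - gamma) c_i^dag c_{i+1}
                t = list(s); t[j], t[i] = 0, 1
                H[idx[tuple(t)], k] += J - gamma
            if s[i] == 1 and s[j] == 0:      # (J + gamma) c_{i+1}^dag c_i
                t = list(s); t[i], t[j] = 0, 1
                H[idx[tuple(t)], k] += J + gamma
    return H, Nd

L = 8
H, Nd = build_H_and_Nd(L, J=1.0, gamma=0.2, V1=3.0)
rng = np.random.default_rng(0)
psi0 = np.zeros(H.shape[0]); psi0[rng.integers(H.shape[0])] = 1.0  # random Fock state

for t in np.linspace(0.0, 30.0, 7):
    psi = expm(-1j * H * t) @ psi0
    psi /= np.linalg.norm(psi)            # normalization absorbs Im(eps)
    chi = np.real(psi.conj() @ (Nd * psi)) / L
    print(f"t = {t:5.1f}   chi = {chi:.4f}")
```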
Fig. 3, panels (a)-(c) show the time evolution of $\chi(t)$ for $V_{1}$ less
than, nearly equal to, and greater than $V_{1}^{c}$, respectively. For
$V_{1}<V_{1}^{c}$ the time evolution is dominated by the eigenvalue with the
largest imaginary component. Consequently, after a timescale $\tau\sim 1/{\rm Max}[{\rm Im}\,\epsilon]$, the correlation function attains a steady-state value $\chi(t\gg\tau)\sim(1/L)\langle\phi_{m}^{*}|\hat{N}_{d}|\phi_{m}^{*}\rangle$, where $|\phi_{m}^{*}\rangle$ is the eigenvector with the largest ${\rm Im}\,\epsilon$. This implies that $\tau$ diverges as $V_{1}\rightarrow V_{1}^{c}$ from below, as seen in Fig. 3(d). For $V_{1}\geq V_{1}^{c}$ all the eigenvalues are real and the system quickly attains a diagonal ensemble, and $\chi(t)$ fluctuates about an average value $\chi(t\gg\tau)\sim(1/L)\sum_{m}|c_{m}|^{2}\langle\phi_{m}|\hat{N}_{d}|\phi_{m}\rangle$ [rigol1; rigol2], implying that the peak of $\tau(V_{1})$ in Fig. 3(d) is one-sided. This peak can be used to detect the exceptional point.
_Conclusion._ — To summarize, we explained microscopically why the interacting fermionic Hatano-Nelson model has a purely real many-body spectrum for nearest-neighbor interaction $V_{1}>V_{1}^{c}$. This is a consequence of two ingredients. The first is the dynamical constraints in the infinitely large interaction limit, which also fragment the Hilbert space of the model. The second is the global symmetries of the Hamiltonian. While the role of the global symmetries has been widely studied, that of the first ingredient has not been explored earlier. Thus, we reveal a deep link between the physics of fragmentation and the property of a real spectrum of an interacting non-Hermitian system. This link is worth investigating in the future for other interacting non-Hermitian models.
_Acknowledgement._ – The authors thank Diptiman Sen for several comments. IP
is thankful to Masudul Haque for insightful discussions. SG acknowledges the
financial support provided by CSIR, India through file
09/080(1133)/2019-EMR-I. KS thanks DST, India for support through SERB project
JCB/2021/000030.
## References
* (1) For reviews see, e.g., I. Rotter, J. Phys. A: Math. Theor. 42, 153001 (2009); Y. Ashida, Z. Gong, and M. Ueda, Adv. Phys. 69, 249 (2021); I. Rotter and J. P. Bird, Rep. Prog. Phys. 78, 114001 (2015).
* (2) J. Gonzalez and R. A. Molina, Phys. Rev. B 96, 045437 (2017); V. Kozii and L. Fu, arXiv:1708.05841 (unpublished); A. A. Zyuzin and A. Y. Zyuzin, Phys. Rev. B 97, 041203(R) (2018); H. Shen and L. Fu, Phys. Rev. Lett. 121, 026403 (2018); R. A. Molina and J. Gonzalez, Phys. Rev. Lett. 120, 146601 (2018);T. Yoshida, R. Peters, and N. Kawakami, Phys. Rev. B 98, 035141 (2018); J. Carlstrom and E. J. Bergholtz, Phys. Rev. A 98, 042114 (2018).
* (3) T. M. Philip, M. R. Hirsbrunner, and M. J. Gilbert, Phys. Rev. B 98, 155430 (2018); Y. Chen and H. Zhai, Phys. Rev. B 98, 245130 (2018); K. Moors, A. A. Zyuzin, A. Y. Zyuzin, R. P. Tiwari, and T. L. Schmidt, Phys. Rev. B 99, 041116(R) (2018); R. Okugawa and T. Yokoyama, Phys. Rev. B 99, 041202(R) (2019); J. C. Budich, J. Carlstrom, F. K. Kunst, and E. J. Bergholtz, Phys. Rev. B 99, 041406(R) (2019).
* (4) Z. Yang and J. Hu, Phys. Rev. B 99, 081102(R) (2019); T. Yoshida, R. Peters, N. Kawakami, and Y. Hatsugai, Phys. Rev. B 99, 121101(R) (2019); Y. Wu, W. Liu, J. Geng, X. Song, X. Ye, C.-K. Duan, X. Rong, and J. Du, Science 364, 878 (2019); P. San-Jose, J. Cayao, E. Prada, and R. Aguado, Sci. Rep. 6, 21427 (2016); Q.-B. Zeng, B. Zhu, S. Chen, L. You, and R. Lu, Phys. Rev. A 94, 022119 (2016); C. Li, X. Z. Zhang, G. Zhang, and Z. Song, Phys. Rev. B 97, 115436 (2018); J. Cayao and A. M. Black-Schaffer Phys. Rev. B 105, 094502 (2022); R. Arouca, J. Cayao, A. M. Black-Schaffer, arXiv:2206.15324 (unpublished).
* (5) K. Kawabata, Y. Ashida, H. Katsura, and M. Ueda, Phys. Rev. B 98, 085116 (2018); A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D. N. Christodoulides, Phys. Rev. Lett. 103, 093902 (2009); C. E. Ruter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, Nat. Phys. 6, 192 (2010); L. Feng, M. Ayache, J. Huang, Y.-L. Xu, M.-H. Lu, Y.-F. Chen, Y. Fainman, and A. Scherer, Science 333, 729 (2011); A. Regensburger, C. Bersch, M.-A. Miri, G. Onishchukov, D. N. Christodoulides, and U. Peschel, Nature (London) 488, 167 (2012).
* (6) L. Feng, Y.-L. Xu, W. S. Fegadolli, M.-H. Lu, J. E. Oliveira, V. R. Almeida, Y.-F. Chen, and A. Scherer, Nat. Mater. 12, 108 (2013); C. Poli, M. Bellec, U. Kuhl, F. Mortessagne, and H. Schomerus, Nat. Commun. 6, 6710 (2015); B. Zhen, C.W. Hsu, Y. Igarashi, L. Lu, I. Kaminer, A. Pick, S.-L. Chua, J. D. Joannopoulos, and M. Soljačić, Nature (London) 525, 354 (2015); H. Zhao, S. Longhi, and L. Feng, Sci. Rep. 5, 17022 (2015); K. Ding, Z. Q. Zhang, and C. T. Chan, Phys. Rev. B 92, 235310 (2015).
* (7) S. Weimann, M. Kremer, Y. Plotnik, Y. Lumer, S. Nolte, K. Makris, M. Segev, M. Rechtsman, and A. Szameit, Nat. Mater. 16, 433 (2017); H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, Nature (London) 548, 187 (2017); W. Chen, K. Ozdemir, G. Zhao, J. Wiersig, and L. Yang, Nature (London) 548, 192 (2017); P. St-Jean, V. Goblot, E. Galopin, A. Lemaitre, T. Ozawa, L. Le Gratiet, I. Sagnes, J. Bloch, and A. Amo, Nat. Photonics 11, 651 (2017).
* (8) B. Bahari, A. Ndao, F. Vallini, A. E. Amili, Y. Fainman, and B. K. Le, Science 358, 636 (2017); J. Wang, H. Y. Dong, Q. Y. Shi, W. Wang, and K. H. Fung, Phys. Rev. B 97, 014428 (2018); H. Zhou, C. Peng, Y. Yoon, C.W. Hsu, K. A. Nelson, L. Fu, J. D. Joannopoulos, M. Soljačić, and B. Zhen, Science 359, 1009 (2018); M. Parto, S.Wittek, H. Hodaei, G. Harari, M. A. Bandres, J. Ren, M. C. Rechtsman, M. Segev, D. N. Christodoulides, and M. Khajavikhan, Phys. Rev. Lett. 120, 113901 (2018); H. Zhao, P. Miao, M. H. Teimourpour, S. Malzard, R. El-Ganainy, H. Schomerus, and L. Feng, Nat. Commun. 9, 981 (2018).
* (9) G. Harari, M. A. Bandres, Y. Lumer, M. C. Rechtsman, Y. D. Chong, M. Khajavikhan, D. N. Christodoulides, and M. Segev, Science 359, 1230 (2018); M. A. Bandres, S. Wittek, G. Harari, M. Parto, J. Ren, M. Segev, D. N. Christodoulides, and M. Khajavikhan, Science 359, 1231 (2018); M. Pan, H. Zhao, P. Miao, S. Longhi, and L. Feng, Nat. Commun. 9, 1308 (2018); L. Jin and Z. Song, Phys. Rev. Lett. 121, 073901 (2018); S. Malzard and H. Schomerus, Phys. Rev. A 98, 033807 (2018); Z. Oztas and C. Yuce, Phys. Rev. A 98, 042104 (2018).
* (10) M. Kremer, T. Biesenthal, L. J. Maczewsky, M. Heinrich, R. Thomale, and A. Szameit, Nat. Commun. 10, 435 (2019); K. Y. Bliokh, D. Leykam, M. Lein, and F. Nori, Nat. Commun. 10, 580 (2019); S. Wang, B. Hou, W. Lu, Y. Chen, Z. Zhang, and C. Chan, Nat. Commun. 10, 832 (2019); S. Chen,W. Zhang, B. Yang, T.Wu, and X. Zhang, Sci. Rep. 9, 5551 (2019); T. E. Lee and C.-K. Chan, Phys. Rev. X 4, 041001 (2014); Y. Xu, S.-T. Wang, and L.-M. Duan, Phys. Rev. Lett. 118, 045701 (2017); Y. Ashida, S. Furukawa, and M. Ueda, Nat. Commun. 8, 15791 (2017); Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Phys. Rev. X 8, 031079 (2018); M. Nakagawa, N. Kawakami, and M. Ueda, Phys. Rev. Lett. 121, 203001 (2018); K. Takata and M. Notomi, Phys. Rev. Lett. 121, 213902 (2018); L. Pan, S. Chen, and X. Cui, Phys. Rev. A 99, 011601(R) (2019).
* (11) J. Li, A. K. Harter, J. Liu, L. de Melo, Y. N. Joglekar, and L. Luo, Nat. Commun. 10, 855 (2019); T. Liu, Y.-R. Zhang, Q. Ai, Z. Gong, K. Kawabata, M. Ueda, and F. Nori, Phys. Rev. Lett. 122, 076801 (2019); M. S. Rudner and L. S. Levitov, Phys. Rev. Lett. 102, 065703 (2009); J. M. Zeuner, M. C. Rechtsman, Y. Plotnik, Y. Lumer, S. Nolte, M. S. Rudner, M. Segev, and A. Szameit, Phys. Rev. Lett. 115, 040402 (2015); K. Mochizuki, D. Kim, and H. Obuse, Phys. Rev. A 93, 062116 (2016); L. Xiao, X. Zhan, Z. Bian, K. Wang, X. Zhang, X. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami et al., Nat. Phys. 13, 1117 (2017).
* (12) N. Hatano and D. R. Nelson, Phys. Rev. Lett. 77, 570 (1996); N. Hatano and D. R. Nelson, Phys. Rev. B 56, 8651 (1997); N. Hatano and D. R. Nelson, Phys. Rev. B 58, 8384 (1998).
* (13) J. A. S. Lourenco, R. L. Eneias, and R. G. Pereira, Phys. Rev. B 98, 085126 (2018); E. I. Rosenthal, N. K. Ehrlich, M. S. Rudner, A. P. Higginbotham, and K.W. Lehnert, Phys. Rev. B 97, 220301(R) (2018); M. Wang, L. Ye, J. Christensen, and Z. Liu, Phys. Rev. Lett. 120, 246601 (2018).
* (14) M. Ezawa, Phys. Rev. B 99, 121411(R) (2019); M. Ezawa, Phys. Rev. B 99, 201411(R) (2019); M. Ezawa, Phys. Rev. B 100, 045407 (2019).
* (15) L. Zhou, Q.-h.Wang, H.Wang, and J. Gong, Phys. Rev. A 98, 022129 (2018); L. Zhou and Q. Du, New J. Phys. 23, 063041 (2021); B. Zhu, Y. Ke, H. Zhong, and C. Lee, Phys. Rev. Research 2, 023043 (2020); L. Zhou and J. Gong, Phys. Rev. B 98, 205417 (2018); L. Zhou, Phys. Rev. B 100, 184314 (2019); L. Zhou, Y. Gu, and J. Gong, Phys. Rev. B 103, L041404 (2021).
* (16) L. Zhou and W. Han, Phys. Rev. B 106, 054307 (2022); C-H Liu, H. Hu, and S. Chen, Phys. Rev. B 105, 214305 (2022); L. Zhou, R. W. Bomantara, and S. Wu, SciPost Phys. 13, 015 (2022).
* (17) S. Zamani, R. Jafari, and A. Langari, Phys. Rev. B 102, 144306 (2020); R. Jafari and A. Akbari, Phys. Rev. A 103, 012204 (2021); K. Yang, L. Zhou,W. Ma, X. Kong, P.Wang, X. Qin, X. Rong, Y.Wang, F. Shi, J. Gong, and J. Du, Phys. Rev. B 100, 085308 (2019); D. Chowdhury, A. Banerjee, and A. Narayan Phys. Rev. A 103, L051101 (2021).
* (18) P. He and Z-H Huang, Phys. Rev. A 102, 062201 (2020); S. Longhi, J. Phys. A: Math. Theor. 50, 505201 (2017).
* (19) X. Turkeshi and M. Schiro, arXiv:2201.09895 (unpublished); T. Banerjee and K. Sengupta, Phys. Rev. B 107, 155117 (2023).
* (20) J. Ren, P. Hanggi, and B. Li, Phys. Rev. Lett. 104, 170601 (2010); J. Ren, S. Liu, and B. Li Phys. Rev. Lett. 108, 210603 (2012); H. Xu, D. Mason, L. Jiang and J. G. E. Harris, Nature 537, 80 (2016); Z. Wang, J. Chen, and J. Ren, Phys. Rev. E 106, L032102 (2020);L. J. Fernández-Alcazar, R. Kononchuk, H. Li, and T. Kottos, Phys. Rev. Lett. 126, 204101 (2021).
* (21) for reviews see, e.g., E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021); W. D. Heiss, J. Phys. A: Math. Theor. 45, 444016 (2012); M. Müller and I. Rotter, J. Phys. A: Math. Theor. 41, 244018 (2008).
* (22) Y. C. Hu and T. L. Hughes, Phys. Rev. B 84, 153101 (2011); K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Phys. Rev. B 84, 205128 (2011); T. E. Lee, Phys. Rev. Lett. 116, 133903 (2016); D. Leykam, K. Y. Bliokh, C. Huang, Y. D. Chong, and F. Nori, Phys. Rev. Lett. 118, 040401 (2017); V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres, Phys. Rev. B 97, 121401(R) (2018); Y. Xiong, J. Phys. Commun. 2, 035043 (2018); H. Shen, B. Zhen, and L. Fu, Phys. Rev. Lett. 120, 146402 (2018).
* (23) C. Yuce, Phys. Rev. A 97, 042118 (2018); C. Yin, H. Jiang, L. Li, R. Lu, and S. Chen, Phys. Rev. A 97, 052115 (2018); C. Yuce, Phys. Rev. A 98, 012111 (2018); F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121, 026808 (2018); S. Yao, F. Song, and Z. Wang, Phys. Rev. Lett. 121, 136802 (2018); K. Kawabata, K. Shiozaki, and M. Ueda, Phys. Rev. B 98, 165148 (2018); C. Yuce and Z. Oztas, Sci. Rep. 8, 17416 (2018).
* (24) S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).
* (25) K. Kawabata, S. Higashikawa, Z. Gong, Y. Ashida, and M. Ueda, Nat. Commun. 10, 297 (2019); L. Jin and Z. Song, Phys. Rev. B 99, 081103(R) (2019); H. Wang, J. Ruan, and H. Zhang, Phys. Rev. B 99, 075130 (2019); D. S. Borgnia, A. J. Kruchkov, and R.-J. Slager, Phys. Rev. Lett. 124, 056802 (2020); Z. Ozcakmakli Turker and C. Yuce, Phys. Rev. A 99, 022127 (2019); E. Edvardsson, F. K. Kunst, and E. J. Bergholtz, Phys. Rev. B 99, 081302(R) (2019).
* (26) C.-H. Liu, H. Jiang, and S. Chen, Phys. Rev. B 99, 125103 (2019); C. H. Lee and R. Thomale, Phys. Rev. B 99, 201103(R) (2019); F. K. Kunst and V. Dwivedi, Phys. Rev. B 99, 245116 (2019); K. Yokomizo and S. Murakami, Phys. Rev. Lett. 123 066404 (2019).
* (27) R. Nehra, and D. Roy, Phys. Rev. B 105, 195407 (2022); K. Kawabata, K. Shiozaki, and S. Ryu, Phys. Rev. B 105, 165137 (2022); K. Yang, D. Varjas, E. J. Bergholtz, S. Morampudi, and F. Wilczek, arXiv:2202.04435 (unpublished).
* (28) A. Mostafazadeh, J. Math. Phys. 43, 205 (2002).
* (29) A. Mostafazadeh, J. Math. Phys. 43, 2814 (2002).
* (30) C. M. Bender, Rep. Prog. Phys. 70, 947 (2007).
* (31) A. A. Zyablovsky, A. P. Vinogradov, A. A. Pukhov, A. V. Dorofeenko, A. A. Lisyansky, Phys.-Uspekhi 57, 1063 (2014).
* (32) S. K. Özdemir, S. Rotter, F. Nori, and L. Yang, Nat. Mater. 18, 783 (2019).
* (33) S.-B. Zhang, M. M. Denner, T. Bzdušek, M. A. Sentef, and T. Neupert, Phys. Rev. B 106, L121102 (2022).
* (34) V. Khemani, M. Hermele and R. Nandkishore, Phys. Rev. B 101, 174204 (2020).
* (35) P. Sala, T. Rakovszky, R. Verresen, M. Knap and F. Pollmann, Phys. Rev. X 10, 011047 (2020).
* (36) T. Rakovszky, P. Sala, R. Verresen, M. Knap and F. Pollmann, Phys. Rev. B 101, 125126 (2020).
* (37) Z.-C. Yang, F. Liu, A. V. Gorshkov and T. Iadecola, Phys. Rev. Lett. 124, 207602 (2020).
* (38) G. De Tomasi, D. Hetterich, P. Sala, and F. Pollmann, Phys. Rev. B 100, 214313(2019).
* (39) P. Frey, L. Hackl, and S. Rachel, Phys. Rev. B 106, L220301 (2022).
* (40) S. Moudgalya and O. I. Motrunich, Phys. Rev. X 12, 011050 (2022); D. T. Stephen, O. Hart, and R. M. Nandkishore, arXiv:2209.03966 (unpublished); D. Hahn, P. A. McClarty, D. J. Luitz, SciPost Phys. 11, 074 (2021); N. Regnault and B. A. Bernevig, arXiv:2210.08019 (unpublished); D. Vu, K. Huang, X. Li, and S. Das Sarma, Phys. Rev. Lett. 128, 146601 (2022).
* (41) T. Kohlert, S. Scherg, P. Sala, F. Pollmann, B. H. Madhusudhana, I. Bloch, and M. Aidelsburger, arXiv:2106.15586 (unpublished).
* (42) B. Mukherjee, D. Banerjee, K. Sengupta, and A. Sen, Phys. Rev. B 104, 155117 (2021); P. Brighi, M. Ljubotina, and M. Serbyn, arXiv:2210.5607 (unpublished).
* (43) J. Lehmann, P. Sala, F. Pollmann, and T. Rakovszky, arXiv:2208.12260 (unpublished).
* (44) A. Chattopadhyay, B. Mukherjee, K. Sengupta, and A. Sen, arXiv:2208.13800 (unpublished).
* (45) S. Ghosh, I. Paul, and K. Sengupta, Phys. Rev. Lett. 130, 120401 (2023).
* (46) See supplementary information for more details.
* (47) R. G. Dias, Phys. Rev. B 62, 7791 (2000).
* (48) M. Rigol, V. Dunjko, M. Olshanii, Nature 452, 854 (2008).
* (49) E. Khatami, G. Pupillo, M. Srednicki, M. Rigol, Phys. Rev. Lett. 111, 050403 (2013).
|
# Optimal Power Allocation for HARQ Schemes over Time-Correlated Nakagami-m
Fading Channels
Zheng Shi1, Shaodan Ma1, Fen Hou1, Kam-Weng Tam1, and Yik-Chung Wu2
1Department of Electrical and Computer Engineering, University of Macau, Macau
2Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
###### Abstract
This paper investigates the problem of power allocation for hybrid automatic
repeat request (HARQ) schemes over time-correlated Nakagami-m fading channels
under outage constraint. The presence of time correlation complicates the
power allocation problem due to the involvement of multiple correlated fading
channels. Under a general time-correlated Nakagami-m fading channel with
exponential correlation, outage probabilities for three widely adopted HARQ
schemes, including Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ
with incremental redundancy (HARQ-IR), are first derived. With these results,
power allocation schemes are proposed to minimize the average total
transmission power with guaranteed outage performance. Simulation results
demonstrate the accuracy of our outage analysis and the effectiveness of our
proposed power allocation schemes. It is shown that our proposed power
allocation schemes can achieve significant power savings when compared with
fixed power allocation. Moreover, under practical low outage constraint, the
power efficiency is further improved when the time correlation is reduced
and/or the fading order is increased.
###### Index Terms:
Time-correlated Nakagami-m fading, hybrid automatic repeat request, power
allocation.
## I Introduction
Hybrid automatic repeat request (HARQ) is a powerful transmission protocol to
combat the detrimental effects of channel fading and noise due to its
combination of automatic repeat request and forward error correction.
Generally, there are three types of HARQ schemes, including Type I HARQ, HARQ
with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR).
For Type I HARQ, the erroneously received packets are discarded and only the
most recently received packet is used for decoding. Since the failed packet
may still contain some useful information, it can be exploited for performance
enhancement and the other two HARQ schemes are thus designed for this purpose.
They combine the erroneously received packets with subsequently received
packets for joint decoding to improve the performance. Their difference lies
in whether the same set of coded bits are transmitted in each HARQ round.
Specifically, for HARQ-CC, the same coded sequence is repetitively transmitted
in each HARQ round and maximal-ratio-combining (MRC) is employed to combine
all the received packets to recover the message, whereas HARQ-IR transmits
different sets of coded bits in each retransmission and code combining is
adopted for joint decoding.
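As a hedged illustration of these three decoding conditions (our Monte Carlo sketch, not the paper's method: the channel generator builds exponentially correlated Nakagami-m gains from AR(1) complex Gaussians, a surrogate for the exact joint PDF used later in Eq. (2), and all parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_nakagami_gains(L, m, rho, n_trials):
    """Channel gains |h_l|^2, l = 1..L, with Nakagami-m amplitudes built from
    m AR(1) complex Gaussian components per trial (an exponential-correlation
    surrogate, not the exact multivariate PDF)."""
    g = (rng.standard_normal((n_trials, m)) + 1j * rng.standard_normal((n_trials, m))) / np.sqrt(2)
    gains = np.empty((n_trials, L))
    for l in range(L):
        if l > 0:
            w = (rng.standard_normal((n_trials, m)) + 1j * rng.standard_normal((n_trials, m))) / np.sqrt(2)
            g = rho * g + np.sqrt(1 - rho**2) * w
        gains[:, l] = (np.abs(g) ** 2).sum(axis=1) / m      # E|h_l|^2 = 1
    return gains

L, m, rho, R = 4, 2, 0.5, 2.0
P = np.full(L, 10.0)                                        # per-round powers
snr = P * correlated_nakagami_gains(L, m, rho, 200_000)     # unit noise power

# Outage after L rounds for the three schemes:
out_I  = np.all(np.log2(1 + snr) < R, axis=1)               # every round fails
out_CC = np.log2(1 + snr.sum(axis=1)) < R                   # MRC of all rounds
out_IR = np.log2(1 + snr).sum(axis=1) < R                   # accumulated info
print("Type I :", out_I.mean())
print("HARQ-CC:", out_CC.mean())
print("HARQ-IR:", out_IR.mean())
```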
Power allocation for HARQ schemes has attracted considerable research attention recently. Most prior works consider either quasi-static fading channels [1, 2, 3] or fast fading channels [4, 5, 6]. To be specific, in [1],
an optimal power allocation scheme is proposed to minimize the average total
transmission power of HARQ-CC over quasi-static fading channels, where the
channel response remains constant during multiple HARQ rounds. Similar to [1],
outage-limited power allocation is investigated for HARQ-CC and HARQ-IR
schemes in both continuous and bursting communication systems in [2].
Considering the same quasi-static fading channels, power allocation is
investigated in [3]. A backward sequential calculation method is developed to
find the optimum power allocation. On the other hand, some prior literature considers fast fading channels, where channel responses vary independently among multiple transmissions. For example, in [4], power allocation is discussed for an HARQ-IR enabled distributed cooperative beamforming system, where the source and the relay have fixed transmission power in each HARQ
round. Another power allocation scheme is proposed for HARQ-CC over
independent Rayleigh fading channels in [5]. By reformulating the power
allocation problem as a geometric programming problem and using dominant-term
approximation, the optimal solution is found efficiently. The same approach is
further extended to the power allocation for HARQ-enabled incremental MIMO
systems in [6].
Apart from quasi-static and fast fading channels, another frequently
experienced channel is the time-correlated fading channel [7, 8], which usually occurs when the transceiver has low-to-medium mobility. Under time-correlated fading channels, power allocation becomes much more challenging due to the involvement of multiple correlated random variables, and there are few solutions, if any, in the literature. In this paper, we investigate power
allocation for HARQ schemes over time-correlated Nakagami-m fading channels. A
general multivariate Nakagami-m distribution with exponential correlation is
adopted to model time-correlated fading channels. The outage probabilities and
their asymptotic expressions are first derived for three HARQ schemes, i.e.,
Type I HARQ, HARQ-CC and HARQ-IR. These analytical results then enable the
optimal power allocation to minimize the average total transmission power with
guaranteed outage performance. Closed-form optimal solutions are found based
on the asymptotic outage expressions. Finally, these theoretical results are
validated through simulations. It is found that our proposed power allocation
schemes can achieve significant power savings when compared with fixed power
allocation. Moreover, under practical low outage constraint, the power
efficiency is further improved with the reduction of time correlation and the
increase of fading order.
The remainder of this paper is organized as follows. In Section II, system
model is given and outage analysis is conducted for three HARQ schemes.
Section III generalizes the problem of outage-limited power allocation for
three HARQ schemes, and optimal solutions are proposed in closed forms. In
Section IV, numerical results are presented and discussed to demonstrate the
efficiency of our proposed power allocation schemes. Finally, Section V
concludes this paper.
## II System Model and Outage Analysis
A point-to-point HARQ enabled system operating over time-correlated Nakagami-m
block-fading channels is considered in this paper. Following the HARQ
protocol, $L$ maximal transmissions are allowed for each single message. The
received signal $y_{l}$ in the $l$th HARQ round is written as
$y_{l}=\sqrt{P_{l}}h_{l}x_{l}+\eta_{l},\quad 1\leq l\leq L,$ (1)
where $x_{l}$ denotes the transmitted signal with unit power in the $l$th HARQ
round, $P_{l}$ refers to the transmit power in the $l$th HARQ round,
$\eta_{l}$ represents the complex Gaussian white noise with zero mean and unit
variance, i.e., $\eta_{l}\sim{\mathcal{CN}}(0,1)$, and $h_{l}$ is the channel
coefficient in the $l$th HARQ round. Unlike prior literature, time-correlated Nakagami-m fading channels are considered. More precisely, the joint distribution of the channel amplitudes $|{\bf{h}}_{L}|=[|h_{1}|,\cdots,|h_{L}|]$ is modeled as a multivariate Nakagami-m distribution with exponential correlation [7, 8, 9], whose joint probability density function (PDF) is given by
${f_{|{\bf{h}}_{L}|}}\left(z_{1},\cdots,z_{L}\right)=\int_{0}^{\infty}\frac{t^{m-1}}{\Gamma(m)}e^{-t}\prod_{l=1}^{L}\frac{2z_{l}^{2m-1}}{\Gamma(m)\left(\frac{\Omega_{l}\left(1-\rho^{2(l+\delta-1)}\right)}{m}\right)^{m}}\,e^{-\frac{mz_{l}^{2}}{\Omega_{l}\left(1-\rho^{2(l+\delta-1)}\right)}}\,e^{-\frac{\rho^{2(l+\delta-1)}t}{1-\rho^{2(l+\delta-1)}}}\,{}_{0}F_{1}\left(;m;\frac{mz_{l}^{2}\rho^{2(l+\delta-1)}t}{\Omega_{l}\left(1-\rho^{2(l+\delta-1)}\right)^{2}}\right)dt,\quad\rho\neq 1,$ (2)
where $\rho$ and $\delta$ denote the time correlation coefficient and the channel feedback delay, respectively, $m$ denotes the fading order, ${{\Omega_{l}}}$ is defined as the average power of $h_{l}$, i.e., ${{\Omega_{l}}}={\rm E}\{|h_{l}|^{2}\}$, $\Gamma(\cdot)$ denotes the Gamma function and ${}_{0}{F_{1}}(\cdot)$ denotes the confluent hypergeometric limit function [10, Eq. 9.14.1].
The system performance is fully characterized by outage probability, which is
defined as the probability that the message cannot be successfully decoded,
i.e., the mutual information is smaller than the target transmission rate $R$
bps/Hz. For the different HARQ schemes, the outage probabilities over time-correlated Nakagami-m fading channels are analyzed as follows.
### II-A Outage Probability of Type I HARQ
For Type I HARQ, only the most recently received packet is employed for
recovering the message. The outage probability $p_{out,l}^{\rm I}$ after $l$
transmissions can be formulated as
$p_{out,l}^{\rm{I}}=\Pr\left(I_{1}<R,\cdots,I_{l}<R\right)={F_{|{{\bf{h}}_{l}}|}}\left(\sqrt{\frac{2^{R}-1}{P_{1}}},\cdots,\sqrt{\frac{2^{R}-1}{P_{l}}}\right),$ (3)
where
${I_{\iota}}={\log_{2}}\left({1+{P_{\iota}}{{\left|{{h_{\iota}}}\right|}^{2}}}\right)$
denotes the mutual information in the $\iota$th transmission, and
${F_{|{{\bf{h}}_{l}}|}}(\cdot)$ denotes the joint cumulative distribution
function (CDF) with respect to $|{{\bf{h}}_{l}}|$, which can be derived in the
following lemma.
###### Lemma 1.
The joint CDF ${F_{|{{\bf{h}}_{l}}|}}(y_{1},\cdots,y_{l})$ can be written as a weighted sum of joint CDFs of $l$ independent Nakagami random variables ${\bf A}_{\bf n}$ with parameters $(m+n_{\iota},{\Omega_{\iota}}(1-{\rho^{2(\iota+\delta-1)}})(m+n_{\iota})/m)$, where ${\bf n}=[n_{1},\cdots,n_{l}]$ and $1\leq\iota\leq l$. Precisely,
${F_{|{{\bf{h}}_{l}}|}}\left({{y_{1}},\cdots,{y_{l}}}\right)=\sum\limits_{{n_{1}},\cdots,{n_{l}}=0}^{\infty}{{W_{\bf{n}}}{F_{{{\bf{A}}_{\bf{n}}}}}\left({{y_{1}},\cdots,{y_{l}}}\right)},$
(4)
where the coefficient $W_{\bf n}$ is given by
${W_{\bf{n}}}=\frac{{\Gamma\left({m+\sum\limits_{\iota=1}^{l}{{n_{\iota}}}}\right)}}{{\Gamma\left(m\right){{\left({1+\sum\limits_{\iota=1}^{l}{{\omega_{\iota}}}}\right)}^{m}}}}\prod\limits_{\iota=1}^{l}{\frac{1}{{{n_{\iota}}!}}{{\left({\frac{{{\omega_{\iota}}}}{{1+\sum\limits_{\iota=1}^{l}{{\omega_{\iota}}}}}}\right)}^{{n_{\iota}}}}}$
(5)
and satisfies
$\sum\limits_{{n_{1}},\cdots,{n_{l}}=0}^{\infty}{{W_{\bf{n}}}}=1$,
${\omega_{\iota}}{\rm{=}}\frac{{{\rho^{2\left({\iota+\delta-1}\right)}}}}{{1-{\rho^{2\left({\iota+\delta-1}\right)}}}}$,
and the joint CDF with respect to ${\bf A}_{\bf n}$,
${F_{{{\bf{A}}_{\bf{n}}}}}\left({{y_{1}},\cdots,{y_{l}}}\right)$, is
explicitly expressed as
${F_{{{\bf{A}}_{\bf{n}}}}}\left({{y_{1}},\cdots,{y_{l}}}\right)=\prod\limits_{\iota=1}^{l}{\frac{{\Upsilon\left({m+{n_{\iota}},\frac{{m{y_{\iota}}^{2}}}{{{\Omega_{\iota}}\left({1-{\rho^{2\left({\iota+\delta-1}\right)}}}\right)}}}\right)}}{{\Gamma\left({m+{n_{\iota}}}\right)}}}$
(6)
with $\Upsilon(\cdot,\cdot)$ being the lower incomplete Gamma function.
###### Proof.
The result directly follows from (2) and the series expansion of
${}_{0}{F_{1}}(\cdot)$ [10, Eq. 9.14.1]. ∎
With Lemma 1, the outage probability of Type I HARQ can be obtained as
$p_{out,l}^{\rm{I}}=\sum\limits_{{n_{1}},\cdots,{n_{l}}=0}^{\infty}{{W_{\bf{n}}}{F_{{{\bf{A}}_{\bf{n}}}}}\left({\sqrt{\frac{{{2^{R}}-1}}{{{P_{1}}}}},\cdots,\sqrt{\frac{{{2^{R}}-1}}{{{P_{l}}}}}}\right)}.$
(7)
In practice, the outage probability can be computed by truncating the infinite
series in (7). Herein, an efficient truncation method is proposed as
$\tilde{p}_{out,l}^{\rm{I}}=\sum\limits_{t=0}^{N}\sum\limits_{{n_{1}}+\cdots+{n_{l}}=t}{W_{\bf{n}}}{F_{{{\bf{A}}_{\bf{n}}}}}\left(\sqrt{\frac{2^{R}-1}{P_{1}}},\cdots,\sqrt{\frac{2^{R}-1}{P_{l}}}\right),$ (8)
where $N$ defines the truncation order. It can be proved that the truncation error decreases exponentially with $N$. The proof is omitted here due to the space limit.
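A hedged numerical sketch of the truncated series (8) (our code; the helper name and parameter values are only examples) follows directly from (5) and (6), with $\Upsilon(a,x)/\Gamma(a)$ evaluated as the regularized lower incomplete gamma function:

```python
import itertools
import numpy as np
from scipy.special import gammainc, gammaln

def type1_outage_truncated(P, R, m, rho, delta, Omega, N):
    """Truncated series (8) for the Type I HARQ outage after l = len(P) rounds,
    assuming a common average channel power Omega for all rounds."""
    P = np.asarray(P, dtype=float)
    l = len(P)
    c = rho ** (2 * (np.arange(1, l + 1) + delta - 1))   # rho^{2(iota+delta-1)}
    omega = c / (1 - c)                                  # omega_iota of Lemma 1
    S = 1 + omega.sum()
    arg = m * (2**R - 1) / (P * Omega * (1 - c))         # argument in Eq. (6)
    out = 0.0
    for n in itertools.product(range(N + 1), repeat=l):
        if sum(n) > N:
            continue
        n = np.asarray(n, dtype=float)
        # log W_n from Eq. (5), computed in logs for numerical stability
        logW = (gammaln(m + n.sum()) - gammaln(m) - m * np.log(S)
                - gammaln(n + 1).sum() + (n * np.log(omega / S)).sum())
        F = gammainc(m + n, arg).prod()                  # Eq. (6), regularized
        out += np.exp(logW) * F
    return out

print(type1_outage_truncated(P=[10.0, 10.0], R=2.0, m=2, rho=0.5,
                             delta=1, Omega=1.0, N=20))
```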
Under high SNR, the outage probability can be asymptotically derived as shown
in the following theorem.
###### Theorem 1.
Under high SNR regime, i.e., $P_{1},\cdots,P_{l}\to\infty$, the outage
probability $p_{out,l}^{\rm{I}}$ is written as
$p_{out,l}^{\rm{I}}=\frac{{{m^{ml}}\ell\left(l,\rho\right){{\left({{2^{R}}-1}\right)}^{lm}}}}{{{\Gamma^{l}}\left({m+1}\right)\prod\limits_{\iota=1}^{l}{{\Omega_{\iota}}^{m}{P_{\iota}}^{m}}}},$
(9)
where
$\ell\left(l,\rho\right)={\left({\left({1+\sum\limits_{\iota=1}^{l}{\frac{{{\rho^{2\left({\iota+\delta-1}\right)}}}}{{1-{\rho^{2\left({\iota+\delta-1}\right)}}}}}}\right)\prod\limits_{\iota=1}^{l}{\left({1-{\rho^{2\left({\iota+\delta-1}\right)}}}\right)}}\right)^{-m}}$.
###### Proof.
By using the series expansion of $\Upsilon(\cdot,\cdot)$ [10, Eq. 8.354.1] and omitting terms of higher order than $\prod_{\iota=1}^{l}P_{\iota}^{-m}$, the outage probability (7) can be asymptotically expressed as (9). ∎
### II-B Outage Probability of HARQ-CC
In the HARQ-CC scheme, all the previously received packets are combined through
MRC for decoding. The outage probability after $l$ HARQ rounds is thus written
as
$p_{out,l}^{CC}=\Pr\left(\log_{2}\left(1+\sum\limits_{\iota=1}^{l}P_{\iota}\left|h_{\iota}\right|^{2}\right)<R\right)=\Pr\left(Y_{l}\triangleq\sum\limits_{\iota=1}^{l}P_{\iota}\left|h_{\iota}\right|^{2}<2^{R}-1\right)={F_{{Y_{l}}}}\left(2^{R}-1\right),$ (10)
where ${F_{{Y_{l}}}}\left(\cdot\right)$ denotes the CDF of $Y_{l}$. After
where ${F_{{Y_{l}}}}\left(\cdot\right)$ denotes the CDF of $Y_{l}$. After
deriving the CDF ${F_{{Y_{l}}}}\left(\cdot\right)$ using the method of moment
generating function (MGF), the outage probability $p_{out,l}^{CC}$ is derived
in the following theorem.
###### Theorem 2.
The outage probability for HARQ-CC scheme $p_{out,l}^{CC}$ can be obtained as
$p_{out,l}^{CC}=1+\frac{m^{ml}\ell\left(l,\rho\right)}{\prod\nolimits_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\sum\limits_{\kappa=1}^{\mathcal{K}}\sum\limits_{\varsigma=1}^{mq_{\kappa}}\frac{\Phi_{\kappa\varsigma}\left(-\lambda_{\kappa}\right)}{\left(mq_{\kappa}-\varsigma\right)!\left(\varsigma-1\right)!}\left(2^{R}-1\right)^{mq_{\kappa}-\varsigma}e^{-\lambda_{\kappa}\left(2^{R}-1\right)},$ (11)
where $\lambda_{1},\cdots,\lambda_{\mathcal{K}}$ define $\mathcal{K}$ distinct
poles of the MGF of $Y_{l}$ with multiplicities
$q_{1},\cdots,q_{\mathcal{K}}$, respectively,
$\sum\nolimits_{\kappa=1}^{\mathcal{K}}{{q_{\kappa}}}=l$, and
${\Phi_{\kappa\varsigma}}\left(s\right)=\frac{{{d^{\varsigma-1}}}}{{d{s^{\varsigma-1}}}}\left({{s^{-1}}\prod\limits_{\tau=1,\tau\neq\kappa}^{\cal
K}{{{\left({s{\rm{+}}{\lambda_{\tau}}}\right)}^{-m{q_{\tau}}}}}}\right)$.
Under high SNR regime, the outage probability $p_{out,l}^{CC}$ can also be
expressed asymptotically as
$p_{out,l}^{CC}=\frac{{{m^{ml}}\ell\left(l,\rho\right){{\left({{2^{R}}-1}\right)}^{ml}}}}{{\Gamma\left({ml+1}\right)\prod\nolimits_{\iota=1}^{l}{{\Omega_{\iota}}^{m}{P_{\iota}}^{m}}}}.$
(12)
###### Proof.
Please see Appendix A. ∎
### II-C Outage Probability of HARQ-IR
Different from Type I HARQ and HARQ-CC, HARQ-IR accumulates mutual information
in all previous HARQ rounds for decoding. From information theoretical
perspective, an outage happens when the accumulated mutual information is less
than the target transmission rate $R$. Thus the outage probability after $l$
HARQ rounds is formulated as
$p_{out,l}^{IR}=\Pr\left({\sum\limits_{\iota=1}^{l}{{{\log}_{2}}\left({1+{P_{\iota}}{{\left|{{h_{\iota}}}\right|}^{2}}}\right)}<R}\right).$
(13)
Due to the time correlation among the $h_{l}$, it is intractable to find a closed-form expression for (13). Instead, a lower bound of $p_{out,l}^{IR}$ is adopted to characterize the outage probability of HARQ-IR. By using Jensen’s inequality, $p_{out,l}^{IR}$ is lower bounded as
$p_{out,l}^{IR}\geq\Pr\left(\log_{2}\left(1+\frac{1}{l}\sum\limits_{\iota=1}^{l}P_{\iota}\left|h_{\iota}\right|^{2}\right)<\frac{R}{l}\right)={F_{{Y_{l}}}}\left(l\left(2^{R/l}-1\right)\right)\triangleq{p_{out,l}^{IR,lower}}.$ (14)
With the CDF ${F_{{Y_{l}}}}\left(\cdot\right)$ derived in Theorem 2, the lower
bound ${p_{out,l}^{IR,lower}}$ and its asymptotic expression can be derived in
the following theorem.
###### Theorem 3.
The lower bound of the outage probability $p_{out,l}^{IR,lower}$ can be
obtained as
${p_{out,l}^{IR,lower}}=1+\frac{m^{ml}\ell\left(l,\rho\right)}{\prod\nolimits_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\sum\limits_{\kappa=1}^{\mathcal{K}}\sum\limits_{\varsigma=1}^{mq_{\kappa}}\frac{\Phi_{\kappa\varsigma}\left(-\lambda_{\kappa}\right)}{\left(mq_{\kappa}-\varsigma\right)!\left(\varsigma-1\right)!}\left(l\left(2^{R/l}-1\right)\right)^{mq_{\kappa}-\varsigma}e^{-\lambda_{\kappa}l\left(2^{R/l}-1\right)}.$ (15)
Under high SNR regime, ${p_{out,l}^{IR,lower}}$ is further simplified as
$p_{out,l}^{IR,lower}=\frac{{{(lm)^{ml}}\ell\left(l,\rho\right){{\left({{2^{R/l}}-1}\right)}^{ml}}}}{{\Gamma\left({ml+1}\right)\prod\nolimits_{\iota=1}^{l}{{\Omega_{\iota}}^{m}{P_{\iota}}^{m}}}}.$
(16)
## III Optimal Power allocation
In this section, the problem of power allocation is studied for the three HARQ
schemes. Generally, the average total transmission power for HARQ is defined
as ${\bar{P}=\sum\nolimits_{l=1}^{L}{{P_{l}}{p_{out,l-1}}}}$ [5]. Here
${p_{out,l}}$ refers to the outage probability after $l$ transmissions and it
unifies the cases of ${p_{out,l}^{I}}$, ${p_{out,l}^{CC}}$ and
${p_{out,l}^{IR,lower}}$. When power efficiency is of concern under a certain performance requirement, the transmission power among multiple HARQ rounds
should be properly designed to minimize the total transmission power while
guaranteeing the performance. The power allocation problem can be formulated
as
$\begin{array}{cl}\mathop{\min}\limits_{{P_{1}},{P_{2}},\cdots,{P_{L}}}&\bar{P}=\sum\limits_{l=1}^{L}{P_{l}}\,{p_{out,l-1}}\\ {\rm s.t.}&{P_{l}}\geq 0,\quad 1\leq l\leq L\\ &{p_{out,L}}\leq\varepsilon,\end{array}$ (17)
where $\varepsilon$ represents the outage tolerance.
Due to the complicated expressions of the exact outage probabilities given in
(7), (11) and (15), it is impossible to find closed-form power allocation
solutions directly. However, interior-point methods can be exploited to
numerically solve the problem (17). Meanwhile, based on the asymptotic
expressions of the outage probabilities, an efficient power allocation scheme
can be found as follows.
Notice that the asymptotic outage probabilities in (9), (12) and (16) can be
unified as
${p_{out,l}}\simeq\frac{{{\phi_{l}}}}{{{{\left({\prod\limits_{k=1}^{l}{{P_{k}}}}\right)}^{m}}}},\,0\leq
l\leq L,$ (18)
where $\phi_{l}$ depends on the HARQ scheme; more precisely,
${\phi_{l}}=\begin{cases}\dfrac{m^{ml}\ell\left(l,\rho\right)\left(2^{R}-1\right)^{ml}}{\Gamma^{l}\left(m+1\right)\prod\nolimits_{\iota=1}^{l}\Omega_{\iota}^{m}},&\text{Type I;}\\ \dfrac{m^{ml}\ell\left(l,\rho\right)\left(2^{R}-1\right)^{ml}}{\Gamma\left(ml+1\right)\prod\nolimits_{\iota=1}^{l}\Omega_{\iota}^{m}},&\text{HARQ-CC;}\\ \dfrac{\left(ml\right)^{ml}\ell\left(l,\rho\right)\left(2^{R/l}-1\right)^{ml}}{\Gamma\left(ml+1\right)\prod\nolimits_{\iota=1}^{l}\Omega_{\iota}^{m}},&\text{HARQ-IR.}\end{cases}$ (19)
Substituting (18) into (17), the Lagrangian of the optimization problem (17)
is formed as
$\mathfrak{L}\left({P_{1}},\cdots,{P_{L}},\mu,{\nu_{1}},\cdots,{\nu_{L}}\right)=\sum\limits_{l=1}^{L}{P_{l}}\frac{\phi_{l-1}}{\left(\prod\limits_{k=1}^{l-1}{P_{k}}\right)^{m}}+\mu\left(\frac{\phi_{L}}{\left(\prod\limits_{k=1}^{L}{P_{k}}\right)^{m}}-\varepsilon\right)-\sum\limits_{l=1}^{L}{\nu_{l}}{P_{l}},$ (20)
where $\mu,\nu_{1},\cdots,\nu_{L}$ are the Lagrangian multipliers of the constraints in the problem (17). Since the Karush-Kuhn-Tucker (KKT) conditions are necessary for an optimal solution, we have
${\left.{\frac{{\partial\mathfrak{L}}}{{\partial{P_{n}}}}}\right|_{\left({{P_{1}^{*}},\cdots,{P_{L}^{*}},{\mu^{*}},{\nu_{1}}^{*},\cdots,{\nu_{L}}^{*}}\right)}}=0,$
(21)
${\mu^{*}}\left({\frac{{{\phi_{L}}}}{{{{\left({\prod\limits_{k=1}^{L}{P_{k}^{*}}}\right)}^{m}}}}-\varepsilon}\right)=0,$ (22)
${\nu_{l}}^{*}{P_{l}}^{*}=0,$ (23)
where $\mu^{*},{\nu_{1}}^{*},\cdots,{\nu_{L}}^{*},{P_{l}}^{*}$ denote the
optimal Lagrangian multipliers and the optimal power allocation, respectively.
Based on the KKT conditions (21)-(23), the optimal power allocation solution to (17) can be found in closed form as follows.
###### Theorem 4.
The optimal solution to the problem (17) is uniquely given as
$P_{L}^{*}=\left(\frac{\phi_{L}\prod\limits_{k=2}^{L}\left(\left(m+1\right)\frac{\phi_{k-1}}{\phi_{k-2}}\right)^{\frac{1}{\left(m+1\right)^{k-1}}}}{\phi_{L-1}\left(m+1\right)^{L-1}\varepsilon}\right)^{\frac{\left(m+1\right)^{L-1}}{\left(m+1\right)^{L}-1}},$ (24)
$P_{n}^{*}=\prod\limits_{k=n+1}^{L}\left(\left(m+1\right)\frac{\phi_{k-1}}{\phi_{k-2}}\right)^{\frac{1}{\left(m+1\right)^{k-n}}}\left(P_{L}^{*}\right)^{\frac{1}{\left(m+1\right)^{L-n}}},\quad\text{for}\ 1\leq n\leq L-1.$ (25)
Moreover, the minimal average total transmission power $\bar{P}^{*}$ is
$\bar{P}^{*}=\frac{\varepsilon\left(P_{L}^{*}\right)^{m+1}\phi_{L-1}}{\phi_{L}}\cdot\frac{\left(m+1\right)^{L}-1}{m}.$ (26)
###### Proof.
Please see Appendix B. ∎
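For concreteness, a sketch of Theorem 4 (our code, assuming equal average channel powers $\Omega_{\iota}=1$; note $\phi_{0}=1$ since $p_{out,0}=1$) computes $\phi_{l}$ from (19) and then evaluates (24)-(26):

```python
import numpy as np
from math import gamma as Gamma

def phi(l, scheme, m, rho, delta, R):
    """phi_l of Eq. (19) with Omega_iota = 1; phi_0 = 1 since p_out,0 = 1."""
    if l == 0:
        return 1.0
    c = rho ** (2 * (np.arange(1, l + 1) + delta - 1))
    ell = ((1 + (c / (1 - c)).sum()) * np.prod(1 - c)) ** (-m)
    if scheme == "I":
        return m**(m * l) * ell * (2**R - 1)**(m * l) / Gamma(m + 1)**l
    if scheme == "CC":
        return m**(m * l) * ell * (2**R - 1)**(m * l) / Gamma(m * l + 1)
    return (m * l)**(m * l) * ell * (2**(R / l) - 1)**(m * l) / Gamma(m * l + 1)

def optimal_powers(L, scheme, m, rho, delta, R, eps):
    """Closed-form solution of Theorem 4, Eqs. (24)-(26)."""
    ph = [phi(l, scheme, m, rho, delta, R) for l in range(L + 1)]
    ratio = lambda k: (m + 1) * ph[k - 1] / ph[k - 2]
    prod_L = np.prod([ratio(k) ** (1.0 / (m + 1)**(k - 1)) for k in range(2, L + 1)])
    PL = (ph[L] * prod_L / (ph[L - 1] * (m + 1)**(L - 1) * eps)) \
         ** ((m + 1)**(L - 1) / ((m + 1)**L - 1))                        # Eq. (24)
    P = [PL]
    for n in range(L - 1, 0, -1):                                        # Eq. (25)
        Pn = np.prod([ratio(k) ** (1.0 / (m + 1)**(k - n))
                      for k in range(n + 1, L + 1)]) * PL ** (1.0 / (m + 1)**(L - n))
        P.insert(0, Pn)
    Pbar = eps * PL**(m + 1) * ph[L - 1] / ph[L] * ((m + 1)**L - 1) / m  # Eq. (26)
    return np.array(P), Pbar

P, Pbar = optimal_powers(L=2, scheme="IR", m=2, rho=0.5, delta=1, R=2.0, eps=1e-6)
print("P* =", P, "  average total power =", Pbar)
```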
## IV Numerical Results and Discussions
In this section, the analytical results are verified through simulations, and
our proposed power allocation (PPA) scheme is compared to the fixed power
allocation (FPA) scheme in [4]. Notice that for FPA, the problem (17) is
solved by adding the constraint $P_{1}=\cdots=P_{L}$. In the sequel, we take
systems with $\Omega_{1}=\cdots=\Omega_{L}=1$, $\delta=1\,$s and $R=2\,$bps/Hz as
examples.
### IV-A Comparison of PPA and FPA
The minimal total transmission powers $\bar{P}^{*}$ of the PPA and FPA schemes
are compared in Fig. 1. Outage-limited systems with the various HARQ schemes
and parameters $L=2$, $m=2$ and $\rho=0.5$ are considered. Clearly, the results
of PPA using the asymptotic outage expressions (PPA-A) agree well with those of
PPA using the exact outage expressions (PPA-E) under a low outage constraint
$\varepsilon$. It is also readily found that PPA-A performs better than FPA
under a low outage constraint $\varepsilon$, and their performance gap
increases significantly as $\varepsilon$ decreases. Moreover, the results
reveal that HARQ-IR is superior to Type I HARQ and HARQ-CC in terms of power
efficiency.
Figure 1: Comparison of the proposed power allocation with the fixed power
allocation.
### IV-B Impacts of Time Correlation
Since the performance of PPA-A is asymptotically equivalent to that of PPA-E
under low $\varepsilon$, PPA-A is adopted to test the impact of time
correlation on power allocation. Fig. 2 plots the minimal total transmission
power $\bar{P}^{*}$ against the time correlation coefficient $\rho$, with
parameters $m=2$ and $\varepsilon=10^{-6}$. It can easily be seen that
increasing the time correlation $\rho$ increases the minimal total
transmission power $\bar{P}^{*}$ for both $L=2$ and $L=4$. This means that
time correlation has a negative effect on power efficiency under a low outage
constraint.
Figure 2: Impact of time correlation.
### IV-C Impacts of Fading Order
Fig. 3 depicts the impact of the fading order $m$ on power allocation, with
$\rho=0.5$ and $\varepsilon=10^{-6}$. Clearly, the minimal total transmission
power $\bar{P}^{*}$ decreases as the fading order $m$ increases. In fact, a
higher fading order leads to a higher diversity order introduced by the
channel, thus reducing the power consumption under a given outage performance
constraint.
Figure 3: Impact of fading order.
## V Conclusions
Outage-limited power allocation for various HARQ schemes operating over time-
correlated Nakagami-m fading channels has been investigated in this paper.
After deriving the outage probabilities and their asymptotic expressions, an
optimal power allocation solution has been proposed in closed form. It has
been demonstrated that our proposed solution can achieve significant power
savings when compared to the fixed power allocation solution. The superiority
of the proposed optimal solution in terms of power consumption is further
enhanced when the channel time correlation is reduced and/or the fading order
increases.
## VI Acknowledgements
This work was supported in part by the Research Committee of University of
Macau under grants: MYRG101(Y1-L3)-FST13-MSD, MYRG2014-00146-FST and
MYRG2016-00146-FST, in part by the Macau Science and Technology Development
Fund under grant FDCT 091/2015/A3, and in part by the National Natural Science
Foundation of China under grant No.61601524.
## Appendix A Proof of Theorem 2
The moment generating function (MGF) of $Y_{l}$ can be written as
$M_{Y_{l}}(s)=\mathrm{E}\left(e^{sY_{l}}\right)=\int_{0}^{\infty}\cdots\int_{0}^{\infty}e^{s\sum_{\iota=1}^{l}P_{\iota}x_{\iota}^{2}}f_{|\mathbf{h}_{l}|}\left(x_{1},\cdots,x_{l}\right)dx_{1}\cdots dx_{l}.$ (27)
Plugging (2) into (27), after some algebraic manipulations, it follows that
$M_{Y_{l}}(s)=\left(\prod_{\iota=1}^{l}\left(1-sP_{\iota}\Omega_{\iota}\left(1-\rho^{2(\iota+\delta-1)}\right)/m\right)\times\left(1+\sum_{\iota=1}^{l}\frac{sP_{\iota}\Omega_{\iota}\rho^{2(\iota+\delta-1)}/m}{sP_{\iota}\Omega_{\iota}\left(1-\rho^{2(\iota+\delta-1)}\right)/m-1}\right)\right)^{-m}$ (30)
$=\frac{m^{ml}\ell(l,\rho)}{\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\prod_{\kappa=1}^{\mathcal{K}}\left(\lambda_{\kappa}-s\right)^{-mq_{\kappa}},$ (31)
where $\lambda_{1},\cdots,\lambda_{\mathcal{K}}$ denote the $\mathcal{K}$ distinct
poles of $M_{Y_{l}}(s)$ with multiplicities $q_{1},\cdots,q_{\mathcal{K}}$,
respectively, and $\sum_{\kappa=1}^{\mathcal{K}}q_{\kappa}=l$. After some tedious
manipulations, $M_{Y_{l}}(s)$ can be simplified as
$M_{Y_{l}}(s)=\det\left(\mathbf{I}-s\mathbf{F}^{1/2}\mathbf{E}\mathbf{F}^{1/2}\right)^{-m}$,
where the notation $\det(\cdot)$ refers to the determinant, $\mathbf{I}$ represents
an identity matrix, $\mathbf{F}$ is an $l\times l$ diagonal matrix with diagonal
entries $\{\Omega_{\iota}P_{\iota}/m\}_{\iota=1}^{l}$, and $\mathbf{E}$ is an
$l\times l$ positive definite matrix given by
$\mathbf{E}=\left[\begin{matrix}1&\rho^{2\delta+1}&\cdots&\rho^{2\delta+l-1}\\ \rho^{2\delta+1}&1&\cdots&\rho^{2\delta+l}\\ \vdots&\vdots&\ddots&\vdots\\ \rho^{2\delta+l-1}&\rho^{2\delta+l}&\cdots&1\end{matrix}\right].$ (32)
Since $1/\lambda_{1},\cdots,1/\lambda_{\mathcal{K}}$ are the eigenvalues of
the positive definite matrix $\mathbf{F}^{1/2}\mathbf{E}\mathbf{F}^{1/2}$, we have
$\lambda_{1},\cdots,\lambda_{\mathcal{K}}>0$. By using the inverse Laplace
transform and its integration property [11, Eq. 9.109], the CDF of $Y_{l}$ is
derived as
$F_{Y_{l}}(y)=\mathcal{L}^{-1}\left\{M_{Y_{l}}(-s)\right\}(y)=\frac{1}{2\pi j}\int_{a-j\infty}^{a+j\infty}\frac{M_{Y_{l}}(-s)}{s}e^{sy}ds.$ (33)
By using [12, Eq. 5.2.21], (33) can be calculated as
$F_{Y_{l}}(y)=\frac{m^{ml}\ell(l,\rho)}{\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\frac{1}{2\pi j}\int_{a-j\infty}^{a+j\infty}\frac{e^{sy}}{s\prod_{\kappa=1}^{\mathcal{K}}\left(s+\lambda_{\kappa}\right)^{mq_{\kappa}}}ds=1+\frac{m^{ml}\ell(l,\rho)}{\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\sum_{\kappa=1}^{\mathcal{K}}\sum_{\varsigma=1}^{mq_{\kappa}}\frac{\Phi_{\kappa\varsigma}\left(-\lambda_{\kappa}\right)}{\left(mq_{\kappa}-\varsigma\right)!\left(\varsigma-1\right)!}y^{mq_{\kappa}-\varsigma}e^{-\lambda_{\kappa}y},$ (34)
where $\Phi_{\kappa\varsigma}(s)=\frac{d^{\varsigma-1}}{ds^{\varsigma-1}}\left(s^{-1}\prod_{\tau=1,\tau\neq\kappa}^{\mathcal{K}}\left(s+\lambda_{\tau}\right)^{-mq_{\tau}}\right)$.
Therefore, by using (34) together with $y=2^{R}-1$, (11) in Theorem 2 is
proved.
By using the Maclaurin series expansion, (34) can be further rewritten as
$F_{Y_{l}}(y)=\sum_{n=0}^{\infty}\frac{F_{Y_{l}}^{(n)}(0)}{n!}y^{n}.$ (35)
Since
$F_{Y_{l}}^{(n)}(0)=\frac{m^{ml}\ell(l,\rho)}{\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}\frac{1}{2\pi j}\int_{a-j\infty}^{a+j\infty}\frac{s^{n-1}}{\prod_{\kappa=1}^{\mathcal{K}}\left(s+\lambda_{\kappa}\right)^{mq_{\kappa}}}ds,\quad 0\leq n\leq ml,$ (36)
it follows by using the initial-value theorem of the Laplace transform [11,
Eq. 9.5.10] that $F_{Y_{l}}^{(1)}(0)=\cdots=F_{Y_{l}}^{(ml-1)}(0)=0$ and
$F_{Y_{l}}^{(ml)}(0)=\frac{m^{ml}\ell(l,\rho)}{\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}$.
Moreover, it can be proved that $F_{Y_{l}}^{(n)}(0)$ is a higher-order term of
$\prod_{\iota=1}^{l}P_{\iota}^{-m}$ for $n>ml$; the proof is omitted due to
space limitations. Thus it yields
$F_{Y_{l}}(y)=\frac{m^{ml}\ell(l,\rho)y^{ml}}{\Gamma(ml+1)\prod_{\iota=1}^{l}\Omega_{\iota}^{m}P_{\iota}^{m}}+o\left(\prod_{\iota=1}^{l}P_{\iota}^{-m}\right).$ (37)
Hence under high SNR, i.e., $P_{\iota}\to\infty$, (12) can be derived by using
(37) together with $y=2^{R}-1$. Thus Theorem 2 is proved.
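As a numerical sanity check of the determinant factorization used above (a sketch; the rank-one-plus-diagonal structure of $\mathbf{E}$, with off-diagonal entries $\rho^{(i+\delta-1)+(j+\delta-1)}$, is read off from (30) and (32)), one can compare $\det(\mathbf{I}-s\mathbf{F}^{1/2}\mathbf{E}\mathbf{F}^{1/2})^{-m}$ with the product form (30) at a test point:

```python
import numpy as np

l, m, delta, rho = 4, 2, 1, 0.5
P = np.array([1.0, 2.0, 1.5, 0.8])
Omega = np.ones(l)
idx = np.arange(1, l + 1)

# E from (32): ones on the diagonal, rho^{(i+delta-1)+(j+delta-1)} off it
v = rho ** (idx + delta - 1)
E = np.outer(v, v)
np.fill_diagonal(E, 1.0)
Fh = np.diag(np.sqrt(Omega * P / m))

s = -0.3  # test point on the negative real axis
lhs = np.linalg.det(np.eye(l) - s * Fh @ E @ Fh) ** (-m)

# right-hand side of (30)
d = 1.0 - rho ** (2 * (idx + delta - 1))
rhs = (np.prod(1 - s * P * Omega * d / m)
       * (1 + np.sum((s * P * Omega * (1 - d) / m)
                     / (s * P * Omega * d / m - 1)))) ** (-m)
assert np.isclose(lhs, rhs)
```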
## Appendix B Proof of Theorem 4
Clearly, (18) requires $P_{l}^{*}\neq 0$, and hence (23) gives
$\nu_{l}^{*}=0$. Therefore, after some algebraic manipulations, (21) can be
rewritten as
$\left.\frac{\partial\mathfrak{L}}{\partial P_{n}}\right|_{\left(P_{1}^{*},\cdots,P_{L}^{*},\mu^{*}\right)}=\frac{\phi_{n-1}}{\left(\prod_{k=1}^{n-1}P_{k}^{*}\right)^{m}}-\frac{m}{P_{n}^{*}}\sum_{l=n+1}^{L}P_{l}^{*}\frac{\phi_{l-1}}{\left(\prod_{k=1}^{l-1}P_{k}^{*}\right)^{m}}-\frac{m}{P_{n}^{*}}\mu^{*}\frac{\phi_{L}}{\left(\prod_{k=1}^{L}P_{k}^{*}\right)^{m}}=0.$ (38)
Together with
$\left.\frac{\partial\mathfrak{L}}{\partial P_{n-1}}\right|_{\left(P_{1}^{*},\cdots,P_{L}^{*},\mu^{*}\right)}=0$,
(38) reduces to
$P_{n}^{*}=\left(m+1\right)P_{n+1}^{*}\frac{\phi_{n}}{\phi_{n-1}\left(P_{n}^{*}\right)^{m}}.$ (39)
Now from (39), we can derive $P_{n}^{*}$ recursively as (25). Regarding
$P_{L}^{*}$, by letting $n=L$ in (38), we have
$\frac{\phi_{L-1}}{\left(\prod_{k=1}^{L-1}P_{k}^{*}\right)^{m}}=m\mu^{*}\frac{\phi_{L}}{P_{L}^{*}\left(\prod_{k=1}^{L}P_{k}^{*}\right)^{m}}\;\Rightarrow\;\mu^{*}=\frac{\left(P_{L}^{*}\right)^{m+1}\phi_{L-1}}{m\phi_{L}}.$ (40)
Recalling that $P_{L}^{*}\neq 0$, we have $\mu^{*}\neq 0$. According to (22),
it follows that
$\frac{\phi_{L}}{\left(\prod_{k=1}^{L}P_{k}^{*}\right)^{m}}-\varepsilon=0\;\Rightarrow\;\left(\prod_{n=1}^{L}P_{n}^{*}\right)^{m}=\frac{\phi_{L}}{\varepsilon}.$ (41)
Substituting (25) into (41) yields (24). Moreover, by using (39), it follows
that
$P_{n}^{*}=\frac{\left(m+1\right)^{L-n}\phi_{L-1}}{\phi_{n-1}}\cdot\frac{P_{L}^{*}}{\left(P_{n}^{*}\cdots P_{L-1}^{*}\right)^{m}}.$ (42)
Finally, substituting (41) and (42) into the objective function of (17) shows
that the $n$-th summand equals
$(m+1)^{L-n}\varepsilon\left(P_{L}^{*}\right)^{m+1}\phi_{L-1}/\phi_{L}$;
summing this geometric series over $1\leq n\leq L$ yields (26), which
completes the proof.
## References
* [1] W. Su, S. Lee, D. Pados, J. D. Matyjas _et al._ , “Optimal power assignment for minimizing the average total transmission power in hybrid-ARQ Rayleigh fading links,” _IEEE Trans. Commun._ , vol. 59, no. 7, pp. 1867–1877, Jul. 2011.
* [2] B. Makki, A. Graell i Amat, and T. Eriksson, “Green communication via power-optimized HARQ protocols,” _IEEE Trans. Veh. Technol._ , vol. 63, no. 1, pp. 161–177, Jan. 2013.
* [3] G. Wang, J. Wu, and Y. R. Zheng, “Optimum energy and spectral efficient transmissions for delay-constrained hybrid ARQ systems,” _IEEE Trans. Veh. Technol._, vol. PP, no. 99, pp. 1–1, Aug. 2014.
* [4] J. Choi, W. Xing, D. To, Y. Wu, and S. Xu, “On the energy efficiency of a relaying protocol with HARQ-IR and distributed cooperative beamforming,” _IEEE Trans. Wireless Commun._, vol. 12, no. 2, pp. 769–781, Feb. 2013.
* [5] T. V. Chaitanya and E. G. Larsson, “Optimal power allocation for hybrid ARQ with chase combining in i.i.d. Rayleigh fading channels,” _IEEE Trans. Commun._ , vol. 61, no. 5, pp. 1835–1846, May 2013.
* [6] T. Chaitanya and T. Le-Ngoc, “Energy-efficient adaptive power allocation for incremental MIMO systems,” _IEEE Trans. Veh. Technol._ , vol. PP, no. 99, pp. 1–1, Mar. 2015.
* [7] S. M. Kim, W. Choi, T. W. Ban, and D. K. Sung, “Optimal rate adaptation for hybrid ARQ in time-correlated Rayleigh fading channels,” _IEEE Trans. Wireless Commun._ , vol. 10, no. 3, pp. 968–979, Mar. 2011.
* [8] H. Jin, C. Cho, N.-O. Song, and D. K. Sung, “Optimal rate selection for persistent scheduling with HARQ in time-correlated Nakagami-m fading channels,” _IEEE Trans. Wireless Commun._ , vol. 10, no. 2, pp. 637–647, Feb. 2011.
* [9] N. C. Beaulieu and K. T. Hemachandra, “Novel simple representations for Gaussian class multivariate distributions with generalized correlation,” _IEEE Trans. Inf. Theory_ , vol. 57, no. 12, pp. 8072–8083, Dec. 2011.
* [10] I. S. Gradshteyn, I. M. Ryzhik, A. Jeffrey, D. Zwillinger, and S. Technica, _Table of integrals, series, and products_. Academic press New York, 1965, vol. 6.
* [11] A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, _Signals & Systems (2nd ed.)_. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1996.
* [12] H. Bateman, _Tables of Integral Transforms_, vol. 1, A. Erdélyi, Ed. New York: McGraw-Hill, 1954.
# Extensions of Gorenstein weighted projective 3-spaces and characterization
of the primitive curves of their surface sections.
Bruno Dewer
###### Abstract
We investigate the Gorenstein weighted projective spaces of dimension 3. Given
such a space $\mathbf{P}$, our first focus is its maximal extension in its
anticanonical model $\mathbf{P}\subset\mathbf{P}^{g+1}$, i.e., the variety
$Y\subset\mathbf{P}^{g+1+r}$ of largest dimension such that $Y$ is not a cone
and $\mathbf{P}$ is a linear section of $Y$. In [DS21] Thomas Dedieu and
Edoardo Sernesi have computed the dimension of $Y$ by cohomological
computations on the canonical curves inside $\mathbf{P}$. We give an explicit
description of $Y$ in the cases where it was not known. Next, we examine the
general anticanonical divisors of $\mathbf{P}$. These are K3 surfaces, not
necessarily primitively polarized. We give a geometric characterization of the
curve sections in their primitive polarization.
## 1 Introduction
The notion of extendability for projective varieties consists in the
following.
###### Definition 1.1.
A projective variety $X\subset\mathbf{P}^{N}$ is _extendable_ if there exists
$X_{1}\subset\mathbf{P}^{N+1}$ which is not a cone, such that $X$ is a
hyperplane section of $X_{1}$.
Moreover, if $X_{r}\subset\mathbf{P}^{N+r}$ is not a cone and $X$ can be
obtained as the intersection of $X_{r}$ with a linear subspace of dimension
$N$, we say that $X_{r}$ is an $r-$extension of $X$, or also that $X$ has been
extended $r$ times. If moreover there exists no extension of $X$ of larger
dimension, we say that $X_{r}$ is maximal.
The topic of this paper is the extendability of weighted projective spaces,
more precisely those of dimension $3$ that are Gorenstein.
Given four coprime positive integers $a_{0},a_{1},a_{2}$ and $a_{3}$, the
weighted projective space $\mathbf{P}=\mathbf{P}(a_{0},a_{1},a_{2},a_{3})$ is
$\mathrm{Proj}(R)$, where $R=\mathbf{C}[x_{0},x_{1},x_{2},x_{3}]$ is endowed with
the grading $\deg x_{i}=a_{i}$. By definition, it is Gorenstein if its
anticanonical divisor class is Cartier, which holds if and only if all the
$a_{i}$’s divide their sum (see for instance Theorem 3.3.4 in [Do81]). As
mentioned in [Pr04], among all the weighted projective spaces of dimension
$3$, there are exactly $14$ which are Gorenstein.
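This count can be reproduced by a brute-force search. The following sketch imposes well-formedness as coprimality of every triple of weights and uses an ad hoc search bound chosen above the largest weight ($21$) occurring in the classification:

```python
from math import gcd
from itertools import combinations

B = 30  # ad hoc bound, larger than the biggest weight (21) in the known list
found = []
for a0 in range(1, B + 1):
    for a1 in range(a0, B + 1):
        for a2 in range(a1, B + 1):
            for a3 in range(a2, B + 1):
                w = (a0, a1, a2, a3)
                # well-formedness: every three of the four weights are coprime
                if any(gcd(gcd(x, y), z) != 1 for x, y, z in combinations(w, 3)):
                    continue
                sigma = sum(w)
                if all(sigma % a == 0 for a in w):  # -K Cartier, i.e. Gorenstein
                    found.append(w)
print(len(found), found)  # 14 quadruples, matching the list in Table 1 below
```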
Assume $\mathbf{P}\subset\mathbf{P}^{g+1}$ is Gorenstein and embedded by its
anticanonical linear system, then its general hyperplane section $S$ is a K3
surface with canonical singularities. The induced polarization
$(S,-K_{\mathbf{P}}{}|_{S})$ is of genus $g$, meaning that the general member
$\Gamma$ of $-K_{\mathbf{P}}{}|_{S}$ is a canonical curve of genus $g$. As a
consequence of this, any extension of $\mathbf{P}$ is also an extension of
$\Gamma\subset\mathbf{P}^{g-1}$.
Consider then a smooth canonical curve $\Gamma\subset\mathbf{P}^{g-1}$
obtained as a linear section of $\mathbf{P}\subset\mathbf{P}^{g+1}$ and the
number $\alpha(\Gamma,K_{\Gamma})$ introduced in [Lvo92], which can be
computed as the corank of the Gauß-Wahl map of the polarization
$(\Gamma,K_{\Gamma})$ (see Definition 2.1). In this situation, it follows from
[BM87] and [Wah87] that $\alpha(\Gamma,K_{\Gamma})$ is nonzero. Then by
Theorem 2.1 and Theorem 2.17 in [CDS20] we have $\dim
Y=1+\alpha(\Gamma,K_{\Gamma})$, for $Y$ a maximal extension of $\Gamma$. This
allows us to know exactly how many times $\mathbf{P}$ can be extended, by
Corollary 6.4 in [DS21].
The full list of the Gorenstein weighted projective $3-$spaces, as well as the
maximal extensions which are known from [DS21], is given below.
Notice that the space $\mathbf{P}(1,1,1,3,5,5,5)$ is denoted by
$\mathbf{P}(1^{3},3,5^{3})$, as the weights $1$ and $5$ each appear three
times, and the weight $3$ appears once. From now on, we will adopt this
notation for brevity. We also adopt the following convention: when
$\mathbf{P}$ is not extendable, we say that its unique maximal extension is
itself.
$\begin{array}{|l|l|l|}\hline
\mathbf{P}&\text{extendable?}&\text{maximal extension}\\ \hline
\mathbf{P}(1,1,1,1)&\text{no}&\text{itself}\\
\mathbf{P}(1,1,1,3)&\text{no}&\text{itself}\\
\mathbf{P}(1,1,4,6)&\text{no}&\text{itself}\\
\mathbf{P}(1,2,2,5)&\text{yes}&\text{sextic hypersurface of }\mathbf{P}(1^{3},3,5^{3})\\
\mathbf{P}(1,1,2,4)&\text{no}&\text{itself}\\
\mathbf{P}(1,3,4,4)&\text{yes}&\text{quartic hypersurface of }\mathbf{P}(1^{4},3^{4})\\
\mathbf{P}(1,1,2,2)&\text{no}&\text{itself}\\
\mathbf{P}(2,3,3,4)&\text{yes}&\text{cubic hypersurface of }\mathbf{P}(1^{5},2^{5})\\ \hdashline
\mathbf{P}(1,4,5,10)&\text{yes}&\text{was not known}\\
\mathbf{P}(1,2,6,9)&\text{yes}&\text{was not known}\\
\mathbf{P}(1,2,3,6)&\text{no}&\text{itself}\\
\mathbf{P}(1,3,8,12)&\text{yes}&\text{was not known}\\
\mathbf{P}(1,6,14,21)&\text{yes}&\text{was not known}\\
\mathbf{P}(2,3,10,15)&\text{yes}&\text{was not known}\\ \hline
\end{array}$
###### Definition 1.2.
Let $\mathcal{K}_{g}^{i}$ be the moduli space of the polarized surfaces
$(S,L)$ with $S$ a K3 surface and $L$ an ample Cartier divisor on $S$ such
that the general members of $L$ are genus $g$ curves and the index of $L$ in
$\mathrm{Pic}(S)$ is equal to $i$. In other words, $i$ is the largest positive
integer $r$ such that $\frac{1}{r}L\in\mathrm{Pic}(S)$.
For all $g$ and $i$ we consider the function
$\alpha:\mathcal{K}_{g}^{i}\to\mathbf{Z}$ given by
$\alpha(S,L)=\alpha(\Gamma^{\prime},K_{\Gamma^{\prime}})-1$, where
$\Gamma^{\prime}$ is a general member of $L$. Given $\mathbf{P}$ a Gorenstein
weighted projective space of dimension $3$ and $S$ a general anticanonical
divisor of $\mathbf{P}$, let $g$ and $i_{S}$ respectively denote the genus and
the index of the induced polarization $(S,-K_{\mathbf{P}}|_{S})$. This
polarization is then a member of the moduli space $\mathcal{K}_{g}^{i_{S}}$.
T. Dedieu and E. Sernesi have shown in Proposition 6.2 of [DS21]
that in each of the 14 cases, there is a constant $\alpha_{g}^{i_{S}}$ such
that $\alpha$ takes the value $\alpha_{g}^{i_{S}}$ on a dense open subset of
$\mathcal{K}_{g}^{i_{S}}$. The first 8 cases of the list above are those for
which $\alpha(S,-K_{\mathbf{P}}|_{S})=\alpha_{g}^{i_{S}}$, and we are going to
examine the ones for which this equality does not hold.
The core results of this paper, stated and proved in Section 4 and Section 5,
are summarized in the two following theorems.
###### Theorem 1.3.
Assume that the polarization $(S,-K_{\mathbf{P}}|_{S})$ is not general in
$\mathcal{K}_{g}^{i_{S}}$, in the sense that
$\alpha(S,-K_{\mathbf{P}}|_{S})>\alpha^{i_{S}}_{g}.$
Then $\mathbf{P}$ is one of the last six items of the list given above. Each
of them admits a maximal extension $Y$ which has a description as follows.
$\begin{array}{|l|l|l|}\hline
\mathbf{P}&Y&\dim(Y)\\ \hline
\mathbf{P}(1,4,5,10)&\textsl{nongeneral quintic of }\mathbf{P}(1^{3},2,4^{3})&5\\
\mathbf{P}(1,2,6,9)&\textsl{nongeneral }10\textsl{-ic of }\mathbf{P}(1^{2},3,5,9^{2})&4\\
\mathbf{P}(1,2,3,6)&\mathbf{P}(1,2,3,6)&3\\
\mathbf{P}(1,3,8,12)&\textsl{nongeneral }9\textsl{-ic of }\mathbf{P}(1^{2},3,4,8^{2})&4\\
\mathbf{P}(1,6,14,21)&\textsl{nongeneral heptic of }\mathbf{P}(1^{2},2,3,6^{2})&4\\
\mathbf{P}(2,3,10,15)&\textsl{codim. }2\textsl{ complete intersection in a }\mathbf{P}(1^{2},2,3,5^{3})\textsl{-bundle over }\mathbf{P}^{1}&5\\ \hline
\end{array}$
Next, we consider $C$ a general member of the primitive divisor class
$-\frac{1}{i_{S}}K_{\mathbf{P}}|_{S}$. We focus on the same cases as in
Theorem 1.3 and, to give an insight into the geometry of $S$, we give a
geometric characterization of $C$.
###### Theorem 1.4.
Let $\mathbf{P}$ be a Gorenstein weighted projective space and $S$ a general
anticanonical divisor of $\mathbf{P}$ such that
$\alpha(S,-K_{\mathbf{P}}|_{S})>\alpha^{i_{S}}_{g}$. If $C$ is a general
member of $-\frac{1}{i_{S}}K_{\mathbf{P}}|_{S}$ then it is as follows.
$\begin{array}{|l|l|}\hline
\mathbf{P}&C\\ \hline
\mathbf{P}(1,4,5,10)&\textsl{plane quintic with a total inflection point}\\
\mathbf{P}(1,2,6,9)&\textsl{smooth hyperelliptic curve of genus }4\\
\mathbf{P}(1,2,3,6)&\textsl{normalization of a plane sextic with an oscnode}\\
\mathbf{P}(1,3,8,12)&\textsl{trigonal curve of genus }7\textsl{ with a total ramification point}\\
\mathbf{P}(1,6,14,21)&\textsl{blowup of a plane }21\textsl{-ic curve at 8 heptuple points}\\
\mathbf{P}(2,3,10,15)&\textsl{normalization of a nodal }6\textsl{-gonal curve of genus }16\textsl{ such that the }g_{6}^{1}\textsl{ has two members of the form }6p\textsl{ and }2p_{1}+2p_{2}+2p_{3}\textsl{ respectively}\\ \hline
\end{array}$
Conversely, for all items on the list except $\mathbf{P}(1,2,3,6)$, any curve
with the given description is a member of
$-\frac{1}{i_{S}}K_{\mathbf{P}}|_{S}$.
The organization of the article is as follows. In Section 2 we go over some
information and definitions about the $3-$dimensional Gorenstein weighted
projective spaces. In Section 3 we introduce a birational model of
$\mathbf{P}$ which realizes $S$ as a nongeneral anticanonical divisor of
another weighted projective $3-$space. This allows us in Section 4 to express
$\Gamma$ as a complete intersection of two surfaces of different degrees and
to construct $Y$ as a hypersurface in a larger weighted projective space in
all cases but one, which requires additional work. In Section 5 we consider
the primitive polarization of $S$ and give a geometric characterization of the
general curve $C$ in the linear system
$|-\frac{1}{i_{S}}K_{\mathbf{P}}{}|_{S}|$.
## 2 The Gorenstein weighted projective spaces of dimension $3$
We refer to §5 and §6 in [Ia00] as a reference for basic facts about weighted
projective spaces.
Let $\mathbf{P}=\mathbf{P}(a_{0},a_{1},a_{2},a_{3})$ be a weighted projective
space. Its anticanonical divisor class is that of degree
$a_{0}+a_{1}+a_{2}+a_{3}$ surfaces and its Picard group is generated by
$[\mathcal{O}_{\mathbf{P}}(l)]$ with
$l=\mathrm{lcm}(a_{0},a_{1},a_{2},a_{3})$. Hence $-K_{\mathbf{P}}$ is Cartier
if and only if $a_{i}$ is a divisor of $a_{0}+a_{1}+a_{2}+a_{3}$ for all $i$.
Assume that this condition holds, making $\mathbf{P}$ Gorenstein. We will
observe that $\mathbf{P}$ is embedded in $\mathbf{P}^{g+1}$ by its
anticanonical linear system. Let $\Gamma\subset\mathbf{P}^{g-1}$ be a curve
section of $\mathbf{P}$ cut out by two general hyperplanes. The adjunction
formula yields $K_{\Gamma}=-K_{\mathbf{P}}{}|_{\Gamma}.$ Hence $\Gamma$ in
$\mathbf{P}^{g-1}$ is a canonical curve, and $\mathbf{P}$ can only be extended
finitely many times, by Theorem 2.3.
We list all the Gorenstein weighted projective spaces of dimension $3$ in
Table 1 below, together with the following information. If $a_{i}$ are the
weights of $\mathbf{P}$, $l=\mathrm{lcm}(a_{0},a_{1},a_{2},a_{3})$ and
$\sigma=a_{0}+a_{1}+a_{2}+a_{3}$, then the index $i$ of $-K_{\mathbf{P}}$ in
$\mathrm{Pic}(\mathbf{P})$ is equal to $\frac{\sigma}{l}$.
$\hypertarget{Table 1}{}\begin{array}{|l|l|l|l|l|}\hline
\#&\mathbf{P}&l&\sigma&i\\ \hline
1&\mathbf{P}(1,1,1,1)&1&4&4\\
2&\mathbf{P}(1,1,1,3)&3&6&2\\
3&\mathbf{P}(1,1,4,6)&12&12&1\\
4&\mathbf{P}(1,2,2,5)&10&10&1\\
5&\mathbf{P}(1,1,2,4)&4&8&2\\
6&\mathbf{P}(1,3,4,4)&12&12&1\\
7&\mathbf{P}(1,1,2,2)&2&6&3\\
8&\mathbf{P}(2,3,3,4)&12&12&1\\
9&\mathbf{P}(1,4,5,10)&20&20&1\\
10&\mathbf{P}(1,2,6,9)&18&18&1\\
11&\mathbf{P}(1,2,3,6)&6&12&2\\
12&\mathbf{P}(1,3,8,12)&24&24&1\\
13&\mathbf{P}(1,6,14,21)&42&42&1\\
14&\mathbf{P}(2,3,10,15)&30&30&1\\ \hline
\end{array}$
Table 1
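The columns of Table 1 are immediate to recompute; the following small script is only a verification aid:

```python
from math import gcd
from functools import reduce

spaces = [(1,1,1,1), (1,1,1,3), (1,1,4,6), (1,2,2,5), (1,1,2,4), (1,3,4,4),
          (1,1,2,2), (2,3,3,4), (1,4,5,10), (1,2,6,9), (1,2,3,6),
          (1,3,8,12), (1,6,14,21), (2,3,10,15)]
for w in spaces:
    l = reduce(lambda a, b: a * b // gcd(a, b), w)  # lcm of the weights
    sigma = sum(w)
    print(w, l, sigma, sigma // l)  # reproduces the columns l, sigma, i
```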
### 2.1 Extendability of $\Gamma$ and $\mathbf{P}$
For each $\mathbf{P}$ in Table 1, if $S$ is a general anticanonical divisor,
then the couple $(S,-K_{\mathbf{P}}|_{S})$ represents an element of the moduli
space $\mathcal{K}_{g}^{i_{S}}$ (see Definition 1.2). We focus here on the
last $6$ examples ($\\#9$ to $\\#14$) which are mentioned in Theorem 1.3. A
maximal extension of $\mathbf{P}$ in these cases was not known so far.
###### Definition 2.1.
Let $X\subset\mathbf{P}^{N}$ be a projective variety, and
$L=\mathcal{O}_{\mathbf{P}^{N}}(1)|_{X}$. We introduce
$\alpha(X,L)=h^{0}(N_{X/\mathbf{P}^{N}}\otimes L^{-1})-N-1.$
In particular, if $X=\Gamma$ is a canonical curve in $\mathbf{P}^{g-1}$, one has
$\alpha(\Gamma,L)=\alpha(\Gamma,K_{\Gamma})$.
###### Lemma 2.2.
In the case where $\Gamma$ is a canonical curve, it holds that
$\alpha(\Gamma,K_{\Gamma})=\mathrm{cork}(\Phi_{K_{\Gamma}})$ where
$\Phi_{K_{\Gamma}}$ is the Gauß-Wahl map of the polarization
$(\Gamma,K_{\Gamma})$.
We refer to §2 in [BM87], and [Wah87] for the definition of the Gauß-Wahl map
of a polarized curve and to Lemma 3.2 in [CDS20] for a proof of this lemma.
The value of $\alpha(\Gamma,K_{\Gamma})$ for $\Gamma$ a general curve linear
section of any Gorenstein weighted projective $3-$space has been computed by
T. Dedieu and E. Sernesi, and we display these values in the relevant cases in
Table 2 below. This allows us to know the dimension of any maximal extension
of $\Gamma$ by the following theorem.
###### Theorem 2.3 ([CDS20], Theorems 2.1, 2.17).
Let $\Gamma\subset\mathbf{P}^{g-1}$ be a smooth canonical curve with $g\geq
11$ and such that $\mathrm{Cliff}(\Gamma)>2$. Then $\Gamma$ is extendable only
$\alpha(\Gamma,K_{\Gamma})$ times, i.e., there exists $Y$ in
$\mathbf{P}^{g-1+\alpha(\Gamma,K_{\Gamma})}$ such that
$\dim(Y)=1+\alpha(\Gamma,K_{\Gamma})$, which is a maximal extension of
$\Gamma$.
In addition, there exists a maximal extension $Y$ of $\Gamma$ which is
universal, meaning that for each surface extension $S$ of $\Gamma$, there is a
unique $g-$plane $\Lambda\subset\mathbf{P}^{g-1+\alpha(\Gamma,K_{\Gamma})}$
such that $\Gamma\subset\Lambda$ and $S=Y\cap\Lambda$.
Let us consider $\Gamma$ a general linear curve section of
$\mathbf{P}\subset\mathbf{P}^{g+1}$ where $\mathbf{P}$ is a Gorenstein
weighted projective $3-$space. Hence, $\Gamma$ is a general hyperplane section
of a K3 surface $S\subset\mathbf{P}$. Such a surface only has isolated
singularities, so by Bertini’s Theorem, $\Gamma$ is smooth. The values of $g$
are listed in Table 2 below, and in each case we have $g\geq 11$ and
$\mathrm{Cliff}(\Gamma)>2$ by Corollary 2.8, so that Theorem 2.3
applies to $\Gamma$.
As a consequence, we know that any extension of $\mathbf{P}$ of dimension
$1+\alpha(\Gamma,K_{\Gamma})$ is maximal. In particular, in any case for which
$\alpha(\Gamma,K_{\Gamma})=2$, the threefold $\mathbf{P}$ is not extendable
and it is the universal extension of $\Gamma$.
We list in Table 2 examples #9 to #14 coupled with the data of $i_{S}$, the
genera of $\Gamma$ and $C$, where $\Gamma$ is a general member of
$-K_{\mathbf{P}}|_{S}$ and $C$ is a general member of
$-\frac{1}{i_{S}}K_{\mathbf{P}}|_{S}$. The value of
$\alpha(\Gamma,K_{\Gamma})$ and the datum of the singular points of $S$ are
also provided.
$\hypertarget{Table 2}{}\begin{array}{|l|l|l|l|l|l|l|}\hline
\#&\mathbf{P}&i_{S}&g=g(\Gamma)&g(C)&\alpha(\Gamma,K_{\Gamma})&\mathrm{Sing}(S)\\ \hline
9&\mathbf{P}(1,4,5,10)&2&21&6&4&A_{1},2A_{4}\\
10&\mathbf{P}(1,2,6,9)&3&28&4&3&3A_{1},A_{2}\\
11&\mathbf{P}(1,2,3,6)&2&25&7&2&2A_{1},2A_{2}\\
12&\mathbf{P}(1,3,8,12)&2&25&7&3&2A_{2},A_{3}\\
13&\mathbf{P}(1,6,14,21)&1&22&22&3&A_{1},A_{2},A_{6}\\
14&\mathbf{P}(2,3,10,15)&1&16&16&4&3A_{1},2A_{2},A_{4}\\ \hline
\end{array}$
Table 2
The anticanonical divisor $-K_{\mathbf{P}}$ being very ample by Theorem 2.5,
it embeds $\mathbf{P}$ in $\mathbf{P}^{g+1}$ as a variety of degree
$(-K_{\mathbf{P}})^{3}=2g-2$. In that model, recall that $\mathbf{P}$ can only
be extended $\alpha(\Gamma,K_{\Gamma})-2$ times. The only one in our list
which is not extendable is $\mathbf{P}(1,2,3,6)$.
### 2.2 The very ampleness of $-K_{\mathbf{P}}$
Let us prove now that $-K_{\mathbf{P}}$ is very ample in the cases $\\#9$ to
$\\#14$ given above, using the following lemma.
###### Lemma 2.4.
Let $X$ be a projective variety and $D$ an ample Cartier divisor on $X$. Let
$A_{n}=H^{0}(X,\mathcal{O}_{X}(nD))$ for all $n\in\mathbf{N}$ and
$A=\oplus_{n\geq 1}A_{n}$. If $A$ is generated by $A_{1}$ as a
$\mathbf{C}-$algebra, then $D$ is very ample.
###### Proof.
Let $q$ be a positive integer such that $Z=qD$ is very ample. It induces an
embedding of $X$ into a projective space, thus
$X\simeq\mathrm{Proj}(A^{(q)})\simeq\mathrm{Proj}(A),$
where $A^{(q)}$ is the graded ring such that $(A^{(q)})_{d}=A_{dq}$. Let
$\varphi:X\dashrightarrow\mathbf{P}(H^{0}(X,\mathcal{O}_{X}(D)))$
be the map induced by $D$. If $L_{n}$ is the image of the multiplication map
$H^{0}(X,\mathcal{O}_{X}(D))^{\otimes n}\to H^{0}(X,\mathcal{O}_{X}(nD)),$
then it holds that $\varphi(X)=\mathrm{Proj}(L)$ with $L=\oplus_{n\geq
1}L_{n}$. The condition that $A$ is generated by $A_{1}$ is equivalent to
$A=L$, and therefore it yields $\varphi(X)=\mathrm{Proj}(A)\simeq X$. ∎
###### Theorem 2.5.
Let $\mathbf{P}(a_{0},a_{1},a_{2},a_{3})$ be one of the Gorenstein weighted
projective $3-$spaces listed in Table 2. Then $-K_{\mathbf{P}}$ is very ample.
###### Proof.
Let $S$ be a general member of $|-K_{\mathbf{P}}|$ and $\Gamma$ a general
member of $|-K_{\mathbf{P}}|_{S}|$. We first prove that $-K_{\mathbf{P}}|_{S}$
is very ample. Thanks to Lemma 2.4, it is enough to prove that the
multiplication map $H^{0}(S,\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S}))^{\otimes n}\to
H^{0}(S,\mathcal{O}_{S}(-nK_{\mathbf{P}}|_{S}))$ is onto for all $n\geq 1$. It
is naturally the case for $n=1$. Assume now that $n>1$ and that the statement
holds up to rank $n-1$. Consider the following commutative diagram.
$\begin{array}{ccc}
H^{0}(\Gamma,\mathcal{O}_{\Gamma}(K_{\Gamma}))^{\otimes n}&\longrightarrow&H^{0}(\Gamma,\mathcal{O}_{\Gamma}(nK_{\Gamma}))\\
\big\uparrow&&\big\uparrow\\
H^{0}(S,\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S}))^{\otimes n}&\longrightarrow&H^{0}(S,\mathcal{O}_{S}(-nK_{\mathbf{P}}|_{S}))\\
{\scriptstyle\otimes f}\big\uparrow&&\big\uparrow{\scriptstyle\cdot f}\\
H^{0}(S,\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S}))^{\otimes n-1}&\longrightarrow&H^{0}(S,\mathcal{O}_{S}(-(n-1)K_{\mathbf{P}}|_{S}))
\end{array}$
Here, $f$ is a global section of $\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S})$ such
that $\Gamma=\left\\{f=0\right\\}$ on $S$. The bottom arrow is onto by the
induction hypothesis.
The right column is the restriction exact sequence. The surjectivity of the
top two vertical arrows follows as a consequence of Lemma 2.1 and Proposition
2.1 in [Sho71]. This applies since $g(\Gamma)\geq 3$ and $\Gamma$ is
nonhyperelliptic, ensuring that $\Gamma$ is projectively normal. The
surjectivity of the top horizontal arrow follows from the fact that
$K_{\Gamma}$ is very ample. We refer to Theorem 2.7 for a case-by-case proof
that $\Gamma$ is nonhyperelliptic and $K_{\Gamma}$ is very ample.
As the vertical sequence on the right is exact, the surjectivity of the middle
arrow follows from the surjectivity of the bottom and the top arrows by
diagram chasing. Hence the induction holds and
$H^{0}(S,\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S}))^{\otimes n}\to
H^{0}(S,\mathcal{O}_{S}(-nK_{\mathbf{P}}|_{S}))$ is onto for all $n\geq 1$.
Now, by the fact that K3 surfaces are projectively normal (see [SD74]), the
restriction map
$H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-nK_{\mathbf{P}}))\to
H^{0}(S,\mathcal{O}_{S}(-nK_{\mathbf{P}}|_{S}))$ is onto for all $n\geq 1$.
Consider the following commutative diagram with $n>1$, whose right column is
exact.
$\begin{array}{ccc}
H^{0}(S,\mathcal{O}_{S}(-K_{\mathbf{P}}|_{S}))^{\otimes n}&\longrightarrow&H^{0}(S,\mathcal{O}_{S}(-nK_{\mathbf{P}}|_{S}))\\
\big\uparrow&&\big\uparrow\\
H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-K_{\mathbf{P}}))^{\otimes n}&\longrightarrow&H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-nK_{\mathbf{P}}))\\
\big\uparrow&&\big\uparrow\\
H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-K_{\mathbf{P}}))^{\otimes n-1}&\longrightarrow&H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-(n-1)K_{\mathbf{P}}))
\end{array}$
We may assume that the bottom arrow is surjective by an induction hypothesis,
as it is the case for $n=1$. By an analogous argument of diagram chasing as
above, the middle arrow
$H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-K_{\mathbf{P}}))^{\otimes n}\to
H^{0}(\mathbf{P},\mathcal{O}_{\mathbf{P}}(-nK_{\mathbf{P}}))$ is onto. This is
true for all $n$ and thus the conclusion follows from Lemma 2.4. ∎
The proof of the nonhyperellipticity of $\Gamma$ and the very ampleness of
$K_{\Gamma}$ requires that we state the following lemma.
###### Lemma 2.6.
Let $S$ be a K3 surface and $\Gamma\subset S$ a smooth curve of genus at least
$2$. If $\Gamma$ is hyperelliptic, then there exists a line bundle
$\mathcal{J}$ on $S$ such that the restriction $\mathcal{J}|_{\Gamma}$ is a
$g^{1}_{2}$ (i.e., a pencil of degree $2$ divisors).
A proof of this lemma relies on the first theorem of [GL87], using the fact that
$\Gamma$ is hyperelliptic if and only if $\mathrm{Cliff}(\Gamma)=0$, in which
case there is a line bundle on $S$ whose restriction to $\Gamma$ is a pencil of
degree $2$.
We will apply this lemma together with the known fact that, for a given curve
$\Gamma$ of genus $g\geq 2$, $K_{\Gamma}$ is very ample if and only if
$\Gamma$ is nonhyperelliptic (see Proposition IV.5.2. in [Har77]).
###### Theorem 2.7.
Let $\mathbf{P}$ be one of the Gorenstein weighted projective $3-$spaces
listed in Table 2. Then the general linear curve section
$\Gamma\subset\mathbf{P}$ is nonhyperelliptic and $K_{\Gamma}$ is very ample.
###### Proof.
Being a general anticanonical divisor of $\mathbf{P}$, $S$ has isolated
singularities. By Bertini’s Theorem, the general $\Gamma$ in $S$ is smooth.
In case $\\#10$, the index of $-K_{\mathbf{P}}|_{S}$ in $\mathrm{Pic}(S)$ is
equal to $3$. Hence $\Gamma=3C$ where $C$ is a Cartier divisor on $S$. By
Lemma 2.6, if $\Gamma$ is hyperelliptic, there exists a line bundle
$\mathcal{J}$ on $S$ such that $\mathcal{J}|_{\Gamma}$ is a $g_{2}^{1}$. Hence
$\mathcal{J}|_{C}$ has degree $\frac{2}{3}$, which is not possible.
In cases $\\#9,\\#11$ and $\\#12$, the index is equal to $2$, hence
$\Gamma=2C$ for some primitive Cartier divisor $C$ on $S$. Once again, if
$\Gamma$ is hyperelliptic, by Lemma 2.6 there exists a line bundle
$\mathcal{J}$ on $S$ such that $\mathcal{J}|_{\Gamma}$ is a $g_{2}^{1}$.
Therefore, $\mathcal{J}|_{C}$ is a $g_{1}^{1}$, i.e., $C$ is a rational curve.
But we know from the values of $g(C)$ listed in Table 2 that it is not the
case.
Lastly, in cases $\\#13$ and $\\#14$, the index is equal to $1$, so $\Gamma$
is primitive in $\mathrm{Pic}(S)$. We use the information given in Table 5,
which yields:
1. $\mathtt{I}$.
for $\\#13$, $\Gamma=-7K_{\Sigma}$ where $\Sigma$ is a general sextic in
$\mathbf{P}(1,1,2,3)$. It holds by the adjunction formula that
$K_{\Gamma}=-6K_{\Sigma}|_{\Gamma}=\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)|_{\Gamma}$
which is very ample.
2. $\mathtt{II}$.
for $\\#14$, $\Gamma=-6K_{\Sigma}$ where $\Sigma$ is a general $10-$ic in
$\mathbf{P}(1,2,4,5)$. By the adjunction formula,
$K_{\Gamma}=-5K_{\Sigma}|_{\Gamma}=\mathcal{O}_{\mathbf{P}(1,2,4,5)}(10)|_{\Gamma}$
which is very ample, provided that $\Gamma$ does not meet the base point of
$\mathcal{O}_{\mathbf{P}(1,2,4,5)}(10)$, which holds by the generality
assumption.
In both cases, $K_{\Gamma}$ is very ample and so $\Gamma$ is nonhyperelliptic.
∎
###### Corollary 2.8.
In the setting of Theorem 2.7, the Clifford index of the curve $\Gamma$ is
strictly larger than $2.$
###### Proof.
The anticanonical model $\mathbf{P}\subset\mathbf{P}^{g+1}$ satisfies the
$N_{2}$ property by Proposition 6.1 in [DS21], hence so does the curve
$\Gamma$. It follows from the appendix of [GL84] that
$\mathrm{Cliff}(\Gamma)>2$. ∎
## 3 Birational models
Before constructing the maximal extensions, we study for each $\mathbf{P}$ in
Table 2 except $\mathbf{P}(1,2,3,6)$ a birational model
$\varphi:\mathbf{P}\dashrightarrow\mathbf{P}^{\prime}$ such that $\varphi$
restricts to an isomorphism on the general $S\in|-K_{\mathbf{P}}|$. This will
allow us to express the general $\Gamma\in|-K_{\mathbf{P}}|_{S}|$ as a
complete intersection of two equations of different degrees in
$\mathbf{P}^{\prime}$ and to construct extensions of $\Gamma$ in Section 4.
The first step consists in the introduction of a suitable Veronese map on each
$\mathbf{P}$, which is an embedding $v_{n}:\mathbf{P}\hookrightarrow X$ where
$X$ is a weighted projective space of dimension $4$ so that the anticanonical
model $\mathbf{P}\to\mathbf{P}^{g+1}$ factors as a composition
$\mathbf{P}\xrightarrow{\;v_{n}\;}X\dashrightarrow\mathbf{P}^{g+1}$
where $X\dashrightarrow\mathbf{P}^{g+1}$ is a rational map that we will
specify.
### 3.1 The Veronese maps
Let $\mathbf{P}=\mathbf{P}(a_{0},a_{1},a_{2},a_{3})$ be a weighted projective
space. Denote $R=\mathbf{C}[x,y,z,w]$ with the suitable grading such that
$\mathbf{P}=\mathrm{Proj}(R)$. Then the following isomorphism holds for all
$n\in\mathbf{N}$,
$\mathbf{P}\simeq\mathrm{Proj}(R^{(n)}),$
where $R^{(n)}$ is the graded ring whose degree $d$ part is
$(R^{(n)})_{d}=R_{nd}$. This gives rise to an embedding $v_{n}$ which we refer
to as the $n-$Veronese map: given a fixed $n$, $R^{(n)}$ is isomorphic to a
quotient of the form $\nicefrac{{\mathbf{C}[y_{0},...,y_{m}]}}{{I}}$ where
$\mathbf{C}[y_{0},...,y_{m}]$ has a given grading $\deg y_{i}=d_{i}$ and $I$
is a homogeneous ideal. This makes $\mathbf{P}$ a subscheme of a larger
weighted projective space, as $\mathbf{P}\simeq
V(I)=\left\\{\mathrm{y}=[y_{0}:\cdots:y_{m}]\>|\>f(\mathrm{y})=0\text{ for all
}f\in I\right\\}$ in $\mathbf{P}(d_{0},...,d_{m})$.
###### Example 3.1.
The $5-$Veronese embedding of $\mathbf{P}=\mathbf{P}(1,4,5,10)$. Taking $n=5$
yields the isomorphism
$\mathbf{P}\simeq\mathrm{Proj}(\mathbf{C}[x^{5},xy,z,w,y^{5}])=\mathrm{Proj}\left(\nicefrac{{\mathbf{C}[u_{0},u_{1},u_{2},v,s]}}{{(u_{0}s-u_{1}^{5})}}\right).$
The grading on the right is the following: the $u_{i}$’s have weight $1$,
while $v$ has weight $2$ and $s$ has weight $4$. This realizes $\mathbf{P}$ as
the degree $5$ hypersurface given by the equation $u_{0}s=u_{1}^{5}$ in
$\mathbf{P}(1,1,1,2,4)$ through the following embedding.
$v_{5}:[x:y:z:w]\in\mathbf{P}\mapsto[u_{0}:u_{1}:u_{2}:v:s]=[x^{5}:xy:z:w:y^{5}].$
The choice $n=5$ is motivated by the fact that it is a divisor of $\sigma=20$,
with $-K_{\mathbf{P}}=\mathcal{O}_{\mathbf{P}}(\sigma)$. We can recover
$-K_{\mathbf{P}}$ from $v_{5}$ using the equality
$-K_{\mathbf{P}}=v_{5}^{*}\mathcal{O}_{\mathbf{P}(1,1,1,2,4)}(4)$.
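A quick symbolic verification of this example (a sketch in SymPy):

```python
import sympy as sp

x, y, z, w = sp.symbols("x y z w")
u0, u1, u2, v, s = x**5, x*y, z, w, y**5  # generators of the 5-Veronese

# the quintic relation u0*s = u1^5 cutting out P(1,4,5,10) in P(1,1,1,2,4)
assert sp.expand(u0 * s - u1**5) == 0

# weights in P(1,1,1,2,4): the generators have degrees 5,5,5,10,20 on
# P(1,4,5,10), divided by n = 5
degs = [5, 5, 5, 10, 20]
assert [d // 5 for d in degs] == [1, 1, 1, 2, 4]
```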
In general, given $\mathbf{P}$ any Gorenstein space of the list, we can choose
$n$ a divisor of $\sigma$ and embed $\mathbf{P}$ by $v_{n}$ in a larger
weighted projective space $X$. This yields
$-K_{\mathbf{P}}=v_{n}^{*}\mathcal{O}_{X}(\frac{\sigma}{n})$.
For the items on our list from #9 to #14 a suitable choice of $n$ is given in
Table 3, realizing each time $\mathbf{P}$ as a hypersurface in a weighted
projective space $X$ of dimension $4$. As in Table 1, $\sigma$ refers to the
degree of $-K_{\mathbf{P}}$ with respect to the grading of $\mathbf{P}$.
$\hypertarget{Table 3}{}\begin{array}{|l|l|l|l|l|l|l|}\hline
\#&\mathbf{P}&\sigma&n&X&v_{n}(\mathbf{P})\subset X&-K_{\mathbf{P}}\\ \hline
9&\mathbf{P}(1,4,5,10)&20&5&\mathbf{P}(1,1,1,2,4)_{[u_{0}:u_{1}:u_{2}:v:s]}&\text{quintic }(u_{0}s=u_{1}^{5})&v_{5}^{*}\mathcal{O}_{X}(4)\\
10&\mathbf{P}(1,2,6,9)&18&2&\mathbf{P}(1,1,3,5,9)_{[u_{0}:u_{1}:v:s:t]}&10\text{-ic }(u_{0}t=s^{2})&v_{2}^{*}\mathcal{O}_{X}(9)\\
11&\mathbf{P}(1,2,3,6)&12&2&\mathbf{P}(1,1,2,3,3)_{[u_{0}:u_{1}:v:s_{0}:s_{1}]}&\text{quartic }(u_{0}s_{0}=v^{2})&v_{2}^{*}\mathcal{O}_{X}(6)\\
12&\mathbf{P}(1,3,8,12)&24&3&\mathbf{P}(1,1,3,4,8)_{[u_{0}:u_{1}:v:s:t]}&9\text{-ic }(u_{0}t=v^{3})&v_{3}^{*}\mathcal{O}_{X}(8)\\
13&\mathbf{P}(1,6,14,21)&42&7&\mathbf{P}(1,1,2,3,6)_{[u_{0}:u_{1}:v:s:t]}&\text{heptic }(u_{0}t=u_{1}^{7})&v_{7}^{*}\mathcal{O}_{X}(6)\\
14&\mathbf{P}(2,3,10,15)&30&3&\mathbf{P}(1,2,4,5,10)_{[u:v:s:t:r]}&12\text{-ic }(vr=s^{3})&v_{3}^{*}\mathcal{O}_{X}(10)\\ \hline
\end{array}$
Table 3
In each case, the anticanonical embedding
$\mathbf{P}\hookrightarrow\mathbf{P}^{g+1}$ factors as the composite map
$\mathbf{P}\xrightarrow{\;v_{n}\;}X\overset{|\mathcal{O}_{X}(\frac{\sigma}{n})|}{\dashrightarrow}\mathbf{P}^{g+1}.$
By a divisibility criterion, we can check fairly easily that
$\mathcal{O}_{X}(\frac{\sigma}{n})$ is not always basepoint-free. This
criterion is purely numerical: $\mathcal{O}_{X}(\frac{\sigma}{n})$ is
basepoint-free if and only if $\frac{\sigma}{n}$ is divisible by all the
weights of $X$. Namely, it is not the case for #10, #12 and #14, for which the
induced map $X\dashrightarrow\mathbf{P}^{g+1}$ is nonregular.
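Checking the criterion against Table 3 is a one-liner per case (a sketch; the pairs below are copied from Table 3):

```python
cases = {  # case number: (weights of X, sigma/n), from Table 3
    9:  ((1, 1, 1, 2, 4), 4),
    10: ((1, 1, 3, 5, 9), 9),
    11: ((1, 1, 2, 3, 3), 6),
    12: ((1, 1, 3, 4, 8), 8),
    13: ((1, 1, 2, 3, 6), 6),
    14: ((1, 2, 4, 5, 10), 10),
}
for k, (weights, d) in sorted(cases.items()):
    bpf = all(d % a == 0 for a in weights)
    print(f"#{k}: O_X({d}) is", "basepoint-free" if bpf else "not basepoint-free")
# only #10, #12 and #14 fail, matching the cases with nonregular maps
```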
### 3.2 The birational models
The goal now is to exhibit a birational map from $\mathbf{P}$ to another
$3-$dimensional weighted projective space $\mathbf{P}^{\prime}$ which realizes
$S$ as a nongeneral anticanonical divisor of $\mathbf{P}^{\prime}$.
In all cases but #11, the image of $X$ in $\mathbf{P}^{g+1}$ is a cone with a
point as vertex. This follows from the fact that $\frac{\sigma}{n}$ equals the
largest weight of $X$; say $X=\mathbf{P}(d_{0},d_{1},d_{2},d_{3},d_{4})$ with
$\frac{\sigma}{n}=d_{4}$. Then the map given by the linear system
$|\mathcal{O}_{X}(\frac{\sigma}{n})|$ is the following.
$[x_{0}:x_{1}:x_{2}:x_{3}:x_{4}]\in
X\mapsto[\mathtt{f}_{0}:\mathtt{f}_{1}:\cdots:\mathtt{f}_{s}:x_{4}]$
where the $\mathtt{f}_{i}$’s form a basis of the degree $\frac{\sigma}{n}$
homogeneous polynomials in the variables $x_{0},x_{1},x_{2}$ and $x_{3}$.
Hence the equations for the image of $X$ encode the algebraic relations
between the $\mathtt{f}_{i}$’s and do not involve $x_{4}$.
This suggests projecting from the vertex point of this cone to
$\mathbf{P}^{g}$. This acts trivially on $S$ provided that it is general,
since $S\subset\mathbf{P}^{g}$ is a hyperplane section of $\mathbf{P}$.
This yields a birational map
$\varphi:\mathbf{P}\dashrightarrow\mathbf{P}^{\prime}$, where
$\mathbf{P}^{\prime}$ is the weighted projective $3-$space whose weights are
those of $X$ but the last one. In other words, if
$X=\mathbf{P}(d_{0},d_{1},d_{2},d_{3},d_{4})$ with $d_{4}=\frac{\sigma}{n}$,
then $\mathbf{P}^{\prime}=\mathbf{P}(d_{0},d_{1},d_{2},d_{3})$. The
restriction of this map to $S$ is an isomorphism; the image of $S$ is K3 so it
is a (nongeneral) anticanonical divisor of $\mathbf{P}^{\prime}$. We denote by
$\mathcal{L}$ the (noncomplete) linear system whose members are the
anticanonical divisors of $\mathbf{P}^{\prime}$ which are the direct images of
all $D\in|-K_{\mathbf{P}}|$,
$\mathcal{L}=\varphi_{*}|-K_{\mathbf{P}}|\subset|-K_{\mathbf{P}^{\prime}}|,$
so that the surface $\varphi(S)$ is a general member of $\mathcal{L}$.
Since $S\simeq\varphi(S)$ we will drop the notation $\varphi(S)$ for the sake
of brevity and use $S$ instead. Likewise, we will refer to $\varphi(\Gamma)$
as $\Gamma$. As our computations will show, the restriction of $\mathcal{L}$
to $S$ has $|\mathcal{O}_{\mathbf{P}^{\prime}}(\frac{\sigma}{n})|_{S}|$ as its
mobile part. That way, $\Gamma$ is cut out on $\mathbf{P}^{\prime}$ by two
equations of different degrees.
###### Example 3.2.
We have seen in Example 3.1 that the $5-$Veronese map on
$\mathbf{P}=\mathbf{P}(1,4,5,10)$ embeds it in $X=\mathbf{P}(1,1,1,2,4)$. The
anticanonical model of $\mathbf{P}$ factors through
$X\overset{|\mathcal{O}_{X}(4)|}{\longrightarrow}\mathbf{P}^{22}.$
Let $\mathbf{P}^{\prime}=\mathbf{P}(1,1,1,2)$. It is embedded in
$\mathbf{P}^{21}$ by $|\mathcal{O}_{\mathbf{P}^{\prime}}(4)|$, making $X$ a
cone over $\mathbf{P}^{\prime}$. As $\mathbf{P}$ passes through the vertex of
$X$, projecting from the vertex point onto $\mathbf{P}^{\prime}$ induces a
rational map $\varphi$ from $\mathbf{P}$ to $\mathbf{P}^{\prime}$. This map
restricts to an isomorphism on $S$, since $S$ is a general hyperplane section
of $\mathbf{P}$ on which the projection map acts as the identity. In
homogeneous coordinates, we can express $\varphi$ as
$\varphi:[x:y:z:w]\in\mathbf{P}\mapsto[u_{0}:u_{1}:u_{2}:v]=[x^{5}:xy:z:w].$
This is the $5-$Veronese map without its last component $y^{5}$.
###### Lemma 3.3.
The map $\varphi$ given in Example 3.2 is birational.
###### Proof.
The map $\varphi$ is toric, so we can consider the transformation it
represents on the respective fans for $\mathbf{P}$ and $\mathbf{P}^{\prime}$.
The map $\varphi$ is regular outside the point for which $x=z=w=0$. We refer
to this point as $p_{y}$ and call it the indeterminacy point of $\varphi$.
Since it is a toric point, we may consider the toric map corresponding to a
weighted blowup of $\mathbf{P}$ at $p_{y}$. This consists in a subdivision of
the cone of the affine chart $\left\\{y\neq 0\right\\}$.
An algorithm for the construction of fans of weighted projective spaces is
given in Proposition 2.8 of [RT13]. This gives rise to the following fan in
$\mathbf{Z}^{3}$ for the toric variety $\mathbf{P}$.
$\Sigma_{\mathbf{P}}=\mathrm{Fan}\left(\begin{bmatrix}-4\\-5\\-10\end{bmatrix},\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\end{bmatrix}\right)=:\mathrm{Fan}(\mathtt{e}_{x},\mathtt{e}_{y},\mathtt{e}_{z},\mathtt{e}_{w}),$
in the sense that $\Sigma_{\mathbf{P}}$ is the fan whose cones are all the
cones generated by all the strict subsets of the family
$\left\\{\mathtt{e}_{x},\mathtt{e}_{y},\mathtt{e}_{z},\mathtt{e}_{w}\right\\}$.
Here, $\mathtt{e}_{y},\mathtt{e}_{z}$ and $\mathtt{e}_{w}$ form the canonical
basis of $\mathbf{Z}^{3}$ while $\mathtt{e}_{x}$ is the vector $(-4,-5,-10)$.
There is a one-to-one correspondence between the $1-$cones of
$\Sigma_{\mathbf{P}}$ and the toric coordinates of $\mathbf{P}$, so that for
instance $x$ corresponds to $\mathtt{e}_{x}$ by the fact that the cone
$\mathbf{R}_{+}\mathtt{e}_{x}$ encodes the toric hypersurface
$\left\\{x=0\right\\}$ and likewise, the cone
$\mathbf{R}_{+}\mathtt{e}_{y}+\mathbf{R}_{+}\mathtt{e}_{z}+\mathbf{R}_{+}\mathtt{e}_{w}$
encodes the dense toric orbit $\left\\{x\neq 0\right\\}$.
In particular, the indeterminacy point $p_{y}$ of $\varphi$ is the origin of
the affine chart $\left\\{y\neq 0\right\\}$ and a weighted blowup of
$\mathbf{P}$ at this point results in a subdivision of the cone
$\mathbf{R}_{+}\mathtt{e}_{x}+\mathbf{R}_{+}\mathtt{e}_{z}+\mathbf{R}_{+}\mathtt{e}_{w}$,
i.e., adding a new cone of dimension $1$ which is generated by a linear
combination over $\mathbf{N}$ of $\mathtt{e}_{x},\mathtt{e}_{z}$ and
$\mathtt{e}_{w}$.
We know from the same algorithm given in [RT13] that the following is a fan
for the variety $\mathbf{P}^{\prime}=\mathbf{P}(1,1,1,2)$.
$\Sigma_{\mathbf{P}^{\prime}}=\mathrm{Fan}\left(\begin{bmatrix}-1\\-1\\-2\end{bmatrix},\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\end{bmatrix}\right)=:\mathrm{Fan}(\mathtt{e}_{\zeta},\mathtt{e}_{y},\mathtt{e}_{z},\mathtt{e}_{w}).$
Here, $\mathtt{e}_{\zeta}$ refers to the vector $(-1,-1,-2)$.
The goal is then to obtain a multiple of $\mathtt{e}_{\zeta}$ as an element of
$\mathbf{N}\mathtt{e}_{x}+\mathbf{N}\mathtt{e}_{z}+\mathbf{N}\mathtt{e}_{w}$.
This can be done with coefficients $1,1$ and $2$, as the following identity
shows:
$4\begin{bmatrix}-1\\-1\\-2\end{bmatrix}=\begin{bmatrix}-4\\-5\\-10\end{bmatrix}+\begin{bmatrix}0\\1\\0\end{bmatrix}+2\begin{bmatrix}0\\0\\1\end{bmatrix}.$
This gives rise to a variety $\widehat{\mathbf{P}}$ which is the weighted
blowup of $\mathbf{P}$ at $p_{y}$ with weights $1,1$ and $2$ and the fan
associated to $\widehat{\mathbf{P}}$ is
$\mathrm{Fan}(\mathtt{e}_{x},\mathtt{e}_{\zeta},\mathtt{e}_{y},\mathtt{e}_{z},\mathtt{e}_{w})$.
We now refer to the construction of homogeneous coordinates on toric varieties
which is explained in the fifth chapter of [CLS11] and more specifically at
page 205. This allows us to introduce five toric coordinates on
$\widehat{\mathbf{P}}$ which we denote by
$(\mathtt{x},\zeta,\mathtt{y},\mathtt{z},\mathtt{w})$ with the following
grading in $\mathbf{Z}^{2}$, so that
$\widehat{\mathbf{P}}=\mathrm{Proj}(\mathbf{C}[\mathtt{x},\zeta,\mathtt{y},\mathtt{z},\mathtt{w}])$.
$\begin{array}{c|ccccc}
&\mathtt{x}&\zeta&\mathtt{y}&\mathtt{z}&\mathtt{w}\\ \hline
\text{degree in }\mathbf{Z}^{2}&(1,0)&(0,1)&(4,1)&(5,1)&(10,2)
\end{array}$
Besides, $\widehat{\mathbf{P}}$ is also the weighted blowup of
$\mathbf{P}^{\prime}$ along a toric curve, since
$\begin{bmatrix}-4\\-5\\-10\end{bmatrix}=5\begin{bmatrix}-1\\-1\\-2\end{bmatrix}+\begin{bmatrix}1\\0\\0\end{bmatrix},$
in other words, $\mathtt{e}_{x}=5\mathtt{e}_{\zeta}+\mathtt{e}_{y}$, where
$\mathtt{e}_{\zeta}$ and $\mathtt{e}_{y}$ are rays of the fan
$\Sigma_{\mathbf{P}^{\prime}}$.
The blowup map $\varepsilon_{1}$ from $\widehat{\mathbf{P}}$ to $\mathbf{P}$
in homogeneous coordinates is the following
$[\mathtt{x}:\zeta:\mathtt{y}:\mathtt{z}:\mathtt{w}]\mapsto[\mathtt{x}\zeta:\mathtt{y}\zeta^{3}:\mathtt{z}\zeta^{4}:\mathtt{w}\zeta^{8}]\in\mathbf{P}$
which is well defined everywhere and contracts the exceptional divisor
$\left\\{\zeta=0\right\\}$ to the point $p_{y}$. Indeed, fix a point with a
representative $(\mathtt{x},\zeta,\mathtt{y},\mathtt{z},\mathtt{w})$ and
$\zeta^{\nicefrac{{1}}{{4}}}$ a fourth root of $\zeta$, then its image in
$\mathbf{P}$ is
$[\mathtt{x}\zeta^{\nicefrac{{1}}{{4}}}:\mathtt{y}:\mathtt{z}\zeta^{\nicefrac{{1}}{{4}}}:\mathtt{w}\zeta^{\nicefrac{{1}}{{2}}}]$.
On the other hand, the blowup map $\varepsilon_{2}$ from
$\widehat{\mathbf{P}}$ to $\mathbf{P}^{\prime}$ is the following
$[\mathtt{x}:\zeta:\mathtt{y}:\mathtt{z}:\mathtt{w}]\mapsto[\mathtt{x}^{5}\zeta:\mathtt{x}\mathtt{y}:\mathtt{z}:\mathtt{w}],$
contracting the exceptional locus $\left\\{\mathtt{x}=0\right\\}$ to a curve.
As a consequence, one checks from the description of $\varphi$ in homogeneous
coordinates that the following diagram commutes:
$\begin{array}{ccc}&\widehat{\mathbf{P}}&\\ {\scriptstyle\varepsilon_{1}}\swarrow&&\searrow{\scriptstyle\varepsilon_{2}}\\ \mathbf{P}&\overset{\varphi}{\dashrightarrow}&\mathbf{P}^{\prime}\end{array}$
Therefore, $\varphi$ is birational, by the fact that $\varepsilon_{1}$ and
$\varepsilon_{2}$ are. ∎
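The two lattice identities underlying this proof are elementary to verify numerically (a sketch):

```python
import numpy as np

e_x = np.array([-4, -5, -10])         # ray of Sigma_P corresponding to x
e_y, e_z, e_w = np.eye(3, dtype=int)  # rays y, z, w (canonical basis)
e_zeta = np.array([-1, -1, -2])       # ray added by the weighted blowup

# subdivision with weights 1, 1, 2: 4*e_zeta = e_x + e_z + 2*e_w
assert (4 * e_zeta == e_x + e_z + 2 * e_w).all()
# and e_x = 5*e_zeta + e_y, so P-hat also dominates P' = P(1,1,1,2)
assert (e_x == 5 * e_zeta + e_y).all()
```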
Continuing Example 3.2: since $S$ is general in the basepoint-free linear
system $|-K_{\mathbf{P}}|$, it avoids the indeterminacy point of $\varphi$. Its image
being a K3 surface, it is an anticanonical divisor of $\mathbf{P}^{\prime}$,
i.e., a quintic surface in $\mathbf{P}(1,1,1,2)$. Using the description of
$\varphi$ in homogeneous coordinates, we see that $S$ in $\mathbf{P}(1,1,1,2)$
has equation
$\hypertarget{(1)}{}u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=0$ (1)
with $f_{4}$ a general homogeneous polynomial of degree $4$ in the variables
$\mathbf{u}=(u_{0},u_{1},u_{2})$ and $v$. Indeed, such an equation pulls back
to an equation on $\mathbf{P}$ of the form $x^{5}f_{20}(x,y,z,w)=0$, where
$f_{20}$ is a general $20$-ic on $\mathbf{P}$. In other words, the pullback to
$\mathbf{P}$ of $S\subset\mathbf{P}^{\prime}$ is $S+(x^{5})$, where the locus
$x=0$ is contracted by $\varphi$ (its strict transform in
$\widehat{\mathbf{P}}$ is the exceptional divisor of $\varepsilon_{2}$).
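This divisibility can be checked symbolically on any sample quartic (a sketch; the chosen $f_{4}$ is an arbitrary weighted-degree-$4$ polynomial, not the general one):

```python
import sympy as sp

x, y, z, w = sp.symbols("x y z w")
U0, U1, U2, V = sp.symbols("U0 U1 U2 V")

f4 = U0**4 + U1**3 * U2 + U2**2 * V + V**2  # sample quartic on P(1,1,1,2)
S_eq = U0 * f4 + U1**5                       # equation (1) of S

# pull back along phi: [x:y:z:w] -> [x^5 : xy : z : w]
pull = S_eq.subs({U0: x**5, U1: x * y, U2: z, V: w}, simultaneous=True)
quotient, remainder = sp.div(sp.expand(pull), x**5, x)
assert remainder == 0  # the pullback is x^5 times a 20-ic on P(1,4,5,10)
```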
Therefore, $\mathcal{L}\subset|-K_{\mathbf{P}^{\prime}}|$ consists of those
quintic surfaces of the form
$u_{0}f_{4}(\mathbf{u},v)+\lambda u_{1}^{5}=0,$
with $\deg(f_{4})=4$ and $\lambda\in\mathbf{C}$. The surface $S$ being general
in $\mathcal{L}$, $\lambda$ is nonzero and, up to scaling, we may assume
$\lambda=1$ as in (1). The base locus of $\mathcal{L}$ is the curve
$\Delta:=\left\\{u_{0}=u_{1}=0\right\\}$.
###### Lemma 3.4.
In case $\\#9$, given a general $S\in|-K_{\mathbf{P}}|$, the general
$\Gamma\in|-K_{\mathbf{P}}|_{S}|$ is cut out on $S$ in $\mathbf{P}^{\prime}$
by a general quartic.
###### Proof.
Let $S^{\prime}$ be another general member of $\mathcal{L}$, i.e., the image
under $\varphi$ of a general anticanonical divisor of $\mathbf{P}$, which is
the zero locus of $u_{0}f^{\prime}_{4}(\mathbf{u},v)+u_{1}^{5}$ with
$f^{\prime}_{4}$ a homogeneous quartic; then
$S\cap
S^{\prime}=\left\\{u_{0}f_{4}+u_{1}^{5}=u_{0}f^{\prime}_{4}+u_{1}^{5}=0\right\\}=S\cap\left\\{u_{0}(f_{4}-f^{\prime}_{4})=0\right\\},$
and $f_{4}-f^{\prime}_{4}$ is a general quartic of $\mathbf{P}^{\prime}$. This
shows that the restriction $\mathcal{L}|_{S}$ has
$S\cap\left\\{u_{0}=0\right\\}=\Delta$ as its fixed part, and its mobile part
is $|\mathcal{O}_{\mathbf{P}^{\prime}}(4)|_{S}|$. Thus the map from $S$ given
by the restriction of $\mathcal{L}$ is the same map as the one induced by the
quartics of $\mathbf{P}^{\prime}$,
$S\xrightarrow{|\mathcal{O}_{\mathbf{P}^{\prime}}(4)|_{S}|}\mathbf{P}^{21}$
so that the curve $\Gamma$ is the pullback to $S$ of a hyperplane of
$\mathbf{P}^{21}$. Hence, $\Gamma$ is cut out on $S$ by a general quartic of
$\mathbf{P}^{\prime}$. ∎
In conclusion, the curve $\Gamma$ is a complete intersection of degrees $5$
and $4$ given by the two equations
$u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=g_{4}(\mathbf{u},v)=0$
with $g_{4}$ a general quartic on $\mathbf{P}^{\prime}$. This is summed up in
the following commutative diagram.
$\begin{array}{ccccc}
\mathbf{P}&\overset{v_{5}}{\longrightarrow}&X&\overset{|\mathcal{O}_{X}(4)|}{\longrightarrow}&\mathbf{P}^{22}\\
{\scriptstyle\varphi}\big\downarrow&&&&\big\downarrow{\scriptstyle\mathrm{pr}}\\
\mathbf{P}^{\prime}&&\xrightarrow{\;|\mathcal{O}_{\mathbf{P}^{\prime}}(4)|\;}&&\mathbf{P}^{21}
\end{array}$
(the composite $\mathbf{P}\to\mathbf{P}^{22}$ is the anticanonical embedding $|-K_{\mathbf{P}}|$).
Here, $\mathrm{pr}$ is the projection map from the vertex point of the cone
$X$ onto $\mathbf{P}^{\prime}$.
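As a cross-check of the numerology (a sketch using the standard degree formula for weighted complete intersections), the curve $\Gamma$ obtained this way has canonical degree $2g-2=40$ and genus $21$, as announced in Table 2 for case #9:

```python
from fractions import Fraction

# on P' = P(1,1,1,2), the top self-intersection O(1)^3 has degree 1/(1*1*1*2)
deg_O1_cubed = Fraction(1, 1 * 1 * 1 * 2)

# Gamma = {quintic} ∩ {quartic}, polarized by O(4)
deg_Gamma = 5 * 4 * 4 * deg_O1_cubed  # = 40 = 2g - 2
genus = deg_Gamma / 2 + 1
print(deg_Gamma, genus)               # 40, 21 -- matching Table 2 for #9
```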
In all cases from #9 to #14 but #11, we can apply similar arguments leading to
a description of $\Gamma$ as a complete intersection of two different degrees.
All the needed pieces of information are listed in Table 4 and Table 5 below.
As in the example above, $\Delta$ refers to the base locus of $\mathcal{L}$;
this is also the fixed part of $\mathcal{L}|_{S}$. An expression of $\varphi$
in the coordinates $[x:y:z:w]$ of $\mathbf{P}$ is also provided. It makes it
possible to check the given equation for $S$ in $\mathbf{P}^{\prime}$. It also
makes visible the indeterminacy point of $\varphi$, which we denote by $p$.
The proof that $\varphi$ is birational in each case is along the same lines as
the proof of Lemma 3.3.
In the case of $\mathbf{P}(1,4,5,10)$, the expression of $\varphi$ is
$[x^{5}:xy:z:w]$. The indeterminacy point $p$ is the point for which
$x=z=w=0$, commonly denoted by $p_{y}$. In all cases, the indeterminacy point
is such a coordinate point, as displayed in Table 4 below.
$\hypertarget{Table 4}{}\begin{array}{|l|l|l|l|l|}\hline
\#&\mathbf{P}&\mathbf{P}^{\prime}&\text{expression of }\varphi&p\\ \hline
9&\mathbf{P}(1,4,5,10)&\mathbf{P}(1,1,1,2)_{[u_{0}:u_{1}:u_{2}:v]}&[x^{5}:xy:z:w]&p_{y}\\
10&\mathbf{P}(1,2,6,9)&\mathbf{P}(1,1,3,5)_{[u_{0}:u_{1}:v:s]}&[x^{2}:y:z:xw]&p_{w}\\
12&\mathbf{P}(1,3,8,12)&\mathbf{P}(1,1,3,4)_{[u_{0}:u_{1}:v:s]}&[x^{3}:y:xz:w]&p_{z}\\
13&\mathbf{P}(1,6,14,21)&\mathbf{P}(1,1,2,3)_{[u_{0}:u_{1}:v:s]}&[x^{7}:xy:z:w]&p_{y}\\
14&\mathbf{P}(2,3,10,15)&\mathbf{P}(1,2,4,5)_{[u:v:s:t]}&[y:x^{3}:xz:w]&p_{z}\\ \hline
\end{array}$
Table 4
The anticanonical model $\mathbf{P}\subset\mathbf{P}^{g+1}$ is a hypersurface
of the cone $X$ whose vertex point is $p$. The projection map from the point
$p$ to $\mathbf{P}^{\prime}$ restricts to an isomorphism on the general $S$
such that $p\notin S$, since $S$ is cut out by a hyperplane.
$\hypertarget{Table 5}{}\begin{array}{|l|l|l|l|}\hline
\#&\text{equation for }S\text{ in }\mathbf{P}^{\prime}&\Delta&\Gamma\\ \hline
9&u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=0&u_{0}=u_{1}=0&S\cap\mathrm{quartic}\\
10&u_{0}f_{9}(\mathbf{u},v,s)+s^{2}=0&u_{0}=s=0&S\cap 9\mathrm{-ic}\\
12&u_{0}f_{8}(\mathbf{u},v,s)+v^{3}=0&u_{0}=v=0&S\cap 8\mathrm{-ic}\\
13&u_{0}f_{6}(\mathbf{u},v,s)+u_{1}^{7}=0&u_{0}=u_{1}=0&S\cap\mathrm{sextic}\\
14&vf_{10}(u,v,s,t)+s^{3}=0&v=s=0&S\cap 10\mathrm{-ic}\\ \hline
\end{array}$
Table 5
We always denote by $f_{d}$ a general degree $d$ homogeneous polynomial in
accordance with the grading of $\mathbf{P}^{\prime}$.
## 4 Extensions of $\mathbf{P}$
Recall from Theorem 2.3 and the values of $\alpha(\Gamma,K_{\Gamma})$ given in Table 2 that $\mathbf{P}(1,2,3,6)$ admits no extension; therefore, we focus here on items #9 to #14 except #11, and use the description given for $\Gamma$ as a complete intersection of two different degrees in $\mathbf{P}^{\prime}$ to construct an extension of $\mathbf{P}$. In all the following cases except #14, we manage to construct a maximal extension $Y$ as a hypersurface in a weighted projective space of dimension $2+\alpha(\Gamma,K_{\Gamma})$. The last case #14, which is $\mathbf{P}=\mathbf{P}(2,3,10,15)$, will require additional work.
As a consequence of Theorem 2.3, we are assured that if $Y$ contains all the
surface extensions of $\Gamma$, then it is the universal extension of
$\Gamma$.
### 4.1 $\mathbf{P}=\mathbf{P}(1,4,5,10)$
According to Table 4 and Table 5, the curve $\Gamma$ is cut out on
$\mathbf{P}(1,1,1,2)$ with homogeneous coordinates $[u_{0}:u_{1}:u_{2}:v]$ by
the equations
$u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=g_{4}(\mathbf{u},v)=0$
where $f_{4}$ and $g_{4}$ are general homogeneous quartic polynomials.
Consider then the equation
$u_{0}s_{0}+u_{1}s_{1}+u_{2}s_{2}=u_{1}^{5}$
where $s_{0},s_{1}$ and $s_{2}$ are coordinates of weight $4$. This defines a
quintic hypersurface $Y$ in $\mathbf{X}=\mathbf{P}(1^{3},2,4^{3})$.
###### Lemma 4.1.
The variety $Y$ has a model in $\mathbf{P}^{24}$ which is a maximal extension
of $\mathbf{P}$, i.e., it has dimension $5=1+\alpha(\Gamma,K_{\Gamma})$
according to Table 2, contains $\mathbf{P}$ as a linear section, and is not a
cone.
###### Proof.
Consider the linear system $|\mathcal{O}_{\mathbf{X}}(4)|$, whose restriction
to $Y$ is very ample and realizes $Y$ in $\mathbf{P}^{24}$ as a variety of
degree
$[\mathcal{O}_{\mathbf{X}}(5)]\cdot[\mathcal{O}_{\mathbf{X}}(4)]^{5}=\frac{4^{5}\times
5}{4^{3}\times 2}=40=2g-2$. This model is an extension of $\Gamma$, as
$Y\cap\left\\{s_{0}=-f_{4}(\mathbf{u},v),s_{1}=s_{2}=g_{4}(\mathbf{u},v)=0\right\\}=\Gamma.$
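For the reader's convenience, the degree computed above is an instance of the standard formula on a weighted projective space: for a hypersurface of degree $d$ in $\mathbf{P}(a_{0},\dots,a_{n})$,
$[\mathcal{O}(d)]\cdot[\mathcal{O}(k)]^{n-1}=\frac{d\,k^{n-1}}{a_{0}\cdots a_{n}},\quad\text{here }\frac{5\times 4^{5}}{1\times 1\times 1\times 2\times 4^{3}}=\frac{5120}{128}=40.$
The same formula yields the degrees appearing in Lemmas 4.3, 4.5, 4.7 and 4.8 below.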
The fivefold $Y$ is a maximal extension of $\Gamma$ by Theorem 2.3, since it
has dimension $1+\alpha(\Gamma,K_{\Gamma})$ and it contains
$\mathbf{P}=\mathbf{P}(1,4,5,10)$ as a $3-$fold linear section. Indeed, as
indicated in Table 3, $\mathbf{P}$ embeds into $\mathbf{P}(1,1,2,4)$ as the
quintic hypersurface $u_{0}s_{0}=u_{1}^{5}$. It follows that
$\mathbf{P}=Y\cap\left\\{s_{1}=s_{2}=0\right\\}$.
It remains to be proven that $Y$ is not a cone in $\mathbf{P}^{24}$. Its
embedding in $\mathbf{P}^{24}$ is given by the restriction of
$|\mathcal{O}_{\mathbf{X}}(4)|$, whose expression in the weighted coordinates
is
$[u_{0}:u_{1}:u_{2}:v:s_{0}:s_{1}:s_{2}]\mapsto[\mathtt{f}_{0}:\cdots:\mathtt{f}_{r}:s_{0}:s_{1}:s_{2}]$
where the $\mathtt{f}_{i}$’s form a basis of homogeneous quartic polynomials
in $u_{0},u_{1},u_{2}$ and $v$. The variety $\mathbf{P}$ is not a cone and it
is cut out on $Y$ by the two hyperplanes $s_{1}=0$ and $s_{2}=0$. Assume by contradiction that $Y$ is a cone; then it has a hyperplane section of the form
$Y\cap\left\\{\lambda s_{1}+\mu s_{2}=0\right\\}$ which is a cone over
$\mathbf{P}$ with a point as vertex. One of the two coefficients $\lambda,\mu$
is nonzero, so without loss of generality and up to scaling we may assume that
$\mu=1$, and the variety $Y^{\prime}=Y|_{s_{2}=-\lambda s_{1}}$, which is a cone over $\mathbf{P}$, is given by the equation
$u_{0}s_{0}+(u_{1}-\lambda u_{2})s_{1}=u_{1}^{5}$
in $\mathbf{P}(1,1,1,2,4,4)$ with coordinates $[u_{0}:u_{1}:u_{2}:v:s_{0}:s_{1}]$. Let $F$ be the homogeneous quintic
$u_{0}s_{0}+(u_{1}-\lambda u_{2})s_{1}-u_{1}^{5}$, so that the cone
$Y^{\prime}$ is the vanishing locus of $F$. Besides, one has
$Y^{\prime}\cap\left\\{s_{1}=0\right\\}=\mathbf{P}$. We may consider an
automorphism of $\mathbf{P}^{23}$ fixing the hyperplane
$\left\\{s_{1}=0\right\\}$ and sending the vertex point $p$ of $Y^{\prime}$ to
$p_{s_{1}}=\left\\{u_{0}=u_{1}=u_{2}=v=s_{0}=0\right\\}$. The restriction of
this automorphism to $\mathbf{P}(1,1,1,2,4,4)$ is a polynomial automorphism of
the weighted coordinates $[\mathbf{u}:v:s_{0}:s_{1}]$ which would eliminate
the variable $s_{1}$ from $F$, i.e., which would make the affine chart
$Y^{\prime}|_{s_{1}=1}$ an affine cone.
As the polynomial $F$ doesn’t involve $v$, it is only affected by changes of
the variables $u_{0},u_{1},u_{2},s_{0}$ and $s_{1}$. Such a transformation is
as follows
$[u_{0}:u_{1}:u_{2}:s_{0}:s_{1}]\mapsto[A\mathbf{u}:as_{0}+bs_{1}+h_{1}(\mathbf{u},v):cs_{0}+ds_{1}+h_{2}(\mathbf{u},v)]$
with
$A=(A_{ij})_{i,j\in\left\\{0,1,2\right\\}}\in\mathrm{GL}_{3}(\mathbf{C})$,
$ad\neq bc$ and $h_{1},h_{2}$ are homogeneous quartics.
Such a change of variables applied to the equation $F=0$ yields
$\begin{array}{rcl}(A_{10}u_{0}+A_{11}u_{1}+A_{12}u_{2})^{5}&=&(A_{00}u_{0}+A_{01}u_{1}+A_{02}u_{2})(as_{0}+bs_{1}+h_{1})\\
&&+\,(A_{10}u_{0}+A_{11}u_{1}+A_{12}u_{2})(cs_{0}+ds_{1}+h_{2})\\
&&-\,\lambda(A_{20}u_{0}+A_{21}u_{1}+A_{22}u_{2})(cs_{0}+ds_{1}+h_{2}).\end{array}$
The condition that this does not involve $s_{1}$ implies the following:
$A_{00}b+A_{10}d-\lambda A_{20}d=A_{01}b+A_{11}d-\lambda
A_{21}d=A_{02}b+A_{12}d-\lambda A_{22}d=0.$
As $b$ and $d$ cannot be both zero, the rows of $A$ are linearly dependent and thus $\det(A)=0$, a contradiction. ∎
Letting $(\lambda_{0},\lambda_{1},\lambda_{2})$ move in $\mathbf{C}^{3}$, we
get a family of K3 surfaces
$Y\cap\left\\{s_{0}=\lambda_{0}g_{4}(\mathbf{u},v)-f_{4}(\mathbf{u},v),s_{1}=\lambda_{1}g_{4}(\mathbf{u},v),s_{2}=\lambda_{2}g_{4}(\mathbf{u},v)\right\\}$
which are all linear sections of $Y$ and contain $\Gamma$ as a hyperplane
section. Indeed, the curve $\Gamma$ is cut out on all of them by
$\left\\{g_{4}(\mathbf{u},v)=0\right\\}$. Among them, those that are members
of the linear system $\mathcal{L}$ are those parameterized by
$\lambda_{1}=\lambda_{2}=0$.
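Indeed, substituting $s_{0}=\lambda_{0}g_{4}-f_{4}$, $s_{1}=\lambda_{1}g_{4}$, $s_{2}=\lambda_{2}g_{4}$ into the equation of $Y$ shows that these surfaces are the hypersurfaces of $\mathbf{P}^{\prime}$ given by
$u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=(\lambda_{0}u_{0}+\lambda_{1}u_{1}+\lambda_{2}u_{2})\,g_{4}(\mathbf{u},v),$
which all contain $\Gamma$, and which for $\lambda_{1}=\lambda_{2}=0$ have the same shape as the equation of $S$, with $f_{4}$ replaced by $f_{4}-\lambda_{0}g_{4}$.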
###### Lemma 4.2.
The variety $Y$ in $\mathbf{P}^{24}$ is the universal extension of $\Gamma$.
###### Proof.
By Theorem 2.3 we need to show that $Y$ contains all the surface extensions of
$\Gamma$ as linear sections, and that these surfaces in $Y$ are unique up to
projective automorphisms of $\mathbf{P}^{24}$ fixing $\Gamma$.
Given $\lambda=(\lambda_{0},\lambda_{1},\lambda_{2})\in\mathbf{C}^{3}$ as
above, consider the following surface in
$\mathbf{P}^{\prime}=\mathbf{P}(1,1,1,2)$:
$S_{\lambda}=Y\cap\left\\{s_{0}=\lambda_{0}g_{4}(\mathbf{u},v)-f_{4}(\mathbf{u},v),s_{1}=\lambda_{1}g_{4}(\mathbf{u},v),s_{2}=\lambda_{2}g_{4}(\mathbf{u},v)\right\\}.$
It is a linear section of $Y$ and contains $\Gamma$ as a hyperplane section in
$\mathbf{P}^{21}$. Letting $\lambda$ move, we obtain a family indexed by the
affine space $\mathbf{C}^{3}$.
Assume by contradiction that there exists $\lambda$ such that $S_{\lambda}$ is
a cone. Then in particular, it contains a line of $\mathbf{P}^{21}$, i.e., a
curve $L\subset S_{\lambda}$ such that
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(4)]=1$. But $\mathbf{P}^{\prime}$ is
embedded in $\mathbf{P}^{6}$ by the linear system
$|\mathcal{O}_{\mathbf{P}^{\prime}}(2)|$, and we have
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(2)]=\frac{1}{2}$, which is a
contradiction.
Assume now by contradiction that there exist $\lambda\neq\lambda^{\prime}$ in
$\mathbf{C}^{3}$ and an automorphism $\rho$ of $\mathbf{P}^{22}$ which acts
trivially on $\Gamma$ and such that $\rho(S_{\lambda})=S_{\lambda^{\prime}}$.
Let us denote by $\langle S_{\lambda},S_{\lambda^{\prime}}\rangle$ the linear
space spanned by $S_{\lambda}\cup S_{\lambda^{\prime}}$ in $\mathbf{P}^{24}$,
and consider the threefold $T=Y\cap\langle
S_{\lambda},S_{\lambda^{\prime}}\rangle$ which contains $S_{\lambda}$ and
$S_{\lambda^{\prime}}$ as hyperplane sections. This threefold is a cone, or
else it would be spanned by a pencil of surface linear sections of the
universal extension of $\Gamma$ and by Theorem 2.3 we would have
$\rho(S_{\lambda})\neq S_{\lambda^{\prime}}$.
Since $Y$ is a hypersurface of $\mathbf{X}=\mathbf{P}(1^{3},2,4^{3})$ and $T$
is cut out on $Y$ by two quartic hypersurfaces, the threefold $T$ is naturally
realized as a hypersurface of $X=\mathbf{P}(1,1,1,2,4)$. Since $T$ is a cone
in $\mathbf{P}^{22}$, it is covered by lines, i.e., curves $L\subset T$ such
that $L\cdot[\mathcal{O}_{X}(2)]=\frac{1}{2}$.
Let $p$ be the unique basepoint of $\mathcal{O}_{X}(2)$. With respect to the
weighted coordinates $[u_{0}:u_{1}:u_{2}:v:s]$ on $X$, the point $p$ is given
by the equations $u_{0}=u_{1}=u_{2}=v=0$. If $p$ is not the vertex of $T$,
then there is a line $L\subset T$ such that $p\notin L$ and thus the
restriction of $\mathcal{O}_{X}(2)$ to $L$ is Cartier, which is not compatible
with $L\cdot[\mathcal{O}_{X}(2)]=\frac{1}{2}$. If $p$ is the vertex point of
$T$, then the affine chart $T|_{s\neq 0}$ is an affine cone, i.e., the
equation for $T$ in $X$ does not involve the coordinate $s$. But we know from
the defining equation of $Y$ in $\mathbf{X}=\mathbf{P}(1^{3},2,4^{3})$ that it
is not possible.
The conclusion is that $Y$ contains a family of pairwise unique surface
extensions of $\Gamma$ which is parameterized by the affine space
$\mathbf{C}^{3}$. Let $\mathcal{H}$ be the family which parameterizes
$21-$planes $\Lambda\subset\mathbf{P}^{24}$ such that $\Gamma\subset\Lambda$,
and $\mathcal{S}=\mathbf{P}(\mathrm{ker}({}^{t}\Phi_{K_{\Gamma}}))$ the family
of surface extensions of $\Gamma$. Then we have $\mathcal{H}\simeq\mathcal{S}\simeq\mathbf{P}^{3}$, and the map $\mathcal{H}\to\mathcal{S}$ which maps
$\Lambda$ to $Y\cap\Lambda$ is linear (see [CDS20]) and its image contains a
dense open subset of $\mathcal{S}$ by the above. It follows that it is an
isomorphism and therefore, $Y$ is universal by Theorem 2.3. ∎
As a sanity check, let us see what happens if we try to build a larger
extension of $\mathbf{P}$. One might consider adding a coordinate $s_{3}$ of
weight $4$ and the hypersurface
$\ell_{0}s_{0}+\ell_{1}s_{1}+\ell_{2}s_{2}+\ell_{3}s_{3}=u_{1}^{5}$
where $\ell_{0},\ell_{1},\ell_{2},\ell_{3}$ are degree $1$ homogeneous
polynomials in the variables $u_{0},u_{1}$ and $u_{2}$. This is
$6-$dimensional and indeed contains $\Gamma$ as a linear section; however, the
$\ell_{i}$’s are linearly dependent and thus the variety given by the equation
above is a cone over $Y$.
### 4.2 $\mathbf{P}=\mathbf{P}(1,2,6,9)$
By Table 4 and Table 5, the curve $\Gamma$ is given in $\mathbf{P}(1,1,3,5)$
by the following two equations
$u_{0}f_{9}(\mathbf{u},v,s)+s^{2}=g_{9}(\mathbf{u},v,s)=0$
where $f_{9}$ and $g_{9}$ are general homogeneous polynomials of degree $9$.
Adding two coordinates $t_{0}$ and $t_{1}$ of weight $9$, we consider the
$10-$ic hypersurface $Y$ in $\mathbf{X}=\mathbf{P}(1^{2},3,5,9^{2})$ given by
the equation
$u_{0}t_{0}+u_{1}t_{1}=s^{2}.$
###### Lemma 4.3.
The variety $Y$ has a model in $\mathbf{P}^{30}$ which is a maximal extension
of $\mathbf{P}$.
###### Proof.
The linear system $|\mathcal{O}_{\mathbf{X}}(9)|$ has one base point in
$\mathbf{X}$ but its restriction to $Y$ defines an embedding which realizes
$Y$ as a projective variety in $\mathbf{P}^{30}$ of degree $\frac{9^{4}\times
10}{9^{2}\times 5\times 3}=54=2g-2$. It has dimension
$4=1+\alpha(\Gamma,K_{\Gamma})$ by Table 2 and contains $\Gamma$ as a linear
section in $\mathbf{P}^{27}$:
$Y\cap\left\\{t_{0}=-f_{9}(\mathbf{u},v,s),t_{1}=g_{9}(\mathbf{u},v,s)=0\right\\}=\Gamma.$
Besides, $Y$ is not a cone, by the same arguments as those mentioned in the
proof of Lemma 4.1. The fourfold $Y$ is a maximal extension of $\Gamma$.
Recall from Table 3 that $\mathbf{P}$ is the hypersurface of
$\mathbf{P}(1,1,3,5,9)$ given by the equation $u_{0}t_{0}=s^{2}$. This shows
that $\mathbf{P}=Y\cap\left\\{t_{1}=0\right\\}$. ∎
In particular,
$Y\cap\left\\{t_{0}=\lambda_{0}g_{9}(\mathbf{u},v,s)-f_{9}(\mathbf{u},v,s),t_{1}=\lambda_{1}g_{9}(\mathbf{u},v,s)\right\\}$
describes a family of K3 surfaces in $Y$ indexed by
$(\lambda_{0},\lambda_{1})\in\mathbf{C}^{2}$ which all contain $\Gamma$ as a
hyperplane section. Among them, the members of $\mathcal{L}$ are the ones for
which $\lambda_{1}=0$.
###### Lemma 4.4.
The variety $Y$ in $\mathbf{P}^{30}$ is the universal extension of $\Gamma$.
###### Proof.
The proof is along the same lines as the proof of Lemma 4.2. The surface
extensions of $\Gamma\subset\mathbf{P}^{27}$ are parameterized by
$\mathbf{P}^{2}$ and we have a dense family indexed by
$\lambda=(\lambda_{0},\lambda_{1})\in\mathbf{C}^{2}$ of surfaces in
$\mathbf{P}^{\prime}=\mathbf{P}(1,1,3,5)$:
$S_{\lambda}=Y\cap\left\\{t_{0}=\lambda_{0}g_{9}(\mathbf{u},v,s)-f_{9}(\mathbf{u},v,s),t_{1}=\lambda_{1}g_{9}(\mathbf{u},v,s)\right\\},$
all of which are linear sections of $Y$ and contain $\Gamma$ as a hyperplane
section in $\mathbf{P}^{28}$. None of them is a cone over $\Gamma$, since such
a cone would contain a line, i.e., a curve $L\subset S_{\lambda}$ such that
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(9)]=1$ and thus
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(3)]=\frac{1}{3}$, which is not
possible since $\mathcal{O}_{\mathbf{P}^{\prime}}(3)$ induces by restriction
on $S_{\lambda}$ a double cover of $\mathbf{P}(1,1,3)$. Besides, the surfaces
$S_{\lambda}$ are unique up to automorphisms of $\mathbf{P}^{28}$ fixing
$\Gamma$ by a similar argument as in Lemma 4.2 and thus $Y$ is the universal
extension of $\Gamma$. ∎
### 4.3 $\mathbf{P}=\mathbf{P}(1,3,8,12)$
By Table 4 and Table 5, the curve $\Gamma$ is given in $\mathbf{P}(1,1,3,4)$
by the following equations
$u_{0}f_{8}(\mathbf{u},v,s)+v^{3}=g_{8}(\mathbf{u},v,s)=0$
where $f_{8}$ and $g_{8}$ are general homogeneous polynomials of degree $8$.
After adding two coordinates $t_{0}$ and $t_{1}$ of weight $8$, we consider
the $9-$ic hypersurface $Y$ in $\mathbf{X}=\mathbf{P}(1^{2},3,4,8^{2})$ of
equation
$u_{0}t_{0}+u_{1}t_{1}=v^{3}.$
###### Lemma 4.5.
The variety $Y$ has a model in $\mathbf{P}^{27}$ which is a maximal extension
of $\mathbf{P}$.
###### Proof.
It is embedded in $\mathbf{P}^{27}$ by the restriction of the linear system
$|\mathcal{O}_{\mathbf{X}}(8)|$. This model has degree $\frac{8^{4}\times 9}{8^{2}\times 4\times 3}=48=2g-2$ and dimension
$4=1+\alpha(\Gamma,K_{\Gamma})$ by Table 2 and contains $\Gamma$ as a linear
section:
$Y\cap\left\\{t_{0}=-f_{8}(\mathbf{u},v,s),t_{1}=g_{8}(\mathbf{u},v,s)=0\right\\}=\Gamma.$
Hence it is a maximal extension of $\Gamma$. It is also an extension of
$\mathbf{P}$; indeed, we know from Table 3 that $\mathbf{P}$ is the hypersurface $u_{0}t_{0}=v^{3}$ in $\mathbf{P}(1,1,3,4,8)$. This exhibits
$\mathbf{P}$ as a hyperplane section of $Y$, which is
$\mathbf{P}=Y\cap\left\\{t_{1}=0\right\\}$. The fact that $Y$ is not a cone
can be proven in the same way as in Lemma 4.1. ∎
Letting $(\lambda_{0},\lambda_{1})$ move in $\mathbf{C}^{2}$, we get a family
$Y\cap\left\\{t_{0}=\lambda_{0}g_{8}(\mathbf{u},v,s)-f_{8}(\mathbf{u},v,s),t_{1}=\lambda_{1}g_{8}(\mathbf{u},v,s)\right\\}$
of K3 surfaces in $Y$ which contain $\Gamma$ as a hyperplane section. The
surfaces in this family that are members of $\mathcal{L}$ are the ones for
which $\lambda_{1}=0$.
###### Lemma 4.6.
The variety $Y$ in $\mathbf{P}^{27}$ is the universal extension of $\Gamma$.
###### Proof.
The proof is similar to that of Lemma 4.2. The surface extensions of
$\Gamma$ are parameterized by $\mathbf{P}^{2}$ and we have a dense family
indexed by $\lambda=(\lambda_{0},\lambda_{1})\in\mathbf{C}^{2}$ of surfaces in
$\mathbf{P}^{\prime}=\mathbf{P}(1,1,3,4)$:
$S_{\lambda}=Y\cap\left\\{t_{0}=\lambda_{0}g_{8}(\mathbf{u},v,s)-f_{8}(\mathbf{u},v,s),t_{1}=\lambda_{1}g_{8}(\mathbf{u},v,s)\right\\},$
which are linear sections of $Y$ and contain $\Gamma$ as a hyperplane section.
There exists no $\lambda$ such that $S_{\lambda}$ is a cone, since otherwise
it would contain a line of $\mathbf{P}^{25}$, i.e., a curve $L\subset
S_{\lambda}$ such that $L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(8)]=1$ and
thus $L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(4)]=\frac{1}{2}$, which is not
possible since the restriction of $\mathcal{O}_{\mathbf{P}^{\prime}}(4)$ to
$S_{\lambda}$ is basepoint-free. Besides, the surfaces $S_{\lambda}$ are
unique up to automorphisms of $\mathbf{P}^{25}$ fixing $\Gamma$ by a similar
argument as in Lemma 4.2 and thus $Y$ is the universal extension of $\Gamma$.
∎
### 4.4 $\mathbf{P}=\mathbf{P}(1,6,14,21)$
By Table 4 and Table 5, the curve $\Gamma$ is given in $\mathbf{P}(1,1,2,3)$
by the following equations
$u_{0}f_{6}(\mathbf{u},v,s)+u_{1}^{7}=g_{6}(\mathbf{u},v,s)=0$
with $f_{6}$ and $g_{6}$ general homogeneous polynomials of degree $6$. Adding
two coordinates $t_{0}$ and $t_{1}$ of weight $6$, we consider the heptic
hypersurface $Y$ in $\mathbf{X}=\mathbf{P}(1^{2},2,3,6^{2})$ given by
$u_{0}t_{0}+u_{1}t_{1}=u_{1}^{7}.$
###### Lemma 4.7.
The variety $Y$ has a model in $\mathbf{P}^{24}$ which is a maximal extension
of $\mathbf{P}$.
###### Proof.
It is embedded in $\mathbf{P}^{24}$ by restriction of the linear system
$|\mathcal{O}_{\mathbf{X}}(6)|$. This model has degree $\frac{6^{4}\times
7}{6^{2}\times 3\times 2}=42=2g-2$ and contains $\Gamma$ as a linear section:
$Y\cap\left\\{t_{0}=-f_{6}(\mathbf{u},v,s),t_{1}=g_{6}(\mathbf{u},v,s)=0\right\\}=\Gamma.$
Besides, $Y$ is not a cone, by the same arguments as those mentioned in the
proof of Lemma 4.1. By Table 2, it has dimension
$4=1+\alpha(\Gamma,K_{\Gamma})$. Hence, it is a maximal extension of $\Gamma$. It
is also an extension of $\mathbf{P}$: recall from Table 3 that $\mathbf{P}$ is
the hypersurface $u_{0}t_{0}=u_{1}^{7}$ in $\mathbf{P}(1,1,2,3,6)$. As a
consequence, we have the equality $Y\cap\left\\{t_{1}=0\right\\}=\mathbf{P}$.
∎
Letting $(\lambda_{0},\lambda_{1})$ move in $\mathbf{C}^{2}$, we have a family
$Y\cap\left\\{t_{0}=\lambda_{0}g_{6}(\mathbf{u},v,s)-f_{6}(\mathbf{u},v,s),t_{1}=\lambda_{1}g_{6}(\mathbf{u},v,s)\right\\}$
of K3 surfaces in $Y$ which are extensions of $\Gamma$. Those surfaces which
are members of $\mathcal{L}$ are the ones for which $\lambda_{1}=0$. Let
$\lambda=(\lambda_{0},\lambda_{1})$, then the surface given by the
intersection above is the following hypersurface in
$\mathbf{P}^{\prime}=\mathbf{P}(1,1,2,3)$:
$S_{\lambda}=\left\\{u_{1}^{7}+u_{0}f_{6}(\mathbf{u},v,s)=(\lambda_{0}u_{0}+\lambda_{1}u_{1})g_{6}(\mathbf{u},v,s)\right\\},$
so that $\Gamma=S_{\lambda}\cap\left\\{g_{6}(\mathbf{u},v,s)=0\right\\}$. The
question arises whether there exists $\lambda$ such that $S_{\lambda}$ is a
cone over $\Gamma$ in $\mathbf{P}^{22}$. If the answer is no, then $Y$ is the
universal extension of $\Gamma$ since it contains all its surface extensions.
However, the argument used in Lemma 4.2 doesn’t apply here, since a curve
$L\subset S_{\lambda}$ such that
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(6)]=1$ could pass through the base
points of $\mathcal{O}_{\mathbf{P}^{\prime}}(2)$ and
$\mathcal{O}_{\mathbf{P}^{\prime}}(3)$, allowing
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(2)]=\frac{1}{3}$ and
$L\cdot[\mathcal{O}_{\mathbf{P}^{\prime}}(3)]=\frac{1}{2}$.
### 4.5 $\mathbf{P}=\mathbf{P}(2,3,10,15)$
By Table 4 and Table 5, the curve $\Gamma$ is given in $\mathbf{P}(1,2,4,5)$
by the equations
$vf_{10}(u,v,s,t)+s^{3}=g_{10}(u,v,s,t)=0,$
with $f_{10}$ and $g_{10}$ general homogeneous polynomials of degree $10$.
After adding two coordinates $r_{0}$ and $r_{1}$ of weight $10$ we construct
an extension of $\Gamma$ as the hypersurface
$u^{2}r_{0}+vr_{1}=s^{3}$
which we denote by $Y_{1}$, in $\mathbf{X}=\mathbf{P}(1,2,4,5,10^{2})$.
###### Lemma 4.8.
The variety $Y_{1}$ has a model in $\mathbf{P}^{18}$ which is a nonmaximal
extension of $\mathbf{P}$.
###### Proof.
It is embedded in $\mathbf{P}^{18}$ by the restriction of the linear system
$|\mathcal{O}_{\mathbf{X}}(10)|$. This model has degree $\frac{10^{4}\times
12}{10^{2}\times 5\times 4\times 2}=30=2g-2$ and contains $\Gamma$ as a linear
section:
$Y_{1}\cap\left\\{r_{1}=-f_{10}(u,v,s,t),r_{0}=g_{10}(u,v,s,t)=0\right\\}=\Gamma.$
In accordance with the equation $vr_{1}=s^{3}$ which is given for $\mathbf{P}$
in Table 3 as a hypersurface in $\mathbf{P}(1,2,4,5,10)$, one checks that
$Y_{1}\cap\left\\{r_{0}=0\right\\}=\mathbf{P}$. However, $Y_{1}$ has dimension
$4$, while $1+\alpha(\Gamma,K_{\Gamma})=5$, so the extension isn’t maximal. ∎
The next two subsections are devoted to the construction of a maximal
extension. The strategy is to introduce another birational model for
$\mathbf{P}$ to construct another nonmaximal extension $Y_{2}$. The data of
$Y_{1}$ and $Y_{2}$ will allow us to construct in §4.5.2 a maximal extension
of $\mathbf{P}$.
#### 4.5.1 An alternative model for $\mathbf{P}=\mathbf{P}(2,3,10,15)$
Here, we construct $Y_{2}$. Introducing homogeneous coordinates
$[u^{\prime}:v^{\prime}:s^{\prime}:t^{\prime}]$ on the weighted projective
space $\mathbf{P}(1,3,5,9)$, consider the following rational map $\psi$ from
$\mathbf{P}$ to $\mathbf{P}(1,3,5,9)$
$[x:y:z:w]\in\mathbf{P}\mapsto[u^{\prime}:v^{\prime}:s^{\prime}:t^{\prime}]=[x:y^{2}:z:yw].$
The expression of $\psi$ in homogeneous coordinates is obtained from the
$2-$Veronese map $v_{2}$ on $\mathbf{P}$,
$[x:y:z:w]\mapsto[x:y^{2}:z:yw:w^{2}],$
by removing the last component $w^{2}$. This construction is similar to the one for $\varphi$ displayed in Table 4, which was obtained from the $3-$Veronese map $v_{3}$.
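As a quick weight check: the components $x$, $y^{2}$, $z$, $yw$ of $\psi$ have degrees $2$, $6$, $10$, $18$ with respect to the grading of $\mathbf{P}(2,3,10,15)$, and halving these degrees,
$\left(\tfrac{2}{2},\tfrac{6}{2},\tfrac{10}{2},\tfrac{18}{2}\right)=(1,3,5,9),$
recovers the weights of the target; the omitted component $w^{2}$ has degree $30$ and accounts for the weight $15$ in $\mathbf{P}(1,3,5,9,15)$.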
###### Lemma 4.9.
The map $\psi$ is birational and it restricts to an isomorphism on the general
anticanonical divisor of $\mathbf{P}$.
The proof that $\psi$ is birational, which we shall not detail here, consists in resolving the indeterminacy point of $\psi$, as was done in the proof of Lemma 3.3. The argument given for $\varphi:\mathbf{P}\dashrightarrow\mathbf{P}(1,2,4,5)$, which ensured that $\varphi(S)\simeq S$ for a general $S\in|-K_{\mathbf{P}}|$, applies to $\psi$ as well. It revolves around the following commutative diagram.
${\mathbf{P}}$${\mathbf{P}(1,3,5,9,15)}$${\mathbf{P}(1,3,5,9)}$${\mathbf{W}=\mathrm{cone}(\mathbf{V})}$${\mathbf{V}}$$\scriptstyle{v_{2}}$$\scriptstyle{\psi}$$\scriptstyle{|\mathcal{O}(15)|}$$\scriptstyle{|\mathcal{O}(15)|}$$\scriptstyle{\mathrm{pr}}$
Here, $v_{2}$ is the $2-$Veronese map from $\mathbf{P}$ to
$\mathbf{P}(1,3,5,9,15)$,
$-K_{\mathbf{P}}=v_{2}^{*}\mathcal{O}_{\mathbf{P}(1,3,5,9,15)}(15)$,
$\mathbf{W}$ is a cone over $\mathbf{V}$ with a point as vertex and
$\mathrm{pr}$ is the projection map from the vertex point of $\mathbf{W}$ onto
$\mathbf{V}$.
A consequence of this is that $S$ can be realized as a nongeneral
anticanonical divisor of $\mathbf{P}(1,3,5,9)$, namely, one of equation
$v^{\prime}f_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})+t^{\prime 2}=0$
where $f_{15}$ is a homogeneous polynomial of degree $15$. One checks that the
pullback to $\mathbf{P}$ of such a hypersurface is $S+(y^{2})$, and the locus
$y=0$ is contracted by $\psi$.
In $\mathbf{P}(1,3,5,9)$, the curve $\Gamma$ is cut out on $S$ by a general
hypersurface of degree $15$, as the diagram above shows. Hence $\Gamma$ is
given in $\mathbf{P}(1,3,5,9)$ by the following equations
$v^{\prime}f_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})+t^{\prime
2}=g_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})=0$
where $g_{15}$ is a general homogeneous polynomial of degree $15$. Let us add two coordinates $r_{0}^{\prime}$ and $r_{1}^{\prime}$ of weight $15$ and
examine the hypersurface $Y_{2}$ in
$\mathbf{X}^{\prime}=\mathbf{P}(1,3,5,9,15^{2})$ given by the equation
$u^{\prime 3}r_{0}^{\prime}+v^{\prime}r_{1}^{\prime}=t^{\prime 2}.$
###### Lemma 4.10.
The variety $Y_{2}$ has a model in $\mathbf{P}^{18}$ which is also a
nonmaximal extension of $\mathbf{P}$.
###### Proof.
It has dimension $4$ and is embedded in $\mathbf{P}^{18}$ by restriction of
$|\mathcal{O}_{\mathbf{X}^{\prime}}(15)|$. This model contains $\Gamma$ as a
linear section; indeed, given two constants $\lambda_{0}$ and $\lambda_{1}$:
$Y_{2}\cap\left\\{\begin{array}[]{l}r_{0}^{\prime}=\lambda_{0}g_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})\\\
r_{1}^{\prime}=\lambda_{1}g_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})-f_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})\\\
g_{15}(u^{\prime},v^{\prime},s^{\prime},t^{\prime})=0\end{array}\right\\}=\Gamma.$
This is an extension of $\mathbf{P}$ as well. Indeed, we have
$Y_{2}\cap\left\\{r_{0}^{\prime}=0\right\\}=\left\\{v^{\prime}r_{1}^{\prime}=t^{\prime
2}\right\\}=\mathbf{P}$ in $\mathbf{P}(1,3,5,9,15)$. ∎
#### 4.5.2 The maximal extension of $\mathbf{P}(2,3,10,15)$
In the preceding subsections we constructed $Y_{1}$ and $Y_{2}$, two fourfold extensions of $\Gamma$. Now we construct a maximal extension $Y$ of
$\mathbf{P}$, such that $Y$ contains both $Y_{1}$ and $Y_{2}$ as hyperplane
sections in $\mathbf{P}^{19}$. This construction involves a weighted
projective bundle over $\mathbf{P}^{1}$, i.e., a quotient of a vector bundle
such that the fiber is a weighted projective space.
Let $\Lambda=\mathbf{P}^{17}$ be the linear subspace spanned by $\mathbf{P}$
in $\mathbf{P}^{19}$; for $i=1,2$ we have $Y_{i}=Y\cap H_{i}$ where $H_{i}$ is
a hyperplane in $\mathbf{P}^{19}$ such that $\Lambda\subset H_{i}$. The
fourfolds $Y_{1}$ and $Y_{2}$ generate a pencil of hyperplane sections of $Y$
which all contain $\mathbf{P}$.
The construction of $Y$ will require a realization of $Y_{1}$ and $Y_{2}$ as
complete intersections in $\mathbf{P}(1^{2},2,3,5^{3})$. Note that the image
of the $6-$Veronese map $v_{6}$ on $\mathbf{P}(2,3,10,15)$ lies in
$\mathbf{P}(1^{2},2,3,5^{2})$, so one might think that it could be possible to
recover $Y_{1}$ and $Y_{2}$ from $v_{6}$. However, all my attempts to do so were unsuccessful.
On the one hand, $Y_{1}$ is given as the $12-$ic hypersurface in $\mathbf{X}$
of equation $u^{2}r_{0}+vr_{1}=s^{3}$. On the other hand, $Y_{2}$ is the
$18-$ic hypersurface in $\mathbf{X}^{\prime}$ given by the equation $u^{\prime
3}r_{0}^{\prime}+v^{\prime}r_{1}^{\prime}=t^{\prime 2}$. Both $\mathbf{X}$ and
$\mathbf{X}^{\prime}$ can be embedded in $\mathbf{P}(1^{2},2,3,5^{3})$ by the
following Veronese maps.
$(v_{2})_{\mathbf{X}}:\begin{cases}\mathbf{X}=\mathbf{P}(1,2,4,5,10^{2})&\longrightarrow\mathbf{P}(1^{2},2,3,5^{3})\\\
[u:v:s:t:r_{0}:r_{1}]&\longmapsto[u^{2}:v:s:ut:t^{2}:r_{1}:r_{0}].\end{cases}$
$(v_{3})_{\mathbf{X}^{\prime}}:\begin{cases}\mathbf{X}^{\prime}=\mathbf{P}(1,3,5,9,15^{2})&\longrightarrow\mathbf{P}(1^{2},2,3,5^{3})\\\
[u^{\prime}:v^{\prime}:s^{\prime}:t^{\prime}:r_{0}^{\prime}:r_{1}^{\prime}]&\longmapsto[v^{\prime}:u^{\prime
3}:u^{\prime}s^{\prime}:t^{\prime}:r_{1}^{\prime}:s^{\prime
3}:r_{0}^{\prime}].\end{cases}$
We may choose $[U_{0}:U_{1}:V:W:X_{0}:X_{1}:X_{2}]$ as homogeneous coordinates
on $\mathbf{P}(1^{2},2,3,5^{3})$, whose pullbacks by the Veronese maps are
$\begin{array}[]{c|ccccccc}&U_{0}&U_{1}&V&W&X_{0}&X_{1}&X_{2}\\\
\hline\cr\text{pullback to }\mathbf{X}&u^{2}&v&s&ut&t^{2}&r_{1}&r_{0}\\\
\text{pullback to }\mathbf{X}^{\prime}&v^{\prime}&u^{\prime
3}&u^{\prime}s^{\prime}&t^{\prime}&r_{1}^{\prime}&s^{\prime
3}&r_{0}^{\prime}\end{array}$
Hence the above realizes $\mathbf{X}$ (respectively $\mathbf{X}^{\prime}$) as
the hypersurface of equation $U_{0}X_{0}=W^{2}$ (respectively
$U_{1}X_{1}=V^{3}$). The descriptions we know for $Y_{1}$ and $Y_{2}$ in
$\mathbf{X}$ and $\mathbf{X}^{\prime}$ yield
$Y_{1}=\left\\{\begin{array}[]{l}U_{0}X_{0}=W^{2}\\\
U_{1}X_{1}+U_{0}X_{2}=V^{3}\end{array}\right\\}$
and
$Y_{2}=\left\\{\begin{array}[]{l}U_{0}X_{0}+U_{1}X_{2}=W^{2}\\\
U_{1}X_{1}=V^{3}\end{array}\right\\}.$
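These two descriptions can be verified directly on the table of pullbacks above: under $(v_{2})_{\mathbf{X}}$ one has
$U_{0}X_{0}-W^{2}\mapsto u^{2}t^{2}-(ut)^{2}=0\quad\text{and}\quad U_{1}X_{1}+U_{0}X_{2}-V^{3}\mapsto vr_{1}+u^{2}r_{0}-s^{3},$
so the first equation cuts out the image of $\mathbf{X}$ while the second recovers the equation of $Y_{1}$; the case of $Y_{2}$ is analogous, with $U_{1}X_{1}-V^{3}\mapsto u^{\prime 3}s^{\prime 3}-(u^{\prime}s^{\prime})^{3}=0$ and $U_{0}X_{0}+U_{1}X_{2}-W^{2}\mapsto v^{\prime}r_{1}^{\prime}+u^{\prime 3}r_{0}^{\prime}-t^{\prime 2}$.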
Besides, we know from Lemma 4.8 and Lemma 4.10 that $\mathbf{P}$ is cut out on
$Y_{1}$ and $Y_{2}$ by the same hyperplane in $\mathbf{P}^{18}$, namely:
$Y_{1}\cap\left\\{X_{2}=0\right\\}=Y_{2}\cap\left\\{X_{2}=0\right\\}=\mathbf{P}$.
In particular, $\mathbf{P}$ in $\left\\{X_{2}=0\right\\}$ is given by the
equations
$U_{0}X_{0}=W^{2},\qquad U_{1}X_{1}=V^{3}.$ (2)
We introduce now two coordinates $\lambda,\mu$ and consider
$F=\mathrm{Proj}(R)$ with
$R=\mathbf{C}[\lambda,\mu,U_{0},U_{1},V,W,X_{0},X_{1},X_{2}]$
endowed with the following grading in $\mathbf{Z}^{2}$:
$\begin{array}{c|ccccccccc}
&\lambda&\mu&U_{0}&U_{1}&V&W&X_{0}&X_{1}&X_{2}\\
\hline
\text{degree in }\mathbf{Z}^{2}&(1,0)&(1,0)&(0,1)&(0,1)&(0,2)&(0,3)&(0,5)&(0,5)&(-1,5)
\end{array}$
It is a bundle over $\mathbf{P}^{1}$ with fiber $\mathbf{P}(1^{2},2,3,5^{3})$,
whose bundle map to $\mathbf{P}^{1}$ is $[\lambda:\mu]$ and the locus
$X_{2}=0$ is the trivial subbundle
$\mathbf{P}^{1}\times\mathbf{P}(1^{2},2,3,5^{2})$.
There is a morphism $\phi:F\to\mathbf{P}(1^{2},2,3,5^{4})$ which is given in
coordinates by the expression $[U_{0}:U_{1}:V:W:X_{0}:X_{1}:\lambda X_{2}:\mu
X_{2}]$ and the projective model in $\mathbf{P}^{19}$ induced by the linear
system $|\mathcal{O}_{F}(0,5)|$ decomposes as the composite map
${F}$${\mathbf{P}(1^{2},2,3,5^{4})}$${\mathbf{P}^{19}.}$$\scriptstyle{\phi}$$\scriptstyle{|\mathcal{O}(5)|}$
Notice that $\phi$ contracts the trivial bundle
$\mathbf{P}^{1}\times\mathbf{P}(1^{2},2,3,5^{2})$ given by the equation
$X_{2}=0$ onto $\mathbf{P}(1^{2},2,3,5^{2})$. Hence the image of
$\left\\{X_{2}=0\right\\}$ by $|\mathcal{O}_{F}(0,5)|$ is the image of
$\mathbf{P}(1^{2},2,3,5^{2})$ in $\mathbf{P}^{17}$.
Consider the complete intersection $Z$ in $F$ given by the two homogeneous
equations
$U_{0}X_{0}+\lambda U_{1}X_{2}=W^{2},\qquad U_{1}X_{1}+\mu U_{0}X_{2}=V^{3}.$
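Note that $Z$ is well defined in $F$: with respect to the $\mathbf{Z}^{2}-$grading above, every term of both equations is bihomogeneous of bidegree $(0,6)$; for instance
$\deg(\lambda U_{1}X_{2})=(1,0)+(0,1)+(-1,5)=(0,6)=\deg(U_{0}X_{0})=\deg(W^{2}).$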
###### Lemma 4.11.
The image of $Z$ in $\mathbf{P}^{19}$ is not a cone and contains $Y_{1}$ and
$Y_{2}$ as hyperplane sections. By Table 2, it has dimension
$1+\alpha(\Gamma,K_{\Gamma})$, and thus it is a maximal extension of
$\mathbf{P}$.
###### Proof.
The restriction of $Z$ to $\left\\{X_{2}=0\right\\}$ is the complete
intersection in $\mathbf{P}^{1}\times\mathbf{P}(1^{2},2,3,5^{2})$ yielded by
the equations $U_{0}X_{0}=W^{2}$ and $U_{1}X_{1}=V^{3}$. These are the
defining equations for $\mathbf{P}$ in $\mathbf{P}(1^{2},2,3,5^{2})$ as
mentioned in (2), hence
$Z\cap\left\\{X_{2}=0\right\\}=\mathbf{P}^{1}\times\mathbf{P}$
and it is contracted by $\phi$ to $\mathbf{P}$.
Let $Y$ be the image of $Z$ in $\mathbf{P}^{19}$. Let us show that it contains
$Y_{1}$ and $Y_{2}$ as hyperplane sections. On the one hand, $\left\\{\lambda
X_{2}=0\right\\}$ is the pullback to $F$ of a hyperplane in $\mathbf{P}^{19}$,
such that
$Z\cap\left\\{\lambda X_{2}=0\right\\}=Z|_{\lambda=0}+Z|_{X_{2}=0}.$
In the above, $Z|_{X_{2}=0}$ is contracted onto $\mathbf{P}$, and
$Z|_{\lambda=0}$ has image $Y_{1}$.
On the other hand,
$Z\cap\left\\{\mu X_{2}=0\right\\}=Z|_{\mu=0}+Z|_{X_{2}=0}$
where once again, $Z|_{X_{2}=0}$ is contracted onto $\mathbf{P}$, and
$Z|_{\mu=0}$ has image $Y_{2}$.
It remains to be proven that $Y$ is not a cone. The pencil of fourfold
extensions of $\mathbf{P}$ contained in $Y$ consists of all the $Y\cap H$,
where $H\subset\mathbf{P}^{19}$ is a hyperplane such that $\mathbf{P}\subset
H$. These fourfolds are each cut out on $Y$ by $\ell(\lambda,\mu)X_{2}=0$,
with $\ell$ a linear form. Hence, they are complete intersections in
$\mathbf{P}(1^{2},2,3,5^{3})$ of the form
$U_{0}X_{0}+\lambda U_{1}X_{2}=W^{2},\qquad U_{1}X_{1}+\mu U_{0}X_{2}=V^{3},$
where $\lambda$ and $\mu$ are fixed constant coefficients (to be precise,
solutions to $\ell(\lambda,\mu)=0$). Let $Y_{(\lambda,\mu)}$ be the fourfold
section of $Y$ given by the equations above, so that $Y_{1}=Y_{(0,1)}$ and
$Y_{2}=Y_{(1,0)}$. We first notice that $Y_{(\lambda,\mu)}\simeq
Y_{(\alpha\lambda,\beta\mu)}$ for all $\alpha,\beta\in\mathbf{C}^{*}$; indeed,
the automorphism which consists in the change of variables $U_{1}\mapsto\alpha
U_{1},U_{0}\mapsto\beta U_{0},X_{1}\mapsto\frac{1}{\alpha}X_{1}$ and
$X_{0}\mapsto\frac{1}{\beta}X_{0}$ identifies $Y_{(\alpha\lambda,\beta\mu)}$
with $Y_{(\lambda,\mu)}$. Therefore, among the $Y_{(\lambda,\mu)}$ there are
at most three isomorphism classes: $Y_{(1,0)},Y_{(0,1)}$ and $Y_{(1,1)}$. In
particular, the class represented by $Y_{(1,1)}$ is dense in the pencil
$\left\\{Y\cap H\>|\>\mathbf{P}\subset H\right\\}$.
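This identification is immediate to check: substituting $U_{0}\mapsto\beta U_{0}$, $U_{1}\mapsto\alpha U_{1}$, $X_{0}\mapsto\frac{1}{\beta}X_{0}$, $X_{1}\mapsto\frac{1}{\alpha}X_{1}$ into the equations of $Y_{(\lambda,\mu)}$ yields
$U_{0}X_{0}+\alpha\lambda\,U_{1}X_{2}=W^{2},\qquad U_{1}X_{1}+\beta\mu\,U_{0}X_{2}=V^{3},$
which are exactly the equations of $Y_{(\alpha\lambda,\beta\mu)}$.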
Assume now by contradiction that $Y$ is a cone. It contains $\mathbf{P}$ as a
linear section of codimension $2$, and $\mathbf{P}$ is not a cone, so there
are only two possible cases: either the vertex of $Y$ is a point, or it is a
line. In the latter case, all the $Y_{(\lambda,\mu)}$’s are cones over
$\mathbf{P}$ with each time a point as vertex; in the former case, there is a
unique member $Y_{(\lambda,\mu)}$ which is a cone over $\mathbf{P}$. This
unique member is either $Y_{(1,0)}$ or $Y_{(0,1)}$ since the class of
$Y_{(1,1)}$ is dense in the pencil, so without loss of generality we may
assume that $Y_{(1,0)}$ is a cone over $\mathbf{P}$ with a point as vertex
(the rest of the proof is analogous if the cone is $Y_{(0,1)}$).
Let us recall the equations for $Y_{(1,0)}$ in $\mathbf{P}(1^{2},2,3,5^{3})$
with respect to the coordinates $[U_{0}:U_{1}:V:W:X_{0}:X_{1}:X_{2}]$.
$U_{0}X_{0}+U_{1}X_{2}=W^{2},\qquad U_{1}X_{1}=V^{3}.$
We recall as well the fact that $\mathbf{P}$ is the hyperplane section
$Y_{(1,0)}\cap\left\\{X_{2}=0\right\\}$ in $\mathbf{P}^{18}$. There is a
change of variable which fixes the hyperplane $\left\\{X_{2}=0\right\\}$ and
moves the vertex point to $p_{X_{2}}$. This change of variables makes the
affine chart $Y_{(1,0)}|_{X_{2}=1}$ an affine cone, i.e., it eliminates the
variable $X_{2}$ from the equations above.
Indeed, let $F=G=0$ be the defining equations of a cone whose vertex point is
$p_{X_{2}}$, such that $F$ and $G$ are two homogeneous sextics on
$\mathbf{P}(1^{2},2,3,5^{3})$, not divisible by $X_{2}$, and set
$f=F|_{X_{2}=1},g=G|_{X_{2}=1}$. If one of them is not homogeneous, say $f$,
then
$f=f_{6}+\tilde{f}$
where $f_{6}=f_{6}(U_{0},U_{1},V,W,X_{0},X_{1})$ is homogeneous of degree $6$
and $\tilde{f}$ has degree $5$ or less. By the fact that $\deg(\tilde{f})<6$
and $g$ and $f$ are sextics, we have $\tilde{f}\notin(f,g)$ and thus there
exists a point $q=(U_{0},U_{1},V,W,X_{0},X_{1})$ in the affine cone
$\left\\{f=g=0\right\\}$ such that $\tilde{f}(q)\neq 0$. From the condition
$f(q)=0$, we have the equality $f_{6}(q)=-\tilde{f}(q)$, and by the fact that
$\left\\{f=g=0\right\\}$ is an affine cone, then for all
$\lambda\in\mathbf{C}^{*}$ the point
$\lambda\cdot q=(\lambda U_{0},\lambda
U_{1},\lambda^{2}V,\lambda^{3}W,\lambda^{5}X_{0},\lambda^{5}X_{1})$
also belongs to $\left\\{f=g=0\right\\}$. If $\lambda$ is general, we have
$f_{6}(\lambda\cdot
q)=\lambda^{6}f_{6}(q)=-\lambda^{6}\tilde{f}(q)\neq-\tilde{f}(\lambda\cdot q)$
since the equality $\lambda^{6}\tilde{f}(q)=\tilde{f}(\lambda\cdot q)$ is a
polynomial condition of degree $6$ on $\lambda$. This leads to the
contradiction that $\lambda\cdot q\notin\left\\{f=g=0\right\\}$ and the
conclusion that $f$ and $g$ are homogeneous.
As a consequence, there exists a transformation of the form
$[U_{0}:U_{1}:V:W:X_{0}:X_{1}:X_{2}]\mapsto[A\mathbf{U}:V:W:M\mathbf{X}]$
where $A\in GL_{2}(\mathbf{C})$ and $M\in GL_{3}(\mathbf{C})$, which
eliminates $X_{2}$ from the equations of $Y_{(1,0)}$. Let us denote
$A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\text{ and }M=\left(\begin{array}{ccc}\alpha_{0}&\beta_{0}&\gamma_{0}\\ \alpha_{1}&\beta_{1}&\gamma_{1}\\ \alpha_{2}&\beta_{2}&\gamma_{2}\end{array}\right).$
This change of variables applied to the equations of $Y_{(1,0)}$ yields
$\begin{array}{rcl}(aU_{0}+bU_{1})(\alpha_{0}X_{0}+\beta_{0}X_{1}+\gamma_{0}X_{2})+(cU_{0}+dU_{1})(\alpha_{2}X_{0}+\beta_{2}X_{1}+\gamma_{2}X_{2})&=&W^{2},\\ (cU_{0}+dU_{1})(\alpha_{1}X_{0}+\beta_{1}X_{1}+\gamma_{1}X_{2})&=&V^{3}.\end{array}$
By the fact that this does not involve the variable $X_{2}$, we have
$a\gamma_{0}+c\gamma_{2}=c\gamma_{1}=b\gamma_{0}+d\gamma_{2}=d\gamma_{1}=0$
in other words,
$\left(\begin{array}{cc}a&c\\ b&d\end{array}\right)\left(\begin{array}{c}\gamma_{0}\\ \gamma_{2}\end{array}\right)=0\text{ and }\left(\begin{array}{cc}a&c\\ b&d\end{array}\right)\left(\begin{array}{c}0\\ \gamma_{1}\end{array}\right)=0.$
This contradicts either $\det(A)\neq 0$ or $\det(M)\neq 0$. The conclusion is that $Y_{(1,0)}$ is not a cone over $\mathbf{P}$ and therefore, $Y$ is not a cone. ∎
## 5 The primitive polarizations of the K3 surfaces
We recall that the index of the polarized K3 surface
$(S,-K_{\mathbf{P}}|_{S})$, which is denoted by $i_{S}$ in Table 2, is the
divisibility of $-K_{\mathbf{P}}|_{S}$ in the Picard group of $S$, i.e., the
largest integer $r$ such that $-\frac{1}{r}K_{\mathbf{P}}|_{S}$ is a Cartier
divisor on $S$. Here, $\Gamma$ is a general member of
$|-K_{\mathbf{P}}|_{S}|$, and we introduce a general member $C$ of $|-\frac{1}{i_{S}}K_{\mathbf{P}}|_{S}|$, so that $\Gamma=i_{S}C$ in
$\mathrm{Pic}(S)$.
In what follows, we go through all the cases #9 to #14 that are listed
in Table 2 and give a geometric description of the curve $C$.
### 5.1 $\mathbf{P}=\mathbf{P}(1,4,5,10)$
According to Table 2, the index $i_{S}$ is equal to $2$. The genus of $C$ is
$6$. In Table 5, $S$ is explicitly given as the quintic hypersurface
$u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=0$ in $\mathbf{P}(1,1,1,2)$, with $\deg
f_{4}=4$, and $\Gamma$ is cut out on $S$ by a quartic. Therefore
$C=\frac{1}{2}\Gamma$ is cut out by a quadric, so its defining equations are
$u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=g_{2}(\mathbf{u},v)=0$
with $g_{2}$ a general homogeneous quadratic polynomial.
###### Lemma 5.1.
The curve $C$ is isomorphic to a plane quintic with a total inflection point,
i.e., there is a line $\Delta$ which is tangent to $C$ in $\mathbf{P}^{2}$ and
$C|_{\Delta}$ is a quintuple point.
Conversely, any such plane quintic can be realized as a member of
$|-\frac{1}{2}K_{\mathbf{P}}|_{S}|$ for a general $S\in|-K_{\mathbf{P}}|$.
###### Proof.
Up to scaling, we may choose $g_{2}(\mathbf{u},v)=v-\alpha(u_{0},u_{1},u_{2})$
where $\alpha$ is a conic. Hence $C$ is cut out by
$u_{0}f_{4}(u_{0},u_{1},u_{2},v)+u_{1}^{5}=0,v=\alpha(u_{0},u_{1},u_{2}).$
Substituting $\alpha(u_{0},u_{1},u_{2})$ for $v$ in the first equation
naturally realizes $C$ as a quintic in $\mathbf{P}^{2}$ with coordinates
$[u_{0}:u_{1}:u_{2}]$. Moreover, the restriction of $C$ to the line $u_{0}=0$
is a quintuple point. Let $\Delta=\left\\{u_{0}=0\right\\}$,
$C|_{\Delta}=5p$
where $p=\left\\{u_{0}=u_{1}=0\right\\}$ in $\mathbf{P}^{2}$. This is an
inflection point of order $5$ of the curve $C$. The tangent cone of $C$ at
this point is the reduced line $\Delta$ by generality of $f_{4}$ (the curve
$C$ is indeed smooth, since it is a general hyperplane section of $S$ in
$\mathbf{P}^{6}$ and $S$ has isolated singularities).
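Explicitly, writing $F(u_{0},u_{1},u_{2})=u_{0}f_{4}(\mathbf{u},\alpha(\mathbf{u}))+u_{1}^{5}$ for the resulting plane quintic, the partial derivatives at $p=[0:0:1]$ are
$\partial_{u_{0}}F(p)=f_{4}(0,0,1,\alpha(0,0,1)),\qquad\partial_{u_{1}}F(p)=\partial_{u_{2}}F(p)=0,$
and $f_{4}(0,0,1,\alpha(0,0,1))\neq 0$ for a general $f_{4}$, so $C$ is smooth at $p$ with (reduced) tangent line $\left\{u_{0}=0\right\}=\Delta$, as claimed.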
Conversely, let $C^{\prime}$ be such a plane quintic. Up to a choice of
coordinates, $C^{\prime}$ is given by an equation of the form
$u_{0}g_{4}(u_{0},u_{1},u_{2})+u_{1}^{5}=0$ with $\deg g_{4}=4$. It holds that $C^{\prime}$ in its canonical model can be extended by a quintic surface in $\mathbf{P}(1,1,1,2)$, as we now show. Following the construction
that was done in A.2 in the appendix of [LD21], there exists a quintic
polynomial $f_{5}(\mathbf{u},v)$ and a quadric
$\alpha(\mathbf{u})=\alpha(u_{0},u_{1},u_{2})$ such that
$C^{\prime}=\left\\{f_{5}(\mathbf{u},v)=0,v=\alpha(\mathbf{u})\right\\}$
in $\mathbf{P}(1,1,1,2)$. Hence the quintic surface
$\left\\{f_{5}(\mathbf{u},v)=0\right\\}$ in $\mathbf{P}(1,1,1,2)$ is an
extension of $C^{\prime}$.
Here $f_{5}$ and $\alpha$ are such that
$f_{5}|_{v=\alpha(\mathbf{u})}=u_{0}g_{4}(u_{0},u_{1},u_{2})+u_{1}^{5}$. Thus
$f_{5}=u_{1}^{5}+\lambda\beta(\mathbf{u},v)(v-\alpha(\mathbf{u}))\text{ (mod
}u_{0})$ in $\mathbf{C}[u_{0},u_{1},u_{2},v]$ for some constant $\lambda$ and
$\deg\beta=3$. Picking $\lambda=0$ doesn’t change $C^{\prime}$, and yields
$f_{5}=u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}$ for some homogeneous quartic
$f_{4}$ on $\mathbf{P}(1,1,1,2)$.
Hence $C^{\prime}$ is cut out on $S^{\prime}$ by a quadric, where $S^{\prime}$
is the quintic $u_{0}f_{4}(\mathbf{u},v)+u_{1}^{5}=0$. Recall from Table 4
that $\varphi:\mathbf{P}\dashrightarrow\mathbf{P}(1,1,1,2)$ restricts to an
isomorphism on the general member of $|-K_{\mathbf{P}}|$; here $S^{\prime}$ is
a member of $\mathcal{L}=\varphi_{*}|-K_{\mathbf{P}}|$ so it is isomorphic to
a general anticanonical divisor of $\mathbf{P}$. Moreover,
$C^{\prime}=-\frac{1}{2}K_{\mathbf{P}}|_{S^{\prime}}$ in
$\mathrm{Pic}(S^{\prime})$. ∎
### 5.2 $\mathbf{P}=\mathbf{P}(1,2,6,9)$
According to Table 2, the index $i_{S}$ of the polarization
$(S,-K_{\mathbf{P}}|_{S})$ is equal to $3$. The curve $C$ has genus $4$. We
know by Table 5 that $S$ is a degree $10$ hypersurface in
$\mathbf{P}(1,1,3,5)$, of equation $u_{0}f_{9}(\mathbf{u},v,s)+s^{2}=0$ and
$\Gamma$ is the intersection of $S$ with a general $9-$ic. Hence $C$ is cut
out on $S$ by a general cubic of $\mathbf{P}(1,1,3,5)$, i.e., its equations
are
$u_{0}f_{9}(\mathbf{u},v,s)+s^{2}=0,\quad v=\alpha(u_{0},u_{1})$
where $\alpha$ is a homogeneous cubic polynomial on $\mathbf{P}^{1}$.
###### Lemma 5.2.
The curve $C$ is isomorphic to a $10-$ic curve in $\mathbf{P}(1,1,5)$ (i.e., a
quadric section of the cone over a rational quintic curve). In other words,
$C$ is a smooth hyperelliptic curve of genus $4$.
Conversely, any such curve can be realized as a member of
$|-\frac{1}{3}K_{\mathbf{P}}|_{S}|$ for a general $S\in|-K_{\mathbf{P}}|$.
###### Proof.
By the above equations, $C$ is naturally realized as a degree $10$ curve on
$\mathbf{P}(1,1,5)$ with coordinates $[u_{0}:u_{1}:s]$ of the form
$u_{0}h_{9}(\mathbf{u},s)+s^{2}=0$. Hence the linear system
$|\mathcal{O}_{\mathbf{P}(1,1,5)}(1)|$, whose base locus
$\left\\{u_{0}=u_{1}=0\right\\}$ does not meet $C$, restricts to a $g_{2}^{1}$
on $C$.
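As a consistency check, since $C$ avoids the singular point of $\mathbf{P}(1,1,5)$, adjunction gives $K_{C}=\mathcal{O}(-1-1-5+10)|_{C}=\mathcal{O}(3)|_{C}$, and
$[\mathcal{O}(1)]\cdot[\mathcal{O}(10)]=\frac{1\times 10}{1\times 1\times 5}=2,\qquad[\mathcal{O}(3)]\cdot[\mathcal{O}(10)]=\frac{3\times 10}{1\times 1\times 5}=6=2g-2,$
so the pencil $|\mathcal{O}(1)|$ indeed cuts out a $g_{2}^{1}$ on $C$ and $g(C)=4$, as stated above.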
Conversely, let $C^{\prime}$ be a curve in $\mathbf{P}(1,1,5)$ of degree $10$.
In a suitable choice of coordinates, the point $[u_{0}:u_{1}]=[0:1]$ belongs to the branch locus of the double cover $C^{\prime}\to\mathbf{P}^{1}$. Hence the line $\Delta=\left\{u_{0}=0\right\}$ in $\mathbf{P}(1,1,5)$ is tangent to $C^{\prime}$. As a result, the restriction $C^{\prime}|_{u_{0}=0}$ is a double point, which yields the following equation for $C^{\prime}$:
$u_{0}h^{\prime}_{9}(\mathbf{u},s)+s^{2}=0,$
with $\deg h^{\prime}_{9}=9$. Introducing a coordinate $v$ of weight $3$ and a degree $9$ homogeneous polynomial $f^{\prime}_{9}(\mathbf{u},v,s)$ on $\mathbf{P}(1,1,3,5)$ such that $f^{\prime}_{9}(\mathbf{u},v,s)|_{v=\alpha(u_{0},u_{1})}=h^{\prime}_{9}(\mathbf{u},s)$,
we realize $C^{\prime}$ as a complete intersection in $\mathbf{P}(1,1,3,5)$ of
equations
$u_{0}f^{\prime}_{9}(\mathbf{u},v,s)+s^{2}=0,v=\alpha(u_{0},u_{1}).$
This makes $C^{\prime}$ a curve in the surface
$S^{\prime}=\left\\{u_{0}f^{\prime}_{9}(\mathbf{u},v,s)+s^{2}=0\right\\}$,
which is a member of $\mathcal{L}=\varphi_{*}|-K_{\mathbf{P}}|$, meaning that
$S^{\prime}$ is isomorphic to a general anticanonical divisor of $\mathbf{P}$.
Moreover, the moving part of $\mathcal{L}|_{S^{\prime}}$ is the restriction to $S^{\prime}$ of the $9-$ics, and therefore $3C^{\prime}$ is a member of the moving part of $\mathcal{L}|_{S^{\prime}}$. This makes $3C^{\prime}$ the class of the hyperplane sections of $S^{\prime}$ in $\mathbf{P}^{28}$, in other words $3C^{\prime}=-K_{\mathbf{P}}|_{S^{\prime}}$ in $\mathrm{Pic}(S^{\prime})$. This yields $C^{\prime}=-\frac{1}{3}K_{\mathbf{P}}|_{S^{\prime}}$. ∎
### 5.3 $\mathbf{P}=\mathbf{P}(1,2,3,6)$
This is the only example of our list for which
$-\frac{1}{i_{S}}K_{\mathbf{P}}$ is Cartier. The index $i_{S}$ is equal to $2$
and $-\frac{1}{2}K_{\mathbf{P}}$ is the class of sextic surfaces. Hence, $S$
is a general surface of degree $12$ and $C$ is cut out on $S$ by a sextic. The
projective model associated to $-\frac{1}{2}K_{\mathbf{P}}$ in which $C$ is a
hyperplane section of $S$ is a realization of $\mathbf{P}$ as a variety of
degree $(-\frac{1}{2}K_{\mathbf{P}})^{3}=6$ in $\mathbf{P}^{7}$. It factors as
the composite map
${\mathbf{P}}$${\mathbf{P}(1,1,2,3,3)}$${\mathbf{P}^{7}}$$\scriptstyle{v_{2}}$$\scriptstyle{|\mathcal{O}(3)|}$
where $v_{2}$ is the $2-$Veronese embedding mentioned in Table 3.
Let $[u_{0}:u_{1}:v:s_{0}:s_{1}]$ be coordinates on $\mathbf{P}(1,1,2,3,3)$,
then $v_{2}$ realizes $\mathbf{P}$ as the hypersurface $u_{0}s_{0}=v^{2}$, $S$
as the intersection of $\mathbf{P}$ with a general sextic, and $C$ as the
intersection of $S$ with a general cubic.
Consider now $\mathbf{P}^{\prime}:=\mathbf{P}(1,1,1,3)$ with coordinates
$[a_{0}:a_{1}:a_{2}:b]$ and the rational map
$\psi:\mathbf{P}^{\prime}\dashrightarrow\mathbf{P}(1,1,2,3,3)$ given by the
expression
$[u_{0}:u_{1}:v:s_{0}:s_{1}]=[a_{0}:a_{1}:a_{0}a_{2}:a_{0}a_{2}^{2}:b].$
Its image satisfies the same equation as $\mathbf{P}$, hence it is equal to
$\mathbf{P}$. There is a birational map $\varphi$ which makes the following
diagram commute.
${\mathbf{P}}$${\mathbf{P}^{\prime}}$${\mathbf{P}(1,1,2,3,3)}$$\scriptstyle{\varphi}$$\scriptstyle{v_{2}}$$\scriptstyle{\psi}$
One checks from the expression of $v_{2}$ in Table 3 and that of $\psi$ that
$\varphi$ has the following expression with regard to the weighted
coordinates.
$\varphi:[x:y:z:w]\mapsto[a_{0}:a_{1}:a_{2}:b]=[x^{3}:xy:z:x^{3}w].$
It contracts the vanishing locus of $x$ to a point $p$. The divisor
$D=\left\\{x=0\right\\}$ has degree $2$ on $C$; indeed, $C$ is cut out on
$\mathbf{P}$ by two general equations of respective degree $12$ and $6$ with
regard to the grading of $\mathbf{P}$, and
$D\in|\mathcal{O}_{\mathbf{P}}(1)|$, therefore:
$\deg D|_{C}=D\cdot C=[\mathcal{O}_{\mathbf{P}}(1)]\cdot[\mathcal{O}_{\mathbf{P}}(12)]\cdot[\mathcal{O}_{\mathbf{P}}(6)]=\frac{1\times 12\times 6}{1\times 2\times 3\times 6}=2.$
This ensures that the restriction $\varphi|_{C}$ maps $2$ distinct points to
$p$. The indeterminacy locus $x=z=0$ does not meet $C$ by the generality
assumption, so the map $\varphi$ induces by restriction to $C$ a morphism
$C\to\mathbf{P}(1,1,1,3)$ which has degree $1$ and makes $C$ the normalization
of its image.
###### Lemma 5.3.
Let $C_{0}$ be the image of $C$ by $\varphi$. Then $C_{0}$ is isomorphic to a
plane sextic which has an oscnode at a point $p$ and there is a line
$\Delta\subset\mathbf{P}^{2}$ through $p$ such that $C_{0}|_{\Delta}=6p$.
Besides, $C$ is the normalization of $C_{0}$.
###### Proof.
We denote by $\varphi^{-1}$ the rational inverse of $\varphi$. One checks from
the expressions of $v_{2}$ and $\psi$ that $\varphi^{-1}$ is the map
$[a_{0}:a_{1}:a_{2}:b]\in\mathbf{P}^{\prime}\mapsto[x:y:z:w]=[a_{0}:a_{0}a_{1}:a_{0}^{2}a_{2}:a_{0}^{3}b].$
Say $q$ is a fixed point in $\mathbf{P}^{\prime}$ with a chosen representative
$(a_{0},a_{1},a_{2},b)$, and $\sqrt{a_{0}}$ is a square root of $a_{0}$, then
the image of $q$ by $\varphi^{-1}$ is
$[\sqrt{a_{0}}:a_{1}:\sqrt{a_{0}}a_{2}:b]\in\mathbf{P}$
and one checks that the composition $v_{2}\circ\varphi^{-1}$ indeed maps $q$
to $\psi(q)$. Let $\Sigma$ be the direct image of $S$ under $\varphi$; it is
the proper transform of $S$ by $\varphi^{-1}$. From the above, we identify the
exceptional locus of $\varphi^{-1}$ as $\left\\{a_{0}=0\right\\}$ and the
pullback to $\mathbf{P}^{\prime}$ of the general $12-$ic surface $S$ is
$(\varphi^{-1})^{*}S=\left\\{a_{0}^{6}(a_{0}f_{5}(\mathbf{a},b)+\lambda
a_{1}^{6}+\mu a_{1}^{3}b+\gamma b^{2})=0\right\\}$
where $f_{5}(\mathbf{a},b)$ is a quintic on $\mathbf{P}^{\prime}$ and
$\lambda,\mu,\gamma$ are constant. Hence the proper transform $\Sigma$ of $S$
is a nongeneral sextic of $\mathbf{P}^{\prime}=\mathbf{P}(1,1,1,3)$ of
equation
$a_{0}f_{5}(\mathbf{a},b)+\lambda a_{1}^{6}+\mu a_{1}^{3}b+\gamma b^{2}=0.$
On the one hand, we have $h^{0}\mathcal{O}_{\mathbf{P}}(-K_{\mathbf{P}})=27$,
while $h^{0}\mathcal{O}_{\mathbf{P}^{\prime}}(5)+3=30$, so even the quintic
$f_{5}$ can’t be general, and it must belong to a subspace $V\subset
H^{0}(\mathbf{P}^{\prime},\mathcal{O}_{\mathbf{P}^{\prime}}(5))$ with $\dim
V=\dim H^{0}(\mathbf{P}^{\prime},\mathcal{O}_{\mathbf{P}^{\prime}}(5))-3=24$.
Namely, it follows from the expression of $\varphi^{-1}$ that $f_{5}$ does not
involve the monomials $a_{2}^{5},a_{2}^{4}a_{1}$ and $a_{2}^{3}a_{1}^{2}$.
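Both counts are elementary monomial counts; sorting the monomials $x^{a}y^{b}z^{c}w^{d}$ of degree $12$ by the exponent $d=0,1,2$ of $w$, and the quintics of $\mathbf{P}^{\prime}$ by the exponent of $b$, one finds
$h^{0}\mathcal{O}_{\mathbf{P}}(12)=19+7+1=27,\qquad h^{0}\mathcal{O}_{\mathbf{P}^{\prime}}(5)=\binom{7}{2}+\binom{4}{2}=21+6=27.$
As for the three excluded monomials: a monomial $x^{i}y^{j}z^{k}w^{l}$ of degree $12$ pulls back under $\varphi^{-1}$ to $a_{0}^{i+j+2k+3l}a_{1}^{j}a_{2}^{k}b^{l}$, so $a_{0}^{6}\cdot a_{0}a_{2}^{5}$, $a_{0}^{6}\cdot a_{0}a_{1}a_{2}^{4}$ and $a_{0}^{6}\cdot a_{0}a_{1}^{2}a_{2}^{3}$ would require $i=-3$, $i=-2$ and $i=-1$ respectively.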
As $C$ is a hyperplane section of $S$ in $\mathbf{P}^{7}$, it is cut out on
$S$ by a general cubic $\alpha(u_{0},u_{1},v,s_{0},s_{1})=0$ in
$\mathbf{P}(1,1,2,3,3)$, with
$[u_{0}:u_{1}:v:s_{0}:s_{1}]=[a_{0}:a_{1}:a_{0}a_{2}:a_{0}a_{2}^{2}:b],$
and thus its image $C_{0}$ in $\mathbf{P}^{\prime}$ is cut out on $\Sigma$ by
a nongeneral cubic of the form
$b=\tau a_{1}^{3}+a_{0}q(\mathbf{a})$
with $\tau$ a constant and $q$ a general quadric.
$C_{0}=\Sigma\cap\left\\{b=\tau
a_{1}^{3}+a_{0}q(\mathbf{a})\right\\}=\left\\{\begin{array}[]{l}a_{0}f_{5}(\mathbf{a},b)+\lambda
a_{1}^{6}+\mu a_{1}^{3}b+\gamma b^{2}=0\\\ b=\tau
a_{1}^{3}+a_{0}q(\mathbf{a})\end{array}\right\\}.$
This makes $C_{0}$ a sextic curve in $\mathbf{P}^{2}$ with coordinates
$[a_{0}:a_{1}:a_{2}]$ such that $C_{0}|_{a_{0}=0}=(a_{1}^{6})$. Hence the
restriction of $C_{0}$ to the line $\Delta=\left\\{a_{0}=0\right\\}$ is a
sextic point. This point is the intersection point of $C_{0}$ with $\Delta$,
i.e., the point $p=\left\\{a_{0}=a_{1}=0\right\\}$.
The morphism from $C$ to $C_{0}$ has degree $1$ and maps two distinct points
to $p$. Hence $C_{0}$ has two local branches at the point $p$, say $B_{1}$ and
$B_{2}$, such that
$6p=C_{0}|_{\Delta}=B_{1}|_{\Delta}+B_{2}|_{\Delta}.$
This implies that $B_{i}|_{\Delta}=m_{i}p$ for some $m_{i}\in\mathbf{N}$, $i=1,2$, with $m_{1}+m_{2}=6$. There are three cases to distinguish.
$(i)$ If $(m_{1},m_{2})=(1,5)$, then $C_{0}$ is a sextic plane curve with a node at $p$, and $g(C)=\frac{(6-1)(6-2)}{2}-1=9$. But $g(C)=7$ by Table 2.
$(ii)$ If $(m_{1},m_{2})=(2,4)$, then $C_{0}$ has a tacnode at $p$, and $g(C)=\frac{(6-1)(6-2)}{2}-2=8$, which is also a contradiction.
$(iii)$ If $(m_{1},m_{2})=(3,3)$, then $C_{0}$ has an oscnode at $p$, and we indeed have $g(C)=\frac{(6-1)(6-2)}{2}-3=7$.
Hence, around $p$, $C_{0}$ has two local branches which meet at an oscnode. By
the generality assumption, $C$ is smooth, as a consequence of Bertini’s
Theorem. It follows that $p$ is the only singular point of $C_{0}$ and $C$ is
the normalization of $C_{0}$. ∎
### 5.4 $\mathbf{P}=\mathbf{P}(1,3,8,12)$
As stated in Table 2, the curve $C$ has genus $7$. It follows from Table 5
that $S$ is isomorphic to the $9-$ic hypersurface
$u_{0}f_{8}(\mathbf{u},v,s)+v^{3}=0$ in $\mathbf{P}(1,1,3,4)$ with coordinates
$[u_{0}:u_{1}:v:s]$ and $\Gamma$ is cut out on $S$ by a degree $8$
hypersurface of $\mathbf{P}(1,1,3,4)$. The index $i_{S}$ is equal to 2,
therefore $C=\frac{1}{2}\Gamma$ in $\mathrm{Pic}(S)$ and $C$ is the
intersection of $S$ with a general quartic. Such a quartic has equation
$s=\alpha(\mathbf{u},v)$, where $\deg\alpha=4$. Hence $C$ is cut out by the
equations
$u_{0}f_{8}(\mathbf{u},v,s)+v^{3}=0,s=\alpha(\mathbf{u},v).$
###### Lemma 5.4.
The curve $C$ is isomorphic to a degree $9$ curve in $\mathbf{P}(1,1,3)$
(i.e., a cubic section of the cone over a rational cubic curve) with an
inflection point of order $3$ along a line of the ruling, i.e., there is a
line $\Delta\in|\mathcal{O}_{\mathbf{P}(1,1,3)}(1)|$ which is tangent to $C$
at a point $p$, and $C|_{\Delta}=3p$. In other words, $C$ is a trigonal curve
of genus $7$ with a total ramification point.
Conversely, any such curve in $\mathbf{P}(1,1,3)$ is isomorphic to a member of
$|-\frac{1}{2}K_{\mathbf{P}}|_{S}|$ for a general $S\in|-K_{\mathbf{P}}|$.
###### Proof.
By the above equations, $C$ is naturally realized as the curve of degree $9$
in $\mathbf{P}(1,1,3)$ given by the following
$u_{0}h_{8}(\mathbf{u},v)+v^{3}=0$
where $\deg h_{8}=8$. Let $\Delta$ be the line $\left\\{u_{0}=0\right\\}$,
then $C|_{\Delta}=3p$ where $p$ is the point $\left\\{u_{0}=v=0\right\\}$. The
tangent cone of $C$ at $p$ is the reduced line $\Delta$, hence $C$ is smooth
and has an inflection point of order $3$ at $p$.
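As a consistency check, note that $C$ avoids the singular point $[0:0:1]$ of $\mathbf{P}(1,1,3)$ (the equation evaluates there to $v^{3}\neq 0$), so adjunction gives $K_{C}=\mathcal{O}(-1-1-3+9)|_{C}=\mathcal{O}(4)|_{C}$, and
$[\mathcal{O}(1)]\cdot[\mathcal{O}(9)]=\frac{1\times 9}{1\times 1\times 3}=3,\qquad[\mathcal{O}(4)]\cdot[\mathcal{O}(9)]=\frac{4\times 9}{3}=12=2g-2,$
so the ruling cuts out the $g_{3}^{1}$ making $C$ trigonal with total ramification at $p$, and $g(C)=7$ as stated.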
Conversely, let $C^{\prime}\subset\mathbf{P}(1,1,3)$ be such a curve. Then, for a suitable choice of coordinates, it is cut out by an equation of the form
$u_{0}h^{\prime}_{8}(\mathbf{u},v)+v^{3}=0$
with $\deg h^{\prime}_{8}=8$. We introduce a coordinate $s$ of weight $4$ and
a homogeneous degree $8$ polynomial $f^{\prime}_{8}(\mathbf{u},v,s)$ on
$\mathbf{P}(1,1,3,4)$ such that
$f^{\prime}_{8}(\mathbf{u},v,s)|_{s=\alpha(\mathbf{u},v)}=h^{\prime}_{8}(\mathbf{u},v)$.
In this setting, $C^{\prime}$ is the complete intersection in
$\mathbf{P}(1,1,3,4)$ given by
$u_{0}f^{\prime}_{8}(\mathbf{u},v,s)+v^{3}=0,s=\alpha(\mathbf{u},v)$
meaning it is cut out by a general quartic on the surface
$S^{\prime}=\left\\{u_{0}f^{\prime}_{8}(\mathbf{u},v,s)+v^{3}=0\right\\}$.
Recall from Table 4 and Table 5 that the birational map $\varphi$ from
$\mathbf{P}$ to $\mathbf{P}(1,1,3,4)$ restricts to an isomorphism on the
general anticanonical divisors of $\mathbf{P}$. Here $S^{\prime}$ is a member
of $\mathcal{L}=\varphi_{*}|-K_{\mathbf{P}}|$, hence it is isomorphic to a
general member of $|-K_{\mathbf{P}}|$, and furthermore the moving part of
$\mathcal{L}|_{S^{\prime}}$ is the restriction to $S^{\prime}$ of the $8-$ics
of $\mathbf{P}(1,1,3,4)$. As a result, $2C^{\prime}$ is a member of the moving part of $\mathcal{L}|_{S^{\prime}}$, i.e., it is a hyperplane section of $S^{\prime}$ in $\mathbf{P}^{25}$. Hence $2C^{\prime}=-K_{\mathbf{P}}|_{S^{\prime}}$ in $\mathrm{Pic}(S^{\prime})$, and the conclusion follows that $C^{\prime}=-\frac{1}{2}K_{\mathbf{P}}|_{S^{\prime}}$. ∎
### 5.5 $\mathbf{P}=\mathbf{P}(1,6,14,21)$
We know from Table 5 that $S$ is a heptic hypersurface in
$\mathbf{P}(1,1,2,3)$ with coordinates $[u_{0}:u_{1}:v:s]$ of equation
$u_{0}f_{6}(\mathbf{u},v,s)+u_{1}^{7}=0$. In this case, the index $i_{S}$ is
equal to $1$, hence $C$ and $\Gamma$ are two curves of genus $22$ which
represent the same Cartier divisor on $S$, which is cut out by a general
sextic of $\mathbf{P}(1,1,2,3)$. Such a sextic is smooth by generality, since
$\mathbf{P}(1,1,2,3)$ has only two isolated singularities and the linear
system of its sextics doesn’t have base points. Moreover, the general sextic
is a double cover of $\mathbf{P}(1,1,2)$ ramified over a general curve of
degree $6$. It is indeed given by an equation of the form
$s^{2}=h_{6}(\mathbf{u},v)$ with $\deg(h_{6})=6$, and the ramification locus
in $\mathbf{P}(1,1,2)$ is the curve $h_{6}(\mathbf{u},v)=0$. This sextic of
$\mathbf{P}(1,1,2,3)$ is then a Del Pezzo surface of degree $1$ and we shall
denote it by $DP_{1}$. In particular, it can be obtained from $\mathbf{P}^{2}$
by blowing up $8$ general points.
###### Lemma 5.5.
The curve $C$ is the blowup of a plane $21-$ic curve $C_{0}$ at $8$ heptuple points $p_{1},...,p_{8}$. Moreover, if $p$ is the ninth base point of the pencil $\mathcal{P}$ whose members are the plane cubics through the $p_{i}$'s, then there exists a member $\gamma$ of $\mathcal{P}$ such that
$C_{0}|_{\gamma}=7p+7p_{1}+\cdots+7p_{8}.$
Conversely, the proper transform in $DP_{1}$ by the blowup map
$DP_{1}\to\mathbf{P}^{2}$ of any such plane curve $C_{0}$ of degree $21$ is
isomorphic to a member of $|-K_{\mathbf{P}}|_{S}|$ for a general
$S\in|-K_{\mathbf{P}}|$.
###### Proof.
Let $\varepsilon:DP_{1}\to\mathbf{P}^{2}$ be the blowup map,
$H=\varepsilon^{*}[\mathcal{O}_{\mathbf{P}^{2}}(1)]$ the pullback of the lines
and $E_{i}$ the exceptional curve over $p_{i}$, $i\in\\{1,...,8\\}$. On the
one hand, by the discrepancy of $\varepsilon$, we have
$-K_{DP_{1}}=3H-\sum_{i=1}^{8}E_{i}$. On the other hand, the adjunction
formula yields $-K_{DP_{1}}=[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)|_{DP_{1}}]$.
Since $C=DP_{1}\cap S$, where $S$ is a heptic in $\mathbf{P}(1,1,2,3)$, it
holds that
$C=-7K_{DP_{1}}=21H-\sum_{i=1}^{8}7E_{i}$
and thus it is the proper transform of a degree $21$ curve $C_{0}$ in
$\mathbf{P}^{2}$ which passes through the points $p_{i}$, each with
multiplicity $7$.
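As a consistency check, the genus can be recovered by adjunction on $DP_{1}$, using $K_{DP_{1}}^{2}=1$ and $C=-7K_{DP_{1}}$:
$g(C)=1+\frac{1}{2}\left(C^{2}+C\cdot K_{DP_{1}}\right)=1+\frac{1}{2}\left(49-7\right)=22,$
in accordance with the genus recalled at the beginning of this subsection.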
The curve $C$ is given by the following two equations:
$u_{0}f_{6}(\mathbf{u},v,s)+u_{1}^{7}=g_{6}(\mathbf{u},v,s)=0$
for $g_{6}$ a general homogeneous sextic polynomial, so that $g_{6}=0$ is the
defining equation of $DP_{1}$. The base point of $-K_{DP_{1}}$ is the
intersection point of $DP_{1}$ with the locus
$\left\\{u_{0}=u_{1}=0\right\\}$, which we denote by $p$. Let $B$ be the curve
$DP_{1}\cap\left\\{u_{0}=0\right\\}$. It is an anticanonical curve of $DP_{1}$
and by the equations above we have
$C|_{B}=(DP_{1}\cap\left\{u_{0}f_{6}(\mathbf{u},v,s)+u_{1}^{7}=0\right\})|_{u_{0}=0}=DP_{1}|_{u_{0}=0}\cap(u_{1}^{7})|_{u_{0}=0}=7p.$
If $\gamma=\varepsilon(B)$, which is a plane cubic through
$p,p_{1},...,p_{8}$, then the above implies that
$C_{0}|_{\gamma}=7p+7p_{1}+\cdots+7p_{8}$.
Conversely, if $C_{0}^{\prime}$ is a plane $21-$ic curve as in the statement, then its proper transform $C^{\prime}\subset DP_{1}$ under the blowup at the points $p_{i}$ is in the Cartier class $-7K_{DP_{1}}$, and by the surjectivity of
$H^{0}(\mathbf{P}(1,1,2,3),\mathcal{O}_{\mathbf{P}(1,1,2,3)}(7))\twoheadrightarrow
H^{0}(DP_{1},\mathcal{O}_{DP_{1}}(-7K_{DP_{1}}))$
which follows from the restriction short exact sequence
$0\to\mathcal{O}_{\mathbf{P}(1,1,2,3)}\to\mathcal{O}_{\mathbf{P}(1,1,2,3)}(7)\to\mathcal{O}_{DP_{1}}(-7K_{DP_{1}})\to
0$
and the vanishing $h^{1}(\mathcal{O}_{\mathbf{P}(1,1,2,3)})=0$, we have
$C^{\prime}=S^{\prime}\cap DP_{1}$ where $S^{\prime}$ is a heptic surface in
$\mathbf{P}(1,1,2,3)$. It follows that $C^{\prime}$ in $\mathbf{P}(1,1,2,3)$
has equations
$f^{\prime}_{7}(\mathbf{u},v,s)=g_{6}(\mathbf{u},v,s)=0.$
Besides, there exists an anticanonical curve $B^{\prime}$ of $DP_{1}$ such
that $C^{\prime}|_{B^{\prime}}=7p$. We may choose the coordinates
$[u_{0}:u_{1}:v:s]$ on $\mathbf{P}(1,1,2,3)$ such that
$B^{\prime}=DP_{1}\cap\left\\{u_{0}=0\right\\}$. This yields
$f^{\prime}_{7}(\mathbf{u},v,s)|_{u_{0}=g_{6}(\mathbf{u},v,s)=0}=u_{1}^{7}.$
In other words,
$f^{\prime}_{7}=u_{1}^{7}+\lambda\alpha(u_{0},u_{1})g_{6}(\mathbf{u},v,s)(\text{mod
}u_{0})$ for some constant $\lambda$ and $\deg\alpha=1$. We may choose
$\lambda=0$, which does not change $C^{\prime}$ and realizes it as a complete
intersection in $\mathbf{P}(1,1,2,3)$ of the form
$u_{0}f^{\prime}_{6}(\mathbf{u},v,s)+u_{1}^{7}=g_{6}(\mathbf{u},v,s)=0$
with $\deg f^{\prime}_{6}=6$. Thus $C^{\prime}$ lies on the surface
$S^{\prime}=\left\\{u_{0}f^{\prime}_{6}(\mathbf{u},v,s)+u_{1}^{7}=0\right\\}$.
It is a member of $\mathcal{L}=\varphi_{*}|-K_{\mathbf{P}}|$, for $\varphi$
the birational map displayed in Table 4. Therefore $S^{\prime}$ is isomorphic
to a general member of $|-K_{\mathbf{P}}|$ and
$C^{\prime}=-K_{\mathbf{P}}|_{S^{\prime}}$. ∎
### 5.6 $\mathbf{P}=\mathbf{P}(2,3,10,15)$
The index $i_{S}$ is equal to $1$, meaning that both $C$ and $\Gamma$
represent the same Cartier divisor on $S$. The curve $C$ is then the
intersection of $\mathbf{P}$ in $\mathbf{P}^{17}$ with two general
hyperplanes. Recall from (2) that $\mathbf{P}$ is realized as a complete
intersection in $\mathbf{P}(1^{2},2,3,5^{2})$ with coordinates
$[U_{0}:U_{1}:V:W:X_{0}:X_{1}]$ of equations
$U_{0}X_{0}=W^{2},U_{1}X_{1}=V^{3}$
and that its hyperplane sections in $\mathbf{P}^{17}$ are its sections by the
quintics of $\mathbf{P}(1^{2},2,3,5^{2})$. For the sake of notation, let us
use lower case letters instead of upper case ones to designate the
coordinates, as there is no risk of confusion.
The equations that cut out the curve $C$ in $\mathbf{P}$ are thus general
quintics of $\mathbf{P}(1^{2},2,3,5^{2})$, and by the generality assumption we
may choose them to be $x_{0}=f_{5}(\mathbf{u},v,w)$ and
$x_{1}=h_{5}(\mathbf{u},v,w)$ where $f_{5}$ and $h_{5}$ are homogeneous of
degree $5$.
This makes $C$ the curve in $\mathbf{P}(1,1,2,3)$ given by the following two sextic equations:
$v^{3}=u_{1}h_{5}(\mathbf{u},v,w),w^{2}=u_{0}f_{5}(\mathbf{u},v,w).$
In other words, $C$ is the intersection of the two sextic surfaces
$\Sigma=\left\\{v^{3}=u_{1}h_{5}(\mathbf{u},v,w)\right\\}$ and
$\Theta=\left\\{w^{2}=u_{0}f_{5}(\mathbf{u},v,w)\right\\}$. The former is
birational to the surface $\mathbf{P}(1,1,2)$, while the latter is a double
cover of it.
###### Lemma 5.6.
The curve $C$ is the normalization of a $1-$nodal curve $C_{0}$ in
$\mathbf{P}(1,1,2)$ of degree $12$ (i.e., a sextic section of the cone over a
conic). Let $p$ be the node of $C_{0}$, so that $C=Bl_{p}C_{0}$; then $p$ is a smooth point of $\mathbf{P}(1,1,2)$ and the line
$\Delta\in|\mathcal{O}_{\mathbf{P}(1,1,2)}(1)|$ through $p$ is such that
$C_{0}|_{\Delta}=6p$. Furthermore, there is another line
$\Delta^{\prime}\in|\mathcal{O}_{\mathbf{P}(1,1,2)}(1)|$ such that $C_{0}$ is
tri-tangent to $\Delta^{\prime}$, meaning
$C_{0}|_{\Delta^{\prime}}=2p_{1}+2p_{2}+2p_{3}$
where $p_{1},p_{2}$ and $p_{3}$ are general points of $\Delta^{\prime}$. In
other words, $C_{0}$ is a $6-$gonal curve of genus $16$ such that one member
of the $g_{6}^{1}$ is a sextuple point, and another member consists of three
double points.
Conversely, the normalization of any such $12-$ic curve
$C_{0}\subset\mathbf{P}(1,1,2)$ can be realized as a member of
$|-K_{\mathbf{P}}|_{S}|$ for a general $S\in|-K_{\mathbf{P}}|$.
###### Proof.
Note first that $C$ is smooth. Indeed, $S$ has only isolated singularities and
$C$ is a general hyperplane section of $S\subset\mathbf{P}^{16}$ which avoids
the singular points. The smoothness is then a consequence of Bertini’s
Theorem.
Consider the map
$\varepsilon:\mathbf{P}(1,1,2,3)\dashrightarrow\mathbf{P}(1,1,2)$ which maps
$[u_{0}:u_{1}:v:w]$ to $[u_{0}:u_{1}:v]$, well defined at all the points for
which $(u_{0},u_{1},v)\neq(0,0,0)$. Its indeterminacy point
$p_{w}=\left\\{u_{0}=u_{1}=v=0\right\\}$ belongs to $\Sigma$. The fiber
of $\varepsilon$ over a general smooth point of $\mathbf{P}(1,1,2)$, say
$[u_{0}:u_{1}:v]$ fixed with $(u_{0},u_{1})\neq(0,0)$, is a $\mathbf{P}(1,3)$
in $\mathbf{P}(1,1,2,3)$ parameterized by
$[\mathtt{u}:\mathtt{w}]\in\mathbf{P}(1,3)\to[\mathtt{u}u_{0}:\mathtt{u}u_{1}:\mathtt{u}^{2}v:\mathtt{w}]\in\varepsilon^{-1}([u_{0}:u_{1}:v]).$
It follows from the defining equation of $\Sigma$ in $\mathbf{P}(1,1,2,3)$
that its restriction to such a fiber is the locus in $\mathbf{P}(1,3)$ given
by an equation of the form $\mathtt{u}^{3}(\mu\mathtt{u}^{3}+\mathtt{w})=0$,
with $\mu$ a constant. Hence the restriction of $\Sigma$ to this fiber
consists of two points, a general one and the indeterminacy point of
$\varepsilon$ (for which $\mathtt{u}=0$). The only exception is the particular
fiber $\mathfrak{f}:=\left\\{u_{1}=v=0\right\\}$, which is a $\mathbf{P}(1,3)$
with coordinates $[u_{0}:w]$, contained in $\Sigma$ and contracted to a smooth
point of $\mathbf{P}(1,1,2)$.
Meanwhile, the fiber over the singular point $p_{v}\in\mathbf{P}(1,1,2)$,
i.e., all the points $[u_{0}:u_{1}:v:w]$ for which $(u_{0},u_{1})=(0,0)$, is a
$\mathbf{P}(2,3)$ with coordinates $[v:w]$ and the restriction of $\Sigma$ to
this particular fiber is given by the equation $v^{3}=0$. Hence $\Sigma$ meets
this particular fiber only at the indeterminacy point of $\varepsilon$.
Hence, $\varepsilon|_{\Sigma}$ is a birational map from $\Sigma$ to
$\mathbf{P}(1,1,2)$ with indeterminacy point
$p_{w}=\left\\{u_{0}=u_{1}=v=0\right\\}$ and it induces a regular map
$\Sigma-\left\\{p_{w}\right\\}\to\mathbf{P}(1,1,2)-\left\\{p_{v}\right\\}$
which contracts the curve $\mathfrak{f}$ to a point.
Using the parameterization of $\mathfrak{f}$ as a $\mathbf{P}(1,3)$ with
coordinates $[u_{0}:w]$, we know from the equations for $C$ in
$\mathbf{P}(1,1,2,3)$ that the restriction $C|_{\mathfrak{f}}$ is cut out by
the equation $\lambda u_{0}^{3}w+w^{2}=0$, where $\lambda$ is a constant.
Hence, $C$ has degree $2$ on $\mathfrak{f}$ and does not contain the
indeterminacy point $p_{w}$ of $\varepsilon$. This implies in particular that
the restriction $\varepsilon|_{C}$ is a regular map with image a curve
$C_{0}\subset\mathbf{P}(1,1,2)$. Since $\varepsilon|_{\Sigma}$ is birational,
the morphism from $C$ to $C_{0}$ is birational; besides, it maps the two points of $C\cap\mathfrak{f}$ to a single point $p$, so the curve $C_{0}$ has a node at $p$ and $C=Bl_{p}C_{0}$. Since $C$ is smooth, this makes it the
normalization of $C_{0}$.
Furthermore, $C_{0}$ is a member of $|\mathcal{O}_{\mathbf{P}(1,1,2)}(d)|$
such that
$\frac{d}{2}=C_{0}\cdot[\mathcal{O}_{\mathbf{P}(1,1,2)}(1)]=C\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)]=[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)]^{2}\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)]=6.$
The curve $C_{0}$ is then given by a degree $12$ equation on
$\mathbf{P}(1,1,2)$; in other words, it is a sextic section of the cone over a
conic.
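As a quick consistency check of the genus claimed in Lemma 5.6: $C$ is a complete intersection of two sextics in $\mathbf{P}(1,1,2,3)$, so adjunction for weighted complete intersections gives $\omega_{C}=\mathcal{O}_{C}(6+6-1-1-2-3)=\mathcal{O}_{C}(5)$; combined with $\deg\mathcal{O}_{C}(1)=6$ from the computation above, this yields $\deg K_{C}=30$, hence $g(C)=16$.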
In particular, let $\Delta=\left\\{u_{1}=0\right\\}$ be the line through $p$
in $\mathbf{P}(1,1,2)$. By the above, $C_{0}|_{\Delta}$ has degree $6$. But
the intersection of $C_{0}$ with $\Delta$ is the image by $\varepsilon$ of
$C\cap\left\\{u_{1}=0\right\\}$, and since $C\subset\Sigma$ and
$\Sigma\cap\left\\{u_{1}=0\right\\}=\left\\{u_{1}=v=0\right\\}=\mathfrak{f}$
we have
$C_{0}\cap\Delta=\varepsilon(C|_{u_{1}=0})\subset\varepsilon(\mathfrak{f})=\left\\{p\right\\}.$
Hence $C_{0}|_{\Delta}=6p$.
Now let $\Delta^{\prime}$ be the line $\left\\{u_{0}=0\right\\}$ in
$\mathbf{P}(1,1,2)$. The restriction of $C_{0}$ to $\Delta^{\prime}$ is the
image by $\varepsilon$ of $C|_{u_{0}=0}$. Since $C=\Sigma\cap\Theta$ and
$\Theta|_{u_{0}=0}=(w^{2})=2\ell,$
where $\ell$ is the curve $u_{0}=w=0$, which is a $\mathbf{P}(1,2)$ with coordinates $[u_{1}:v]$, and since $\Sigma|_{\ell}$ has degree $3$, we get
$C|_{u_{0}=0}=\Theta|_{u_{0}=0}\cap\Sigma|_{u_{0}=0}=2\ell\cap\Sigma|_{u_{0}=0},$
which consists of three double points. Therefore,
$C_{0}|_{\Delta^{\prime}}=2p_{1}+2p_{2}+2p_{3}$ where $p_{1},p_{2},p_{3}$ are general points of $\Delta^{\prime}$. Since $C_{0}$ is smooth outside its node, the three contact points of $C_{0}$ with $\Delta^{\prime}$ are tangency points.
Conversely, let $C_{0}^{\prime}$ be such a curve in $\mathbf{P}(1,1,2)$ and
$C^{\prime}$ its proper transform in $\Sigma$. As $C_{0}^{\prime}$ is given by
an equation of degree $12$,
$6=C_{0}^{\prime}\cdot[\mathcal{O}_{\mathbf{P}(1,1,2)}(1)]=C^{\prime}\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)]=\Sigma\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)]\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)].$
The surjectivity of
$H^{0}(\mathbf{P}(1,1,2,3),\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6))\to
H^{0}(\Sigma,\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)|_{\Sigma})$ which follows
from the restriction exact sequence
$0\to\mathcal{O}_{\mathbf{P}(1,1,2,3)}\to\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)\to\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)|_{\Sigma}\to
0$
and the vanishing of $h^{1}(\mathcal{O}_{\mathbf{P}(1,1,2,3)})$ implies that
there exists a sextic $\Theta^{\prime}$ of $\mathbf{P}(1,1,2,3)$ such that
$C^{\prime}=\Sigma\cap\Theta^{\prime}$. Let $f_{6}(\mathbf{u},v,w)=0$ be an
equation for $\Theta^{\prime}$. As $C_{0}^{\prime}$ is tri-tangent to the line
$\Delta^{\prime}$, the set $C_{0}^{\prime}\cap\left\\{u_{0}=0\right\\}$ has
cardinality $3$, and moreover
$\deg
C^{\prime}|_{u_{0}=0}=\Theta^{\prime}|_{u_{0}=0}\cdot\Sigma|_{u_{0}=0}=[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(6)]^{2}\cdot[\mathcal{O}_{\mathbf{P}(1,1,2,3)}(1)]=6,$
so necessarily $\Theta^{\prime}|_{u_{0}=0}$ is a nonreduced curve
$2\ell^{\prime}$ of $\mathbf{P}(1,2,3)_{[u_{1}:v:w]}$ such that
$\ell^{\prime}\cdot\Sigma|_{u_{0}=0}=3$. Hence
$f_{6}(\mathbf{u},v,w)|_{u_{0}=0}=h_{3}(u_{1},v,w)^{2}$ with $h_{3}$ a
homogeneous cubic. Up to scaling, we have $h_{3}=w+\alpha(u_{1},v)$ with
$\deg\alpha=3$, and the change of variables $w\mapsto w+\alpha(u_{1},v)$,
which is an automorphism of $\mathbf{P}(1,1,2,3)$, yields
$f_{6}(\mathbf{u},v,w)|_{u_{0}=0}=w^{2}$ and thus
$f_{6}(\mathbf{u},v,w)=u_{0}f^{\prime}_{5}(\mathbf{u},v,w)+w^{2}$ for some
homogeneous quintic $f^{\prime}_{5}$. Hence $C^{\prime}$ lies on the surface
$\Theta^{\prime}$ of equation
$u_{0}f^{\prime}_{5}(\mathbf{u},v,w)+w^{2}=0.$
As a consequence, $C^{\prime}$ is the complete intersection in
$\mathbf{P}(1,1,2,3)$ given by the following equations:
$u_{0}f^{\prime}_{5}(\mathbf{u},v,w)+w^{2}=0,v^{3}=u_{1}h_{5}(\mathbf{u},v,w).$
As $\mathbf{P}$ is cut out in $\mathbf{P}(1^{2},2,3,5^{2})$ by the equations
$u_{0}x_{0}=w^{2}$ and $u_{1}x_{1}=v^{3}$, and the pullbacks of the
hyperplanes of $\mathbf{P}^{17}$ are the quintic hypersurfaces of $\mathbf{P}(1^{2},2,3,5^{2})$, it is apparent that $C^{\prime}$ is a linear section of $\mathbf{P}$ in $\mathbf{P}^{17}$:
$C^{\prime}=\mathbf{P}\cap\left\\{x_{0}=-f^{\prime}_{5}(\mathbf{u},v,w),x_{1}=h_{5}(\mathbf{u},v,w)\right\\}.$
In particular, there exists a general hyperplane section
$S^{\prime}\in|-K_{\mathbf{P}}|$ of $\mathbf{P}$ in $\mathbf{P}^{17}$ such
that $C^{\prime}$ is a hyperplane section of $S^{\prime}$, which yields
$C^{\prime}=-K_{\mathbf{P}}|_{S^{\prime}}$. ∎
INSTITUT DE MATHÉMATIQUES DE TOULOUSE (CNRS UMR 5219), UNIVERSITÉ PAUL
SABATIER, 31062 TOULOUSE CEDEX 9, FRANCE
E-mail address: <EMAIL_ADDRESS>
# Point Intervention: Improving ACVP Test Vector Generation Through Human-Assisted Fuzzing
This work was partially funded by the EU research project SWARMCHESTRATE (No. 101135012) and the Mozilla Corporation.
Iaroslav Gridin (ORCID 0000-0002-1239-1841) and Antonis Michalas (ORCID 0000-0002-0189-3520), Tampere University, Tampere, Finland
###### Abstract
The Automated Cryptographic Validation Protocol (ACVP) is a protocol used to validate software and hardware cryptographic modules automatically. In this work, we present a system providing the method and tools to produce well-covering tests in ACVP format for cryptographic libraries. The system achieves better coverage than existing fuzzing methods by using a hybrid approach to fuzzing cryptographic primitives. In addition, the system offers a framework that makes it easy to create testing modules for cryptographic libraries securely. We demonstrate how this system has been used to improve automated testing of NSS (Network Security Services), a popular cryptographic library, to detect its vulnerabilities, and to suggest ways to improve and further develop the ACVP test format.
###### Keywords:
ACVP; Coverage; Cryptography; Fuzzing; KAT; NSS; Testing
## 1 Introduction
Testing computer software is a sine qua non for ensuring proper functionality. Numerous implementation issues arise from human error. A typical example lies in programs that fail to check input size in order to prevent access attempts past the end of an array. While modern programming languages offer various mechanisms to mitigate such issues, such as advanced type systems, performance is paramount in cryptographic software. As a result, these programs often rely on direct memory access and are typically written in languages like C or C++ [15].
One method of ensuring low-level code correctness is external automated
testing. Automated testing is a process that verifies the execution of a
program without human interaction, thus significantly reducing costs.
Typically, testing involves issuing challenges to the program and validating
responses. For example, a test might involve “encrypting bytes A with key
$\mathsf{K}$ and verifying that the output matches bytes B”. Tests can be
generated on demand or pre-generated, and may verify results against pre-
existing values or another program, or simply confirm that the program executes without errors. A crucial aspect of testing is coverage, which
measures how thoroughly the code is tested to ensure that no portion remains
untested.
Often, well-covering test sets are produced by fuzz testing, or fuzzing for
short. Fuzzing is a form of automatic testing, which repeatedly runs the
target software with mutated input. In recent years, coverage-based grey-box
fuzzing (CGF) has emerged as an effective way of locating security issues
[19]. CGF involves instrumenting the code by inserting special markers to
collect coverage data. It then utilizes changes in coverage as a guide to
identify areas of input to be modified in order to maximize coverage and gain
insights into the structure of the input. However, satisfying complex coverage
criteria through random mutation can be resource-intensive. To address this
challenge, various additional approaches have been explored, such as
leveraging hardware support [18] and employing symbolic execution [23].
### 1.1 Automated Cryptographic Validation Protocol (ACVP)
On July 17, 1995, NIST established the Cryptographic Algorithm Validation
Program (CAVP) and the Cryptographic Module Validation Program (CMVP) in order
to validate cryptographic modules [5]. Originally, all CMVP communications and
operations on submitted cryptographic modules took place exclusively in
testing laboratories. However, as technology advanced, the industry demanded
faster testing cycles than that scheme could provide; the required human involvement resulted in mistakes, and modules could not be monitored after
initial validation. The Automated Cryptographic Validation Testing (ACVT)
project was implemented to reduce the costs and time of validation, while
still providing a high level of assurance. As part of the ACVT project, NIST
has designed and developed the Automated Cryptographic Validation Protocol
(ACVP) [4] – a protocol and software for automated algorithm testing. NIST has
published specifications of the protocol [4] and the source code for the
server [13], and it runs both demo and production servers for remote verification. ACVP is a protocol for automatically testing software or hardware cryptographic modules [3]. It is developed by NIST and includes a portable, human-readable, universal test data format based on JSON [16]. ACVP software is typically categorized as one of three parties: a server, a proxy, and a client.
1. The server side manages various requests, including those for test vectors and validation.
2. A proxy equipped with ACVP enables communication with offline systems and facilitates the transfer of information from the tested system to the server and back. Sometimes, software combines the functions of a proxy and a client.
3. The client component is particularly relevant to users seeking validation for a library. An ACVP client is directly connected to the module undergoing testing and communicates with the ACVP server to request test vectors, report the results of test executions, and seek algorithm validation.
### 1.2 ACVP Tests
ACVP supports many primitives by way of “subspecifications”, which describe a family of cryptographic primitives like “Secure Hash” [1]. ACVP tests do not have a shared defined structure but, as a rule, subspecifications describe similar layouts. Tests are distributed in the form of “vector sets”.
Vector sets contain shared information like the algorithm and the
subspecification revision, and an array of “test groups”. Test groups,
similarly, include shared information specific to the subspecification, and an
array of “tests”. Tests include the rest of the information. The cryptographic
module being tested has to respond to a test vector set with “test vector
response”, which is structured in a similar way. An example of an ACVP vector
set can be seen in Figure 1.
⬇
{
"vsId": 805548,
"algorithm": "ACVP-AES-GCM",
"revision": "1.0",
"isSample": true,
"testGroups": [{
"tgId": 1,
"testType": "AFT",
"direction": "encrypt",
"keyLen": 128,
"ivLen": 96,
"ivGen": "external",
"ivGenMode": "8.2.1",
"payloadLen": 256,
"aadLen": 120,
"tagLen": 104,
"tests": [{
"tcId": 1,
"pt": "28E3FB…9809",
"key": "C19A…AD2",
"aad": "E9FB…1B",
"iv": "C4…DEFB"
}]
}]
}
Figure 1: Example of an ACVP test vector set, obtained from the ACVP demo server.
### 1.3 Contributions
The core contribution of this work lies in the development of acvp-rust – a
comprehensive system designed to generate tests for cryptographic libraries.
This system features a human-readable, flexible, and universal format,
facilitating seamless integration into existing workflows. Several tools
interface with the ACVP (e.g. acvpproxy [11]) or work with cryptographic
libraries to run vector sets (e.g. acvpparser [11]) or even support both (see
libacvp [6]). However, these tools are predominantly coded in C, posing
challenges in terms of extensibility and complexity. Given the need for
precise handling of ACVP tests and seamless integration with complementary
tools for program execution analysis, we opted to develop our own library in
Rust. Rust is renowned for its strong typing and security-focused design,
and hence aligns seamlessly with our objectives, ensuring robustness and
efficiency in our implementation efforts.
The core contributions of the paper can be summarized as follows:
1. Development of a software framework for producing and running test vector sets tailored for cryptographic libraries.
2. Introduction of a methodology leveraging human assistance to enhance the framework’s capability to generate comprehensive test vectors.
3. Proposal of two enhancements to the ACVP test vector format, along with the introduction of novel subspecifications for ACVP.
4. Completion of extensive experiments that allowed us to trace undiscovered bugs in Mozilla’s NSS cryptographic library (https://firefox-source-docs.mozilla.org/security/nss/index.html). This serves as proof that the framework we designed and developed facilitates the detection of previously unknown bugs.
### 1.4 Organization
The rest of the paper is organized as follows: Section 2 introduces the key
fuzzing tools that closely align with our research objectives. Section 3
provides an overview of acvp-rust, detailing its architecture and design
decisions. Section 4 illustrates the discovery of bugs in the cryptographic
library NSS through the utilization of acvp-rust, while Section 5 analyzes its ability to achieve enhanced code coverage. In Section 6 we assess the ACVP
system and its testing format, offering suggestions for enhancements. Finally,
Section 7 concludes the paper and outlines potential future research
directions to further develop acvp-rust.
## 2 Related Work
Fuzzing is a constantly developing field. Several competing mature coverage-
guided fuzzers are being improved and multiple projects increase the speed and
quality of fuzzing in specific areas or conditions. Here are some examples of
popular coverage-guided fuzzers and recent novel fuzzing techniques.
AFL++ [20] is a community-driven open-source fuzzing tool. AFL++ was created from community patches to the original AFL, which had been unmaintained for 18 months while remaining popular among researchers. AFL++’s fuzzing is coverage-guided: it uses feedback from the executed code to mutate the input. Similar to libFuzzer [10], AFL++ features a “Custom Mutator API” (https://aflplus.plus/docs/custom_mutators/) which allows users to supply their own functions that modify the input within given limitations, to bypass early failure points. AFL++ uses many sophisticated methods to automatically discover new code paths, some of which are listed in the referenced paper. AFL++ automatically produces good coverage, but often still fails to produce deep coverage when applied to cryptographic software, due to its random nature and the complexity of the conditions used in cryptography. As shown in Section 4, acvp-rust provides an improvement over a greybox fuzzer through hybrid fuzzing: the resulting fuzzer is able to proceed through typical roadblocks.
LibFuzzer [10] is a fuzzing tool integrated into the LLVM compiler
infrastructure. LLVM [21] is a widely used compiler framework for several
languages, which includes debugging, profiling, and other related tools.
LibFuzzer is a coverage-guided fuzzer, using LLVM to inspect running code and
measure coverage. LibFuzzer can perform structure-aware fuzzing, allowing
users to supply a “mutator” that ensures the output has a specific structure.
LibFuzzer can interact with other LLVM tools, like sanitizers that help
discover issues such as memory management mistakes in running code. LibFuzzer
can produce a well-covering corpus of outputs, similar to AFL++, according to
tests ran by FuzzBench project [22], but as other fuzzers, it struggles with
complex roadblocks, which are unlikely to be solved by random output
generation. In this paper, we build on top of the fully automatic fuzzer to
provide a framework in order to augment its output with human input:
roadblocks which are by their nature difficult for a fuzzer to overcome are
identified and solved by the human operator.
Fuzztruction [14] presents a way of generating better outputs by mutating the program that normally produces the target format. This allows already-written code that generates the structure to be reused, and the resulting fuzzer outperforms normal coverage-guided fuzzers like AFL++. However, a producing program must exist, and random modification of its logic has limits. Our work is independent of the type of software available and relies on interactive adjustment of the structure to meet roadblocks, instead of automatic random modification of the producer.
Carpetfuzz [24] uses natural language processing to extract relationships
between program options from documentation. This data is then processed into
inputs likely to elicit abnormal behavior from the program. This approach to
fuzzing is novel and has helped uncover multiple bugs, though it relies on
natural language documentation being present and covering the options we are
interested in. Our work does not rely on anything but the code itself and
covers different use cases. Additionally, it is not restricted to command line
options.
## 3 Automatic Test Generation Framework
ACVP includes a portable and universal test format. However, there is still a
need for software that allows it to be adapted quickly, easily, and reliably to
cryptographic libraries. We introduce acvp-rust, a framework for producing and
running ACVP test vectors. This framework can generate test vectors with
fuzzing, using code coverage feedback from cryptographic libraries, or run
test vectors to validate these libraries. We designed acvp-rust to be modular
and extensible in order to facilitate the addition of ACVP subspecifications,
cryptographic library modules and instrumentations, while keeping the
resulting code maintainable.
### 3.1 Architecture
Figure 2: Structure of acvp-rust
acvp-rust consists of two main parts, the “runners” and the “library”.
“Runners” are adaptors that encapsulate third-party libraries or other
cryptographic modules under inspection. These provide a common interface and
can be used in any combination to produce or run test vectors. “Library” is
the shared logic that parses ACVP tests and handles the runners. Runner and
library are different processes; thus their execution is independent, and the library can handle any kind of unexpected behavior from a runner. Using acvp-
rust, users can execute test vectors on a runner to validate the module or
fuzz the runner’s library to generate a well-covering test vector and check
the cryptographic library for memory issues, crashes, or other unexpected
behavior.
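To make this separation concrete, the following is a minimal sketch of what a runner-side interface could look like. The trait, method names, and error type here are illustrative assumptions for exposition only, not the actual acvp-rust API.
⬇
// Hypothetical sketch, not the real acvp-rust interface: a runner wraps
// one cryptographic module and exposes a uniform way to run a test case.
pub trait Runner {
    /// Algorithm identifiers this runner handles, e.g. "ACVP-AES-GCM".
    fn algorithms(&self) -> Vec<String>;

    /// Execute one test case (given as JSON) and return the raw response
    /// fields, or an error describing a failure in the wrapped module.
    fn run_test(&mut self, algorithm: &str, test_json: &str)
        -> Result<String, RunnerError>;
}

#[derive(Debug)]
pub enum RunnerError {
    UnsupportedAlgorithm,
    ModuleFailure(String),
}
Keeping the interface this narrow is what allows the library to treat a runner as an untrusted, separate process: anything unexpected on the runner side surfaces as a reportable error rather than corrupting the library’s state.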
As a result, acvp-rust can fuzz and instrument, without much adaptation, any library that can be compiled with LLVM, which supports many modern languages.
During fuzzing, memory handling can be checked by some of the sanitizers
supported by LLVM. “Library” implements multiple ACVP subspecifications,
contains tools to easily implement more, and routines for shared functionality
required for related tasks. Integration of libFuzzer into the Rust ecosystem is provided by the cargo-fuzz project [7], which facilitates fuzzing Rust code or any other code linked to Rust.
LibFuzzer can be combined with multiple sanitizers, i.e. tools that instrument
the code to detect possible issues. During our fuzzing, we used the ASan sanitizer [2], which can detect improper memory handling while being compatible with most
code.
### 3.2 Hybrid Fuzzing
Figure 3: Flowchart of the hybrid fuzzing process.
Fuzzing tends to uncover many errors caused by unexpected input, but when applied to cryptography or other kinds of highly structured input, it has difficulty producing deep-reaching coverage, as most inputs are discarded early. To help with this, acvp-rust uses hybrid fuzzing. The method combines a simple bytestring-mutating fuzzer, libFuzzer, with a domain-specific test case translator that uses bits mutated by the fuzzer to decide whether to produce restricted inputs satisfying specific conditions in the code. LibFuzzer can learn from increases in code coverage and keep the mutations that provide the increase. However, with the specific conditions found in cryptographic protocols, it can take a long time for the fuzzer to randomly produce a matching input. Therefore, acvp-rust test generators introduce special restrictions based on bit flips, like this:
⬇
let salt_len = if Arbitrary::arbitrary(u)? {
u64::arbitrary(u)? % hash_alg.digest_len()
} else {
u64::arbitrary(u)?
};
Here, the human operator added a condition based on a bit taken from the randomly mutated data to restrict the salt length to the digest length, helping the input avoid failing an early check in the library code. The coverage produced by the fuzzer indicates what is needed to increase it further, and a corresponding constraint can then be introduced into the test case generator (for an example see Figure 4).
Figure 4: Example of an opportunity to add a code constraint: fuzzer fails to
satisfy a condition
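For concreteness, here is a self-contained sketch of such a constrained generator built on the arbitrary crate. The test-case fields and the digest-length constant are illustrative assumptions for this sketch, not the actual acvp-rust definitions.
⬇
use arbitrary::{Arbitrary, Result, Unstructured};

// Illustrative sketch only: a simplified RSA-PSS test case. One
// fuzzer-controlled bit decides whether the salt length is drawn freely
// or restricted so that the input survives an early validity check.
const DIGEST_LEN: u64 = 32; // assumption: SHA-256 digest length

pub struct PssTestCase {
    pub salt_len: u64,
    pub message: Vec<u8>,
}

impl<'a> Arbitrary<'a> for PssTestCase {
    fn arbitrary(u: &mut Unstructured<'a>) -> Result<Self> {
        // Constraint added by the human operator after reading coverage:
        let salt_len = if bool::arbitrary(u)? {
            u64::arbitrary(u)? % DIGEST_LEN // restricted: passes the check
        } else {
            u64::arbitrary(u)? // unrestricted: exercises the error path
        };
        let message = Vec::<u8>::arbitrary(u)?;
        Ok(PssTestCase { salt_len, message })
    }
}
Because both branches remain reachable, the fuzzer still exercises the library’s error handling while also reaching the code behind the check.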
The resulting approach combines the strengths of manual testing and fuzzing: the fuzzer automatically finds deep-reaching inputs wherever possible, while manual intervention helps with the demanding parts. The resulting test vector set can then be used for another library: with it as a starting point, it may be extended to cover that library’s special cases. Thus, a test generator developed with one library will help provide better coverage for other libraries, as they are likely to need similar restrictions.
Additionally, unlike tests focused exclusively on verifying the correctness of
an algorithm implementation itself, tests generated by acvp-rust also protect
against typical implementation issues by ensuring the library gracefully
handles unusual or invalid input without causing security or stability issues.
### 3.3 Flexibility
The tools from acvp-rust can be used to implement hybrid fuzzing for any
library in a language compiled by LLVM tools. At the moment, that includes,
most notably, C, C++, and Rust. In Section 4, we describe how it was used to
produce test coverage and find bugs in NSS, but the same approach can be used
for any library. Tests created by the system are saved in ACVP standard format
and can be easily examined by a human, modified, and used in any system
implementing the ACVP specifications.
### 3.4 Open Science & Reproducible Research
To support open science and reproducible research, and provide other
researchers with the opportunity to use, test, and hopefully extend our tool,
we have made the source code publicly available online (acvp-rust’s source code repository: https://gitlab.com/nisec/acvp-rust) under the MPL (Mozilla Public License) 2.0.
## 4 Detecting Bugs in NSS Through acvp-rust
In this section, we describe a series of bugs that we managed to uncover,
while improving the fuzzing coverage with acvp-rust. The material presented in
this section can be also used as guidance on how to use acvp-rust to discover
bugs in a cryptographic library.
### 4.1 Mozilla NSS
NSS is an open-source library, maintained by Mozilla, Red Hat and others,
providing support for a variety of security standards. The standards include
network security protocols (such as TLSv1.3), as well as many cryptography
standards (like PKCS#3 and PKCS#11). The library implements the most common
cryptographic algorithms used in both public-key and private-key cryptography.
The NSS library is used by a huge variety of products, including the
following: Mozilla products (including Firefox and Thunderbird), open-source
client applications (such as LibreOffice), Red Hat and Oracle server products,
and others. Security researchers have repeatedly tested and targeted the
library, which is covered under several bug bounty programs.
Since the library implements security-critical functionality, the code has
also been extensively tested. In what follows, we will be exclusively referencing
testing applied to the cryptographic primitives and not the network protocols,
parsers or other elements of the library.
Any modification to the library has to pass Continuous Integration tests. These tests for cryptographic primitives can be divided into two big groups. The first group of tests uses an internal application called bltest. The application allows developers to quickly check that modifications to the code do not break the cryptography. For each primitive implemented in NSS, bltest provides several test vectors, supplied by NIST:
1. SHA2: 8 tests;
2. AES-GCM: 18 tests. The tests come from the original paper describing the standard [12];
3. RSA/PSS/OAEP: 1/18/22 tests. The latter has SHA1 and SHA2 variants;
4. ECDSA: 21 tests.
The files in bltest contain test vectors for ECDSA using the NIST P-256 curve
(test vectors from 0 to 6 included), using the NIST P-384 curve (test vectors
from 7 to 13 included), and using the NIST P-521 curve (test vectors from 14
to 20 included).
As the number of test vectors in bltest is limited, a second group of
additional tests is performed each time the code in NSS changes, implemented using the Google gtest [8] facility. These tests (together with the Wycheproof [9] tests run as part of gtests) allow the developers to gain deeper
confidence in the code. Wycheproof tests include, among others, AES-GCM, ECDSA
P-256, P-384, P-521, and RSA, which are also implemented in the current acvp-
rust NSS runner.
As more cryptographic functions are implemented using formal verification, the
library relies less on testing. However, formally verified code is still
covered by constant-time tests and fuzzed corpora.
### 4.2 Improving NSS Testing Coverage with acvp-rust
As part of a project to improve NSS testing infrastructure, we have developed
an NSS runner for acvp-rust and some extensions to the ACVP standard to cover
more code. Specifically, we added a private_key structure to RSA and ECDSA
test cases to allow the test case to specify the key when generating the
signature, and implemented a bn (big number) subspecification that tests big numbers directly, avoiding the lack of deep coverage that results from testing only the higher-level API. The NSS runner supports most of the published sha, symmetric, rsa, and ecdsa ACVP subspecifications.
### 4.3 RSA Modulus Bug
While working on RSA coverage with acvp-rust’s NSS runner, we discovered the following issue. We describe the issue and the fix here to illustrate the methodology of discovering bugs using acvp-rust.
NSS functions implementing RSA operations call a pair of similar functions, rsa_modulusLen and rsa_modulusBits, to strip leading zeroes from the modulus bytes (see Figure 5).
⬇
static unsigned int
rsa_modulusLen(SECItem *modulus)
{
unsigned char byteZero = modulus->data[0];
unsigned int modLen = modulus->len - !byteZero;
return modLen;
}
⬇
static unsigned int
rsa_modulusBits(SECItem *modulus)
{
unsigned char byteZero = modulus->data[0];
unsigned int numBits = (modulus->len - 1) * 8;
if (byteZero == 0) {
numBits -= 8;
byteZero = modulus->data[1];
}
while (byteZero > 0) {
numBits++;
byteZero >>= 1;
}
return numBits;
}
Figure 5: NSS functions determining the RSA modulus lengths, from rsapkcs.c
As demonstrated in Figure 5, both functions make assumptions about the length of the modulus and perform indexed array accesses before checking the array size. This may cause accesses to unrelated memory, and decisions based on such reads may lead to security issues. For example, an attacker can arrange for the adjacent memory to contain data that leads to the signature being processed being falsely considered valid. The bug is reproducible using the public RSA API of NSS; Figure 6 demonstrates how it can be triggered.
⬇
SECITEM_MakeItem(NULL, &key.publicExponent, "", 0);
SECITEM_MakeItem(NULL, &key.modulus, "", 0);
RSA_CheckSignPSS(&key, HASH_AlgSHA256,
HASH_AlgSHA256, 0, NULL, 0, NULL, 0);
Figure 6: Example code fragment triggering the memory issue in RSA modulus
length check
The bug is not exploitable via existing software using NSS, because an
unrelated check for insecure key sizes in TLS code discards the problematic
RSA keys before operations are performed on them. However, a valid ACVP test case using our extensions (Figure 7) causes improper memory access, which poses a risk for third-party software using the NSS RSA interface directly.
⬇
{
"algorithm": "RSA",
"mode": "sigGen",
"revision": "FIPS186-5",
"testGroups": [{
"hashAlg": "SHA2_384",
"maskFunction": "mgf1",
"modulo": 4096,
"saltLen": 31,
"sigType": "pss",
"testType": "GDT",
"tests": [{
"message": "",
"privateKey": {
"coefficient" : "00",
"exponent1" : "00",
"exponent2" : "00",
"modulus" : "00",
"prime1" : "00",
"prime2" : "00",
"privateExponent" : "00",
"publicExponent" : "00"
},
"tcId" : 0,
}],
}]
}
Figure 7: ACVP test case triggering the RSA modulus check bug in NSS
Using Mozilla’s official bug tracker, we submitted a fix for the bug that adds checks ensuring the array index cannot be out of bounds. The fix has been accepted by the maintainers and included in the next version of NSS. This bug was not caught earlier because of a lack of focus on abnormal inputs, despite the NSS test suite including RSA test vectors. This highlights both the need to include diverse test cases within the valid input limits in the test vectors, and the effectiveness and usability of acvp-rust in improving test coverage and identifying new vulnerabilities.
### 4.4 Other Bugs
Several other non-security-related issues were discovered during NSS testing. One example is the parsing of negative big numbers, which was non-functional due to an apparent bug. Such issues, while not leading to vulnerabilities, or even inadvertently shielding from them, are still dangerous because they obscure other bugs and interfere with code analysis. Even if dealt with or worked around, other issues may arise. Table 1 lists all the bugs we discovered while using acvp-rust. “Issue” is a short description of the issue, “Security” indicates whether the issue was deemed security-related, “Fix submitted” means we submitted a patch to Mozilla’s official bug tracker, and “Fix accepted” means the patch was accepted by NSS maintainers and included in the next NSS version.
Table 1: List of issues discovered in NSS
Issue | Security | Fix Submitted | Fix Accepted
---|---|---|---
Segmentation fault or buffer overflow when calling RSA_CheckSignPSS with special input. | ✓ | ✓ | ✓
Infinite loop in RSA_PopulatePrivateKey. | ✓ | ✓ | ✓
Fails to build with clang 15 due to set but not used variable. | ✗ | ✗ | ✗
Fails to build with clang 15 and optimized build due to set but used only in an assert variable. | ✗ | ✗ | ✗
Assertion failed with certain mp_exptmod inputs. | ✓ | ✓ | ✓
Negative sign on mp_int is ignored when read from raw form. | ✗ | ✓ | ✓
RSA overflow check fails with really small modulus. | ✓ | ✓ | ✓
### 4.5 Disclosure
All bugs discovered were responsibly disclosed to NSS maintainers and have since
been fixed in the latest development version of the library. This serves as
proof that acvp-rust has the potential to significantly enhance the security
of existing cryptographic libraries by improving the process of identifying
and addressing previously undiscovered bugs.
## 5 Analysis of Efficiency at Improving Coverage
In this section, we elaborate on the efficiency of acvp-rust at improving code
coverage. Usually, testing coverage is measured as the percentage of covered code. The quantity of code is measured in lines, functions, branches, or other important parts, depending on the testing level [17]. Ultimately, the most important measure of testing is how many issues are prevented or discovered. The bugs we found in already-tested code indicate that hybrid fuzzing reached new code that required coverage. To get an idea of how much coverage acvp-rust-generated tests provide, we measured the coverage of the corpus generated by libFuzzer running on NSS code. We used an acvp-rust RSA mutator developed through hybrid fuzzing, run for 1 hour, with a maximum input size of 10,000 bytes.
i7-12700 processor at 2100 MHz, using a single thread. We used the current
development version of NSS as of Fri Sep 8 21:48:00 2023 +0000. To measure the
efficiency, we consider coverage of the RSA code, its improvement over
traditional coverage-guided fuzzing and over the existing NSS test suite.
### 5.1 Scope of Generated Coverage
Some areas of RSA code are excluded from coverage due to limitations of either
NSS or the ACVP standard. Key generation and related code is excluded since
NSS does not provide an API for generating predictable keys. Additionally,
“block” functions are excluded since their usage is mostly internal. Neither
variant covers paths that involve running out of memory and other unexpected
outside factors. The ACVP subspecification with our custom extensions covers:
1. Signature Generation and Verification: PSS, PKCS #1 1.5, and primitive modes, with multiple SHA variants as the digest function;
2. Encryption and Decryption: OAEP and primitive modes;
3. Key Population: As part of the above, missing private key components are generated from present ones.
### 5.2 Analysis of Improvement over Pure Coverage-Guided Fuzzing
Coverage-guided fuzzing, such as that employed by libFuzzer, is good at
automatically covering most of the code, but it fails to satisfy particular
criteria commonly present in public key cryptography implementations, thus
omitting numerous potentially vulnerable code areas.
The most important RSA code is located in two source files, rsa.c and
rsapkcs.c. The following list describes the remaining pieces of code not covered by either mode of fuzzing, and the coverage differences between standard coverage-guided fuzzing with libFuzzer and hybrid fuzzing enhanced by acvp-rust. The full coverage reports, before and after using acvp-rust, are available in the source code repository.
1. rsapkcs.c:254: The check that proper padding is present leads to rsaFormatBlock never being executed. Data size conditionally restricted.
2. DecryptOAEP: Coverage is missing from plain fuzzing because key checks fail most of the time for fuzzing-generated keys. Restrictions were added to make sure key components pass basic checks. This also causes eme_oaep_decode not to be covered in the plain variant.
3. rsapkcs.c:1258 emsa_pss_encode: The check on the modulus length fails due to complicated relations between multiple lengths. Interlinked restrictions were added on the salt and modulus lengths to pass the check.
4. emsa_pss_verify: Not covered in the plain variant because RSA_PublicKeyOp never succeeds in this context, due to the unmet conditions listed above.
5. rsapkcs.c:1669 RSA_CheckSignRecover: The hybrid version can pass the signature verification earlier, but further checks on decoded data fail. It is not feasible to improve coverage further.
The end result is that, with minimal human intervention, acvp-rust helps the fuzzer produce tests covering critical areas inaccessible to CGF.
## 6 ACVP Test Vector Format
Designing, implementing, and testing acvp-rust involved implementing both the parsing and the handling of the ACVP test vector format. During this process, we identified room for improvement. More precisely, we became aware of the need to make the format easier and safer to implement in modern languages, and to improve the efficiency of test transmission and storage. In this section, we provide some suggestions for achieving these improvements. We propose that the ACVP test format include more well-defined nested structures, making it more flexible and easier to parse. We also suggest making tests simpler to write and combine by allowing a user-controlled level of data sharing between groups of tests.
### 6.1 Structures Usage
In modern libraries for parsing serialization formats, the parsing code is often generated from a declarative structure definition, as in Serde. This approach produces safe code with automatic error handling. In ACVP subspecifications, structures like encryption keys are included in the parent structure as a set of optional fields. All or none of these fields should be present, but such a check has to be written manually. Additionally, these combinations of fields are often repeated. Moving them to a separate structure could improve the readability and maintainability of the specification as well as of its implementations, as can be seen in Figure 8 vs Figure 9.
⬇
{
"d" : "02",,
"message" : "ffffff21ff",
"q" : "00ffffffffffff21ff",
"dmp1" : "00",
"tcId" : 0,
}
Figure 8: Example of an RSA ACVP test case with private key flattened into
main structure
⬇
{
"message" : "ffffff21ff",
"privateKey" : {
"coefficient" : "00",
"prime2" : "00ffffffffffff21ff",
"privateExponent" : "02",
},
"tcId" : 0,
}
Figure 9: Example of an RSA ACVP test case with private key separated into a
structure
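For illustration, here is a minimal Serde sketch of the structured variant in Figure 9. The field names follow the figure, but the type definitions themselves are our own assumption rather than the actual acvp-rust or ACVP schema; the point is that the “all or none” presence check is enforced by the derived parser instead of hand-written code.
⬇
use serde::Deserialize;

// Sketch: if "privateKey" is present in the JSON, Serde requires all of
// its mandatory fields; if it is absent, the Option is simply None.
#[derive(Debug, Deserialize)]
pub struct PrivateKey {
    pub coefficient: String,
    pub prime2: String,
    #[serde(rename = "privateExponent")]
    pub private_exponent: String,
}

#[derive(Debug, Deserialize)]
pub struct TestCase {
    pub message: String,
    #[serde(rename = "privateKey")]
    pub private_key: Option<PrivateKey>,
    #[serde(rename = "tcId")]
    pub tc_id: u64,
}
Parsing then reduces to a single call such as serde_json::from_str::<TestCase>(json), with incomplete key structures rejected automatically.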
### 6.2 Level-Specific Fields
ACVP vector sets include three levels: Test Vector, Test Group, and Test Case. Each level can contain multiple instances of the levels below it. Some levels may include fields that affect lower levels. This was clearly intended as a simplification measure, but when users need to implement multiple test cases with different values for attributes only available at a higher level, the complexity of the vector set actually grows, as multiple vector sets or test vectors need to be introduced. To remedy that, we propose rendering said fields universal, by providing the option of adding them at both test case and test group level and by making test groups recursive, so that test cases may be grouped in a flexible manner, as can be seen in Figure 10 vs Figure 11.
⬇
"testType" : "GDT",
"tests" : [
{
"d" : "fefff",
"message" : "",
"n" : "fd12",
"p" : "136",
"q" : "1",
"tcId" : 0
},
{
"d" : "fefff",
"message" : "",
"n" : "fd12",
"p" : "ff",
"q" : "1254",
"tcId" : 3
},
{
"d" : "fefff",
"message" : "",
"n" : "fd12",
"p" : "ff",
"q" : "36fa",
"tcId" : 6
}
]
Figure 10: Example of an RSA ACVP test group with fields repeated for every
test case
⬇
"testType" : "GDT",
"testFields": {
"d" : "fefff",
"message" : "",
},
"tests" : [
{
"p" : "136",
"q" : "1",
"tcId" : 0
},
{
"p" : "ff",
"q" : "1254",
"tcId" : 3
},
{
"p" : "ff",
"q" : "36fa",
"tcId" : 6
}
]
Figure 11: Example of an RSA ACVP test group with shared fields in one place
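On the consuming side, such shared fields are cheap to handle. The sketch below pushes a group’s testFields down into each test case using serde_json; the precedence rule (fields set on a test case override group-level ones) is our assumption, since the proposal does not fix one.
⬇
use serde_json::{Map, Value};

// Sketch: merge a group's shared "testFields" into every test case,
// letting fields already set on a test case take precedence.
fn flatten_test_fields(group: &mut Map<String, Value>) {
    let shared = match group.remove("testFields") {
        Some(Value::Object(m)) => m,
        _ => return, // no shared fields on this group
    };
    if let Some(Value::Array(tests)) = group.get_mut("tests") {
        for test in tests.iter_mut() {
            if let Value::Object(t) = test {
                for (k, v) in &shared {
                    t.entry(k.clone()).or_insert_with(|| v.clone());
                }
            }
        }
    }
}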
## 7 Conclusion
In this paper, we presented acvp-rust – a software framework for analyzing cryptographic libraries whose main aim is to discover possible bugs in the code. Through a series of experiments, we demonstrated that acvp-rust produces efficient, well-covering tests that can be shared between cryptographic libraries. Furthermore, it provides a base that facilitates structuring an adaptor for a new library. In addition, it creates sets of tests that not only increase confidence in the correctness of the implemented algorithms, but also provide good coverage, reusing knowledge gained from research conducted on other libraries.
Additionally, we used acvp-rust to analyze Mozilla’s NSS cryptographic library. This allowed us to uncover previously unknown bugs in this widely-used library. The identified bugs have been disclosed, and the fixes accepted by the maintainers. This serves as proof that acvp-rust facilitates the detection of unknown bugs and has the potential to improve the security of existing software with a
main focus on cryptographic libraries. Furthermore, we showed that acvp-rust
increases code coverage compared to other tools. This leads to significant
improvements in fuzzing quality and helps to detect issues in otherwise hard-
to-reach code areas. Finally, in order to support open science and
reproducible research, we have made acvp-rust publicly available.
Experience has shown that it is important to include diverse test cases in test suites, to ensure both that corner cases are not missed and that code hidden behind complex conditions is covered. This is what the acvp-rust methodology allows a researcher to do.
### 7.1 Future Research
Future possibilities for improving the work include the development of further subspecifications, with the goal of providing more input flexibility to increase coverage, further automation of the process, and automatic discovery of side-channel vulnerabilities by integrating related tools.
## References
* [1] ACVP secure hash algorithm (SHA) JSON specification, https://pages.nist.gov/ACVP/draft-celi-acvp-sha.html, accessed: 2024-06-05
* [2] AddressSanitizer: Clang 17.0.0git documentation, https://clang.llvm.org/docs/AddressSanitizer.html, accessed: 2024-06-05
* [3] Automated Cryptographic Validation Protocol (ACVP) JSON Specification, https://pages.nist.gov/ACVP/draft-fussell-acvp-spec.html#name-introduction, accessed: 2024-06-05
* [4] Automated Cryptographic Validation Protocol Documentation, https://pages.nist.gov/ACVP, accessed: 2024-06-05
* [5] Automated Cryptographic Validation Testing | CSRC, https://csrc.nist.gov/Projects/Automated-Cryptographic-Validation-Testing, accessed: 2024-06-05
* [6] cisco/libacvp: The libacvp library is a client-side implementation of the draft ACVP protocol (github.com/usnistgov/ACVP)., https://github.com/cisco/libacvp, accessed: 2024-06-05
* [7] GitHub - rust-fuzz/cargo-fuzz: Command line helpers for fuzzing, https://github.com/rust-fuzz/cargo-fuzz, accessed: 2024-06-05
* [8] google/googletest: GoogleTest - Google Testing and Mocking Framework, https://github.com/google/googletest, accessed: 2024-06-05
* [9] google/wycheproof: Project Wycheproof tests crypto libraries against known attacks., https://github.com/google/wycheproof, accessed: 2024-06-05
* [10] libFuzzer – a library for coverage-guided fuzz testing. - LLVM 17.0.0git documentation, https://llvm.org/docs/LibFuzzer.html, accessed: 2024-06-05
* [11] smuellerDD/acvpparser: ACVP Parser for invocation of cryptographic implementations using the ACVP JSON test vectors, https://github.com/smuellerDD/acvpparser, accessed: 2024-06-05
* [12] The Galois/Counter Mode of Operation (GCM), https://csrc.nist.rip/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-spec.pdf, accessed: 2024-06-05
* [13] usnistgov/ACVP-Server: A repository tracking releases of NIST ACVP server. See www.github.com/usnistgov/ACVP for the protocol, https://github.com/usnistgov/ACVP-Server, accessed: 2024-06-05
* [14] Bars, N., Schloegel, M., Scharnowski, T., Schiller, N., Holz, T.: Fuzztruction: Using fault injection-based fuzzing to leverage implicit domain knowledge. In: Calandrino, J.A., Troncoso, C. (eds.) 32nd USENIX Security Symposium, USENIX Security 2023, Anaheim, CA, USA, August 9-11, 2023. pp. 1847–1864. USENIX Association (2023), https://www.usenix.org/conference/usenixsecurity23/presentation/bars
* [15] Benitez, S.: Short paper: Rusty types for solid safety. In: Murray, T.C., Stefan, D. (eds.) Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security, PLAS@CCS 2016, Vienna, Austria, October 24, 2016. pp. 69–75. ACM (2016). https://doi.org/10.1145/2993600.2993604, https://doi.org/10.1145/2993600.2993604
* [16] Bray, T.: The javascript object notation (JSON) data interchange format. RFC 8259, 1–16 (2017). https://doi.org/10.17487/RFC8259, https://doi.org/10.17487/RFC8259
* [17] Derakhshanfar, P., Devroey, X., Zaidman, A.: Basic block coverage for search-based unit testing and crash reproduction. Empir. Softw. Eng. 27(7), 192 (2022). https://doi.org/10.1007/s10664-022-10155-0, https://doi.org/10.1007/s10664-022-10155-0
* [18] Ding, R., Kim, Y., Sang, F., Xu, W., Saileshwar, G., Kim, T.: Hardware support to improve fuzzing performance and precision. In: Kim, Y., Kim, J., Vigna, G., Shi, E. (eds.) CCS ’21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15-19, 2021, pp. 2214–2228. ACM (2021). https://doi.org/10.1145/3460120.3484573, https://doi.org/10.1145/3460120.3484573
* [19] Fioraldi, A., D’Elia, D.C., Coppa, E.: WEIZZ: automatic grey-box fuzzing for structured binary formats. In: Khurshid, S., Pasareanu, C.S. (eds.) ISSTA ’20: 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual Event, USA, July 18-22, 2020. pp. 1–13. ACM (2020). https://doi.org/10.1145/3395363.3397372, https://doi.org/10.1145/3395363.3397372
* [20] Fioraldi, A., Maier, D., Eißfeldt, H., Heuse, M.: Afl++: combining incremental steps of fuzzing research. In: Proceedings of the 14th USENIX Conference on Offensive Technologies. WOOT’20, USENIX Association, USA (2020)
* [21] Lattner, C., Adve, V.S.: LLVM: A compilation framework for lifelong program analysis & transformation. In: 2nd IEEE / ACM International Symposium on Code Generation and Optimization (CGO 2004), 20-24 March 2004, San Jose, CA, USA. pp. 75–88. IEEE Computer Society (2004). https://doi.org/10.1109/CGO.2004.1281665, https://doi.org/10.1109/CGO.2004.1281665
* [22] Metzman, J., Szekeres, L., Simon, L., Sprabery, R., Arya, A.: Fuzzbench: an open fuzzer benchmarking platform and service. In: Spinellis, D., Gousios, G., Chechik, M., Penta, M.D. (eds.) ESEC/FSE ’21: 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Athens, Greece, August 23-28, 2021. pp. 1393–1403. ACM (2021). https://doi.org/10.1145/3468264.3473932, https://doi.org/10.1145/3468264.3473932
* [23] Poeplau, S., Francillon, A.: Symbolic execution with symcc: Don’t interpret, compile! In: Capkun, S., Roesner, F. (eds.) 29th USENIX Security Symposium, USENIX Security 2020, August 12-14, 2020. pp. 181–198. USENIX Association (2020), https://www.usenix.org/conference/usenixsecurity20/presentation/poeplau
* [24] Wang, D., Li, Y., Zhang, Z., Chen, K.: Carpetfuzz: Automatic program option constraint extraction from documentation for fuzzing. In: Calandrino, J.A., Troncoso, C. (eds.) 32nd USENIX Security Symposium, USENIX Security 2023, Anaheim, CA, USA, August 9-11, 2023. pp. 1919–1936. USENIX Association (2023), https://www.usenix.org/conference/usenixsecurity23/presentation/wang-dawei
# A Bayesian Agent-Based Framework for Argument Exchange Across Networks
Leon Assaad (<EMAIL_ADDRESS>; Munich Center for Mathematical Philosophy, LMU, Geschwister-Scholl-Platz 1, 80539 Munich, Bavaria, Germany), Rafael Fuchs (Graduate School of Systemic Neuroscience, LMU, Großhadernerstraße 2, 82152 Planegg-Martinsried, Bavaria, Germany), Ammar Jalalimanesh, Kirsty Phillips (Birkbeck College, University of London, Malet Street, London WC1E 7HX, United Kingdom), Leon Schöppl (Munich Center for Mathematical Philosophy, LMU, Munich, Germany), and Ulrike Hahn (Birkbeck College, University of London, and Munich Center for Mathematical Philosophy, LMU, Munich, Germany)
(14 Nov, 2023)
###### Abstract.
In this paper, we introduce a new framework for modelling the exchange of
multiple arguments across agents in a social network. To date, most modelling
work concerned with opinion dynamics, testimony, or communication across
social networks has involved only the simulated exchange of a single opinion
or single claim. By contrast, real-world debate involves the provision of
numerous individual arguments relevant to such an opinion. This may include
arguments both for and against, and arguments varying in strength. This
prompts the need for appropriate aggregation rules for combining diverse
evidence as well as rules for communication. Here, we draw on the Bayesian
framework to create an agent-based modelling environment that allows the study
of belief dynamics across complex domains characterised by Bayesian Networks.
Initial case studies illustrate the scope of the framework.
Keywords: Agent-Based Model; Argumentation; Bayesian Argumentation; Argument Exchange
## 1\. Introduction
Human societies are based on information exchange, deliberation, and
negotiation. This means human societies rely fundamentally on argumentation.
As a result, argumentation, broadly construed, is a topic of active research
interest across a wide range of disciplines.
Some of these, such as research on argumentation (Walton, 2009) and research on persuasion (Maio et al., 2018), have tended to focus on detailed characteristics of individual arguments. Others, such as research in computational social science (Lazer et al., 2009) studying large-scale debates across online social platforms like Twitter or Facebook, have focussed in detail on the spread of arguments (Hofman et al., 2021), with the arguments themselves subjected to far more coarse-grained analysis in terms of keywords or sentiments (e.g., (Berger and Milkman, 2012)). Research focussed on spread also includes agent-based modelling of belief or opinion dynamics. Here, ‘arguments’ have been highly stylised, represented only by numbers or elements of a vector (Mäs and Flache, 2013), or they have been captured only implicitly by their effects, as in approaches that model opinion dynamics as a kind of ‘contagion’.
This disconnect between research traditions focussed on individual arguments,
and research traditions focussed on dynamics of spread has left a fundamental
gap in the current understanding of how the exchange of arguments figures in
human society. Moreover, this gap encompasses pressing theoretical and
practical questions, for example concerning the impact of large online
platforms on democratic debate, and with it, the health of democratic
societies.
Bridging that gap will, arguably, require bringing together tools and theories
of research traditions that have focussed on individual arguments with those
concerned with characteristics of spread. For example, both argumentation and
persuasion research have historically focussed on dyads: communicating pairs
exchanging reasons for claims in an ongoing exchange that involves competing
and supporting arguments that combine in complex, often hierarchically nested
ways. Researchers have sought to understand both argument ‘quality’ and
persuasive ‘success’ in that dyadic frame of reference, developing both
procedural rules for engagement (Van Eemeren and Houtlosser, 2003; Van Eemeren
et al., 2013, 2015) and graphing techniques or ‘maps’ to aid argument
evaluation and production (Gordon et al., 2007). This scales only partly to
contexts with multiple, and possibly large numbers of, communicating agents
(see also (Bonevac, 2003; Lewiński and Aakhus, 2014)) and the historical focus
on dyads has left fundamental questions about argumentation and persuasion
unaddressed. Conversely, the insights that might be gained from analysing the
dynamics of the spread of arguments across large corpora of public debate are
restricted by the level of content analysis applied to the arguments
themselves.
The research presented in this paper aims to help bridge this gap.
Specifically, we introduce a new framework, NormAN (short for Normative Argument Exchange across Networks), for agent-based modelling of argument
exchange across social networks. This framework, we argue, combines important
features of argumentation research on argument evaluation with agent-based
modelling. Specifically, its goal is to incorporate multiple fundamental
features of real-world communication: In discussing or debating a claim,
individuals exchange arguments (individual reasons) for believing that claim
to be true or false. Some of these may be arguments for, others arguments
against, and some of these arguments may be better than others. And while some
of this may be a matter of subjective evaluation, what arguments are available
and how strong they are is also constrained by the topic at hand. Finally,
communicative exchanges might take place in anything from small, tightly knit
groups exchanging in-depth information, to large networks involving only
fleeting exchange. This means understanding real-world arguments also requires
understanding the impact of who agents communicate with and what they choose
to exchange.
No single model, let alone a single investigation, will be able to give equal
focus to each of these fundamental features of real-world argumentation. As a
result, NormAN is designed as a framework. In this paper, we set out this
framework and introduce a basic model, NormAN version 1.0, that incorporates
each of these core features. Specifically, the paper proceeds in three main
parts. We first (2) briefly describe research across both sides of the ‘gap’
in order to situate NormAN in the context of current research and motivate our
specific design choices. We then introduce the framework and NormAN v. 1.0 in
more detail (3). Finally, we describe two case studies to illustrate the
features and benefits of the framework. In particular, we seek to show how
this framework, though still highly stylised in nature, affords both deeper
understanding of longstanding questions, and opens up new avenues for
research.
## 2\. Motivation and Background
The goal of NormAN is to bring together the detail of research on argument
evaluation and the simulation of multi-agent contexts in order to bridge the
current gap and enrich both research traditions. We next provide background on
the most important strands within these respective traditions.
### 2.1. Argument Quality and Dialectics
Traditional research on argumentation has not used agent-based simulations.
Rather, this highly interdisciplinary field has drawn on observation, formal
analysis, and behavioural experiments.
#### 2.1.1. The Breadth of Argumentation Research
Philosophers have focussed on normative theories, that is, theories of how we
_should_ behave. The traditional standard has been formal logic, but more
recently, pragma-dialectical theories have focussed on the norms and
conventions governing argumentative process (e.g., (Van Eemeren et al., 2013,
2015; Walton, 1998; Walton and Godden, 2007)). Within psychology, ‘persuasion’
has been a central topic of social psychological research (e.g., (Eagly and
Chaiken, 1993)). This vast literature has identified many moderating variables
(e.g., speaker likeability, engagement, mode of presentation, fit with prior
beliefs) that affect the degree to which persuasive communication will be
effective. Developmental and education research has focussed on the way
children’s argumentation skills develop and examined ways in which critical
thinking and argument skills might be improved (e.g., (Felton and Kuhn, 2001;
Kuhn and Udell, 2003; Von Aufschnaiter et al., 2008)). Logicians and computer
scientists have sought to devise argumentation frameworks for dealing with
dialectical information, seeking to capture the structural relationships
between multiple theses, rebuttals, and supporting arguments for use in
computational argumentation systems (Dung, 1995; Prakken and Vreeswijk, 2001;
Rahwan and Simari, 2009).
#### 2.1.2. The Central Role of Normative Concerns
The sheer breadth of disciplinary perspectives, research questions, and
methods makes for a bewildering array of literatures and findings on
argumentation. Furthermore, many of these literatures have historically been
largely or even wholly disconnected from one another. There is, however, a
shared focal concern across most, if not all, argument research. This is the
issue of argument quality or ‘what makes a good argument?’, and, with that,
the question of how good arguments can be distinguished from bad ones. This
question is a normative, evaluative, question about what kinds of arguments
should convince us, and which are the appropriate normative standards against
which argument quality should be judged. Across fields and research interests,
this question features both as an explicit topic of study and as an implicit
concern.
It is of explicit interest within philosophy in research on human rationality
and the epistemological question of how we can arrive at secure knowledge of
the world (Rescher, 1977; Dawid et al., 2015; Hartmann, 2021; Eva and Hartmann, 2018; Godden and Zenker, 2018). In psychology, cognitive psychologists study the quality of people’s argumentation (e.g., (Kuhn, 1991))
as part of a long tradition of research on reasoning, judgment and decision-
making (e.g., (Oaksford and Chater, 2009; Stanovich and West, 2000; Kahneman,
2011; Hahn and Oaksford, 2012)). And educational psychologists teaching or
improving argument skills and critical thinking (e.g., (Van Eemeren and
Houtlosser, 2003)) must clarify their intended target.
In other research, the question of argument quality is raised _implicitly_ by
research goals and methodological constraints. For example, argument quality
matters for logicians and computer scientists interested in argumentation as a
tool for artificial intelligence systems (e.g., (Jackson, 1986; Neapolitan,
1990)), because, to work well, such systems must adequately weigh and
aggregate information. So how can argument quality be measured? What normative
standards might be devised?
#### 2.1.3. Standards for Argument Quality
A wide range of tools, from different disciplines, has historically been
applied to the question of what makes a ‘good’ argument:
1. classical logic
2. attempts to elucidate argument by mapping out structural relations between arguments:
   * either informally by tagging them as ‘claims’, ‘warrants’, or ‘rebuttals’ (e.g., (Toulmin, 2003))
   * or in formal, computational frameworks (e.g., so-called ‘argumentation frameworks’, (Dung, 1995))
3. pragma-dialectical theories spelling out putative norms underlying argumentative discourse, such as a ‘right to reply’ or ‘burdens of proof’ (Van Eemeren et al., 2004)
While all of these are useful and aid interesting research questions in
different fields, they still miss much about argument quality.
Classical logic says nothing about most everyday informal arguments, other than that they are not logically valid (Toulmin, 2003; Hamblin, 1970), and, hence, it is too restrictive. (At the same time, it is too permissive in that it renders arguments strong that actually seem poor: for example, ‘A, therefore, B or not B’, where A is wholly irrelevant rather than providing a meaningful reason.) Likewise, the quality of argument content cannot generally
be reduced to procedural rules or to systems that map out support, attack and
defeat relations. To illustrate: “the book is in the library…no it’s not,
because the moon is made of cheese” involves an (intended) counter-argument,
but is patently absurd (Hahn, 2020). Simply noting that an argument is
_offered_ as support or attack is itself a purely structural, syntactic
evaluation. A content-based measure of argument strength is still needed in
order to know whether intended ‘support’ or ‘defeat’ is successful. Likewise,
pragma-dialectic notions such as ‘burden of proof’ depend on the specific
content of an argument in order to determine whether or not a burden of proof
has actually been met (Hahn and Oaksford, 2007).
This means that normative standards in addition to classical logic, procedural
rules or merely syntactic renditions of the structural relations between
arguments are necessary in order to capture argument content adequately. This
has recommended a Bayesian approach to argumentation.
#### 2.1.4. Bayesian Argumentation
The probability calculus is intensional (Pearl, 1988): the probabilities that
attach to propositions are determined by their specific content, not (just)
their logical form. The resulting ability of the probability calculus (or,
where decisions and utilities are involved, Bayesian decision theory) to
meaningfully capture normative questions about argument content is
demonstrated by its application to the catalogue of so-called fallacies of
argumentation. The fallacies are argument schemes such as ‘arguments from
ignorance’, ‘ad hominem arguments’ etc. that have long posed a challenge for
explanations of why exactly they are poor arguments (see (Woods, [n. d.];
Hamblin, 1970)). One central difficulty encountered here was that not all
instances of these schemes seem equally poor or fallacious, and a Bayesian
treatment has elucidated those differences (Hahn, 2020).
The Bayesian framework has also been applied to a broader set of schemes for
everyday argument from the informal logic literature that, unlike the
fallacies, are presumptively (but defeasibly) ‘good’ arguments. Specifically,
they provide reasonable, albeit defeasible, inferences for uncertain,
ampliative reasoning (which sets them apart from logical schemes such as the
classic set of syllogism or conditional reasoning schemes such as modus
ponens, modus tollens etc.). The literature on informal argument previously
catalogued 60+ such schemes (Walton et al., 2008) that identify recurring
structures that occur with varying content (and hence varying strength) in
everyday discourse. As (Hahn and Hornikx, 2016) seeks to show, the Bayesian
framework can provide a normative basis for these schemes. It can thus further
the long-standing goals of research on argument schemes, namely a
computationally explicit treatment with guidance for lay reasoners on when
particular instances of these schemes are weak or strong.
In the Bayesian framework, argument strength can be captured by considering the extent to which an argument or piece of evidence rationally changes one’s beliefs. The posterior, $P(C|A)$, is affected by the likelihood (i.e., the sensitivity of the evidential test, $P(A|C)$) and by the false positive rate (i.e., $P(A|\neg C)$), as captured in the likelihood ratio $P(A|C)/P(A|\neg C)$.
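To make the arithmetic concrete, the following is a minimal sketch (ours, not code from any of the cited works) of how prior, sensitivity and false positive rate combine; the function name and all numbers are illustrative.

```python
# Minimal illustrative sketch: Bayesian argument strength.

def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(C|A) via Bayes' rule, given P(C), P(A|C) and P(A|not C)."""
    p_a = sensitivity * prior + false_positive_rate * (1 - prior)  # total probability of A
    return sensitivity * prior / p_a

# A strong argument: likelihood ratio P(A|C)/P(A|not C) = 0.9/0.1 = 9
print(posterior(prior=0.5, sensitivity=0.9, false_positive_rate=0.1))   # 0.9
# A barely relevant argument: likelihood ratio close to 1
print(posterior(prior=0.5, sensitivity=0.55, false_positive_rate=0.5))  # ~0.524
```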
With the likelihood ratio, the Bayesian framework has a notion of
informational relevance. This helps with the fallacies, given that fallacies
are typically fallacies of relevance (Walton, 2004). It is also essential to
capturing argument quality in general, and elucidating the notion of relevance
in a formally satisfactory, non-question-begging, way has been a long-standing
challenge (see, (Sperber and Wilson, 1986; Hahn and Oaksford, 2006)). Finally,
the Bayesian framework has a well-developed normative foundation that links to
goals such as inaccuracy minimisation (on the link between ‘being Bayesian’
and inaccuracy minimisation see (Pettigrew, 2016), for discussion of normative
foundations for argumentation more generally, see (Corner and Hahn, 2013)).
Bayesian Argumentation has also been expanded to other features of argument
(e.g., such as ‘argument cogency’ or ‘degrees of justification’ (Zenker, 2012;
Godden and Zenker, 2018)). At the same time, work by Hartmann and colleagues
has extended the formal arsenal of Bayesian Argumentation in order to broaden
the scope of possible inferences (Eva and Hartmann, 2018; Eva et al., 2020)
and has provided detailed treatments of scientific inference schemes (e.g.,
(Dawid et al., 2015)) in a programme paralleling the treatment of everyday
schemes. Specifically, Hartmann and colleagues have shown how forms of new
‘evidence’ not amenable to Bayesian conditionalization may be captured through
the application of Kullback-Leibler divergence.
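As a hedged illustration of the general idea (our sketch, not the cited authors’ treatment): Jeffrey’s rule, a standard example of updating on uncertain evidence, is the Kullback-Leibler-minimising revision of the prior under a constraint on the new probability of the evidence. All numbers below are made up.

```python
# Illustrative sketch: Jeffrey's rule as a KL-minimising update.
# Among all distributions with P'(E) = q, the Jeffrey update minimises
# the KL divergence from the prior.

p_h = 0.3                                            # prior P(H)
p_e_given_h, p_e_given_nh = 0.8, 0.2                 # P(E|H), P(E|not H)
p_e = p_e_given_h * p_h + p_e_given_nh * (1 - p_h)   # prior P(E) = 0.38
p_h_given_e = p_e_given_h * p_h / p_e                # P(H|E)
p_h_given_ne = (1 - p_e_given_h) * p_h / (1 - p_e)   # P(H|not E)

q = 0.9   # uncertain 'evidence': the new degree of belief in E (not certainty)
p_h_new = q * p_h_given_e + (1 - q) * p_h_given_ne   # Jeffrey update, ~0.578
print(p_h_new)
```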
The body of work on Bayesian Argumentation arguably represents the state of
the art with respect to measuring argument quality, in that both a
quantitative measure and a well-developed normative basis is provided (see
also (Nussbaum, 2011)). It is for this reason that we adopt the Bayesian
framework for NormAN.
While a Bayesian perspective on argument quality has arguably been productive,
there are key features of argumentation –discussed next– that have been
neglected to date, not just by Bayesian argumentation, but by argumentation
research as a whole.
### 2.2. Beyond Dyads: Multi-Agent Models
As noted above, most work on argumentation has, at best, concerned itself with
dyads, that is, a proponent and an opponent engaged in direct exchange. Public
debate, however, has many actors choosing when to contribute and when not,
which arguments to repeat, which to ignore, which to address, and how. This
fundamental feature of real-world argument has remained largely outside the
view of argumentation research.
Even where argumentation research has concerned itself with large-scale
debates, it has either attempted to assimilate these into dialogue-oriented
models (Lewiński and Aakhus, 2014) or it has focussed exclusively on the
arguments themselves (e.g., in argument mapping approaches to large-scale debates such as Kialo; see https://www.kialo.com, and on such tools more generally e.g., (Benetos, 2023)). This obscures all sense of the
_dynamics_ of argument exchange in public debate and the many underlying
decisions by participants that give rise to those dynamics.
The dynamics of debate, however, have become a matter of research interest
with the advent of online social media. With online social media came the
ability to harvest and analyse large volumes of real-world debate. And the
rise of computational social science has seen the analysis of online data from
large platforms such as Twitter, Facebook or Reddit become a major topic of
research (Cioffi-Revilla, 2014; Lazer et al., 2009). At the same time,
putative negative impacts of large online platforms on misinformation, polarization, extremism, and a weakening of democratic norms (Lorenz-Spreen et al., 2020; Lewandowsky et al., 2020; Lorenz-Spreen et al., 2023) have fuelled interest in belief and opinion formation across social networks. This has led
to a wealth of modelling research to help understand how opinions spread
across networks.
There remains, however, a significant gap: the analysis of real-world data from platforms such as Twitter has largely focussed on limited features of such data, either on spread, by analysing retweets (Java et al., 2007; Suh et al., 2010; Ten Thij et al., 2014; Cha et al., 2010), or on content in very restricted ways such as sentiment analysis (Hutto and Gilbert, 2014), bags of words (Naveed et al., 2011; Brady et al., 2017; Storey and O’Leary, 2022) and/or topic modelling (Zhao et al., 2011; Corti et al., 2022) (but see also more recently e.g., (Visser et al., 2020)). This is a far cry
from the detailed analyses of content common within the research tradition
concerned with argument quality outlined in the previous section. And, as the
following sections will show, models of belief or opinion-dynamics are
arguably even more restrictive: At present, most ABMs do not involve the
communication of reasons for claims. In other words, they do not capture
argument at all.
#### 2.2.1. Models of Opinion Dynamics
The modelling of opinion dynamics has seen multiple frameworks. Two of these
import concepts from other disciplines: contagion and social physics models.
Contagion-based models, in effect, treat the spread of opinions or behaviours
as a kind of “infection” (López-Pintado, 2008; Barash, 2011; Centola, 2018).
This allows models from epidemiology to be applied. Contagion based models
have been used to examine everything from basic contagion dynamics (Barash,
2011; Izquierdo et al., 2018), effects of network structure (Jackson and
Rogers, 2007; López-Pintado, 2008), the influence of word of mouth reports of
events on elections (Moya et al., 2017), extremism (Youngblood, 2020), to echo
chambers and viral misinformation (Törnberg, 2018). Methods have ranged from
standard analytic models within epidemiology (see e.g., (Kiss et al., 2017)),
through statistical models, to agent-based modelling.
The social physics approach draws on models developed within physics to
capture opinion dynamics (Castellano et al., 2009). In particular, methods
(e.g., mean field approximation) and models from statistical mechanics, such
as diffusion models and the Ising model (Dorogovtsev et al., 2008), have been
used to model issues such as opinion polarization (Macy et al., 2003) or the
spread of misinformation (Budak et al., 2011).
Finally, one of the earliest and one of the most influential models of opinion
dynamics was first put forward by statistician M. DeGroot (DeGroot, 1974). The
DeGroot model was proposed initially to shed light on how groups might use
opinion pooling to reach a consensus judgement. It is based on repeated
(weighted) belief averaging until beliefs converge. Iterated averaging also
underlies the Lehrer-Wagner (Lehrer and Wagner, 1981) model in philosophy that
has been used extensively to develop notions of rational consensus, and the
work of Hegselmann and Krause (e.g.,(Hegselmann and Krause, 2002)), which we
discuss further in the next section.
With respect to motivating NormAN, we note two main observations about models
of belief- and opinion dynamics discussed so far: First, they use an
unanalysed aggregate –the opinion or belief in question, typically represented
as a boolean or continuous variable– without the provision of reasons; this
limits the research focus of such models to the population dynamics regarding
that single quantity. This renders this body of work impossible to connect
meaningfully to the research on argumentation described in section 2.1 above.
Second, there is no ‘ground truth’ at stake in these models (but for an
addition of ‘ground truth’ to the DeGroot model see Golub and Jackson (Golub
and Jackson, 2010)). Hence many questions about knowledge and accuracy, of
either individual agents or the collective as a whole (Hahn, 2022), are
necessarily outside the scope of these models.
#### 2.2.2. Agent-Based Models in Social Epistemology
Questions of how knowledge comes about and how it comes about specifically in
social contexts are, however, the central concern of social epistemology.
Considerable research within social epistemology has utilised the DeGroot
model, either in the form of the Lehrer-Wagner (1981) or the Hegselmann-Krause model (Hegselmann and Krause, 2002, 2015). Hegselmann-Krause added the idea of
weights reflecting differential ‘trust’ in other members of the collective
such that agents only listen to others who are sufficiently ‘close’ in their
estimates (giving rise to so-called convergence threshold models (Hegselmann
and Krause, 2015)). Work using these models has focussed on understanding when
networks do and do not converge (see for extensive analysis, (Krause, 2015)).
In order to connect better to the concerns of social epistemology, Hegselmann and Krause (Hegselmann et al., 2006) later also added to their model the idea
that a specific opinion value may be designated as ‘the truth’ (for other
extensions see (Douven and Riegler, 2010), including, outside of social
epistemology, toward greater psychological realism (Xu et al., 2023); for a
review, (Douven, 2019a)).
An influential further class of models in social epistemology is bandit models
(Zollman, 2007, 2010). These models use one- or multi-armed bandits (Slivkins
et al., 2019) to generate evidence about an underlying state of the world.
That evidence may be observed directly by individual agents or indirectly
through observation of other agents’ states based on aggregates of that
evidence, or received via communication. Used initially by economists to study
social learning across networks and its effect on economic outcomes (Bala and
Goyal, 1998, 2000), bandit-based models have been applied in social
epistemology to questions such as when (and when not!) communication is
beneficial to the progress of science (Zollman, 2010), the effects of
misinformation (O’Connor and Weatherall, 2018), and polarization (O’Connor and
Weatherall, 2018). Although they often involve Bayesian updating at the
individual agent level, bandit models have also been combined with DeGroot-
like averaging (Douven, 2019b). Conceptually, bandit models allow there to be
a model ground truth, and the evidence dispensed by the bandit provides at
least a very limited notion of ‘argument’.
A different model aimed at many of the same research questions is the model of
testimony first proposed by Olsson and colleagues (Olsson, 2011; Olsson and
Vallinder, 2013; Olsson, 2013; Angere and Olsson, 2017). The realisation that
much of what humans believe to know stems from the testimony of others (Coady,
1992), has fuelled research concerned with the conditions under which
testimony is reliable and a meaningful guide to truth. A significant
proportion of that work has drawn on probability theory to explicate those
conditions in formal models (Olsson, 2005; Bovens and Hartmann, 2003; Olsson
and Schubert, 2007) including agent-based simulations. The Olsson (2011) model
is such an agent-based model.
As it has inspired many of the features of the new framework presented in this
paper, we outline it in some detail here. In the model, there is a single
proposition (represented as a Boolean variable) at issue. Agents in the model
each occupy a particular position in a social network. At each time step,
there is a probability of acquiring a piece of evidence ‘from the world’, and
a probability of communication. Communication links are symmetrical, and
communicating agents will affirm that proposition C, the target hypothesis
under scrutiny in this artificial society, is true, whenever their degree of
belief in C exceeds a threshold of assertion (say, p = .8). When belief drops
below 1 minus the threshold of assertion, agents will assert not-C; on all
other occasions they will remain silent. This is designed to capture the fact
that real-world communication does not typically involve communication of
point values, but rather involves the assertion of claims (‘C is true’). The
agents in the model are Bayesian, using Bayes’ rule to revise both belief in
the target hypothesis and the reliability of their sources (including their
own inquiry into evidence from the world). (More precisely, they are naive Bayesian agents in that they make the simplifying assumption that evidence is independent (Ng and Jordan, 2002); for an examination of the consequences of this assumption see (Merdes et al., 2021; Hahn, 2023).)
In the Olsson model, agents have a belief (in the claim at issue) and there is
a ground truth. There is also a very simple type of ‘argument’ or evidence
which consists solely of assertion that the claim in question is true or
false.
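A minimal sketch of this assertion rule (our illustration; the names are ours, and the threshold mirrors the p = .8 example in the text):

```python
from typing import Optional

THRESHOLD = 0.8  # threshold of assertion

def assertion(belief: float) -> Optional[str]:
    """Assert C above the threshold, not-C below 1 - threshold, else stay silent."""
    if belief > THRESHOLD:
        return "C"
    if belief < 1 - THRESHOLD:
        return "not-C"
    return None  # silence

for b in (0.95, 0.5, 0.1):
    print(b, "->", assertion(b))   # C, None (silence), not-C
```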
The Olsson model has been used, among other things, to study the impact of
network topology on accuracy (Hahn et al., 2018a), polarization (Olsson, 2013,
2020; Pallavicini et al., 2021), the impacts of different strategies for
estimating the reliability of testimonial sources (Hahn et al., 2018b; Collins
et al., 2018) and the dependence created through communication (Hahn et al.,
2019). This includes the role of communication in the context of science,
specifically asking whether the overall progress of science is helped or
hindered by frequent communication between scientists (Angere and Olsson,
2017).
This latter question has also been studied in so-called epistemic landscape
models (Weisberg and Muldoon, 2009; Pinto and Pinto, 2018; Grim et al., 2013).
These models capture scientific exploration of a topic by agent-based probing
of a fitness landscape: The boundaries of the landscape represent the
boundaries of the topic; the coordinates of the landscape correspond to
different approaches scientists could be bringing to its study and the
topography of the landscape represents the relative ‘significance’ of the
resultant scientific work.
The recent argumentation-based ABM of (Borg et al., 2019) represents yet a
further attempt to study the same problem. In this model, agents seek to
explore an underlying argument map setting out the relevant arguments on the
issue. In effect, the ‘epistemic landscape’ is the argument map. This argument
map is formulated in the abstract argumentation framework of (Dung, 1995) and
agents exchange information about arguments they have encountered. This allows
investigation of the impact of different argument selection and communication
strategies by agents with respect to achieving full end-state knowledge of the
relevant state of affairs.
In both epistemic landscape models and the Borg et al. argumentation-based
ABM, there is a ‘ground truth’ of sorts implicit in the model. However, the
design of the underlying ‘landscape’ (whether the fitness landscape or the
argument map) is essentially arbitrary and unconstrained. To the extent that
properties of that landscape matter to model behaviour and findings, results
will remain somewhat difficult to interpret –in particular, with respect to
how they generalise to real-world situations. At the same time, however, there
is no explicit representation of agent beliefs regarding a particular target
hypothesis, which separates these models fundamentally from models of opinion
dynamics.
To fully join dyadic models of argument with opinion- and belief dynamics, a
modelling framework that distinguishes between arguments, aggregate beliefs or
opinions, and the social network across which communication is happening is
required (for related points see also (Grim et al., 2013)). We return to these
issues below.
#### 2.2.3. Multi-Agent Models of Argumentation
Finally, the exchange of arguments between computational agents has been a
focal point for research on multi-agent-systems (for introduction and
overviews to multi-agent-systems see e.g., (Dorri et al., 2018; Van der Hoek
and Wooldridge, 2008)). Much of the modelling here has involved logical
formalisms of one kind or another (Calegari et al., 2021; Chesnevar et al.,
2000), though other argumentation frameworks such as Dung’s abstract
argumentation framework (Dung, 1995) and extensions thereof (e.g., (Bench-
Capon, 2002)) have also been used (see for an overview of relevant approaches
(Rahwan et al., 2003; Carrera and Iglesias, 2015)). And there has been some
(but comparatively limited) interest in the Bayesian framework (e.g., (Saha
and Sen, 2004; Nielsen and Parsons, 2007; Vreeswijk, 2004)).
Both the tools used for capturing argument and some of the research questions
asked have connections to traditional (dyad focussed) argumentation research
as described in section 2.1. Not only are there researchers who have contributed to both communities; input has also specifically been sought from non-
computational argumentation researchers (see for illustration of this point
e.g., (Rahwan and Simari, 2009)). However, the different focus of most
research on argumentation for autonomous agents means that this body of
research does not ultimately connect well to research on belief- or opinion
dynamics, or to research concerned with the spread of arguments (at least at
present). This stems from the fact that multi-agent-systems research typically
has in mind practical applications for which collections of agents provide a
potential computational solution. This makes a top-down approach to systems
involving many agents the natural focus. The goal is to solve complex
computing problems, and multi-agent systems, as a type of distributed
artificial intelligence, provide a potential tool. By contrast, most of the
research discussed thus far is interested in the bottom-up question of
dynamics and patterns that emerge naturally from groups of interacting agents.
### 2.3. The Value of Normative Models
The preceding discussion should have made clear that there is a particular
value to normative models in the context of argumentation research. In the
context of individual-focused, or (at best) dialectical, research on argumentation, the explicit and implicit normative focus is clear (section
2.1.2 above). Not only have normative issues been of direct, explicit,
interest, but normative models have methodological value even in otherwise
purely descriptive endeavours.
Specifically, the new (to argumentation research) normative standard provided
by Bayesian probability not only addressed long-standing theoretical,
philosophical concerns (e.g., about fallacies of argumentation), it also
opened up novel paths for empirical, behavioural experimentation examining
laypeople’s reasoning (e.g., (Corner et al., 2011; Bhatia and Oaksford, 2015;
Corner and Hahn, 2009; Harris et al., 2012; Hornikx et al., 2018)). And the
specificity of research questions pursued in those studies goes considerably
beyond what was possible with normatively limited frameworks such as the
Toulmin framework (see also (Hahn and Tešić, 2023) for further discussion of
this point).
In the context of multi-agent models, the importance of normative concerns
should also be clear. Again, there is considerable interest in normative
accounts of collective discussion and information exchange in social
epistemology. Likewise, there is considerable interest, for example, in
improving online platforms, and for such endeavour an understanding of what is
possible, in the best case, is important. Finally, normative models are
important in making sense of descriptive data which all too often are simply
assumed to reflect bias and irrationality whenever data patterns seem
surprising (as illustrated, by the literature on shifts to extremity in group
discussions or polarization, both examined in more detail below, C.1 and C.2).
At present, however, there is a large gap in that there are no normative
models of argument exchange across collectives that would allow researchers to
address issues such as accuracy and link up to classic, individual- and dyad-
focussed research on argument.
Crucially, in order to achieve that, two components of the model need to have
a normative grounding: argument evaluation and aggregation on the one hand,
and, on the other, a grounding of the evidence distribution in a ground truth
world model, against which beliefs can be compared and scored.
Both the evaluation/aggregation rules used by agents and the distribution of
(in principle) available evidence will affect belief dynamics. Consequently,
making both of these aspects principled (and not ‘just so’) seems of
fundamental importance for connecting to real-world contexts in meaningful
ways.
This gives rise to the following desiderata for an agent-based model of
argument exchange. What is required is a model with (i) a ground truth world,
(ii) evidence linked to that ground truth world, giving rise to a principled
evidence distribution, and (iii) rational agents who form optimal beliefs
given the evidence they have. Furthermore, such a model should be easy to use
and extend. NormAN seeks to provide a general framework for creating just such
models.
## 3\. Introducing the NormAN Framework
The core conceptual components of the NormAN framework are illustrated in
Figure 1. It comprises the three core elements that make up a NormAN model: a ‘ground truth’ world, individual ‘agents’, and the social ‘network’ across
which these agents communicate. The ground truth world determines the true
state of the claim (hypothesis) at issue in the discussion, along with the
evidence for it that could be discovered in principle. Agents receive evidence
about that world (through inquiry) and may communicate that evidence to others
as arguments and receive it in turn. (The framework of Bayesian argumentation elides the difference between evidence and arguments in as much as it models arguments with the same basic machinery used to model other evidence, though substantive differences may, of course, arise as a result of that formalisation. For clarity, it can be helpful to reserve the term ‘evidence’ for information received in the model ‘from the world’, whether through initial assignment or subsequent inquiry, and ‘argument’ for communication of that evidence.) Agents aggregate all evidence/arguments that they have
encountered to form a degree of belief in the claim at issue. Communication,
finally, takes place across a social network (including a possible ‘null
network’ in which no communication takes place for comparison).
Figure 1. Illustration of the main components of NormAN. A model within the
NormAN framework specifies (1) a ground truth world, (2) a social network, and
(3) individual agents communicating across that network. Each of these three
components has core specifications. In addition, the square in the middle
“activity levels” refers to aspects that could variously be conceived of as
properties of world, network or agent. The assignment of model parameters in
the current version of NormAN (1.0) to these various model aspects is
described in the text.
NormAN sets out a general framework in as much as each of these core
components is inherently modifiable: users can modify the ‘world’, the
evidence received, aggregation rules, communication rules, and network
topology. Moreover, NormAN is designed this way because it is our contention
that all three components are essential in coming to a deep understanding of
argumentation, communication, opinion dynamics and deliberation. As a
consequence of this foundational assumption, even the initial release of
NormAN (version 1.0) already has sufficient flexibility to allow users to
define models selected from a broad space of models varying world, agent and
network characteristics.
Moreover, NormAN is freely available (see E) so as to allow users to readily
adapt and extend the code and create new models by modifying the core
components.
As outlined above (see 2.3), the absence of agent-based models capturing
argument exchange over a ground truth world is central to the current ‘gap’. A
key problem in developing NormAN was thus how to build a ground truth world.
Extant models such as the Olsson (2011) model or bandit models (e.g.,
(Zollman, 2007)) utilise a simple binomial process to this end. The modeller stipulates a particular ‘hypothesis’ (say ‘1’) to be true, and a binomial process with probability $p$, representing the accuracy of the evidence source (assuming $p(E|H)=p(\neg E|\neg H)$, i.e., sensitivity and specificity of the ‘test’ are set to be equal; see (Hahn et al., 2018b) for discussion of the extent to which this does and does not limit generalisability of model results), produces a stream of 0s and 1s as ‘evidence’ which can then form the basis of Bayesian updating on the part of the agents.
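This generating process is easy to sketch; the code below (ours, illustrative only) stipulates the hypothesis to be true, draws a stream of binary evidence with accuracy $p$, and conditionalises on it sequentially.

```python
import random

random.seed(1)
p = 0.7        # accuracy of the evidence source: P(E=1|H) = P(E=0|not H) = p
belief = 0.5   # agnostic prior P(H); the hypothesis is stipulated to be true

for _ in range(20):
    e = 1 if random.random() < p else 0        # H is true, so '1' with probability p
    lik_h = p if e == 1 else 1 - p             # P(e|H)
    lik_nh = 1 - p if e == 1 else p            # P(e|not H)
    belief = lik_h * belief / (lik_h * belief + lik_nh * (1 - belief))

print(belief)  # drifts towards 1 as evidence accumulates
```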
Rich argument exchange over a ground truth world requires extending this kind
of approach in appropriate ways. This means not just generating an evidence
distribution that is plausible vis a vis the real world, but also generating
evidence in such a way as to allow agents to form probabilistically coherent
beliefs (‘be Bayesian’) _at least in principle_. While one’s analytic
interests need by no means be limited to optimal agents (ours are not), the
mere possibility of implementing a rational agent exhibiting normatively
correct reasoning (or as close as possible to that) means one cannot simply
generate wholly unconstrained, arbitrary combinations of evidence, because
doing so may potentially generate evidence combinations over which no rational
belief is possible.
To satisfy the dual demands of a plausible world and meaningful evidence
distribution, NormAN adopts Bayesian Belief Networks (BNs) as a tool for
generating the world. BNs are graphical representations of multi-variable
relationships (Pearl, 1988, 2000; Korb and Nicholson, 2010; Scutari and Denis,
2021). They are widely used across many disciplines (ranging from computer
science, through statistics, philosophy, psychology and sociology, among
others) for both theoretical analysis and practical software and engineering
applications (e.g., (Kammouh et al., 2020)). Specifically, BNs summarise the
way variables do and, more importantly, do not influence one another in a
graphical representation that simplifies Bayesian calculations. BNs thus have
a normative (Bayesian) foundation and they connect to extant work on
argumentation (see section 2.1.2 above) including Bayesian models of argument
generation (Zukerman et al., 1998, 1999; Jitnah et al., 2000; Keppens, 2019;
Timmer et al., 2015). Furthermore, their use for statistical analysis (Salini
and Kenett, 2009) and decision-support systems (Fenton and Neil, 2018) mean
that there exist repositories of BNs (e.g., the bnlearn repository,
https://www.bnlearn.com/bnrepository/) that putatively capture real-world
structure within the respective application domain.
NormAN allows users to select a BN and use it to generate a suitable ground
truth world through a simple trick. A variable in the network is selected as
the target hypothesis or claim at issue; its value is set for the purposes of
one or more model runs to represent the ’true state of the world’ regarding
that claim. The resultant probabilities for the remaining variables (given
that true state) are then used to stochastically generate a body of evidence
that is available, in principle, in that ground truth world (for fuller
details see below). Agents may subsequently receive evidence from that world
and exchange what evidence they have received via communication. Conceptually,
this generating procedure encompasses the simple binomial processes used in
past models as a special case.
Finally, while the use of this generating procedure is an integral part of the
appeal or value of NormAN (at least to us), it should be noted that the
framework is general enough to allow incorporation of other ’world
representations’ beyond BNs. Agent-based models defined over arbitrary
argument graphs (such as (Borg et al., 2019), see Section 2.2.2), for example,
can readily be captured as a type of NormAN model that uses an argument graph
instead of a BN as an underlying world, and in which agents’ belief
aggregation is disabled.
The key feature of the basic NormAN agents (as implemented in version 1.0) is
that they optimally aggregate evidence via Bayes’ rule. To do so, they too,
draw on a BN, which in the basic version of the model is a veridical model of
‘the world’, that is, in essence, a matching (subjective) BN ‘in the head’
(future extensions of the model involve relaxing that requirement of veridical
match between world and model in the head and also the subjective models of
other agents). Crucially, agents must also communicate and NormAN version 1.0
implements a variety of different rules and constraints on what and when
agents communicate. This includes both rules motivated by past work (e.g.,
(Mäs and Flache, 2013)) and initial suggestions for other communication rules
in a multi-argument selection context.
NormAN also allows users to vary the structure of the communication network
across which the argument exchange takes place. In the current version, this
includes selecting from a range of network types, as well as basic network
parameters such as network size and link density (the number of connections
between agents).
Finally, the framework allows modellers to determine the relative balance
between evidence ‘from the world’ (both initial and as a result of ongoing
inquiry) and the amount of communication. This feature derives from the Olsson
(2011) model and is of importance because it has been shown to affect a
variety of outcomes from individual and collective accuracy (Angere and
Olsson, 2017; Hahn et al., 2019) to polarization (Hahn et al., 2023; Hahn,
2023).
We describe the framework and its current implementation in more detail next.
For a full technical description of NormAN 1.0 following the ODD protocol
(Grimm et al., 2010), see the Appendix (Section A).
### 3.1. Main Components
The core parameters of NormAN are shown in Table 1. We describe the world, the
agents, and the networks in turn.
Entity | Variable | Value range/Type | Description
---|---|---|---
World | causal-structure | Bayesian network | Determines evidence nodes and their causal relation to hypothesis node.
| hypothesis | Variable | The hypothesis proposition (truth values: true or false).
| hypothesis-probability | $0-1$ | Probability that hypothesis is true.
| evidence-list | List | Stores truth values of evidence nodes (values: true/false).
Agents | agent-evidence-list | List (dynamic) | Stores truth values of evidence nodes agents encountered.
| agent-belief | $0-1$ (dynamic) | Belief in the hypothesis.
| initial-belief | $0-1$ (dynamic) | Unconditional prior belief in hypothesis.
| update-list | List (dynamic) | Stores impact.
| recency-list | List (dynamic) | Governs communication via the recency rule.
| chattiness | $0-1$ | Probability of communication.
| curiosity | $0-1$ | Probability of inquiry (evidence gathering).
| conviction-threshold | $0-1$ | Conviction in claim required to join debate.
| max-draws | Integer | Determines number of inquiries agents can perform.
| share | Chooser | Determines agents’ communication rule. Values: random, impact, recent.
Network | number-of-agents | $1-1000$ | Determines the size of the network.
| social-network | Chooser | Network type: null, complete, small-world, wheel.
Table 1. The Core Parameters of NormAN governing world, agents, and social
network (for a complete table, cf. Appendix Fig 2).
#### 3.1.1. The World
The world model consists of a BN comprising a set of nodes (variables) and the
probabilistic/causal links between them. In general, Bayesian networks consist
of a so-called directed acyclic graph and a matching probability distribution
that specifies the conditional probabilities of each variable in accordance
with the graph. (Formally, a BN $B=\langle G,P\rangle$ comprises a directed acyclic graph $G=\langle V,E\rangle$, with a set of nodes (variables) $V$ and edges $E$, and a joint probability distribution $P$ over $V$ such that $G$ satisfies the parental Markov condition together with $P$.) As an example, consider the well-
known lung cancer/asia network (Lauritzen and Spiegelhalter, 1988b), as seen
in Fig. 2: a hypothetical network from the medical field that models the
causal (probabilistic) relationships between a patient’s habits (smoking,
visiting Asia) and their symptoms (dyspnoea, positive X-ray).
Figure 2. The original ‘Asia’ lung cancer network (Lauritzen and
Spiegelhalter, 1988a). The Asia BN model was accessed via the bnlearn Bayesian
Network Repository (https://www.bnlearn.com/bnrepository/discrete-small.html);
it is also one of the exemplar BNs used in the bnlearn package documentation
(https://www.bnlearn.com/documentation/bnlearn-manual.pdf). This BN was
constructed using a hypothetical case of qualitative medical knowledge to
illustrate the utility of Bayes’ rule for expert systems. The target
hypothesis is ‘Lung’ (whether or not lung cancer is true, shown as the blue
node), and there are seven observable evidence nodes (shown as the orange
nodes): Asia (recent visit to Asia); smoking; tuberculosis; bronchitis;
dyspnoea (shortness of breath); and x-ray (chest x-ray). The likelihood of
lung cancer is increased when smoking, x-ray, bronchitis and dyspnoea are set
to true. Combinations of evidence lead to interactions. The ‘either’ node is a
deterministic node that is used in this network to represent the fact that
both tuberculosis and lung cancer can result in positive x-ray. Network
properties: Number of nodes = 8, Number of arcs = 8, Number of parameters =
18, Average Markov blanket size: 2.5, Average degree = 2 and Maximum in-degree
= 2.
In such a BN, the modeller identifies a variable as the hypothesis variable
(hypothesis), or H for short, (e.g., ‘lung cancer’) and chooses a subset of
the other nodes as evidence nodes
($\texttt{E}_{1},\texttt{E}_{2},\ldots,\texttt{E}_{n}$). In NormAN 1.0,
hypothesis nodes and evidence nodes must be two-valued, that is, they are
either true or false. The model assigns such a truth value to the hypothesis
(manually or probabilistically). The following procedure then determines the values of the evidence nodes: the marginal conditional probability of each evidence node (given the hypothesis value) is calculated, and on initialisation this probability stochastically determines the node’s truth value. For example, if it is true that the patient has lung cancer, and $P(\text{bronchitis}\mid\text{lung cancer})=0.2$, then there is a 20% chance that the value of the variable bronchitis is true.
Since the evidence nodes are two-valued, this procedure yields a chain of
evidence in the form of $\neg E_{1},E_{2},E_{3},\ldots,\neg E_{n}$ (where
$\neg E_{i}$ denotes ‘$\texttt{E}_{i}$ is false’ and $E_{i}$ denotes
‘$\texttt{E}_{i}$ is true’). This list of indexed truth values is stored in
evidence-list.
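The initialisation step can be sketched as follows; the marginal probabilities here are made-up stand-ins for the values a BN engine would compute given the hypothesis node’s truth value.

```python
import random

random.seed(0)

# Illustrative marginals P(E_i = true | hypothesis = true), standing in for
# values computed from a BN such as the Asia network.
marginals = {"smoke": 0.6, "bronc": 0.2, "dysp": 0.5, "xray": 0.7}

# Stochastically fix each evidence node's truth value for this run's 'world'.
evidence_list = {e: random.random() < p for e, p in marginals.items()}
print(evidence_list)   # e.g. {'smoke': True, 'bronc': False, ...}
```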
Crucially, this evidence assignment determines what evidence, on a given run,
counts as evidence for or against the hypothesis. While the structure of the
BN determines the evidential impact of a piece of evidence (e.g., the degree
to which the presence of smoking [‘smoke’] increases belief in ‘lung cancer’),
it is the actual value assigned to that variable on a given run which
determines the evidence as for or against _in this world_: if the value of
‘smoke’ is initialised to false, it provides evidence against the hypothesis
lung cancer as knowledge of that fact will lower degree of belief in ‘lung
cancer’ being true.
This means also that many possible combinations of evidence for and against
will be generated by a single BN world model, see Fig 3. And in the user
interface (UI) NormAN users can determine not only which BN they wish to work
with, but also whether or not the evidence is re-initialised on a given run.
Figure 3. Different instantiations of the ‘world’ defined by the ‘Asia’ lung
cancer network (Lauritzen and Spiegelhalter, 1988a). On a given model run, the
base net (a), can give rise to different ‘worlds’ with varying arguments ‘for’
(green) and ‘against’ (red), depending on the stochastic initialisation.
#### 3.1.2. The Agents
Each agent is characterised by (a) their degree of belief in the hypothesis
(variable agent-belief), (b) their representation of the causal structure of
the world, and (c) a list of evidence they have already encountered. We go
through each feature in turn. First, each agent assigns a degree of belief to
the hypothesis (variable agent-belief). Second, they use a BN that connects
the evidence to the hypothesis as their representation of the world to compute
said belief. Third, they store the truth values of the evidence they have
already encountered (variable agent-evidence-list). These three aspects are
related in a dynamic, straightforward way. Suppose an agent $A$ stores the
following list of evidence at time $t$: $\texttt{agent-evidence-
list}_{A}^{t}=\\{E_{1},\neg E_{3}\\}$. In that case, they will use their
Bayesian network to compute $\texttt{agent-belief}_{A}^{t}=P(H|E_{1},\neg
E_{3})$ by using Bayesian conditionalization. Whenever agents encounter a new
piece of evidence (e.g., $E_{2}$), they update their degree of belief (e.g.,
$\texttt{agent-belief}_{A}^{t+1}=P(H|E_{1},E_{2},\neg E_{3})$). When the
agent’s agent-evidence-list is empty, that is, when they have not yet
encountered any evidence, their agent-belief is simply the base rate (marginal
probability) of the hypothesis node in their BN. This value is stored in the
agent-variable initial-belief as their agnostic, pre-evidence belief in the
hypothesis.
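A sketch of this computation, using a tiny hand-rolled joint distribution in place of the agent’s BN (structure and numbers are illustrative; E1 and E2 are conditionally independent given H):

```python
from itertools import product

p_h = 0.1                                     # base rate of the hypothesis
cpt = {"E1": (0.8, 0.3), "E2": (0.6, 0.1)}    # (P(E|H), P(E|not H))

def joint(h, e1, e2):
    """Joint probability of one full assignment of H, E1, E2."""
    p = p_h if h else 1 - p_h
    for name, val in (("E1", e1), ("E2", e2)):
        p_true = cpt[name][0 if h else 1]
        p *= p_true if val else 1 - p_true
    return p

def belief(agent_evidence_list):
    """P(H | evidence seen so far); unseen evidence variables are summed out."""
    num = den = 0.0
    for h, e1, e2 in product([True, False], repeat=3):
        world = {"E1": e1, "E2": e2}
        if any(world[k] != v for k, v in agent_evidence_list.items()):
            continue  # inconsistent with the evidence encountered
        den += joint(h, e1, e2)
        if h:
            num += joint(h, e1, e2)
    return num / den

print(belief({}))                        # empty list: the base rate, 0.1
print(belief({"E1": True}))              # conditionalising on E1
print(belief({"E1": True, "E2": False}))
```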
In the first version of NormAN presented here, we assume that each agent’s BN
simply corresponds to the world model’s network: that is, we assume that
agents represent the world correctly (on relaxing this assumption, see 4.3.1
below). This homogeneity of worldviews entails that whenever two agents have
access to the same evidence, they also have the same degree of belief in the
hypothesis. This assumption can be interpreted as fulfilling the uniqueness
standard of rationality, that is, the claim that for any body of evidence $E$
and proposition $P$, $E$ justifies at most one doxastic attitude toward $P$
(Feldman and Antony, 2011; White, 2019). This homogeneity also means that
disagreements are entirely the result of asymmetric information. Heterogeneity
in beliefs arises because agents may have access to different sets of
evidence.
#### 3.1.3. The Social Network
The model places ‘number-of-agents’ agents on a grid and then specifies who is
connected to whom via undirected communication links. Agents can only
communicate with their link neighbours. NormAN provides a number of different
network structures that the user can select before initialisation (via the
chooser variable social-network), such as the complete network, a ‘wheel’ (cf.
(Zollman, 2010; Frey and Šešelja, 2020)) and small-world networks (also known
as Watts-Strogatz networks (Watts and Strogatz, 1998; Wilensky, 2005)). The latter are a type of network structure found in many social and biological networks. They are characterised by comparatively short paths between nodes in the network (‘six degrees of separation’) and comparatively high clustering,
although the density of connections is relatively low (see Fig. 4 for a
visualisation).
Figure 4. Two groups of 50 agents connected in a social network: a ‘small-
world’ network on the left, and a complete network on the right. Green
triangles represent agents who currently support the hypothesis, and red those
who do not (cf. Section 3.2). Both network types are used in the case studies
(sections 4.1, 4.2). Parameters of the small-world network: rewiring-
probability=0.2, k=2.
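For readers who wish to reproduce such structures outside NormAN itself, the network types above can be generated with the networkx library; the parameters below are illustrative, not NormAN’s defaults.

```python
import networkx as nx

n = 50
small_world = nx.connected_watts_strogatz_graph(n, k=4, p=0.2)  # rewiring prob. 0.2
complete = nx.complete_graph(n)
wheel = nx.wheel_graph(n)

# Agents communicate only with their link neighbours:
print(sorted(small_world.neighbors(0)))
# Small-world signature: short average paths, comparatively high clustering.
print(nx.average_shortest_path_length(small_world), nx.average_clustering(small_world))
```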
### 3.2. Process Overview
Deliberation unfolds dynamically, in discrete time steps. At each step, agents
follow this protocol:
1. Collect evidence: agents may collect a new piece of evidence from the world.
2. Communication: agents may share one piece of evidence they have already encountered with their link neighbours.
Collecting evidence facilitates the flow of information into the network, and
communication facilitates the flow of information through the network. This
subsection explains when and how agents perform each activity (a detailed
description and a flowchart of the protocol can be found in section A.3, Fig.
9).
To collect evidence (or ‘inquire’), agents randomly select an item from the
world model’s evidence-list that they have not yet encountered. They add this
truth value to their personal agent-evidence-list. (As an example, suppose that at time $t$, agent $A$ stores the truth values $\texttt{agent-evidence-list}_{A}^{t}=\\{E_{1},\neg E_{3}\\}$. Through inquiry, they may find that $E_{2}$ is indeed true, thus extending their list to $\texttt{agent-evidence-list}_{A}^{t+1}=\\{E_{1},E_{2},\neg E_{3}\\}$.) Inquiry is therefore modelled
by ‘drawing’ from the world’s evidence-list. Learning any new piece of
evidence (be it via inquiry or communication) is modelled as learning that a
certain piece of evidence is true or false. Two agent variables govern
inquiry. First, agents have a fixed maximum number of inquiries (the variable
max-draws determines how many pieces they may each collect during one
simulation). Second, agents will not necessarily inquire every round. Rather,
their chance of inquiry is determined by a curiosity parameter. Hence, agents
only collect evidence if they are ‘curious’, and if they still have ‘draws’.
Next, in each round, agents may communicate and receive evidence through
communication. In NormAN 1.0, communication is modelled via a simple
transmission mechanism: the communicating agent $A$ chooses which piece of
evidence to transmit to their link neighbours. Each link neighbour, e.g., $B$,
then either adds this evidence to their agent-evidence-list, and computes a
new agent-belief, or ignores this evidence if they have already heard
it. For instance, if $A$’s list is $\texttt{agent-evidence-list}_{A}^{i}=\{E_{1},\neg E_{3}\}$, and $B$’s agent-evidence-list is $\texttt{agent-evidence-list}_{B}^{i}=\{\neg E_{3}\}$, then $A$’s sharing $E_{1}$ will enrich agent $B$’s list to $\texttt{agent-evidence-list}_{B}^{i+1}=\{E_{1},\neg E_{3}\}$. Had $A$ chosen $\neg E_{3}$, $B$’s list would have remained unchanged. In NormAN, agents recognise distinct pieces of evidence and never ‘double count’ them.
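In code, the transmission mechanism amounts to the following sketch (again illustrative Python rather than the NetLogo implementation; update_belief stands in for the Bayesian update over the world BN that NormAN delegates to bnlearn):

```python
# Sketch of the transmission step: the sender picks one piece of evidence
# (according to their sharing rule); each neighbour adds it, and recomputes
# their belief, only if it is new to them.
def transmit(sender, neighbours, choose_evidence, update_belief):
    name = choose_evidence(sender)        # sharing rule decides what to send
    value = sender["evidence"][name]
    for nb in neighbours:
        if name in nb["evidence"]:
            continue                      # already heard: never double-counted
        nb["evidence"][name] = value
        nb["belief"] = update_belief(nb["evidence"])
```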
Although this mechanism of informational ‘passing the parcel’ is simple in
that it avoids the complexity of testimony, it can be used to capture
distinct, complex styles of communication. In NormAN 1.0, three sharing rules
are examined:
(1) Random: Agents share a random piece of evidence from their agent-evidence-list.
(2) Recency: Agents share the piece of evidence they most recently encountered.
(3) Impact: Agents share the piece of evidence that they hold to be the best piece of evidence in favour of their current position.
Since the random rule is self-explanatory, we briefly explain how the recency
and impact rules work (for a detailed, technical explanation, see Section
A.3). Under the recency rule (loosely inspired by Mäs and Flache’s model of
bi-polarization (Mäs and Flache, 2013)), agents are most likely to share the
last piece of evidence they heard. This is implemented by each agent’s
recency-list, which keeps track of the order of receipt. Importantly, even if agents receive a piece of evidence they have already encountered, this piece is ‘popped’ to the top of the recency-list. With a high probability $x$ ($x=0.9$ in the base model), agents share the last element of this list; with probability $1-x$ they share another random element from the list.
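A minimal sketch of the recency mechanism (illustrative Python, with names of our own choosing):

```python
# The recency-list orders evidence by receipt; re-heard evidence is
# 'popped' to the top. The newest item is shared with probability x
# (0.9 in the base model), otherwise a random held item is shared.
import random

def note_received(agent, name):
    if name in agent["recency_list"]:
        agent["recency_list"].remove(name)   # pop to the top on re-receipt
    agent["recency_list"].append(name)       # last element = most recent

def choose_by_recency(agent, x=0.9):
    if random.random() < x:
        return agent["recency_list"][-1]
    return random.choice(agent["recency_list"])
```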
The impact-sharing rule provides a very basic implementation of the idea that
speakers seek to communicate what they consider to be (most) relevant. This
means sharing what they consider to be their best—strongest—piece of evidence.
In our simple impact rule, this is the piece of evidence which most convinces
agents of their current position. In NormAN, agents track the magnitude of the
belief update that evidence triggers, that is, its ‘impact’. To measure this,
for each evidence $E_{i}$, agents store the update magnitude
$P(H|E_{i})-P^{initial}(H)$, where $P^{initial}(H)$ marks the agent’s prior,
pre-evidence belief (initial-belief). That is, the impact of a piece of
evidence is measured by how far it moved (or would move) an agent’s belief
away from their agnostic prior. Each agent has an update-list recording the
impact of each piece of received evidence. If an agent currently supports the
hypothesis, they share the evidence with the highest update value (and they
share the evidence with the lowest, i.e., largest negative impact if they
currently oppose it). NormAN models this ‘support’ as follows: if the agent’s
agent-belief $>$ initial-belief, they support the hypothesis, and they oppose it if agent-belief $<$ initial-belief (cf. Fig. 4). Hence, an agent’s position is
measured relative to their pre-evidence, agnostic prior.
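A sketch of the impact rule, under the bookkeeping just described (illustrative Python; the update-list maps each held piece of evidence to its stored update magnitude):

```python
# Supporters share the evidence with the largest positive impact,
# opponents the evidence with the largest negative impact. Agents whose
# belief sits exactly at the prior do not pass the conviction threshold
# (next paragraph) and so never reach this choice.
def choose_by_impact(agent, initial_belief):
    impacts = agent["update_list"]        # e.g. {"E1": +0.25, "E3": -0.10}
    if agent["belief"] > initial_belief:  # currently supports the hypothesis
        return max(impacts, key=impacts.get)
    return min(impacts, key=impacts.get)  # currently opposes it
```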
Communication is regulated by a conviction threshold, a percentage value that
serves as a cut-off point for when an agent’s belief departs sufficiently from
their agnostic, pre-evidence prior (initial-belief) for them to jump into the
discussion. This threshold is set by the global variable conviction-threshold,
which determines a percentage by which the agent’s conviction needs to exceed
their initial, pre-evidence belief. Specifically, it defines a lower bound and an upper bound for agent beliefs. The lower bound is computed as $\texttt{initial-belief}-\texttt{initial-belief}\times\texttt{conviction-threshold}$; the upper bound as $\texttt{initial-belief}+(1-\texttt{initial-belief})\times\texttt{conviction-threshold}$. If an agent’s agent-belief does not exceed the threshold (above or below), they will not share. Note that if conviction-threshold is set to $0$, the sharing condition is trivially met in most cases: agents will share whenever their agent-belief $\neq$ initial-belief. As an example, if conviction-threshold $=0$, initial-belief $=0.3$, and agents use the impact sharing rule, they will share pieces of evidence ‘against’ $H$ if their current belief is below $0.3$ (and vice versa for agent-belief $>0.3$).
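The threshold test itself is a two-line computation; the sketch below (illustrative Python) makes the bounds explicit:

```python
# An agent may only share if their belief lies outside [lower, upper].
def may_share(belief, initial_belief, conviction_threshold):
    lower = initial_belief - initial_belief * conviction_threshold
    upper = initial_belief + (1 - initial_belief) * conviction_threshold
    return belief < lower or belief > upper

# Example: with initial-belief = 0.3 and conviction-threshold = 0.1,
# the bounds are 0.27 and 0.37, so a belief of 0.5 clears the threshold.
print(may_share(0.5, 0.3, 0.1))  # True
```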
One last agent-variable co-determines the frequency of communication: agents’
chattiness $\in[0,1]$, that is, the chance that they will communicate with
their link neighbours on each round (determined by the global variable
chattiness). If an agent passes the conviction threshold and is chatty, they
will send an argument (an item from their agent-evidence-list) to their link
neighbours.
To summarise, in each time step, agents may first collect new evidence from
the world (if they are curious and still have ‘draws’). Then, if they cross
the threshold and are chatty, they share one of their pieces of evidence with
their neighbours (according to the sharing rule chosen by the model user).
Whenever they learn of a new piece of evidence, they compute their new belief
in the hypothesis.
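Putting the pieces together, one time step can be sketched as below. This composes the illustrative functions introduced above (inquire, may_share, transmit) and is not self-contained NetLogo code; the actual scheduling lives in NormAN's go procedure and may differ in detail (for example, in agent ordering).

```python
import random

def step(agents, neighbours_of, choose_evidence, update_belief, params):
    """One deliberation step: (1) evidence collection, (2) communication."""
    for agent in agents:
        inquire(agent, params["curiosity"], params["max_draws"])
    for agent in agents:
        if random.random() > params["chattiness"]:
            continue                      # not chatty this round
        if not may_share(agent["belief"], params["initial_belief"],
                         params["conviction_threshold"]):
            continue                      # belief too close to the prior
        transmit(agent, neighbours_of[agent["id"]],
                 choose_evidence, update_belief)
```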
### 3.3. Implementation and Usage
In order to make the NormAN framework accessible to researchers from a broad
range of backgrounds we chose to implement it in NetLogo (Wilensky, 1999;
Wilensky and Rand, 2015). Designed initially to teach programming to
beginners, NetLogo is an accessible, well-documented, platform that has been
and continues to be widely used in agent-based modelling research (Gunaratne
and Garibay, 2021), including specifically for research on opinion dynamics
(Lorenz, 2017; Wang et al., 2022), belief dynamics (Hahn et al., 2018b, 2019,
2023), and social epistemology (Weisberg and Muldoon, 2009; Pinto and Pinto,
2018).
Its benefits lie in the fact that much of the machinery required for setting
up an ABM and running simulations with it is in-built, leading to very compact
code: the initial version of NormAN (version 1.0) has only 500 lines of code
(excluding the BN definitions).
Moreover, NetLogo has extensions for both R (Thiele et al., 2012) and Python
(Jaxa-Rozen and Kwakkel, 2018), that allow two-way interactions with the
extensive scientific computing resources of those platforms. For our initial
version of NormAN we chose the R extension (a Python version is planned for the future). Specifically, NormAN draws on the R package bnlearn (Scutari,
2009; Scutari and Denis, 2021) to handle all Bayesian belief updating over
BNs. NormAN 1.0 was developed using NetLogo version 6.2.1, which efficiently
implements the R extension (developed by Thiele and Grimm (2010)).
The combination of R (bnlearn) and NetLogo makes for a very flexible modelling
environment: to characterize the world BN, the modeller can load whole R
files into the NetLogo model, or simply copy and paste the lines of R code
into the indicated section of the NetLogo code. One can also use one of eight
preset BNs (see Appendix B). The NetLogo interface handles the rest: sliders, inputs
and switches determine the size and shape of the social network, the agent
variables such as sharing styles, as well as the specification of which BN
nodes ought to count as the evidence and hypothesis.
#### 3.3.1. Running and Evaluating Simulations
With respect to running and evaluating simulations, the use of the R extension
means that users have two routes for controlling ‘experiments’ and model
explorations: NetLogo’s built-in BehaviorSpace (Tisue and Wilensky, 2004) and
directly through R (Thiele et al., 2012).
While the use of a high-level language such as NetLogo does come at a performance cost, we found simulations with NormAN version 1.0 to be easily efficient enough for practical purposes in the range of network sizes we think modellers will most likely wish to explore in detail (up to around 100 agents). It is also possible to run larger networks: we have, albeit slowly, run networks with 100,000 agents in the NetLogo User Interface (on a 2020 MacBook Pro with a 2 GHz Quad-Core Intel Core i5 and 16 GB of RAM).
It is thus possible, even in the current implementation, to check how findings
scale and whether ‘more is different’ for the target phenomenon of interest
(Anderson, 1972). Furthermore, much of the processing time for large networks
involves the construction of the social network (in particular, the small
world network), suggesting paths for scalable future versions (Railsback et
al., 2017).
While we consider the balance between accessibility and performance to be a
suitable one with respect to our current goals (see also (Burbach et al.,
2020; Railsback et al., 2017)), re-implementing the NormAN framework not just
with other extensions (Salecker et al., 2019; Gunaratne and Garibay, 2021),
but also within other platforms and languages is a goal for future work.
## 4\. Initial Case Studies
In the third main part of this paper, we seek to demonstrate the utility of
NormAN with two case studies. These have been chosen to illustrate the value
of its features and demonstrate the reasons for our basic design choices. In
particular, they serve to underscore the claims of section 2.3 above, that a
suitable model of argument exchange needs normative grounding both with
respect to the aggregation of evidence/arguments by individual agents and with
respect to the ground truth world. The case studies have been chosen also to
illustrate how NormAN, as a normative model, may contribute to extant
theoretical concerns across a range of disciplines.
### 4.1. Case Study 1: Shift to Extremity
The so-called ‘shift to extremity’ (Stoner, 1968) is the original
‘polarization’ phenomenon. Although the term ‘polarization’ has now become
associated with belief or opinion _divergence_ within a population, the term
was first used to describe the phenomenon whereby deliberating groups tended
to shift further in the direction of an initial opinion over the course of
deliberation (for a review see e.g., (Isenberg, 1986)). This shift to
extremity attracted considerable research spanning six decades to date and has
proved highly reliable (it has been observed with lab-based studies, mock
juries, deliberative polling (Myers and Lamm, 1976) and citizen debates
(Lindell et al., 2017) and with tasks as diverse as risk acceptance,
probability judgment, policy choice and negotiation (Lamm, 1988)), though it
is not observed in every group discussion. This interest has been fuelled not
just by the phenomenon’s practical relevance, but also the fact that it (at
least initially) seemed counter-intuitive and in need of explanation: after
all, one might expect a deliberation to surface both arguments for and against
a claim. This made it seem surprising that beliefs might shift regularly in
one particular direction.
Multiple lines of explanation were pursued in the literature, such as the idea
that the shift reflects social comparison processes (Sanders and Baron, 1977)
or social identity considerations (Abrams et al., 1990): people may become
comfortable expressing positions they initially feared others might view as
extreme or they may, as a matter of identity, seek to adopt attitudes
stereotypical of the group. A third account, by contrast, attributed the shift
to the arguments that surfaced within a debate. Burnstein and Vinokur’s
‘argumentative theory’ proposed that group members lean toward an initial
position because they have more (or stronger) arguments in favour of that position, and such arguments will consequently be available for exchange in the deliberation (Burnstein and
Vinokur, 1977). Experimental research subsequently sought to distinguish these
competing (but ultimately not mutually exclusive) accounts (Lamm, 1988;
Sanders and Baron, 1977; Vinokur and Burnstein, 1978).
The argumentative theory was also supported through simulations in an agent-
based model by Mäs and Flache (2013). In this model, Mäs and Flache
implement the substantive assumptions of the persuasive argumentation theory
and combine them with the modelling of homophily in order to understand bi-
polarization or belief divergence (an issue we turn to in our next case
study). In effect, their model of the latter combines the shift to extremity
afforded by persuasive argumentation with homophily-based segregation to
explain divergence. In their model, agents have a numerically valued opinion
(drawn from the interval -1 to 1) representing their stance on the issue in
question. Additionally, there is a set of arguments that address that issue.
The valence of an argument is expressed numerically (pro = 1, con = -1), and
all arguments carry equal weight. An agent’s current stance is based on the
average value of the arguments they are currently considering, and arguments
for communication are selected randomly from the agent’s current relevance set
–a subset of the encountered arguments determined by recency. Resultant
findings support the argumentative theory in as much as the positions of
agents within the homophily-driven clusters become more extreme.
One limitation of the model, however, is that both the generation and
evaluation of arguments lack a principled basis. And the initial distribution
of arguments in the population is essentially ‘just so’.
Most recently, it has been pointed out that both of these concerns may be
addressed by adopting a Bayesian perspective on argument (Hahn, 2023), as
described in section 2.1.4 above. From that perspective, multiple interlocking
components give rise to the shift to extremity: Group members’ pre-
deliberation beliefs are based on a sample of the arguments available ‘in the
world’. The available population of arguments for and against the claim is
likely to contain stronger arguments in one direction than the other. Pre-
deliberation beliefs based on samples will, on average, reflect those
characteristics. By the same token, combining samples via group discussion is
more likely to see individual members add stronger arguments supporting the
initial direction, which in turn will shift the group mean.
The core component –that the expected distribution of arguments available _in the world_ is skewed– follows from the Bayesian conceptualisation of argument
–outlined in section 2.1.4– whereby an argument or piece of evidence is strong
or diagnostic to the extent that it is much more likely to be found if the
hypothesis is true than if it is false (expressed by the likelihood ratio
$P(e|H)/P(e|\neg H)$). This translates into an expected distribution of
arguments by virtue of the fact that how likely a piece of evidence is (its
so-called marginal probability) is determined by ‘total probability’:
$P(e)=P(e|H)\times P(H)+P(e|\neg H)\times P(\neg H)$. In other words, the
probability of a piece of evidence is determined by the probability of
obtaining it if the underlying hypothesis is true, weighted by the probability
of the hypothesis, plus the probability of a false positive, weighted by the
probability of the hypothesis being false. This means that (all other things
equal), strong evidence in favour of a hypothesis is more likely than equally
strong evidence against if the hypothesis is true (Hahn, 2023).
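A quick numerical check of this point, with assumed (and deliberately symmetric) numbers:

```python
# Take a strong pro argument E+ (likelihood 0.8 under H, 0.2 under not-H)
# and an equally strong con argument E- (0.2 under H, 0.8 under not-H),
# with P(H) = 0.5. These numbers are ours, purely for illustration.
p_H = 0.5
p_Epro_H, p_Epro_notH = 0.8, 0.2   # likelihood ratio 4 in favour of H
p_Econ_H, p_Econ_notH = 0.2, 0.8   # likelihood ratio 4 against H

# Marginals via total probability: both are 0.5 here...
p_Epro = p_Epro_H * p_H + p_Epro_notH * (1 - p_H)
p_Econ = p_Econ_H * p_H + p_Econ_notH * (1 - p_H)
print(p_Epro, p_Econ)            # 0.5 0.5

# ...but conditional on H actually being true, the strong pro argument is
# four times as likely to be instantiated as the equally strong con one.
print(p_Epro_H, p_Econ_H)        # 0.8 0.2
```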
In short, the Bayesian framework helps make explicit the fundamental point
that –at least for claims about issues of fact– what arguments are available
in a domain is determined by the underlying structure of the world. And that
evidence distribution, in turn, will impact agents’ subsequent beliefs. The
shift to extremity is simply an initially counter-intuitive consequence of
that fundamental point. We should expect a group with prior evidence/arguments
to lean, on average, in a particular direction and expect that exchanging that
evidence will likely lead to a further shift in that direction.
Figure 5. Results from simulation using the ‘Big Net’ network. Shown are the
mean beliefs of agents in the target hypothesis and standard error bars,
across all 800 experiment runs, at the pre-deliberation (step 0) and post-deliberation phase (in
this experiment after 25 exchanges/steps). Groups are split by the agents’
mean initial direction of belief for a given run.
Figure 6. The Figure shows the trajectories of mean group beliefs across time
(thin lines) and the average of those means (dashed lines) with standard error
(grey shaded area), split by whether the group initially leaned ’for’ (blue)
or ’against’ (red). The top row shows the simulation results for complete
networks of various sizes (10, 50, 100, and 500 agents). The bottom row shows
the same dynamics in small-world networks with corresponding numbers of
agents. Note that the ‘pre-evidence’ neutral point in these simulations is 0.5
(base rate of hypothesis), as indicated by the black dotted line.
In our first case study, we demonstrate these theoretical points through
simple simulations with the NormAN framework. To this end, we simply ran
NormAN using one of our default networks –the ‘Big Net’ network (see Fig. 15
in Appendix B below). This stylised network is useful for making the current
point because it is symmetrical by design: the structure contains three
arguments that, when true, are evidence for the claim, and three that, when
true, are equally strong evidence against. On average, instantiations of this
world will not, however, lead to equal numbers of arguments for and against
(though this will sometimes occur) for the reasons just outlined. As a result,
the shift to extremity is readily observed despite the balanced nature of this
stylised network. To show this, we simulated many model runs, graphing the
mean belief at 2 points in time: once after the initial argument-draw by
agents, and once at the end of the run. The former represents the pre-
communication, and hence the pre-deliberation state, the latter the end of
deliberation (for full details see Appendix C.1). Figure 5 shows the respective
pre- and post-deliberation means split by whether the group’s initial belief
(given their initial access to evidence prior to deliberation) leans ’for’ or
’against’ the claim in question. The second figure, Figure 6 shows the same
measurements but split by individual runs of the simulation. It plots the
belief dynamics across time for the individual runs, split again by whether
the group’s initial belief (given their initial access to evidence prior to
deliberation) leans ‘for’ or ‘against’ the claim in question (i.e., the target
hypothesis). As can be seen, the shift does not always happen but happens most
of the time.
The experimental literature on the shift typically considers fairly small
groups, under conditions where members typically all hear one another. In
network terms, participants constitute a complete network. These conditions
are captured by the top left panel of Figure 6. But we can also explore how
increasing the group size influences the dynamics. To this end, the four
columns of row 1 in Fig. 6 show group sizes of 10, 50, 100 and 500 agents
respectively. These show (for a constant pool of available arguments) a
sharpening of the shift as a function of group size. This reflects the fact
that the available evidence enters the group discussion more quickly. The
bottom row of Fig. 6 shows the same information for a small world network
(Watts and Strogatz, 1998). One can see a dampening of the shift, due to the
slower diffusion of arguments among agents. Finally, to demonstrate that these
findings are not a unique feature of the particular BN ‘world’ selected, Fig.
18 in the Appendix section C.1 shows the same results for a different network
included with NormAN version 1.0, the ‘Asia network’ of Fig. 2 above. The
Appendix also contains the full simulation details, model parameters, and
further supplementary figures that elucidate model behaviour.
These simple simulations illustrate the different components of the ‘shift to
extremity’ that a Bayesian perspective helps unify and make explicit. In so
doing, they also illustrate how additional insight and refinement of
understanding becomes available as a result of moving from the argumentative
theories’ initial, purely verbal, formulation, through Mäs and Flache’s (2013) agent-based model to a normative framework. Most
importantly, however, these simple simulations highlight the important point
that evidence or argument distribution matters fundamentally to understanding
model behaviour. By the same token, it matters fundamentally to the
understanding of the kind of real-world behaviours these models are trying to
elucidate.
### 4.2. Case Study 2: Polarization versus Convergence
The goal of consensus as unanimous agreement is one of the key motivations for
deliberation in early theories of deliberative democracy (Landemore and Page,
2015). Conversely, polarization as (extreme) belief divergence is seen as a
threat to contemporary democracies worldwide (Sunstein, 2018). Under which
conditions can we expect a group to converge on consensus—and correct
consensus at that? And under which conditions does polarization emerge?
Computational models of deliberation have identified conditions undermining
consensus, that is, conditions that lead to non-convergence. While such
models, to date, have yielded a wealth of insight, in particular formal
insight (e.g., (Krause, 2015)), there are key features of the most popular
paradigms that significantly limit or distort insight into polarization as a
real-world phenomenon.
As discussed in section 2.2.1 above, most opinion dynamic or social-influence
models revolve around the notion that individuals communicate their opinions
and influence is typically implemented as opinion averaging (Hegselmann and
Krause, 2002; French Jr, 1956; Friedkin and Johnsen, 2011; Deffuant et al.,
2005). They thus abstract away entirely from supporting arguments themselves.
As a result, these models have several consequences that are strongly at odds
with what is empirically observed. For one, they typically exhibit an
inevitable drive to convergence (Abelson, 1964; Lorenz, 2006; Krause, 2015)
which has meant that other factors preventing convergence and giving rise to
polarization must additionally be imposed (e.g., (O’Connor and Weatherall,
2018)). As Mäs and Flache (2013) note, many of these factors can be
understood as negative influence of one form or another (Baldassarri and
Bearman, 2007; Macy et al., 2003; Mason et al., 2007; Olsson, 2020) giving
rise to two competing forms of influence—positive influence from similar,
like-minded agents, and negative influence from dissimilar agents. One aim of
the Mäs and Flache (2013) model introduced in the previous section is to
demonstrate how positive influence alone (in the form of persuasive
argumentation coupled with homophily) can give rise to divergence. At the same
time, models based on opinion averaging also fail to capture the shift to
extremity. Averaging implies that averaging agents will not adjust their
opinions when they interact with others with whom they already agree (Mäs and
Flache, 2013). This follows necessarily from the fact that the average of two
near-identical values will not only be similar to these values but will be
less extreme than the more extreme of the two. It is thus difficult to
generate the empirically observed dynamics whereby deliberation leads to views
more extreme than those of any of the participants prior to the interaction.
Yet this possibility has been observed widely in the empirical research on the
shift to extremity discussed above.
A model that is rich enough to include individual arguments thus seems
essential to fully understanding real-world divergence of beliefs. In this
context, the dynamics of NormAN help clarify a fundamental mechanism producing
non-convergence: namely, the incomplete sharing of information on a social
network. Whenever two agents start with the same prior degree of belief and
also interpret pieces of evidence in the same way, the acquisition of
different sets of evidence will drive their posterior beliefs apart. Hence, in
our model, whenever deliberation does not result in a state where all agents
have access to the same evidence, then final beliefs may be scattered as well.
This is a natural consequence of the uniqueness assumption implemented in
NormAN 1.0: for each set of evidence, there is only one permissible ‘doxastic
state’, that is, only one possible degree of belief. Other models identify a
different cause of non-convergence: differences in the interpretation of
evidence. Cook and Lewandowsky (2016) highlight how
Bayesian agents entertaining different BNs (in particular, different
conditional probability distributions) will exhibit belief divergence when
receiving the same evidence (an experimental finding dating back to (Lord et
al., 1979)). Indeed, relaxing the uniqueness requirement and allowing agents
to have different conceptions of the world’s causal structure will make
perfect convergence via deliberation (as characterized here by the exchange of
arguments/evidence) rather hard to attain: even if all agents have access to
the same evidence, they may still disagree.
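The basic mechanism can be illustrated with a deliberately simplified calculation (NormAN itself performs full BN inference via bnlearn; the sketch below assumes conditionally independent evidence so that likelihood ratios simply multiply):

```python
# Two agents with the same prior and likelihoods but different evidence
# sets end up on opposite sides of the prior; pooling the evidence
# restores agreement.
def posterior(prior, likelihood_ratios):
    """Posterior via odds: prior odds times the product of likelihood
    ratios (valid for conditionally independent evidence)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.5
print(posterior(prior, [4.0]))        # agent A saw E1 (LR 4):       0.8
print(posterior(prior, [0.25]))       # agent B saw not-E2 (LR 1/4): 0.2
print(posterior(prior, [4.0, 0.25]))  # pooled evidence:             0.5
```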
Figure 7. The beliefs of a population over time, for three sharing rules:
random (top left), recency (top right) and impact (below). Each function
tracks the state of one agent’s agent-belief at each time step. Initial
conditions: causal-structure=Vole (see the Appendix, section B for an explanation), chattiness=0.5, conviction-threshold=0, curiosity=0, initial-draws=1, max-draws=1, social-network=complete, number-of-agents=50. The prior initial-belief $\approx 0.3$.
When can we expect agents to end up with identical sets of evidence, and
therefore in consensus? In this brief case study, we use the NormAN framework
to exemplify how a common style of communication can undermine consensus:
impact-driven sharing (as explained in section 3.2). Instead of sharing all
their evidence, agents only share the piece of evidence that most convinces
them of their current position, that is, the argument or piece of evidence
that they find most impactful. Crucially, this type of incomplete, selective
sharing need not be an irrational or bad-faith communication strategy. Using
the impact rule simply means curating what one shares, leading to
communication of what one considers one’s best, most truth-conducive piece of
evidence. Given that both communicating and processing arguments is costly,
such a strategy may be pragmatically appropriate in many contexts. It is
intuitive that a communication style between agents who do not (or cannot)
share their entire evidence may lead to non-convergence: even if no piece of
evidence is completely unknown to the group, the impact rule, or any
incomplete sharing rule (i.e., rules where agents communicate only a subset of
their evidence), will make an end state of fully shared information less
likely. Instead, each agent’s access to evidence is filtered by their social
environment: If their neighbours believe that the hypothesis is true, they are
more likely to receive confirmatory pieces of evidence through communication
(and vice versa).
Figure 7 shows three exemplifying model runs of a population of 50 agents
connected in a complete network (the world model uses the ‘Vole’ network,
explained in the Appendix, Section B). The graphs track the agents’ agent-
beliefs in the hypothesis during a very simple deliberation process:
initially, each agent has one piece of evidence (i.e., their agent-evidence-
list contains the truth value of one randomly chosen evidence node). Subfigure
1 illustrates the evolution of beliefs resulting from deliberation following
the ‘random’ sharing rule. Convergence is virtually guaranteed as agents will
share all their evidence eventually (the random rule is a ‘complete’ sharing
rule). The recency rule (Subfigure 2) creates a similar dynamic: although
convergence takes longer to achieve, agents do, eventually, form a consensus.
Subfigure 3 illustrates non-convergence as a consequence of the ‘incomplete’,
selective impact sharing. This non-convergence is easily explained: meaningful
communication between two agents is interrupted when the sender has
communicated what they consider to be their best evidence (the sender may
repeat themselves, but the receiving agent, having already incorporated the
information, will now ignore this communication with respect to their belief
in the hypothesis). It is only resumed when the sender learns even stronger
information in favour of their position or when they change their mind (i.e.,
cross the conviction threshold). However, once agents find themselves in a
group of like-minded neighbours, they are unlikely to receive further evidence
that changes their minds. Consequently, communication ceases and divergent
beliefs stabilise.
Figure 8. Different deliberations tracked by (left) the number of
transmissions of each argument per round and (right) the degrees of belief of
the agents. As for the tracking of arguments (left), if the argument $S$ was
shared by 20 agents at time step $10$, the red line marks $\#uttered=20$ on
the y-axis. On the right-hand side, we see a histogram of the agents’ agent-
beliefs at time step 50 (the beliefs have stabilized at that point). The top
figure shows a run driven by the random rule. The middle figure shows a run
controlled by the recency rule. The bottom figure shows a run controlled by
the impact rule. Initial conditions: causal-structure=Vole (see the Appendix, section B for an explanation), chattiness=0.5, conviction-threshold=0, curiosity=0, initial-draws=1, max-draws=1, social-network=complete, number-of-agents=50. The prior initial-belief $\approx 0.3$.
This non-convergence result contrasts with models of homophily and echo
chambers where agents themselves (possibly unknowingly) select sub-optimal
information environments. The present simulations with NormAN reveal that
divergence may arise also as a function of good faith communication strategies
whereby agents seek simply to convey what they consider to be their ‘best
evidence’, without homophily or any other attempt to filter their (input)
sources, and also without any form of negative influence. These sample runs
also demonstrate a further important point. Paradigmatic models of
polarization typically showcase only one type of non-convergence: wholly
driving apart two subgroups, possibly to the extremes (e.g., 0, that is,
certainty that the central hypothesis is false and 1, certainty of truth)
(Olsson, 2013; Pallavicini et al., 2021; Bramson et al., 2017). Realistic
models of deliberation that connect meaningfully to real-world data, however,
plausibly ought to reconstruct non-convergence as a scattering of intermediate
beliefs across the unit interval as well.
As the model runs of NormAN show, scattering of beliefs can arise as the
result of incomplete sharing in a social network. In the simulations (as,
arguably, in the real world), exactly how much the group’s beliefs diverge,
will depend partly on the nature of the available evidence: situations where
there are strong pieces of evidence both in favour and against the hypothesis
will prove more divisive than more homogenous distributions. Furthermore,
using scarcer, more realistic network structures (as opposed to the complete
networks in the above sample run) will also exacerbate divergence effects:
using a small-world network slows down belief convergence induced by the
complete sharing rules (recency and random), and exacerbates the scattering
effect of the impact rule (cf. Fig. 19 in the Appendix.)
The case study simulations show how different communication rules will give
rise to different patterns. Needless to say, in the real world these factors
may additionally interact with other factors that have been shown to give rise
to polarization such as homophily or differences in trust. A richer underlying
model of communication that involves argument exchange thus opens up new
avenues for understanding the complex interplay of factors likely at work in
real-world settings.
A corollary of the fact that communication rules are so central is that it
highlights the need to understand in more depth _what has been communicated_.
Different belief distributions come about because agents end up being exposed
to different subsets of the evidence. To analyze the dynamics of argument
exchange, NormAN allows the modeller to track the sharing frequency of
particular pieces of evidence. Fig. 8 shows three model runs (with the same
initial conditions as in Fig. 7), with the left panels tracking the number of
transmissions of each piece of evidence per time step (on the right we see the
final distributions of the agent’s beliefs). Both the first (top) Subfigure,
showing a random run, and the second (middle), showing a run driven by the
recency rule, end in convergence. Note, however, the differences in the
frequency of argument sharing: While the frequencies of arguments sent remain
roughly the same when sharing is random (each argument is similarly popular),
the frequency of arguments governed by the recency rule rises and falls in
waves. Finally, in the Subfigure below, which shows a run governed by the
impact rule, we can see that two groups of disagreeing agents (one with agent-
belief $<$ initial-belief, the other with agent-belief $>$ initial-belief) each select
what they consider to be their ‘best’ evidence and stabilize on sharing those
arguments repeatedly. Therefore, less impactful evidence is not communicated
and, as can be seen on the right-hand side, the agents’ beliefs do not
converge.
Crucially, this illustrates also that NormAN can be used to study not only
belief dynamics but also _argument dynamics_. However, studying the dynamics
of what arguments are exchanged is not just a matter of supplementary
analysis. It deserves to be seen as a research topic in its own right. As
noted in 2.2.1 above, computational social science has seen large amounts of
research devoted to aspects of how particular messages spread across real-
world social networks. That analysis has, however, remained coarse-grained,
arguably, in good part because of a lack of models to help formulate theory-
driven questions and hypotheses. We would hope that NormAN simulations could
provide new impetus here.
### 4.3. Implications of the Case Studies and Future Directions
Both case studies illustrate how the NormAN framework may contribute to extant
research. While these case studies are intended to be of interest in their own
right, we think also that they illustrate the breadth of disciplines and
research questions that the NormAN framework could contribute to. Case Study 1
on the shift to extremity helps further illuminate an empirical phenomenon
that has exercised, in particular, psychologists and political scientists.
Case Study 2 on polarization contributes to a research topic that has
attracted interest across a broad range of disciplines, ranging from
psychology (e.g., (Fasce et al., 2023; Brown et al., 2022)), communication
studies (Moore-Berg et al., 2020; Kubin and von Sikorski, 2021; Lee et al.,
2014), sociology (e.g., (Mäs and Flache, 2013)), epistemology (e.g., (Olsson,
2013)), political science (e.g., (Baldassarri and Bearman, 2007; Fiorina and
Abrams, 2008)), economics (e.g., (Fang et al., 2021)), as well as complex
systems researchers examining models inspired by perspectives as diverse as
epidemiology (e.g., (Vasconcelos et al., 2019)), and physics (e.g., (Galam,
2005)).
Furthermore, our case studies underscore key points that have shaped the
design of the framework. First, they serve to highlight why one cannot simply
study belief dynamics with representations that involve only a single
numerical opinion. As models of polarization have shown, this does not
generalise well. In particular, the difficulty of obtaining anything other
than convergence in the majority of extant models of opinion dynamics
illustrates that point. Enriching models with arguments or reasons in support
of beliefs is thus essential for a more realistic understanding.
Second, doing so highlights how the distribution of arguments that are, in
principle, available to agents—and from which the arguments they personally
have available are sampled—strongly influences belief dynamics. This opens the
door for a deeper understanding of extant theories, alternative mechanisms for
known phenomena, and novel predictions that can be brought to real-world data.
Third, the fact that ‘the world’ matters to observable belief dynamics
furthermore makes it important that the distributional assumptions about
underlying arguments or evidence are sufficiently grounded. The Bayesian
framework helps with this problem because BN models of domains can be learned
from real-world data (Heckerman, 2008).
Fourth, the rules by which agents evaluate arguments and form beliefs also
clearly matter. Hence it is important to supply agents with principled
argument evaluation and aggregation rules. This is not to claim that humans
are perfect Bayesians. They may, however, approximate Bayesian norms more or less faithfully in some contexts (Peterson and Beach, 1967; Chater et al.,
2006, 2010). The analytic value of considering ‘rational’ or optimal
aggregation, though, does not ultimately rest on the precise extent of that
approximation. Rather, consideration of optimal rules aids the identification
of deep structural challenges that cognitive agents face, and the attribution
of ‘bias’ or irrationality is meaningful only against the foil of the
performance of an ideal agent (Hahn and Harris, 2014).
Fifth, even the simple simulations of our case studies highlight the
fundamental impact of agents’ communication rules, that is, what evidence they
choose to communicate and why. This makes clear how much of the final model
behaviour depends on a key component that is itself without a (clear)
normative basis. This reflects a more general gap in normatively oriented
research: normative accounts of _where evidence actually comes from_ are,
arguably, under-developed in this and other contexts (Meder et al., 2022).
In fact, early work in social epistemology emphasised how simulations might
aid the discovery of appropriate norms for communication (Olsson, 2011).
Beyond the question of the frequency or intensity of communication within
epistemic collectives (e.g., (Angere and Olsson, 2017; Zollman, 2010; Borg et
al., 2019; Hahn et al., 2019; Hahn, 2022)), very little progress has been made
with respect to this question. Arguably, this is because extant models have
been too restrictive to support much in the way of variation in communication
rules. Even the simple BNs explored in this paper, however, are rich enough to
allow one to formulate key elements of the factors that plausibly influence
communication in real life, such as speaker confidence (influencing whether or
not to speak at all), own belief and perceptions of argument strength which
feed into the pragmatic rules or maxims governing cooperative exchange (Grice,
1969; Levinson, 1983), as well as deviations in non-cooperative exchange.
Sixth and last, communication rules shape the rise and fall of arguments.
Incorporating arguments into the model opens a new world of exploring argument
dynamics alongside belief dynamics. Exploring how argument dynamics are shaped
by communication rules should open up new avenues for linking argumentation
research to work in computational social science. For one, examining real-
world patterns of argument spread across online social media and comparing
this with models could inform the identification of underlying (real-world)
communication rules.
We conclude with some indication of future directions.
#### 4.3.1. Future Directions
Crucially, NormAN is conceptualised not as a specific model, but as a
framework in which to develop concrete models by adjusting key components:
world, agent belief formation, communication rules, and network
characteristics.
As argued extensively above, the fact that our framework has normative
grounding is (to us) an essential requirement. That said, however, it is
entirely possible to strip away any such interpretation and simply treat the
Bayesian belief revision implemented for our agents as purely descriptive,
that is, as ‘just another update rule’. From that perspective, Bayesian belief
revision is simply another weighted averaging rule (Jones and Love, 2011).
This makes it an interesting research question how it compares, on both the
individual and collective level, to other popular rules in models of opinion
dynamics.
Second, our initial case studies highlight just how much model behaviour (and,
by the same token, real-world belief- and opinion dynamics) are shaped by
agents’ communication rules. This makes studying the impact of communication
rules a central topic for future research. Accordingly, future versions of
NormAN should implement richer notions of communication: for example, rules
that include a model of the listener (e.g., that listener’s knowledge states
(Levinson, 1983)). This would also enable agents to include strategic
considerations in their communications (Roth et al., 2007; Matt and Toni,
2008; Rahwan and Larson, 2009).
Third, as mentioned above, future NormAN models should allow agents to possess
subjective models of the world (BNs) that differ from the ground truth world,
and that differ across agents.
Fourth, richer agent models should incorporate notions of trust in order to
connect with the rich modelling literature on testimony (e.g., (Olsson, 2011;
Bovens and Hartmann, 2003; Shafto et al., 2012)) and with bounded confidence
models of opinion dynamics (e.g., (Hegselmann and Krause, 2002)).
With respect to ‘the world’, future research should involve systematic
exploration of the impact of different worlds, including richer models
involving many more variables.
Last, but not at all least, there should be a much deeper exploration of the
impact of network topology than the current version allows. In particular, it
will be important to study other types of networks (e.g., preferential attachment networks (Barabási and Albert, 1999)) and their impact on argument dynamics. This should also include dynamic networks, in
which agents can change who they communicate with (Sekara et al., 2016); this
not only affords greater realism, but it will specifically allow the study of
the epistemological and argumentative impacts of homophily (McPherson et al.,
2001). Finally, this should include hierarchical networks (Ravasz and
Barabási, 2003).
## 5\. Conclusions
In this paper, we argued that there is currently a significant gap in the
research literature. On the one hand, traditional research on argumentation
does not connect well with the many real-world contexts that involve more than
two agents or competing perspectives. On the other, the wealth of research
trying to understand belief and opinion dynamics across social networks is
limited by the fact that it has not considered, or been able to consider
properly, individual arguments as the actual information exchanged that drives
those dynamics. In order to bridge that gap, agent-based models involving
argument exchange are required.
We argued further that a normative, Bayesian perspective provides a privileged
way to build such a model. We have sought to outline why normative models,
more generally, are relevant not just for research concerned with how we ought
to behave, but also for descriptively oriented research concerns. More
specifically, we have detailed how, within the argumentation literature, the
Bayesian framework allowed one to capture the content of arguments with
sufficient detail to advance long-standing research questions. We have
detailed also how the Bayesian framework allows one to capture belief dynamics
and evidence/argument aggregation. We have shown a novel application of the
Bayesian framework: namely, how Bayesian Belief Networks can be employed to
create a ground truth world and evidence distribution for agent-based
simulations.
These aspects are embodied in NormAN, a new framework for the study of
argument exchange across social networks. We have sought to illustrate with
case studies different ways in which NormAN models might benefit extant
research. It is hoped that NormAN will help bridge the current ‘gap’ and
support new research across the breadth of research on argumentation, opinion
dynamics, and communication discussed in this paper.
###### Acknowledgements.
The research reported in this paper was supported by the UK’s Arts and
Humanities Research Council grant AH/V003380/1, and the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) project number
455912038. L.A. was supported by a Konrad-Adenauer Stiftung scholarship.
Special thanks go to Michael Maes and Davide Grossi for many helpful discussions
and Borut Trpin for feedback on an initial draft of this manuscript.
### 5.1. CRediT statement
Conceptualization: L.A., R.F., U.H., A.J., and L.S. Data curation: K.P. Formal
analysis: K.P. Funding acquisition: U.H. Investigation: K.P. Methodology:
L.A., R.F., U.H., A.J., K.P., and L.S. Project administration: U.H. Resources:
K.P. Software: L.A., A.J., and L.S. Validation: L.A., U.H., K.P., and L.S.
Visualization: R.F., U.H., and K.P. Writing - original draft: L.A. and U.H.
Writing - review & editing: L.A., R.F., U.H., A.J., K.P., and L.S.
## References
* Abelson (1964) Robert P Abelson. 1964. Mathematical models of the distribution of attitudes under controversy. _Contributions to mathematical psychology_ (1964).
* Abrams et al. (1990) Dominic Abrams, Margaret Wetherell, Sandra Cochrane, Michael A Hogg, and John C Turner. 1990. Knowing what to think by knowing who you are: Self-categorization and the nature of norm formation, conformity and group polarization. _British journal of social psychology_ 29, 2 (1990), 97–119.
* Anderson (1972) Philip W Anderson. 1972. More Is Different: Broken symmetry and the nature of the hierarchical structure of science. _Science_ 177, 4047 (1972), 393–396.
* Angere and Olsson (2017) Staffan Angere and Erik J Olsson. 2017. Publish late, publish rarely!: Network density and group performance in scientific communication. In _Scientific collaboration and collective knowledge_. Oxford University Press, 34–62.
* Bala and Goyal (1998) Venkatesh Bala and Sanjeev Goyal. 1998. Learning from neighbours. _The review of economic studies_ 65, 3 (1998), 595–621.
* Bala and Goyal (2000) Venkatesh Bala and Sanjeev Goyal. 2000. A strategic analysis of network reliability. _Review of Economic Design_ 5 (2000), 205–228.
* Baldassarri and Bearman (2007) Delia Baldassarri and Peter Bearman. 2007. Dynamics of political polarization. _American sociological review_ 72, 5 (2007), 784–811.
* Barabási and Albert (1999) Albert-László Barabási and Réka Albert. 1999. Emergence of scaling in random networks. _science_ 286, 5439 (1999), 509–512.
* Barash (2011) Vladimir Barash. 2011. _The Dynamics of Social Contagion_. Ph.D. Dissertation. Citeseer.
* Beinlich et al. (1989a) Ingo A Beinlich, Henri Jacques Suermondt, Martin R Chavez, and Gregory F Cooper. 1989a. The ALARM monitoring system: A case study with two probabilistic inference techniques for belief networks.. In _AIME 89: Second European Conference on Artificial Intelligence in Medicine_. 247–256.
* Beinlich et al. (1989b) Ingo A Beinlich, Henri Jacques Suermondt, R Martin Chavez, and Gregory F Cooper. 1989b. The ALARM monitoring system: A case study with two probabilistic inference techniques for belief networks. In _AIME 89: Second European Conference on Artificial Intelligence in Medicine, London, August 29th–31st 1989. Proceedings_. Springer, 247–256.
* Bench-Capon (2002) Trevor Bench-Capon. 2002. Value based argumentation frameworks. _arXiv preprint cs/0207059_ (2002).
* Benetos (2023) Kalliopi Benetos. 2023. Digital Tools for Written Argumentation. In _Digital Writing Technologies in Higher Education: Theory, Research, and Practice_. Springer, 81–99.
* Berger and Milkman (2012) Jonah Berger and Katherine L Milkman. 2012. What makes online content viral? _Journal of marketing research_ 49, 2 (2012), 192–205.
* Bhatia and Oaksford (2015) Jaydeep-Singh Bhatia and Mike Oaksford. 2015. Discounting testimony with the argument ad hominem and a Bayesian congruent prior model. _Journal of Experimental Psychology: Learning, Memory, and Cognition_ 41, 5 (2015), 1548.
* Bonevac (2003) Daniel Bonevac. 2003. Pragma-dialectics and beyond. _Argumentation_ 17 (2003), 451–459.
* Borg et al. (2019) AnneMarie Borg, Daniel Frey, Dunja Šešelja, and Christian Straßer. 2019. Theory-choice, transient diversity and the efficiency of scientific inquiry. _European Journal for Philosophy of Science_ 9, 2 (2019), 26.
* Bovens and Hartmann (2003) Luc Bovens and Stephan Hartmann. 2003. _Bayesian Epistemology_. Oxford University Press.
* Brady et al. (2017) William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel. 2017. Emotion shapes the diffusion of moralized content in social networks. _Proceedings of the National Academy of Sciences_ 114, 28 (2017), 7313–7318.
* Bramson et al. (2017) Aaron Bramson, Patrick Grim, Daniel J Singer, William J Berger, Graham Sack, Steven Fisher, Carissa Flocken, and Bennett Holman. 2017. Understanding polarization: Meanings, measures, and model evaluation. _Philosophy of science_ 84, 1 (2017), 115–159.
* Brown et al. (2022) Gordon DA Brown, Stephan Lewandowsky, and Zhihong Huang. 2022. Social sampling and expressed attitudes: Authenticity preference and social extremeness aversion lead to social norm effects and polarization. _Psychological review_ 129, 1 (2022), 18.
* Budak et al. (2011) Ceren Budak, Divyakant Agrawal, and Amr El Abbadi. 2011. Limiting the spread of misinformation in social networks. In _Proceedings of the 20th international conference on World wide web_. 665–674.
* Burbach et al. (2020) Laura Burbach, Poornima Belavadi, Patrick Halbach, Lilian Kojan, Nils Plettenberg, Johannes Nakayama, Martina Ziefle, and André Calero Valdez. 2020. Netlogo vs. Julia: Evaluating Different Options for the Simulation of Opinion Dynamics. In _International Conference on Human-Computer Interaction_. Springer, 3–19.
* Burnstein and Vinokur (1977) Eugene Burnstein and Amiram Vinokur. 1977. Persuasive argumentation and social comparison as determinants of attitude polarization. _Journal of experimental social psychology_ 13, 4 (1977), 315–332.
* Calegari et al. (2021) Roberta Calegari, Giovanni Ciatto, Viviana Mascardi, and Andrea Omicini. 2021. Logic-based technologies for multi-agent systems: a systematic literature review. _Autonomous Agents and Multi-Agent Systems_ 35, 1 (2021), 1.
* Carrera and Iglesias (2015) Álvaro Carrera and Carlos A Iglesias. 2015. A systematic review of argumentation techniques for multi-agent systems research. _Artificial Intelligence Review_ 44 (2015), 509–535.
* Castellano et al. (2009) Claudio Castellano, Santo Fortunato, and Vittorio Loreto. 2009. Statistical physics of social dynamics. _Reviews of Modern Physics_ 81, 2 (2009), 591–646. https://doi.org/10.1103/revmodphys.81.591 arXiv:0710.3256
* Centola (2018) Damon Centola. 2018. _How behavior spreads: The science of complex contagions_. Vol. 3. Princeton University Press Princeton, NJ.
* Cha et al. (2010) Meeyoung Cha, Hamed Haddadi, Fabricio Benevenuto, and Krishna Gummadi. 2010. Measuring user influence in twitter: The million follower fallacy. In _Proceedings of the international AAAI conference on web and social media_ , Vol. 4. 10–17.
* Chater et al. (2010) Nick Chater, Mike Oaksford, Ulrike Hahn, and Evan Heit. 2010. Bayesian models of cognition. _Wiley Interdisciplinary Reviews: Cognitive Science_ 1, 6 (2010), 811–823.
* Chater et al. (2006) Nick Chater, Joshua B Tenenbaum, and Alan Yuille. 2006. Probabilistic models of cognition: Conceptual foundations. _Trends in cognitive sciences_ 10, 7 (2006), 287–291.
* Chesnevar et al. (2000) Carlos Iván Chesnevar, Ana Gabriela Maguitman, and Ronald Prescott Loui. 2000. Logical models of argument. _ACM Computing Surveys (CSUR)_ 32, 4 (2000), 337–383.
* Cioffi-Revilla (2014) Claudio Cioffi-Revilla. 2014. _Introduction to computational social science_. Springer.
* Coady (1992) Cecil Anthony John Coady. 1992. _Testimony: A Philosophical Study_. Oxford University Press.
* Collins et al. (2018) Peter J Collins, Ulrike Hahn, Ylva von Gerber, and Erik J Olsson. 2018. The bi-directional relationship between source characteristics and message content. _Frontiers in Psychology_ 9 (2018), 18.
* Cook and Lewandowsky (2016) John Cook and Stephan Lewandowsky. 2016. Rational irrationality: Modeling climate change belief polarization using Bayesian networks. _Topics in cognitive science_ 8, 1 (2016), 160–179.
* Corner and Hahn (2009) Adam Corner and Ulrike Hahn. 2009. Evaluating science arguments: evidence, uncertainty, and argument strength. _Journal of Experimental Psychology: Applied_ 15, 3 (2009), 199.
* Corner and Hahn (2013) Adam Corner and Ulrike Hahn. 2013. Normative theories of argumentation: Are some norms better than others? _Synthese_ 190 (2013), 3579–3610.
# Trusted detector noise analysis for discrete modulation schemes of
continuous-variable quantum key distribution
Jie Lin and Norbert Lütkenhaus
Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
###### Abstract
Discrete-modulated continuous-variable quantum key distribution protocols are promising candidates for large-scale deployment due to their large technological overlap with deployed modern optical communication devices. The security of discrete modulation schemes has previously been analyzed in the ideal detector scenario in the asymptotic limit. In this work, we calculate asymptotic key rates against collective attacks in the trusted detector noise scenario. Our results show that we can thereby cut out most of the effect of detector noise and obtain asymptotic key rates similar to those we would obtain with ideal detectors.
## I Introduction
Quantum key distribution (QKD) [Bennett1984, Ekert1991] is a key establishment protocol with provable information-theoretic security. Various QKD protocols with different advantages have been proposed, analyzed and implemented; see e.g. Refs. [Scarani2009, Diamanti2015, Xu2020, Pirandola2019] for reviews. Continuous-variable (CV) QKD protocols [Grosshans2002, Silberhorn2002, Grosshans2003a, Weedbrook2004] have competitive advantages in terms of massive deployment due to a significant overlap between the devices they use and those of classical optical communication. Many CVQKD experiments, with both Gaussian modulation schemes such as Refs. [Lodewyck2007, Jouguet2013, Qi2015, Soh2015, Huang2015, Huang2016, Zhang2020l] and discrete modulation schemes like Refs. [Wittmann2010, Wang2013, Heim2014, Hirano2017, Laudenbach2019a], have been demonstrated.
On one hand, Gaussian modulation schemes are simpler to analyze theoretically
than discrete modulation schemes, and they give secret key rates close to the
theoretical limits [Takeoka2014, Pirandola2017]. On the other hand, continuous
modulation itself is usually only approximated by a (relatively large) set of
discrete modulation settings. This approximation needs to be taken into
account during the full security analysis (see e.g. [Furrer2012, Jouguet2012,
Kaur2019]). Moreover, since Gaussian modulation schemes often require more randomness and classical postprocessing resources, discrete modulation schemes offer a further simplification of the implementation. However, in previous experimental demonstrations of discrete
modulation schemes, either only effective entanglement has been verified
[Wittmann2010, Heim2014], which is a necessary precondition for QKD, or the
security has been established only against a restricted subset of collective
attacks [Wang2013, Hirano2017]. By now, there are asymptotic security proofs
against arbitrary collective attacks for binary [Zhao2009], ternary
[Bradler2018] as well as quaternary modulation schemes and beyond [Ghorai2019,
Lin2019]. Previous proofs for a general discrete modulation scheme
[Ghorai2019, Lin2019] investigate the untrusted detector noise scenario, where detector imperfections can be controlled by Eve (and thus the analysis can treat the detectors as ideal, absorbing their imperfections into the channel). In reality, the amount of electronic noise of an off-the-shelf homodyne detector in a CVQKD experiment can be much higher than the channel excess noise. As a result, the key rate in the untrusted detector noise scenario drops very quickly to zero as the transmission distance increases. However, since detectors are securely located in Bob's laboratory, to which Eve has no access, it is reasonable to assume that Eve does not control detector imperfections, especially those noise sources in the electronic circuitry, which is more remote from the quantum mechanical part of the signal detection.
In this work, we extend our previous analysis [Lin2019] to the trusted
detector noise scenario where detector imperfections (detector inefficiency
and electronic noise) are not accessible to Eve. We remark that Gaussian
modulation schemes have been analyzed in the trusted detector noise scenario
[Lodewyck2007, Fossier2009, Usenko2016, Laudenbach2019b] and it is known that
the effects of electronic noise and detector inefficiency on the key rates are
not very significant in the trusted detector noise scenario compared to the
ideal detector scenario under realistic experimental conditions. As we show in
this work, this observation also holds for discrete modulation schemes.
However, we emphasize that our analysis is not a trivial application of the
method used for Gaussian modulation protocols and instead we adopt a different
approach. The reason is that the previous method used in the Gaussian
modulation protocols relies on the fact [Navascues2006, Garcia-Patron2006]
that Eve’s optimal attacks for Gaussian modulation schemes correspond to
Gaussian channels, which makes it easy to decouple the trusted detector noise
from the channel noise when one looks at the covariance matrix. However, we
cannot assume Gaussian channels here since Gaussian attacks are not expected
to be optimal for discrete modulation schemes. In our analysis, based on a
(commonly used) quantum optical model of the imperfect detector, we find its
corresponding mathematical description in terms of positive operator-valued
measure (POVM) and then use this POVM to construct observables corresponding
to quantities that are measured experimentally. These observables are then
used in our security proof. We also point out the crucial difference between
our analysis and Ref. [Namiki2018] for discrete modulation schemes: Our
asymptotic analysis is valid against arbitrary collective attacks while Ref.
[Namiki2018] uses the Gaussian channel assumption and thus its security analysis is restricted to Gaussian collective attacks.
The main contributions of this work are a suitable POVM description of a noisy heterodyne detector and a revision of our previous analysis [Lin2019] that uses a new set of constraints derived from this POVM in the numerical key rate optimization problem [Coles2016, Winick2018]. Similar to our previous
analysis, this method is applicable to both direct reconciliation and reverse
reconciliation schemes. Moreover, we study the postselection of data
[Silberhorn2002] in the trusted detector noise scenario. As a concrete
example, we apply our method to the quadrature phase-shift keying scheme with
heterodyne detection and focus on the reverse reconciliation scheme. Our
analysis here is still restricted to the asymptotic regime against collective
attacks and we make the same photon-number cutoff assumption as in the
previous works [Ghorai2019, Lin2019] to truncate the infinite-dimensional
Hilbert space in order to perform the numerical calculation. Based on numerical observations, we believe the results do not depend on the choice of cutoff when it is chosen appropriately. We direct the discussion of this assumption to Sec. III B of Ref. [Lin2019] and leave an analytical justification beyond the numerical evidence for future work. To extend our analysis to the finite-key regime, we remark that we
have recently extended the numerical method of Ref. [Winick2018] on which our
analysis is based to include finite key analysis [George2020]. However, there
remain some technical challenges to solve before we can apply this method to
this protocol and thus we leave the finite key analysis for future work.
The rest of the paper is outlined as follows. In Sec. II, we review the protocol
and proof method in Ref. [Lin2019]. We then present a trusted detector noise
model and the corresponding POVM description in Sec. III. In Sec. IV, we
modify the key rate optimization problem to take trusted detector noise into
account. We discuss our simulation method in Sec. V. We show the simulation
results without postselection in Sec. VI and with postselection in Sec. VII.
Finally, we summarize the results and provide insights for future directions
in Sec. VIII. We present technical details in the Appendices.
## II Background
Our key rate calculation in the trusted detector noise scenario uses a similar
proof method to the one in our previous work [Lin2019]; that is, we numerically solve the key rate optimization problem [Winick2018] with a modified set of constraints. In particular, we discuss how to modify the key rate optimization problem in Sec. IV based on the POVM description of a noisy heterodyne detector in Sec. III. To help understand this modification, we first review the main ideas of the proof in Ref. [Lin2019].
For illustration, we focus on the quadrature phase-shift keying scheme with
heterodyne detection. We remark that since the previous proof can be
generalized to other discrete modulation schemes beyond four coherent states
at the cost of more computational resources, our modified analysis for the
trusted detector noise scenario can also be generalized in the same way.
Moreover, one can apply a similar idea presented in this paper to study the
homodyne detection scheme in the presence of trusted detector noise.
### II.1 Quadrature phase-shift keying protocol
To begin with, we review the quadrature phase-shift keying (QPSK) scheme with
heterodyne detection. The quantum part of the protocol consists of many
repetitions of the following two steps: (1) Alice obtains a uniform random
number $x\in\\{0,1,2,3\\}$, selects the state $\ket{\alpha_{x}}=\ket{\alpha
e^{\frac{ix\pi}{2}}}$ from the set
$\\{\ket{\alpha},\ket{i\alpha},\ket{-\alpha},\ket{-i\alpha}\\}$ according to
the value of $x$, and sends it to Bob. (2) Bob applies the heterodyne
detection to the received state and obtains a measurement outcome
$y\in\mathbb{C}$.
After the quantum communication phase of the protocol, they proceed with the
classical postprocessing part of the protocol including announcement, sifting,
parameter estimation, key map (with discretization), error correction and
privacy amplification. In particular, the parameter estimation step is done
according to the key rate optimization problem in Eq. (22) discussed later. As
the classical part is similar to other CVQKD protocols and is not the focus of
our discussion, we highlight only the key map step below for our discussion
and skip the details of the remaining classical postprocessing procedures
here. We direct readers to Ref. [Lin2019] for a more detailed description.
In the case of reverse reconciliation, for each measurement outcome $y$
written as $y=\absolutevalue{y}e^{i\theta}$, where
$\theta\in[-\frac{\pi}{4},\frac{7\pi}{4})$, Bob obtains a discretized value
$z$ according to the following rule:
$z=\begin{cases}j,&\text{if }\theta\in\Big{[}\frac{(2j-1)\pi}{4},\frac{(2j+1)\pi}{4}\Big{)}\ \text{and}\ \absolutevalue{y}\geq\Delta_{a},\\ \perp,&\text{otherwise},\end{cases}$ (1)
where $j\in\\{0,1,2,3\\}$ and $\Delta_{a}$ is a postselection parameter that
needs to be optimized for the selected protocol and experimental parameters. (In our previous work [Lin2019], we also considered a postselection parameter $\Delta_{p}$ related to the phase of the measurement outcome. However, since simulations with this parameter did not show any noticeable advantage, we omit it in this work.) A protocol without postselection corresponds to setting $\Delta_{a}=0$.
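For concreteness, the key map of Eq. (1) can be written as a short function; the sketch below is ours (the mapping of $\perp$ to `None` is an implementation choice, not part of the protocol):

```python
import numpy as np

def key_map(y: complex, delta_a: float = 0.0):
    """Discretize a heterodyne outcome y according to Eq. (1).

    Returns j in {0, 1, 2, 3} if |y| >= delta_a and the phase of y lies in
    [(2j-1)*pi/4, (2j+1)*pi/4); returns None (standing for "perp") otherwise.
    """
    if abs(y) < delta_a:
        return None  # discarded during postselection
    theta = np.angle(y)  # in (-pi, pi]
    return int(np.floor((theta + np.pi / 4) / (np.pi / 2))) % 4

# Outcomes near the positive q axis map to z = 0; small outcomes are discarded.
assert key_map(0.8 + 0.1j) == 0
assert key_map(0.05 + 0.05j, delta_a=0.2) is None
```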
To perform the postselection of data in combination with reverse reconciliation,
Bob announces positions where he obtains the value $\perp$. After removing the
positions related to the value $\perp$, Alice’s string $\vec{\mathbf{X}}$
consists of her random number $x$’s in the remaining positions, and Bob’s raw
key string $\vec{\mathbf{Z}}$ consists of his discretized outcome $z$’s left.
(Alternatively, they may choose to announce and keep positions related to the
value $\perp$ and let the privacy amplification subprotocol effectively remove
those positions.) Alice and Bob may decide to recast their strings to binary
strings before or during the error correction step depending on their choice
of the error-correction code. For the consistency of our presentation, we use
the alphabet $\\{0,1,2,3\\}$ and let $\mathbf{X}$ and $\mathbf{Z}$ denote the
single-round version of $\vec{\mathbf{X}}$ and $\vec{\mathbf{Z}}$,
respectively.
Figure 1: Key map for quadrature phase-shift keying scheme in terms of Bob’s
measurement outcome $y\in\mathbb{C}$. Each colored region $\mathcal{A}_{j}$
corresponds to a discretized key value $j$. The measurement outcome in the
central disk with a radius $\Delta_{a}$ is discarded during the postselection
of data and is mapped to the symbol $\perp$.
### II.2 Review of security proof method
#### II.2.1 Source-replacement scheme
The first step of our security proof is to apply the source-replacement scheme
[Bennett1992, Grosshans2003b, Curty2004, Ferenczi2012] to obtain an equivalent
entanglement-based scheme for the given prepare-and-measure protocol. Then we
proceed to prove the security of the entanglement-based scheme.
Given Alice’s state ensemble $\\{\ket{\alpha_{x}},p_{x}\\}$ (where
$p_{x}=\frac{1}{4}$ for this protocol) for her preparation in the prepare-and-
measure scheme, Alice effectively prepares a bipartite state
$\ket{\Psi}_{AA^{\prime}}$ in the source-replacement scheme, which is defined
as
$\ket{\Psi}_{AA^{\prime}}=\sum_{x=0}^{3}\sqrt{p_{x}}\ket{x}_{A}\ket{\alpha_{x}}_{A^{\prime}},$
(2)
where $\\{\ket{x}\\}$ is an orthonormal basis for register $A$. Then Alice
sends the register $A^{\prime}$ to Bob via an insecure quantum channel and
keeps register $A$ for her measurement described by the POVM
$M^{A}=\\{M^{A}_{x}=\outerproduct{x}{x}:x\in\\{0,1,2,3\\}\\}$. The quantum
channel that maps register $A^{\prime}$ to Bob’s register $B$ is described by
a completely positive (CP) trace-preserving (TP) map,
$\mathcal{E}_{A^{\prime}\rightarrow B}$ and is assumed to be under Eve’s
control. Thus, Alice and Bob’s joint state $\rho_{AB}$ before their
measurements is
$\displaystyle\rho_{AB}=(\operatorname{id}_{A}\otimes\mathcal{E}_{A^{\prime}\rightarrow
B})(\outerproduct{\Psi}{\Psi}_{AA^{\prime}}),$ (3)
where $\operatorname{id}_{A}$ is the identity channel on Alice’s system $A$.
When Alice performs a local measurement using her POVM $\\{M^{A}_{x}\\}$ on
register $A$ and obtains an outcome $x$, she effectively sends the coherent
state $\ket{\alpha_{x}}$ to Bob. Bob’s received state $\rho_{B}^{x}$
conditioned on Alice’s choice of $x$ is
$\rho_{B}^{x}=\frac{1}{p_{x}}\Tr_{A}[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\mathds{1}_{B})].$
(4)
Bob applies his POVM $M^{B}=\\{M^{B}_{y}\\}$ to register $B$ to obtain his
measurement outcomes. In the case of untrusted detector noise (or ideal
heterodyne detector), the POVM of the heterodyne detection is
$\\{E_{y}=\frac{1}{\pi}\outerproduct{y}{y}:y\in\mathbb{C}\\},$ where $\ket{y}$
denotes a coherent state with complex amplitude $y$.
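As a small illustration, Alice's reduced state $\Tr_{A^{\prime}}[\outerproduct{\Psi}{\Psi}_{AA^{\prime}}]$, which later enters the optimization as a constraint, follows directly from the coherent-state overlap $\bra{\alpha_{j}}\ket{\alpha_{i}}=\exp(-|\alpha_{i}|^{2}/2-|\alpha_{j}|^{2}/2+\alpha_{j}^{*}\alpha_{i})$. A minimal sketch (helper names are ours):

```python
import numpy as np

def rho_A(alpha: complex, p=(0.25, 0.25, 0.25, 0.25)) -> np.ndarray:
    """Alice's reduced state of Eq. (2): rho_A[i, j] = sqrt(p_i p_j) <alpha_j|alpha_i>."""
    alphas = [alpha * np.exp(1j * x * np.pi / 2) for x in range(4)]

    def overlap(b, a):  # coherent-state overlap <b|a>
        return np.exp(-abs(a) ** 2 / 2 - abs(b) ** 2 / 2 + np.conj(b) * a)

    return np.array([[np.sqrt(p[i] * p[j]) * overlap(alphas[j], alphas[i])
                      for j in range(4)] for i in range(4)])

rho = rho_A(alpha=0.7)
assert np.isclose(np.trace(rho), 1.0)    # unit trace
assert np.allclose(rho, rho.conj().T)    # Hermitian
```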
#### II.2.2 Key rate optimization
The next step is to formulate the key rate optimization problem for the
entanglement-based scheme. One can rewrite the well-known Devetak-Winter
formula [Devetak2005] into the following form [Coles2016, Winick2018]
$\displaystyle
R^{\infty}=\min_{\rho_{AB}\in\mathbf{S}}D\Big{(}\mathcal{G}(\rho_{AB})||\mathcal{Z}[\mathcal{G}(\rho_{AB})]\Big{)}-p_{\text{pass}}\delta_{\text{EC}},$
(5)
where $\delta_{EC}$ is the actual amount of information leakage per signal
pulse in the error-correction step,
$D(\rho||\sigma)=\Tr(\rho\log_{2}\rho)-\Tr(\rho\log_{2}\sigma)$ is the quantum
relative entropy between two (subnormalized) density operators $\rho$ and
$\sigma$, $\mathcal{G}$ is a CP, trace nonincreasing map for postprocessing
and $\mathcal{Z}$ is a pinching quantum channel for accessing results of the
key map. The set $\mathbf{S}$ contains all density operators compatible with
experimental observations. A more detailed discussion about the map
$\mathcal{G}$ can be found in Appendix A of Ref. [Lin2019]. For the reverse
reconciliation scheme, we can express the cost of error correction
$\delta_{EC}$ by
$\displaystyle\delta_{\text{EC}}$
$\displaystyle=\operatorname{H}(\mathbf{Z})-\beta\operatorname{I}(\mathbf{X};\mathbf{Z}),$
(6)
where $\operatorname{H}(\mathbf{Z})$ is the Shannon entropy of the raw key
$\mathbf{Z}$, $\beta$ is the reconciliation efficiency of the chosen error-
correction code, and $\operatorname{I}(\mathbf{X};\mathbf{Z})$ is the
classical mutual information between $\mathbf{X}$ and $\mathbf{Z}$.
Before we review the set of constraints as well as $\mathcal{G}$ and
$\mathcal{Z}$ maps for the quadrature phase-shift keying scheme, we start with
basic definitions. Given the annihilation operator $\hat{a}$ and creation
operator $\hat{a}^{\dagger}$ of a single-mode state with the usual commutation
relation $[\hat{a},\hat{a}^{\dagger}]=\mathds{1}$, we define the quadrature
operators $\hat{q}$ and $\hat{p}$, respectively, as
$\displaystyle\hat{q}$
$\displaystyle=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}),\;\;\;\hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}).$
(7)
They obey the commutation relation $[\hat{q},\hat{p}]=i\mathds{1}$. To utilize
the second-moment observations $\langle\hat{q}^{2}\rangle$ and
$\langle\hat{p}^{2}\rangle$ to constrain $\rho_{AB}$, we previously defined
the following two operators
$\hat{n}=\frac{1}{2}(\hat{q}^{2}+\hat{p}^{2}-\mathds{1})=\hat{a}^{\dagger}\hat{a}$
and $\hat{d}=\hat{q}^{2}-\hat{p}^{2}=\hat{a}^{2}+(\hat{a}^{\dagger})^{2}$
[Lin2019]. The relation between these observables and the heterodyne detection
POVM is highlighted in Sec. IV.1.
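These operators enter the numerical optimization through their matrix representations in a photon-number basis truncated at $N$ photons. A minimal sketch of this construction, following the conventions of Eq. (7) (the helper is ours):

```python
import numpy as np

def truncated_operators(N: int):
    """Matrices of a, q, p, n, d in the photon-number basis {|0>, ..., |N>}."""
    a = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)   # annihilation operator
    ad = a.conj().T                                   # creation operator
    q = (ad + a) / np.sqrt(2)
    p = 1j * (ad - a) / np.sqrt(2)
    n = ad @ a
    d = a @ a + ad @ ad                               # equals q^2 - p^2
    return a, q, p, n, d

a, q, p, n, d = truncated_operators(N=10)
# [q, p] = i*identity holds away from the rows/columns affected by truncation.
comm = q @ p - p @ q
assert np.allclose(comm[:-1, :-1], 1j * np.eye(10))
```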
For the untrusted detector noise (or ideal heterodyne detector) scenario, the
key rate optimization problem [Lin2019] is
minimize $\displaystyle D\big{(}\mathcal{G}(\rho_{AB})||\mathcal{Z}[\mathcal{G}(\rho_{AB})]\big{)}$ (8)
subject to
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{q})]=p_{x}\langle\hat{q}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{p})]=p_{x}\langle\hat{p}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{n})]=p_{x}\langle\hat{n}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{d})]=p_{x}\langle\hat{d}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}]=1,$
$\displaystyle\Tr_{B}[\rho_{AB}]=\sum_{i,j=0}^{3}\sqrt{p_{i}p_{j}}\bra{\alpha_{j}}\ket{\alpha_{i}}\outerproduct{i}{j}_{A},$
$\displaystyle\rho_{AB}\geq 0,$
where the index $x$ runs over the set $\\{0,1,2,3\\}$ and
$\langle\hat{q}\rangle_{x},\langle\hat{p}\rangle_{x},\langle\hat{n}\rangle_{x}$,
and $\langle\hat{d}\rangle_{x}$ denote the corresponding expectation values of
operators $\hat{q},\hat{p},\hat{n}$, and $\hat{d}$ for the conditional state
$\rho_{B}^{x}$, respectively.
As indicated in Fig. 1, the protocol can perform postselection of data. To
perform postselection, we defined the region operators in Ref. [Lin2019] as
$\displaystyle R_{j}$
$\displaystyle=\frac{1}{\pi}\int_{\Delta_{a}}^{\infty}\int_{\frac{(2j-1)\pi}{4}}^{\frac{(2j+1)\pi}{4}}r\outerproduct{re^{i\theta}}{re^{i\theta}}\;d\theta\;dr$
(9)
for $j\in\\{0,1,2,3\\}$. The area of integration for each operator corresponds
to a region shown in Fig. 1.
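For the ideal heterodyne POVM, the matrix elements $\bra{m}R_{j}\ket{n}$ can be evaluated in closed form: the angular integral of $e^{i(m-n)\theta}$ is elementary, and the radial integral $\int_{\Delta_{a}}^{\infty}r^{m+n+1}e^{-r^{2}}\,dr=\frac{1}{2}\Gamma\big(\frac{m+n}{2}+1,\Delta_{a}^{2}\big)$ is an upper incomplete gamma function. A sketch of this computation (the helper is ours, not code from Ref. [Lin2019]):

```python
import numpy as np
from scipy.special import gammaln, gammaincc, gamma

def region_operator(j: int, delta_a: float, N: int) -> np.ndarray:
    """<m|R_j|n> for m, n <= N, for the ideal heterodyne POVM of Eq. (9)."""
    R = np.zeros((N + 1, N + 1), dtype=complex)
    for m in range(N + 1):
        for n in range(N + 1):
            k = m - n
            # Angular integral of exp(i k theta) over [(2j-1)pi/4, (2j+1)pi/4].
            if k == 0:
                ang = np.pi / 2
            else:
                hi, lo = (2 * j + 1) * np.pi / 4, (2 * j - 1) * np.pi / 4
                ang = (np.exp(1j * k * hi) - np.exp(1j * k * lo)) / (1j * k)
            # Radial integral via the regularized upper incomplete gamma.
            s = (m + n) / 2 + 1
            rad = 0.5 * gamma(s) * gammaincc(s, delta_a ** 2)
            R[m, n] = ang * rad * np.exp(
                -0.5 * (gammaln(m + 1) + gammaln(n + 1))) / np.pi
    return R

# Sanity check: without postselection the four regions resolve the identity.
N = 15
total = sum(region_operator(j, 0.0, N) for j in range(4))
assert np.allclose(total, np.eye(N + 1))
```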
The postprocessing map $\mathcal{G}$ in the reverse reconciliation scheme is
given by $\mathcal{G}(\sigma)=K\sigma K^{\dagger}$ for any input state
$\sigma$, where the Kraus operator $K$ is
$\displaystyle
K=\sum_{z=0}^{3}\ket{z}_{R}\otimes\mathds{1}_{A}\otimes(\sqrt{R_{z}})_{B},$
(10)
where $\\{\ket{0}_{R},\ket{1}_{R},\ket{2}_{R},\ket{3}_{R}\\}$ is the standard
basis for register $R$. The pinching quantum channel $\mathcal{Z}$ is given by
projections
$\\{\outerproduct{j}{j}_{R}\otimes\mathds{1}_{AB}:j\in\\{0,1,2,3\\}\\}$ as
$\displaystyle\mathcal{Z}(\sigma)=\sum_{j=0}^{3}(\outerproduct{j}{j}_{R}\otimes\mathds{1}_{AB})\sigma(\outerproduct{j}{j}_{R}\otimes\mathds{1}_{AB}).$
(11)
## III Noisy heterodyne detection
In this section, we present one physical model for a noisy heterodyne detector
and give the corresponding POVM description. We start with a slightly more
general model and then we make a simplification for the ease of calculation at
the end of this section. This simplified model then reduces to a model
commonly used in the literature.
### III.1 Trusted detector noise model
As a heterodyne detector consists of two homodyne detectors and a beam-
splitter, we consider imperfections in each homodyne detector. A homodyne
detector may have nonunity detector efficiency and also have some amount of
electronic noise, i.e., additional noise introduced into the measured data by its electronic components. In an experiment, one is able to measure the
amount of electronic noise and the value of detector efficiency by a
calibration routine. To model a realistic homodyne detector with nonunity
detector efficiency and some amount of electronic noise, we use a quantum
optical model which is used in Refs. [Lodewyck2007, Fossier2009, Usenko2016,
Namiki2018, Laudenbach2019b], although the source of this electronic noise is
in the actual electronics part of the detector. An alternative view of the
electronic noise is that we can think about the detector as being a perfect
detector followed by some classical postprocessing of the data, which adds
noise. One should note that in a trusted device scenario, the characterization
of the actual noise should be experimentally verified. Our physical model is
chosen for convenience of calculating the POVM of the actual measurement. We
depict this physical model of a noisy heterodyne detector in Fig. 2. In this
diagram, we consider a more general case where two homodyne detectors have
different imperfections. We label the efficiency of the homodyne detector used
for $q$ quadrature measurement as $\eta_{1}$ and its electronic noise as
$\nu_{1}$ (expressed in shot noise units). Similarly, the efficiency of the
homodyne detector used for $p$ quadrature measurement is labeled as $\eta_{2}$
and its electronic noise is labeled as $\nu_{2}$.
Since our treatment for each homodyne detector in this heterodyne setup is the
same, we take one homodyne detector (shown in each dashed box in Fig. 2) as an
example and treat the other one similarly by using its corresponding
efficiency and electronic noise. An imperfect homodyne detector with its
efficiency $\eta_{j}<1$ and electronic noise $\nu_{j}\geq 0$ (for $j=1$ or
$2$) can be modeled by a beam-splitter placed before a perfect homodyne
detector with the following specification. (1) The ratio of transmission to
reflection of this beam-splitter is $\eta_{j}:1-\eta_{j}$. (2) One input port
of this beam-splitter is the signal pulse and the other input port is a
thermal state used to model electronic noise, which is equivalent to sending
one mode of a two-mode squeezed vacuum state (EPR state) to the beam-splitter.
Each quadrature’s variance of this ancillary thermal state is related to the
value of electronic noise $\nu_{j}$. More specifically, it is
$[1+\nu_{j}/(1-\eta_{j})]N_{0}$ [Lodewyck2007], where $N_{0}=1/2$ denotes the
shot-noise variance. In Fig. 2, we choose to parametrize the thermal state in
terms of its mean photon number as $\bar{n}_{j}=\frac{\nu_{j}}{2(1-\eta_{j})}$
instead of the variance of each quadrature, which is convenient for writing expressions in later sections. (The electronic noise $\nu_{j}$ is the thermal noise added by the detection electronics. In the quantum mechanical model of the detector shown in each dashed box of Fig. 2, the electronic noise is modeled by an ancillary thermal state added to the second input port of the beam-splitter that models the detector efficiency. Since the value of electronic noise is unaffected by the detector efficiency, to simulate the desired amount of noise before this beam-splitter, one needs to scale it by the reflectance of the beam-splitter, which is $1-\eta_{j}$. As the variance of a thermal state with mean photon number $\bar{n}$ is $(1+2\bar{n})N_{0}$, one can easily see that the mean photon number of this ancillary thermal state is $\bar{n}_{j}=\frac{\nu_{j}}{2(1-\eta_{j})}$.) We note that this way of
modeling electronic noise is valid when $\eta_{j}\neq 1$. Furthermore, we
assume $\eta_{j}\neq 0$. That is, we consider the case $\eta_{j}\in(0,1)$, which corresponds to a realistic detector.
Figure 2: Physical model for a noisy heterodyne detector. The homodyne
detector for the $q$ quadrature measurement has detector efficiency $\eta_{1}$
and electronic noise $\nu_{1}$. The homodyne detector for the $p$ quadrature
measurement has detector efficiency $\eta_{2}$ and electronic noise $\nu_{2}$.
The notation $\rho_{th}(\bar{n})$ stands for a thermal state with a mean
photon number $\bar{n}$. In particular,
$\bar{n}_{1}=\frac{\nu_{1}}{2(1-\eta_{1})}$ and
$\bar{n}_{2}=\frac{\nu_{2}}{2(1-\eta_{2})}$ (see main text for more
explanations). Beam-splitters are 50:50 unless specified otherwise. Each
homodyne detector inside a gray box is ideal. Each dashed box encloses the
physical model for a noisy homodyne detector. LO stands for local oscillator.
In the next section, we derive the POVM corresponding to this detector model.
We then choose to consider a simplified scenario where these two homodyne
detectors are identical for the purpose of illustration and the ease of
numerical calculation. That is, we later assume they both have the same
detector efficiency $\eta_{1}=\eta_{2}=:\eta_{d}$ and the same electronic
noise $\nu_{1}=\nu_{2}=:\nu_{\text{el}}$.
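As a quick numerical illustration of this parametrization (a sketch of ours, using the detector values that appear later in Sec. VI):

```python
def thermal_mean_photon(nu_el: float, eta_d: float) -> float:
    """Mean photon number of the ancillary thermal state modeling electronic
    noise nu_el (in SNU) for a homodyne detector of efficiency eta_d < 1."""
    assert 0.0 < eta_d < 1.0
    return nu_el / (2.0 * (1.0 - eta_d))

n_bar = thermal_mean_photon(nu_el=0.01, eta_d=0.719)
# The quadrature noise reflected into the signal mode, (1 - eta_d) * 2 * n_bar
# in shot noise units, recovers nu_el as intended by the model.
assert abs((1 - 0.719) * 2 * n_bar - 0.01) < 1e-12
```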
### III.2 POVM description
We use the Wigner function formulation to find the POVM
$\\{G_{y}:y\in\mathbb{C}\\}$ corresponding to this noisy heterodyne detector
model. When two homodyne detectors give two real numbers $q_{s}$ and $p_{s}$
for $q$ and $p$ quadrature measurements, we label the outcome as
$y=q_{s}+ip_{s}$. By considering $\Tr(\rho G_{y})$ for an arbitrary input
density operator $\rho$ to the noisy heterodyne detector, we are able to find
the Wigner function $W_{G_{y}}$ of the POVM element $G_{y}$ as
$\displaystyle W_{G_{y}}(\gamma)=$
$\displaystyle\frac{1}{\sqrt{\eta_{1}\eta_{2}}\pi}\frac{2}{\pi}\frac{1}{\sqrt{1+\frac{2(1-\eta_{1}+\nu_{1})}{\eta_{1}}}}\frac{1}{\sqrt{1+\frac{2(1-\eta_{2}+\nu_{2})}{\eta_{2}}}}$
(12)
$\displaystyle\times\exp(\frac{-2[\real(\gamma)-\frac{1}{\sqrt{\eta_{1}}}\real(y)]^{2}}{1+\frac{2(1-\eta_{1}+\nu_{1})}{\eta_{1}}})$
$\displaystyle\times\exp(\frac{-2[\imaginary(\gamma)-\frac{1}{\sqrt{\eta_{2}}}\imaginary(y)]^{2}}{1+\frac{2(1-\eta_{2}+\nu_{2})}{\eta_{2}}}).$
By comparing this Wigner function with that of a displaced squeezed thermal
state, we can identify that the POVM element $G_{y}$ is a projection onto a
displaced squeezed thermal state up to a prefactor
$\frac{1}{\sqrt{\eta_{1}\eta_{2}}\pi}$. We give a full derivation of this
Wigner function and the explicit parameters for displacement, squeezing and
thermal state mean photon number in terms of detector parameters
$\eta_{1},\eta_{2},\nu_{1}$ and $\nu_{2}$ in Appendix A.
For the rest of the paper, we restrict our discussion to a simpler scenario
where we assume both homodyne detectors have the same imperfection for the
ease of numerical calculation and for the purpose of illustration. We discuss
how to perform the calculation in the general case in Appendix
LABEL:app:representation. In this simple case, we set
$\eta_{1}=\eta_{2}=\eta_{d}$ and $\nu_{1}=\nu_{2}=\nu_{\text{el}}$ in Eq.
(12). This equation simplifies to
$\displaystyle W_{G_{y}}(\gamma)$
$\displaystyle=\frac{1}{\eta_{d}\pi}\frac{2}{\pi}\frac{1}{1+\frac{2(1-\eta_{d}+\nu_{\text{el}})}{\eta_{d}}}\exp(\frac{-2\absolutevalue{\gamma-\frac{y}{\sqrt{\eta_{d}}}}^{2}}{1+\frac{2(1-\eta_{d}+\nu_{\text{el}})}{\eta_{d}}}).$
(13)
One can observe that it is the Wigner function of a displaced thermal state
apart from the prefactor $1/(\eta_{d}\pi)$. Therefore, the POVM element
$G_{y}$ in this case is a scaled projection onto a displaced thermal state.
More precisely,
$\displaystyle
G_{y}=\frac{1}{\eta_{d}\pi}\hat{D}(\frac{y}{\sqrt{\eta_{d}}})\rho_{\text{th}}(\frac{1-\eta_{d}+\nu_{\text{el}}}{\eta_{d}})\hat{D}^{\dagger}(\frac{y}{\sqrt{\eta_{d}}}),$
(14)
where $\hat{D}(\frac{y}{\sqrt{\eta_{d}}})$ is the displacement operator with
the amount of displacement $y/\sqrt{\eta_{d}}$ and
$\rho_{\text{th}}(\frac{1-\eta_{d}+\nu_{\text{el}}}{\eta_{d}})$ is a thermal
state with the mean photon number $(1-\eta_{d}+\nu_{\text{el}})/\eta_{d}$,
which can be expressed in the photon-number basis as
$\displaystyle\rho_{\text{th}}(\bar{n})=\sum_{n=0}^{\infty}\frac{\bar{n}^{n}}{(1+\bar{n})^{n+1}}\outerproduct{n}{n}.$
(15)
Later in Sec. IV, we need to express operators defined in terms of POVM
elements $G_{y}$’s in the photon-number basis for the numerical key rate
calculation. Analytical expressions of matrix elements $\bra{m}G_{y}\ket{n}$
are known in the literature [Mollow1967] and shown in Appendix
LABEL:app:representation.
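For readers who prefer a numerical cross-check of Eq. (14) over the analytical matrix elements, $G_{y}$ can also be assembled directly in a truncated photon-number basis, with the displacement operator built as a matrix exponential. This is only a consistency-check sketch (truncation makes the displacement operator only approximately unitary, and the helper names are ours):

```python
import numpy as np
from scipy.linalg import expm

def povm_element(y: complex, eta_d: float, nu_el: float, N: int) -> np.ndarray:
    """G_y of Eq. (14) in the photon-number basis {|0>, ..., |N>}."""
    a = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)    # annihilation operator
    beta = y / np.sqrt(eta_d)                          # displacement amount
    D = expm(beta * a.conj().T - np.conj(beta) * a)    # displacement operator
    n_bar = (1 - eta_d + nu_el) / eta_d                # thermal mean photon number
    k = np.arange(N + 1)
    rho_th = np.diag(n_bar ** k / (1 + n_bar) ** (k + 1))  # Eq. (15), truncated
    return (D @ rho_th @ D.conj().T) / (eta_d * np.pi)

G = povm_element(0.3 + 0.2j, eta_d=0.719, nu_el=0.01, N=20)
assert np.allclose(G, G.conj().T)                      # G_y is Hermitian
assert np.all(np.linalg.eigvalsh(G) > -1e-12)          # and positive semidefinite
```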
Let us end this section with a few remarks about the simplification considered
here. Firstly, as we later define operators involving integrals of POVM
elements $G_{y}$’s and need to find their matrix representations in the
photon-number basis for the numerical key rate calculation, we are able to
find efficiently computable analytical expressions for these operators under
this simplification. Without this simplification, one may need to perform some
numerical integrations. We emphasize that the principles presented in this
work also hold for the general case and we choose to present results based on
this simplified case for the ease of calculation. Secondly, with this
simplification, our detector model is then optically equivalent to the
detector model used in other works [Fossier2009, Laudenbach2019b]. Thirdly, if
two homodyne detectors in the heterodyne detection scheme do not have the same
imperfection, one can instead use the POVM in the general case by following
the procedure outlined in Appendix LABEL:app_sec:general_case, although this is more numerically challenging.
## IV Key rate optimization problem
We start with a reformulation of the optimization problem in Eq. (8) in the
untrusted detector noise scenario which serves as a basis for our modification
in the trusted detector noise scenario. The purpose of this reformulation is
that once we substitute the POVM of the noisy heterodyne detector in place of
the one for the ideal heterodyne detector, we can easily formulate the
optimization problem in the trusted detector noise scenario. Specifically, we
change Bob’s POVM $\\{M^{B}_{y}\\}$ from the ideal heterodyne detection
$\\{E_{y}=\frac{1}{\pi}\outerproduct{y}{y}\\}$ to the POVM description of the
noisy heterodyne detection $\\{G_{y}\\}$ found in Eq. (14). Moreover, compared
with our previous work [Lin2019], some constraints are modified to match with
how data are processed in a typical experiment.
### IV.1 Reformulation of the optimization problem in the untrusted detector
noise scenario
We reconsider the key rate optimization problem in the untrusted detector
noise scenario by rewriting region operators in Eq. (9) and observables in Eq.
(8) in terms of the POVM of an ideal heterodyne detector $\\{E_{y}\\}$. In the
case of ideal heterodyne detection, the POVM description of Bob’s measurement
$\\{M^{B}_{y}\\}$ is $M^{B}_{y}=E_{y}=\frac{1}{\pi}\outerproduct{y}{y}$, the
projection onto a coherent state $\ket{y}$. By writing $y=re^{i\theta}$ in polar coordinates and integrating over the corresponding region $\mathcal{A}_{j}$, we obtain Eq. (9). If we rewrite Eq. (9) in terms of $M^{B}_{y}$, we see that the region operators $R_{j}$ are defined by
$\displaystyle R_{j}=\int_{y\in\mathcal{A}_{j}}M^{B}_{y}d^{2}y,$ (16)
where the region of integration $\mathcal{A}_{j}$ in the complex plane is
shown in Fig. 1 and $d^{2}y=d\real(y)d\imaginary(y)$.
From the heterodyne detection, we obtain a probability density function $P(y)$
for the outcome $y\in\mathbb{C}$. (We obtain such a probability density
function for each conditional state $\rho_{B}^{x}$. While it is more proper to
denote this conditional probability density function as $P(y|x)$, for
simplicity of notation in this section, we use $P(y)$.) When the heterodyne
detector is ideal, this probability density function is the Husimi $Q$
function. In particular, as discussed in our previous work [Lin2019], the
expectation values of operators $\hat{q},\hat{p},\hat{n}$ and $\hat{d}$
defined in Sec. II.2 are related to the $Q$ function via
$\displaystyle\langle\hat{q}\rangle_{x}$
$\displaystyle=\frac{1}{\sqrt{2}}\int(y+y^{*})Q_{x}(y)d^{2}y,$ (17)
$\displaystyle\langle\hat{p}\rangle_{x}$
$\displaystyle=\frac{i}{\sqrt{2}}\int(y^{*}-y)Q_{x}(y)d^{2}y,$
$\displaystyle\langle\hat{n}\rangle_{x}$
$\displaystyle=\int(\absolutevalue{y}^{2}-1)Q_{x}(y)d^{2}y,$
$\displaystyle\langle\hat{d}\rangle_{x}$
$\displaystyle=\int[y^{2}+(y^{*})^{2}]Q_{x}(y)d^{2}y,$
where the subscript $x$ labels the conditional state $\rho_{B}^{x}$.
In general, one may be interested in a quantity like $\int
f(y,y^{*})P(y)d^{2}y$ where $f(y,y^{*})$ is a real-valued function on $y$ and
$y^{*}$ such that the integral converges. Such a quantity can be described as
the expectation value of an observable that is defined in the following way
$\hat{O}=\int f(y,y^{*})M^{B}_{y}d^{2}y$ (18)
since
$\displaystyle\Tr[\rho\;\hat{O}]=\int d^{2}y\;f(y,y^{*})\Tr(\rho M^{B}_{y})=\int d^{2}y\;f(y,y^{*})P(y).$ (19)
In other words, operators constructed in this way correspond to expectation
values $\int f(y,y^{*})P(y)d^{2}y$ obtained in an experiment. By comparing Eq.
(19) to Eq. (17) and identifying $P(y)$ by $Q_{x}(y)$, we observe the
following choices of $f(y,y^{*})$ for $\hat{q}$, $\hat{p}$, $\hat{n}$ and
$\hat{d}$:
$\displaystyle\hat{q}\longleftrightarrow$
$\displaystyle\;f(y,y^{*})=\frac{y+y^{*}}{\sqrt{2}},$ (20)
$\displaystyle\hat{p}\longleftrightarrow$
$\displaystyle\;f(y,y^{*})=\frac{i(y^{*}-y)}{\sqrt{2}},$
$\displaystyle\hat{n}\longleftrightarrow$
$\displaystyle\;f(y,y^{*})=\absolutevalue{y}^{2}-1,$
$\displaystyle\hat{d}\longleftrightarrow$
$\displaystyle\;f(y,y^{*})=y^{2}+(y^{*})^{2}.$
We remark that this way of defining these observables corresponds to the
antinormally ordered expansion of operators [Cahill1969a, Cahill1969b].
### IV.2 Revised optimization problem in the trusted detector noise scenario
In Ref. [Lin2019], we chose observables
$\\{\hat{O}\\}=\\{\hat{q},\hat{p},\hat{n},\hat{d}\\}$ by using
$M^{B}_{y}=E_{y}$ in Eq. (18) for the untrusted detector noise scenario. In
this work, we change to a new set of observables
$\\{\hat{q},\hat{p},\hat{n}+\hat{d}/2+\mathds{1},\hat{n}-\hat{d}/2+\mathds{1}\\}$,
which gives the same key rates as the old one since the last two observables
in this new set are linear combinations of observables $\hat{n}$ and $\hat{d}$
as well as the identity operator. This new set of observables corresponds to
the set of
$\\{f(y,y^{*})\\}=\\{\sqrt{2}\real(y),\sqrt{2}\imaginary(y),2\real(y)^{2},2\imaginary(y)^{2}\\}$. (Due to our definition of quadrature operators, we include the factor $\sqrt{2}$ so that values reported in an experiment in shot noise units can be entered directly as expectation values of the corresponding observables.) The sole purpose of this change compared with Ref. [Lin2019] is to bring the data postprocessing into agreement with the typical classical postprocessing in an experiment. That is, in an experiment, when a heterodyne detection gives two real numbers $q_{s}$ and $p_{s}$, which we identify as $\real(y)=q_{s}$ and
$\imaginary(y)=p_{s}$, one usually computes variances of $\real(y)$ and
$\imaginary(y)$ by computing the expectation values of $\real(y)^{2}$ and
$\imaginary(y)^{2}$ in addition to expectation values of $\real(y)$ and
$\imaginary(y)$.
In the trusted detector noise scenario, we need to substitute $M^{B}_{y}$ in
Eqs. (16) and (18) by $G_{y}$. To distinguish operators defined in this way
from the first and second moment of quadrature operators $\hat{q}$ and
$\hat{p}$, we call the first-moment observables $\hat{F}_{Q}$ and $\hat{F}_{P}$ and the second-moment observables $\hat{S}_{Q}$ and $\hat{S}_{P}$. More
explicitly, they are defined as
$\displaystyle\hat{F}_{Q}$
$\displaystyle=\int\frac{y+y^{*}}{\sqrt{2}}G_{y}d^{2}y,$ (21)
$\displaystyle\hat{F}_{P}$
$\displaystyle=\int\frac{i(y^{*}-y)}{\sqrt{2}}G_{y}d^{2}y,$
$\displaystyle\hat{S}_{Q}$
$\displaystyle=\int(\frac{y+y^{*}}{\sqrt{2}})^{2}G_{y}d^{2}y,$
$\displaystyle\hat{S}_{P}$
$\displaystyle=\int[\frac{i(y^{*}-y)}{\sqrt{2}}]^{2}G_{y}d^{2}y.$
Then the revised key rate optimization problem becomes
minimize $\displaystyle D\big{(}\mathcal{G}(\rho_{AB})||\mathcal{Z}[\mathcal{G}(\rho_{AB})]\big{)}$ (22)
subject to
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{F}_{Q})]=p_{x}\langle\hat{F}_{Q}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{F}_{P})]=p_{x}\langle\hat{F}_{P}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{S}_{Q})]=p_{x}\langle\hat{S}_{Q}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}(\outerproduct{x}{x}_{A}\otimes\hat{S}_{P})]=p_{x}\langle\hat{S}_{P}\rangle_{x},$
$\displaystyle\Tr[\rho_{AB}]=1,$
$\displaystyle\Tr_{B}[\rho_{AB}]=\sum_{i,j=0}^{3}\sqrt{p_{i}p_{j}}\bra{\alpha_{j}}\ket{\alpha_{i}}\outerproduct{i}{j}_{A},$
$\displaystyle\rho_{AB}\geq 0,$
where the index $x$ runs over the set $\\{0,1,2,3\\}$ and the Kraus operator
for the postprocessing map $\mathcal{G}$ has the same form as in Eq. (10) but
now with the region operators defined in terms of $G_{y}$’s in Eq. (16).
In Appendix LABEL:app:representation, we discuss how to represent these
operators in the photon-number basis. Combining with the photon-number cutoff
assumption (i.e.
$\rho_{AB}=(\mathds{1}_{A}\otimes\Pi_{N})\rho_{AB}(\mathds{1}_{A}\otimes\Pi_{N})$,
where $N$ is the cutoff photon number and $\Pi_{N}$ is the projection onto the
subspace spanned by the photon-number states from $0$ to $N$ photons), we can
directly solve this key rate optimization problem in Eq. (22) numerically. We
direct readers to Sec. IV B of Ref. [Lin2019] for the discussion about the
numerical algorithm for the optimization problem and its performance.
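To make the structure of Eq. (22) concrete, the sketch below assembles the constraint set in a truncated basis and checks its feasibility as a semidefinite program with cvxpy. We stress the caveats: for brevity it uses $\hat{q}$ and $\hat{p}$ as stand-in observables (the trusted-noise problem instead uses $\hat{F}_{Q},\hat{F}_{P},\hat{S}_{Q},\hat{S}_{P}$ built from $\\{G_{y}\\}$), it generates consistent expectation values from a test state rather than from Eq. (26), and the zero objective is a placeholder for the relative-entropy objective, which is minimized with the iterative method of Ref. [Winick2018]. All helper names are ours.

```python
import numpy as np
import cvxpy as cp
from scipy.special import factorial

N = 10                     # photon-number cutoff
dB = N + 1                 # dimension of Bob's truncated mode
alpha, p_x = 0.7, 0.25

# Truncated mode operators and coherent states.
a = np.diag(np.sqrt(np.arange(1, dB)), k=1)
q = (a.conj().T + a) / np.sqrt(2)
p = 1j * (a.conj().T - a) / np.sqrt(2)

def coherent(beta):
    k = np.arange(dB)
    return np.exp(-abs(beta) ** 2 / 2) * beta ** k / np.sqrt(factorial(k))

# A feasible test point: rho_AB of Eq. (3) with the identity channel.
alphas = [alpha * np.exp(1j * x * np.pi / 2) for x in range(4)]
psi = sum(np.sqrt(p_x) * np.kron(np.eye(4)[x], coherent(alphas[x]))
          for x in range(4))
psi /= np.linalg.norm(psi)                 # remove the tiny truncation error
rho_test = np.outer(psi, psi.conj())

rho = cp.Variable((4 * dB, 4 * dB), hermitian=True)
constraints = [rho >> 0, cp.real(cp.trace(rho)) == 1]

# Partial-trace constraint Tr_B[rho] = rho_A, written out as block sums.
rhoA = np.array([[np.trace(rho_test[i * dB:(i + 1) * dB, j * dB:(j + 1) * dB])
                  for j in range(4)] for i in range(4)])
for i in range(4):
    for j in range(4):
        constraints.append(
            sum(rho[i * dB + k, j * dB + k] for k in range(dB)) == rhoA[i, j])

# Moment constraints Tr[rho (|x><x| (x) O)] = p_x <O>_x.
for x in range(4):
    proj = np.zeros((4, 4))
    proj[x, x] = 1.0
    for O in (q, p):                       # stand-ins for F_Q, F_P, S_Q, S_P
        Gamma = np.kron(proj, O)
        val = np.real(np.trace(Gamma @ rho_test))
        constraints.append(cp.real(cp.trace(Gamma @ rho)) == val)

# Feasibility check only; the key rate calculation replaces the objective.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                         # expected: "optimal" (feasible)
```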
## V Simulation method
In an experiment, the expectation values shown in the optimization problem in
Eq. (22) can be obtained from some suitable postprocessing of noisy heterodyne
detection results. In the absence of experimental data, we simulate a corresponding experiment with a noisy heterodyne detector to obtain those
expectation values. With these values specified, one can solve the key rate
optimization problem using a numerical convex optimization package to obtain
numerical results. We emphasize that our security proof technique does not
depend on the specific channel model used for the simulation.
### V.1 Channel model for simulation
To understand how the protocol behaves in the trusted detector noise scenario,
we simulate the quantum channel by using a realistic physical channel in an
honest implementation of the protocol. A realistic physical channel in the
context of the optical fiber communication can be modeled by a phase-invariant
Gaussian channel with the transmittance $\eta_{t}$ and excess noise $\xi$. In
a typical fiber for optical communication, the attenuation coefficient is 0.2
dB/km and thus $\eta_{t}=10^{-0.02L}$ for a distance $L$ in kilometers. The
excess noise $\xi$ is defined as
$\displaystyle\xi=\frac{(\Delta q_{\text{obs}})^{2}}{(\Delta
q_{\text{vac}})^{2}}-1,$ (23)
where $(\Delta q_{\text{vac}})^{2}=N_{0}=1/2$ is the variance in $q$
quadrature of the vacuum state and $(\Delta q_{\text{obs}})^{2}$ is the
observed variance in $q$ quadrature of the measured signal state. As the value
of $\xi$ is normalized with respect to the vacuum variance, the channel excess
noise $\xi$ is reported in the shot noise units (SNU) and independent of
different conventions of defining quadrature operators.
Apart from the shot noise, there are several contributions to the total noise
in the measurement data, such as preparation noise, detector noise and noise introduced in the fiber due to Raman scattering. As we treat the detection
noise as trusted, we assume all other contributions are under Eve’s control.
In other words, all additional noises beyond the shot noise except for the
detector noise become a part of the effective quantum channel regardless of
the physical origin of each noise component, and they contribute to the value
of the excess noise $\xi$. In the literature, the value of the excess noise
$\xi$ is commonly reported at the input of the quantum channel corresponding
to measuring $(\Delta q_{\text{obs}})^{2}$ at the output of Alice’s lab. By
choosing this convention of reporting the value of excess noise, we may
alternatively imagine that this effective quantum channel first introduces the
amount of excess noise $\xi$ to the signal state at the input of the channel
and the rest of this quantum channel is then lossy but noise-free. Under this
channel model, a coherent state $\ket{\alpha}$, after transmitting through
this quantum channel, becomes a displaced thermal state centered at
$\sqrt{\eta_{t}}\alpha$ with its variance $\frac{1}{2}(1+\eta_{t}\xi)$ for
each quadrature.
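In code, this channel model amounts to a one-line mapping from input amplitude to output-state parameters (a convenience sketch of ours):

```python
import numpy as np

def channel_output(alpha: complex, L_km: float, xi: float):
    """Parameters of the displaced thermal state after the effective channel:
    center sqrt(eta_t)*alpha and per-quadrature variance (1 + eta_t*xi)/2,
    with eta_t = 10**(-0.02*L_km) for a 0.2 dB/km fiber."""
    eta_t = 10 ** (-0.02 * L_km)
    return np.sqrt(eta_t) * alpha, 0.5 * (1 + eta_t * xi)

center, variance = channel_output(alpha=0.7, L_km=20.0, xi=0.01)
```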
### V.2 Simulated statistics
From our simulation, the simulated state $\sigma^{x}_{B}$ conditioned on the
choice of $x$ is a displaced thermal state whose Wigner function is
$\displaystyle W_{\sigma^{x}_{B}}(\gamma)$
$\displaystyle=\frac{1}{\pi}\frac{1}{\frac{1}{2}(1+\eta_{t}\xi)}\exp[-\frac{\absolutevalue{\gamma-\sqrt{\eta_{t}}\alpha_{x}}^{2}}{\frac{1}{2}(1+\eta_{t}\xi)}].$
(24)
When Bob applies his heterodyne measurement described by the POVM
$\\{G_{y}\\}$, the probability density function $P(y|x)$ for the measurement
outcome $y$ conditioned on Alice’s choice $x$ is
$\displaystyle P(y|x)$
$\displaystyle=\frac{1}{\pi(1+\frac{1}{2}\eta_{d}\eta_{t}\xi+\nu_{\text{el}})}\exp[-\frac{\absolutevalue{y-\sqrt{\eta_{d}\eta_{t}}\alpha_{x}}^{2}}{1+\frac{1}{2}\eta_{d}\eta_{t}\xi+\nu_{\text{el}}}].$
(25)
The observables defined in Eq. (21) have the following expectation values from
the simulation:
$\displaystyle\langle\hat{F}_{Q}\rangle_{x}$
$\displaystyle=\sqrt{2\eta_{d}\eta_{t}}\real(\alpha_{x}),$ (26)
$\displaystyle\langle\hat{F}_{P}\rangle_{x}$
$\displaystyle=\sqrt{2\eta_{d}\eta_{t}}\imaginary(\alpha_{x}),$
$\displaystyle\langle\hat{S}_{Q}\rangle_{x}$
$\displaystyle=2\eta_{d}\eta_{t}\real(\alpha_{x})^{2}+1+\frac{1}{2}\eta_{d}\eta_{t}\xi+\nu_{\text{el}},$
$\displaystyle\langle\hat{S}_{P}\rangle_{x}$
$\displaystyle=2\eta_{d}\eta_{t}\imaginary(\alpha_{x})^{2}+1+\frac{1}{2}\eta_{d}\eta_{t}\xi+\nu_{\text{el}}.$
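A minimal sketch reproducing the values of Eq. (26) for given channel and detector parameters (the function name is ours):

```python
import numpy as np

def simulated_statistics(alpha_x: complex, eta_t: float, xi: float,
                         eta_d: float, nu_el: float):
    """Expectation values of F_Q, F_P, S_Q, S_P from Eq. (26)."""
    mu = np.sqrt(eta_d * eta_t) * alpha_x          # mean of P(y|x), Eq. (25)
    var = 1 + 0.5 * eta_d * eta_t * xi + nu_el     # variance term of Eq. (25)
    F_Q = np.sqrt(2) * mu.real
    F_P = np.sqrt(2) * mu.imag
    S_Q = 2 * mu.real ** 2 + var
    S_P = 2 * mu.imag ** 2 + var
    return F_Q, F_P, S_Q, S_P

# Example with the parameters of Sec. VI at 20 km:
eta_t = 10 ** (-0.02 * 20)
stats = simulated_statistics(0.7, eta_t, xi=0.01, eta_d=0.719, nu_el=0.01)
```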
### V.3 Estimation of error correction cost
We estimate the cost of error correction from the simulated statistics. From
the probability density function $P(y|x)$ shown in Eq. (25), we can obtain the
joint probability distribution $\widetilde{P}(x,z)$ for Alice’s choice
$\mathbf{X}=x$ and Bob’s discretized key value $\mathbf{Z}=z$ by the following
integral
$\displaystyle\widetilde{P}(z|x)=\int_{\Delta_{a}}^{\infty}dr\;r\int_{\frac{2z-1}{4}\pi}^{\frac{2z+1}{4}\pi}d\theta
P(re^{i\theta}|x).$ (27)
Since $\widetilde{P}(x)=p_{x}=\frac{1}{4}$, we then obtain the joint
probability distribution
$\widetilde{P}(x,z)=\widetilde{P}(z|x)\widetilde{P}(x)$. Using the definition
of $\operatorname{I}(\mathbf{X};\mathbf{Z})$ in terms of $\widetilde{P}(x,z)$,
we can approximate the cost of error correction by Eq. (6) for the reverse
reconciliation scheme considered in this work. When $\Delta_{a}$ is not zero,
that is, in the presence of postselection, the sifting factor
$p_{\text{pass}}$ is the sum of $\widetilde{P}(x,z)$ over
$x,z\in\\{0,1,2,3\\}$. We then renormalize the probability distribution before
plugging it into the definition of $\operatorname{I}(\mathbf{X};\mathbf{Z})$.
For the purpose of illustration, we choose the error correction efficiency
$\beta$ to be 95% for our simulations, which is typical of state-of-the-art error correction codes (see e.g. Ref. [Milicevic2018]).
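Putting Eqs. (6), (25) and (27) together, the error-correction cost and sifting factor can be estimated with a short numerical integration; the sketch below (helper names are ours) uses scipy for the polar integrals.

```python
import numpy as np
from scipy.integrate import dblquad

def delta_EC(alpha, eta_t, xi, eta_d, nu_el, delta_a=0.0, beta=0.95):
    """Error-correction cost of Eq. (6) and sifting factor p_pass, computed
    from the simulated conditional density P(y|x) of Eq. (25)."""
    var = 1 + 0.5 * eta_d * eta_t * xi + nu_el
    mus = [np.sqrt(eta_d * eta_t) * alpha * np.exp(1j * x * np.pi / 2)
           for x in range(4)]

    def density(r, theta, mu):  # P(y|x) of Eq. (25) in polar coordinates
        return np.exp(-abs(r * np.exp(1j * theta) - mu) ** 2 / var) / (np.pi * var)

    # Joint distribution P~(x, z) = P~(z|x) * p_x, with Eq. (27) for P~(z|x).
    Pxz = np.zeros((4, 4))
    for x in range(4):
        mu = mus[x]
        for z in range(4):
            val, _ = dblquad(lambda theta, r: r * density(r, theta, mu),
                             delta_a, np.inf,
                             (2 * z - 1) * np.pi / 4, (2 * z + 1) * np.pi / 4)
            Pxz[x, z] = 0.25 * val

    p_pass = Pxz.sum()
    Pxz /= p_pass                          # renormalize after postselection
    px, pz = Pxz.sum(axis=1), Pxz.sum(axis=0)
    I_XZ = sum(Pxz[x, z] * np.log2(Pxz[x, z] / (px[x] * pz[z]))
               for x in range(4) for z in range(4) if Pxz[x, z] > 0)
    H_Z = -sum(w * np.log2(w) for w in pz if w > 0)
    return H_Z - beta * I_XZ, p_pass
```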
## VI Key rate in the absence of postselection
In this section, we present results when no postselection is performed, that
is, $\Delta_{a}=0$. We make two comparisons. The first one is to compare key
rates in the trusted and untrusted detector noise scenarios. The second one is
to analyze how different imperfections in detectors affect key rates in the
trusted detector noise scenario.
### VI.1 Comparison between trusted and untrusted detector noise scenarios
For this comparison, we supply the same set of simulated data from Eq. (26) to
the optimization problem for the untrusted detector noise scenario in Eq. (8)
and the one for the trusted detector noise scenario in Eq. (22). For
simulation, we choose parameters $\eta_{d}=0.719$, $\nu_{\text{el}}=0.01$ from
Ref. [Soh2015] for illustration. The result is shown in Fig. 3.
Figure 3: Secure key rate versus the transmission distance for untrusted
detector noise (black diamonds) and trusted detector noise (red stars)
scenarios. The excess noise is $\xi=0.01$ at the input of the quantum channel.
Parameters for detector are $\eta_{d}=0.719$, $\nu_{\text{el}}=0.01$[Soh2015].
The error correction efficiency is $\beta=0.95$. The coherent state amplitude
is optimized via a coarse-grained search over the interval $[0.5,0.9]$ with a
step size of $0.05$ and the channel transmittance is $\eta_{t}=10^{-0.02L}$
for each distance $L$ in kilometers. The effective channel excess noise in the
untrusted detector scenario is shown with the $y$ axis on the right. At 20 km,
the effective channel excess noise $\xi_{\text{eff}}$ is roughly 0.045.
As we can see from this figure, the key rate in the untrusted detector noise scenario drops quickly at short distances below 20 km, even though the electronic noise is only 0.01 SNU, a low value compared to detectors used in many other CV experiments. On the other hand, the key rate in the trusted detector noise scenario extends to much longer distances, exhibiting a behavior similar to the results in Ref. [Lin2019], where the detector is treated as ideal. One explanation is the observation in Ref. [Lin2019] that the key rate of the QPSK scheme drops quickly when the channel excess noise $\xi$ is large. Since the value of $\xi$ is reported at the input of the quantum channel while the value of $\nu_{\text{el}}$ is measured at Bob's side, treating $\nu_{\text{el}}$ as part of the channel excess noise in the untrusted detector noise scenario requires defining an effective value of $\xi$ that includes $\nu_{\text{el}}$. For this effective value $\xi_{\text{eff}}$, the electronic noise $\nu_{\text{el}}$ needs to be scaled by a factor of $1/\eta_{t}$ (in addition to $1/\eta_{d}$), which becomes large at even moderate distances as $\eta_{t}$ decreases. As a result, the redefined value $\xi_{\text{eff}}$ is quite large, as shown in Fig. 3, and this behavior of the key rate is then expected. Given this observation, it is not surprising that for a larger value of electronic noise, the key rate in the untrusted detector noise scenario drops to zero at an even shorter distance.
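For concreteness, one reading of this rescaling that reproduces the value quoted in the caption of Fig. 3 ($\xi_{\text{eff}}\approx 0.045$ at 20 km) is $\xi_{\text{eff}}=\xi+\nu_{\text{el}}/(\eta_{t}\eta_{d})$; the short sketch below, with this formula taken as an assumption, tabulates it for a few distances.

```python
import numpy as np

eta_d, nu_el, xi = 0.719, 0.01, 0.01
L = np.array([5, 10, 20, 40, 60])                  # distance in km
eta_t = 10 ** (-0.02 * L)

# Electronic noise referred back to the channel input (assumed rescaling)
xi_eff = xi + nu_el / (eta_t * eta_d)
for Li, et, xe in zip(L, eta_t, xi_eff):
    print(f"L = {Li:3d} km: eta_t = {et:.3f}, xi_eff = {xe:.3f}")
```

The rapid growth of $1/\eta_{t}$ with distance is what makes the effective noise prohibitive in the untrusted scenario.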
### VI.2 Detector imperfection in the trusted detector noise scenario
To guide the experimental implementation of the QPSK scheme, we may be
interested in the robustness of the protocol in the presence of detector
inefficiency and electronic noise in the trusted detector noise scenario. For
this purpose, we investigate the effects of different levels of detector
efficiency and electronic noise on the key rate. For curves in Figs. 4 and 5,
our simulation uses the same channel model but different detector
imperfections, that is, in Eq. (26), the same values of channel parameters
$\eta_{t}$ and $\xi$ but different values of detector efficiency $\eta_{d}$
and electronic noise $\nu_{\text{el}}$ (as specified in the captions) for
different curves.
In Fig. 4, we choose values of $\eta_{d}$ and $\nu_{\text{el}}$ for a homodyne detector from two experiments [Jouguet2013, Soh2015] and compare these results with the ideal detector. For the comparison, we optimize $\alpha$ via a coarse-grained search for each distance. We see that with a noisy heterodyne detector, the key rate drops moderately from that of an ideal detector; the decrease acts roughly as a constant prefactor on the key rate. As expected, a noisier detector yields a lower key rate.
Figure 4: Secure key rate versus transmission distance for different detector
imperfections reported in experiments in a comparison to the ideal detector.
Other parameters are the excess noise $\xi=0.01$, error-correction efficiency
$\beta=0.95$, and the transmittance $\eta_{t}=10^{-0.02L}$ for each distance
$L$ in kilometers. For each distance, the coherent state amplitude $\alpha$ is
optimized via a coarse-grained search in the interval $[0.5,0.9]$ with a step
size of $0.05$. Black curve with diamond markers is for the ideal heterodyne
detector; red curve with star markers is for the detector used in Ref.
[Soh2015]; cyan curve with square markers is for the detector used in Ref.
[Jouguet2013].
To show that different values of electronic noise have little impact on the secure key rate in the trusted noise scenario, we compare key rates for two choices of the electronic noise value in Fig. 5a while fixing the detector efficiency $\eta_{d}$ to 0.7. As the key rate difference is
relatively small between the curve with $\nu_{\text{el}}=0.05$ and that with
$\nu_{\text{el}}=0.08$, we also plot the difference of key rate (that is, the
key rate with $\nu_{\text{el}}=0.05$ minus the key rate with
$\nu_{\text{el}}=0.08$) in the same figure. (Note that the non-smoothness in
the curve of difference is due to the coarse-grained search for the coherent
state amplitude in the presence of the numerical performance issue discussed
in Ref. [Lin2019].) We observe that when the electronic noise is trusted, its
impact on the secure key rates is insignificant. This result eases the
requirements of a detector in a CVQKD experiment with the QPSK scheme.
Similarly, we investigate the effects of detector efficiency in Fig. 5b. In
particular, we fix the value of electronic noise $\nu_{\text{el}}$ to be 0.05
SNU and plot four choices of detector efficiency between 0.5 and 0.8. We see
the key rate curves are close to each other.
Figure 5: Secure key rate versus transmission distance for different detector
imperfections with the excess noise $\xi=0.01$. For both plots, the coherent
state amplitude is optimized via a coarse-grained search over the interval
$[0.5,0.9]$ with a step size 0.05 and $\beta=0.95$. (a) Comparison of key
rates between two values of the electronic noise when the detector efficiency
is set to be $\eta_{d}=0.7$ for both curves. The difference of two curves is
also plotted with the secondary $y$-axis on the right. (b) Comparison of key
rates for different values of detector efficiency when the electronic noise is
$\nu_{\text{el}}=0.05.$
In Fig. 6, we investigate the tradeoff between trusting the detector
efficiency and lumping it together with the channel transmittance, similar to
a scenario studied in Ref. [Zhang2020] for discrete-variable systems. For the
fixed amount of total transmittance $\eta:=\eta_{t}\eta_{d}$, it is
interesting to see how trusting different values of detector efficiency
affects the key rate. We observe that when the product of the channel transmittance $\eta_{t}$ and the detector efficiency $\eta_{d}$ is fixed, a lower detector efficiency $\eta_{d}$, meaning that a larger share of the total loss is trusted, yields a higher key rate. A similar observation was made for discrete-variable systems in Ref. [Zhang2020].
Figure 6: Secure key rate versus the detector efficiency $\eta_{d}$ for a
fixed value of total transmittance $\eta:=\eta_{t}\eta_{d}=0.3155$. This
figure studies the tradeoff between the key rate and the amount of trusted
loss. Other parameters are the excess noise $\xi=0.01$, the electronic noise
$\nu_{\text{el}}=0.01$, and the error-correction efficiency $\beta=0.95$. We
include two curves for different choices of coherent state amplitude $\alpha$.
To summarize, in a discrete modulation experiment, if one can obtain accurate values of $\eta_{d}$ and $\nu_{\text{el}}$ through a suitable calibration procedure and maintain the channel excess noise $\xi$ at a low value such as $0.01$, then the QPSK scheme can reach distances beyond 100 km in the asymptotic regime. We remark that the optimal amplitude for the QPSK scheme in the trusted detector noise scenario is around 0.75, corresponding to a mean photon number of around 0.56, similar to the optimal amplitude in the ideal or untrusted detector noise scenario reported in our previous work [Lin2019]. This mean photon number is much lower than that of Gaussian modulation schemes.
## VII Key rate with postselection
In this section, we investigate the effects of postselection in the trusted
detector noise scenario. As demonstrated in our previous analysis [Lin2019],
postselection of data can improve the key rate of the QPSK scheme in the
untrusted detector noise scenario. Postselection is simple to implement in an
experiment. It not only improves the key rate but also reduces the required
volume of data postprocessing. Thus, it is advantageous to include a
postselection step in the protocol. As expected, we show here that this
advantage also exists in the trusted detector noise scenario.
In Fig. 7, we search for the optimal postselection parameter for different
transmission distances and take the distances $L=50$ km and $L=75$ km as
examples. For this figure, we also optimize the choice of coherent state
amplitude via a coarse-grained search. The $x$ axis in each plot is the
postselection parameter $\Delta_{a}$. We observe the optimal value of the
postselection parameter $\Delta_{a}$ is around 0.6 for both $L=50$ km and
$L=75$ km. We also observe that the optimal choice of the postselection
parameter $\Delta_{a}$ does not change significantly for different distances.
Figure 7: (a) Secure key rate versus postselection parameter $\Delta_{a}$ for
$L=50$ km. (b) Secure key rate versus postselection parameter $\Delta_{a}$ for
$L=75$ km. For both plots, the channel excess noise is $\xi=0.01$ and the
error-correction efficiency $\beta=0.95$. The coherent state amplitude is
optimized via a coarse-grained search in the interval [0.6, 0.8] with a step
size of 0.05. Parameters for detectors are $\eta_{d}=0.552$ and
$\nu_{\text{el}}=0.015$ from Ref. [Jouguet2013].
In Fig. 8, we show the key rate as a function of transmission distance for two scenarios: with and without postselection. Since the optimal postselection parameter does not change significantly for different distances, we optimize the postselection parameter $\Delta_{a}$ via a coarse-grained search in a restricted interval. For this figure, we fix the coherent state amplitude to be 0.75 and the channel excess noise $\xi$ to be 0.01. We see postselection can indeed improve the key rate. The improvement over the key rate without postselection is roughly between 5% and 8%, and the probability of being postselected is around 70% to 80%. Thus, postselection can reduce the amount of data for postprocessing by around 20% to 30% while improving the key rate.
Figure 8: Comparison of key rates with and without postselection. Detector
parameters are from Ref. [Jouguet2013] where $\eta_{d}=0.552$ and
$\nu_{\text{el}}=0.015$. The difference of two curves is also plotted with the
secondary $y$ axis on the right. Other parameters are the channel excess noise
$\xi=0.01$, the coherent state amplitude $\alpha=0.75$, and the error-
correction efficiency $\beta=0.95$. The postselection parameter is optimized
via a coarse-grained search in the interval [0.45,0.7] with a step size 0.05.
We end this section with a remark on the postselection pattern. The
postselection pattern (see Fig. 1) studied in this work is a simple,
intuitive, and convenient choice when we evaluate the region operators.
However, it is not necessarily the optimal way to postselect data
[Silberhorn2002, Heid2006]. It is an interesting future work to investigate
other patterns of postselection.
## VIII Summary and future directions
We provide a method to analyze the asymptotic security of a discrete
modulation scheme of CVQKD in the trusted detector noise scenario where both
nonunity detector efficiency and electronic noise are trusted. In particular,
we find the POVM elements corresponding to a noisy heterodyne detector. As we
demonstrate our method on the quadrature phase-shift keying scheme, we show
that when the detector imperfection is trusted, the key rates are similar to
the one with the ideal heterodyne detector studied previously [Lin2019]. Our
analysis in this work eases the requirements of an experimental implementation
of the discrete modulation scheme as the detector imperfection is usually a
major source of noise.
We point out the limitations of the current work. First, the analysis in this work is still restricted to the asymptotic scenario. We notice that there is a recent work on the finite-key analysis of the binary modulation protocol [Matsuura2020]. However, the key rate there was very pessimistic, and one expects that quadrature phase-shift keying schemes will have much better performance. It remains an open question to provide a finite-key analysis of general discrete modulation beyond binary modulation. As we recently extended the underlying numerical method used in this security analysis to the finite-key regime [George2020], we hope to perform the finite-key analysis for discrete modulation schemes, especially the protocol studied in this work. However, there remain technical challenges to solve before such an analysis can be carried out, and thus we leave the finite-key analysis for future work. The second limitation is the photon-number cutoff assumption also used in Refs. [Ghorai2019, Lin2019]. While numerical evidence shows that our results are stable when the cutoff photon number is chosen appropriately, we plan a more rigorous analysis of the effects of truncation beyond numerical evidence in future work. Third, we present simulation results in a simple scenario where the two homodyne components are treated as identical. This scenario is commonly assumed in previous studies of Gaussian modulation schemes. In the simple scenario, we are able to provide simplified expressions for the region operators and the observables used in the key rate optimization problem. However, the principles presented in this paper work for the general case where the two detectors are not identical. To handle the general case, one may perform the numerical integration of the POVM elements $G_{y}$ to find the necessary operators in the photon-number basis from the photon-number basis representation of each POVM element $G_{y}$ shown in Appendix LABEL:app_sec:general_case. It may become numerically demanding to perform these integrals. Alternatively, one may attempt to simplify expressions analytically, similar to what we have done for the simple case. It remains a technical question to efficiently compute the matrix elements of operators defined in terms of $G_{y}$ in the photon-number basis, which we expect can be solved. Nevertheless, this current limitation does not affect the principles and methodology we present in this work for the treatment of trusted detector noise. It is also expected that observations in the general case will be similar to those we make here in the simple case.
Finally, we remark on the generality of our method of treating trusted
detector noise. If a different physical model of a detector is adopted (which
needs to be verified experimentally), we expect that a similar method as
described here can be used to find a correct POVM description for the given
physical model and then this POVM can be used in the security analysis.
###### Acknowledgements.
We thank Mi Zou and Feihu Xu for helpful discussions related to experiments.
We also thank Twesh Upadhyaya for code review. The work is performed at the
Institute for Quantum Computing (IQC), University of Waterloo, which is
supported by Industry Canada. J. L. acknowledges the support of Mike and
Ophelia Lazaridis Fellowship from IQC. The research has been supported by
NSERC under the Discovery Grants Program, Grant No. 341495, and under the
Collaborative Research and Development Program, Grant No. CRDP J 522308-17.
Financial support for this work has been partially provided by Huawei
Technologies Canada Co., Ltd.
## APPENDIX A Derivation of noisy heterodyne detection POVM via Wigner
functions
### A.1 Basic Wigner functions
As we use the Wigner function approach for our derivation, we recall useful expressions from Ref. [Leonhardt2010] for later reference.
To calculate $\Tr(FG)$ for two operators $F$ and $G$ in terms of their Wigner
functions $W_{F}$ and $W_{G}$, the overlap formula is
$\displaystyle\Tr(FG)=\pi\int d^{2}\alpha\;W_{F}(\alpha)W_{G}(\alpha).$ (28)
We can easily generalize the formula to multimode cases. The input-output
Wigner functions under a beam-splitter transformation whose transmittance is
$\eta$ are related by
$\displaystyle
W_{\text{out}}(\alpha,\beta)=W_{\text{in}}(\sqrt{\eta}\alpha+\sqrt{1-\eta}\beta,\sqrt{1-\eta}\alpha-\sqrt{\eta}\beta).$
(29)
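As a quick consistency check of Eq. (29), the following SymPy sketch verifies that, under this beam-splitter relation, a coherent state $\ket{\gamma}$ in the first mode and vacuum in the second are mapped to the two attenuated coherent states $\sqrt{\eta}\gamma$ and $\sqrt{1-\eta}\gamma$; only the Gaussian exponents need comparing, since the prefactors already agree.

```python
import sympy as sp

# Real quadratures of the two output modes and of the input displacement gamma
q1, p1, q2, p2, gq, gp = sp.symbols('q1 p1 q2 p2 gq gp', real=True)
eta = sp.Symbol('eta', positive=True)
se, sr = sp.sqrt(eta), sp.sqrt(1 - eta)

def n2(q, p):
    """Squared modulus |q + i p|**2."""
    return q**2 + p**2

# Arguments of W_in prescribed by Eq. (29)
aq, ap = se * q1 + sr * q2, se * p1 + sr * p2
bq, bp = sr * q1 - se * q2, sr * p1 - se * p2

# Exponent of W_out for the input |gamma> (x) |0>
E_out = -2 * (n2(aq - gq, ap - gp) + n2(bq, bp))
# Exponent for two attenuated coherent states sqrt(eta)gamma and sqrt(1-eta)gamma
E_exp = -2 * (n2(q1 - se * gq, p1 - se * gp) + n2(q2 - sr * gq, p2 - sr * gp))
print(sp.simplify(E_out - E_exp))   # -> 0
```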
We list Wigner functions for some quantum states that are relevant for our
discussions here. The Wigner function of a vacuum state $\ket{0}$ is
$\displaystyle
W_{\ket{0}}(\gamma)=\frac{2}{\pi}e^{-2\absolutevalue{\gamma}^{2}}.$ (30)
The Wigner function of a thermal state $\rho_{\text{th}}(\bar{n})$ with the
mean photon number $\bar{n}$ is
$\displaystyle
W_{\rho_{\text{th}}(\bar{n})}(\gamma)=\frac{2}{\pi}\frac{1}{1+2\bar{n}}e^{-\frac{2\absolutevalue{\gamma}^{2}}{1+2\bar{n}}}.$
(31)
The Wigner function of a displaced thermal state (DTS)
$\rho_{\text{DTS}}(\alpha,\bar{n}):=\hat{D}(\alpha)\rho_{\text{th}}(\bar{n})\hat{D}^{\dagger}(\alpha)$
with the amount of displacement $\alpha$ is
$\displaystyle
W_{\rho_{\text{DTS}}(\alpha,\bar{n})}(\gamma)=\frac{2}{\pi}\frac{1}{1+2\bar{n}}e^{-\frac{2\absolutevalue{\gamma-\alpha}^{2}}{1+2\bar{n}}}.$
(32)
We notice that if we set $\alpha=0$, it reduces to Eq. (31).
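The following NumPy sketch is a numerical sanity check of Eqs. (28), (30) and (32): it verifies that the displaced-thermal-state Wigner function is normalized and that the overlap formula reproduces the closed form $\Tr(\ket{0}\bra{0}\,\rho_{\text{DTS}}(\alpha,\bar{n}))=e^{-\absolutevalue{\alpha}^{2}/(1+\bar{n})}/(1+\bar{n})$; the grid size and the test values of $\alpha$ and $\bar{n}$ are arbitrary choices.

```python
import numpy as np

# Phase-space grid for gamma = q + i p
q = np.linspace(-8, 8, 801)
dq = q[1] - q[0]
Q, Pg = np.meshgrid(q, q)
G = Q + 1j * Pg

def W_dts(gamma, alpha, nbar):
    """Displaced thermal state Wigner function, Eq. (32)."""
    return (2 / np.pi) / (1 + 2 * nbar) * np.exp(-2 * abs(gamma - alpha)**2 / (1 + 2 * nbar))

alpha, nbar = 1.2 + 0.5j, 0.3
W0 = W_dts(G, 0.0, 0.0)          # vacuum, Eq. (30)
W1 = W_dts(G, alpha, nbar)       # displaced thermal state

print("norm:", round(float(W1.sum() * dq**2), 6))          # -> 1.0

# Overlap formula Eq. (28): Tr(|0><0| rho_DTS) = pi * int W0 W1 d^2 gamma
overlap_num = np.pi * float((W0 * W1).sum() * dq**2)
overlap_exact = np.exp(-abs(alpha)**2 / (1 + nbar)) / (1 + nbar)
print("overlap:", round(overlap_num, 6), "vs", round(overlap_exact, 6))
```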
It is also useful to note the Wigner functions of a squeezed thermal state
(STS) and of a displaced squeezed thermal state (DSTS). Let $\hat{S}(\xi)$
denote the squeezing operator with a squeezing parameter $\xi$. For our
discussion, we restrict $\xi\in\mathbb{R}$. For a squeezed thermal state
$\rho_{\text{STS}}(\xi,\bar{n}):=\hat{S}(\xi)\rho_{\text{th}}(\bar{n})\hat{S}^{\dagger}(\xi)$,
its Wigner function reads (see, e.g., Eq. (4.13) of Ref. [Kim1989])
$\displaystyle W_{\rho_{\text{STS}}(\xi,\bar{n})}(\gamma)=\frac{2}{\pi}\frac{1}{1+2\bar{n}}\exp\left(-2\,\frac{e^{2\xi}\real(\gamma)^{2}+e^{-2\xi}\imaginary(\gamma)^{2}}{1+2\bar{n}}\right).$
(33)
The Wigner function of a displaced squeezed thermal state
$\rho_{\text{DSTS}}(\alpha,\xi,\bar{n}):=\hat{D}(\alpha)\hat{S}(\xi)\rho_{\text{th}}(\bar{n})\hat{S}^{\dagger}(\xi)\hat{D}^{\dagger}(\alpha)$
can be similarly written as
$\displaystyle W_{\rho_{\text{DSTS}}(\alpha,\xi,\bar{n})}(\gamma)=\frac{2}{\pi}\frac{1}{1+2\bar{n}}\exp\left(-2\,\frac{e^{2\xi}\real(\gamma-\alpha)^{2}+e^{-2\xi}\imaginary(\gamma-\alpha)^{2}}{1+2\bar{n}}\right).$
(34)
### A.2 Derivation
Figure 9: A concise but equivalent view of the noisy heterodyne detector model depicted in Fig. 2. Input modes are labeled in terms of Wigner functions.
As the physical model of a noisy heterodyne detector is presented in Fig. 2,
our goal here is to find the corresponding POVM elements that correctly
produce the probability density function $P(y)$ of obtaining an outcome
$y\in\mathbb{C}$ for an arbitrary input state $\rho$ to the detector. In our
trusted noise model, the homodyne detector for the $q$ quadrature measurement
has its detector efficiency $\eta_{1}$ and electronic noise $\nu_{1}$ which is
related to a thermal state of the mean photon number
$\bar{n}_{1}=\frac{\nu_{1}}{2(1-\eta_{1})}$. Similarly, the homodyne detector for the $p$ quadrature measurement has its detector efficiency $\eta_{2}$ and electronic noise $\nu_{2}$, which corresponds to a thermal state with the mean photon number $\bar{n}_{2}=\frac{\nu_{2}}{2(1-\eta_{2})}$. Figure 9 shows a compact but equivalent representation of Fig. 2 with the Wigner functions associated to the input modes. In this setup, for the output state $W_{\text{out}}(\alpha,\beta,\gamma,\omega)$ at the step labeled in Fig. 9, we measure the $q$ quadrature of mode $\alpha$ and the $p$ quadrature of mode $\beta$ with two ideal homodyne detectors, and discard the remaining modes $\gamma$
and $\omega$. The Wigner function of an ideal homodyne detector for the $q$
quadrature measurement that produces a measurement outcome $\real(y)$ is
$W_{H_{\real(y)}}(\alpha)=\frac{1}{\sqrt{2}\pi}\delta(\real(\alpha)-\frac{\real(y)}{\sqrt{2}})$
where $\delta$ is the Dirac delta function and similarly, the one for the $p$
quadrature measurement with a measurement outcome $\imaginary(y)$ is
$W_{H_{\imaginary(y)}}(\alpha)=\frac{1}{\sqrt{2}\pi}\delta(\imaginary(\alpha)-\frac{\imaginary(y)}{\sqrt{2}})$.
The factors of $\sqrt{2}$ are included such that we can rederive the ideal
heterodyne detector POVM $\\{E_{y}:y\in\mathbb{C}\\}$ in the limit of unity
detector efficiency and zero electronic noise. To discard modes $\gamma$ and
$\omega$ that are not measured, we perform the integration over variables
$\gamma$ and $\omega$.
For any input state $\rho$ to the detector, one can in principle obtain the
underlying probability density function $P(y)=\Tr(\rho G_{y})$ for every
measurement outcome $y\in\mathbb{C}$. As the correct POVM element $G_{y}$
needs to produce the observed probability density function $P(y)=\Tr(\rho
G_{y})$, this requirement in terms of Wigner functions becomes $P(y)=\pi\int
d^{2}\alpha W_{\rho}(\alpha)W_{G_{y}}(\alpha)$, where $W_{\rho}$ is the Wigner
function of the input state $\rho$ and $W_{G_{y}}$ is the Wigner function of
the operator $G_{y}$, by the overlap formula in Eq. (28). In Fig. 9, we know the mathematical description of the measurements on the right, but the description of the state $W_{\text{out}}$ is unknown. On the other hand, we want to find the description of the measurement acting directly on the input state; the Wigner functions of the input state and of the ancillary modes on the left are either assumed to be given or known. To connect the known descriptions on the two sides of this diagram and find the desired Wigner function of the POVM element $G_{y}$ acting directly on the input state, we start from the right-hand side with the unknown four-mode state $W_{\text{out}}$ and the known measurements on these modes, perform inverse beam-splitter transformations from right to left, and finally obtain $W_{G_{y}}$ by integrating over all variables other than $\alpha$. Starting with the multimode overlap formula for $P(y)$ on the right-hand side of the diagram and performing this process, we obtain
$\displaystyle P(y)$ $\displaystyle=\pi^{4}\int d^{2}\alpha\int d^{2}\beta\int
d^{2}\gamma\int
d^{2}\omega\;\frac{1}{\pi^{2}}W_{\text{out}}(\alpha,\beta,\gamma,\omega)W_{H_{\real(y)}}(\alpha)W_{H_{\imaginary(y)}}(\beta)$
(35) $\displaystyle=\pi^{2}\int d^{2}\alpha\;W_{\rho}(\alpha)\int
d^{2}\beta\;W_{\ket{0}}(\beta)\int
d^{2}\gamma\;W_{\rho_{\text{th}}(\bar{n}_{1})}(\gamma)W_{H_{\real(y)}}(\sqrt{\eta_{1}}\frac{\alpha+\beta}{\sqrt{2}}+\sqrt{1-\eta_{1}}\gamma)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times\int
d^{2}\omega\;W_{\rho_{\text{th}}(\bar{n}_{2})}(\omega)W_{H_{\imaginary(y)}}(\sqrt{\eta_{2}}\frac{\alpha-\beta}{\sqrt{2}}+\sqrt{1-\eta_{2}}\omega).$
The next step is to substitute the Wigner function of the vacuum state in Eq.
(30) and that of the thermal state in Eq. (31) and then to perform the
integrals over variables $\beta,\gamma$ and $\omega$. We first integrate over
the variable $\omega$. The relevant integral that involves the variable
$\omega$ is
$\displaystyle\int
d^{2}\omega\;W_{\rho_{\text{th}}(\bar{n}_{2})}(\omega)W_{H_{\imaginary(y)}}(\sqrt{\eta_{2}}\frac{\alpha-\beta}{\sqrt{2}}+\sqrt{1-\eta_{2}}\omega)$
(36) $\displaystyle=$
$\displaystyle\frac{1}{\pi\sqrt{\pi}}\frac{1}{\sqrt{(1-\eta_{2})(1+2\bar{n}_{2})}}\exp(-\frac{\eta_{2}[\imaginary(\beta)+\frac{1}{\sqrt{\eta_{2}}}\imaginary(y)-\imaginary(\alpha)]^{2}}{(1+2\bar{n}_{2})(1-\eta_{2})}).$
Next, we perform the integral related to the variable $\gamma$. Since Eq. (36) does not involve the variable $\gamma$, we do not need to substitute it back to solve the integral over $\gamma$. The integration shown in Eq. (37) is similar to the one we just performed in Eq. (36).
$\displaystyle\int
d^{2}\gamma\;W_{\rho_{\text{th}}(\bar{n}_{1})}(\gamma)W_{H_{\real(y)}}(\sqrt{\eta_{1}}\frac{\alpha+\beta}{\sqrt{2}}+\sqrt{1-\eta_{1}}\gamma)$
(37) $\displaystyle=$
$\displaystyle\frac{1}{\pi\sqrt{\pi}}\frac{1}{\sqrt{(1-\eta_{1})(1+2\bar{n}_{1})}}\exp(-\frac{\eta_{1}\big{[}\real(\beta)-\frac{1}{\sqrt{\eta_{1}}}\real(y)+\real(\alpha)\big{]}^{2}}{(1+2\bar{n}_{1})(1-\eta_{1})}).$
Finally, we integrate over the variable $\beta$, substituting the results of Eqs. (36) and (37) back into Eq. (35). The prefactor simplifies to $\frac{1}{\pi^{3}}\frac{1}{\sqrt{(1-\eta_{1})(1+2\bar{n}_{1})(1-\eta_{2})(1+2\bar{n}_{2})}}$. Apart from this prefactor, we perform the following integral
$\displaystyle\int d^{2}\beta\;W_{\ket{0}}(\beta)\exp(-\frac{\eta_{1}\big{[}\real(\beta)-\frac{1}{\sqrt{\eta_{1}}}\real(y)+\real(\alpha)\big{]}^{2}}{(1+2\bar{n}_{1})(1-\eta_{1})})\exp(-\frac{\eta_{2}\big{[}\imaginary(\beta)+\frac{1}{\sqrt{\eta_{2}}}\imaginary(y)-\imaginary(\alpha)\big{]}^{2}}{(1+2\bar{n}_{2})(1-\eta_{2})}).$ (38)
# Efficient Fusion of Photonic $W$-states with Nonunitary Partial-swap Gates

Hai-Rui Wei1∗, Wen-Qiang Liu1 and Leong-Chuan Kwek2,3,4

1 School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
2 Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore
3 MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, Singapore UMI 3654, Singapore
4 National Institute of Education and Institute of Advanced Studies, Nanyang Technological University, Singapore 637616, Singapore

Email: <EMAIL_ADDRESS>

(Received: date / Accepted: date)
###### Abstract
We introduce a nonunitary partial-swap gate for efficiently fusing arbitrary small-sized photonic $W$-states into a large-scale entangled network of $W$-states without ancillary photons. The partial-swap gate is designed in an optical architecture based on linear optical elements. By introducing an auxiliary degree of freedom, this gate provides a higher success probability at a lower cost. Our implementation can create a larger target state with a simpler setup than previous proposals for $W$-state fusion. Moreover, all “garbage” states are recyclable; i.e., in principle there is no complete failure output in our scheme.
Keywords: $W$-state fusion, Multipartite entanglement, Quantum gate
## 1 Introduction
Quantum entanglement is a key resource in many quantum information processing
(QIP) tasks book , such as quantum teleportation teleportation1 ;
teleportation2 , quantum superdense coding superdense , quantum key
distribution distribution , quantum algorithms algorithm , and measurement-
based quantum computation one-way . In particular, depending on the
requirement of a specific task, the preparation and the manipulation of
multiqubit entangled states with different multipartite features (for
instance, the Greenberger–Horne–Zeilinger GHZ , Dicke Dick , $W$ W , and
cluster states cluster ) are needed, and these different types of entangled
states cannot be converted into each other by using local operations and
classical communications. However, there are still theoretical and
experimental challenges in study of multipartite entanglement due to the more
complex mathematical form and rapidly growing resource overhead with the
number of particles increasing. In recent years, the weblike property of the
$W$-state, due to its robustness against particle loss and decoherence
effects, has rendered it to be a useful resource for quantum communication
fujii2011robust . Indeed, the $W$-state has been shown to be relevant for many
schemes and applications ranging from its use in the foundation of quantum
mechanics test ; mattar2017experimental ; wu2014robust , in anonymous quantum
networks network , in quantum telecloning and teleportation telecloning , in
quantum computation cloning-machine ; Ai , in quantum memories
choi2010entanglement and as a probe for reading information guha2013reading .
So far, many theoretical proposals and realistic experiments for generating
small-size $W$ states have been proposed exp-W1 ; exp-W2 ; exp-W3 . Currently,
there are two efficient ways to generate large-scale photonic $W$ states:
expansion and fusion. Both schemes have now been achieved in a wide range of
physical settings optics1 ; optics2 ; ion ; NMR . In 2008, Tashima _et al_.
optics1 proposed a scheme for locally expanding any polarization-based
$n$-photon $W$ ($|W_{n}\rangle$) state to an $(n+2)$-photon $W$
($|W_{n+2}\rangle$) state by accessing just one of the qubits with a success
probability of $(n+2)/(16n)$. This scheme was subsequently demonstrated
experimentally in 2010 demonstrated . Schemes for expanding $|W_{n}\rangle$
locally to $|W_{n+1}\rangle$ with a success probability of $(n+1)/(5n)$ were
also proposed in 2009 expand2009 , and even one for $|W_{n}\rangle$ to
$|W_{n+k}\rangle$ was proposed in 2011 expand2011 . Notably, the success probability of the expansion from $|W_{n}\rangle$ to $|W_{n+k}\rangle$ decreases approximately exponentially with increasing $n$.
Fusion, on the other hand, was first proposed in 2011 by Özdemir _et al_.
Ozdemir . The idea was to join $|W_{n}\rangle$ and $|W_{m}\rangle$ to give the
$|W_{n+m-2}\rangle$ with a success probability of $(n+m-2)/(nm)$. In 2013,
enhancement the $W$-state fusion process was proposed through the use of a
Fredkin gate, Bugu _et al_. Bugu then proposed a mechanism to fuse
$|W_{n}\rangle$ and $|W_{m}\rangle$ with one ancillary single photon into
$|W_{n+m-1}\rangle$ with a success probability of $(n+m-1)/(mn)$. In 2014,
Ozaydin _et al_. Ozaydin generalized the setup for fusing three $W$ states:
$|W_{n}\rangle$, $|W_{m}\rangle$, and $|W_{t}\rangle$ states and one ancillary
single photon were joined into $|W_{n+m+t-3}\rangle$ with a success
probability of $(n+m+t-3)/(mnt)$ with a Fredkin gate. Using three CNOT gates
and one Toffoli gate, Yesilyurt _et al_. Yesilyurt further generalized the
scheme for creating $|W_{n+m+t+z-4}\rangle$ from $|W_{n}\rangle$,
$|W_{m}\rangle$, $|W_{z}\rangle$, and $|W_{t}\rangle$ states with a success
probability of $(n+m+t+z-4)/(mntz)$. However, the success probabilities of the
required CNOT KLM ; 1/9 ; 1/41 ; 1/42 , Toffoli Toffoli0 ; Toffoli1 ; Toffoli
; Toffoli2 ; Toffoli3 , and Fredkin Fredkin1 ; Fredkin ; Fredkin2 ; Fredkin3 ;
Fredkin4 gates with linear optical elements were generally not considered.
Currently, nonlinear fusion schemes for fusing $|W_{n}\rangle$ and
$|W_{m}\rangle$ into $|W_{n+m}\rangle$ without qubit loss have also been
proposed loss ; Gaoting .
In this paper, we propose a protocol for fusing $W$ states of arbitrary size into a larger one via nonunitary partial-swap gates. By introducing auxiliary spatial degrees of freedom and using $(n-1)$ partial-swap gates, a $|W_{nm-n+1}\rangle$ state can be created from $n$ arbitrary-sized $|W_{m}\rangle$ states. All the “garbage” states are recyclable, and our scheme avoids failed outcomes. Moreover, no additional ancillary photon is required. The cost (complexity) of our scheme, measured by the number of two-qubit entangling gates needed to construct it, is much lower than that of the Fredkin- and CNOT-Toffoli-based schemes Bugu ; Yesilyurt .
Figure 1: Schematic diagram of the proposed scheme for fusing two
$|W_{3}\rangle$ states into a larger one $|W_{5}\rangle$. The fusion gate
operates on the two qubits in the dashed blue rectangle.
## 2 Simplifying a fusion-based $W$ state with nonunitary partial-swap gate
### 2.1 Fusion of $|W_{n}\rangle$ and $|W_{m}\rangle$ to give
$|W_{n+m-1}\rangle$
Suppose Alice and Bob possess $n$\- and $m$-partite polarization encoded $W$
states, $|W_{n}\rangle_{A}$ and $|W_{m}\rangle_{B}$, respectively, and they
wish to fuse their states together. A schematic example for the fusion process
of two three-partite $W$-states is depicted in Fig. 1. The entangled
$W$-states of Alice ($|W_{n}\rangle_{A}$) and Bob ($|W_{m}\rangle_{B}$) can be
written as
$\displaystyle\begin{split}|W_{n}\rangle_{A}=(|(n-1)_{H}\rangle_{a}|1_{V}\rangle_{1}+\sqrt{n-1}|W_{n-1}\rangle_{a}|1_{H}\rangle_{1})/\sqrt{n},\end{split}$
(1)
$\displaystyle\begin{split}|W_{m}\rangle_{B}=(|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{2}+\sqrt{m-1}|W_{m-1}\rangle_{b}|1_{H}\rangle_{2})/\sqrt{m},\end{split}$
(2)
where $|(N-k)_{H}\rangle_{i}|k_{V}\rangle_{j}$ represents the superposition of
all possible permutations of $N-k$ photons with a horizontal polarization
($H$) in mode $i$ and $k$ photons with a vertical polarization ($V$) in mode
$j$. Capital letters $A$ and $B$ label the $W$ states held by Alice and Bob, respectively. Therefore, the initial joint state of Alice and Bob can be written as
$\displaystyle\begin{split}|W_{n}\rangle_{A}\otimes|W_{m}\rangle_{B}=&\frac{1}{\sqrt{nm}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{1}|1_{V}\rangle_{2}\\\
&+\frac{\sqrt{m-1}}{\sqrt{nm}}|(n-1)_{H}\rangle_{a}|W_{m-1}\rangle_{b}|1_{V}\rangle_{1}|1_{H}\rangle_{2}\\\
&+\frac{\sqrt{n-1}}{\sqrt{nm}}|W_{n-1}\rangle_{a}|(m-1)_{H}\rangle_{b}|1_{H}\rangle_{1}|1_{V}\rangle_{2}\\\
&+\frac{\sqrt{(n-1)(m-1)}}{\sqrt{nm}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|1_{H}\rangle_{1}|1_{H}\rangle_{2}.\end{split}$
(3)
Figure 2: Fusion gate for fusing two $W$ states of arbitrary size to obtain a larger $W$ state. The circle “$\circ$” denotes $|H\rangle$, signifying that the states of the two photons in modes 1 and 2 are exchanged if the photon in mode 1 is $H$-polarized, and unchanged otherwise. $D_{1}$ and $D_{2}$ are single-photon detectors.

Table 1: Truth table of the polarization partial-swap gate.

$x_{1}x_{2}$ | $\rightarrow$ | $y_{1}y_{2}$
---|---|---
$H_{1}H_{2}$ | | $H_{1}H_{2}$
$H_{1}V_{2}$ | | $V_{1}H_{2}$
$V_{1}H_{2}$ | | $V_{1}H_{2}$
$V_{1}V_{2}$ | | $V_{1}V_{2}$
As shown in Fig. 2, the fusion of the $|W_{n}\rangle_{A}$ and $|W_{m}\rangle_{B}$ states into a larger $W$ state is achieved by sending the photons in mode 1 (2), i.e., $|1_{H}\rangle_{1}$ and $|1_{V}\rangle_{1}$ ($|1_{H}\rangle_{2}$ and $|1_{V}\rangle_{2}$), into the partial-swap gate, while those in mode a (b) are kept intact at Alice's (Bob's) site. Note that the partial-swap gate swaps the states of the two photons if the first photon is $H$-polarized, and has no effect otherwise (see Table 1). That is, the action of this gate on the four input states yields
$\displaystyle\begin{split}&|1_{H}\rangle_{1}|1_{H}\rangle_{2}\xrightarrow{\text{p-swap}}|1_{H}\rangle_{1}|1_{H}\rangle_{2},\quad|1_{H}\rangle_{1}|1_{V}\rangle_{2}\xrightarrow{\text{p-swap}}|1_{V}\rangle_{1}|1_{H}\rangle_{2},\\\
&|1_{V}\rangle_{1}|1_{H}\rangle_{2}\xrightarrow{\text{p-swap}}|1_{V}\rangle_{1}|1_{H}\rangle_{2},\quad|1_{V}\rangle_{1}|1_{V}\rangle_{2}\xrightarrow{\text{p-swap}}|1_{V}\rangle_{1}|1_{V}\rangle_{2}.\end{split}$
(4)
Therefore, the partial-swap gate completes the transformation
$\displaystyle\begin{split}|W_{n}\rangle_{A}\otimes|W_{m}\rangle_{B}\rightarrow&\frac{1}{\sqrt{nm}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{1}|1_{V}\rangle_{2}\\\
&+\frac{\sqrt{m-1}}{\sqrt{nm}}|(n-1)_{H}\rangle_{a}|W_{m-1}\rangle_{b}|1_{V}\rangle_{1}|1_{H}\rangle_{2}\\\
&+\frac{\sqrt{n-1}}{\sqrt{nm}}|W_{n-1}\rangle_{a}|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{1}|1_{H}\rangle_{2}\\\
&+\frac{\sqrt{(n-1)(m-1)}}{\sqrt{nm}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|1_{H}\rangle_{1}|1_{H}\rangle_{2}\\\
=\;&\frac{\sqrt{n+m-1}}{\sqrt{nm}}|W_{n+m-1}\rangle_{a,b,2}|1_{V}\rangle_{1}+\frac{\sqrt{(n-1)(m-1)}}{\sqrt{nm}}\\\
&\times|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|1_{H}\rangle_{1}|1_{H}\rangle_{2}.\end{split}$
(5)
The photon in mode 1 is then measured in the $\\{|H\rangle,\;|V\rangle\\}$ basis. From Eq. (5), one sees that (i) when the photon in mode 1 is $V$-polarized and detector $D_{1}$ clicks, the scheme is successful with a probability (success probability) of $(n+m-1)/(nm)$, and the system collapses into the desired state
$\displaystyle\begin{split}\frac{\sqrt{n+m-1}}{\sqrt{nm}}|W_{n+m-1}\rangle_{a,b,2}.\end{split}$
(6)
(ii) When the photon in mode 1 is $H$-polarized and detector $D_{2}$ clicks, the remaining photons collapse into the state $|W_{n-1}\rangle_{a}\otimes|W_{m-1}\rangle_{b}\otimes|1_{H}\rangle_{2}$ with a probability (recycle probability) of $(n-1)(m-1)/(nm)$. It is interesting to see that the “garbage” state $|W_{n-1}\rangle_{a}\otimes|W_{m-1}\rangle_{b}$ of Alice and Bob remains a $W$ state, but with a reduced number of qubits; therefore this state can be recycled, much like a repeat-until-success scheme lim2005repeat . Remarkably, the failure probability of the designed scheme is zero in principle, as the system cannot collapse into failure states such as $|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{2}$.
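This zero-failure property can be checked directly; a few lines of Python with exact rational arithmetic confirm that the success and recycle probabilities read off from Eq. (5) sum to one for every tested $(n,m)$.

```python
from fractions import Fraction

for n in range(2, 10):
    for m in range(2, 10):
        p_succ = Fraction(n + m - 1, n * m)              # V click at D1, Eq. (6)
        p_recycle = Fraction((n - 1) * (m - 1), n * m)   # H click at D2
        assert p_succ + p_recycle == 1                   # no failure branch
print("success + recycle = 1 for all tested (n, m): no failure output")
```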
In Table 2, we compare our scheme with previous protocols. Here the success probabilities of the Fredkin, Toffoli, CNOT, and partial-swap gates themselves are disregarded; linear-optical entangling gates are inherently probabilistic. The optimal cost of a Fredkin or Toffoli gate is five two-qubit gates Optimal1 ; Optimal2 ; therefore, the complexity of our partial-swap-based scheme is much lower than that of the Fredkin-based scheme Bugu and the Toffoli-CNOT-based scheme Fu-T . Moreover, an extra ancillary photon is necessary for the schemes presented in Refs. Bugu ; Fu-T but is not required in our scheme. Remarkably, our protocol is less complex and has a higher success probability than any of the other protocols generating a larger $W$ state of the same size Ozdemir ; Bugu ; Fu-T .
Table 2: Quantum resources required and success probabilities of various protocols for creating a larger $W$ state. $H$ is an extra ancillary $H$-polarized photon required for the fusion program.

Protocol | Initial state | Success result | Success probability | Recycle probability | Fail probability
---|---|---|---|---|---
with $I$ Ozdemir | $W_{m},W_{n}$ | $W_{m+n-2}$ | $\frac{m+n-2}{mn}$ | $\frac{(m-1)(n-1)}{mn}$ | $\frac{1}{mn}$
with 1 Fredkin Bugu | $W_{m},W_{n},H$ | $W_{m+n-1}$ | $\frac{m+n-1}{mn}$ | $\frac{(m-1)(n-1)}{mn}$ | 0
with 1 Toffoli, 1 CNOT Fu-T | $W_{m},W_{n},H$ | $W_{m+n-1}$ | $\frac{m+n-1}{mn}$ | $\frac{(m-1)(n-1)}{mn}$ | 0
ours with 1 partial-swap | $W_{m},W_{n}$ | $W_{m+n-1}$ | $\frac{m+n-1}{mn}$ | $\frac{(m-1)(n-1)}{mn}$ | 0
### 2.2 Fusing $n$ arbitrary-sized $|W_{m}\rangle$ states into a large-scale $|W_{nm-n+1}\rangle$ state
Fig. 3 displays a scheme for fusing $|W_{n}\rangle_{A}$, $|W_{m}\rangle_{B}$,
and $|W_{t}\rangle_{C}$ states into $|W_{n+m+t-2}\rangle$ by using two
partial-swap gates. We denote polarization-based entangled $W$ states of
Alice, Bob, and Charlie as
$\displaystyle\begin{split}|W_{n}\rangle_{A}=(|(n-1)_{H}\rangle_{a}|1_{V}\rangle_{1}+\sqrt{n-1}|W_{n-1}\rangle_{a}|1_{H}\rangle_{1})/\sqrt{n},\end{split}$
(7)
$\displaystyle\begin{split}|W_{m}\rangle_{B}=(|(m-1)_{H}\rangle_{b}|1_{V}\rangle_{2}+\sqrt{m-1}|W_{m-1}\rangle_{b}|1_{H}\rangle_{2})/\sqrt{m},\end{split}$
(8)
$\displaystyle\begin{split}|W_{t}\rangle_{C}=(|(t-1)_{H}\rangle_{c}|1_{V}\rangle_{3}+\sqrt{t-1}|W_{t-1}\rangle_{c}|1_{H}\rangle_{3})/\sqrt{t}.\end{split}$
(9)
As shown in Fig. 3, after Alice, Bob, and Charlie send one of their photons
($|1_{H}\rangle_{1}$ and $|1_{V}\rangle_{1}$, $|1_{H}\rangle_{2}$ and
$|1_{V}\rangle_{2}$, $|1_{H}\rangle_{3}$ and $|1_{V}\rangle_{3}$) to the two
partial-swap gates through modes 1, 2, and 3, respectively, the two partial-
swap gates lead to the following transformations:
$\displaystyle\begin{split}|W_{n}\rangle\otimes&|W_{m}\rangle\otimes|W_{t}\rangle\\\
\rightarrow&\frac{1}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{V}\rangle_{1}|1_{V}\rangle_{2}|1_{V}\rangle_{3}\\\
&+\frac{\sqrt{n-1}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|(m-1)_{H}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{V}\rangle_{1}|1_{V}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{m-1}}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|W_{m-1}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{V}\rangle_{1}|1_{V}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{t-1}}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|W_{t-1}\rangle_{c}|1_{V}\rangle_{1}|1_{V}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{(n-1)(m-1)}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{H}\rangle_{1}|1_{V}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{(n-1)(t-1)}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|(m-1)_{H}\rangle_{b}|W_{t-1}\rangle_{c}|1_{V}\rangle_{1}|1_{H}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{(m-1)(t-1)}}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|W_{m-1}\rangle_{b}|W_{t-1}\rangle_{c}|1_{V}\rangle_{1}|1_{H}\rangle_{2}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{(n-1)(m-1)(t-1)}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|W_{t-1}\rangle_{c}|1_{H}\rangle_{1}|1_{H}\rangle_{2}|1_{H}\rangle_{3}.\end{split}$
(10)
Figure 3: Fusion gate for fusing three $W$ states of arbitrary size to obtain
a larger $W$ state.
Eq. (10) implies the following four possible outcomes:
(i) When the photon in mode 1 is $V$-polarized, the photon in mode 2 is also $V$-polarized, and detectors $D_{1}$ and $D_{3}$ click, the system collapses into the successful state $|W_{n+m+t-2}\rangle$
$\displaystyle\begin{split}|W_{n+m+t-2}\rangle=&\frac{1}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{V}\rangle_{3}\\\
&+\frac{\sqrt{n-1}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|(m-1)_{H}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{m-1}}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|W_{m-1}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{H}\rangle_{3}\\\
&+\frac{\sqrt{t-1}}{\sqrt{nmt}}|(n-1)_{H}\rangle_{a}|(m-1)_{H}\rangle_{b}|W_{t-1}\rangle_{c}|1_{H}\rangle_{3}.\end{split}$
(11)
(ii) When the photon in mode 1 is $H$-polarized, the photon in mode 2 is also $H$-polarized, and detectors $D_{2}$ and $D_{4}$ click, the system collapses into the recyclable state
$\displaystyle\frac{\sqrt{(n-1)(m-1)(t-1)}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|W_{t-1}\rangle_{c}|1_{H}\rangle_{3}.$
(12)
(iii) When the photon in mode 1 is $H$-polarized, the photon in mode 2 is $V$-polarized, and detectors $D_{2}$ and $D_{3}$ click, the system collapses into the “garbage” state
$\displaystyle\frac{\sqrt{(n-1)(m-1)}}{\sqrt{nmt}}|W_{n-1}\rangle_{a}|W_{m-1}\rangle_{b}|(t-1)_{H}\rangle_{c}|1_{H}\rangle_{3}.$
(13)
We call this case “partially recyclable” because the state shared by Alice and Bob remains a $W$ state, but Charlie needs to prepare a new $W$ state for the subsequent round of the fusion process.
(iv) When the photon in mode 1 is $V$-polarized, the photon in mode 2 is $H$-polarized, and detectors $D_{1}$ and $D_{4}$ click, the system collapses into the “garbage” state
$\displaystyle\begin{split}\frac{\sqrt{(t-1)(m+n-2)}}{\sqrt{nmt}}|W_{n+m-2}\rangle_{a,b}|W_{t-1}\rangle_{c}|1_{H}\rangle_{3}.\end{split}$
(14)
We call this case “a partial success” because the states of Alice and Bob have been fused, but Charlie's has not.
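As a consistency check on the four branches of Eq. (10), the following SymPy sketch verifies symbolically that the probabilities of cases (i)-(iv) sum to one, so no failure branch remains.

```python
import sympy as sp

n, m, t = sp.symbols('n m t', positive=True)
p_i   = (n + m + t - 2) / (n * m * t)              # (i) success
p_ii  = (n - 1) * (m - 1) * (t - 1) / (n * m * t)  # (ii) fully recyclable
p_iii = (n - 1) * (m - 1) / (n * m * t)            # (iii) partially recyclable
p_iv  = (t - 1) * (m + n - 2) / (n * m * t)        # (iv) partial success
print(sp.simplify(p_i + p_ii + p_iii + p_iv))      # -> 1
```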
Fig. 4 shows a scheme for fusing multiple $W$ states of arbitrary size
simultaneously. Table 3 compares the success and failure probabilities and an
estimation of the required quantum resources for our proposal against previous
schemes. Compared with other proposals for generating a $W$ state of given
size, our proposal scores a higher success probability and a lower failure
probability, with a simpler network.
Table 3: Quantum resources required and success probabilities of various protocols for fusing multiple $W$ states into a larger one simultaneously. $F=((5-m-n-z-t)-(n-1)(m-1)(t-1)(z-1))/(mntz)$.

Protocol | Initial state | Achieved state | Success probability | Fail probability
---|---|---|---|---
with 1 Fredkin Bugu | $W_{m},W_{n},W_{t},H$ | $W_{m+n+t-3}$ | $\frac{m+n+t-3}{mnt}$ | $\frac{(t-1)(m+n-2)+1}{mnt}$
ours with 2 partial-swaps | $W_{m},W_{n},W_{t}$ | $W_{m+n+t-2}$ | $\frac{m+n+t-2}{mnt}$ | 0
with 1 Toffoli, 3 CNOTs Yesilyurt | $W_{m},W_{n},W_{t},W_{z}$ | $W_{m+n+t+z-4}$ | $\frac{m+n+t+z-4}{mntz}$ | $F$
ours with 3 partial-swaps | $W_{m},W_{n},W_{t},W_{z}$ | $W_{m+n+t+z-3}$ | $\frac{m+n+z+t-3}{mntz}$ | 0
Figure 4: Fusion gate for fusing $n$ arbitrary-sized $W$ states
simultaneously.
## 3 Linear-optics fusion-based $W$ state using auxiliary spatial degrees of
freedom
Figure 5: Linear-optical post-selected partial-swap gate. HWP${}^{45^{\circ}}$
is a half-wave plate (HWP) rotated by 45∘ to induce the transformation
$|H\rangle\leftrightarrow|V\rangle$. Setting HWP${}^{22.5^{\circ}}$
(HWP${}^{67.5^{\circ}}$) to 22.5∘ (67.5∘) completes
$|H\rangle\leftrightarrow(|H\rangle+|V\rangle)/\sqrt{2}$ and
$|V\rangle\leftrightarrow(|H\rangle-|V\rangle)/\sqrt{2}$
($|H\rangle\leftrightarrow(-|H\rangle+|V\rangle)/\sqrt{2}$ and
$|V\rangle\leftrightarrow(|H\rangle+|V\rangle)/\sqrt{2}$).
Based on Sec. 2, one can see that the key component of our fusion gates is the
partial-swap gate described by Eq. (4). The matrix form of this partial-swap
gate in the basis $\\{|1_{H}\rangle|1_{H}\rangle$,
$|1_{H}\rangle|1_{V}\rangle$, $|1_{V}\rangle|1_{H}\rangle$,
$|1_{V}\rangle|1_{V}\rangle\\}$ can be written as
$\displaystyle N_{\text{p-swap}}=\begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&1&1&0\\ 0&0&0&1\end{pmatrix}.$ (19)
Obviously, this operation is not unitary, since $NN^{\dagger}\neq I$ and $N^{\dagger}N\neq I$, with $I$ the identity matrix.
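A few lines of NumPy make both properties explicit: the matrix of Eq. (19) fails the unitarity test, while its columns reproduce the truth table of Table 1.

```python
import numpy as np

# Basis order: |HH>, |HV>, |VH>, |VV>, as in Eq. (19)
N = np.array([[1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)

print(np.allclose(N @ N.T, np.eye(4)))    # False: N N^dag != I, so N is not unitary

labels = ['HH', 'HV', 'VH', 'VV']
for i, x in enumerate(labels):            # columns of N give the truth table
    out = N[:, i]
    print(x, '->', [labels[j] for j in np.flatnonzero(out)])
```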
The nonunitary gate can be implemented by utilizing the framework of quantum
measurement, or by expanding the state space to a larger one, and then
performing a proper unitary operation and an orthogonal measurement in the
enlarged space in succession. Here, we employ auxiliary spatial degrees of
freedom introduced by polarizing beam splitters PBS1 and PBS2 (see Fig. 5) to
implement the nonunitary polarization partial-swap gate. Next, we provide a
step-by-step description of our protocol for implementing this partial-swap
gate.
We consider photons 1 and 2 as being initially prepared in an arbitrary two-
qubit polarization-encoded state
$\displaystyle\begin{split}|\psi_{0}\rangle=&\alpha_{1}|1_{H}\rangle_{in}|1_{H}\rangle_{in^{\prime}}+\alpha_{2}|1_{H}\rangle_{in}|1_{V}\rangle_{in^{\prime}}\\\
&+\alpha_{3}|1_{V}\rangle_{in}|1_{H}\rangle_{in^{\prime}}+\alpha_{4}|1_{V}\rangle_{in}|1_{V}\rangle_{in^{\prime}}.\end{split}$
(20)
In the first step, as shown in Fig. 5, photons 1 and 2 pass through PBS1 and
PBS2, respectively. Next, photons in modes 1, 3, and 4 interact with half-wave
plates (HWP) oriented at $45^{\circ}$ (HWP${}^{45^{\circ}}$), $67.5^{\circ}$
(HWP${}^{67.5^{\circ}}$), and $22.5^{\circ}$ (HWP${}^{22.5^{\circ}}$),
respectively. The PBSs and HWPs cause the state to evolve from $|\psi_{0}\rangle$ to
$\displaystyle\begin{split}|\psi_{1}\rangle=&\frac{1}{\sqrt{2}}(\alpha_{1}|1_{V}\rangle_{1}(|1_{H}\rangle_{4}+|1_{V}\rangle_{4})+\alpha_{2}|1_{V}\rangle_{1}(|1_{H}\rangle_{3}+|1_{V}\rangle_{3})\\\
&+\alpha_{3}|1_{V}\rangle_{2}(|1_{H}\rangle_{4}+|1_{V}\rangle_{4})+\alpha_{4}|1_{V}\rangle_{2}(|1_{H}\rangle_{3}+|1_{V}\rangle_{3})).\end{split}$
(21)
A PBS transmits the $H$-polarized and reflects the $V$-polarized components. Therefore, PBS1 and PBS2 introduce spatial degrees of freedom for the incident photons. The HWPs oriented at $45^{\circ}$ (denoted HWP${}^{45^{\circ}}$)
induce the qubit-flip operation $|1_{H}\rangle\leftrightarrow|1_{V}\rangle$
while the HWP${}^{67.5^{\circ}}$ results in
$\displaystyle\begin{split}&|1_{H}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(-|1_{H}\rangle+|1_{V}\rangle),\quad|1_{V}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|1_{H}\rangle+|1_{V}\rangle).\end{split}$
(22)
Finally, the HWP${}^{22.5^{\circ}}$ completes the transformation
$\displaystyle\begin{split}&|1_{H}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|1_{H}\rangle+|1_{V}\rangle),\qquad|1_{V}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|1_{H}\rangle-|1_{V}\rangle).\end{split}$
(23)
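These wave-plate settings can be checked against the standard half-wave-plate Jones matrix $J(\theta)=\begin{pmatrix}\cos 2\theta&\sin 2\theta\\ \sin 2\theta&-\cos 2\theta\end{pmatrix}$ (an assumed convention, not stated explicitly in the text); the sketch below confirms that the settings $45^{\circ}$, $22.5^{\circ}$, and $67.5^{\circ}$ reproduce the transformations quoted above and in Eqs. (22) and (23).

```python
import numpy as np

def hwp(theta_deg):
    """Half-wave-plate Jones matrix at angle theta (assumed standard convention)."""
    c, s = np.cos(2 * np.radians(theta_deg)), np.sin(2 * np.radians(theta_deg))
    return np.array([[c, s], [s, -c]])

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(hwp(45.0) @ H, hwp(45.0) @ V)   # qubit flip: H <-> V
print(hwp(22.5) @ H, hwp(22.5) @ V)   # H -> (H+V)/sqrt(2), V -> (H-V)/sqrt(2), Eq. (23)
print(hwp(67.5) @ H, hwp(67.5) @ V)   # H -> (-H+V)/sqrt(2), V -> (H+V)/sqrt(2), Eq. (22)
```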
In the second step, the photons in modes 2 and 3 are then mixed at PBS3 before
going through HWP${}^{67.5^{\circ}}$ while the photons in modes 1 and 4 are
mixed at PBS4 before going through HWP${}^{22.5^{\circ}}$. The completion of
these operations leads to the joint state
$\displaystyle\begin{split}|\psi_{2}\rangle=&\frac{1}{2\sqrt{2}}(\alpha_{1}(|1_{H}\rangle_{7}-|1_{V}\rangle_{7})(|1_{H}\rangle_{7}+|1_{V}\rangle_{7}+|1_{H}\rangle_{8}-|1_{V}\rangle_{8})\\\
&+\alpha_{2}(|1_{H}\rangle_{7}-|1_{V}\rangle_{7})(-|1_{H}\rangle_{5}+|1_{V}\rangle_{5}+|1_{H}\rangle_{6}+|1_{V}\rangle_{6})\\\
&+\alpha_{3}(|1_{H}\rangle_{5}+|1_{V}\rangle_{5})(|1_{H}\rangle_{7}+|1_{V}\rangle_{7}+|1_{H}\rangle_{8}-|1_{V}\rangle_{8})\\\
&+\alpha_{4}(|1_{H}\rangle_{5}+|1_{V}\rangle_{5})(-|1_{H}\rangle_{5}+|1_{V}\rangle_{5}+|1_{H}\rangle_{6}+|1_{V}\rangle_{6})).\end{split}$
(24)
In the third step, the photons in modes 5 and 8 (6 and 7) are combined at PBS5
(PBS6), and the photon in mode 10 (12) passes through HWP${}^{45^{\circ}}$
(HWP${}^{45^{\circ}}$). The operations
$\text{PBS}_{5}\rightarrow\text{HWP}^{45^{\circ}}$ and
$\text{PBS}_{6}\rightarrow\text{HWP}^{45^{\circ}}$ transform $|\psi_{2}\rangle$ into
$\displaystyle\begin{split}|\psi_{3}\rangle=&\frac{1}{2\sqrt{2}}(\alpha_{1}(|1_{H}\rangle_{11}-|1_{H}\rangle_{12})(|1_{H}\rangle_{11}+|1_{H}\rangle_{12}+|1_{H}\rangle_{9}-|1_{H}\rangle_{10})\\\
&+\alpha_{2}(|1_{H}\rangle_{11}-|1_{H}\rangle_{12})(-|1_{V}\rangle_{10}+|1_{V}\rangle_{9}+|1_{V}\rangle_{12}+|1_{V}\rangle_{11})\\\
&+\alpha_{3}(|1_{V}\rangle_{10}+|1_{V}\rangle_{9})(|1_{H}\rangle_{11}+|1_{H}\rangle_{12}+|1_{H}\rangle_{9}-|1_{H}\rangle_{10})\\\
&+\alpha_{4}(|1_{V}\rangle_{10}+|1_{V}\rangle_{9})(-|1_{V}\rangle_{10}+|1_{V}\rangle_{9}+|1_{V}\rangle_{12}+|1_{V}\rangle_{11})).\end{split}$
(25)
Eq. (25) indicates that the two-qubit partial-swap operation (i.e., exchanging the states of the two photons conditional on the first photon being $H$-polarized) is completed when a coincidence is observed between modes 9 and 11 (10 and 12). Table 4 lists the photon count rates in modes 9 and 11 (10 and 12) for the computational basis inputs.
Table 4: Coincidence expectation values calculated for the four logic basis inputs; each entry applies to the mode pair (9, 11) and equally to the pair (10, 12) given in parentheses in the header.

Input | $\langle n_{|1_{H}\rangle_{9}}n_{|1_{H}\rangle_{11}}\rangle$ ($\langle n_{|1_{H}\rangle_{10}}n_{|1_{H}\rangle_{12}}\rangle$) | $\langle n_{|1_{H}\rangle_{9}}n_{|1_{V}\rangle_{11}}\rangle$ ($\langle n_{|1_{H}\rangle_{10}}n_{|1_{V}\rangle_{12}}\rangle$) | $\langle n_{|1_{V}\rangle_{9}}n_{|1_{H}\rangle_{11}}\rangle$ ($\langle n_{|1_{V}\rangle_{10}}n_{|1_{H}\rangle_{12}}\rangle$) | $\langle n_{|1_{V}\rangle_{9}}n_{|1_{V}\rangle_{11}}\rangle$ ($\langle n_{|1_{V}\rangle_{10}}n_{|1_{V}\rangle_{12}}\rangle$)
---|---|---|---|---
$|1_{H}\rangle_{in}|1_{H}\rangle_{in^{\prime}}$ | 1/8 | 0 | 0 | 0
$|1_{H}\rangle_{in}|1_{V}\rangle_{in^{\prime}}$ | 0 | 0 | 1/8 | 0
$|1_{V}\rangle_{in}|1_{H}\rangle_{in^{\prime}}$ | 0 | 0 | 1/8 | 0
$|1_{V}\rangle_{in}|1_{V}\rangle_{in^{\prime}}$ | 0 | 0 | 0 | 1/8
## 4 Discussion and Conclusion
In this paper, we have proposed an effective scheme for fusing $|W_{n}\rangle$ and $|W_{m}\rangle$ states into a larger $|W_{n+m-1}\rangle$ state by using a partial-swap gate. We have also designed a scheme for fusing multiple $W$ states of arbitrary size simultaneously (see Fig. 4). By exploiting the spatial degrees of freedom of single photons introduced by the PBSs, the partial-swap gate was implemented in an optical architecture built from linear-optical elements.
As shown in Table 2, our scheme outperforms previous ones in fusing two $W$ states of arbitrary size into a larger $W$ state. An ancillary photon, which is necessary in the Fredkin- and Toffoli-based schemes Bugu ; Fu-T to create a $|W_{n+m-1}\rangle$ state, is not required in our scheme. Moreover, our scheme minimizes failure outcomes. From Table 3, one can see that, if the gate operations (Fredkin, Toffoli, CNOT, and partial-swap gates) are taken into account, the failure probability of the presented scheme is lower than that of the Fredkin- and Toffoli-based schemes Bugu ; Yesilyurt .
Our scheme has the further advantage of reducing the cost in terms of the number of two-qubit gates. In previous studies Bugu ; Fu-T , the fusion of two $W$ states required either one Fredkin gate, or one Toffoli and one CNOT gate. Our approach requires just one partial-swap gate. Notably, the optimal cost of a Toffoli or Fredkin gate is five two-qubit gates Optimal1 ; Optimal2 . If we impose the further condition of using only CNOT gates, at least six CNOT gates are required to synthesize a Toffoli gate and at least eight for a Fredkin gate Optimal3 . In contrast, the current proposals based on partial-swap gates surpass the Fredkin-gate scheme Bugu and the Toffoli-CNOT scheme Fu-T , and also surpass the scheme based on one Toffoli gate and three CNOT gates Yesilyurt (see Table 2 and Table 3).
Another important advantage of our scheme is its increased success probability. It is known that entangling quantum gates can be implemented only probabilistically using linear-optical elements. With polynomial quantum resources, a linear-optical CNOT gate can be implemented with a maximal success probability of 3/4 KLM , and a post-selected CNOT gate with a success probability of 1/9 1/9 . The most efficient scheme for a CNOT gate, with a success probability of 1/4, is achieved with the help of a maximally entangled photon pair 1/41 ; 1/42 . Moreover, the ideal success probability of a Toffoli gate is 1/9 in a linear-optics setup Toffoli . At present, the optimal success probability of a linear-optical Fredkin gate is 1/64 Fredkin . In contrast, the proposed partial-swap gate achieves a success probability of 1/4. To sum up, our partial-swap-based fusion schemes outperform the Fredkin-based Bugu and Toffoli-CNOT-based schemes Yesilyurt ; Fu-T in terms of cost and success probability.
## Acknowledgment
The work was supported by the National Natural Science Foundation of China
under Grant No. 11604012, and the Fundamental Research Funds for the Central
Universities under Grant Nos. FRF-BR-17-004B and 230201506500024, and a grant
from the China Scholarship Council. KLC is supported by the Ministry of Education and the National Research Foundation Singapore.
## References
* (1) Nielsen M A and Chuang I L _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, UK, 2000).
* (2) Bennett C H, Brassard G, Crépeau C, Jozsa R, Peres A and Wootters W K 1993 Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels _Phys. Rev. Lett._ 70 1895-1899
* (3) Luo Y H, Zhong H S, Erhard M, Wang X L, Peng L C, Krenn M, Jiang X, Li L, Liu N L, Lu C Y, Zeilinger A and Pan J W 2019 Quantum teleportation in high dimensions _Phys. Rev. Lett._ 123 070505
* (4) Mattle K, Weinfurter H, Kwiat P G and Zeilinger A 1996 Dense coding in experimental quantum communication _Phys. Rev. Lett._ 76 4656-4659
* (5) Scarani V, Bechmann-Pasquinucci H, Cerf N J, Dušek M, Lütkenhaus N and Peev M 2009 The security of practical quantum key distribution _Rev. Mod. Phys._ 81 1301-1350
* (6) Long G L 2001 Grover algorithm with zero theoretical failure rate _Phys. Rev. A_ 64 022307
* (7) Walther P, Resch K J, Rudolph T, Schenck E, Weinfurter H, Vedral V, Aspelmeyer M and Zeilinger A 2005 Experimental one-way quantum computing _Nature_ 434 169-176
* (8) Greenberger D M, Horne M, Shimony A and Zeilinger A 1990 Bell’s theorem without inequalities _Am. J. Phys._ 58 1131-1143
* (9) Dicke R H 1954 Coherence in spontaneous radiation processes _Phys. Rev._ 93 99-110
* (10) Dür W 2001 Multipartite entanglement that is robust against disposal of particles _Phys. Rev. A_ 63 020303(R)
* (11) Briegel H J and Raussendorf R 2001 Persistent entanglement in arrays of interacting particles _Phys. Rev. Lett._ 86 910-913
* (12) Fujii K, Maeda H and Yamamoto K 2011 Robust and scalable scheme to generate large-scale entanglement webs _Phys. Rev. A_ 83 050303(R)
* (13) Brunner N, Cavalcanti D, Pironio S, Scarani V and Wehner S 2014 Bell nonlocality _Rev. Mod. Phys._ 86 419-478
* (14) Máttar A, Skrzypczyk P, Aguilar G H, Nery R V, Ribeiro P S, Walborn S P and Cavalcanti D 2017 Experimental multipartite entanglement and randomness certification of the $W$ state in the quantum steering scenario _Quantum Sci. Technol._ 2 015011
* (15) Wu X, Cai Y, Yang T H, Le H N, Bancal J D and Scarani V 2014 Robust self-testing of the three-qubit $W$ state _Phys. Rev. A_ 90 042339
* (16) Lipinska V, Murta G and Wehner S 2018 Anonymous transmission in a noisy quantum network using the $W$ state _Phys. Rev. A_ 98 052320
* (17) Murao M, Jonathan D, Plenio M B and Vedral V 1999 Quantum telecloning and multiparticle entanglement _Phys. Rev. A_ 59 156-161
* (18) D’Hondt E and Panangaden P 2006 The computational power of the $W$ and GHZ states _Quantum Inf. Comput._ 6 173-183
* (19) Wang B X, Tao M J, Ai Q, Xin T, Lambert N, Ruan D, Cheng Y C, Nori F, Deng F G and Long G L 2018 Efficient quantum simulation of photosynthetic light harvesting _npj Quantum Inform._ 4 52
* (20) Choi K S, Goban A, Papp S B, Van Enk S J and Kimble H J 2010 Entanglement of spin waves among four quantum memories _Nature_ 468 412-416
* (21) Guha S and Shapiro J H 2013 Reading boundless error-free bits using a single photon _Phys. Rev. A_ 87 062306
* (22) Tashima T, Wakatsuki T, Özdemir Ş K, Yamamoto T, Koashi M and Imoto N 2009 Local transformation of two Einstein-Podolsky-Rosen photon pairs into a three-photon $W$ state _Phys. Rev. Lett._ 102 130502
* (23) Eibl M, Kiesel N, Bourennane M, Kurtsiefer C and Weinfurter H 2004 Experimental realization of a three-qubit entangled $W$ state _Phys. Rev. Lett._ 92 077901
* (24) Gräfe M, Heilmann R, Perez-Leija A, Keil R, Dreisow F, Heinrich M, Moya-Cessa H, Nolte S, Christodoulides D N and Szameit A 2014 On-chip generation of high-order single-photon $W$-states _Nat. Photonics_ 8 791-795
* (25) Tashima T, Özdemir Ş K, Yamamoto T, Koashi M and Imoto N 2008 Elementary optical gate for expanding an entanglement web _Phys. Rev. A_ 77 030302
* (26) Mikami H, Li Y, Fukuoka K and Kobayashi T 2005 New high-efficiency source of a three-photon $W$ state and its full characterization using quantum state tomography _Phys. Rev. Lett._ 95 150404
* (27) Häffner H, Hänsel W, Roos C F, Benhelm J, Chek-al-kar D, Chwalla M, Körber T, Rapol U D, Riebe M, Schmidt P O, Becher C, Gühne O, Dür W and Blatt R 2005 Scalable multiparticle entanglement of trapped ions _Nature_ 438 643-646
* (28) Teklemariam G, Fortunato E M, Pravia M A, Sharf Y, Havel T F, Cory D G, Bhattacharyya A and Hou J 2002 Quantum erasers and probing classifications of entanglement via nuclear magnetic resonance _Phys. Rev. A_ 66 012309
* (29) Tashima T, Kitano T, Özdemir Ş K, Yamamoto T, Koashi M and Imoto N 2010 Demonstration of local expansion toward large-scale entangled webs _Phys. Rev. Lett._ 105 210503
* (30) Tashima T, Özdemir Ş K, Yamamoto T, Koashi M and Imoto N 2009 Local expansion of photonic $W$ state using a polarization-dependent beamsplitter _New J. Phys._ 11 023024
* (31) Ikuta R, Tashima T, Yamamoto T, Koashi M and Imoto N 2011 Optimal local expansion of $W$ states using linear optics and Fock states _Phys. Rev. A_ 83 012314
* (32) Özdemir Ş K, Matsunaga E, Tashima T, Yamamoto T, Koashi M and Imoto N 2011 An optical fusion gate for $W$-states _New J. Phys._ 13 103003
* (33) Bugu S, Yesilyurt C and Ozaydin F 2013 Enhancing the $W$-state quantum-network-fusion process with a single Fredkin gate _Phys. Rev. A_ 87 032331
* (34) Ozaydin F, Bugu S, Yesilyurt C, Altintas A A, Tame M and Özdemir Ş K 2014 Fusing multiple $W$ states simultaneously with a Fredkin gate _Phys. Rev. A_ 89 042311
* (35) Yesilyurt C, Bugu S and Ozaydin F 2013 An optical gate for simultaneous fusion of four photonic $W$ or Bell states _Quantum Inf. Process._ 12 2965-2975
* (36) Knill E, Laflamme R and Milburn G J 2001 A scheme for efficient quantum computation with linear optics _Nature_ 409 46-52
* (37) Kiesel N, Schmid C, Weber U, Ursin R and Weinfurter H 2005 Linear optics controlled-phase gate made simple _Phys. Rev. Lett._ 95 210505
* (38) Pittman T B, Jacobs B C and Franson J D 2001 Probabilistic quantum logic operations using polarizing beam splitters _Phys. Rev. A_ 64 062311
* (39) Zeuner J, Sharma A N, Tillmann M, Heilmann R, Gräfe M, Moqanaki A, Szameit A and Walther P 2018 Integrated-optics heralded controlled-NOT gate for polarization-encoded qubits _npj Quantum Inform._ 4 13
* (40) Fiurášek J 2006 Linear-optics quantum Toffoli and Fredkin gates _Phys. Rev. A_ 73 062313
* (41) Lanyon B P, Barbieri M, Almeida M P, Jennewein T, Ralph T C, Resch K J, Pryde G J, O’Brien J L, Gilchrist A and White A G 2009 Simplifying quantum logic using higher-dimensional Hilbert spaces _Nat. Phys._ 5 134-140
* (42) Lemr K, Černoch A, Soubusta J, Kieling K, Eisert J and Dušek M 2011 Experimental implementation of the optimal linear-optical controlled phase gate _Phys. Rev. Lett._ 106 013602
* (43) Mičuda M, Sedlák M, Straka I, Miková M, Dušek M, Ježek M and Fiurášek J 2013 Efficient experimental estimation of fidelity of linear optical quantum Toffoli gate _Phys. Rev. Lett._ 111 160407
* (44) Mičuda M, Miková M, Straka I, Sedlák M, Dušek M, Ježek M and Fiurášek J 2015 Tomographic characterization of a linear optical quantum Toffoli gate _Phys. Rev. A_ 92 032312
* (45) Fiurášek J 2008 Linear optical Fredkin gate based on partial-SWAP gate _Phys. Rev. A_ 78 032317
* (46) Gong Y X, Guo G C and Ralph T C 2008 Methods for a linear optical quantum Fredkin gate _Phys. Rev. A_ 78 012305
* (47) Černoch A, Soubusta J, Bartůšková L, Dušek M and Fiurášek J 2008 Experimental realization of linear-optical partial swap gates _Phys. Rev. Lett._ 100 180501
* (48) Patel R B, Ho J, Ferreyrol F, Ralph T C and Pryde G J 2016 A quantum Fredkin gate _Sci. Adv._ 2 e1501531
* (49) Ono T, Okamoto R, Tanida M, Hofmann H F and Takeuchi S 2017 Implementation of a quantum controlled-SWAP gate with photonic circuits _Sci. Rep._ 7 45353
* (50) Li K, Kong F Z, Yang M, Yang Q and Cao Z L 2016 Qubit-loss-free fusion of $W$ states _Phys. Rev. A_ 94 062315
* (51) Wang M, Hao Q, Yan F and Gao T 2018 Qubit-loss-free fusion of $W$ states _Quantum Inf. Comput._ 18 75-84
* (52) Lim Y L, Beige A and Kwek L C 2005 Repeat-until-success linear optics distributed quantum computing _Phys. Rev. Lett._ 95 030505
* (53) Yu N, Duan R and Ying M 2013 Five two-qubit gates are necessary for implementing the Toffoli gate _Phys. Rev. A_ 88 010304(R)
* (54) Yu N and Ying M 2015 Optimal simulation of Deutsch gates and the Fredkin gate _Phys. Rev. A_ 91 032302
* (55) Bugu S, Ozaydin F, Ferrus T and Kodera T 2020 Preparing multipartite entangled spin qubits via pauli spin blockade _Sci. Rep._ 10 3481
* (56) Shende V V and Markov I L 2009 On the CNOT-cost of Toffoli gates _Quant. Inf. Comp._ 9 461-486
# GANonymization: A GAN-based Face Anonymization Framework for Preserving
Emotional Expressions
Fabio Hellmann (University of Augsburg, Germany), Silvan Mertes (University of
Augsburg, Germany), Mohamed Benouis (University of Augsburg, Germany),
Alexander Hustinx (University of Bonn, Germany), Tzung-Chien Hsieh (University
of Bonn, Germany), Cristina Conati (University of British Columbia, Canada),
Peter Krawitz (University of Bonn, Germany) and Elisabeth André (University of
Augsburg, Germany)
(2023; 5 April 2023)
###### Abstract.
In recent years, the increasing availability of personal data has raised
concerns regarding privacy and security. One of the critical processes to
address these concerns is data anonymization, which aims to protect individual
privacy and prevent the release of sensitive information. This research
focuses on the importance of face anonymization. Therefore, we introduce
GANonymization, a novel face anonymization framework with facial expression-
preserving abilities. Our approach is based on a high-level representation of
a face, which is synthesized into an anonymized version based on a generative
adversarial network (GAN). The effectiveness of the approach was assessed by
evaluating its performance in removing identifiable facial attributes to
increase the anonymity of the given individual face. Additionally, the
performance of preserving facial expressions was evaluated on several affect
recognition datasets and outperformed the state-of-the-art methods in most
categories. Finally, our approach was analyzed for its ability to remove
various facial traits, such as jewelry, hair color, and multiple others. Here,
it demonstrated reliable performance in removing these attributes. Our results
suggest that GANonymization is a promising approach for anonymizing faces
while preserving facial expressions.
face anonymization, emotion recognition, data privacy, emotion preserving,
facial landmarks
## 1\. Introduction
In the current machine learning landscape, models are getting more and more
complex. This complexity places a significant demand on the availability of
large, high-quality datasets, particularly when leveraging deep learning (DL)
techniques. However, building such datasets is not always easy - besides the
time-consuming process of acquiring and annotating data, privacy is a serious
obstacle here. While extensive datasets exist for non-sensitive data, the
acquisition of data for sensitive use cases, especially those involving human
data, is an intricate task when the subjects’ privacy needs to be ensured.
Particularly when it comes to scenarios involving the human face, it is
generally a hard task to collect appropriate data, especially if datasets are
to be made publicly available. On the other hand, developing DL models that
use images of human faces offers promising opportunities. For instance,
assessing affective states like emotions or stress might be beneficial to
infer more serious conditions, such as chronic overload or depression, and
react accordingly. However, it is not only training data for DL algorithms that
runs the risk of violating humans’ privacy - inference data does, too. When employing
fully trained DL models in real-world scenarios, dealing with data that
reveals a human’s identity poses additional difficulties, as sovereignty over
one’s data is endangered. In general, it can be stated that different use
cases require different degrees of anonymization to assure human privacy. On
the other hand, different DL models require a different set of undiluted
features in order to be able to model the problem at hand. In the case of
facial affective state assessment, most of the context information is
unimportant and should be eliminated to reduce the features for re-
identification. Therefore, an approach is needed that offers the research
community a pipeline to anonymize faces while only preserving affective state
relevant information.
Further, face anonymization can be vital in promoting ethics and fairness in
machine learning. Not anonymized data can lead to unfair AI decisions, as
facial recognition models have been shown to exhibit bias against people of
color and women (Klare et al., 2012). However, current research on face
anonymization algorithms often neglects the fact that mere anonymization does
not necessarily remove those traits. For instance, a face image of a woman of
color might still show a woman of color after applying state-of-the-art face
anonymization techniques, although her exact identity might not be recognized
anymore. For the task of emotion recognition, in particular, traits like skin
color, gender, or hairstyle are not needed and might even introduce bias when
taken into account.
Additionally, the importance of face anonymization is evident in its ability
to protect individual privacy, promote ethical considerations, and ensure
compliance with legal requirements. By employing face anonymization
techniques, researchers can prevent the misuse of personal information and
enable the development of machine learning models that are more broadly
applicable and ethical. Face anonymization conceals personal information such
as identity, race, ethnicity, gender, or age, reducing the risk of re-
identification. It is essential in sensitive datasets like medical records and
criminal justice data, where anonymity is critical for individuals’ privacy
and safety. It is crucial in healthcare to ensure patient confidentiality when
sharing medical images with researchers or medical professionals. In the
criminal justice system, face anonymization can protect the identity of
witnesses, victims, and suspects from potential harm. The protection of
personal data by anonymization or pseudonymization is also enforced in the
European Union by law with the General Data Protection Regulation (GDPR)
(Gruschka et al., 2018). Industries such as healthcare and finance are also
subject to additional regulations and standards that require anonymization to
protect sensitive data. For example, in the US, the Health Insurance
Portability and Accountability Act (HIPAA) mandates the anonymization of
Protected Health Information (PHI) to ensure compliance with privacy and
security regulations.
To address these shortcomings, this work presents a novel approach to face
anonymization that addresses that problem specifically in the context of
emotion recognition. Existing work predominantly tries to find a trade-off
between anonymization and task performance by formalizing the problem as a
min-max game in which the objective is to find a good compromise between both
requirements (Nasr et al., 2018; Wu et al., 2018, 2019). However, features
that are neither benefiting the task at hand nor taking away from identity
obfuscation (i.e., not affecting either of the two objectives) are mostly
ignored. As such, traits like skin color, gender, or age are still apparent in
the anonymized versions of the images, conserving bias and inequity in the
underlying data. Instead of engaging in the aforementioned min-max game, as
done by previous approaches, we follow a different paradigm: we completely
discard all information except a minimal feature representation that is needed
for our chosen use case - emotion recognition - and subsequently re-synthesize
arbitrary information for the lost features. By doing so, we obtain a complete
face image with the same facial expression as the original face while,
contrary to existing approaches, removing irrelevant traits for the use case
of emotion recognition. After reviewing relevant literature (Ko, 2018; Gupta,
2018; Nguyen et al., 2017; Sun et al., 2017), we found that facial landmarks
are a good feature set for that task while not exposing too much unnecessary
information. Therefore, as this work focuses on emotion recognition and in
order to disregard all unimportant information, we chose to extract facial
landmarks as an intermediate representation. Subsequently, we use a
Generative Adversarial Network (GAN) architecture, namely _pix2pix_ (Isola et
al., 2016), to re-synthesize a realistic face that incorporates exclusively
the features included in the landmarks. By doing so, our approach - which we
call _GANonymization_ \- has the advantage of not preserving any traits that
were not present in the landmark representation. As such, features like
hairstyle, skin color, or gender are diluted from the intermediate
representation, which sets our approach apart from most existing methods.
We evaluate our approach in a three-fold manner:
1. (1)
We validate if our anonymization method can anonymize faces sufficiently by
using a measure that is standard in this field of research (Serengil and Ozpinar, 2020, 2021).
2. (2)
We validate if our anonymization method keeps important features to preserve
emotional expressions by analyzing how the anonymization process affects the
predictions of an auxiliary emotion classifier in both a training as well as
an inference setting.
3. (3)
We seek to explain the outcomes of the evaluation steps above by analyzing
which facial traits are preserved or removed with our anonymization method. To
do so, we study how the anonymization process affects the predictions of an
auxiliary facial feature detection model.
We show that our approach significantly outperforms state-of-the-art methods
in preserving most facial emotional expressions in an anonymized synthesized
face.
## 2\. Related Work
In this section, we provide an overview of previous research on privacy
preservation in the context of facial anonymization. The discussion is
organized into four key concepts: Obfuscation, Adversarial Techniques,
Differential Privacy, and Latent Representations. Note that those concepts are
not distinct mechanisms, but different approaches can make use of several of
those ideas, as depicted in Figure 1.
Figure 1. Existing privacy preservation concepts in the context of face
anonymization.
### 2.1. Obfuscation
Obfuscation techniques have been pivotal in anonymizing facial data by
modifying or masking sensitive areas in images or videos. These techniques,
including pixelation, blurring, and masking, aim to obscure facial features
related to identity while retaining identity-independent characteristics
(Newton et al., 2005).
For instance, Jourabloo et al. (Jourabloo et al., 2015) presented an
attribute-preserving face de-identification approach. While this approach
achieved a commendably low face recognition rate, it succeeded in preserving
critical facial attributes. The method employed an Active Appearance Model and
the K-same algorithm to reconstruct new face images while averaging selected
features. Yang et al. (Yang et al., 2022) introduced a face-blurring approach to
obfuscate faces in the ImageNet dataset, and Raval et al. (Raval et al., 2017)
employed an adversarial perturbation mechanism to protect visual information
in camera feeds without substantially impairing application functionality.
Obfuscation techniques are indeed effective in achieving high degrees of
anonymity, but they invariably degrade the naturalness and quality of face
images, limiting their reusability for diverse facial applications (Kuang et
al., 2021). In contrast, our approach takes a different path. Although it
involves the removal of various facial traits, it excels in producing high-
quality, naturalistic face images. We achieve this by re-synthesizing complete
face images using a GAN-based architecture.
### 2.2. Adversarial Techniques
Many existing approaches to facial anonymization are based on training
anonymization models using adversarial techniques. Generally, the term
_adversarial_ refers to the paradigm of two _contrary_ objectives being
maximized at the same time. For face anonymization, these objectives are the
anonymization performance and the so-called _Utility_ , i.e., the ability to
preserve features that are relevant to solving a certain auxiliary task. This
dual objective can create a min-max game, where improving one objective often
results in the degradation of the other. As such, solving a min-max game with
methods of DL inevitably results in finding a compromise between the two
objectives.
For example, Nasr et al. (Nasr et al., 2018) developed an adversarial
regularization training method aimed at minimizing classification loss while
maximizing defense against membership inference attacks. Wu et al. (Wu et al.,
2018) utilized GANs to learn a degradation transformation that balances action
recognition performance with video privacy. Wu et al. (Wu et al., 2019)
introduced a face de-identification framework that generated de-identified
faces with high feature attributes and minimized identity information by
incorporating a contrastive verification loss and a structure similarity loss
into the GAN training process.
Our approach differs from these methods in that we don’t formulate the
anonymization problem as a min-max game. Instead, we make use of adversarial
learning techniques within our framework, particularly by employing a GAN-
based architecture to re-synthesize full-face images from our latent
representations. However, our method stands apart as we don’t incorporate
_privacy norms_ into the GAN training but focus on feature reduction before
GAN training. This unique approach enables us to remove traits that affect
neither anonymization nor utility, setting our method apart from mere
compromises between the two.
### 2.3. Differential Privacy
Differential privacy formalizes privacy protection with respect to an
application-specific notion of neighboring databases. In deep learning, it is
typically realized by injecting calibrated random noise into the gradients
computed during stochastic gradient descent (SGD) training. This noise bounds
the influence of any individual training example on the resulting model and
thereby balances accurate model predictions against privacy protection (Abadi
et al., 2016).
For instance, Croft et al. (Croft et al., 2021) successfully anonymized images
by integrating differential privacy into the latent representation of a
generative model. However, the practical implementation of differential
privacy in _real-world scenarios_ presents a significant challenge.
Determining precise privacy boundaries is critical, as adding noise to protect
sensitive information may disrupt the entire data distribution, leading to
unrecognizable output images (Yoon et al., 2020).
In contrast, our approach does not introduce noise during training or
generation. Instead, we focus on information reduction before training,
retaining only a minimal latent representation, such as facial landmarks.
While this approach may pose challenges in finding a suitable representation
for domains other than emotion recognition, it distinctly sidesteps the
pitfalls associated with noisy data distribution and data unrecognizability.
### 2.4. Latent Representations
Traditional GAN-based models often struggle to preserve complex facial
attributes, such as emotion, pose, and background, because of the high
dimensionality and complexity of image space. Manipulations in latent space
therefore tend to produce softer facial style changes than direct manipulations
in image space. A latent representation, being an abstract and compressed
representation inferred from the data, captures essential features while
discarding redundant information, which makes it easier for models to perform
tasks like classification and generation.
Le et al. (Le and Carlsson, 2022) introduced StyleID, a GAN that brings images
into a latent representation, uncovers features with significant identity
disentanglement, and changes these features in latent space or pixel space.
However, StyleID may preserve facial traits that have the potential to
introduce bias or unfairness, even if they don’t correlate directly with
identity. Other methods, such as Sun et al. (Sun et al., 2017), Hu et al. (Hu
et al., 2022), and Maximov et al. (Maximov et al., 2020) with CIAGAN, employed
inpainting mechanisms in conjunction with GANs to anonymize faces based on
facial landmarks. These approaches, while effective, retain context-relevant
information outside of the face-segmented area, such as hair color, hairstyle,
and gender. On the other hand, Hukkelås and Lindseth introduced DeepPrivacy2
(Hukkelås and Lindseth, 2022), an enhanced guided GAN framework for
anonymizing human figures and faces. The DeepPrivacy2 framework entails three
detection components for each task: i) face detection with a Dual Shot Face
Detector (Li et al., 2018), ii) dense pose estimation with Continuous Surface
Embeddings (Neverova et al., 2020), and iii) Mask R-CNN (He et al., 2017) for
instance segmentation. Additionally, three task-specific Surface-guided GANs
(Hukkelås et al., 2022) were trained to synthesize either human figures with
conditions, human figures without conditions, or faces. However, the use of
inpainting mechanisms in these approaches may inadvertently retain context-
relevant information, potentially introducing bias or unfairness.
In contrast, our approach excludes context-relevant information by discarding
all context except the facial structure, captured by a large number of facial
landmarks. By concentrating on the elimination of contextual traits, we aim to
reduce the potential for bias or unfairness in the dataset.
Overall, DeepPrivacy2 can be regarded as a state-of-the-art full-body
anonymization method since it outperformed a variety of other methods in the
past (Hukkelås and Lindseth, 2022). Furthermore, CIAGAN can be considered as
another state-of-the-art face anonymization method, which is also based on
landmarks (Maximov et al., 2020). While CIAGAN utilizes inpainting mechanisms
to only anonymize the face area below the forehead, DeepPrivacy2 anonymizes
the full facial area, including the forehead. Consequently, we used
DeepPrivacy2 and CIAGAN as the baseline for all our performance evaluations.
## 3\. Method
Figure 2. Architecture of the GANonymization pipeline.
This section introduces the structure of our GANonymization framework (see
Figure 2) and gives a detailed description of each component and the steps
taken for training. (Our framework’s implementation will be made publicly
available at https://github.com/hcmlab/GANonymization upon acceptance.) The
complete GANonymization framework entails four components.
##### Training Scenario
In the first step, faces are detected, extracted, and brought into the right
format afterward. The image’s background is removed in the second step to
eliminate distracting features. In the third step, facial landmarks are
extracted from the face. In the last step, the GAN’s generator synthesizes a
new, anonymized face based on those landmarks. The discriminator evaluates the
facial landmarks and the synthesized face to determine whether it is real or
fake.
##### Inference Scenario
The inference requires fewer steps than the training scenario, as the first
and second steps are unnecessary. Only the extraction of the facial landmarks
is required to feed the generator to synthesize an anonymized face.
### 3.1. Face Extraction
The first component in the pipeline is face extraction. The RetinaFace
framework (https://github.com/serengil/retinaface; Serengil and Ozpinar,
2020) is utilized for this component, which is based on the RetinaFace
algorithm (Deng et al., 2020). RetinaFace has been tested against the WIDER
(Yang et al., 2016) dataset to ensure maximum efficiency in detecting and
aligning faces in various scenarios correctly. However, RetinaFace does not
detect all faces every time, especially when factors like poor image quality,
extreme angles, or heavy occlusions are in play. This component includes the
following steps; a minimal code sketch of them is given after the list:
1. (1)
_Face Crop._ The input image is analyzed to detect and extract all visible
faces.
2. (2)
_Face Align._ According to the literature, aligning the faces supports an
increase in accuracy for face recognition models (Parkhi et al., 2015).
Therefore, the faces are aligned before the GAN receives them as input. By
doing so, the GAN is prevented from focusing too much on the head orientation
and instead takes only the face itself into account.
3. (3)
_Image Resize._ The input size of the images for the GAN is set to $512\times
512$ pixels. Therefore, the cropped and aligned faces are rescaled so that
their longer axis measures 512 pixels, while maintaining the aspect ratio.
4. (4)
_Zero Padding._ To achieve the final $512\times 512$ pixels for the required
input shape of the GAN, we apply zero padding to the sides [(right and left)
or (top and bottom)] of the image to keep the face centered in the image.
### 3.2. Face Segmentation
The second component of the pipeline is face segmentation. Even though this
step could be skipped, we observed that the pix2pix architecture we used for
re-synthesis of the faces (see Section 3.4) yielded visually better results
when not having to deal with variations in the background. Consequently, the
original background is removed by applying face segmentation and setting all
pixel intensities outside the face segments to $0$. Therefore, a head
segmentation model based on a U-Net is utilized
(https://github.com/wiktorlazarski/head-segmentation).
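Once the U-Net has produced a binary head mask, the background removal itself reduces to a masking operation, as the following sketch shows; the mask is assumed to come from the head-segmentation model linked above.

```python
# Sketch of the background removal: all pixel intensities outside the head
# segment are set to 0. The binary mask is assumed to come from the
# U-Net-based head-segmentation model referenced above.
def remove_background(face, head_mask):
    # face: (H, W, 3) numpy image; head_mask: (H, W), 1 inside the head, 0 outside
    return face * head_mask[..., None].astype(face.dtype)
```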
### 3.3. Facial Landmarks
After the pre-processing steps, we generate intermediate representations of
the faces. Here, we aim for a representation that (i) does not contain
information that could be used to identify the original face and (ii) holds
all necessary information needed for facial expression analysis tasks.
Existing literature on the topic (Ko, 2018; Gupta, 2018; Nguyen et al., 2017;
Sun et al., 2017) indicates that facial landmarks fulfill both of these
requirements in the context of emotion recognition. Note that although this
work focuses on the context of emotion recognition exclusively, the concept
could be transferred to other domains as well. Therefore, a suitable
intermediate representation, which might not be facial landmarks, would have
to be found for the specific task. For our experiments, we extract 478
three-dimensional facial landmark coordinates using the MediaPipe face-mesh
model (Kartynnik et al., 2019) to obtain an abstract representation of the
facial shape. The resulting 3D landmarks are projected onto a 2D image with a
black background where each landmark point is represented by a single white
pixel. It is necessary to translate the 3D landmarks into a 2D image due to
the image-to-image type of model used for the re-synthesis of the faces (as
described in the following section 3.4).
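A minimal sketch of this landmark-to-image step is given below, assuming MediaPipe's face-mesh solution; setting refine_landmarks=True yields the 478 points mentioned above, and the 2D projection simply drops the z coordinate.

```python
# Sketch of the landmark extraction and 2D projection, assuming MediaPipe's
# face-mesh solution; refine_landmarks=True yields 478 landmarks per face.
import cv2
import numpy as np
import mediapipe as mp

def landmarks_to_image(face_bgr, size=512):
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         refine_landmarks=True,
                                         max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(face_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face detected; see the fallback described in Section 3.4
    canvas = np.zeros((size, size), dtype=np.uint8)  # black background
    for lm in result.multi_face_landmarks[0].landmark:
        x, y = int(lm.x * size), int(lm.y * size)  # drop z, scale to pixels
        if 0 <= x < size and 0 <= y < size:
            canvas[y, x] = 255  # one white pixel per landmark
    return canvas
```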
### 3.4. Re-Synthesis
To obtain an anonymized version of the input that still looks highly
realistic, we aim for a re-synthesis of high-quality faces. Therefore, we use
the _pix2pix_ architecture, a GAN-based image-to-image model. The original
purpose of _pix2pix_ is to convert image data from a particular source domain
to a target domain. Our specific goal in the re-synthesis stage is to transfer
the landmark representations back to random, high-quality face images that
expose the same facial landmark structure. The _pix2pix_ architecture has been
successfully applied to similar use cases in the past, e.g., for synthetic
data augmentation in the context of defect detection (Mertes et al., 2020b,
a), where segmentation masks of material defects (which, on a technical level,
are quite similar to visual landmark representations) were converted to
realistic looking data. More recent GAN-based architectures like ProGAN
(Karras et al., 2018), StyleGAN (Karras et al., 2021), or StyleGANv2 (Karras
et al., 2020), that impress with their ability to generate hyper-realistic
data, are specifically designed to create new data from scratch. To use those
models for image-to-image conversion tasks, a projection of the input image
has to be found in the GAN’s latent space, which is highly inefficient and
might not be possible at all for some data instances. As such, we chose to use
_pix2pix_ , as it is specifically tailored for end-to-end image-to-image
translation. For the training of the _pix2pix_ model, we used existing face
images as the _target_ domain, whereas for the _source_ domain, we used
landmark representations that we priory extracted from those images. In other
terms, we trained the _pix2pix_ network to learn the inverse transformation of
a landmark extractor - we perform an image-to-image translation from an image
representation of landmark features to realistic-looking face images. By using
that approach, we are able to automatically create geometrically aligned
source/target image pairs for training. Contrary to architectures such as
CycleGAN (Zhu et al., 2017) that work with non-parallel training data,
_pix2pix_ directly takes advantage of having mapped training pairs, which
again supports our architecture choice.
We process the CelebA (Liu et al., 2015) dataset within our pipeline to
extract and align the faces (section 3.1), remove the background of the faces
by face segmentation (section 3.2), and extract a face-mesh of each face which
represents the landmark/image pairs for training. CelebA was used because of
its size (202,599 images) and because it contains only images of high quality
- using low-quality images would limit the quality of GANonymization’s output
images unnecessarily. We used the same pipeline for the landmark extraction to
anonymize the images. Additionally, training images were normalized to
$mean=(0.5,0.5,0.5)$ and $std=(0.5,0.5,0.5)$. Our implementation was built
upon Erik Linder-Norén’s pix2pix implementation
(https://github.com/eriklindernoren/PyTorch-GAN#pix2pix), which
in turn strongly adheres to the original _pix2pix_ publication (Isola et al.,
2016). We trained the model for 25 epochs with a batch size of 32. The Adam
optimizer was used with a learning rate of 0.0002, $\beta_{1}=0.5$, and
$\beta_{2}=0.999$. After training, our model could transfer landmark
representations to face images that show the same facial expression as the
original face. If face detection fails and, therefore, no facial landmarks are
available, an empty (black) image can be fed to our model, which then
synthesizes an average face based on the faces seen during the training
process. Exemplary outputs of our pipeline are shown in Figures 3, 4, 5, 6,
and 8.
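A minimal sketch of this training configuration is given below. The two nn.Sequential modules are deliberately trivial stand-ins for the pix2pix U-Net generator and PatchGAN discriminator of the referenced implementation, so that the snippet is self-contained; only the stated hyperparameters are taken from the text.

```python
# Sketch of the training setup described above; the two modules are trivial
# stand-ins for the pix2pix U-Net generator and PatchGAN discriminator.
import torch
import torch.nn as nn
from torchvision import transforms

# normalization to mean 0.5, std 0.5 per channel, as stated in the text
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())  # stand-in
discriminator = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1))         # stand-in

opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))
# trained for 25 epochs with batch size 32 on (landmark image, face image) pairs
```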
## 4\. Evaluation
In the following sections, we describe how we validate our approach using
three different evaluations. First, we evaluate the anonymization capability
of the approach. Second, we evaluate the suitability of the approach for the
task of emotion recognition, i.e., whether our approach preserves information
that is relevant to facial emotion recognition. Finally, we go into detail
about the facial features that get preserved or removed with our anonymization
approach.
### 4.1. Anonymization Performance
In this first part of the evaluation, the anonymization performance of our
approach was assessed. Hereby, with the term anonymization performance, we
refer to the capability of the method to alter input images in a way that they
ideally cannot be re-identified. Therefore, we compared the synthesized images
of our approach with the original images and versions synthesized by
DeepPrivacy2, CIAGAN, and basic methods like pixelation and blurring.
#### 4.1.1. Dataset
The dataset used for the comparison was the WIDER (Yang et al., 2016) dataset,
which is commonly used for benchmarking face detection algorithms. Further,
the authors of DeepPrivacy2 had already used it in their original publication.
Therefore, by using it in our experiments too, we do not introduce a bias
towards GANonymization by using a dataset that DeepPrivacy2 might not be
suited for. It contains images of people in various sceneries whose faces vary
in scale, pose, and occlusion. In each image, one or more faces are apparent.
In total, WIDER embodies 32,203 images in 61 event settings. The wide variety
of head orientations, occlusions, facial expressions, and lighting conditions
provides an optimal evaluation setting to measure the overall performance in
anonymizing these faces. After we applied our pre-
processing pipeline with the face extraction (section 3.1) and face
segmentation (section 3.2) components, the images were split into a training
and validation set of 92,749 and 22,738 face images, respectively.
#### 4.1.2. Setup
The performance measurement is based on the comparison of the original images
and their synthesized counterparts. The synthesized images are produced by our
method, DeepPrivacy2, and CIAGAN, respectively. Exemplary anonymized images
for WIDER can be seen in Figure 3.
Figure 3. Sample of synthesized faces based on the WIDER dataset.
#### 4.1.3. Metric
A widely used method to assess the anonymization degree of a face image is to
compute the cosine distance between image encodings of the original and
anonymized image versions. Here, a lower cosine distance equals higher
similarity between the faces and is commonly interpreted as the anonymized face
being _more recognizable_ as the original face. Specialized frameworks for
face recognition like DeepFace (https://github.com/serengil/deepface) make use
of that paradigm and thus can be used as an evaluation tool for anonymization
algorithms (Serengil and Ozpinar, 2020, 2021). As such, for the comparison of
the anonymization performance of our approach versus the other methods, we use
the DeepFace framework. As a backbone model for image encoding, we use the
state-of-the-art face recognition model Facenet512 (Firmansyah et al., 2023),
which is also integrated into DeepFace. The cosine distance is defined as
follows:
(1) $cdistance=1-\frac{I_{o}\cdot I_{a}}{\lVert I_{o}\rVert\lVert
I_{a}\rVert}$
where $I_{o}$ and $I_{a}$ are the Facenet512 feature embedding space
representations of the original and anonymized images, respectively. When the
cosine distance exceeds $0.3$, it indicates that the feature embedding space
has diverged significantly from the original space, making re-identification
impractical. We computed the cosine distance of the image pairs for each
method with the original image.
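A minimal sketch of this metric is given below, assuming a recent version of the DeepFace package in which DeepFace.represent returns a list of dictionaries with an "embedding" entry.

```python
# Sketch of the metric in Eq. (1), assuming a recent DeepFace version where
# represent() returns a list of dicts with an "embedding" entry.
import numpy as np
from deepface import DeepFace

def cosine_distance(original_path, anonymized_path):
    emb_o = np.array(DeepFace.represent(img_path=original_path,
                                        model_name="Facenet512")[0]["embedding"])
    emb_a = np.array(DeepFace.represent(img_path=anonymized_path,
                                        model_name="Facenet512")[0]["embedding"])
    return 1.0 - emb_o @ emb_a / (np.linalg.norm(emb_o) * np.linalg.norm(emb_a))

# distances above 0.3 are treated as "not re-identifiable" in this evaluation
```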
#### 4.1.4. Results
Method | Cosine Distance
---|---
Original | 0.0000
Ours | 0.7145
DeepPrivacy2(Hukkelås and Lindseth, 2022) | 0.8119
CIAGAN(Maximov et al., 2020) | 0.9280
Pixel 8x8 | 0.8791
Pixel 16x16 | 0.6651
Blur 9x9 | 0.0102
Blur 17x17 | 0.0725
Table 1. The mean cosine distances between the original images and the
anonymized versions obtained through GANonymization, DeepPrivacy2 (DP2),
CIAGAN (CIA), pixelation with kernel sizes 8x8 and 16x16, and blurring with
kernel sizes 9x9 and 17x17. All methods except the two blurring variants
exceed the re-identification threshold of $0.3$.
Our approach achieved a mean cosine distance of $0.7145$, while DeepPrivacy2
and CIAGAN reached a greater cosine distance of $0.8119$ and $0.9280$,
respectively (see Table 1). The pixelation with a kernel sized $8\times 8$
achieved $0.8791$, while the bigger kernel sized $16\times 16$ achieved
$0.6651$. Blurring with a kernel sized $9\times 9$ and $17\times 17$ stayed
below the threshold necessary for no re-identification with a cosine distance
of $0.0102$ and $0.0725$, respectively.
#### 4.1.5. Discussion
Our evaluation measures the mean cosine distance between the Facenet512-based
image encodings of original and anonymized face images. Accordingly, the
distance between two encodings reflects how dissimilar their features are and
how hard it would be to map one encoding back onto the other, which is
conventionally interpreted as the _degree of anonymization_. Comparing the
results, our approach, DeepPrivacy2, CIAGAN, and pixelation all achieved a mean
cosine distance above the threshold of $0.3$, which indicates that the feature
embeddings diverged significantly from those of the original images.
While pixelation only reduces the underlying image resolution to obfuscate the
face, the image quality suffers accordingly, and the face could still be
re-identified - at least for the kernel sized $16\times 16$. Blurring does not
modify the image resolution but nonetheless reduces the overall image quality.
CIAGAN, in turn, synthesizes a new face inside the facial landmark segment of
the original image. The faces synthesized by CIAGAN lack quality, although a
face and its emotional expression can still be recognized; the low quality and
the high number of artifacts are a plausible reason for the high cosine
distance to the original image. DeepPrivacy2, on the other hand, synthesizes a
face that does not necessarily preserve the orientation or the facial
expression of the original. In some cases, the output hardly resembles a face
at all due to extreme dysmorphism of facial areas. This dysmorphism is a likely
cause of the increased cosine distance to the original images compared to our
approach.
We therefore argue that our approach combines high quality of the synthesized
faces with solid anonymization performance, despite its lower cosine distance
compared to DeepPrivacy2, CIAGAN, and $8\times 8$ pixelation.
### 4.2. Preserved Emotional Expressions
After showing our approach’s anonymization capabilities in section 4.1, we
need to ensure that this performance does not come at the expense of the
primary task that the data will be used for, in our case, affect recognition.
Thus, in this section, we examine whether our method can anonymize faces while
maintaining their original emotional expressions. For this evaluation, we use
three different datasets which are commonly used in the research field of
affect recognition, namely _AffectNet_ (Mollahosseini et al., 2017), _CK+_
(Lucey et al., 2010), and _FACES_ (Ebner et al., 2010).
#### 4.2.1. Datasets
We used three different datasets to cover a wide variety of different
settings.
The first dataset we’ve chosen is the _AffectNet_ dataset. We chose it because
it contains in-the-wild data, resulting in emotions being expressed in a quite
natural way. It contains around 0.4 million images manually labeled according
to eight emotional expressions: _Neutral_ , _Happy_ , _Angry_ , _Sad_ , _Fear_
, _Surprise_ , _Disgust_ , and _Contempt_. The faces in this dataset have a
great variety of individuals, head orientations, lighting conditions, and
ethnicities. The dataset was pre-processed with face extraction (section 3.1)
and face segmentation (section 3.2). In the process, images in which no face
was detected were discarded. Accordingly, the training and validation splits
contained 210,174 and 2,874 images, respectively.
The second dataset, namely _CK+_ , contains 593 video sequences with 123
subjects aged 18 to 50 years and of various genders and heritage. Each video
sequence shows the transition from a neutral facial expression to a non-
neutral one, recorded at 30 frames per second. We chose the dataset because,
due to the emotional transitions, single image frames also cover facial
expressions where the emotions are shown quite subtly. Overall, 327 of those
videos are labeled with one of seven emotional expressions: _Anger_ ,
_Contempt_ , _Disgust_ , _Fear_ , _Happiness_ , _Sadness_ , and _Surprise_.
Again, we applied our pre-processing pipeline with face extraction and face
segmentation on the dataset and received a training and validation set of 259
and 68 images, respectively.
Lastly, the _FACES_ dataset, with a total of 2,052 images covering different
age groups and genders, embodies six emotional expressions: _Neutral_ , _Sad_ ,
_Disgust_ , _Fear_ , _Anger_ , and _Happy_. We used that dataset as it
contains only images of acted emotions, making it a good counterpart for the
other two datasets. By including it, we also cover emotional expressions that
are shown in a rather exaggerated way. The images in this dataset have high
quality. Further, the dataset contains only frontal shots of the faces with
optimal lighting conditions. As was done for the previous datasets, we also
applied pre-processing with face extraction and face segmentation, resulting
in 1,827 images in the training split and 214 images in the validation split.
#### 4.2.2. Setup
We created anonymized versions of the three datasets, resulting in 12 datasets
in total: the three original ones, those anonymized with GANonymization, and
those anonymized with DeepPrivacy2 and CIAGAN. Exemplary anonymized images for
AffectNet, CK+, and FACES can be seen in Figure 4, 5, and 6, respectively.
Note that although the CK+ dataset consists of greyscale images, the
anonymized versions of our approach are colored - this is a nice byproduct of
our approach since we only use the landmarks as an intermediate
representation, whereas the re-synthesis is still based on the GAN that was
trained on CelebA. We splitted the evaluation of the emotional expression
preserving capabilities into two sub-evaluations. First, we assessed how the
emotional expression gets preserved during an _inference_ setting, thus, how a
model trained on original data behaves when fed with anonymized data. Second,
we evaluated how the anonymization method influences the training process of a
model trained on anonymized data.
##### Inference Scenario Evaluation.
To measure how well GANonymization can preserve emotional expressions, we
first trained an emotion classifier separately for the three original
datasets. Subsequently, we applied the trained models to the original and the
anonymized datasets and studied the prediction changes caused by the
anonymization methods. Here, big changes in prediction probability can be
interpreted as poor preservation of features contributing to emotional
expressions. We decided to go for three separate dataset-specific models
instead of one across-dataset model, as our evaluation methodology relies on
the classifiers accurately modeling the relation between data and emotion for
the specific datasets. As the datasets differ substantially, we argue that an
across-dataset model, although having the potential to gain a greater overall
generalizability, would under-perform on the single datasets due to dataset-
specific details that would get lost (e.g., the CK+ dataset is greyscale,
FACES are frontal-only, etc.).
As classifier architecture, we chose the base version of the ConvNeXt, which
is considered one of the state-of-the-art DL architectures for computer vision
tasks (Liu et al., 2022). Furthermore, the model was pre-trained on the
ImageNet (Deng et al., 2009) dataset. The classification model’s last linear
layer’s amount of output nodes was changed to match the number of classes,
which differed for each dataset. We used the cross-entropy loss for training.
Class weights were calculated on the train split of each dataset individually.
The AdamW (Loshchilov and Hutter, 2017) optimizer was used with a learning
rate of $0.0003$ and a weight decay of $0.001$. Additionally, the learning
rate was reduced when the validation loss reached a plateau for three
consecutive epochs. The images were pre-processed by normalizing with
$mean=(0.485,0.456,0.406)$ and $std=(0.229,0.224,0.225)$ for both, training
and testing. Hereby, the mean and standard values for normalization were based
on the pre-trained model’s dataset (ImageNet). During the training phase,
images were randomly flipped horizontally with a probability of 50% for data
augmentation. The classification models converged on the validation split
within 3, 12, and 9 epochs for AffectNet, CK+, and FACES, respectively. For
comparing the anonymization approaches, namely ours, DeepPrivacy2, and CIAGAN,
we used the trained emotion classifiers to make predictions on the original
images as well as for the anonymized versions. By doing so, we can assess to
which degree the anonymization process preserves features that hold
information on emotional expressions.
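A minimal sketch of this classifier setup is given below, assuming torchvision's ConvNeXt-Base; num_classes and class_weights depend on the dataset at hand and are therefore left as arguments.

```python
# Sketch of the classifier setup described above, assuming torchvision's
# ConvNeXt-Base pre-trained on ImageNet.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_emotion_classifier(num_classes, class_weights):
    model = models.convnext_base(weights=models.ConvNeXt_Base_Weights.IMAGENET1K_V1)
    # replace the last linear layer to match the number of emotion classes
    model.classifier[2] = nn.Linear(model.classifier[2].in_features, num_classes)
    criterion = nn.CrossEntropyLoss(weight=class_weights)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-3)
    # reduce the learning rate when the validation loss plateaus for 3 epochs
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3)
    return model, criterion, optimizer, scheduler

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # augmentation, training only
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
```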
##### Training Scenario Evaluation.
In this sub-evaluation, we assess how the performance of an emotion
recognition model degrades when it is trained on the anonymized versions. To
do so, we used the same classifiers that were trained in the
_Inference Scenario_ but additionally trained the same architecture once with
the data anonymized by GANonymization and once anonymized by DeepPrivacy2 and
CIAGAN. Thus, we use 12 different models for this experiment, each trained on
one of the datasets mentioned above. Subsequently, we compare the performance
of the models on the original datasets’ validation splits.
Figure 4. Sample of synthesized faces based on the AffectNet dataset.
Figure 5. Sample of synthesized faces based on the CK+ dataset.
Figure 6. Sample of synthesized faces based on the FACES dataset.
#### 4.2.3. Metric
##### Inference Scenario Evaluation.
We measure the ability of each anonymization approach to preserve the original
emotional expressions by looking at how the prediction probabilities for the
emotion classifiers change when applied to the original datasets vs. each of
the anonymized datasets. I.e., for each image, we measure how the class
probability of a certain emotion predicted from the original image differs
from the class probability of that same emotion in the anonymized version of
the image. Subsequently, we average the resulting probability differences of
the images in the validation sets for each emotion. Here, a higher mean
difference indicates that the anonymization process obfuscated more features
defining the respective emotion. In comparison, a lower difference implies
that the anonymization process preserved more emotion-related features.
##### Training Scenario Evaluation.
Here, we compare the F1 score of the different models on the respective
validation splits. F1 score was chosen as, especially in the CK+ data, a
relatively high class imbalance is apparent.
#### 4.2.4. Results
##### Inference Scenario Evaluation.
The results are depicted in Figure 7 and Table 2. As can be seen,
GANonymization outperformed DeepPrivacy2 in all emotions except _Fear_ and
_Happy_ in the AffectNet dataset. Compared to CIAGAN, our approach
outperformed in most emotions except _Fear_ , _Happy_ , and _Sadness_ in the
AffectNet dataset, also _Contempt_ , _Fear_ , and _Surprise_ in the CK+
dataset, and only _Happy_ in the FACES dataset.
(a) AffectNet
(b) CK+
(c) FACES
Figure 7. The mean distance of the class probability prediction for each
emotion for our method, DeepPrivacy2, and CIAGAN on each dataset (lower is
better).
| Ours | DP2 | CIA
---|---|---|---
Neutral | 0.09 | 0.17 | 0.10
Anger | 0.14 | 0.19 | 0.15
Contempt | 0.09 | 0.18 | 0.10
Disgust | 0.10 | 0.19 | 0.12
Fear | 0.15 | 0.12 | 0.11
Happy | 0.13 | 0.11 | 0.10
Sadness | 0.10 | 0.16 | 0.09
Surprise | 0.10 | 0.20 | 0.10
(a) AffectNet
| Ours | DP2 | CIA
---|---|---|---
Anger | 0.14 | 0.30 | 0.18
Contempt | 0.09 | 0.25 | 0.04
Disgust | 0.21 | 0.30 | 0.23
Fear | 0.06 | 0.08 | 0.06
Happy | 0.07 | 0.31 | 0.19
Sadness | 0.09 | 0.14 | 0.14
Surprise | 0.08 | 0.26 | 0.05
(b) CK+
| Ours | DP2 | CIA
---|---|---|---
Neutral | 0.11 | 0.31 | 0.12
Anger | 0.11 | 0.36 | 0.14
Disgust | 0.08 | 0.18 | 0.15
Fear | 0.02 | 0.16 | 0.07
Happy | 0.04 | 0.17 | 0.02
Sadness | 0.13 | 0.31 | 0.15
(c) FACES
Table 2. The mean class probability distances between the original images and
the anonymized versions obtained through GANonymization, DeepPrivacy2 (DP2),
and CIAGAN (CIA).
To assess if these differences are statistically significant, we conducted
statistical hypothesis tests for each emotion as well as each dataset. As a
Shapiro-Wilk test revealed that the data was not normally distributed for any
of the datasets, Wilcoxon tests were used for the post-hoc analysis.
Subsequently, we did a dataset-wise p-value correction using Bonferroni’s
method. We report the resulting statistics in Table 3; a sketch of this
testing procedure is given after the table. As can be seen, we found
significant differences for all emotions in the AffectNet dataset for
DeepPrivacy2, and for all emotions except _Neutral_ and _Anger_ for CIAGAN.
In CK+, we found significant differences for all classes except _Sadness_ and
_Surprise_ for DeepPrivacy2 and _Disgust_ for CIAGAN. In the FACES dataset, we
found significant differences for all classes except _Happy_ and _Fear_ for
DeepPrivacy2 and _Happy_ and _Anger_ for CIAGAN.
| Ours vs. DeepPrivacy2 | Ours vs. CIAGAN |
---|---|---|---
| $p$ | $Z$ | $r$ | $p$ | $Z$ | $r$ | $N$
neutral | <0.001*** | -26.149391 | -0.499194 | 0.119 | -26.183915 | -0.492635 | 2744
anger | <0.001*** | -10.844502 | -0.207023 | 1.000 | -10.331517 | -0.194381 | 2744
contempt | <0.001*** | -26.046682 | -0.497233 | 0.013* | -26.187686 | -0.492706 | 2744
disgust | <0.001*** | -25.431412 | -0.485488 | <0.001*** | -25.441451 | -0.478666 | 2744
fear | <0.001*** | -15.905089 | -0.303630 | <0.001*** | -15.850979 | -0.298227 | 2744
happy | <0.001*** | -12.989478 | -0.247970 | <0.001*** | -12.821720 | -0.241233 | 2744
sadness | <0.001*** | -4.724173 | -0.090185 | <0.001*** | -4.195525 | -0.078936 | 2744
surprise | <0.001*** | -25.921903 | -0.494851 | 0.034* | -26.142871 | -0.491863 | 2744
(a) AffectNet
| Ours vs. DeepPrivacy2 | Ours vs. CIAGAN |
---|---|---|---
| $p$ | $Z$ | $r$ | $p$ | $Z$ | $r$ | $N$
anger | <0.001*** | -3.941178 | -0.477938 | <0.001*** | -3.415688 | -0.414213 | 68
contempt | <0.001*** | -4.326130 | -0.524620 | <0.001*** | -6.495306 | -0.787672 | 68
disgust | <0.001*** | -3.635660 | -0.440889 | 0.158 | -1.411492 | -0.171169 | 68
fear | 0.002** | -3.067397 | -0.371977 | 0.001** | -3.201825 | -0.388278 | 68
happy | <0.001*** | -3.415688 | -0.414213 | <0.001*** | -3.440129 | -0.417177 | 68
sadness | 0.146 | -1.454264 | -0.176355 | <0.001*** | -4.741634 | -0.575008 | 68
surprise | 0.051 | -1.949203 | -0.236376 | <0.001*** | -4.069495 | -0.493499 | 68
(b) CK+
| Ours vs. DeepPrivacy2 | Ours vs. CIAGAN |
---|---|---|---
| $p$ | $Z$ | $r$ | $p$ | $Z$ | $r$ | $N$
neutral | <0.001*** | -6.869167 | -0.469567 | <0.001*** | -4.080480 | -0.278936 | 214
happy | 0.965 | -0.043556 | -0.002977 | 0.102 | -1.633626 | -0.111672 | 214
sadness | <0.001*** | -4.305428 | -0.294313 | 0.018* | -2.366910 | -0.161799 | 214
fear | 0.287 | -1.064641 | -0.072777 | <0.001*** | -8.291628 | -0.566804 | 214
disgust | 0.014* | -2.459535 | -0.168130 | <0.001*** | -8.192387 | -0.560020 | 214
anger | <0.001*** | -6.771028 | -0.462858 | 0.052 | -1.945685 | -0.133004 | 214
(c) FACES
Table 3. Wilcoxon test statistics comparing the per-emotion class probability
distances of GANonymization with those of DeepPrivacy2 and CIAGAN on datasets
(a), (b), and (c). P-values below 0.05 are flagged with one star (*), below
0.01 with two stars (**), and below 0.001 with three stars (***).
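A minimal sketch of the hypothesis-testing procedure described above, using SciPy; the Bonferroni correction is applied per dataset over the number of emotions tested.

```python
# Sketch of the statistical analysis: Shapiro-Wilk normality check, a paired
# Wilcoxon test, and a Bonferroni correction over n_tests emotions.
from scipy import stats

def compare_methods(dist_ours, dist_other, n_tests):
    # dist_ours, dist_other: per-image probability distances for one emotion
    _, p_normal = stats.shapiro(dist_ours)                # normality check
    statistic, p = stats.wilcoxon(dist_ours, dist_other)  # non-normal data
    return statistic, min(p * n_tests, 1.0)               # Bonferroni correction
```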
##### Training Scenario Evaluation.
The full evaluation results for the training scenario can be found in Table 4
in the appendix, whereas the confusion matrices for the single models are
shown in Figure 10 in the appendix. For the AffectNet dataset, the classifier
trained on the original data achieved an overall F1 score of $0.58$. In
contrast, the classifier trained on the data anonymized with GANonymization
achieved an overall F1 score of $0.37$. DeepPrivacy2 led to a worse
performance, reaching only an F1 score of $0.30$, while CIAGAN achieved an F1
score of $0.38$, slightly higher than our method.
On the other datasets, DeepPrivacy2 continues this trend, while CIAGAN falls
behind our method: CK+ (Original data: $0.99$, GANonymization data:
$0.69$, DeepPrivacy2 data: $0.46$, CIAGAN data: $0.62$) and FACES (Original
data: $0.97$, GANonymization data: $0.81$, DeepPrivacy2 data: $0.67$, CIAGAN
data: $0.75$).
#### 4.2.5. Discussion
##### Inference Scenario Evaluation.
The overall results indicate the superior performance of our approach in
preserving facial expressions.
It outperformed DeepPrivacy2 in terms of mean distance for all emotions except
_Fear_ and _Happy_ in AffectNet. However, we did not find statistical evidence
(see Table 3) for the performance differences for _all_ of those classes in
CK+ and FACES (which might be because those two datasets contain substantially
fewer images than AffectNet). The exceptions for _Fear_ and _Happy_ could be
because many predictions for the synthesized images of our approach were
confused between classes (see Figure 10 in the appendix); for example, the
emotions _Happy_ and _Surprise_ were frequently predicted as _Fear_ by our
classification model.
For the images synthesized by CIAGAN, the mean distances are closer to those
of our method. CIAGAN preserved _Fear_ , _Happy_ , and _Sadness_ significantly
better in the AffectNet dataset (see Table 3), and the emotions _Contempt_ ,
_Fear_ , and _Surprise_ also performed significantly better in CK+ judging by
the mean distance. In the FACES dataset, CIAGAN outperformed our method only
for the emotion _Happy_ ; however, the significance test for the FACES dataset
in Table 3 shows that this difference is not statistically significant. An
explanation for the small gap between our method and CIAGAN could be that both
approaches build on facial landmarks, which preserve the facial expression
largely accurately. Our approach, however, uses a considerably larger number
of landmark points, which appears to further enhance the preservation of the
affective state.
##### Training Scenario Evaluation.
Here, we could observe that data obtained through the anonymization methods
led to substantially worse F1 scores for the trained classifiers than the
original data. However, GANonymization still performed best for each dataset,
except for a marginally lower score than CIAGAN on the AffectNet dataset.
### 4.3. Analysis of Facial Feature Anonymization
To better understand which features are being preserved and which are
discarded by GANonymization, we performed an analysis using a pre-trained
model for facial feature classification on the CelebA (Liu et al., 2015)
dataset. By analyzing how the predictions of that model change when applied to
original versus anonymized images, we aim to infer insights about which facial
features our model removes.
#### 4.3.1. Dataset
We chose the CelebA dataset due to its large size of 202,599 face images of
10,177 identities, annotated with 40 binary facial attributes per subject. For
example, those attributes include eyeglasses, hairstyle, hair color, facial
shape, and beard. Thus, this dataset is well suited for analyzing which
attributes might change with our anonymization method. However, it should be
noted that the dataset contains primarily images of young celebrities; as
these are not necessarily a visually representative sample of the general
population, this might influence the analysis. We applied our pre-processing
pipeline with face extraction and face segmentation on the dataset and
obtained a training and validation set of 166,223 and 20,259 images,
respectively.
#### 4.3.2. Setup
Similar to Sections 4.1 and 4.2, the analysis of which traits of the original
face images are removed by our anonymization pipeline is based on an auxiliary
classifier that compares original versus anonymized images. We trained the
same model architecture described in Section 4.2.2, but
this time to classify facial features rather than emotions. The only changes
made to the architecture were matching the output layer to fit the number of
features incorporated in the CelebA dataset, switching to a binary
cross-entropy loss, and changing the output activation function to Sigmoid, as in
this case, we dealt with a multi-label task (i.e., multiple traits can be
present at once). Here, each feature can be interpreted as a facial trait that
is apparent in the model’s face input. Exemplary anonymized images for CelebA
can be seen in Figure 8. The performance of the classification model is
reported in Figure 11 and Table 5 in the appendix.
Figure 8. Sample of synthesized faces based on the CelebA dataset.
#### 4.3.3. Metric
To examine which of those traits get removed, for each trait we take the
subset of images in the original dataset where the classifier predicted that
trait, i.e., the classifier assigned it a probability of $>0.5$. Subsequently,
we assess the portion of anonymized versions of those images where the
classifier did not predict the respective trait.
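A minimal sketch of this metric (ours; `clf`, `originals`, and `anonymized` are hypothetical stand-ins for the trained CelebA classifier and the paired image sets):

```python
import numpy as np

def removal_rates(p_orig, p_anon, threshold=0.5):
    """p_orig, p_anon: (num_images, num_traits) predicted probabilities
    for the original and the anonymized version of each image."""
    present = p_orig > threshold        # trait predicted on the original image
    removed = p_anon <= threshold       # ... but not on the anonymized image
    rates = {}
    for trait in range(p_orig.shape[1]):
        mask = present[:, trait]        # images where the trait was predicted
        if mask.any():
            rates[trait] = removed[mask, trait].mean()
    return rates  # fraction of images per trait where the trait was removed

# Usage with a trained multi-label classifier `clf` (hypothetical):
# rates = removal_rates(clf.predict_proba(originals), clf.predict_proba(anonymized))
```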
#### 4.3.4. Results
The results are depicted in Figure 9. Here, we ordered the features according
to the percentage of cases in which they were removed. The actual percentages
are listed in Table 6 in the appendix.
Figure 9. The removed categories in % over the total number of available
samples for each category in the CelebA dataset.
#### 4.3.5. Discussion
As can be seen in the results, some traits were removed in 100% of the cases,
whilst others were preserved in almost all images. Traits that refer to head
or facial hair, e.g., _Bald_ , _Gray Hair_ , _Mustache_ , or _Goatee_ are
removed quite frequently. This is not surprising since the only information
that our re-synthesis model can rely on is the landmark representation of the
input face. Also, wearing specific accessories like neckties, hats, or
necklaces is not encoded in landmark representations, resulting in them
getting reliably removed. The _Smiling_ feature, which is highly correlated to
emotional expressions, gets preserved quite well, which again supports our
claim of being able to preserve such expressions. On the other hand, a
surprising observation is that _Heavy Makeup_ and _Wearing Lipstick_ are
predominantly preserved. A possible explanation is the training data of our
GAN model: it was likewise trained on the CelebA dataset, which contains
exclusively celebrity face images. In the world of celebrities, it is common
practice for women to dress up and apply makeup for their appearance at public
events. As the GAN model aims to resemble the data distribution imposed by the
training data, these traits are also apparent in the anonymized versions. The
same goes for features like _No Beard_ or _Young_: the vast majority of
subjects in the CelebA dataset are relatively young and do not wear a beard
(Rudd et al., 2016). Besides that, an interesting observation is that the
_Chubby_ trait was removed in the vast majority of cases where it was
apparent. Intuitively, the facial landmark representation should have covered
that trait, but apparently it did not. The same goes for _Big Nose_ and _Big
Lips_, which were also removed frequently. Removing those traits supports our
approach since they are typical examples of features that could introduce
unfairness and bias into datasets.
The feature _Male_ got removed in 27.62% of the cases. It has to be noted that
there is no _Female_ feature in the dataset, and as such, the absence of the
_Male_ trait is mainly interpreted as the face being female. Therefore, it is
good that the _Male_ trait did not get removed in 100% of the cases, which
would mean that the anonymized versions are _always_ female. Here, a medium
removal rate shows that the gender sometimes changes and sometimes does not,
indicating that it indeed gets diluted by GANonymization.
Finally, the feature _Blurry_ was removed in over 99% of the cases. Although
this trait does not refer to the face itself but to the image quality, it is a
good indicator that the results of GANonymization are of high quality, even if
the original images are not.
## 5\. Conclusion
This research aimed to evaluate the anonymization performance of our method,
GANonymization. Our method is a generative adversarial network (GAN)-based
image-to-image translation approach that anonymizes faces while preserving their
original facial expression. Facial landmarks serve as image input into a
pix2pix architecture to re-synthesize high-quality, anonymized versions of the
input face image.
First, we measured the efficiency of our approach in removing identifiable
facial attributes to increase the anonymity of the given individual face. Our
method proved its anonymization performance in the chosen metric on the WIDER
dataset.
Second, we evaluated the performance regarding preserving emotional facial
expressions on the AffectNet, CK+, and FACES datasets. Our approach
significantly outperformed DeepPrivacy2 in most categories. However,
DeepPrivacy2 significantly outperformed our approach in the emotion _Fear_ and
_Happy_ from the AffectNet dataset. Compared to CIAGAN we could show a
significant improvement in most of the preserved emotional facial expressions
for _Neutral_ , _Anger_ , _Contempt_ (in AffectNet), _Disgust_ , _Fear_ (in
FACES), _Happy_ (in CK+), _Sadness_ (in CK+ and FACES), and _Surprise_ (in
AffectNet). Furthermore, a noticeable quality difference in the image could be
seen between the different methods. Here, our method showed the highest
quality in the synthesized faces.
Last, analyzing facial traits removed by our approach showed that some traits
were eliminated in almost 100% of the cases while others were preserved.
Especially jewelry, clothing, and hair, e.g., _Bald_ , _Gray Hair_ ,
_Mustache_ , or _Goatee_ are removed quite reliably.
In future efforts, training the GAN with a wider variety of facial expressions
and facial traits might help increase the overall performance in preserving
facial expressions, especially by adding more diversity to the generated
faces. Finally, it has to be studied how GANonymization can be transferred to
other, not emotion-related contexts. Therefore, suitable intermediate
representations for the specific tasks have to be investigated.
The medical domain is one field where adapting our approach might have a major
impact. Here, anonymizing patient photos could reduce the sparsity of
available data, which is crucial for that field. Doing so might enhance the
data basis researchers can work with without being restricted by data privacy
regulations.
## References
* Abadi et al. (2016) Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_. 308–318.
* Croft et al. (2021) William L Croft, Jörg-Rüdiger Sack, and Wei Shi. 2021. Obfuscation of images via differential privacy: from facial images to general images. _Peer-to-Peer Networking and Applications_ 14 (2021), 1705–1733.
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009\. ImageNet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_. 248–255. https://doi.org/10.1109/CVPR.2009.5206848
* Deng et al. (2020) Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. 2020. RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Ebner et al. (2010) Natalie C Ebner, Michaela Riediger, and Ulman Lindenberger. 2010\. FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. _Behavior research methods_ 42 (2010), 351–362.
* Firmansyah et al. (2023) Andrian Firmansyah, Tien Fabrianti Kusumasari, and Ekky Novriza Alam. 2023. Comparison of Face Recognition Accuracy of ArcFace, Facenet and Facenet512 Models on Deepface Framework. In _2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE)_. IEEE, 535–539.
* Gruschka et al. (2018) Nils Gruschka, Vasileios Mavroeidis, Kamer Vishi, and Meiko Jensen. 2018. Privacy issues and data protection in big data: a case study analysis under GDPR. In _2018 IEEE International Conference on Big Data (Big Data)_. IEEE, 5027–5033.
* Gupta (2018) Shivam Gupta. 2018\. Facial emotion recognition in real-time and static images. In _2018 2nd International Conference on Inventive Systems and Control (ICISC)_. 553–560. https://doi.org/10.1109/ICISC.2018.8398861
* He et al. (2017) Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. 2017. Mask R-CNN. _CoRR_ abs/1703.06870 (2017). arXiv:1703.06870 http://arxiv.org/abs/1703.06870
* Hu et al. (2022) Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, and Libing Wu. 2022. Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 15014–15023.
* Hukkelås et al. (2022) Håkon Hukkelås, Morten Smebye, Rudolf Mester, and Frank Lindseth. 2022. Realistic Full-Body Anonymization with Surface-Guided GANs. _CoRR_ abs/2201.02193 (2022). arXiv:2201.02193 https://arxiv.org/abs/2201.02193
* Hukkelås and Lindseth (2022) Håkon Hukkelås and Frank Lindseth. 2022. DeepPrivacy2: Towards Realistic Full-Body Anonymization. https://doi.org/10.48550/ARXIV.2211.09454
* Isola et al. (2016) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2016. Image-to-Image Translation with Conditional Adversarial Networks. _CoRR_ abs/1611.07004 (2016). arXiv:1611.07004 http://arxiv.org/abs/1611.07004
* Jourabloo et al. (2015) Amin Jourabloo, Xi Yin, and Xiaoming Liu. 2015. Attribute preserved face de-identification. In _2015 International conference on biometrics (ICB)_. IEEE, 278–285.
* Karras et al. (2018) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018\. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net. https://openreview.net/forum?id=Hk99zCeAb
* Karras et al. (2021) Tero Karras, Samuli Laine, and Timo Aila. 2021. A Style-Based Generator Architecture for Generative Adversarial Networks. _IEEE Trans. Pattern Anal. Mach. Intell._ 43, 12 (2021), 4217–4228. https://doi.org/10.1109/TPAMI.2020.2970919
* Karras et al. (2020) Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2020\. Analyzing and Improving the Image Quality of StyleGAN. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020_. Computer Vision Foundation / IEEE, 8107–8116. https://doi.org/10.1109/CVPR42600.2020.00813
* Kartynnik et al. (2019) Yury Kartynnik, Artsiom Ablavatski, Ivan Grishchenko, and Matthias Grundmann. 2019. Real-time facial surface geometry from monocular video on mobile GPUs. _arXiv preprint arXiv:1907.06724_ (2019).
* Klare et al. (2012) Brendan F Klare, Mark J Burge, Joshua C Klontz, Richard W Vorder Bruegge, and Anil K Jain. 2012\. Face recognition performance: Role of demographic information. _IEEE Transactions on information forensics and security_ 7, 6 (2012), 1789–1801.
* Ko (2018) Byoung Chul Ko. 2018\. A brief review of facial emotion recognition based on visual information. _sensors_ 18, 2 (2018), 401.
* Kuang et al. (2021) Zhenzhong Kuang, Huigui Liu, Jun Yu, Aikui Tian, Lei Wang, Jianping Fan, and Noboru Babaguchi. 2021. Effective de-identification generative adversarial network for face anonymization. In _Proceedings of the 29th ACM International Conference on Multimedia_. 3182–3191.
* Le and Carlsson (2022) Minh-Ha Le and Niklas Carlsson. 2022. StyleID: Identity Disentanglement for Anonymizing Faces. _arXiv preprint arXiv:2212.13791_ (2022).
* Li et al. (2018) Jian Li, Yabiao Wang, Changan Wang, Ying Tai, Jianjun Qian, Jian Yang, Chengjie Wang, Ji-Lin Li, and Feiyue Huang. 2018. DSFD: Dual Shot Face Detector. _CoRR_ abs/1810.10220 (2018). arXiv:1810.10220 http://arxiv.org/abs/1810.10220
* Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015\. Deep Learning Face Attributes in the Wild. In _Proceedings of International Conference on Computer Vision (ICCV)_.
* Liu et al. (2022) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. 2022\. A ConvNet for the 2020s. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. 11976–11986.
* Loshchilov and Hutter (2017) Ilya Loshchilov and Frank Hutter. 2017. Fixing Weight Decay Regularization in Adam. _CoRR_ abs/1711.05101 (2017). arXiv:1711.05101 http://arxiv.org/abs/1711.05101
* Lucey et al. (2010) Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. 2010\. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops_. 94–101. https://doi.org/10.1109/CVPRW.2010.5543262
* Maximov et al. (2020) Maxim Maximov, Ismail Elezi, and Laura Leal-Taixé. 2020\. Ciagan: Conditional identity anonymization generative adversarial networks. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 5447–5456.
* Mertes et al. (2020a) Silvan Mertes, Andreas Margraf, Steffen Geinitz, and Elisabeth André. 2020a. Alternative data augmentation for industrial monitoring using adversarial learning. In _International Conference on Deep Learning Theory and Applications_. Springer, 1–23.
* Mertes et al. (2020b) Silvan Mertes, Andreas Margraf, Christoph Kommer, Steffen Geinitz, and Elisabeth André. 2020b. Data Augmentation for Semantic Segmentation in the Context of Carbon Fiber Defect Detection using Adversarial Learning. In _Proceedings of the 1st International Conference on Deep Learning Theory and Applications, DeLTA 2020, Lieusaint, Paris, France, July 8-10, 2020_ , Ana L. N. Fred and Kurosh Madani (Eds.). ScitePress, 59–67. https://doi.org/10.5220/0009823500590067
* Mollahosseini et al. (2017) Ali Mollahosseini, Behzad Hassani, and Mohammad H. Mahoor. 2017\. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. _CoRR_ abs/1708.03985 (2017). arXiv:1708.03985 http://arxiv.org/abs/1708.03985
* Nasr et al. (2018) Milad Nasr, Reza Shokri, and Amir Houmansadr. 2018. Machine learning with membership privacy using adversarial regularization. In _Proceedings of the 2018 ACM SIGSAC conference on computer and communications security_. 634–646.
* Neverova et al. (2020) Natalia Neverova, David Novotný, Vasil Khalidov, Marc Szafraniec, Patrick Labatut, and Andrea Vedaldi. 2020. Continuous Surface Embeddings. _CoRR_ abs/2011.12438 (2020). arXiv:2011.12438 https://arxiv.org/abs/2011.12438
* Newton et al. (2005) Elaine M Newton, Latanya Sweeney, and Bradley Malin. 2005\. Preserving privacy by de-identifying face images. _IEEE transactions on Knowledge and Data Engineering_ 17, 2 (2005), 232–243.
* Nguyen et al. (2017) Binh T. Nguyen, Minh H. Trinh, Tan V. Phan, and Hien D. Nguyen. 2017. An efficient real-time emotion detection using camera and facial landmarks. In _2017 Seventh International Conference on Information Science and Technology (ICIST)_. 251–255. https://doi.org/10.1109/ICIST.2017.7926765
* Parkhi et al. (2015) O Parkhi, A Vedaldi, and A Zisserman. 2015. Deep face recognition. _BMVC 2015 - Proceedings of the British Machine Vision Conference 2015_ , 1–12.
* Raval et al. (2017) Nisarg Raval, Ashwin Machanavajjhala, and Landon P Cox. 2017\. Protecting visual secrets using adversarial nets. In _2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_. IEEE, 1329–1332.
* Rudd et al. (2016) Ethan M Rudd, Manuel Günther, and Terrance E Boult. 2016\. Moon: A mixed objective optimization network for the recognition of facial attributes. In _Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14_. Springer, 19–35.
* Serengil and Ozpinar (2020) Sefik Ilkin Serengil and Alper Ozpinar. 2020. LightFace: A Hybrid Deep Face Recognition Framework. In _2020 Innovations in Intelligent Systems and Applications Conference (ASYU)_. IEEE, 23–27. https://doi.org/10.1109/ASYU50717.2020.9259802
* Serengil and Ozpinar (2021) Sefik Ilkin Serengil and Alper Ozpinar. 2021. HyperExtended LightFace: A Facial Attribute Analysis Framework. In _2021 International Conference on Engineering and Emerging Technologies (ICEET)_. IEEE, 1–4. https://doi.org/10.1109/ICEET53442.2021.9659697
* Sun et al. (2017) Qianru Sun, Liqian Ma, Seong Joon Oh, Luc Van Gool, Bernt Schiele, and Mario Fritz. 2017\. Natural and Effective Obfuscation by Head Inpainting. _CoRR_ abs/1711.09001 (2017). arXiv:1711.09001 http://arxiv.org/abs/1711.09001
* Wu et al. (2019) Yifan Wu, Fan Yang, Yong Xu, and Haibin Ling. 2019\. Privacy-protective-GAN for privacy preserving face de-identification. _Journal of Computer Science and Technology_ 34 (2019), 47–60.
* Wu et al. (2018) Zhenyu Wu, Zhangyang Wang, Zhaowen Wang, and Hailin Jin. 2018\. Towards privacy-preserving visual recognition via adversarial training: A pilot study. In _Proceedings of the European conference on computer vision (ECCV)_. 606–624.
* Yang et al. (2022) Kaiyu Yang, Jacqueline H Yau, Li Fei-Fei, Jia Deng, and Olga Russakovsky. 2022. A study of face obfuscation in imagenet. In _International Conference on Machine Learning_. PMLR, 25313–25330.
* Yang et al. (2016) Shuo Yang, Ping Luo, Chen-Change Loy, and Xiaoou Tang. 2016\. WIDER FACE: A Face Detection Benchmark. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Yoon et al. (2020) Jinsung Yoon, Lydia N Drumright, and Mihaela Van Der Schaar. 2020\. Anonymization through data synthesis using generative adversarial networks (ads-gan). _IEEE journal of biomedical and health informatics_ 24, 8 (2020), 2378–2388.
* Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In _IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017_. IEEE Computer Society, 2242–2251. https://doi.org/10.1109/ICCV.2017.244
## Appendix A Appendix
(Figure 10 image grid omitted: for AffectNet, CK+, and FACES, confusion matrices of the models trained on Original, GANonymization, DeepPrivacy2, and CIAGAN data.)
Figure 10. For each specified dataset a multi-class classification model was trained. Accordingly, the confusion matrices depict the classification models’ performance on the validation sets. The ”Original” model was trained on the original images from the training split; the ”GANonymization”, ”DeepPrivacy2”, and ”CIAGAN” models were trained on the corresponding synthesized images of the training split.
Table 4. For each specified dataset a multi-class classification model was trained. Accordingly, the classification reports show the classification models’ performance on the validation sets. The ”Original” model was trained on the original images from the training split; the ”GANonymization”, ”DeepPrivacy2”, and ”CIAGAN” models were trained on the corresponding synthesized images of the training split. (P) Precision; (R) Recall; (F1) F1-Score; (N) Support
 | | Original | GANonymization | DeepPrivacy2 | CIAGAN |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Dataset | | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | N
AffectNet | Neutral | 0.49 | 0.41 | 0.45 | 0.43 | 0.08 | 0.14 | 0.26 | 0.14 | 0.18 | 0.30 | 0.34 | 0.32 | 360
Anger | 0.58 | 0.59 | 0.58 | 0.34 | 0.50 | 0.41 | 0.22 | 0.24 | 0.23 | 0.28 | 0.38 | 0.32 | 346
Contempt | 0.48 | 0.67 | 0.56 | 0.37 | 0.28 | 0.32 | 0.28 | 0.23 | 0.25 | 0.41 | 0.27 | 0.33 | 354
Disgust | 0.58 | 0.54 | 0.56 | 0.62 | 0.12 | 0.20 | 0.25 | 0.36 | 0.29 | 0.32 | 0.44 | 0.38 | 357
Fear | 0.65 | 0.62 | 0.63 | 0.32 | 0.69 | 0.44 | 0.38 | 0.39 | 0.38 | 0.54 | 0.20 | 0.29 | 357
Happy | 0.57 | 0.51 | 0.53 | 0.30 | 0.36 | 0.33 | 0.28 | 0.40 | 0.33 | 0.39 | 0.45 | 0.42 | 362
Sadness | 0.68 | 0.77 | 0.72 | 0.48 | 0.74 | 0.58 | 0.41 | 0.49 | 0.45 | 0.60 | 0.61 | 0.60 | 352
Surprise | 0.66 | 0.55 | 0.60 | 0.39 | 0.19 | 0.25 | 0.36 | 0.16 | 0.22 | 0.39 | 0.36 | 0.37 | 337
accuracy | | | 0.58 | | | 0.37 | | | 0.30 | | | 0.38 | 2825
macro avg | 0.59 | 0.58 | 0.58 | 0.41 | 0.37 | 0.33 | 0.31 | 0.30 | 0.29 | 0.40 | 0.38 | 0.38 | 2825
weighted avg | 0.58 | 0.58 | 0.58 | 0.41 | 0.37 | 0.33 | 0.30 | 0.30 | 0.29 | 0.40 | 0.38 | 0.38 | 2825
CK+ | Anger | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | 0.25 | 0.33 | 0.29 | 0.33 | 0.22 | 0.27 | 9
Contempt | 0.75 | 1.00 | 0.86 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.67 | 0.67 | 0.67 | 3
Disgust | 1.00 | 1.00 | 1.00 | 0.73 | 0.65 | 0.69 | 0.73 | 0.47 | 0.57 | 0.58 | 0.65 | 0.61 | 17
Fear | 1.00 | 1.00 | 1.00 | 0.75 | 1.00 | 0.86 | 0.25 | 0.33 | 0.29 | 1.00 | 0.33 | 0.50 | 3
Happy | 1.00 | 1.00 | 1.00 | 0.85 | 1.00 | 0.92 | 0.39 | 0.41 | 0.40 | 0.75 | 0.53 | 0.62 | 17
Sadness | 1.00 | 1.00 | 1.00 | 0.25 | 0.67 | 0.36 | 0.00 | 0.00 | 0.00 | 0.20 | 0.67 | 0.31 | 3
Surprise | 1.00 | 0.94 | 0.97 | 0.78 | 0.88 | 0.82 | 0.55 | 0.75 | 0.63 | 0.88 | 0.94 | 0.91 | 16
accuracy | | | 0.99 | | | 0.69 | | | 0.46 | | | 0.62 | 68
macro avg | 0.96 | 0.99 | 0.97 | 0.48 | 0.60 | 0.52 | 0.31 | 0.33 | 0.31 | 0.63 | 0.57 | 0.55 | 68
weighted avg | 0.99 | 0.99 | 0.99 | 0.62 | 0.69 | 0.65 | 0.45 | 0.46 | 0.44 | 0.67 | 0.62 | 0.62 | 68
FACES | neutral | 0.97 | 0.97 | 0.97 | 0.95 | 0.56 | 0.70 | 0.86 | 0.53 | 0.66 | 0.64 | 0.64 | 0.64 | 36
happy | 0.97 | 1.00 | 0.99 | 0.86 | 1.00 | 0.92 | 0.78 | 0.97 | 0.86 | 0.97 | 1.00 | 0.99 | 36
sadness | 0.94 | 0.94 | 0.94 | 0.70 | 0.72 | 0.71 | 0.44 | 0.58 | 0.50 | 0.65 | 0.67 | 0.66 | 36
fear | 1.00 | 1.00 | 1.00 | 0.92 | 1.00 | 0.96 | 0.64 | 0.88 | 0.74 | 0.78 | 0.94 | 0.85 | 34
disgust | 0.94 | 0.94 | 0.94 | 1.00 | 0.67 | 0.80 | 0.68 | 0.58 | 0.63 | 0.74 | 0.56 | 0.63 | 36
anger | 0.97 | 0.94 | 0.96 | 0.62 | 0.92 | 0.74 | 0.86 | 0.50 | 0.63 | 0.69 | 0.69 | 0.69 | 36
accuracy | | | 0.97 | | | 0.81 | | | 0.67 | | | 0.75 | 214
macro avg | 0.97 | 0.97 | 0.97 | 0.84 | 0.81 | 0.81 | 0.71 | 0.67 | 0.67 | 0.75 | 0.75 | 0.74 | 214
weighted avg | 0.97 | 0.97 | 0.97 | 0.84 | 0.81 | 0.80 | 0.71 | 0.67 | 0.67 | 0.75 | 0.75 | 0.74 | 214
(Figure 11 image grid omitted: confusion matrices of the CelebA multi-label models trained on Original, GANonymization, DeepPrivacy2, and CIAGAN data.)
Figure 11. A multi-label classification model was trained on the CelebA dataset. Accordingly, the confusion matrices depict the classification models’ performance on the validation sets. The ”Original” model was trained on the original images from the training split; the ”GANonymization”, ”DeepPrivacy2”, and ”CIAGAN” models were trained on the corresponding synthesized images of the training split.
Table 5. A multi-label classification model was trained on the CelebA dataset. Accordingly, the classification reports show the classification model’s performance on the validation sets for each label. The ”Original” model was trained on the original images from the training split; the ”GANonymization”, ”DeepPrivacy2”, and ”CIAGAN” models were trained on the corresponding synthesized images of the training split. (P) Precision; (R) Recall; (F1) F1-Score; (N) Support
 | Original | GANonymization | DeepPrivacy2 | CIAGAN |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | N
5 o Clock Shadow | 0.72 | 0.79 | 0.75 | 0.00 | 0.00 | 0.00 | 0.71 | 0.67 | 0.69 | 0.66 | 0.20 | 0.30 | 2345
Arched Eyebrows | 0.74 | 0.69 | 0.72 | 0.75 | 0.40 | 0.52 | 0.63 | 0.49 | 0.55 | 0.65 | 0.59 | 0.62 | 5134
Attractive | 0.79 | 0.86 | 0.83 | 0.87 | 0.47 | 0.61 | 0.77 | 0.81 | 0.79 | 0.78 | 0.79 | 0.79 | 10332
Bags Under Eyes | 0.67 | 0.52 | 0.59 | 0.60 | 0.07 | 0.12 | 0.54 | 0.47 | 0.50 | 0.62 | 0.29 | 0.40 | 4120
Bald | 0.74 | 0.48 | 0.58 | 0.00 | 0.00 | 0.00 | 0.73 | 0.46 | 0.56 | 0.74 | 0.23 | 0.35 | 410
Bangs | 0.84 | 0.86 | 0.85 | 0.83 | 0.05 | 0.10 | 0.81 | 0.86 | 0.84 | 0.84 | 0.85 | 0.85 | 2913
Big Lips | 0.62 | 0.22 | 0.33 | 0.37 | 0.43 | 0.40 | 0.54 | 0.21 | 0.31 | 0.59 | 0.18 | 0.28 | 3044
Big Nose | 0.79 | 0.44 | 0.56 | 0.61 | 0.49 | 0.54 | 0.63 | 0.52 | 0.57 | 0.65 | 0.48 | 0.56 | 4940
Black Hair | 0.78 | 0.75 | 0.76 | 0.65 | 0.13 | 0.21 | 0.76 | 0.72 | 0.74 | 0.68 | 0.80 | 0.74 | 4143
Blond Hair | 0.82 | 0.85 | 0.84 | 0.88 | 0.05 | 0.09 | 0.77 | 0.87 | 0.82 | 0.77 | 0.86 | 0.82 | 3054
Blurry | 0.72 | 0.45 | 0.55 | 0.77 | 0.01 | 0.02 | 0.62 | 0.38 | 0.47 | 0.65 | 0.35 | 0.45 | 929
Brown Hair | 0.68 | 0.64 | 0.66 | 1.00 | 0.00 | 0.00 | 0.70 | 0.56 | 0.62 | 0.74 | 0.42 | 0.53 | 4792
Bushy Eyebrows | 0.79 | 0.67 | 0.73 | 0.93 | 0.00 | 0.01 | 0.58 | 0.45 | 0.51 | 0.73 | 0.43 | 0.54 | 2830
Chubby | 0.68 | 0.48 | 0.57 | 0.62 | 0.04 | 0.07 | 0.50 | 0.50 | 0.50 | 0.63 | 0.29 | 0.40 | 1215
Double Chin | 0.70 | 0.50 | 0.59 | 0.57 | 0.01 | 0.02 | 0.51 | 0.50 | 0.50 | 0.69 | 0.29 | 0.40 | 975
Eyeglasses | 0.97 | 0.96 | 0.97 | 0.64 | 0.23 | 0.34 | 0.90 | 0.86 | 0.88 | 0.84 | 0.45 | 0.58 | 1380
Goatee | 0.81 | 0.69 | 0.75 | 0.60 | 0.01 | 0.01 | 0.79 | 0.61 | 0.69 | 0.67 | 0.17 | 0.28 | 1460
Gray Hair | 0.81 | 0.70 | 0.75 | 1.00 | 0.00 | 0.00 | 0.77 | 0.68 | 0.72 | 0.82 | 0.57 | 0.67 | 966
Heavy Makeup | 0.88 | 0.92 | 0.90 | 0.87 | 0.71 | 0.78 | 0.80 | 0.91 | 0.85 | 0.80 | 0.88 | 0.84 | 7751
High Cheekbones | 0.92 | 0.80 | 0.86 | 0.78 | 0.87 | 0.82 | 0.81 | 0.78 | 0.79 | 0.75 | 0.87 | 0.81 | 8926
Male | 0.97 | 0.98 | 0.98 | 0.91 | 0.85 | 0.88 | 0.94 | 0.95 | 0.94 | 0.94 | 0.93 | 0.93 | 8443
Mouth Slightly Open | 0.94 | 0.94 | 0.94 | 0.73 | 0.96 | 0.83 | 0.84 | 0.69 | 0.76 | 0.86 | 0.92 | 0.89 | 9569
Mustache | 0.72 | 0.49 | 0.59 | 0.00 | 0.00 | 0.00 | 0.52 | 0.34 | 0.41 | 0.42 | 0.04 | 0.08 | 1002
Narrow Eyes | 0.51 | 0.67 | 0.58 | 0.64 | 0.23 | 0.33 | 0.40 | 0.03 | 0.06 | 0.33 | 0.00 | 0.00 | 1491
No Beard | 0.97 | 0.98 | 0.98 | 0.87 | 0.97 | 0.92 | 0.95 | 0.98 | 0.96 | 0.91 | 0.95 | 0.93 | 16326
Oval Face | 0.67 | 0.29 | 0.40 | 0.87 | 0.00 | 0.01 | 0.58 | 0.31 | 0.41 | 0.51 | 0.39 | 0.44 | 5564
Pale Skin | 0.58 | 0.66 | 0.62 | 0.88 | 0.06 | 0.11 | 0.71 | 0.32 | 0.44 | 0.63 | 0.38 | 0.48 | 856
Pointy Nose | 0.65 | 0.45 | 0.53 | 0.74 | 0.02 | 0.04 | 0.55 | 0.34 | 0.42 | 0.61 | 0.24 | 0.35 | 5658
Receding Hairline | 0.64 | 0.43 | 0.52 | 0.60 | 0.01 | 0.02 | 0.64 | 0.41 | 0.50 | 0.54 | 0.43 | 0.48 | 1429
Rosy Cheeks | 0.77 | 0.40 | 0.52 | 1.00 | 0.00 | 0.00 | 0.51 | 0.58 | 0.54 | 0.54 | 0.48 | 0.50 | 1358
Sideburns | 0.84 | 0.75 | 0.79 | 0.00 | 0.00 | 0.00 | 0.84 | 0.65 | 0.73 | 0.75 | 0.20 | 0.32 | 1366
Smiling | 0.95 | 0.90 | 0.92 | 0.83 | 0.94 | 0.88 | 0.86 | 0.80 | 0.83 | 0.83 | 0.91 | 0.87 | 9601
Straight Hair | 0.60 | 0.41 | 0.49 | 0.00 | 0.00 | 0.00 | 0.55 | 0.40 | 0.47 | 0.52 | 0.26 | 0.34 | 4082
Wavy Hair | 0.68 | 0.64 | 0.66 | 0.57 | 0.22 | 0.32 | 0.66 | 0.62 | 0.64 | 0.67 | 0.56 | 0.61 | 5492
Wearing Earrings | 0.77 | 0.59 | 0.67 | 0.80 | 0.01 | 0.02 | 0.72 | 0.58 | 0.64 | 0.76 | 0.46 | 0.57 | 3789
Wearing Hat | 0.87 | 0.89 | 0.88 | 0.87 | 0.14 | 0.25 | 0.84 | 0.88 | 0.86 | 0.89 | 0.82 | 0.86 | 939
Wearing Lipstick | 0.88 | 0.96 | 0.92 | 0.87 | 0.82 | 0.85 | 0.83 | 0.95 | 0.89 | 0.83 | 0.94 | 0.89 | 8860
Wearing Necklace | 0.51 | 0.15 | 0.23 | 0.00 | 0.00 | 0.00 | 0.48 | 0.06 | 0.10 | 0.38 | 0.01 | 0.02 | 2396
Wearing Necktie | 0.60 | 0.29 | 0.39 | 0.00 | 0.00 | 0.00 | 0.54 | 0.29 | 0.38 | 0.57 | 0.09 | 0.16 | 1442
Young | 0.87 | 0.97 | 0.92 | 0.78 | 0.98 | 0.87 | 0.86 | 0.96 | 0.90 | 0.86 | 0.95 | 0.90 | 14821
micro avg | 0.84 | 0.76 | 0.80 | 0.79 | 0.51 | 0.62 | 0.78 | 0.71 | 0.74 | 0.79 | 0.68 | 0.73 | 176143
macro avg | 0.76 | 0.65 | 0.69 | 0.63 | 0.25 | 0.28 | 0.69 | 0.59 | 0.62 | 0.69 | 0.50 | 0.55 | 176143
weighted avg | 0.82 | 0.76 | 0.78 | 0.73 | 0.51 | 0.52 | 0.76 | 0.71 | 0.72 | 0.75 | 0.68 | 0.69 | 176143
samples avg | 0.83 | 0.75 | 0.78 | 0.79 | 0.51 | 0.60 | 0.78 | 0.70 | 0.72 | 0.78 | 0.67 | 0.70 | 176143
Trait | GANonymization
---|---
Bald | 1.000000
Gray Hair | 1.000000
Double Chin | 0.998494
Blurry | 0.996370
Pale Skin | 0.996337
Wearing Hat | 0.993348
Wearing Necktie | 0.992764
Mustache | 0.992661
Chubby | 0.984813
Goatee | 0.973311
Wearing Necklace | 0.972358
Eyeglasses | 0.966012
Sideburns | 0.949251
Big Nose | 0.899965
Receding Hairline | 0.877510
Bags Under Eyes | 0.852971
Big Lips | 0.780942
Wearing Earrings | 0.768467
Black Hair | 0.729177
Bushy Eyebrows | 0.721409
5 o Clock Shadow | 0.636142
Straight Hair | 0.630562
Bangs | 0.620606
Rosy Cheeks | 0.615530
Blond Hair | 0.615213
Pointy Nose | 0.516256
Brown Hair | 0.480853
Wavy Hair | 0.410118
Narrow Eyes | 0.400334
Male | 0.276199
Arched Eyebrows | 0.097596
Mouth Slightly Open | 0.089010
High Cheekbones | 0.083279
Heavy Makeup | 0.054131
Wearing Lipstick | 0.048915
Smiling | 0.046791
Oval Face | 0.044784
No Beard | 0.031004
Attractive | 0.028488
Young | 0.001595
Table 6. The table shows the fraction of removed traits over the total number
of available samples for each trait in the validation set of the CelebA
dataset.
###### Abstract
For tropical $n$-variable polynomials $f,g$ a criterion of containment for
tropical hypersurfaces $Trop(f)\subset Trop(g)$ is provided in terms of their
Newton polyhedra $N(f),N(g)\subset{\mathbb{R}}^{n+1}$. Namely, $Trop(f)\subset
Trop(g)$ iff for every vertex $v$ of $N(g)$ there exist a homothety $t\cdot
N(f),t>0$ and a parallel shift $s:{\mathbb{R}}^{n+1}\to{\mathbb{R}}^{n+1}$
such that $v\in s(t\cdot N(f))\subset N(g)$.
A CRITERION OF CONTAINMENT FOR TROPICAL HYPERSURFACES
Dima Grigoriev
CNRS, Mathématique, Université de Lille, Villeneuve d’Ascq, 59655, France
e-mail: <EMAIL_ADDRESS>
URL: http://en.wikipedia.org/wiki/Dima_Grigoriev
keywords: containment of tropical hypersurfaces, inscribable Newton polyhedra
AMS classification: 14T05
## Introduction
Consider a tropical polynomial [6]
$f=\min_{1\leq i\leq k}\\{M_{i}\\},\ M_{i}=\sum_{1\leq j\leq
n}a_{i,j}x_{j}+a_{i,0},\ 0\leq a_{i,j}\in{\mathbb{Z}}\cup\\{\infty\\},\
a_{i,0}\in{\mathbb{R}}\cup\\{\infty\\}.$ (1)
The tropical hypersurface $Trop(f)\subset{\mathbb{R}}^{n}$ consists of points
$(x_{1},\dots,x_{n})$ such that the minimum in (1) is attained at least at two
tropical monomials $M_{i},1\leq i\leq k$.
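For concreteness, here is a minimal sketch (ours, not from the paper) of this membership test; the example polynomial and the numeric tolerance are our own choices:

```python
import numpy as np

# A tropical polynomial as in (1), stored as rows (a_{i,1}, ..., a_{i,n}, a_{i,0}).
def trop_eval(coeffs, x):
    """Values of all monomials a_{i,1} x_1 + ... + a_{i,n} x_n + a_{i,0}."""
    coeffs = np.asarray(coeffs, dtype=float)
    return coeffs[:, :-1] @ np.asarray(x, dtype=float) + coeffs[:, -1]

def in_tropical_hypersurface(coeffs, x, tol=1e-9):
    """x lies in Trop(f) iff the minimum is attained by at least two monomials."""
    values = trop_eval(coeffs, x)
    return np.sum(values <= values.min() + tol) >= 2

# Example: f = min{x1, x2, 0}; Trop(f) is the standard tropical line.
f = [(1, 0, 0), (0, 1, 0), (0, 0, 0)]
print(in_tropical_hypersurface(f, (0.0, 0.0)))   # True: apex (all three equal)
print(in_tropical_hypersurface(f, (2.0, 0.0)))   # True: on the ray x2 = 0 <= x1
print(in_tropical_hypersurface(f, (1.0, 2.0)))   # False: min attained only once
```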
For each $1\leq i\leq k$ consider the ray $\\{(a_{i,1},\dots,a_{i,n},a)\ :\
a_{i,0}\leq a\in{\mathbb{R}}\\}\subset{\mathbb{R}}^{n+1}$ with the apex at the
point $(a_{i,1},\dots,a_{i,n},a_{i,0})$. The convex hull of all these rays for
$1\leq i\leq k$ is the Newton polyhedron $N(f)$. We call rays of this form
vertical, and we call the last coordinate vertical. Note that $N(f)$ contains
edges (of finite length) and vertical rays. Further, by edges we mean just the
edges of finite length.
A point $(x_{1},\dots,x_{n})\in Trop(f)$ iff a parallel shift $H_{x}^{\prime}$
of the hyperplane $H_{x}=\\{(z_{1},\dots,z_{n},x_{1}z_{1}+\cdots+x_{n}z_{n})\
:\ z_{1},\dots,z_{n}\in{\mathbb{R}}\\}\subset{\mathbb{R}}^{n+1}$ has at least
two common points (vertices) with $N(f)$, so that $N(f)$ is located in the
half-space above $H_{x}^{\prime}$ (with respect to the vertical coordinate).
In this case $H_{x}^{\prime}$ has (at least) a common edge with $N(f)$, and we
say that $H_{x}^{\prime}$ supports $N(f)$ at $H_{x}^{\prime}\cap N(f)$.
The goal of the paper is to provide for tropical polynomials $f,g$ an explicit
criterion of containment $Trop(f)\subset Trop(g)$ in terms of Newton polyhedra
$N(f),N(g)$. Note that a criterion of emptiness of a tropical prevariety
$Trop(f_{1},\dots,f_{l})$ is established in [3] (one can treat this as a
tropical weak Nullstellensatz); further developments can be found in [5], [1].
The issue of containment of tropical hypersurfaces is a particular case of an
open problem of a tropical strong Nullstellensatz, i.e. a criterion of a
containment $Trop(f_{1},\dots,f_{l})\subset Trop(g)$. We mention that in [4]
(which improves [2]) a strong Nullstellensatz is provided for systems of min-
plus equations of the form $f=g$ (in terms of congruences of tropical
polynomials). Observe that the family of all tropical prevarieties coincides
with the family of all min-plus prevarieties (and both coincide with the
family of all finite unions of polyhedra given by linear constraints with
rational coefficients [6]). On the hand, the issue of a strong Nullstellensatz
is different for these two types of equations.
## 1 Containment of tropical hypersurfaces and inscribable polyhedra
For a polyhedron $P$ and $0<t\in{\mathbb{R}}$ denote by $t\cdot P$ the
homothety (with some center) of $P$ with the coefficient $t$.
###### Definition 1.1
For polyhedra $P,Q$ we say that $P$ is inscribed in $Q$ at a point $x$ if
$x\in P\subset Q$.
We say that $P\subset{\mathbb{R}}^{n}$ is totally inscribable in $Q$ if for
every vertex $v$ of $Q$ an appropriate parallel shift
$s:{\mathbb{R}}^{n}\to{\mathbb{R}}^{n}$ of the homothety $s(t\cdot P)$ is
inscribed in $Q$ at $v$ for suitable $0<t\in{\mathbb{R}}$.
###### Theorem 1.2
For tropical polynomials $f,\ g$ in $n$ variables, $Trop(f)\subset Trop(g)$
holds iff the Newton polyhedron $N(f)\subset{\mathbb{R}}^{n+1}$ is totally
inscribable in $N(g)$.
###### Remark 1.3
Under the conditions of Theorem 1.2, $s^{\prime}(t_{0}\cdot N(f))$ is inscribed
in $N(g)$ at an arbitrarily chosen point of $N(g)$ (for an appropriate shift
$s^{\prime}$), where $t_{0}$ is the minimum of $t$ (see Definition 1.1) over
all the vertices of $N(g)$ (however, we do not make use of this remark).
Proof of the theorem. First assume that for every vertex $v$ of $N(g)$ there
exists a shift $s$ and $t>0$ such that $s(t\cdot N(f))$ is inscribed in $N(g)$
at $v$. Suppose that $Trop(f)\nsubseteq Trop(g)$; then there exists a
hyperplane ${\mathbb{R}}^{n+1}\supset H\in Trop(f)\setminus Trop(g)$.
Therefore, a parallel shift of $H$ supports $N(g)$ at some vertex $v$ of it. By
the assumption, an appropriate shift $s(t\cdot N(f))$ is inscribed in $N(g)$ at
$v$ for a suitable $t>0$. This contradicts $H\in Trop(f)$, since a
parallel shift of $H$ has a single common point $v$ with $s(t\cdot N(f))$.
This proves that $Trop(f)\subset Trop(g)$.
Now conversely, assume that $Trop(f)\subset Trop(g)$. Denote by
$p:{\mathbb{R}}^{n+1}\twoheadrightarrow{\mathbb{R}}^{n}$ the projection along
the last coordinate. Take a vertex $v$ of $N(g)$. Consider a cone
$C\subset{\mathbb{R}}^{n+1}$ with the apex $v$ being the convex hull of the
rays generated by the edges of $N(g)$ adjacent to $v$ (with the added vertical
ray). Then $N(g)\subset C$. Moreover, there exists a ball
$B\subset{\mathbb{R}}^{n}$ with the center at $p(v)$ such that $p^{-1}(B)\cap
N(g)=p^{-1}(B)\cap C$.
Choose a hyperplane $H\subset{\mathbb{R}}^{n+1}$ (not containing a vertical
line) such that $H\cap N(g)=\\{v\\}$, hence $H$ supports $N(g)$ at $v$. Take a
vertex $u$ of $N(f)$ for which $H^{\prime}\cap N(f)=\\{u\\}$ where
$H^{\prime}$ is a hyperplane parallel to $H$, and $H^{\prime}$ supports
$N(f)$. Observe that $H^{\prime}\cap N(f)$ is a point since otherwise $H\in
Trop(f)\setminus Trop(g)$.
Pick a sufficiently small $t>0$ such that $s(t\cdot N(f))\subset p^{-1}(B)$,
where the shift $s$ satisfies $s(u_{1})=v$ and $u_{1}$ is the image of $u$
under the homothety (in particular, $v\in s(t\cdot N(f))$). We claim that
$s(t\cdot N(f))\subset C$. Indeed, denote by $H_{1}$ a hyperplane parallel to
$H$ and located above $H$. Denote by
$L_{1},\dots,L_{q}\subset{\mathbb{R}}^{n+1}$ the rays with their common apex
at $v$ containing the edges of $s(t\cdot N(f))$ adjacent to $v$ (with the added
vertical ray), and by $C_{0}\subset{\mathbb{R}}^{n+1}$ the cone generated by
$L_{1},\dots,L_{q}$. Then $s(t\cdot N(f))\subset C_{0}$.
Thus, to justify the claim it suffices to verify that $C_{0}\subset C$.
Suppose the contrary. Denote by $E_{1},\dots,E_{m}$ the rays with their common
apex at $v$ containing edges of $N(g)$ adjacent to $v$ (with the added
vertical ray), in other words $C$ is the convex hull of $E_{1},\dots,E_{m}$.
Denote points $l_{i}:=L_{i}\cap H_{1},1\leq i\leq q,\ e_{j}:=E_{j}\cap
H_{1},1\leq j\leq m$. Consider the convex hull $Q\subset H_{1}$ of the points
$l_{1},\dots,l_{q},e_{1},\dots,e_{m}$. Then a point $l_{i}$ is one of the
vertices of $Q$ for a suitable $1\leq i\leq q$ (according to the supposition).
Therefore, there exists a hyperplane $h\subset H_{1}$ such that $l_{i}\in h$
and all the points $l_{1},\dots,l_{i-1},l_{i+1},\dots,l_{q},e_{1},\dots,e_{m}$
are located in the same one of the two open half-spaces of $H_{1}$ separated by
$h$. Hence the hyperplane $H_{0}\subset{\mathbb{R}}^{n+1}$ spanned by $h$ and
$v$ belongs to $Trop(g)$, while $H_{0}\cap s(t\cdot N(f))=\\{v\\}$, i.e.
$H_{0}\notin Trop(f)$ (observe that $H_{0}$ does not contain a vertical line
since the vertical ray lies in $C\cap C_{0}$). The obtained contradiction
verifies that $C_{0}\subset C$, proving the claim.
Finally, we conclude with
$s(t\cdot N(f))=s(t\cdot N(f))\cap p^{-1}(B)\subset C\cap p^{-1}(B)=N(g)\cap
p^{-1}(B)\subset N(g).$
$\Box$
###### Remark 1.4
i) In the proof of Theorem 1.2 we chose a hyperplane $H$ supporting
$N(g)$ at a single vertex $v$ in an arbitrary way. On the other hand, the
subsequent choice of the vertex $u$ of $N(f)$ is unique (independent of the
choice of $H$). Indeed, the space of possible hyperplanes $H$ is connected,
and if it were possible to choose another vertex $u_{1}\neq u$, then for an
appropriate choice $H$ would support $N(f)$ at least at two points, hence
$H\in Trop(f)\setminus Trop(g)$.
ii) It would be interesting to provide a criterion of containment for tropical
prevarieties $Trop(f_{1},\dots,f_{k})\subset Trop(g)$. Note that the latter
problem is NP-hard [7], while one can test whether $Trop(f)\subset Trop(g)$
within polynomial complexity (e.g. relying on Theorem 1.2 and invoking linear
programming).
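As a small illustration of the univariate case (our sketch, not from the paper, using the definition directly rather than Theorem 1.2): for $n=1$, $Trop(f)$ is the finite set of breakpoints of the piecewise-linear function $\min_{i}(a_{i}x+b_{i})$, so the containment test reduces to an inclusion of breakpoint sets.

```python
import itertools

def breakpoints(lines, tol=1e-9):
    """Breakpoints of x -> min_i (a_i*x + b_i); for n = 1 this set is Trop(f)."""
    pts = []
    for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
        if abs(a1 - a2) < tol:
            continue                        # parallel monomials: skip (degenerate)
        x = (b2 - b1) / (a1 - a2)           # where the two lines cross
        y = a1 * x + b1
        if y <= min(a * x + b for a, b in lines) + tol:
            pts.append(x)                   # the crossing realizes the minimum
    return sorted(set(round(x, 9) for x in pts))

def contained(f_lines, g_lines, tol=1e-6):
    """Trop(f) subset of Trop(g), tested on the (finite) breakpoint sets."""
    bg = breakpoints(g_lines)
    return all(any(abs(x - y) < tol for y in bg) for x in breakpoints(f_lines))

# f = min{x, 0}, g = min{2x, x, 0}: Trop(f) = {0} is contained in Trop(g) = {0}
print(contained([(1, 0), (0, 0)], [(2, 0), (1, 0), (0, 0)]))  # True
# g = min{x + 1, 0}: Trop(g) = {-1}, so Trop(f) = {0} is not contained
print(contained([(1, 0), (0, 0)], [(1, 1), (0, 0)]))          # False
```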
## References
* [1] M. Akian, A. Béreau and S. Gaubert. The tropical Nullstellensatz and Positivstellensatz for sparse polynomial systems. ACM Proc. Int. Symp. Symb. Alg. Comput., 43-52, 2023.
* [2] A. Bertram and R. Easton. The tropical Nullstellensatz for congruences. Adv. Math., 308:36-82, 2017.
* [3] D. Grigoriev and V. Podolskii. Tropical effective primary and dual Nullstellensaetze. Discr. Comput. Geometry, 59:507–552, 2018.
* [4] D. Joo and K. Mincheva. Prime congruences of additively idempotent semirings and a Nullstellensatz for tropical polynomials. Selecta Math., 24:2207-2233, 2018.
* [5] D. Maclagan and F. Rincon. Tropical ideals. Compos. Math., 154:640-670, 2018.
* [6] D. Maclagan and B. Sturmfels. Introduction to Tropical Geometry:, volume 161 of Graduate Studies in Mathematics. American Mathematical Society, 2015.
* [7] T. Theobald. On the frontiers of polynomial computations in tropical geometry. J. Symb. Comput., 41:1360-1375, 2006.
# Teleparallel Minkowski Spacetime with Perturbative Approach for Teleparallel
Gravity on a Proper Frame
A. Landry <EMAIL_ADDRESS>, Department of Mathematics and Statistics, Dalhousie University, P.O. Box 15 000, Halifax, Nova Scotia, Canada, B3H 4R2
R. J. van den Hoogen <EMAIL_ADDRESS>, Department of Mathematics and Statistics, St. Francis Xavier University, Antigonish, Nova Scotia, Canada, B2G 2W5
###### Abstract
A complete perturbation theory suitable for teleparallel gravity is developed.
The proposed perturbation scheme takes into account perturbations of the
coframe, the metric, and the spin-connection, while ensuring that the
resulting perturbed system continues to describe a teleparallel gravity
situation. The resulting perturbation scheme can be transformed to one in
which perturbations all take place within the co-frame. A covariant definition
of a teleparallel Minkowski geometry is proposed. We compute the perturbed
field equations for $f(T)$ teleparallel gravity and discuss the stability of
the teleparallel Minkowski geometry within $f(T)$ teleparallel gravity.
Preprint: arXiv:2303.16089
###### Contents
1. I Introduction
2. II Teleparallel Theories of Gravity
1. II.1 Notation
2. II.2 Torsion-Based Theories
3. II.3 Geometrical Framework for Teleparallel Gravity
4. II.4 Linear Transformations and Gauge Choices
5. II.5 Action for $f(T)$ Teleparallel Gravity
6. II.6 Field Equations for $f(T)$ Teleparallel Gravity
3. III Constant Torsion Spacetimes
1. III.0.1 Definition: Minkowski Geometry and Minkowski Spacetime
4. IV Perturbations in Teleparallel Geometries
1. IV.1 Proper Orthonormal Perturbation of the Co-Frame
2. IV.2 Perturbed $f(T)$ Teleparallel Field Equations: General
3. IV.3 Perturbed $f(T)$ Teleparallel Field Equations: Constant Torsion Scalar
4. IV.4 Perturbed $f(T)$ Teleparallel Field Equations: Zero Torsion Scalar
5. IV.5 Perturbed $f(T)$ Teleparallel Field Equations: The Zero Torsion Scalar Perturbation Limit
6. IV.6 Perturbed $f(T)$ Teleparallel Field Equations: Minkowski
5. V Effects of Perturbations and the Minkowski Spacetime Symmetries Conditions for Stability
1. V.1 Rotation/Boost Perturbation in a Minkowski Background
2. V.2 General Linear Perturbation in a Minkowski Background
3. V.3 Perturbations on Trivial Coframes by Each Part of the Perturbation
1. V.3.1 Trace
2. V.3.2 Full Symmetric Perturbation
3. V.3.3 Full Antisymmetric Perturbation
4. V.3.4 A Mixed Situation and Minkowski Spacetime
6. VI Discussion and Conclusions
7. A Perturbed Physical Quantities in Teleparallel Theories
8. B General Perturbed Torsion-Based Field Equation via Linearization
9. C The Derivation of Minkowski Spacetime Symmetries: Conditions for Stability
1. C.1 Rotation/Boost Perturbation
2. C.2 General Linear Perturbation
## I Introduction
There are two major classes of theories for physical phenomena: gravitational
theories and quantized theories [1, 2, 3, 4]. The first class of theories is
used to explain phenomena at the astrophysical scale; for example, General
Relativity (GR) has been very successful in explaining astrophysical phenomena
[5, 6, 7, 8]. The second class of theories concerns phenomena
occurring at the microscopic scale, involving fundamental quantum particles.
Attempts have been made to reconcile the two classes of theories in order to
have a general, all-encompassing theory. A theory that is capable of dealing
with very low-amplitude physical and geometrical quantities, as is the case
for theories based on quantization, is desirable.
Indeed, Quantum Mechanics (QM) as well as Quantum Field Theory (QFT) have
well-established perturbative theories: a potential is perturbed, generating a
correction of the eigenvalues of the energies, as well as corrections to the
wave functions [1, 2, 3, 4]. QM and QFT are well established and have been
used to describe the gravitational corrections of curved spacetimes of
physical phenomena that can occur at the microscopic scale [9, 10, 11, 12].
Unfortunately, this perturbative approach to GR is problematic, primarily
because one requires an identifiable background on which to perform the
perturbations [13]. One can, of course, use gauge invariant variables to
address this challenge.
Recently, there has been a growing interest in the development of teleparallel
gravity as an alternative theory to GR [14, 15, 16, 17, 18, 19, 20, 21].
Teleparallel gravity needs to be better understood and developed in order to
address foundational, physical, and geometrical problems. Here, we will
illuminate some of the challenges and nuances that are present within
perturbative approaches to teleparallel gravity.
Golovnev and Guzman [22] studied a class of perturbations within a geometry
having a Minkowski metric. They applied perturbations to a particular boosted
coframe in which the metric has the Minkowski form and the torsion scalar is
zero, but where the torsion tensor is non-zero. One may argue that any
geometry in which the torsion tensor is non-zero is inherently not a Minkowski
geometry, but this is a matter of definition. In another paper, Jimenez et al.
performed perturbations of Minkowski spacetime in $f(T)$ teleparallel gravity
by using a trivial tetrad and having the perturbations encoded in
infinitesimal Lorentz transformations [23]. Their approach, while correct, is
restrictive when working towards a general perturbation theory within
teleparallel gravity. In ref [24], the authors develop a complete perturbation
theory that can be employed for perturbation analysis in Minkowski and flat
Robertson-Walker-type cosmological spacetimes. Our analysis provides a
different perspective and can be used as a general framework, and therefore,
it complements the work in ref [24].
Recently, within a cosmological setting, Bahamonde et al. [25] investigated
perturbations occurring on a FLRW-type background. They defined a very
specific form for the perturbation compatible with this background. They then
obtained the perturbed field equations. In addition, they investigated the
consequent effects of perturbations on the torsion and on different physical
quantities. Most of the types of perturbations studied lead to the flat FLRW
background case under some precise limits. On the other hand, some
perturbation modes do not propagate, which maintains the strong coupling. This
is the case for the scalar and the pseudo-scalar parts of the perturbations.
This work, too, has a limited scope; hence the need for a more
general theory of perturbations in teleparallel gravity.
Bamba and Cai’s papers focus on Gravitational Waves (GWs) in teleparallel
gravity [26, 27]. GWs are a class of wave-like perturbations of Minkowski
spacetime. These works, too, deal with specific cases of perturbations.
In Bamba [26], the authors work in a Minkowski background to treat
the GWs in teleparallel gravity. In Cai [27], they work in an
FLRW background, thereby generalizing Bamba’s work to GWs
that are compatible with cosmological models. In addition, in [27], they
include the effects of scalar fields in their perturbations. Not only do these
works still deal with specific cases of perturbations, but they move from
the Minkowski background to the FLRW background without providing a general
theory for the Minkowski background. Therefore, a more general
and fundamental theory, applicable to any perturbation and any co-frame in
Minkowski spacetime in teleparallel gravity, is needed.
We begin this paper with a definition of Minkowski geometry and Minkowski
spacetime within teleparallel gravity. Then, we will investigate the effects
of perturbations in teleparallel gravity. Afterwards, we will study the stability
of Minkowski spacetime by using the perturbed quantities and field equations.
In teleparallel gravity, co-frames encode both the gravitational and inertial
effects. Our goal is to explore the perturbations of gravity, and therefore,
we shall carefully construct a perturbative theory that achieves this goal.
Transforming initially to “proper” frames, which encode only the gravitational
effects, and then perturbing all physical quantities while ensuring that the
resulting perturbed theory remains within the class of teleparallel theories
of gravity yields the general allowable form for perturbations within
teleparallel gravity. We will perturb the physical quantities in a way which
maintains the “proper frames”, thus avoiding the challenge of interpreting the
spurious inertial effects that may appear in “non-proper frames” [28, 14, 29, 16, 15].
We want to highlight the effects of perturbations in teleparallel gravity. For
example, in an absolute vacuum, one can highlight the effects of perturbations
modifying this same vacuum; for instance, we will determine the gravitational
Energy-Momentum associated with a perturbation. We will apply this theory of
perturbations in teleparallel gravity to some examples and problems of Physics
[30, 31, 16]. In particular, we will use these coframe perturbations to study
the stability of the Minkowski background and to determine the symmetry
conditions that must be satisfied.
This paper is divided as follows. In Section II, we present a summary of
teleparallel gravity and propose a definition of Minkowski geometry within
teleparallel gravity. In Section IV, we will define the perturbations
maintaining the “proper frames”, the orthonormal framework, and we will also
provide the perturbed Field Equations (FEs). In Section V, we will explore
some coframe perturbations to determine the stability criteria for Minkowski
spacetime. We can also generalize these criteria to null and constant
torsion spacetimes.
## II Teleparallel Theories of Gravity
### II.1 Notation
Greek indices $(\mu,\nu,\dots)$ are employed to represent the spacetime
coordinate indices, while Latin indices $(a,b,\dots)$, are employed to
represent frame or tangent-space indices. As is standard notation, round
parentheses surrounding indices represent symmetrization, while square
brackets represent anti-symmetrization. Any quantity that is computed using a
Levi-Civita connection ${\overset{\circ}{\omega}}{}^{a}_{\phantom{a}b\mu}$
will have a circle above the symbol. A comma will denote a partial derivative.
The metric signature is assumed to be $(-,+,+,+)$.
### II.2 Torsion-Based Theories
Torsion-based theories of gravity are a subclass of Einstein-Cartan theories
[32, 15, 16]. This superclass of theories contains theories based solely on
the curvature, for example, General Relativity, or $f\left(R\right)$ theories
where $R$ is the Ricci curvature scalar. Einstein-Cartan theories of gravity also
contain theories of gravity that are based solely on the torsion, for example,
teleparallel theories of gravity, including New General Relativity [33] and
$f\left(T\right)$ theories where $T$ is the torsion scalar. In addition,
theories of gravity based on both the curvature and torsion scalars
($f\left(R,T\right)$-type) are also subclasses of the Einstein-Cartan theories
of gravity. Recently, there has been an emergence of theories based on non-
metricity ($f\left(Q\right)$-type), although they are less well known [34, 35,
16]. In this paper, we are interested in teleparallel gravity, and in
particular, $f(T)$ teleparallel gravity [14, 15, 17, 18, 19, 20, 16, 29].
### II.3 Geometrical Framework for Teleparallel Gravity
Let $M$ be a $4$-dimensional differentiable manifold with coordinates
$x^{\mu}$. Then, the geometry of the manifold is characterized by the
following three geometrical objects.
* •
The Co-frame: $h^{a}=h^{a}_{\;\;\mu}dx^{\mu}$. This quantity generally encodes
both the gravitational and inertial effects in a gravitational system. The
dual of the co-frame is defined as the vector field
$h_{a}=h_{a}^{~{}\mu}\frac{\partial}{\partial x^{\mu}}$, such that
$h^{a}_{~{}\mu}h_{b}^{~{}\mu}=\delta^{a}_{b}$.
* •
The Gauge Metric: $g_{ab}$. This object expresses the “metric” of the tangent
space, such that $g_{ab}=g(h_{a},h_{b})$. Having a metric allows one to define
the lengths and angles.
* •
The Spin-connection: $\omega^{a}_{\;\;b}=\omega^{a}_{\;\;b\mu}dx^{\mu}$.
Having a connection allows one to “parallel transport”, or equivalently, it
allows one to define a covariant differentiation.
In teleparallel gravity, the co-frame, gauge metric, and spin connection are
restricted and interdependent, characterized by the following two postulates
[14, 15, 16]:
* •
Null Curvature:
$R^{a}_{\;\;b\nu\mu}\equiv\omega^{a}_{~{}b\mu,\nu}-\omega^{a}_{~{}b\nu,\mu}+\omega^{a}_{~{}c\nu}\omega^{c}_{~{}b\mu}-\omega^{a}_{~{}c\mu}\omega^{c}_{~{}b\nu}=0$
(1)
* •
Null Non-Metricity:
$Q_{ab\mu}\equiv-
g_{ab,\mu}+\omega^{c}_{~{}a\mu}g_{cb}+\omega^{c}_{~{}b\mu}g_{ac}=0$ (2)
In teleparallel gravity, the only remaining non-null field strength is the
torsion defined as
$T^{a}_{\phantom{a}\mu\nu}=h^{a}_{\phantom{a}\nu,\mu}-h^{a}_{\phantom{a}\mu,\nu}+\omega^{a}_{\phantom{a}b\mu}h^{b}_{\phantom{a}\nu}-\omega^{a}_{\phantom{a}b\nu}h^{b}_{\phantom{a}\mu}$
(3)
It is now possible to construct a gravitational theory that depends only on
the torsion. However, before proceeding, we illustrate the effects of gauge
transformations on the geometry, and how we can judiciously choose a gauge to
simplify our computations.
### II.4 Linear Transformations and Gauge Choices
From the Principle of Relativity, we impose the requirement that the physical
gravitational system under consideration be invariant under $GL(4,\mathbb{R})$
local linear transformations of the frame. These types of transformations
allow one to pass from one frame of reference to another frame of reference.
For the fundamental geometrical quantities
$\\{h^{a},g_{ab},\omega^{a}_{~{}bc}\\}$, we have the following transformation
rules under a general linear transformation $M^{a}_{~{}b}\in
GL(4,\mathbb{R})$:
$h^{\prime a}_{~{}\mu}=M^{a}_{~{}b}\,h^{b}_{~{}\mu},$ (4)
$g^{\prime}_{ab}=M_{a}^{~{}e}\,M_{b}^{~{}f}\,g_{ef},$ (5)
$\omega^{\prime a}_{\,~{}b\mu}=M^{a}_{~{}e}\,\omega^{e}_{~{}f\mu}\,M_{b}^{~{}f}+M^{a}_{~{}e}\,\partial_{\mu}\,M_{b}^{~{}e}.$ (6)
where $M_{b}^{~{}a}=(M^{-1})^{a}_{~{}b}$ represents the inverse matrix.
Equation (6) shows that the Spin-connection transforms non-homogeneously under
a general linear transformation.
#### Gauge Choices and Teleparallel Gravity
Physical phenomena must respect the principle of Gauge Invariance. The
physical phenomenon must be explainable and valid, regardless of the gauge and
its possible transformations. If this general principle is important for
quantized theories, then this same principle is also important for
teleparallel gravity. Generally, we have a tremendous choice of gauge,
depending on the assumed symmetries of the physical system. However, once we
have made a gauge choice, the consequent field equations describing the theory
must transform covariantly (i.e., they are invariant) under any remaining
gauge freedom.
##### Proper Orthonormal Frame
The Null Curvature postulate guarantees that there exists an element
$M^{a}_{~{}b}\in GL(4,\mathbb{R})$, such that
$\omega^{a}_{~{}b\mu}\equiv(M^{-1})^{a}_{~{}c}\,\partial_{\mu}M^{c}_{~{}b}$
(7)
Since the connection transforms non-homogeneously under local linear
transformations, we can always apply the linear transformation $M^{a}_{~{}b}$
to transform to a proper frame in which $\omega^{a}_{~{}b\mu}=0$. Further,
within this proper frame, given the Null Non-Metricity postulate, it is then
possible to apply a second constant linear transformation to bring the gauge
metric to some desired form. For example, we can transform to a gauge in which
the spin connection is null and the gauge metric is
$g_{ab}=\mathrm{Diag}[-1,1,1,1]$, which we will call a “proper orthonormal
frame”. The only remaining gauge freedom in this case consists of global
(constant) Lorentz transformations.
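As a consistency check of this proper-frame construction, the following sketch (our illustration, using a hypothetical position-dependent rotation) builds the pure-gauge spin connection of Equation (7) and verifies that the curvature of Equation (1) vanishes identically:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]

# A sample position-dependent GL(4,R) element: a rotation by theta(x) in the
# (y,z) plane; any smooth invertible matrix M would serve equally well.
th = sp.Function('theta')(x)
M = sp.Matrix([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, sp.cos(th), -sp.sin(th)],
               [0, 0, sp.sin(th), sp.cos(th)]])
Minv = sp.simplify(M.inv())

# Spin connection of Eq. (7) in matrix form: (omega_mu)^a_b = (M^{-1} d_mu M)^a_b
omega = [Minv * M.diff(c) for c in coords]

# Curvature of Eq. (1): R_{nu mu} = d_nu(omega_mu) - d_mu(omega_nu)
#                                  + omega_nu*omega_mu - omega_mu*omega_nu
for mu in range(4):
    for nu in range(4):
        R = (omega[mu].diff(coords[nu]) - omega[nu].diff(coords[mu])
             + omega[nu]*omega[mu] - omega[mu]*omega[nu])
        assert sp.simplify(R) == sp.zeros(4, 4)
print("Null Curvature postulate satisfied: R^a_{b nu mu} = 0 identically")
```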
##### Orthonormal Frame
If one prefers not to be restricted to a proper frame, then there is more
flexibility. Since the gauge metric is symmetric, we can still always choose
an “orthonormal frame” in which the gauge metric becomes
$g_{ab}=\mathrm{Diag}[-1,1,1,1]$, but where the spin connection may be non-
trivial. Assuming an orthonormal frame, the remaining gauge freedom is
represented by proper orthochronous Lorentz transformations in the
$SO^{+}(1,3)$ subgroup of $GL(4,\mathbb{R})$. Other gauge choices might
include Complex-Null, Half-Null, Angular-Null, and others [17, 18, 19]. In the
orthonormal frame, given the Null Curvature postulate, there exists a
$\Lambda^{a}_{~{}b}\in SO^{+}(1,3)$, such that the spin connection is [36,
37]:
$\omega^{a}_{~{}b\mu}\equiv(\Lambda^{-1})^{a}_{~{}c}\partial_{\mu}(\Lambda^{c}_{~{}b})$
(8)
and given the Null Non-Metricity postulate, we have the restriction
$\omega_{(ab)\mu}=0$.
However, in either choice of gauge, we note that the spin connection,
$\omega^{a}_{~{}b\mu}$, is not a true dynamical variable and that it only
encodes inertial effects present in the choice of frame [14, 15, 17, 18, 19,
20, 16, 29, 28].
### II.5 Action for $f(T)$ Teleparallel Gravity
In principle, one can construct a Lagrangian density from any of the scalars
built from the torsion tensor. One such scalar is [14, 15, 17, 18, 19, 20, 16,
29]:
$\displaystyle
T=\frac{1}{4}T^{a}_{~{}bc}T_{a}^{~{}bc}+\frac{1}{2}T^{a}_{~{}bc}T^{cb}_{~{}~{}a}-T^{a}_{~{}ca}T^{bc}_{~{}~{}b},$
(9)
which we will call “the” torsion scalar $T$. Another related scalar, used for
example in New General Relativity [33], is
$\displaystyle\widetilde{T}=c_{1}T^{a}_{~{}bc}T_{a}^{~{}bc}+c_{2}T^{a}_{~{}bc}T^{cb}_{~{}~{}a}+c_{3}T^{a}_{~{}ca}T^{bc}_{~{}~{}b}$
(10)
Other torsion scalars could be included, but these are not invariant under
$SO^{+}(1,3)$ and include parity-violating terms [33].
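The torsion scalar (9) can be evaluated mechanically once the torsion tensor is projected onto tangent-space indices, $T^{a}_{~{}bc}=T^{a}_{~{}\mu\nu}h_{b}^{~{}\mu}h_{c}^{~{}\nu}$, with the tangent indices moved by $\eta_{ab}$ in an orthonormal frame. A sympy sketch under these assumptions, reusing the sample coframe from above:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)
s = [eta[i, i] for i in range(4)]        # diagonal signs; eta is its own inverse

a = sp.Function('a')(x)
h = sp.diag(1, 1, a, a)                  # sample coframe h^a_mu in a proper frame
hinv = h.inv()                           # inverse coframe, hinv[mu, q] = h_q^mu

# Torsion projected onto tangent-space indices:
# T^p_{qr} = (d_mu h^p_nu - d_nu h^p_mu) h_q^mu h_r^nu
T = [[[sp.simplify(sum(
        (sp.diff(h[p, nu], coords[mu]) - sp.diff(h[p, mu], coords[nu]))
        * hinv[mu, q] * hinv[nu, r]
        for mu in range(4) for nu in range(4)))
      for r in range(4)] for q in range(4)] for p in range(4)]

# The three contractions of Eq. (9), with tangent indices moved by eta
term1 = sum(s[p]*s[q]*s[r] * T[p][q][r]**2
            for p in range(4) for q in range(4) for r in range(4))
term2 = sum(s[q] * T[p][q][r] * T[r][q][p]
            for p in range(4) for q in range(4) for r in range(4))
V = [sum(T[p][q][p] for p in range(4)) for q in range(4)]
term3 = sum(s[q] * V[q]**2 for q in range(4))

Tscalar = sp.simplify(sp.Rational(1, 4)*term1 + sp.Rational(1, 2)*term2 - term3)
print("T =", Tscalar)    # for this sample coframe one finds T = -2*(a'(x)/a(x))**2
```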
Here, we are interested in a particular class of teleparallel gravity
theories, $f(T)$ teleparallel gravity. The action describing the $f(T)$
teleparallel theory of gravity containing matter is [14, 15, 17, 18, 19, 20,
16, 29]:
$S_{f\left(T\right)}=\int d^{4}x\,\left[\frac{h}{2\,\kappa}\,f\left(T\right)+\mathcal{L}_{Matter}\right],$
(11)
where $h=\mbox{Det}\left(h^{a}_{~{}\mu}\right)$ is the determinant of the
vielbein, the parameter $\kappa$ is the gravitational coupling constant, which
contains the physical constants, and $f\left(T\right)$ is an arbitrary
function of the torsion scalar $T$ given by Equation (9).
### II.6 Field Equations for $f(T)$ Teleparallel Gravity
From the action integral expressed by Equation (11), we determine the field
equations by varying with respect to the coframe $h^{a}_{~{}\mu}$ [14, 15, 17,
18, 19, 20, 16, 29]:
$\displaystyle\kappa\,\Theta_{a}^{~{}~{}\mu}=\frac{f_{T}(T)}{h}\,\partial_{\nu}\,\left(h\,S_{a}^{~{}~{}\mu\nu}\right)+f_{TT}(T)\,S_{a}^{~{}~{}\mu\nu}\,\partial_{\nu}T+\frac{f(T)}{2}\,h_{a}^{~{}~{}\mu}-f_{T}(T)\,\left(\omega^{b}_{~{}~{}a\nu}+T^{b}_{~{}~{}a\nu}\right)\,S_{b}^{~{}~{}\mu\nu}.$
The superpotential is defined as [14, 15, 18, 17]:
$S_{a}^{~{}\mu\nu}=\frac{1}{2}\left(T_{a}^{~{}\mu\nu}+T_{~{}~{}a}^{\nu\mu}-T_{~{}~{}a}^{\mu\nu}\right)-h_{a}^{~{}\nu}\,T^{\rho\mu}_{~{}~{}\rho}+h_{a}^{~{}\mu}\,T^{\rho\nu}_{~{}~{}\rho}.$
(13)
The canonical Energy-Momentum is defined as [18]:
$h\,\Theta_{a}^{~{}\mu}\equiv\frac{\delta\mathcal{L}_{Matter}}{\delta
h^{a}_{~{}\mu}}.$ (14)
Now, expressing the field equations (II.6) in terms of the tangent-space
components allows one to split the field equations into symmetric and
antisymmetric parts. The symmetric and antisymmetric parts of the $f(T)$
teleparallel gravity FEs are respectively [17, 18, 19]:
$\displaystyle\kappa\Theta_{\left(ab\right)}\,$ $\displaystyle=$
$\displaystyle\,f_{TT}\left(T\right)\,S_{\left(ab\right)}^{~{}~{}~{}\mu}\,\partial_{\mu}T+f_{T}\left(T\right)\,\overset{\
\circ}{G}_{ab}+\frac{g_{ab}}{2}\,\left[f\left(T\right)-T\,f_{T}\left(T\right)\right],$
$\displaystyle 0\,$ $\displaystyle=$
$\displaystyle\,f_{TT}\left(T\right)\,S_{\left[ab\right]}^{~{}~{}~{}\mu}\,\partial_{\mu}T,$
(15)
where $\overset{\ \circ}{G}_{ab}$ is the Einstein tensor computed from the
Levi-Civita connection of the metric.
We note that with an orthonormal gauge choice, and the consequent invariance
under $SO^{+}(1,3)$ transformations, it can be shown that
$\Theta_{[ab]}=0,$ (16)
and that the metrical energy-momentum $T_{ab}$ and the symmetric part of the
canonical energy-momentum satisfy
$\Theta_{(ab)}=T_{ab}\equiv\frac{1}{2}\frac{\delta\mathcal{L}_{Matter}}{\delta g_{ab}}.$
(17)
## III Constant Torsion Spacetimes
An interesting class of spacetimes is the one leading to a constant torsion
scalar, i.e., $T=T_{0}=\text{Const}$. This class includes Minkowski spacetime,
amongst others. In this case, with $\partial_{\mu}T=0$, Equations (II.6)
simplify, leaving only the symmetric part of the field equations:
$\displaystyle\kappa\Theta_{\left(ab\right)}\,$ $\displaystyle=$
$\displaystyle f_{T}\left(T_{0}\right)\,\overset{\
\circ}{G}_{ab}+\frac{g_{ab}}{2}\,\left[f\left(T_{0}\right)-T_{0}\,f_{T}\left(T_{0}\right)\right].$
(18)
The antisymmetric part of the field equations is identically satisfied.
We can now divide Equation (18) by $f_{T}\left(T_{0}\right)$ to obtain:
$\displaystyle\kappa_{eff}\Theta_{\left(ab\right)}\,$ $\displaystyle=$
$\displaystyle\overset{\
\circ}{G}_{ab}+g_{ab}\,\left[\frac{f\left(T_{0}\right)}{2\,f_{T}\left(T_{0}\right)}-\frac{T_{0}}{2}\right]$
(19) $\displaystyle=$ $\displaystyle\overset{\
\circ}{G}_{ab}+g_{ab}\,\Lambda\left(T_{0}\right).$
where we define the re-scaled gravitational coupling constant
$\kappa_{eff}=\frac{\kappa}{f_{T}\left(T_{0}\right)}$ and an effective
cosmological constant
$\Lambda\left(T_{0}\right)=\frac{f\left(T_{0}\right)}{2\,f_{T}\left(T_{0}\right)}-\frac{T_{0}}{2}$,
both dependent on the value of $T=T_{0}$. We observe that if
$T=T_{0}=\text{Const}$, then the $f(T)$ teleparallel field equations reduce to
those of GR with a re-scaled gravitational coupling and a cosmological
constant.
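As a concrete illustration (our own example, not a choice advocated in the paper), take $f(T)=T+\frac{\alpha}{2}T^{2}$; the following sympy sketch evaluates $\kappa_{eff}$ and $\Lambda(T_{0})$:

```python
import sympy as sp

T, T0, alpha, kappa = sp.symbols('T T_0 alpha kappa')

# A hypothetical choice of f(T); any smooth f with f_T(T_0) != 0 would do
f = T + sp.Rational(1, 2) * alpha * T**2
fT = sp.diff(f, T)

kappa_eff = kappa / fT.subs(T, T0)
Lambda_T0 = sp.simplify(f.subs(T, T0) / (2 * fT.subs(T, T0)) - T0 / 2)

print("kappa_eff  =", kappa_eff)     # kappa/(1 + alpha*T_0)
print("Lambda(T0) =", Lambda_T0)     # -alpha*T_0**2/(4*(1 + alpha*T_0))
# For alpha = 0 (TEGR) one recovers kappa_eff = kappa and Lambda = 0.
```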
Due to its importance in characterizing the Minkowski geometry, we now
consider the case $T_{0}=0$ in more detail.
### Null Torsion Scalar Spacetimes
When $T_{0}=0$, the field equations reduce to:
$\displaystyle\kappa_{eff}\Theta_{\left(ab\right)}\,$ $\displaystyle=$
$\displaystyle\overset{\
\circ}{G}_{ab}+g_{ab}\,\left[\frac{f\left(0\right)}{2\,f_{T}\left(0\right)}\right],$
(20) $\displaystyle=$ $\displaystyle\overset{\
\circ}{G}_{ab}+g_{ab}\,\Lambda\left(0\right).$
where $\kappa_{eff}=\frac{\kappa}{f_{T}\left(0\right)}$ and
$\Lambda\left(0\right)=\frac{f(0)}{2\,f_{T}\left(0\right)}$. If $f(0)\neq 0$,
then the cosmological constant $\Lambda(0)\neq 0$.
#### III.0.1 Definition: Minkowski Geometry and Minkowski Spacetime
Before obtaining the field equations and introducing perturbations on them,
one must clearly define, in a covariant way, what constitutes Minkowski
spacetime in teleparallel gravity. This makes it possible to better understand
the nature and origin of the equations relating the dominant (background)
quantities to the perturbed quantities. This geometry is characterized as
follows:
* •
Maximally symmetric: The Minkowski geometry is invariant under a $G_{10}$
group of transformations [18].
* •
Null Curvature: $R_{~{}b\mu\nu}^{a}=0$
* •
Null Torsion: $T^{a}_{~{}\mu\nu}=0$
* •
Null Non-Metricity: $Q_{ab\mu}=0$
One consequence is that Minkowski geometry is everywhere smooth and free of
singularities. This covariant definition of teleparallel Minkowski geometry
has also been proposed by Beltran et al. [38].
We distinguish between Minkowski geometry and Minkowski spacetime in
teleparallel gravity as follows. Minkowski geometry is defined independently
of any field equations, while Minkowski spacetime is a Minkowski geometry that
is a solution to the teleparallel gravity field equations where the matter
source is a vacuum, $\Theta_{ab}=0$.
If the geometry is Minkowski, then the torsion scalar is identically zero
(note that the converse is not necessarily true). The Einstein tensor
$\overset{\ \circ}{G}_{ab}=0$ and, since the matter source is a vacuum,
$\Theta_{ab}=0$, so the field equations (20) reduce to
$0=\frac{f\left(0\right)}{2}\,g_{ab}.$ (21)
From the field equations (21), if the geometry is Minkowski and
$\Theta_{ab}=0$, then $f(0)=0$. In this case, the solution is a Minkowski
spacetime, a Minkowski geometry that satisfies the field equations in vacuum.
Alternatively, if $f(0)\not=0$, then a solution to the field equations (21)
necessarily requires a non-null $\Theta_{ab}$, and consequently, this
spacetime is not a Minkowski spacetime, even though the geometry is Minkowski.
Of course, the non-trivial $\Theta_{ab}$ can be interpreted as the energy
density of the vacuum. Stated clearly: Minkowski geometry is a solution to the
vacuum $f(T)$ teleparallel gravity field equations only if $f(0)=0$; for
example, $f(T)=T+\alpha T^{2}$ satisfies this criterion, while
$f(T)=T+2\Lambda_{0}$ with $\Lambda_{0}\neq 0$ does not.
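A minimal sympy check of this criterion for the two illustrative choices of $f$ just mentioned (our examples):

```python
import sympy as sp

T, alpha, Lam = sp.symbols('T alpha Lambda_0')

f1 = T + alpha * T**2        # f(0) = 0: Minkowski geometry is a vacuum solution
f2 = T + 2 * Lam             # f(0) != 0: it is not

for f in (f1, f2):
    print(f, " -> f(0) =", f.subs(T, 0),
          "; vacuum Minkowski solution:", f.subs(T, 0) == 0)
```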
## IV Perturbations in Teleparallel Geometries
### IV.1 Proper Orthonormal Perturbation of the Co-Frame
As described earlier, a teleparallel geometry is characterized in general by
the triplet of quantities: the co-frame one-form $h^{a}$, the spin connection
one-form $\omega^{a}_{~{}b}$, and the metric tensor field $g_{ab}$, subject to
the two constraints of Null Curvature and Null Non-Metricity. As argued
earlier, assuming that the physical system is invariant under
$GL(4,\mathbb{R})$ linear transformations (see also ref. [38]), we can always
choose, even before constructing a perturbative theory, a “proper orthonormal
frame” as our background without loss of generality:
${h}^{a}={h}^{a}_{~{}\mu}dx^{\mu},\qquad{\omega}^{a}_{~{}b}=0,\qquad{g}_{ab}=\eta_{ab}=\mathrm{Diag}[-1,1,1,1].$
(22)
Now, we apply a perturbation to all three quantities, as follows:
$h^{\prime a}={h}^{a}+\delta h^{a},\qquad\omega^{\prime
a}_{~{}b}=\delta\omega^{a}_{~{}b},\qquad g^{\prime}_{ab}=\eta_{ab}+\delta
g_{ab}$ (23)
The perturbed geometry is no longer expressed in a proper orthonormal frame.
The perturbed system is only proper if $\delta\omega^{a}_{~{}b}=0$, and
orthonormal if $\delta g_{ab}=0$. However, we shall show that we can always
transform to a proper orthonormal perturbation scheme.
We note that the perturbed geometry given by the triplet $\\{h^{\prime
a},\omega^{\prime a}_{~{}b},g^{\prime}_{ab}\\}$ must still satisfy the Null
Curvature and Null Non-Metricity constraints or else one is moving outside of
the theory of teleparallel gravity. In general, the perturbations $\delta
h^{a}$, $\delta\omega^{a}_{~{}b}$, and $\delta g_{ab}$ are not all
independent. The Null Curvature constraint for the perturbed connection
$\omega^{\prime a}_{~{}b}$ implies that there exists some local linear
transformation $L^{a}_{~{}b}\in GL(4,\mathbb{R})$, such that
$\delta\omega^{a}_{~{}b}=(L^{-1})^{a}_{~{}c}dL^{c}_{~{}b}$ (24)
where $d$ indicates the exterior derivative. This means that we can apply this
general linear transformation to the perturbed system to express it in a
perturbed proper frame
$\bar{h}^{\prime a}=L^{a}_{~{}b}({h}^{b}+\delta
h^{b}),\qquad\bar{\omega}^{\prime
a}_{~{}b}=0,\qquad\bar{g}^{\prime}_{ab}=(L^{-1})^{c}_{~{}a}(L^{-1})^{d}_{~{}b}(\eta_{cd}+\delta
g_{cd})$ (25)
where we have used a bar to indicate that we are now in a proper frame.
The Null Non-Metricity condition applied to this “perturbed proper frame” (25)
means that $\bar{g}^{\prime}_{ab}$ is a symmetric matrix of constants, which
can be diagonalized. That is, there exists a matrix $P^{a}_{~{}b}\in
GL(4,\mathbb{R})$ of constants such that
$\bar{g}^{\prime}_{ab}=(P^{-1})^{c}_{~{}a}(P^{-1})^{d}_{~{}b}\eta_{cd}$.
Applying this constant transformation $P^{a}_{~{}b}$ to the “perturbed proper
frame” (25), we obtain a “perturbed proper orthonormal frame” without loss of
generality:
$\displaystyle\hat{h}^{\prime a}$ $\displaystyle=$ $\displaystyle
P^{a}_{~{}b}\bar{h}^{\prime b}=P^{a}_{~{}b}L^{b}_{~{}c}({h}^{c}+\delta
h^{c}),$ (26a) $\displaystyle\hat{\omega}^{\prime a}_{~{}b}$ $\displaystyle=$
$\displaystyle 0,$ (26b) $\displaystyle\hat{g}_{ab}^{\prime}$ $\displaystyle=$
$\displaystyle\eta_{ab}.$ (26c)
We observe that we can investigate perturbations of teleparallel geometries by
simply looking at perturbations of the co-frame in proper orthonormal frames.
Doing so ensures that the Null Curvature and Null Non-Metricity constraints
are respected. Defining the composition of the two linear transformations as
the matrix $M^{a}_{~{}b}=P^{a}_{~{}c}L^{c}_{~{}b}\in GL(4,\mathbb{R})$, the
“perturbed proper orthonormal frame” becomes
$\hat{h}^{\prime a}=M^{a}_{~{}b}\left({h}^{b}+\delta h^{b}\right),$ (27)
which encodes all possible perturbations within a proper orthonormal
framework. If $M^{a}_{~{}b}=\delta^{a}_{b}$, then the only perturbations are
perturbations in the original proper orthonormal frame. The matrix
$M^{a}_{~{}b}$ encodes the perturbations that took place originally in the
spin connection and metric, but it ensures that the resulting perturbed system
is teleparallel in nature. For completeness, the original perturbations can be
expressed in terms of $M^{a}_{~{}b}$, as
$\delta\omega^{a}_{~{}b}=(M^{-1})^{a}_{~{}c}dM^{c}_{~{}b},\qquad\delta
g_{ab}=(M^{-1})^{c}_{~{}a}(M^{-1})^{d}_{~{}b}\eta_{cd}-\eta_{ab}$ (28)
Now, in a perturbative approach, to the first order, we have that
$\displaystyle M^{a}_{~{}b}$ $\displaystyle\approx$
$\displaystyle\delta^{a}_{b}+\mu^{a}_{~{}b}$ (29) $\displaystyle\delta h^{a}$
$\displaystyle\approx$ $\displaystyle\nu^{a}_{~{}b}h^{b}$ (30)
for some $\mu^{a}_{~{}b}$ and $\nu^{a}_{~{}b}\in\mathfrak{gl}(4,\mathbb{R})$.
Therefore, putting it all together, we have to first order
$\displaystyle\hat{h}^{\prime a}$ $\displaystyle=$ $\displaystyle
h^{a}+(\mu^{a}_{~{}b}+\nu^{a}_{~{}b})h^{b}=h^{a}+\lambda^{a}_{~{}b}h^{b},$
(31a) $\displaystyle\hat{\omega}^{\prime a}_{~{}b}$ $\displaystyle=$
$\displaystyle 0,$ (31b) $\displaystyle\hat{g}_{ab}^{\prime}$ $\displaystyle=$
$\displaystyle\eta_{ab},$ (31c)
where $\lambda^{a}_{~{}b}\in M(4,\mathbb{R})$, the set of $4\times 4$ real-
valued matrices. Perturbations of the independent quantities in teleparallel
geometry can always be transformed to the form (31). The matrix $\lambda$ can
be invariantly decomposed into trace, symmetric trace-free, and anti-symmetric
parts.
For the next section and in the appendix, we will apply the perturbations
$\delta
h^{a}=\lambda^{a}_{~{}b}h^{b},\qquad\delta\omega^{a}_{~{}b}=0,\qquad\delta
g_{ab}=0,$ (32)
to the $f(T)$ teleparallel field equations in a proper orthonormal frame. In
particular, we will look at perturbations of constant scalar torsion
spacetimes.
### IV.2 Perturbed $f(T)$ Teleparallel Field Equations: General
Considering the perturbations of the field equations (II.6), we obtain
$\displaystyle\kappa\left[\Theta_{\left(ab\right)}+\delta\Theta_{(ab)}\right]$
$\displaystyle=$ $\displaystyle f_{TT}\left(T+\delta
T\right)\,\left[S_{(ab)}^{~{}~{}~{}\mu}+\delta
S_{(ab)}^{~{}~{}~{}\mu}\right]\left[\partial_{\mu}T+\partial_{\mu}\left(\delta
T\right)\right]$ (33a) $\displaystyle\quad+f_{T}\left(T+\delta
T\right)\,\left[\overset{\ \circ}{G}_{ab}+\delta\overset{\
\circ}{G}_{ab}\right]$
$\displaystyle\quad+\frac{g_{ab}}{2}\left[f\left(T+\delta
T\right)-\left(T+\delta T\right)\,f_{T}\left(T+\delta T\right)\right],$
$\displaystyle 0$ $\displaystyle=$ $\displaystyle f_{TT}\left(T+\delta
T\right)\,\left[S_{[ab]}^{~{}~{}~{}\mu}+\delta
S_{[ab]}^{~{}~{}~{}\mu}\right]\partial_{\mu}\left(T+\delta T\right),$ (33b)
which to the first order in the perturbations yields
$\displaystyle\kappa\,\delta\Theta_{\left(ab\right)}$ $\displaystyle\approx$
$\displaystyle\left[f_{TTT}\,S_{(ab)}^{~{}~{}~{}\mu}\partial_{\mu}T+f_{TT}\,\left(\overset{\
\circ}{G}_{ab}-\frac{T}{2}\,g_{ab}\right)\right]\,\delta
T+f_{T}\,\delta\overset{\ \circ}{G}_{ab}$ (34a)
$\displaystyle\quad+f_{TT}\left[\delta
S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}T+S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)\right]+O\left(|\delta h|^{2}\right),$ $\displaystyle 0$
$\displaystyle\approx$ $\displaystyle
f_{TTT}\,\left[S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}T\right]\,\delta
T+f_{TT}\,\left[S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}\left(\delta
T\right)+\delta S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}T\right]+O\left(|\delta
h|^{2}\right),$ (34b)
where we no longer explicitly show the functional dependence of $f$ and its
derivatives on $T$.
In Appendix A, the perturbations of the various dependent quantities are
explicitly computed in terms of the perturbations (32), for example $\delta
T$, $\delta S_{[ab]}^{~{}~{}~{}\mu}$, etc. Here, $\delta T$ is given by
Equation (5) and $\delta S_{ab}^{~{}~{}~{}\mu}$ is given by Equation (8).
Equation (34) gives expressions for the perturbations of the matter resulting
from the perturbations of the co-frame, together with constraints on the
perturbations of the antisymmetric part of the superpotential.
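The first-order bookkeeping behind Equation (34a) can be checked schematically by collapsing the tensor structures $S_{(ab)}^{~{}~{}~{}\mu}$, $\overset{\ \circ}{G}_{ab}$, $g_{ab}$ and the gradients to commuting scalar placeholders; the following sympy sketch (an illustration of the expansion only, under these simplifying assumptions) verifies the structure for a generic quartic $f$:

```python
import sympy as sp

eps = sp.Symbol('epsilon')
T, dT, S, dS, G, dG, pT, pdT, g = sp.symbols(
    'T deltaT S deltaS G deltaG pT pdT g')
c = sp.symbols('c0:5')

# A generic quartic stand-in for f(T); the expansion is order-by-order in eps
f = sum(ci * T**i for i, ci in enumerate(c))
fT, fTT, fTTT = (sp.diff(f, T, n) for n in (1, 2, 3))

# Right-hand side of the symmetric equation with T -> T + eps*deltaT,
# S -> S + eps*deltaS, G -> G + eps*deltaG; tensors collapsed to scalars
sub = {T: T + eps*dT}
rhs = (fTT.subs(sub) * (S + eps*dS) * (pT + eps*pdT)
       + fT.subs(sub) * (G + eps*dG)
       + g/2 * (f.subs(sub) - (T + eps*dT) * fT.subs(sub)))

first_order = sp.expand(rhs).coeff(eps, 1)
claimed = (fTTT*S*pT + fTT*(G - T*g/2))*dT + fT*dG + fTT*(dS*pT + S*pdT)
assert sp.expand(first_order - claimed) == 0
print("First-order structure of Eq. (34a) verified for a generic quartic f")
```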
### IV.3 Perturbed $f(T)$ Teleparallel Field Equations: Constant Torsion
Scalar
To study the effects of co-frame perturbations in constant torsion scalar
spacetimes, one substitutes $T=T_{0}=\text{Const}$ into Equation (34), so that
$\partial_{\mu}T=0$. Dividing by $f_{T}\left(T_{0}\right)$, Equation (34)
becomes:
$\displaystyle\kappa_{eff}\,\delta\Theta_{(ab)}$ $\displaystyle\approx$
$\displaystyle\delta\overset{\
\circ}{G}_{ab}+\frac{f_{TT}\left(T_{0}\right)}{f_{T}\left(T_{0}\right)}\left[S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)+\delta T\,\left(\overset{\
\circ}{G}_{ab}-\frac{T_{0}}{2}\,g_{ab}\right)\right]+O\left(|\delta
h|^{2}\right),$ (35a) $\displaystyle 0$ $\displaystyle\approx$
$\displaystyle\left(\frac{f_{TT}\left(T_{0}\right)}{f_{T}\left(T_{0}\right)}\right)S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}(\delta
T)+O\left(|\delta h|^{2}\right),$ (35b)
where $\kappa_{eff}=\frac{\kappa}{f_{T}\left(T_{0}\right)}$. In general,
$S_{[ab]}^{~{}~{}\mu}\neq 0$, and therefore the perturbations of the torsion
scalar must be constant. Of course, if some components of
$S_{[ab]}^{~{}~{}\mu}$ vanish, the corresponding $\partial_{\mu}(\delta T)$
may be non-zero.
### IV.4 Perturbed $f(T)$ Teleparallel Field Equations: Zero Torsion Scalar
For spacetimes with a zero torsion scalar, $T=0$, Equations (35a) and (35b)
become:
$\displaystyle\kappa_{eff}\,\delta\Theta_{\left(ab\right)}$
$\displaystyle\approx$ $\displaystyle\delta\overset{\
\circ}{G}_{ab}+\frac{f_{TT}\left(0\right)}{f_{T}\left(0\right)}\left[S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)+\delta T\,\overset{\ \circ}{G}_{ab}\right]+O\left(|\delta
h|^{2}\right),$ (36a) $\displaystyle 0$ $\displaystyle\approx$
$\displaystyle\left(\frac{f_{TT}\left(0\right)}{f_{T}\left(0\right)}\right)S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}(\delta
T)+O\left(|\delta h|^{2}\right),$ (36b)
where $\kappa_{eff}=\frac{\kappa}{f_{T}\left(0\right)}$. As before, in
general $S_{[ab]}^{~{}~{}\mu}\neq 0$, and therefore the perturbations of the
torsion scalar are constant. These equations represent perturbations of non-
Minkowski but zero torsion scalar spacetimes. However, they reduce to
perturbations of the $f(T)$ teleparallel field equations on a teleparallel
Minkowski geometry when $S_{ab}^{~{}~{}\mu}=0$ and $\overset{\
\circ}{G}_{ab}=0$, which are the conditions compatible with a teleparallel
Minkowski spacetime as defined in Section III.0.1.
### IV.5 Perturbed $f(T)$ Teleparallel Field Equations: The Zero Torsion
Scalar Perturbation Limit
We now ask what happens in the restricted perturbation scheme in which only
$\delta T\rightarrow 0$. Starting with Equation (34) and taking the limit
$\delta T\rightarrow 0$, the perturbed field equations become:
$\displaystyle\kappa\,\delta\Theta_{\left(ab\right)}$ $\displaystyle\approx$
$\displaystyle f_{T}\,\delta\overset{\ \circ}{G}_{ab}+f_{TT}\left[\delta
S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}T+S_{(ab)}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)\right]+O\left(|\delta h|^{2}\right),$ (37a) $\displaystyle 0$
$\displaystyle\approx$ $\displaystyle f_{TT}\,\left[\delta
S_{[ab]}^{~{}~{}~{}\mu}\partial_{\mu}T+S_{[ab]}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)\right]+O\left(|\delta h|^{2}\right).$ (37b)
Looking at Equation (37b), given that in general
$S_{[ab]}^{~{}~{}~{}\mu}\not=0$ and $\delta S_{[ab]}^{~{}~{}~{}\mu}\not=0$
(equivalently, that the torsion tensor and its perturbation are non-trivial),
we observe that if the torsion scalar is not constant,
$\partial_{\mu}T\not=0$, then the perturbations of the torsion scalar are also
not constant, that is, $\partial_{\mu}(\delta T)\not=0$. Conversely, if
$\partial_{\mu}T=0$, then $\partial_{\mu}(\delta T)=0$.
### IV.6 Perturbed $f(T)$ Teleparallel Field Equations: Minkowski
For the Minkowski spacetimes, as defined in Section III.0.1, the torsion
tensor is zero by definition, so the superpotential terms
$S_{(ab)}^{~{}~{}~{}\mu}=S_{[ab]}^{~{}~{}~{}\mu}=0$. Further, the Einstein
tensor $\overset{\ \circ}{G}_{ab}=0$ and, as argued before, $f(0)=0$, so that
Equations (36a) and (36b) reduce to:
$\displaystyle\kappa_{eff}\,\delta\Theta_{(ab)}$ $\displaystyle\approx$
$\displaystyle\delta\overset{\ \circ}{G}_{ab}+O\left(|\delta h|^{2}\right),$
(38a) $\displaystyle 0$ $\displaystyle\approx$ $\displaystyle O\left(|\delta
h|^{2}\right).$ (38b)
Equation (38b) for the antisymmetric part of the field equations is
identically satisfied, while Equation (38a) shows that a variation
$\delta\overset{\ \circ}{G}_{ab}$ associated with a perturbation is directly
related to a variation of the energy-momentum tensor
$\delta\Theta_{\left(ab\right)}$. This shows that the perturbations of
Minkowski spacetime as defined in Section III.0.1 for $f(T)$ teleparallel
gravity follow the perturbative treatments of Minkowski spacetime in GR.
## V Effects of Perturbations and the Minkowski Spacetime Symmetry Conditions
for Stability
### V.1 Rotation/Boost Perturbation in a Minkowski Background
We would like to know whether orthonormal coframe perturbations as expressed
by Equation (32) leave a pure Minkowski spacetime background stable. To this
end, we first test stability under the rotation/boost perturbations described
by Equation (32); we then test stability under a translated form of Equation
(32). We finish by studying the effects of the trace, symmetric, and
antisymmetric parts of the perturbation, and their respective impacts on the
torsion and superpotential perturbations.
In fact, Equation (32) in the orthonormal gauge is exactly the rotation/boost
perturbation of Minkowski spacetime. The perturbation is described as follows:
$\delta h^{a}_{\;\;\mu}=\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\mu}.$ (39)
Substituting Equation (9) into Equation (38a), the field equations with the
perturbation (39) become:
$\displaystyle\kappa_{eff}\,\delta\Theta_{(ab)}$ $\displaystyle\approx$
$\displaystyle\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\Bigg{[}h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\delta{\overset{\
\circ}{R}}_{~{}m\alpha\nu}^{k}-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\delta\overset{\
\circ}{R}_{~{}m\alpha\rho}^{k}\Bigg{]}$ $\displaystyle\quad+O\left(|\delta
h|^{2}\right),$ $\displaystyle 0$ $\displaystyle\approx$ $\displaystyle
O\left(|\delta h|^{2}\right).$ (40)
Here, we obtain the perturbed FEs in terms of $\delta\overset{\
\circ}{R}_{~{}m\alpha\rho}^{k}$ and $h^{a}_{\;\;\mu}$. If $\delta\overset{\
\circ}{R}_{~{}m\alpha\nu}^{k}\rightarrow 0$, then
$\delta\Theta_{(ab)}\rightarrow 0$ for Equation (39), as is also required by
GR and TEGR. We might also express Equation (V.1) in terms of
$\lambda^{a}_{\;\;b}$; in any case, we have shown that pure Minkowski
spacetime is stable with respect to the zero curvature criteria, as required
by the teleparallel postulates.
From Equation (5), and by substituting Equation (39), the torsion scalar
perturbation $\delta T$ is expressed by Equation (1) in Appendix C. This last
equation can be summarized as:
$\displaystyle\delta T\rightarrow
0\quad\quad\quad\text{for}\;T^{a}_{~{}\mu\nu}=\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\rightarrow
0.$ (41)
From here, the condition for $\delta T\rightarrow 0$ is the zero torsion
tensor criterion $T^{a}_{~{}~{}\mu\nu}=0$, expressed as:
$\displaystyle\partial_{\mu}\left(h^{a}_{\;\;\nu}\right)\approx\partial_{\nu}\left(h^{a}_{\;\;\mu}\right)$
(42)
From Equation (8), and by substituting Equation (39), the superpotential
perturbation $\delta S_{ab}^{~{}~{}~{}\mu}$ is expressed by Equation (2) in
Appendix C. This equation can be summarized as:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}\rightarrow
0\quad\quad\quad\text{for}\;\delta
T^{a}_{~{}\mu\nu}=\partial_{\mu}\,\left(\lambda^{a}_{~{}c}\,h^{c}_{\;\;\nu}\right)-\partial_{\nu}\,\left(\lambda^{a}_{~{}c}\,h^{c}_{\;\;\mu}\right)\rightarrow
0.$ (43)
From this result, the condition for $\delta S_{ab}^{~{}~{}~{}\mu}\rightarrow
0$ is the zero perturbed torsion tensor criterion $\delta
T^{a}_{~{}~{}\mu\nu}=0$, expressed as:
$\displaystyle\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)\approx\partial_{\nu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\mu}\right).$
(44)
Equation (44) (the zero perturbed torsion criterion) is complementary to
Equation (42) (the zero torsion criterion) for obtaining the limit $\delta
S_{ab}^{~{}~{}~{}\mu}\rightarrow 0$; Equation (42) is applied before Equation
(44). Equations (42) and (44) are thus the two fundamental symmetry conditions
for Minkowski spacetime stability.
If we set $\delta T\rightarrow 0$ and $\delta S_{ab}^{~{}~{}~{}\mu}\rightarrow
0$ in Equations (36a) and (36b) for all zero torsion scalar spacetimes,
Equations (42) and (44) are still respected, as for pure Minkowski spacetimes.
Hence, the zero torsion tensor and zero perturbed torsion tensor criteria
remain valid for all zero torsion scalar spacetimes, Minkowski or not.
Even for the constant torsion scalar spacetimes, setting $\delta T\rightarrow
0$ and $\delta S_{ab}^{~{}~{}~{}\mu}\rightarrow 0$ in Equations (35a) and
(35b) again respects Equations (42) and (44), as for the zero torsion scalar
spacetimes. This further generalizes the Minkowski spacetime result to the
more general class of constant torsion scalar spacetimes.
There are further consequences for Minkowski spacetime in a proper frame.
Applying the null covariant derivative criteria to Equation (39), and using
the result of Equation (3) in Appendix C, we obtain the relation:
$\displaystyle\delta\Gamma^{\rho}_{\;\;\nu\mu}=h_{a}^{\;\;\rho}\left[\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)-\left(h_{c}^{\;\;\sigma}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\sigma}\right)\right],$
(45)
where
$\Gamma^{\rho}_{\;\;\nu\mu}=h_{c}^{\;\;\rho}\,\partial_{\mu}\,h^{c}_{\;\;\nu}$
is the Weitzenbock connection for a proper frame. For trivial coframes such as
$h^{a}_{\;\;\mu}=\delta^{a}_{\;\;\mu}=\mathrm{Diag}[1,1,1,1]$, Equation (45)
becomes:
$\displaystyle\delta\Gamma^{\rho}_{\;\;\nu\mu}=h_{a}^{\;\;\rho}\left[\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)\right]=\delta_{a}^{\;\;\rho}\,\partial_{\mu}\left(\lambda^{a}_{\;\;b}\right)\,\delta^{b}_{\;\;\nu}.$
(46)
In the next subsection, we study the effect on Equations (45) and (46) of a
translation added to the perturbation of Equation (39); the goal is to
determine how such perturbations affect the Weitzenbock connection and its
perturbation.
Equations (V.1)–(46) thus display the effect of the perturbation described by
Equation (39), which maintains the proper frame and respects the
$GL(4,\mathbb{R})$ invariance. In addition, Equations (42) and (44) give the
Minkowski spacetime stability conditions on proper frames for the perturbation
described by Equation (39) [39, 40, 41, 42].
### V.2 General Linear Perturbation in a Minkowski Background
A more general perturbation scheme requires one to deal with the following
general linear perturbation:
$\delta
h^{a}_{\;\;\mu}=\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\mu}+\epsilon^{a}_{\;\;\mu},$
(47)
where $|\lambda^{a}_{\;\;b}|,|\epsilon^{a}_{\;\;\mu}|\ll 1$. This is the
transformation described by Equation (39), superposed with a translation in
the Minkowski tangent space.
For the perturbation (47), Equation (V.1) becomes:
$\displaystyle\kappa_{eff}\,\delta\Theta_{(ab)}$ $\displaystyle\approx$
$\displaystyle\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\Bigg{[}h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\delta{\overset{\
\circ}{R}}^{k}_{~{}m\alpha\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\delta{\overset{\
\circ}{R}}^{k}_{~{}m\alpha\rho}\Bigg{]}$ $\displaystyle\quad+O\left(|\delta
h|^{2}\right),$ $\displaystyle 0$ $\displaystyle\approx$ $\displaystyle
O\left(|\delta h|^{2}\right).$ (48)
Here again, we obtain the perturbed FEs in terms of $\delta{\overset{\
\circ}{R}}^{k}_{~{}m\alpha\rho}$ and $h^{a}_{\;\;\mu}$. As for Equation (V.1),
if $\delta{\overset{\ \circ}{R}}^{k}_{~{}m\alpha\nu}\rightarrow 0$, we still
obtain $\delta\Theta_{(ab)}\rightarrow 0$ for Equation (47), as is also
required by GR and TEGR [39, 40, 41, 42]. We might express Equation (V.2) in
terms of $\lambda^{a}_{\;\;b}$ and $\epsilon^{a}_{\;\;\mu}$. Here again, we
have shown that pure Minkowski spacetime remains stable with respect to the
zero curvature criteria, as required by the teleparallel postulates.
From Equation (5), and by substituting Equation (47), the torsion scalar
perturbation $\delta T$ is expressed by Equation (1) in Appendix C and can be
summarized as:
$\displaystyle\delta T\rightarrow
0\quad\quad\quad\,\text{for}\;T^{a}_{~{}\mu\nu}=\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\rightarrow
0.$ (49)
The condition for $\delta T\rightarrow 0$ is still described by Equation (42),
the zero torsion tensor criterion $T^{a}_{~{}~{}\mu\nu}=0$.
From Equation (8), and by substituting Equation (47), the superpotential
perturbation $\delta S_{ab}^{~{}~{}~{}\mu}$ is expressed by Equation (2) in
Appendix C and can also be summarized as:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}\rightarrow 0.$ (50)
Equation (50) is satisfied if
$\partial_{a}\epsilon_{b}^{\;\;\mu}=\partial_{b}\epsilon_{a}^{\;\;\mu}=0$
holds (a constant translation condition for Equation (47)), after the
criterion of Equation (42) has been applied. The condition for $\delta
S_{ab}^{~{}~{}~{}\mu}\rightarrow 0$ is then still described by Equation (44),
the zero perturbed torsion tensor criterion $\delta T^{a}_{~{}~{}\mu\nu}=0$,
but only if the constant translation criterion is respected:
$\displaystyle\partial_{\mu}\epsilon^{a}_{\;\;\nu}=\partial_{\nu}\epsilon^{a}_{\;\;\mu}=0.$
(51)
Hence, for the perturbation (47), Equations (42) and (44) remain the first two
symmetry conditions for Minkowski spacetime stability, but Equation (51) must
also be respected before Equation (44). A simple translation leaves Equations
(42) and (44) unaffected only if Equation (51) holds, i.e., only if the
translation term $\epsilon^{a}_{\;\;\nu}$ in Equation (47) is constant. This
constant translation criterion, expressed by Equation (51), is a third
symmetry condition for Minkowski spacetime stability.
As for Equations (45) and (46), applying the null covariant derivative
criteria to Equation (47) gives the relation:
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\partial_{\mu}\,\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}+\epsilon^{a}_{\;\;\nu}\right)-\left(h_{c}^{\;\;\rho}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\,\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\rho}+\epsilon^{a}_{\;\;\rho}\right)-\delta\Gamma^{\rho}_{\;\;\nu\mu}h^{a}_{\;\;\rho}$
(52)
$\displaystyle\Rightarrow\delta\Gamma^{\rho}_{\;\;\nu\mu}=h_{a}^{\;\;\rho}\left[\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)-\left(h_{c}^{\;\;\sigma}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\sigma}+\epsilon^{a}_{\;\;\sigma}\right)\right].$
where
$\Gamma^{\sigma}_{\;\;\nu\mu}=h_{c}^{\;\;\sigma}\,\partial_{\mu}\,h^{c}_{\;\;\nu}$
is the Weitzenbock connection for a proper frame and
$\partial_{\mu}\,\epsilon^{a}_{\;\;\nu}=0$ because of the constant translation
criterion.
Equation (52) differs slightly from Equation (45) by the term
$-\left(h_{c}^{\;\;\sigma}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\,\epsilon^{a}_{\;\;\sigma}$.
For non-trivial coframes (i.e., $\partial_{\mu}\,h^{c}_{\;\;\nu}\neq 0$),
Equation (52) is not invariant under Equation (51). For trivial coframes
(i.e., $\partial_{\mu}\,h^{c}_{\;\;\nu}=0$), Equation (52) reduces exactly to
Equation (46), as for the perturbation described by Equation (39). From this
result, we must respect the constant coframe criterion (equivalently, the null
Weitzenbock connection criterion $\Gamma^{\rho}_{\;\;\nu\mu}=0$):
$\displaystyle\partial_{\mu}\,h^{c}_{\;\;\nu}=0.$ (53)
With Equation (53), we also satisfy invariance under Equation (51), the
constant translation criterion, for the Weitzenbock connection perturbation.
Hence, Equations (45) and (52) show that the Weitzenbock connection
perturbation $\delta\Gamma^{\rho}_{\;\;\nu\mu}$ is invariant only if Equation
(53), the constant coframe criterion, is respected. This criterion is a fourth
symmetry condition for Minkowski spacetime stability.
Now, Equations (V.2)–(53) generalize Equations (V.1)–(46) by applying a
constant translation $\epsilon^{a}_{\;\;\nu}$ to the linear transformation
described by Equation (39), maintaining the proper frame and the invariance
under the $GL(4,\mathbb{R})$ transformation. By respecting Equation (51), the
constant translation criterion, we still respect Equations (42) and (44) for
Equation (47), and this generalization shows that Minkowski spacetime and all
zero torsion spacetimes are always stable [39, 40, 41, 42]. However, Equations
(45) and (52), both of which yield Equation (46), show that the Weitzenbock
connection perturbation $\delta\Gamma^{\rho}_{\;\;\nu\mu}$ is invariant only
for constant or trivial coframes respecting Equation (53).
### V.3 Perturbations on Trivial Coframes by Each Part of the Perturbation
Before dealing with more complex coframes, it is instructive to first consider
perturbations of the trivial coframe, defined as:
$h^{a}_{\;\;\mu}=\delta^{a}_{\;\;\mu}=\mathrm{Diag}\left[1,\,1,\,1,\,1\right].$ (54)
The coframe described by Equation (54) is defined in the orthonormal gauge and
respects Equation (53), the fourth symmetry condition for Minkowski spacetime
stability. From there, we study general perturbations applied to Equation (54)
in terms of $\lambda^{a}_{~{}b}$, respecting Equations (42) and (44) and,
where necessary, Equation (51). In addition, we compare with another recent,
similar study of so-called “cosmological” perturbations, in order to situate
our Minkowski spacetime results at a scale factor of $1$ [24]. Their
equivalent $\lambda^{a}_{~{}b}$ matrix is expressed as:
$\displaystyle\left(\lambda^{a}_{~{}b}\right)_{Golov}=\left[\begin{array}[]{cc}\phi&\partial_{a}\,\xi+v_{a}\\ \partial_{i}\,\beta+u_{i}&-\psi\,\delta^{a}_{j}+\partial^{2}_{a\,j}\sigma+\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\end{array}\right],$
(57)
where we must respect the constraints $\partial^{a}\,v_{a}=0$,
$\partial^{k}\,w_{k}=0$, $\partial^{i}\,u_{i}=0$, and $\partial^{a}\,c_{a}=0$,
and the tensorial part is also traceless.
#### V.3.1 Trace
We have first, for a full trace perturbation, the scalar
$\lambda=\mathrm{Trace}\left[\mathrm{Diag}\left[a_{00},\,a_{11},\,a_{22},\,a_{33}\right]\right]=a_{00}+a_{11}+a_{22}+a_{33},$
(58)
so that $\left(\lambda^{a}_{~{}b}\right)_{Trace}=\frac{\lambda}{4}\,\delta^{a}_{~{}b}$.
Equation (39) is then exactly $\left(\delta
h^{a}_{\;\;\mu}\right)_{Trace}=\frac{\lambda}{4}\delta^{a}_{\;\;\mu}$, and by
setting $h^{a}_{\;\;\mu}=\delta^{a}_{\;\;\mu}$, Equations (41) and (43) become
(a symbolic check of the first of these appears at the end of this
subsection):
$\displaystyle\delta T\approx O\left(|\delta h|^{2}\right)\rightarrow 0,$ (59)
which respects Equation (42) and
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}$
$\displaystyle\rightarrow\Bigg{[}\frac{1}{8}\left(\partial_{a}\left(\lambda\,\delta_{b}^{~{}\mu}\right)-\partial_{b}\left(\lambda\,\delta_{a}^{~{}\mu}\right)\right)+\frac{1}{4}\left(\partial_{b}\left(\lambda\,\delta_{~{}a}^{\mu}\right)-\partial_{a}\left(\lambda\,\delta_{~{}b}^{\mu}\right)\right)$
$\displaystyle\quad\quad-\frac{1}{4}\delta_{~{}b}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda\,\delta_{a}^{~{}\rho}\right)-\partial_{a}\,\left(\lambda\,\delta_{c}^{~{}\rho}\right)\right]+\frac{1}{4}\,\delta_{~{}a}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda\,\delta_{b}^{~{}\rho}\right)-\partial_{b}\,\left(\lambda\,\delta_{c}^{~{}\rho}\right)\right]\Bigg{]}$
$\displaystyle\quad\quad+O\left(|\delta h|^{2}\right)\quad\quad\text{by
applying Equation \eqref{436a} (the zero torsion criteria).}$
$\displaystyle\rightarrow
0\quad\quad\quad\quad\quad\quad\quad\,\text{for}\;\delta
T^{a}_{~{}\mu\nu}=\partial_{\mu}\,\left(\lambda\right)\,\delta^{a}_{\;\;\nu}-\partial_{\nu}\,\left(\lambda\right)\,\delta^{a}_{\;\;\mu}\rightarrow
0.$ (60)
Equation (44) will be expressed as:
$\displaystyle\partial_{\mu}\,\left(\lambda\right)\,\delta^{a}_{\;\;\nu}\approx\partial_{\nu}\,\left(\lambda\right)\,\delta^{a}_{\;\;\mu}.$
(61)
By comparing with Equation (57), we obtain the following equations for the
rectangular coordinates [24]:
* •
Equation (58) becomes:
$\displaystyle\left(\lambda^{a}_{~{}b}\right)_{Trace\,Golov}=\lambda_{Golov}=\phi-\psi+\partial^{2}\,\sigma+\frac{h}{2},$
(62)
where $\epsilon_{ajk}=0$ because $a=j$ and $h=Trace(h_{aj})$.
* •
From Equation (58), we obtain as the supplementary constraints:
$\displaystyle\partial_{a}\,\xi+v_{a}=0\quad\text{and}\quad\partial_{i}\,\beta+u_{i}=0$
(63)
* •
Equation (61) will be expressed in terms of Equations (62) and (63):
$\displaystyle\partial_{\mu}\,\left(\lambda_{Golov}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\lambda_{Golov}\right)\,\delta^{a}_{\;\;\mu}$
$\displaystyle\partial_{\mu}\,\left(\phi-\psi+\partial^{2}\,\sigma+\frac{h}{2}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\phi-\psi+\partial^{2}\,\sigma+\frac{h}{2}\right)\,\delta^{a}_{\;\;\mu}.$
(64)
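Equation (59) can also be checked directly: perturbing the trivial coframe by the pure trace piece and expanding the torsion scalar in the perturbation parameter shows that the first non-vanishing contribution is quadratic. A self-contained sympy sketch (our illustration):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
eps = sp.Symbol('epsilon')
eta = sp.diag(-1, 1, 1, 1)
s = [eta[i, i] for i in range(4)]

# Trivial coframe plus the pure trace piece: h^a_mu = (1 + eps*lam/4) delta^a_mu
lam = sp.Function('lam')(t, x, y, z)
h = (1 + eps * lam / 4) * sp.eye(4)
hinv = h.inv()

# Torsion in tangent-space indices, the building blocks of Eq. (9)
T = [[[sum((sp.diff(h[p, nu], coords[mu]) - sp.diff(h[p, mu], coords[nu]))
           * hinv[mu, q] * hinv[nu, r]
           for mu in range(4) for nu in range(4))
      for r in range(4)] for q in range(4)] for p in range(4)]

term1 = sum(s[p]*s[q]*s[r] * T[p][q][r]**2
            for p in range(4) for q in range(4) for r in range(4))
term2 = sum(s[q] * T[p][q][r] * T[r][q][p]
            for p in range(4) for q in range(4) for r in range(4))
V = [sum(T[p][q][p] for p in range(4)) for q in range(4)]
term3 = sum(s[q] * V[q]**2 for q in range(4))
Tscalar = sp.Rational(1, 4)*term1 + sp.Rational(1, 2)*term2 - term3

# No O(eps) piece survives: delta T -> 0 at first order, consistent with Eq. (59)
assert sp.simplify(Tscalar.diff(eps).subs(eps, 0)) == 0
print("delta T = O(epsilon^2) for the pure trace perturbation")
```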
#### V.3.2 Full Symmetric Perturbation
For the purely symmetric perturbation, we have as $\lambda^{a}_{~{}b}$ a
perturbation with null diagonal components:
$\left(\lambda^{a}_{~{}b}\right)_{Sym}=\tilde{\lambda}^{a}_{~{}b}=\left[\begin{array}[]{cccc}0&b_{10}&b_{20}&b_{30}\\ b_{10}&0&b_{12}&b_{13}\\ b_{20}&b_{12}&0&b_{23}\\ b_{30}&b_{13}&b_{23}&0\end{array}\right].$ (65)
Equation (39) is then exactly $\left(\delta
h^{a}_{\;\;\mu}\right)_{Sym}=\tilde{\lambda}^{a}_{\;\;b}\,\delta^{b}_{\;\;\mu}$,
and by setting $h^{a}_{\;\;\mu}=\delta^{a}_{\;\;\mu}$, Equation (41) is still
expressed by Equation (59), respecting Equation (42); Equation (43) becomes:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}=$
$\displaystyle\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\tilde{\lambda}_{b}^{~{}c}\,\delta_{c}^{~{}\mu}\right)-\partial_{b}\left(\tilde{\lambda}_{a}^{~{}c}\,\delta_{c}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\tilde{\lambda}_{~{}a}^{c}\,\delta_{~{}c}^{\mu}\right)-\partial_{a}\left(\tilde{\lambda}_{~{}b}^{c}\,\delta_{~{}c}^{\mu}\right)\right)$
$\displaystyle\quad\quad-\delta_{~{}b}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\tilde{\lambda}_{a}^{~{}f}\delta_{f}^{~{}\rho}\right)-\partial_{a}\,\left(\tilde{\lambda}_{c}^{~{}f}\delta_{f}^{~{}\rho}\right)\right]+\delta_{~{}a}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\tilde{\lambda}_{b}^{~{}f}\delta_{f}^{~{}\rho}\right)-\partial_{b}\,\left(\tilde{\lambda}_{c}^{~{}f}\delta_{f}^{~{}\rho}\right)\right]\Bigg{]}$
$\displaystyle\quad\quad+O\left(|\delta h|^{2}\right)\quad\quad\text{by
applying Equation \eqref{436a} (the zero torsion criteria).}$
$\displaystyle\rightarrow
0\quad\quad\quad\quad\quad\quad\quad\,\text{for}\;\delta
T^{a}_{~{}\mu\nu}=\partial_{\mu}\,\left(\tilde{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\nu}-\partial_{\nu}\,\left(\tilde{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\mu}\rightarrow
0.$ (66)
Equation (44) will be expressed as:
$\displaystyle\partial_{\mu}\,\left(\tilde{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\nu}\approx\partial_{\nu}\,\left(\tilde{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\mu}.$
(67)
By comparing with Equation (57) again, we obtain the following equations for
the rectangular coordinates [24]:
* •
Equation (65) becomes:
$\displaystyle\left(\lambda^{a}_{~{}b}\right)_{Sym\,Golov}$ $\displaystyle=$
$\displaystyle\left(\tilde{\lambda}^{a}_{~{}b}\right)_{Golov}$ (70)
$\displaystyle=$
$\displaystyle\left[\begin{array}[]{cc}0&\partial_{a}\,\xi+v_{a}\\ \partial_{a}\,\xi+v_{a}&\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\end{array}\right],$
where $a\neq j\neq k$, $\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)=0$
and $\partial_{a}\,\xi+v_{a}=\partial_{i}\,\beta+u_{i}$ because we have a
symmetric perturbation. As a supplement, we deduce that $\phi=0$ and $\psi=0$
for Equation (65), because of the null diagonal components.
* •
The Equation (67) components will be expressed in terms of Equation (70):
$\displaystyle\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\;\;\mu}$
$\displaystyle\partial_{\mu}\,\left(\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\partial^{2}_{a\,j}\sigma+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\;\;\mu}$
(71)
#### V.3.3 Full Antisymmetric Perturbation
For the full antisymmetric perturbation, we have as $\lambda^{a}_{~{}b}$ a
perturbation with null diagonal components:
$\left(\lambda^{a}_{~{}b}\right)_{AntiSym}=\bar{\lambda}^{a}_{~{}b}=\left[\begin{array}[]{cccc}0&b_{10}&b_{20}&b_{30}\\ -b_{10}&0&b_{12}&b_{13}\\ -b_{20}&-b_{12}&0&b_{23}\\ -b_{30}&-b_{13}&-b_{23}&0\end{array}\right].$ (72)
Equation (39) is then exactly $\left(\delta
h^{a}_{\;\;\mu}\right)_{AntiSym}=\bar{\lambda}^{a}_{\;\;b}\,\delta^{b}_{\;\;\mu}$,
and by setting $h^{a}_{\;\;\mu}=\delta^{a}_{\;\;\mu}$, Equation (41) is still
expressed by Equation (59), respecting Equation (42); Equation (43) becomes:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}=$
$\displaystyle\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\bar{\lambda}_{b}^{~{}c}\,\delta_{c}^{~{}\mu}\right)-\partial_{b}\left(\bar{\lambda}_{a}^{~{}c}\,\delta_{c}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\bar{\lambda}_{~{}a}^{c}\,\delta_{~{}c}^{\mu}\right)-\partial_{a}\left(\bar{\lambda}_{~{}b}^{c}\,\delta_{~{}c}^{\mu}\right)\right)$
$\displaystyle\quad\quad-\delta_{~{}b}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\bar{\lambda}_{a}^{~{}f}\delta_{f}^{~{}\rho}\right)-\partial_{a}\,\left(\bar{\lambda}_{c}^{~{}f}\delta_{f}^{~{}\rho}\right)\right]+\delta_{~{}a}^{\mu}\,\delta_{~{}\rho}^{c}\left[\partial_{c}\,\left(\bar{\lambda}_{b}^{~{}f}\delta_{f}^{~{}\rho}\right)-\partial_{b}\,\left(\bar{\lambda}_{c}^{~{}f}\delta_{f}^{~{}\rho}\right)\right]\Bigg{]}$
$\displaystyle\quad\quad+O\left(|\delta h|^{2}\right)\quad\quad\text{by
applying Equation \eqref{436a} (the zero torsion criteria).}$
$\displaystyle\rightarrow
0\quad\quad\quad\quad\quad\quad\quad\,\text{for}\;\delta
T^{a}_{~{}\mu\nu}=\partial_{\mu}\,\left(\bar{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\nu}-\partial_{\nu}\,\left(\bar{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\mu}\rightarrow
0.$ (73)
Equation (44) will be expressed as:
$\displaystyle\partial_{\mu}\,\left(\bar{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\nu}\approx\partial_{\nu}\,\left(\bar{\lambda}^{a}_{~{}c}\right)\,\delta^{c}_{\;\;\mu}.$
(74)
By still comparing with Equation (57), we obtain the following equations for
the rectangular coordinates [24]:
* •
Equation (72) becomes:
$\displaystyle\left(\lambda^{a}_{~{}b}\right)_{AntiSym\,Golov}$
$\displaystyle=$ $\displaystyle\left(\bar{\lambda}^{a}_{~{}b}\right)_{Golov}$
(77) $\displaystyle=$
$\displaystyle\left[\begin{array}[]{cc}0&\partial_{a}\,\xi+v_{a}\\ -\left(\partial_{a}\,\xi+v_{a}\right)&\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\end{array}\right]$
where $a\neq j\neq k$,
$\partial^{2}_{a\,j}\sigma=-\partial^{2}_{j\,a}\sigma=0$,
$\partial_{j}\,c_{a}=-\partial_{a}\,c_{j}$, $h_{aj}=-h_{ja}$, and
$\partial_{a}\,\xi+v_{a}=-\left(\partial_{i}\,\beta+u_{i}\right)$ because we
have an antisymmetric perturbation. We deduce again that $\phi=0$ and $\psi=0$
for Equation (72), because of the null diagonal components.
* •
The Equation (74) components will be expressed in terms of Equation (77):
$\displaystyle\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{a}_{\;\;\mu}$
$\displaystyle\partial_{\mu}\,\left(\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\;\;\nu}$
$\displaystyle\approx$
$\displaystyle\partial_{\nu}\,\left(\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right)\,\delta^{a}_{\;\;\mu}$
#### V.3.4 A Mixed Situation and Minkowski Spacetime
Here, we treat the most general case, namely the combination of the three
previous kinds:
$\displaystyle\left(\lambda^{a}_{~{}b}\right)_{Mixed}=\lambda^{a}_{~{}b}$
$\displaystyle=$
$\displaystyle\frac{\delta^{a}_{~{}b}}{4}\,\left(\lambda^{a}_{~{}b}\right)_{Trace}+\left(\lambda^{a}_{~{}b}\right)_{Sym}+\left(\lambda^{a}_{~{}b}\right)_{AntiSym},$
(79) $\displaystyle=$
$\displaystyle\frac{\lambda}{4}\delta^{a}_{~{}b}+\tilde{\lambda}^{a}_{~{}b}+\bar{\lambda}^{a}_{~{}b}.$
In general, Equation (79) gives $\left(\lambda^{a}_{~{}b}\right)_{Mixed}$
exactly as Equation (57) when compared to the linear parametrization of ref.
[24]. We then obtain, as the components of Equation (44), the most general
relations for perturbations in the Minkowski background:
$\displaystyle\partial_{\mu}\,\phi\,\delta^{c}_{\;\;\nu}$
$\displaystyle\approx\;\partial_{\nu}\,\phi\,\delta^{c}_{\;\;\mu}$ (80a)
$\displaystyle\partial_{\mu}\,\left(\partial_{a}\,\xi+v_{a}\right)\delta^{c}_{\;\;\nu}$
$\displaystyle\approx\;\partial_{\nu}\,\left(\partial_{a}\,\xi+v_{a}\right)\,\delta^{c}_{\;\;\mu}$
(80b)
$\displaystyle\partial_{\mu}\,\left(\partial_{i}\,\beta+u_{i}\right)\delta^{c}_{\;\;\nu}$
$\displaystyle\approx\;\partial_{\nu}\,\left(\partial_{i}\,\beta+u_{i}\right)\,\delta^{c}_{\;\;\mu}$
(80c)
$\displaystyle\partial_{\mu}\,\left[-\psi\,\delta^{a}_{j}+\partial^{2}_{a\,j}\sigma+\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right]\delta^{c}_{\;\;\nu}$
$\displaystyle\approx\;\partial_{\nu}\,\left[-\psi\,\delta^{a}_{j}+\partial^{2}_{a\,j}\sigma+\epsilon_{ajk}\,\left(\partial_{k}\,s+w_{k}\right)+\partial_{j}\,c_{a}+\frac{h_{aj}}{2}\right]\delta^{c}_{\;\;\mu}.$
(80d)
Equation (39) is then exactly $\left(\delta
h^{a}_{\;\;\mu}\right)_{Mixed}=\lambda^{a}_{\;\;b}\,\delta^{b}_{\;\;\mu}$, and
we recover Equations (41) and (43), respecting Equations (42) and (44), by
superposition. In Equation (79), the first two terms (the trace and symmetric
terms) represent the symmetric part of
$\left(\lambda^{a}_{~{}b}\right)_{Mixed}$, and the last term represents its
antisymmetric part. In every case, we satisfy Equations (42), (44), (51), and
(53), in addition to the energy-momentum stability following from
$\delta\overset{\ \circ}{R}^{k}_{~{}m\alpha\nu}\rightarrow 0$, which leads to
$\delta\Theta_{(ab)}\rightarrow 0$ [39, 40, 41, 42].
For the trivial coframe cases expressed by Equation (54), we verify the
energy-momentum stability through Equation (V.1), and the four other symmetry
conditions stated by Equations (42), (44), (51), and (53) are all satisfied.
Minkowski spacetime is stable under these four symmetry conditions. From these
considerations for pure Minkowski spacetime, we have shown via Equations (V.1)
and (V.2) that $\delta\Theta_{(ab)}\rightarrow 0$ when all of the perturbed
quantities go to zero. It follows that we must have $\Theta_{(ab)}=0$ in a
pure vacuum: the complete absence of a gravitational source.
## VI Discussion and Conclusions
The purpose of this paper is to clarify the meaning of Minkowski and constant
scalar torsion geometries within a teleparallel gravity framework. A
perturbation scheme is developed, which is general and applicable to all
possible teleparallel spacetimes that respect the Null Curvature and Null Non-
Metricity postulates. The perturbation scheme is then applied to different
constant torsion scalar scenarios, with a particular emphasis on perturbations
of the teleparallel Minkowski spacetimes.
We obtained in Section IV the perturbed field equations (perturbed FEs) in
terms of the perturbed torsion scalar $\delta T$ and perturbed superpotential
$\delta S_{ab}^{~{}~{}~{}\mu}$. These two quantities are themselves dependent
on the coframe perturbation $\delta h^{a}_{\;\;\mu}$. The perturbed field
equations make it possible to relate these perturbed quantities to the
perturbation of the energy-momentum $\delta\Theta_{\left(ab\right)}$. This is
analogous to how the unperturbed field equations relate the unperturbed
quantities to the physical energy-momentum $\Theta_{\left(ab\right)}$.
In Section V, we examined the field Equations (V.1) and (V.2) when the
curvature perturbation $\delta{\overset{\ \circ}{R}}^{k}_{~{}m\alpha\nu}$ goes
to zero, and we observed that the energy-momentum perturbation
$\delta\Theta_{\left(ab\right)}$ also goes to zero, as in GR. In GR, it is
known that a curvature perturbation leads to an energy-momentum perturbation;
we have shown that the same occurs for the teleparallel Minkowski spacetime
through Equations (V.1) and (V.2).
We then obtained, via the null torsion tensor and null perturbed torsion
tensor criteria defined by Equations (42) and (44), that the torsion scalar
perturbation $\delta T$ and the superpotential perturbation $\delta
S_{ab}^{~{}~{}~{}\mu}$ go to zero for pure Minkowski spacetime under the
perturbation of Equation (39) (the boost/rotation perturbation). Equations
(42) and (44) are the first two fundamental Minkowski spacetime stability
conditions on proper frames. However, for the more general linear perturbation
defined by Equation (47), the constant translation criterion of Equation (51)
must be respected in order for the superpotential perturbation $\delta
S_{ab}^{~{}~{}~{}\mu}$ to go to zero. This is a third Minkowski spacetime
stability condition that proper frames must respect for the Equation (47)
perturbation. By respecting Equation (51), we then respect Equations (42) and
(44), as for the Equation (39) perturbation.
A further consequence of the Equation (47) perturbation concerns the
Weitzenbock connection perturbation $\delta\Gamma^{\rho}_{\;\;\nu\mu}$.
Equations (45) and (52) have shown that the constant coframe criterion defined
by Equation (53) must be respected. Equation (53) is a fourth Minkowski
spacetime stability condition for proper frames under the Equation (47)
perturbation, ensuring the invariance of the Weitzenbock connection
perturbation.
These steps, applied here to Minkowski spacetime with its stability criteria,
can also be applied to null torsion scalar spacetimes, as well as to constant
torsion scalar spacetimes. Indeed, with the analysis of Sections V.1 and V.2
and the stability criteria obtained for Minkowski spacetime, Equations (36a)
and (36b) for the null torsion scalar spacetimes allow these treatments to be
generalized, yielding in the end the same stability criteria, namely Equations
(42) and (44), and where necessary Equations (51) and (53). The same holds for
the constant torsion scalar spacetimes described by Equations (35a) and (35b)
if we take the limits $\delta T\rightarrow 0$ and $\delta
S_{ab}^{~{}~{}~{}\mu}\rightarrow 0$, as for the Minkowski and null torsion
scalar spacetimes.
One can expand upon the work here on perturbations in covariant teleparallel
gravity to more general teleparallel spacetimes and to broader classes of
teleparallel gravity theories. For example, in the case of static spherically
symmetric teleparallel spacetimes [43, 44] in which the torsion scalar is not
constant, what is the stability of the static spherically symmetric solution?
Further, this perturbation scheme can also be applied to cosmological
geometries in $f(T)$ teleparallel gravity [21], thereby enhancing the previous
work of [24]. Additionally, one can also look at perturbations in other
non-$f(T)$ teleparallel gravity theories.
The current analysis could also shed light on a couple of unresolved
challenges in teleparallel gravity. The first challenge concerns the number of
degrees of freedom (DOF) in 4-dimensional $f(T)$ teleparallel gravity [45, 46,
47, 48]. In [45], the authors employ a Hamiltonian analysis to determine that
$f(T)$ teleparallel gravity has three extra DOF when compared to GR.
Unfortunately, the analysis appears to be flawed in that it is not general: a
diagonal metric was assumed to reach some of the conclusions. Later, Ferraro
and Guzman [46] argued that the number
of extra DOF is 1. However, the analysis appears to be somewhat incomplete and
only applicable to teleparallel gravity geometries in which the torsion scalar
is constant [48]. More recently, the authors of [47] go through a Hamiltonian
analysis to conclude that the number of extra DOF is 3. A couple of challenges
in their results have been identified in [48]. Obviously, this is still an
unresolved problem which requires further investigation. Another unresolved
and complex physical problem is the strong coupling of teleparallel
perturbations. This problem arises as one approaches the Planck scale, where
quantum field effects become non-negligible, particularly for second-order and
higher perturbations. At these scales, the kinetic energy part becomes
dominant compared to the gravity and background parts. This strong coupling
issue with teleparallel perturbations needs further development and
understanding within the covariant $f(T)$ teleparallel gravity framework.
With the material developed in this paper, we now have a more complete
perturbation framework suitable for use in teleparallel gravity, together with
the toolkit needed to study more complex problems in teleparallel gravity.
## Acknowledgments
R.J.v.d.H. is supported by the Natural Sciences and Engineering Research
Council of Canada, and by the W.F. James Chair of Studies in the Pure and
Applied Sciences at St.F.X. A.L. is supported by an AARMS fellowship.
## Abbreviations
The following abbreviations are used in this paper:
FE | Field Equation
GR | General Relativity
TEGR | Teleparallel Equivalent of General Relativity
DOF | Degrees of Freedom
## Appendix A Perturbed Physical Quantities in Teleparallel Theories
To complete the analysis of teleparallel theories and geometries, we perturb
the various physical quantities involved. As explained in Section IV.1, we can
always restrict attention to perturbations of the co-frame alone within a
proper orthonormal gauge:
$\displaystyle\hat{h}^{\prime a}_{~{}\mu}$ $\displaystyle=$ $\displaystyle
h^{a}_{~{}\mu}+\delta h^{a}_{~{}\mu},$ (81a)
$\displaystyle\hat{\omega}^{\prime a}_{~{}\,b\mu}$ $\displaystyle=$
$\displaystyle 0,$ (81b) $\displaystyle\hat{g}^{\prime}_{ab}$ $\displaystyle=$
$\displaystyle\eta_{ab},$ (81c)
where $\delta h^{a}=\delta h^{a}_{~{}\mu}\,dx^{\mu}=\lambda^{a}_{~{}b}h^{b}$.
Here, we apply the coframe perturbations to the main physical and geometrical
quantities involved in Teleparallel Gravity.
1. 1.
The inverse coframe perturbation $\delta h_{a}^{~{}\mu}$: requiring
$\left(h^{a}_{~{}\nu}+\delta h^{a}_{~{}\nu}\right)\left(h_{a}^{~{}\mu}+\delta
h_{a}^{~{}\mu}\right)=\delta_{\nu}^{~{}\mu}$ to first order gives
$\displaystyle h_{a}^{~{}\mu}+\delta h_{a}^{~{}\mu}$ $\displaystyle=$
$\displaystyle\left[\left(\mathbb{1}+\lambda\right)^{-1}\right]_{a}^{~{}b}\,h_{b}^{~{}\mu}\approx
h_{a}^{~{}\mu}-\lambda^{b}_{~{}a}\,h_{b}^{~{}\mu},$
$\displaystyle\Rightarrow\delta h_{a}^{~{}\mu}$ $\displaystyle\approx$
$\displaystyle-\lambda^{b}_{~{}a}\,h_{b}^{~{}\mu}=\lambda_{a}^{~{}b}\,h_{b}^{~{}\mu},$ (82)
the last equality holding for an antisymmetric (rotation/boost) generator
$\lambda_{ab}=-\lambda_{ba}$.
2. 2.
Determinant of the co-frame $h=\text{Det}(h^{a}_{~{}\mu})$: writing
$h^{a}_{~{}\mu}+\delta h^{a}_{~{}\mu}=\left(\delta^{a}_{~{}b}+\lambda^{a}_{~{}b}\right)h^{b}_{~{}\mu}$,
$\displaystyle h+\delta h$ $\displaystyle=$
$\displaystyle\text{Det}(h^{a}_{~{}\mu}+\delta
h^{a}_{~{}\mu})=\text{Det}(\delta^{a}_{~{}b}+\lambda^{a}_{~{}b})\,h$
$\displaystyle\approx$ $\displaystyle h+\lambda^{a}_{~{}a}\,h$
$\displaystyle\Rightarrow\delta h$ $\displaystyle\approx$
$\displaystyle\lambda^{a}_{~{}a}\,h$ (83)
since $\text{Det}(\mathbb{1}+\lambda)=1+\text{Tr}\,\lambda+O(\lambda^{2})$ and
$\lambda^{a}_{~{}a}=\text{Tr}(\lambda^{a}_{~{}b})\ll 1$. For an antisymmetric
(rotation/boost) $\lambda_{ab}$ the trace vanishes, so $\delta h\approx 0$ at
first order. A symbolic check of this expansion and of the inverse-coframe rule
above is sketched at the end of this appendix.
3. 3.
Metric tensor $g_{\mu\nu}$:
$\displaystyle g_{\mu\nu}+\delta g_{\mu\nu}$ $\displaystyle=$
$\displaystyle\eta_{ab}\left[h^{a}_{\;\;\mu}+\delta
h^{a}_{\;\;\mu}\right]\left[h^{b}_{\;\;\nu}+\delta h^{b}_{\;\;\nu}\right],$
$\displaystyle\approx$ $\displaystyle g_{\mu\nu}+\eta_{ab}\left[\delta
h^{a}_{\;\;\mu}h^{b}_{\;\;\nu}+h^{a}_{\;\;\mu}\delta
h^{b}_{\;\;\nu}\right]+O\left(|\delta h|^{2}\right),$
$\displaystyle\Rightarrow\delta g_{\mu\nu}$ $\displaystyle\approx$
$\displaystyle\eta_{ab}\left[\delta
h^{a}_{\;\;\mu}h^{b}_{\;\;\nu}+h^{a}_{\;\;\mu}\delta
h^{b}_{\;\;\nu}\right]+O\left(|\delta h|^{2}\right).$ (84)
4. 4.
Torsion tensor $T^{a}_{\;\;\mu\nu}$ and $T^{\rho}_{\;\;\mu\nu}$:
$\displaystyle T^{a}_{\;\;\mu\nu}+\delta T^{a}_{\;\;\mu\nu}$ $\displaystyle=$
$\displaystyle\partial_{\mu}h^{a}_{\;\;\nu}+\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}h^{a}_{\;\;\mu}-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)$ $\displaystyle=$ $\displaystyle
T^{a}_{\;\;\mu\nu}+\left[\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right]$
$\displaystyle\Rightarrow\delta T^{a}_{\;\;\mu\nu}$ $\displaystyle\approx$
$\displaystyle\left[\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right]+O\left(|\delta h|^{2}\right)$ (85)
If we also have that
$T^{\rho}_{\;\;\mu\nu}=h^{~{}\rho}_{a}\,T^{a}_{\;\;\mu\nu}$, then:
$\displaystyle T^{\rho}_{\;\;\mu\nu}+\delta T^{\rho}_{\;\;\mu\nu}$
$\displaystyle=$ $\displaystyle\left(h^{~{}\rho}_{a}+\delta
h^{~{}\rho}_{a}\right)\left(T^{a}_{\;\;\mu\nu}+\delta
T^{a}_{\;\;\mu\nu}\right)$ $\displaystyle\approx$ $\displaystyle
T^{\rho}_{\;\;\mu\nu}+\delta
h^{~{}\rho}_{a}\,T^{a}_{\;\;\mu\nu}+h^{~{}\rho}_{a}\,\delta
T^{a}_{\;\;\mu\nu}+O\left(|\delta h|^{2}\right)$
$\displaystyle\Rightarrow\delta T^{\rho}_{\;\;\mu\nu}$ $\displaystyle\approx$
$\displaystyle\delta
h^{~{}\rho}_{a}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]+h^{~{}\rho}_{a}\,\left[\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right]+O\left(|\delta h|^{2}\right)$
5. 5.
Torsion scalar $T$:
$\displaystyle T+\delta T$ $\displaystyle=$
$\displaystyle\frac{1}{4}\left(T^{a}_{\;\;\mu\nu}+\delta
T^{a}_{\;\;\mu\nu}\right)\left(T_{a}^{\;\;\mu\nu}+\delta
T_{a}^{\;\;\mu\nu}\right)+\frac{1}{2}\left(T^{a}_{\;\;\mu\nu}+\delta
T^{a}_{\;\;\mu\nu}\right)\left(T^{\nu\mu}_{\;\;a}+\delta
T^{\nu\mu}_{\;\;a}\right)$
$\displaystyle\qquad-\left(T^{\nu}_{\;\;\mu\nu}+\delta
T^{\nu}_{\;\;\mu\nu}\right)\left(T^{\rho\mu}_{\;\;\rho}+\delta
T^{\rho\mu}_{\;\;\rho}\right)$ $\displaystyle=$ $\displaystyle
T+\frac{1}{4}\left(\delta
T^{a}_{\;\;\mu\nu}T_{a}^{\;\;\mu\nu}+T^{a}_{\;\;\mu\nu}\delta
T_{a}^{\;\;\mu\nu}\right)+\frac{1}{2}\left(\delta
T^{a}_{\;\;\mu\nu}T^{\nu\mu}_{\;\;a}+T^{a}_{\;\;\mu\nu}\delta
T^{\nu\mu}_{\;\;a}\right)$ $\displaystyle\qquad-\left(\delta
T^{\nu}_{\;\;\mu\nu}T^{\rho\mu}_{\;\;\rho}+T^{\nu}_{\;\;\mu\nu}\delta
T^{\rho\mu}_{\;\;\rho}\right)+O\left(|\delta h|^{2}\right)$
$\displaystyle\Rightarrow\delta T$ $\displaystyle=$
$\displaystyle\frac{1}{4}\left(\delta
T^{a}_{\;\;\mu\nu}T_{a}^{\;\;\mu\nu}+T^{a}_{\;\;\mu\nu}\delta
T_{a}^{\;\;\mu\nu}\right)+\frac{1}{2}\left(\delta
T^{a}_{\;\;\mu\nu}T^{\nu\mu}_{\;\;a}+T^{a}_{\;\;\mu\nu}\delta
T^{\nu\mu}_{\;\;a}\right)$ (87) $\displaystyle\qquad-\left(\delta
T^{\nu}_{\;\;\mu\nu}T^{\rho\mu}_{\;\;\rho}+T^{\nu}_{\;\;\mu\nu}\delta
T^{\rho\mu}_{\;\;\rho}\right)+O\left(|\delta h|^{2}\right)$
In terms of the torsion perturbations of Equations (85) and (86), Equation (87) becomes:
$\displaystyle\delta T=$
$\displaystyle\frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right)\left(\partial^{\mu}\,h_{a}^{\;\;\nu}-\partial^{\nu}\,h_{a}^{\;\;\mu}\right)+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)$
$\displaystyle\times\,\left(\partial^{\mu}\,\left(\delta
h_{a}^{\;\;\nu}\right)-\partial^{\nu}\,\left(\delta
h_{a}^{\;\;\mu}\right)\right)\Bigg{]}+\frac{1}{2}\Bigg{[}\left(\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right)\left(\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right)$
$\displaystyle+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)\left(\partial^{\nu}\,\left(\delta
h^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\delta
h^{\nu}_{~{}a}\right)\right)\Bigg{]}$ $\displaystyle-\Bigg{[}\left(\delta
h^{~{}\nu}_{a}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]+h^{~{}\nu}_{a}\,\left[\partial_{\mu}\left(\delta
h^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\delta
h^{a}_{\;\;\mu}\right)\right]\right)\left(h^{a}_{~{}\rho}\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)\right)$
$\displaystyle+\left(h^{~{}\nu}_{a}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]\right)\left(\delta
h^{a}_{~{}\rho}\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)+h^{a}_{~{}\rho}\left(\partial^{\rho}\,\left(\delta
h^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\delta
h^{\rho}_{~{}a}\right)\right)\right)\Bigg{]}$ $\displaystyle+O\left(|\delta
h|^{2}\right).$ (88)
6. 6.
Lagrangian density $\mathcal{L}_{Grav}$:
$\displaystyle\mathcal{L}_{Grav}+\delta\mathcal{L}_{Grav}$ $\displaystyle=$
$\displaystyle\frac{1}{2\kappa}\left(h+\delta h\right)\,f\left(T+\delta
T\right),$ $\displaystyle\approx$
$\displaystyle\mathcal{L}_{Grav}+\frac{1}{2\kappa}\left[\delta
h\,f\left(T\right)+h\,f_{T}\left(T\right)\,\delta T\right]+O\left(|\delta
h|^{2}\right),$ $\displaystyle\Rightarrow\delta\mathcal{L}_{Grav}$
$\displaystyle\approx$ $\displaystyle\frac{1}{2\kappa}\left[\delta
h\,f\left(T\right)+h\,f_{T}\left(T\right)\,\delta T\right]+O\left(|\delta
h|^{2}\right).$ (89)
7. 7.
Sum of the Torsion and Ricci Curvature scalar $\overset{\ \circ}{R}+T$: Here,
$\overset{\ \circ}{R}$ is the Ricci scalar computed from the Levi-Civita
connection.
$\displaystyle\delta(\overset{\ \circ}{R}+T)$ $\displaystyle=$
$\displaystyle\,\delta\left[\frac{2}{h}\,\partial_{\mu}\left(h\,T^{\nu\mu}_{\;\;\nu}\right)\right]=2\left[\delta\left(\frac{1}{h}\right)\,\partial_{\mu}\left(h\,T^{\nu\mu}_{\;\;\nu}\right)+\frac{1}{h}\partial_{\mu}\left[\delta\left(h\,T^{\nu\mu}_{\;\;\nu}\right)\right]\right]$
(90) $\displaystyle\approx$ $\displaystyle\frac{2}{h}\,\left[-\frac{\delta
h}{h}(\partial_{\mu}h)\,T^{\nu\mu}_{\;\;\nu}+\left(\partial_{\mu}(\delta
h)\right)\,T^{\nu\mu}_{\;\;\nu}+\left(\partial_{\mu}h\right)\,\delta
T^{\nu\mu}_{\;\;\nu}+h\,\partial_{\mu}\left(\delta
T^{\nu\mu}_{\;\;\nu}\right)\right]$ $\displaystyle+O\left(|\delta
h|^{2}\right)$
By using the perturbed torsion of Equation (86), Equation (90) becomes:
$\displaystyle\delta(\overset{\ \circ}{R}+T)\approx$
$\displaystyle\frac{2}{h}\,\Bigg{[}-\frac{\delta
h}{h}(\partial_{\mu}h)\,\left(h^{a}_{~{}\nu}\left[\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right]\right)+\left(\partial_{\mu}(\delta
h)\right)\,\left(h^{a}_{~{}\nu}\left[\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right]\right)$
$\displaystyle+\left(\partial_{\mu}h\right)\,\left(\delta
h^{a}_{~{}\nu}\left[\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right]+h^{a}_{~{}\nu}\left[\partial^{\nu}\,\left(\delta
h^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\delta
h^{\nu}_{~{}a}\right)\right]\right)$
$\displaystyle+h\,\partial_{\mu}\left(\delta
h^{a}_{~{}\nu}\left[\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right]+h^{a}_{~{}\nu}\left[\partial^{\nu}\,\left(\delta
h^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\delta
h^{\nu}_{~{}a}\right)\right]\right)\Bigg{]}+O\left(|\delta h|^{2}\right)$
8. 8.
Superpotential $S_{ab}^{\;\;\mu}$:
$\displaystyle S_{ab}^{\;\;\;\mu}+\delta S_{ab}^{\;\;\;\mu}=$
$\displaystyle\,\frac{1}{2}\left(T_{ab}^{\;\;\mu}+\delta
T_{ab}^{\;\;\mu}+T^{\mu}_{\;\;ba}+\delta
T^{\mu}_{\;\;ba}-T^{\mu}_{\;\;ab}-\delta T^{\mu}_{\;\;ab}\right)$
$\displaystyle\qquad-\left(h_{~{}b}^{\mu}+\delta
h_{~{}b}^{\mu}\right)\left(T_{\rho a}^{\;\;\rho}+\delta T_{\rho
a}^{\;\;\rho}\right)+\left(h_{~{}a}^{\mu}+\delta
h_{~{}a}^{\mu}\right)\left(T_{\rho b}^{\;\;\rho}+\delta T_{\rho
b}^{\;\;\rho}\right)$ $\displaystyle\approx$
$\displaystyle\,S_{ab}^{\;\;\;\mu}+\Bigg{[}\frac{1}{2}\left(\delta
T_{ab}^{\;\;\mu}+\delta T^{\mu}_{\;\;ba}-\delta T^{\mu}_{\;\;ab}\right)-\delta
h_{~{}b}^{\mu}T_{\rho a}^{\;\;\rho}-h_{~{}b}^{\mu}\delta T_{\rho
a}^{\;\;\rho}+\delta h_{~{}a}^{\mu}T_{\rho b}^{\;\;\rho}$
$\displaystyle\quad+h_{~{}a}^{\mu}\delta T_{\rho
b}^{\;\;\rho}\Bigg{]}+O\left(|\delta h|^{2}\right)$
$\displaystyle\Rightarrow\delta S_{ab}^{\;\;\;\mu}\approx$
$\displaystyle\left[\frac{1}{2}\left(\delta T_{ab}^{\;\;\mu}+2\,\delta
T^{\mu}_{\;\;ba}\right)-\delta h_{~{}b}^{\mu}T_{\rho
a}^{\;\;\rho}-h_{~{}b}^{\mu}\delta T_{\rho a}^{\;\;\rho}+\delta
h_{~{}a}^{\mu}T_{\rho b}^{\;\;\rho}+h_{~{}a}^{\mu}\delta T_{\rho
b}^{\;\;\rho}\right]+O\left(|\delta h|^{2}\right)$
In terms of $\delta h^{a}_{~{}\mu}$, this expression becomes:
$\displaystyle\delta S_{ab}^{\;\;\;\mu}\approx$
$\displaystyle\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\delta
h_{b}^{~{}\mu}\right)-\partial_{b}\left(\delta
h_{a}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\delta
h_{~{}a}^{\mu}\right)-\partial_{a}\left(\delta
h_{~{}b}^{\mu}\right)\right)-\delta
h_{~{}b}^{\mu}\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]\right)$
$\displaystyle-h_{~{}b}^{\mu}\left(\delta
h_{~{}\rho}^{c}\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]+h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\delta
h_{a}^{~{}\rho}\right)-\partial_{a}\,\left(\delta
h_{c}^{~{}\rho}\right)\right]\right)+\delta
h_{~{}a}^{\mu}\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]\right)$
$\displaystyle+h_{~{}a}^{\mu}\left(\delta
h_{~{}\rho}^{c}\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]+h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\delta
h_{b}^{~{}\rho}\right)-\partial_{b}\,\left(\delta
h_{c}^{~{}\rho}\right)\right]\right)\Bigg{]}+O\left(|\delta h|^{2}\right)$
(93)
9. 9.
Einstein tensor $\overset{\ \circ}{G}_{\mu\nu}$:
$\displaystyle\overset{\ \circ}{G}_{ab}+\delta\overset{\ \circ}{G}_{ab}$
$\displaystyle=$ $\displaystyle\left(\overset{\
\circ}{G}_{\mu\nu}+\delta\overset{\
\circ}{G}_{\mu\nu}\right)\left(h_{a}^{\;\;\mu}+\delta
h_{a}^{\;\;\mu}\right)\left(h_{b}^{\;\;\nu}+\delta h_{b}^{\;\;\nu}\right)$
$\displaystyle\approx$ $\displaystyle\overset{\
\circ}{G}_{ab}+\left[\overset{\ \circ}{G}_{\mu\nu}\left(\delta
h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}+h_{a}^{\;\;\mu}\delta
h_{b}^{\;\;\nu}\right)+\delta\overset{\
\circ}{G}_{\mu\nu}\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\right]+O\left(|\delta
h|^{2}\right)$ $\displaystyle\Rightarrow\delta\overset{\ \circ}{G}_{ab}$
$\displaystyle\approx$ $\displaystyle\left[\overset{\
\circ}{G}_{\mu\nu}\left(\delta
h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}+h_{a}^{\;\;\mu}\delta
h_{b}^{\;\;\nu}\right)+\delta\overset{\
\circ}{G}_{\mu\nu}\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\right]+O\left(|\delta
h|^{2}\right).$ (94)
If $\overset{\ \circ}{G}_{\mu\nu}=\overset{\
\circ}{R}_{\mu\nu}-\frac{1}{2}\,g^{\sigma\rho}\,g_{\mu\nu}\,\overset{\
\circ}{R}_{\sigma\rho}=\overset{\
\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ab}}{2}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{a}_{\;\;\mu}\,h^{b}_{\;\;\nu}\right]\,\overset{\
\circ}{R}_{\sigma\rho}$, then perturbing to first order gives:
$\displaystyle\delta\overset{\ \circ}{G}_{\mu\nu}\approx$
$\displaystyle\,\delta\overset{\
\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ab}}{2}\,\Bigg{[}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{a}_{\;\;\mu}\,h^{b}_{\;\;\nu}\right]\,\delta\overset{\
\circ}{R}_{\sigma\rho}+\Bigg{[}\delta
h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{a}_{\;\;\mu}\,h^{b}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,\delta
h_{d}^{\;\;\rho}\,h^{a}_{\;\;\mu}\,h^{b}_{\;\;\nu}$
$\displaystyle+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,\delta
h^{a}_{\;\;\mu}\,h^{b}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{a}_{\;\;\mu}\,\delta
h^{b}_{\;\;\nu}\Bigg{]}\,\overset{\
\circ}{R}_{\sigma\rho}\Bigg{]}+O\left(|\delta h|^{2}\right)$ (95)
By substituting Equation (95) into Equation (94), we obtain:
$\displaystyle\delta\overset{\ \circ}{G}_{ab}\approx$
$\displaystyle\Bigg{[}\overset{\
\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,\overset{\
\circ}{R}_{\sigma\rho}\Bigg{]}\left(\delta
h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}+h_{a}^{\;\;\mu}\delta h_{b}^{\;\;\nu}\right)$
$\displaystyle+\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\Bigg{[}\delta\overset{\
\circ}{R}_{\mu\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\Bigg{[}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,\delta\overset{\
\circ}{R}_{\sigma\rho}$ $\displaystyle+\left[\delta
h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,\delta
h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,\delta
h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,\delta
h^{f}_{\;\;\nu}\right]\,\overset{\ \circ}{R}_{\sigma\rho}\Bigg{]}\Bigg{]}$
$\displaystyle+O\left(|\delta h|^{2}\right)$ (96)
Now, if we have that $\overset{\
\circ}{R}_{\mu\nu}=h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\nu}$, then Equation (96) becomes
$\displaystyle\delta\overset{\ \circ}{G}_{ab}\approx$
$\displaystyle\Bigg{[}h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}\Bigg{]}\left(\delta
h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}+h_{a}^{\;\;\mu}\delta h_{b}^{\;\;\nu}\right)$
$\displaystyle+\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\Bigg{[}\left[\left(\delta
h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}+h_{k}^{~{}\alpha}\,\delta
h^{m}_{~{}\mu}\right)\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\nu}+h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\delta\overset{\
\circ}{R}^{k}_{~{}m\alpha\nu}\right]$
$\displaystyle-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\Bigg{[}\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,\left[\left(\delta
h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}+h_{k}^{~{}\alpha}\,\delta
h^{m}_{~{}\sigma}\right)\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}+h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\delta\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}\right]$ $\displaystyle+\left[\delta
h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,\delta
h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,\delta
h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}+h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,\delta
h^{f}_{\;\;\nu}\right]$
$\displaystyle\quad\times\,h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}\Bigg{]}\Bigg{]}+O\left(|\delta h|^{2}\right)$
(97)
For pure Minkowski spacetime, we have that $\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}=0$ identically, and Equation (97) reduces to:
$\displaystyle\delta\overset{\ \circ}{G}_{ab}\approx$
$\displaystyle\left(h_{a}^{\;\;\mu}h_{b}^{\;\;\nu}\right)\Bigg{[}h_{k}^{~{}\alpha}\,h^{m}_{~{}\mu}\,\delta\overset{\
\circ}{R}^{k}_{~{}m\alpha\nu}-\frac{\eta^{cd}\,\eta_{ef}}{2}\,\left[h_{c}^{\;\;\sigma}\,h_{d}^{\;\;\rho}\,h^{e}_{\;\;\mu}\,h^{f}_{\;\;\nu}\right]\,h_{k}^{~{}\alpha}\,h^{m}_{~{}\sigma}\,\delta\overset{\
\circ}{R}^{k}_{~{}m\alpha\rho}\Bigg{]}+O\left(|\delta h|^{2}\right).$
This last expression is used in Equations (V.1) and (V.2) and in the
energy-momentum stability test.
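The two first-order matrix expansions used in items 1 and 2 above are easy to
verify symbolically. The following SymPy sketch is our own illustrative check
(all variable names are ours, and it is not part of the original derivation):
it confirms that $\text{Det}(\mathbb{1}+\lambda)=1+\text{Tr}\,\lambda+O(\lambda^{2})$
and that $(\mathbb{1}+\lambda)^{-1}=\mathbb{1}-\lambda+O(\lambda^{2})$ for a
generic small $4\times 4$ matrix $\lambda$.

```python
# Illustrative first-order checks for Appendix A (assumption: generic lambda,
# with epsilon a bookkeeping small parameter; names are ours).
import sympy as sp

eps = sp.symbols('epsilon')
lam = eps * sp.Matrix(4, 4, lambda a, b: sp.Symbol(f'l_{a}{b}'))

# (i) Det(1 + lambda) = 1 + Tr(lambda) + O(lambda^2), used in Eq. (83)
det_series = sp.series((sp.eye(4) + lam).det(), eps, 0, 2).removeO()
assert sp.expand(det_series - (1 + lam.trace())) == 0

# (ii) (1 + lambda)^(-1) = 1 - lambda + O(lambda^2), used in Eq. (82):
# the residual of (1 + lambda)(1 - lambda) - 1 is purely second order
residual = ((sp.eye(4) + lam) * (sp.eye(4) - lam) - sp.eye(4)).expand()
assert residual.subs(eps**2, 0) == sp.zeros(4, 4)
```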
## Appendix B General Perturbed Torsion-Based Field Equation via
Linearization
Here, we can also obtain the perturbed field equations (Equations (33) and
(34)) by linearizing the perturbed Lagrangian density of Equation (89), with a
matter contribution, as follows:
$\displaystyle\delta\mathcal{L}$ $\displaystyle\approx$
$\displaystyle\frac{1}{2\kappa}\left[\delta
h\,f\left(T\right)+h\,f_{T}\left(T\right)\,\delta
T\right]+\delta\mathcal{L}_{Matter}+O\left(|\delta h|^{2}\right)$ (99)
As for the non-perturbed FEs, we have here that $\delta\Theta_{(ab)}=\delta
T_{ab}\equiv\frac{1}{2}\frac{\delta\left(\delta\mathcal{L}_{Matter}\right)}{\delta
g_{ab}}$.
For the term $\frac{1}{2\kappa}\,\delta h\,f\left(T\right)$, we obtain by
analogy with Equation (II.6) the following part (here, $\delta g_{ab}=0$ for
the orthonormal framework):
$\displaystyle\frac{\delta h\,f\left(T\right)}{2\kappa}\,$
$\displaystyle\rightarrow$ $\displaystyle f_{TT}\left[\delta
S_{\left(ab\right)}^{~{}~{}~{}\mu}\,\partial_{\mu}T+S_{\left(ab\right)}^{~{}~{}~{}\mu}\,\partial_{\mu}\left(\delta
T\right)\right]+f_{T}\,\delta\overset{\
\circ}{G}_{ab}-\frac{g_{ab}}{2}\,f_{T}\,\delta T\quad\quad\text{Symmetric}$
$\displaystyle\rightarrow$ $\displaystyle
f_{TT}\,\left[S_{\left[ab\right]}^{~{}~{}~{}\mu}\partial_{\mu}\left(\delta
T\right)+\delta
S_{\left[ab\right]}^{~{}~{}~{}\mu}\partial_{\mu}T\right]\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{Antisymmetric}$
In the expression above, we only perturb the physical quantities generated by
$\delta h$, namely $\delta T$, $\delta\overset{\ \circ}{G}_{ab}$, and $\delta
S_{ab}^{\;\;\mu}$; we do not perturb $f(T)$ and its derivatives.
For the term $\frac{1}{2\kappa}\,h\,f_{T}\left(T\right)\,\delta T$, we still
obtain by analogy with Equation (II.6) the part (here again, $\delta
g_{ab}=0$):
$\displaystyle\frac{h\,f_{T}\left(T\right)\,\delta T}{2\kappa}\,$
$\displaystyle\rightarrow$
$\displaystyle\left[f_{TTT}\,S_{\left(ab\right)}^{~{}~{}~{}\mu}\partial_{\mu}T+f_{TT}\,\overset{\
\circ}{G}_{ab}+\frac{g_{ab}}{2}\left(f_{T}-T\,f_{TT}\right)\right]\,\delta
T\quad\quad\text{Symmetric}$ $\displaystyle\rightarrow$ $\displaystyle
f_{TTT}\,\left[S_{\left[ab\right]}^{~{}~{}~{}\mu}\partial_{\mu}T\right]\,\delta
T\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\text{Antisymmetric}$
In the expression above, we only replace $f(T)\rightarrow f_{T}(T)\,\delta T$,
$f_{T}(T)\rightarrow f_{TT}(T)\,\delta T$, and $f_{TT}(T)\rightarrow
f_{TTT}(T)\,\delta T$; we do not perturb the physical quantities themselves.
By adding the two contributions above, we recover Equations (33) and (34)
exactly at first order. This shows that the linearization of gravity and the
direct perturbation of the field equation described by Equation (II.6) are
equivalent: both methods yield the field equations described by Equations (33)
and (34), as expected.
## Appendix C The Derivation of Minkowski Spacetime Symmetries: Conditions
for Stability
To keep the main text short, this appendix collects the longer calculations
needed for the results of Sections V.1 and V.2.
### C.1 Rotation/Boost Perturbation
1. 1.
Torsion scalar perturbation $\delta T$: by using Equation (88) and by
substituting Equation (39) inside, we obtain the expression:
$\displaystyle\delta T=$
$\displaystyle\frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\mu}\right)\right)\left(\partial^{\mu}\,h_{a}^{\;\;\nu}-\partial^{\nu}\,h_{a}^{\;\;\mu}\right)+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)$
$\displaystyle\times\left(\partial^{\mu}\,\left(\lambda_{a}^{\;\;b}\,h_{b}^{\;\;\nu}\right)-\partial^{\nu}\,\left(\lambda_{a}^{\;\;b}\,h_{b}^{\;\;\mu}\right)\right)\Bigg{]}+\frac{1}{2}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{~{}b}\,h^{b}_{~{}\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{~{}b}\,h^{b}_{~{}\mu}\right)\right)\left(\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right)$
$\displaystyle+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)\left(\partial^{\nu}\,\left(\lambda^{b}_{~{}a}\,h^{\mu}_{~{}b}\right)-\partial^{\mu}\,\left(\lambda^{b}_{~{}a}\,h^{\nu}_{~{}b}\right)\right)\Bigg{]}$
$\displaystyle-\Bigg{[}\left(\lambda_{a}^{\;\;b}\,h^{~{}\nu}_{b}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]+h^{~{}\nu}_{a}\,\left[\partial_{\mu}\left(\lambda^{a}_{~{}b}\,h^{b}_{\;\;\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{~{}b}\,h^{b}_{\;\;\mu}\right)\right]\right)\left(h^{a}_{~{}\rho}\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)\right)$
$\displaystyle+\left(h^{~{}\nu}_{a}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]\right)\left(\lambda^{a}_{~{}b}h^{b}_{~{}\rho}\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)+h^{a}_{~{}\rho}\left(\partial^{\rho}\,\left(\lambda^{b}_{~{}a}\,h^{\mu}_{~{}b}\right)-\partial^{\mu}\,\left(\lambda^{b}_{~{}a}\,h^{\rho}_{~{}b}\right)\right)\right)\Bigg{]}$
$\displaystyle+O\left(|\delta h|^{2}\right)$ $\displaystyle\rightarrow 0$
(102)
We need to impose the zero torsion condition
$T^{a}_{~{}\mu\nu}=\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\rightarrow
0$ to obtain the final limit in Equation (102); a symbolic check of this
cancellation around a trivial coframe is sketched at the end of this
subsection.
2. 2.
Superpotential perturbation $\delta S_{ab}^{~{}~{}~{}\mu}$: by using Equation
(93) and by substituting Equation (39) inside, we obtain the expression:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}$ $\displaystyle=$
$\displaystyle\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{~{}c}\,h_{c}^{~{}\mu}\right)-\partial_{b}\left(\lambda_{a}^{~{}c}\,h_{c}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\lambda_{~{}a}^{c}\,h_{~{}c}^{\mu}\right)-\partial_{a}\left(\lambda_{~{}b}^{c}\,h_{~{}c}^{\mu}\right)\right)-\lambda_{~{}b}^{e}\,h_{~{}e}^{\mu}\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]\right)$
$\displaystyle-
h_{~{}b}^{\mu}\left(\lambda^{c}_{~{}e}\,h_{~{}\rho}^{e}\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]+h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{a}^{~{}f}h_{f}^{~{}\rho}\right)-\partial_{a}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}\right)\right]\right)$
$\displaystyle+\lambda_{~{}a}^{e}h_{~{}e}^{\mu}\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]\right)+h_{~{}a}^{\mu}\lambda^{c}_{~{}e}h_{~{}\rho}^{e}\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]$
$\displaystyle+h_{~{}a}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{b}^{~{}f}h_{f}^{~{}\rho}\right)-\partial_{b}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}\right)\right]\Bigg{]}+O\left(|\delta
h|^{2}\right)$
$\displaystyle\rightarrow\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{~{}c}\,h_{c}^{~{}\mu}\right)-\partial_{b}\left(\lambda_{a}^{~{}c}\,h_{c}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\lambda_{~{}a}^{c}\,h_{~{}c}^{\mu}\right)-\partial_{a}\left(\lambda_{~{}b}^{c}\,h_{~{}c}^{\mu}\right)\right)$
$\displaystyle\quad\quad-
h_{~{}b}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{a}^{~{}f}h_{f}^{~{}\rho}\right)-\partial_{a}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}\right)\right]+h_{~{}a}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{b}^{~{}f}h_{f}^{~{}\rho}\right)-\partial_{b}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}\right)\right]\Bigg{]}$
$\displaystyle\quad\quad+O\left(|\delta h|^{2}\right)\quad\quad\text{by
applying the zero torsion criterion.}$
$\displaystyle\rightarrow 0.$ (103)
We need to impose $\delta
T^{a}_{~{}\mu\nu}=\partial_{\mu}\,\left(\lambda^{a}_{~{}c}\,h^{c}_{\;\;\nu}\right)-\partial_{\nu}\,\left(\lambda^{a}_{~{}c}\,h^{c}_{\;\;\mu}\right)\rightarrow
0$ to obtain the final limit in Equation (103).
3. 3.
Weitzenbock connection perturbation $\delta\Gamma^{\rho}_{\;\;\nu\mu}$: from
the vanishing covariant derivative condition, we derive:
$\displaystyle 0=\nabla_{\mu}\,\delta h^{a}_{\;\;\nu}$ $\displaystyle=$
$\displaystyle\nabla_{\mu}\,\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)=\partial_{\mu}\,\delta
h^{a}_{\;\;\nu}-\Gamma^{\rho}_{\;\;\nu\mu}\,\delta
h^{a}_{\;\;\rho}-\delta\Gamma^{\rho}_{\;\;\nu\mu}h^{a}_{\;\;\rho}$
$\displaystyle=$
$\displaystyle\partial_{\mu}\,\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)-\left(h_{c}^{\;\;\rho}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\,\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\rho}\right)-\delta\Gamma^{\rho}_{\;\;\nu\mu}h^{a}_{\;\;\rho}$
$\displaystyle\Rightarrow\delta\Gamma^{\rho}_{\;\;\nu\mu}=h_{a}^{\;\;\rho}\left[\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}\right)-\left(h_{c}^{\;\;\sigma}\,\partial_{\mu}\,h^{c}_{\;\;\nu}\right)\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\sigma}\right)\right],$
where
$\Gamma^{\rho}_{\;\;\nu\mu}=h_{c}^{\;\;\rho}\,\partial_{\mu}\,h^{c}_{\;\;\nu}$
is the Weitzenbock connection for a proper frame.
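As a concrete illustration of the zero torsion criterion invoked above, the
following SymPy sketch (our own check; the trivial coframe, the symbol names
and the $\eta$-contractions are our simplifying assumptions) perturbs the
trivial Minkowski coframe $h^{a}_{~{}\mu}=\delta^{a}_{~{}\mu}$ by an arbitrary
$x$-dependent rotation/boost generator and verifies that the torsion scalar
acquires no first-order piece, i.e. $\delta T\rightarrow 0$ as in Equation
(102).

```python
# Our illustrative check of delta T -> 0 around a torsion-free (trivial
# Minkowski) background; contractions use eta = diag(-1,1,1,1), which is
# sufficient at the order tested.
import itertools
import sympy as sp

coords = sp.symbols('t x y z')
eps = sp.symbols('epsilon')
s = [-1, 1, 1, 1]                        # diagonal of eta (equal to its inverse)

# antisymmetric lambda_{ab}(x): six arbitrary functions (3 boosts + 3 rotations)
lam_dn = sp.zeros(4, 4)
for a, b in itertools.combinations(range(4), 2):
    f = sp.Function(f'lam{a}{b}')(*coords)
    lam_dn[a, b], lam_dn[b, a] = f, -f
lam_ud = sp.Matrix(4, 4, lambda a, b: s[a] * lam_dn[a, b])   # lambda^a_b

h = sp.eye(4) + eps * lam_ud             # perturbed trivial coframe h^a_mu

# T^a_{mu nu} = d_mu h^a_nu - d_nu h^a_mu, exactly first order here
T = [[[sp.diff(h[a, n], coords[m]) - sp.diff(h[a, m], coords[n])
       for n in range(4)] for m in range(4)] for a in range(4)]

# torsion scalar: (1/4) T^a T_a + (1/2) T^a T^{nu mu}_a - T^nu_{mu nu} T^{rho mu}_rho
Tsc = sp.S.Zero
for a, m, n in itertools.product(range(4), repeat=3):
    Tsc += sp.Rational(1, 4) * s[a] * s[m] * s[n] * T[a][m][n] * T[a][m][n]
    Tsc += sp.Rational(1, 2) * s[a] * s[m] * s[n] * T[a][m][n] * T[a][n][m]
V = [sum(T[n][m][n] for n in range(4)) for m in range(4)]    # T^nu_{mu nu}
Tsc -= sum(s[m] * V[m] ** 2 for m in range(4))

# no first-order piece survives, as in the limit taken in Eq. (102)
assert sp.expand(Tsc).coeff(eps, 1) == 0
```

The assertion passes identically because every torsion component is already
first order in the perturbation, so the torsion scalar, being quadratic in the
torsion, starts at second order.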
### C.2 General Linear Perturbation
1. 1.
The torsion scalar perturbation $\delta T$:
$\displaystyle\delta T=$
$\displaystyle\frac{1}{4}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\nu}+\epsilon^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{\;\;b}\,h^{b}_{\;\;\mu}+\epsilon^{a}_{\;\;\mu}\right)\right)\left(\partial^{\mu}\,h_{a}^{\;\;\nu}-\partial^{\nu}\,h_{a}^{\;\;\mu}\right)+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)$
$\displaystyle\times\left(\partial^{\mu}\,\left(\lambda_{a}^{\;\;b}\,h_{b}^{\;\;\nu}+\epsilon_{a}^{\;\;\nu}\right)-\partial^{\nu}\,\left(\lambda_{a}^{\;\;b}\,h_{b}^{\;\;\mu}+\epsilon_{a}^{\;\;\mu}\right)\right)\Bigg{]}$
$\displaystyle+\frac{1}{2}\Bigg{[}\left(\partial_{\mu}\left(\lambda^{a}_{~{}b}\,h^{b}_{~{}\nu}+\epsilon^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{~{}b}\,h^{b}_{~{}\mu}+\epsilon^{a}_{\;\;\mu}\right)\right)\left(\partial^{\nu}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\nu}_{~{}a}\right)$
$\displaystyle+\left(\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right)\left(\partial^{\nu}\,\left(\lambda^{b}_{~{}a}\,h^{\mu}_{~{}b}+\epsilon^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\lambda^{b}_{~{}a}\,h^{\nu}_{~{}b}+\epsilon^{\nu}_{~{}a}\right)\right)\Bigg{]}$
$\displaystyle-\Bigg{[}\left(\left(\lambda_{a}^{\;\;b}\,h^{~{}\nu}_{b}+\epsilon^{~{}\nu}_{a}\right)\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]+h^{~{}\nu}_{a}\,\left[\partial_{\mu}\left(\lambda^{a}_{~{}b}\,h^{b}_{\;\;\nu}+\epsilon^{a}_{\;\;\nu}\right)-\partial_{\nu}\left(\lambda^{a}_{~{}b}\,h^{b}_{\;\;\mu}+\epsilon^{a}_{\;\;\mu}\right)\right]\right)$
$\displaystyle\times\left(h^{a}_{~{}\rho}\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)\right)+\left(h^{~{}\nu}_{a}\,\left[\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\right]\right)$
$\displaystyle\times\left(\left(\lambda^{a}_{~{}b}h^{b}_{~{}\rho}+\epsilon^{a}_{~{}\rho}\right)\left(\partial^{\rho}\,h^{\mu}_{~{}a}-\partial^{\mu}\,h^{\rho}_{~{}a}\right)+h^{a}_{~{}\rho}\left(\partial^{\rho}\,\left(\lambda^{b}_{~{}a}\,h^{\mu}_{~{}b}+\epsilon^{\mu}_{~{}a}\right)-\partial^{\mu}\,\left(\lambda^{b}_{~{}a}\,h^{\rho}_{~{}b}+\epsilon^{\rho}_{~{}a}\right)\right)\right)\Bigg{]}$
$\displaystyle+O\left(|\delta h|^{2}\right)$ $\displaystyle\rightarrow 0.$
(105)
We again need to impose
$T^{a}_{~{}\mu\nu}=\partial_{\mu}\,h^{a}_{\;\;\nu}-\partial_{\nu}\,h^{a}_{\;\;\mu}\rightarrow
0$, as for Equation (102), to obtain the final limit in Equation (105).
2. 2.
The superpotential perturbation $\delta S_{ab}^{~{}~{}~{}\mu}$ is expressed
as:
$\displaystyle\delta S_{ab}^{~{}~{}~{}\mu}=$
$\displaystyle\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{~{}c}\,h_{c}^{~{}\mu}+\epsilon_{b}^{~{}\mu}\right)-\partial_{b}\left(\lambda_{a}^{~{}c}\,h_{c}^{~{}\mu}+\epsilon_{a}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\lambda_{~{}a}^{c}\,h_{~{}c}^{\mu}+\epsilon_{~{}a}^{\mu}\right)-\partial_{a}\left(\lambda_{~{}b}^{c}\,h_{~{}c}^{\mu}+\epsilon_{~{}b}^{\mu}\right)\right)$
$\displaystyle-\left(\lambda_{~{}b}^{e}\,h_{~{}e}^{\mu}+\epsilon_{~{}b}^{\mu}\right)\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]\right)$
$\displaystyle-
h_{~{}b}^{\mu}\left(\left(\lambda^{c}_{~{}e}\,h_{~{}\rho}^{e}+\epsilon_{~{}\rho}^{c}\right)\left[\partial_{c}\,h_{a}^{~{}\rho}-\partial_{a}\,h_{c}^{~{}\rho}\right]+h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{a}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{a}^{~{}\rho}\right)-\partial_{a}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{c}^{~{}\rho}\right)\right]\right)$
$\displaystyle+\left(\lambda_{~{}a}^{e}h_{~{}e}^{\mu}+\epsilon_{~{}a}^{\mu}\right)\left(h_{~{}\rho}^{c}\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]\right)+h_{~{}a}^{\mu}\left(\lambda^{c}_{~{}e}h_{~{}\rho}^{e}+\epsilon_{~{}\rho}^{c}\right)\left[\partial_{c}\,h_{b}^{~{}\rho}-\partial_{b}\,h_{c}^{~{}\rho}\right]$
$\displaystyle+h_{~{}a}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{b}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{b}^{~{}\rho}\right)-\partial_{b}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{c}^{~{}\rho}\right)\right]\Bigg{]}+O\left(|\delta
h|^{2}\right)$
$\displaystyle\rightarrow\Bigg{[}\frac{1}{2}\left(\partial_{a}\left(\lambda_{b}^{~{}c}\,h_{c}^{~{}\mu}+\epsilon_{b}^{~{}\mu}\right)-\partial_{b}\left(\lambda_{a}^{~{}c}\,h_{c}^{~{}\mu}+\epsilon_{a}^{~{}\mu}\right)\right)+\left(\partial_{b}\left(\lambda_{~{}a}^{c}\,h_{~{}c}^{\mu}+\epsilon_{~{}a}^{\mu}\right)-\partial_{a}\left(\lambda_{~{}b}^{c}\,h_{~{}c}^{\mu}+\epsilon_{~{}b}^{\mu}\right)\right)$
$\displaystyle\quad\quad-
h_{~{}b}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{a}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{a}^{~{}\rho}\right)-\partial_{a}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{c}^{~{}\rho}\right)\right]$
$\displaystyle\quad\quad+h_{~{}a}^{\mu}\,h_{~{}\rho}^{c}\left[\partial_{c}\,\left(\lambda_{b}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{b}^{~{}\rho}\right)-\partial_{b}\,\left(\lambda_{c}^{~{}f}h_{f}^{~{}\rho}+\epsilon_{c}^{~{}\rho}\right)\right]\Bigg{]}+O\left(|\delta
h|^{2}\right)$ $\displaystyle\rightarrow 0,$ (106)
where we apply Equation (42) and impose the condition
$\partial_{a}\epsilon_{b}^{\;\;\mu}=\partial_{b}\epsilon_{a}^{\;\;\mu}=0$.
## References
* Peskin and Schroeder [1995] Peskin, M.; Schroeder, D. An Introduction to Quantum Field Theory; Perseus Books: New York, NY, USA, 1995.
* Srednicki [2007] Srednicki, M. Quantum Field Theory; Cambridge University Press: Cambridge, UK, 2007.
* Schiff [1949] Schiff, J. Quantum Mechanics, 1st ed.; McGraw-Hill Book Company, Inc.: New York, NY, USA, 1949.
* Griffiths [1995] Griffiths, D.J. Introduction to Quantum Mechanics; Prentice Hall: Pearson, NJ, USA, 1995.
* Weinberg [1972] Weinberg, S. Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity; John Wiley and Sons: New York, NY, USA, 1972.
* Misner et al. [1973] Misner, C.; Thorne, K.; Wheeler, J. Gravitation; Princeton University Press: Princeton, NJ, USA, 1973.
* Griffiths and Podolsky [2009] Griffiths, J.; Podolsky, J. Exact Space-Times in Einstein's General Relativity; Cambridge University Press: New York, NY, USA, 2009.
* Will [2018] Will, C. Theory and Experiment in Gravitational Physics, 2nd ed.; Cambridge University Press: Cambridge, UK, 2018.
* Landry and Hammad [2021] Landry, A.; Hammad, F. Landau levels in a gravitational field: The Schwarzschild spacetime case. Universe 2021, 7, 144.
* Hammad and Landry [2020] Hammad, F.; Landry, A. Landau levels in a gravitational field: The Levi-Civita and Kerr spacetimes case. Eur. Phys. J. Plus 2020, 135, 90.
* Hammad et al. [2021] Hammad, F.; Landry, A.; Mathieu, K. Prospects on the possibility of testing the inverse-square law and gravitomagnetism using quantum interference. Int. J. Mod. Phys. D 2021, 30, 2150004.
* Hammad et al. [2020] Hammad, F.; Landry, A.; Mathieu, K. A fresh look at the influence of gravity on the quantum Hall effect. Eur. Phys. J. Plus 2020, 135, 449.
* Ellis and van Elst [1998] Ellis, G.; van Elst, H. Cosmological Models. Cargèse Lectures 1998, arXiv:gr-qc/9812046.
* de Andrade et al. [2000] de Andrade, V.C.; Guillen, L.C.T.; Pereira, J.G. Teleparallel Gravity: An Overview. _arXiv_ 2000, arXiv:gr-qc/0011087.
* Aldrovandi and Pereira [2013] Aldrovandi, R.; Pereira, J. An Introduction to Teleparallel Gravity; Springer: Berlin, Germany, 2013.
* Bahamonde et al. [2021] Bahamonde, S.; et al. Teleparallel Gravity: From Theory to Cosmology. _arXiv_ 2021, arXiv:gr-qc/2106.13793.
* McNutt et al. [2021] McNutt, D.; Coley, A.; van den Hoogen, R. Teleparallel geometries not characterized by their scalar polynomial Torsion invariants. J. Math. Phys. 2021, 62, 052501.
* Coley et al. [2020] Coley, A.; van den Hoogen, R.; McNutt, D. Symmetry and equivalence in Teleparallel gravity. J. Math. Phys. 2020, 61, 072503.
* Krssak et al. [2019] Krssak, M.; van den Hoogen, R.J.; Pereira, J.G.; Böhmer, C.G.; Coley, A.A. Teleparallel theories of gravity: Illuminating a fully invariant approach. Class. Quant. Grav. 2019, 36, 183001. https://doi.org/10.1088/1361-6382/ab2e1f.
* Coley et al. [2022a] Coley, A.; van den Hoogen, R.; McNutt, D. Symmetric Teleparallel geometries. Class. Quant. Grav. 2022, 39, 22LT01.
* Coley et al. [2022b] Coley, A.; Gholami, F.; van den Hoogen, R.; Landry, A.; McNutt, D. TdS geometries. 2022, in preparation.
* Golovnev and Guzman [2020] Golovnev, A.; Guzman, M. Non-trivial Minkowski backgrounds in $f\left(T\right)$ gravity. _arXiv_ 2020, arXiv:gr-qc/2012.00696.
* Jimenez et al. [2021] Jimenez, J.; Golovnev, A.; Koivisto, T.; Veermäe, H. Minkowski space in $f(T)$ gravity. Phys. Rev. D 2021, 103, 024054.
* Golovnev and Koivisto [2018] Golovnev, A.; Koivisto, T. Cosmological perturbations in modified teleparallel gravity models. J. Cosmol. Astropart. Phys. 2018, 11, 012.
* Bahamonde et al. [2023] Bahamonde, S.; Dialektopoulos, K.; Hohmann, M.; Said, J.; Pfeifer, C.; Saridakis, E. Perturbations in Non-Flat Cosmology for $F(T)$ gravity. Eur. Phys. J. C 2023, 83, 193.
* Bamba et al. [2013] Bamba, K.; et al. No further gravitational wave modes in $F(T)$ gravity. _arXiv_ 2013, arXiv:gr-qc/1309.2698.
* Cai et al. [2018] Cai, Y.F.; Li, C.; Saridakis, E.; Xue, L.Q. $f(T)$ gravity after GW170817 and GRB170817A. Phys. Rev. D 2018, 97, 103513.
* de Andrade et al. [2001] de Andrade, V.; Guillen, L.; Pereira, J. Teleparallel Spin Connection. Phys. Rev. D 2001, 64, 027502.
* Hohmann [2022] Hohmann, M. Teleparallel gravity. _arXiv_ 2022, arXiv:gr-qc/2207.06438.
* Hashim et al. [2021] Hashim, M.; El Hanafy, W.; Golovnev, A.; El-Zant, A. Toward a Concordance Teleparallel Cosmology I: Background Dynamics. J. Cosmol. Astropart. Phys. (JCAP) 2021, 07, 052.
* Mandal and Sahoo [2020] Mandal, S.; Sahoo, P. A Complete Cosmological Scenario in Teleparallel Gravity. Eur. Phys. J. Plus 2020, 135, 706.
* Trautman [2006] Trautman, A. Einstein-Cartan Theory. Encycl. Math. Phys. 2006, 2, 189–195.
* Hayashi and Shirafuji [1979] Hayashi, K.; Shirafuji, T. New General Relativity. Phys. Rev. D 1979, 19, 3524–3553; Addendum: Phys. Rev. D 1982, 24, 3312–3314.
* D’Ambrosio et al. [2020] D’Ambrosio, F.; Garg, M.; Heisenberg, L. Non-linear extension of non-metricity scalar for MOND. Phys. Lett B. 2020, 811, 135970.
* Jarv et al. [2018] Jarv, L.; Runkla, M.; Saal, M.; Vilson, O. Nonmetricity formulation of General Relativity and its Scalar-Tensor extension. Phys. Rev. D 2018, 97, 124025.
* Golovnev et al. [2017] Golovnev, A.; Koivisto, T.; Sandstad, M. On the covariance of teleparallel gravity theories. Class. Quantum Gravity 2017, 34, 145013.
* Krssak and Saridakis [2016] Krssak, M.; Saridakis, E. The covariant formulation of $f(T)$ gravity. Class. Quantum Gravity 2016, 33, 115009.
* Beltran and Koivisto [2021] Beltran, J.J.; Koivisto, T. Accidental gauge symmetries of Minkowski spacetime in Teleparallel theories. Universe 2021, 7, 143.
* Christodoulou and Klainerman [1989-1990] Christodoulou, D.; Klainerman, S. The global nonlinear stability of the Minkowski space. In Séminaire Équations aux dérivées partielles (Polytechnique); 1989–1990; pp. 1–29.
* Shen [2022] Shen, D. Stability of Minkowski spacetime in exterior regions. _arXiv_ 2022, arXiv:gr-qc/2211.15230.
* LeFloch and Ma [2017] LeFloch, P.; Ma, Y. The global nonlinear stability of Minkowski space, Einstein equations, $f(R)$ modified gravity, and Klein-Gordon fields. _arXiv_ 2017, arXiv:gr-qc/1712.10045.
* Lindblad and Rodnianski [2010] Lindblad, H.; Rodnianski, I. The global stability of Minkowski space-time in harmonic gauge. Annals of Mathematics 2010, 171, 1401–1477.
* McNutt et al. [2023] McNutt, D.D.; Coley, A.A.; van den Hoogen, R.J. A frame based approach to computing symmetries with non-trivial isotropy groups. J. Math. Phys. 2023, 64, 032503. https://doi.org/10.1063/5.0134596.
* Coley et al. [2023] Coley, A.A.; van den Hoogen, R.J.; Landry, A.; McNutt, D.D. Spherically symmetric teleparallel geometries. 2023, in preparation.
* Li et al. [2011] Li, M.; Miao, R.X.; Miao, Y.G. Degrees of freedom of $f(T)$ gravity. J. High Energy Phys. 2011, 1107, 108.
* Ferraro and Guzmán [2018] Ferraro, R.; Guzman, M.J. Hamiltonian formalism for f(T) gravity. Phys. Rev. 2018, D97, 104028. https://doi.org/10.1103/PhysRevD.97.104028.
* Blagojevic and Nester [2020] Blagojevic, M.; Nester, J.M. Local symmetries and physical degrees of freedom in $f(T)$ gravity: A Dirac Hamiltonian constraint analysis. Phys. Rev. D 2020, 102, 064025.
* Golovnev and Guzmán [2021] Golovnev, A.; Guzmán, M.J. Foundational issues in f(T) gravity theory. Int. J. Geom. Meth. Mod. Phys. 2021, 18, 2140007. https://doi.org/10.1142/S0219887821400077.
# Exact solutions to the quantum many-body problem using the geminal density
matrix
Nicholas Cox ICFO–Institut de Ciencies Fotoniques, The Barcelona Institute of
Science and Technology, 08860 Castelldefels, Barcelona, Spain The College of
Optics and Photonics (CREOL), University of Central Florida, Orlando, Florida
32816, USA
###### Abstract
It is virtually impossible to directly solve the Schrödinger equation for a
many-electron wave function due to the exponential growth in degrees of
freedom with increasing particle number. The two-body reduced density matrix
(2-RDM) formalism reduces this coordinate dependence to that of four particles
irrespective of the wave function’s dimensionality, providing a promising path
to solve the many-body problem. Unfortunately, errors arise in this approach
because the 2-RDM cannot practically be constrained to guarantee that it
corresponds to a valid wave function. Here we approach this so-called
$N$-representability problem by expanding the 2-RDM in a complete basis of
two-electron wave functions and studying the matrix formed by the expansion
coefficients. This quantity, which we call the geminal density matrix (GDM),
is found to evolve in time by a unitary transformation that preserves
$N$-representability. This evolution law enables us to calculate eigenstates
of strongly correlated systems by a fictitious adiabatic evolution in which
the electron-electron interaction is slowly switched on. We show how this
technique is used to diagonalize atomic Hamiltonians, finding that the problem
reduces to the solution of $\sim N(N-1)/2$ two-electron eigenstates of the
Helium atom on a grid of electron-electron interaction scaling factors.
## I Introduction
In 1955, Löwdin [1, 2, 3] and Mayer [4] presented similar methods to express
the ground state energy of a many-electron quantum system as a functional of
the two-body reduced density matrix ($2$-RDM). Their work inspired belief in
the feasibility of solving complex many-body problems with an effective two-
particle analysis [5]. However, early calculations significantly
underestimated experimental ground state energies because the $2$-RDM was not
adequately constrained to ensure that it represents a valid many-body wave
function. Full determination of these constraints, known as the
$N$-representability conditions [6], would indeed yield a method to reduce the
many-body problem to an effective two-particle system. The development of new
constraints continues to improve the accuracy of 2-RDM calculations, but the
general problem remains unsolved.
As detailed in Ref. [7], modern $2$-RDM analysis primarily employs one of two
methods. The first begins by deriving a contracted Schrödinger equation (CSE)
that computes the energy as a function of the two and four-body reduced
density matrices ($4$-RDM) [8, 9, 10, 11]. Approximating the $4$-RDM as a
function of the $2$-RDM allows one to solve the CSE for a set of candidate
eigenstates from which the physically valid states are selected by imposing
$N$-representability conditions. The second technique, density matrix
variational theory (DMVT), aims to directly minimize the energy as a
functional of the $2$-RDM [12, 13, 14, 15, 16]. The most successful
applications of DMVT use a convex optimization scheme called semi-definite
programming [17, 15] in which the $N$-representability conditions are included
by a set of positivity conditions that restrict the search space for
solutions.
Here we take a conceptually simpler approach that begins by expanding the
$2$-RDM in a basis of two-electron geminal [18] eigenstates. The resulting
expansion coefficients are collected into a quantity we call the geminal
density matrix (GDM) that can be used to compute many-body observables from
effective two-body operators. The technique was first introduced by Bopp in an
attempt to calculate the ground state energy of selected ions [19]. Although
his method was exact, his results differed quite significantly from
experimental ground state energies. These errors were later attributed to the
non-$N$-representability of the assumed ground state matrix [6]. Until now,
very little work has been done to advance this matrix-based approach.
We will detail how the GDM formalism enables us to calculate the stationary
states of a general many-body Hamiltonian with two-body (electron-electron)
interactions. To do so, we need to place Bopp’s work on a solid theoretical
foundation to make sense of $N$-representability in the context of GDMs. Most
importantly, we find that the GDM must evolve unitarily in time by the
Liouville-Von Neumann equation in order to produce the same expectation values
as the time-dependent $N$-electron wave function. Since the wave function is
only useful insofar as it generates observables, any matrix that reproduces
these quantities is clearly a faithful representation of the quantum state.
Because the equation of motion preserves $N$-representability, we find it
useful to examine a Hamiltonian that slowly switches on the electron-electron
interaction by some time-dependent scaling. We show by the adiabatic theorem
that eigenstates of the non-interacting (initial) system evolve into those of
the interacting (final) system. In this way, we find it possible to construct
$N$-electron stationary states using $\sim N(N-1)/2$ eigenstates of an
effective two-electron Hamiltonian computed on a grid of interaction scaling
strengths.
As an example, we show that the effective Hamiltonian for an arbitrary atom or
ion reduces to a coordinate-scaled Helium Hamiltonian with some specific
electron-electron interaction strength. The result is that all atomic electron
energy eigenstates can be found strictly from the solution to this Helium atom
problem. While atoms provide the simplest use case, the formalism presented
here applies equally well to molecular and solid state systems.
The paper is organized as follows. Section II introduces the $2$-RDM then
expands it in a two-electron basis to define the GDM. Section III gives
examples of valid density matrices that serve as the starting point for
eigenstate calculations. Section IV derives the unitary time evolution law of
the GDM, which Section V applies to the cases in which the electron-electron
interaction is switched on suddenly or slowly. The latter leads to the
adiabatic theorem, which is exploited in section VI to solve the many-body
Schrödinger equation.
Appendix A derives the four necessary $N$-representability constraints listed
in Section II. Appendix B defines the matrix transformation imposed by a
change of geminal basis, and Appendix C provides an alternate derivation of
the GDM equation of motion found in Section IV.
Hartree atomic units with $\hbar=e=m_{0}=1/(4\pi\epsilon_{0})=1$ will be used
throughout.
## II The geminal density matrix
This section presents the GDM as the basic quantity needed to calculate the
observables of a many-body quantum system. It begins with an introductory
derivation of the $2$-RDM and proceeds to define the GDM by expanding the
$2$-RDM in a two-electron basis. The section ends by presenting a short list
of necessary $N$-representability conditions that are derived in Appendix A.
### II.1 The 2-RDM
The state of a given $N$-electron system is fully described by its wave
function $\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$, where we combined
spatial and spin degrees of freedom into the symbol
$\mathbf{x}_{i}=\mathbf{r}_{i}\sigma_{i}.$ (1)
The wave function is the position representation
$\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\braket{\mathbf{x}_{1},\dots,\mathbf{x}_{N}}{\alpha}$
of the abstract vector $\ket{\alpha}$, and it must be anti-symmetric under
coordinate exchange $\mathbf{x}_{i}\leftrightarrow\mathbf{x}_{j}$ for any pair
$i$ and $j$. We impose the normalization condition
$\int|\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})|^{2}\prod_{i=1}^{N}d\mathbf{x}_{i}=1,$
(2)
with the integral over $d\mathbf{x}$ including a sum over spin coordinates by
$\int f(\mathbf{x})d\mathbf{x}=\sum_{\sigma}\int
f(\mathbf{r}\mathbf{\sigma})d\mathbf{r}.$ (3)
Observables are represented by $N$-body linear operators with position
representation
$A(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\braket{\mathbf{x}_{1},\dots,\mathbf{x}_{N}}{\hat{A}}{\mathbf{x}_{1},\dots,\mathbf{x}_{N}}$.
An operator must be symmetric under particle exchange so that it maps anti-
symmetrized wave functions to anti-symmetrized wave functions. In general, $A$
contains one-body components acting on the coordinates of single particles and
two-body components acting on pairs. Denoting one-body contributions by lower
case letters and two-body by upper case, $A$ is expressed in the position
representation as
$A(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\sum_{k}a_{1}(\mathbf{x}_{k})+\sum_{i,j>i}A_{2}(\mathbf{x}_{i},\mathbf{x}_{j}).$
(4)
We will find it convenient to combine Eq. 4 into the single sum over pairs
$A(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\sum_{i,j>i}A(\mathbf{x}_{i},\mathbf{x}_{j}),$
(5)
with the summand $A(\mathbf{x}_{i},\mathbf{x}_{j})$ defined by
$A(\mathbf{x}_{i},\mathbf{x}_{j})=A_{1}(\mathbf{x}_{i},\mathbf{x}_{j})+A_{2}(\mathbf{x}_{i},\mathbf{x}_{j}).$
(6)
The term $A_{1}(\mathbf{x}_{i},\mathbf{x}_{j})$ is the promotion of a one-body
operator to act on pairs
$A_{1}(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{a_{1}(\mathbf{x}_{i})+a_{1}(\mathbf{x}_{j})}{N-1},$
(7)
where the denominator divides out the overcounting that occurs in the pair
sum.
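Explicitly, each coordinate $\mathbf{x}_{k}$ appears in exactly $N-1$ of the
pairs in Eq. 5, so
$\sum_{i,j>i}\frac{a_{1}(\mathbf{x}_{i})+a_{1}(\mathbf{x}_{j})}{N-1}=\frac{1}{N-1}\sum_{k}(N-1)\,a_{1}(\mathbf{x}_{k})=\sum_{k}a_{1}(\mathbf{x}_{k}),$
recovering the one-body part of Eq. 4.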
Using Eq. 5, we calculate the expectation value of $A$ by the inner product
$\displaystyle\braket{A}$
$\displaystyle=\sum_{i,j>i}\int\Psi^{*}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$
$\displaystyle\times
A(\mathbf{x}_{i},\mathbf{x}_{j})\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})\prod_{i=1}^{N}d\mathbf{x}_{i}.$
(8)
Each integral in the pair sum is found to be equivalent by first relabeling
the integration variables $\mathbf{x}_{i}\leftrightarrow\mathbf{x}_{1}$ and
$\mathbf{x}_{j}\leftrightarrow\mathbf{x}_{2}$, then permuting the same pairs
within the argument lists of $\Psi$ and $\Psi^{*}$. Each permutation changes
the sign of both $\Psi$ and $\Psi^{*}$, so the two sign changes cancel and the
integrand is unchanged.
Combining like variables by the shorthand
$\mathbf{X}=\mathbf{x}_{1},\mathbf{x}_{2}$ and
$\mathbf{Y}=\mathbf{x}_{3},\dots,\mathbf{x}_{N}$, we find that
$\braket{A}=\begin{pmatrix}N\\\
2\end{pmatrix}\int\Psi^{*}(\mathbf{X},\mathbf{Y})A(\mathbf{X})\Psi(\mathbf{X},\mathbf{Y})d\mathbf{X}d\mathbf{Y},$
(9)
where $d\mathbf{X}=d\mathbf{x}_{1}d\mathbf{x}_{2}$ and
$d\mathbf{Y}=d\mathbf{x}_{3}\cdots d\mathbf{x}_{N}$. The prefactor, equal to
$N(N-1)/2$, is the number of equivalent integrals in the pair sum. The
operator $A(\mathbf{X})$ is exactly the contribution of the single pair in Eq.
6,
$\displaystyle
A(\mathbf{X})=\frac{a_{1}(\mathbf{x}_{1})+a_{1}(\mathbf{x}_{2})}{N-1}+A_{2}(\mathbf{X}).$
(10)
Noting that $A(\mathbf{X})$ does not depend on $\mathbf{Y}$, we would like to
remove it from the integral over $d\mathbf{Y}$. We cannot factor out
$A(\mathbf{X})$ directly because it may contain derivatives, which must act on
$\Psi$ but not on $\Psi^{*}$. We circumvent this problem by introducing a set
of primed coordinates $\mathbf{X}^{\prime}$ upon which the operator
$A(\mathbf{X}^{\prime})$ is taken to act. This definition permits the
reformulation of Eq. 9 as
$\displaystyle\braket{A}=$ $\displaystyle\begin{pmatrix}N\\\
2\end{pmatrix}\int
d\mathbf{X}d\mathbf{X}^{\prime}\delta(\mathbf{X}^{\prime}-\mathbf{X})$
$\displaystyle\times
A(\mathbf{X}^{\prime})\int\Psi^{*}(\mathbf{X},\mathbf{Y})\Psi(\mathbf{X}^{\prime},\mathbf{Y})d\mathbf{Y}.$
(11)
In Eq. 11 we defined the delta function
$\delta(\mathbf{X}^{\prime}-\mathbf{X})=\prod_{i=1}^{2}\delta(\mathbf{r}_{i}^{\prime}-\mathbf{r}_{i})\delta_{\sigma_{i}^{\prime},\sigma_{i}}$
(12)
that includes Dirac delta functions for position and Kronecker delta functions
for spin polarization.
Finally, we define the $2$-RDM
$\rho(\mathbf{X},\mathbf{X}^{\prime})=\begin{pmatrix}N\\\
2\end{pmatrix}\int\Psi^{*}(\mathbf{X},\mathbf{Y})\Psi(\mathbf{X}^{\prime},\mathbf{Y})d\mathbf{Y},$
(13)
so that Eq. 11 simplifies to
$\braket{A}=\int
d\mathbf{X}d\mathbf{X}^{\prime}\delta(\mathbf{X}^{\prime}-\mathbf{X})A(\mathbf{X}^{\prime})\rho(\mathbf{X},\mathbf{X}^{\prime}).$
(14)
Eqs. 13 and 14 provide an enormous complexity reduction compared to Eq. 8, as
they depend only on $\mathbf{X}$ and $\mathbf{X}^{\prime}$ regardless of the
number of particles under investigation.
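This reduction can be checked numerically on a toy model. In the following
NumPy sketch (our own discretization, not the paper's code: each combined
space-spin coordinate is a discrete index of size $d$, and helper names such
as `apply_pair` are ours), a random antisymmetric three-electron state and a
random exchange-symmetric two-body kernel yield the same expectation value
through the direct pair sum of Eq. 8 and through the discretized $2$-RDM of
Eqs. 13 and 14.

```python
# Toy check of Eq. (8) vs Eq. (14): coordinates are discrete indices of size d.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 3

def parity(p):
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

# random antisymmetric, normalized N-electron "wave function"
raw = rng.normal(size=(d,) * N) + 1j * rng.normal(size=(d,) * N)
psi = sum(parity(p) * np.transpose(raw, p)
          for p in itertools.permutations(range(N)))
psi = psi / np.linalg.norm(psi)

# random Hermitian two-body kernel, symmetrized under particle exchange
eye = np.eye(d)
swap = np.einsum('ad,bc->abcd', eye, eye).reshape(d * d, d * d)
A = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
A = A + A.conj().T
A = 0.5 * (A + swap @ A @ swap)
A4 = A.reshape(d, d, d, d)              # A4[x1_out, x2_out, x1_in, x2_in]

def apply_pair(state, i, j):
    """Apply the two-body kernel to coordinates i and j of the state."""
    moved = np.moveaxis(state, (i, j), (0, 1))
    out = np.einsum('abcd,cd...->ab...', A4, moved)
    return np.moveaxis(out, (0, 1), (i, j))

# direct pair sum, Eq. (8)
direct = sum(np.vdot(psi, apply_pair(psi, i, j))
             for i, j in itertools.combinations(range(N), 2))

# 2-RDM route, Eqs. (13) and (14)
pairs = N * (N - 1) // 2
psi_m = psi.reshape(d * d, -1)           # split into X = (x1, x2) and Y
rho = pairs * (psi_m.conj() @ psi_m.T)   # rho[X, X']
via_rdm = np.einsum('xz,xz->', rho, A)
assert np.allclose(direct, via_rdm)
```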
### II.2 From the 2-RDM to the GDM
We will now represent the $2$-RDM in matrix form, starting by expressing it as
the position representation of a non-local two-body linear operator $\hat{D}$:
$\rho(\mathbf{X},\mathbf{X}^{\prime})=\braket{\mathbf{X}^{\prime}}{\hat{D}}{\mathbf{X}}.$
(15)
Introducing a complete set of geminal basis states $\ket{i}$ with wave
functions $\psi_{i}(\mathbf{X})=\braket{\mathbf{X}}{i}$, we expand $\hat{D}$
by inserting two resolutions of the identity $1=\sum_{i}\ket{i}\bra{i}$ to
find
$\hat{D}=\sum_{mn}\ket{m}\braket{m}{\hat{D}}{n}\bra{n}.$ (16)
Defining $D_{mn}=\braket{m}{\hat{D}}{n}$ and taking the position
representation by pre-multiplying $\bra{\mathbf{X}^{\prime}}$ and post-
multiplying $\ket{\mathbf{X}}$, the $2$-RDM as expressed in Eq. 15 takes the
form
$\rho(\mathbf{X},\mathbf{X}^{\prime})=\sum_{mn}D_{mn}\psi_{n}^{*}(\mathbf{X})\psi_{m}(\mathbf{X}^{\prime}).$
(17)
We can prove the validity of this expansion by construction; applying
$\int\psi_{n^{\prime}}(\mathbf{X})\psi_{m^{\prime}}^{*}(\mathbf{X}^{\prime})d\mathbf{X}d\mathbf{X}^{\prime}$
to either side of Eq. 17 isolates element $D_{m^{\prime}n^{\prime}}$ as a
functional of the $2$-RDM. See Eq. 129 in Appendix A for more detail.
Inserting Eq. 17 into Eq. 14 yields
$\braket{A}=\sum_{mn}D_{mn}A_{nm},$ (18)
where
$A_{nm}=\int\psi_{n}^{*}(\mathbf{X})A(\mathbf{X})\psi_{m}(\mathbf{X})d\mathbf{X}.$
(19)
We now define the GDM as the matrix $\mathbf{D}$ of coefficients $D_{mn}$.
Similarly defining $\mathbf{A}$ to have coefficients $A_{mn}$, Eq. 18 reduces
to the matrix form
$\braket{A}=\operatorname{Tr}[\mathbf{D}\mathbf{A}].$ (20)
Eq. 20 was derived without approximation so it exactly reproduces any
observable quantity given by the many-body wave function. Thus, knowledge of
the matrix $\mathbf{D}$ is equivalent to knowledge of
$\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$.
Although we can always generate a unique GDM from the many-body wave function,
the process cannot be reversed to construct the wave function from the GDM.
As a result, it is necessary to constrain $\mathbf{D}$ so that it is
guaranteed to satisfy $N$-representability. In Appendix A we derive the
following four necessary conditions:
$\displaystyle\mathbf{D}$ $\displaystyle=\mathbf{D}^{\dagger}$ (21a)
$\displaystyle 0\leq$ $\displaystyle D_{nn}\leq 1$ (21b)
$\displaystyle\operatorname{Tr}[\mathbf{D}]$
$\displaystyle=\begin{pmatrix}N\\\ 2\end{pmatrix}$ (21c) $\displaystyle
0\leq\operatorname{Tr}[\mathbf{D}^{2}]$ $\displaystyle\leq\begin{pmatrix}N\\\
2\end{pmatrix}.$ (21d)
Condition (21a) says the matrix must be Hermitian. We can interpret rule (21b)
as the maximum occupation number for a given geminal and (21c) as the total
number of electron pairs present in the wave function. The final expression,
(21d), is derivable from the first three and provides a way to distinguish
states in a manner that is invariant under unitary matrix transformation. In
fact, we will find that all matrices of interest in this work satisfy the
strict equality $\operatorname{Tr}[\mathbf{D}^{2}]=N(N-1)/2$.
In Section III we find that Eq. 21 is an insufficient set of constraints
because it is possible to define a non-$N$-representable GDM that obeys these
rules.
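The four rules can be wrapped in a small test, sketched below with NumPy (the
helper name and tolerances are ours). Consistent with the remark above, the
non-$N$-representable matrix $\mathbf{D}^{\prime}$ constructed in Section III
passes all four checks.

```python
# Necessary (but not sufficient) N-representability checks of Eq. (21).
import numpy as np

def check_necessary_conditions(D, N, tol=1e-10):
    """Return booleans for conditions (21a)-(21d) of a candidate GDM."""
    pairs = N * (N - 1) / 2
    diag = np.diag(D).real
    return {
        "hermitian (21a)": np.allclose(D, D.conj().T, atol=tol),
        "occupation (21b)": bool(np.all(diag >= -tol) and np.all(diag <= 1 + tol)),
        "trace (21c)": bool(np.isclose(np.trace(D).real, pairs, atol=tol)),
        "purity (21d)": bool(-tol <= np.trace(D @ D).real <= pairs + tol),
    }

# D' of Section III: ones at n = 1, 2, 4 is not N-representable, yet all True
Dp = np.zeros((6, 6))
Dp[[0, 1, 3], [0, 1, 3]] = 1.0
print(check_necessary_conditions(Dp, N=3))
```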
## III Matrix Examples
This section begins with examples of simple GDMs followed by a demonstration
of the $N$-representability problem. Subsection III.2 explores the application
of these matrices to solve for the stationary states of interacting many-body
Hamiltonians. It is found that solving such systems may be possible through a
time-dependent analysis.
### III.1 Matrices with a non-interacting geminal basis
Using an orthonormal basis of one-electron wave functions
$\phi_{i}(\mathbf{x})$, we can build $N$-electron Slater determinants by
$\Psi_{\\{\alpha\\}}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\hat{S}_{-}\prod_{i=1}^{N}\phi_{\alpha_{i}}(\mathbf{x}_{i}).$
(22)
In Eq. 22, we defined a configuration $\\{\alpha\\}$ to be an ordered
collection of integers $\alpha_{i}$ that specify the single-particle
eigenstates included in a given product. The operator $\hat{S}_{-}$ transforms
the product into an anti-symmetrized wave function by the determinant operator
in Eqs. 134 and 135.
Per Eq. 17, the GDM is defined with respect to a complete basis of two-
electron eigenstates. For now we take this basis to be the two-particle Slater
determinants
$\psi_{\mathbf{n}}(\mathbf{X})=\frac{1}{\sqrt{2}}\left(\phi_{n_{1}}(\mathbf{x}_{1})\phi_{n_{2}}(\mathbf{x}_{2})-\phi_{n_{1}}(\mathbf{x}_{2})\phi_{n_{2}}(\mathbf{x}_{1})\right)$
(23)
labeled by the pair of integers $\mathbf{n}=\\{n_{1},n_{2}\\}$. For a many-
electron state defined by a single configuration $\\{\alpha\\}$, we find in
Appendix A that the $2$-RDM can be represented as a rank four tensor indexed
by the pairs
$D_{\mathbf{mn}}=\begin{cases}1,&\text{if }\mathbf{m}=\mathbf{n}\text{ and
}n_{1},n_{2}\in\\{\alpha\\}\\\ 0,&\text{otherwise}.\end{cases}$ (24)
To transform Eq. 24 into a matrix with indices $D_{mn}$, we must assign each
pair $\mathbf{n}$ to a single integer. For this purpose we choose the mapping
shown in Table 1.
Table 1: Geminal index map $\mathbf{n}$ | $(1,2)$ | $(1,3)$ | $(2,3)$ | $(1,4)$ | $(2,4)$ | $(3,4)$ | …
---|---|---|---|---|---|---|---
$n$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $\dots$
We now have all the necessary tools to give an example; suppose our wave
function is the three-electron Slater determinant
$\Psi(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3})=\hat{S}_{-}\phi_{1}(\mathbf{x}_{1})\phi_{2}(\mathbf{x}_{2})\phi_{3}(\mathbf{x}_{3})$.
From Eq. 24 it is clear that $D_{\mathbf{nn}}=1$ for
$\mathbf{n}=\\{1,2\\},\\{1,3\\}$ and $\\{2,3\\}$. Converting to a matrix by
Table 1, we find that $D_{nn}=1$ for $n=1,2,3$. This example generalizes
trivially to $N$-electron Slater determinants, which are represented by
matrices with $N(N-1)/2$ ones placed along the diagonal. Such matrices obey
the strict equality of Eq. 21d
$\operatorname{Tr}[\mathbf{D}^{2}]=\begin{pmatrix}N\\\ 2\end{pmatrix}$ (25)
and the equivalent idempotence condition
$\mathbf{D}^{2}=\mathbf{D}.$ (26)
It is tempting to conclude that we can arbitrarily place ones on the diagonal,
but this assumption fails immediately for the matrix $\mathbf{D}^{\prime}$
with $D^{\prime}_{nn}=1$ for $n=1,2,4$. According to Table 1, the Slater determinant which generates this matrix must contain the pairs $\\{1,2\\},\\{1,3\\},$ and $\\{1,4\\}$. It is impossible for a single three-
electron product to contain all four basis functions
$\phi_{1}(\mathbf{x}),\phi_{2}(\mathbf{x}),\phi_{3}(\mathbf{x})$ and
$\phi_{4}(\mathbf{x})$, so the matrix is not $N$-representable. We conclude
that the restrictions in Eq. 21 are insufficient to ensure
$N$-representability because $\mathbf{D}^{\prime}$ satisfies all the rules and
fails to correspond to a valid wave function.
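The failure of $\mathbf{D}^{\prime}$ can be confirmed mechanically: its occupied pairs span four distinct basis functions, while a single $N$-electron determinant over a set $S$ occupies exactly the pairs drawn from $S$. A small sketch of this bookkeeping, assuming the index map of Table 1:

```python
import numpy as np
from itertools import combinations

# D' of the text: ones at geminal indices 1, 2, 4, i.e. pairs {1,2}, {1,3}, {1,4}.
D_prime = np.diag([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])
print(np.trace(D_prime), np.trace(D_prime @ D_prime))  # 3.0 3.0: Eq. 21 is satisfied

pairs = [{1, 2}, {1, 3}, {1, 4}]
union = set().union(*pairs)
print(len(union))  # 4 basis functions cannot fit in a single 3-electron product

# A valid determinant over {1, 2, 3} occupies exactly these pairs instead:
print([set(p) for p in combinations([1, 2, 3], 2)])
```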
We may also create valid states that do not satisfy Eqs. 25 and 26 by taking a
linear superposition of configurations
$\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\sum_{\\{\alpha\\}}C_{\\{\alpha\\}}\Psi_{\\{\alpha\\}}(\mathbf{x}_{1},\dots,\mathbf{x}_{N}),$
(27)
whose diagonal elements are (Eq. 142)
$D_{\mathbf{n}\mathbf{n}}=\sum_{\\{\alpha\\}\ni\mathbf{n}}\left|C_{\\{\alpha\\}}\right|^{2}.$ (28)
The remaining elements $D_{\mathbf{mn}}$ with $\mathbf{m}\neq\mathbf{n}$ are
only non-zero when two configurations share all but two basis functions (Eq.
141). Take for example the constant superposition of $M$ disjoint anti-
symmetrized configurations
$\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\frac{1}{\sqrt{M}}\hat{S}_{-}\Big{[}\Big{(}\phi_{1}(\mathbf{x}_{1})\cdots\phi_{N}(\mathbf{x}_{N})\Big{)}+\Big{(}\phi_{N+1}(\mathbf{x}_{1})\cdots\phi_{2N}(\mathbf{x}_{N})\Big{)}+\dots+\Big{(}\phi_{(M-1)N+1}(\mathbf{x}_{1})\cdots\phi_{MN}(\mathbf{x}_{N})\Big{)}\Big{]},$ (29)
where each $\phi_{i}(\mathbf{x})\neq\phi_{j}(\mathbf{x})$. Because the configurations in Eq. 29 share no basis functions, the off-diagonal elements are all zero. We then find from Eq. 28 that $D_{\mathbf{nn}}=1/M$ because each pair in the expansion appears in a single configuration with coefficient $1/\sqrt{M}$. Summing the $MN(N-1)/2$ occupied diagonal entries of size $1/M^{2}$ shows that $\operatorname{Tr}[\mathbf{D}^{2}]=N(N-1)/(2M)$, which decreases as the number $M$ of disjoint configurations increases.
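A quick numerical check of the two traces for this state (a sketch; the basis is truncated to the occupied pairs, which is harmless for a diagonal matrix):

```python
import numpy as np
from itertools import combinations

N, M = 3, 4
# The C(N,2) pairs inside each of the M disjoint configurations {1..N}, {N+1..2N}, ...
pairs = [p for k in range(M)
         for p in combinations(range(k * N + 1, (k + 1) * N + 1), 2)]
D = np.diag(np.full(len(pairs), 1.0 / M))

print(np.trace(D))      # 3.0  = N(N-1)/2, Eq. 21c
print(np.trace(D @ D))  # 0.75 = N(N-1)/(2M)
```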
### III.2 Matrices representing a real system
The previous examples were defined without regard to their relationship to a
physical system. We now aim to make use of the GDM to solve the eigenvalue
relation
$H(\mathbf{x}_{1},\dots,\mathbf{x}_{N})\Psi_{i}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\mathcal{E}_{i}\Psi_{i}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$
(30)
for an $N$-electron Hamiltonian $H(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$
containing one and two-body terms as in Eq. 4. We discovered in Section II
that any wave function $\Psi_{i}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$
corresponds to a matrix $\mathbf{D}_{i}$ that produces the eigenvalues
$\mathcal{E}_{i}$ of Eq. 30 by
$\mathcal{E}_{i}=\operatorname{Tr}[\mathbf{D}_{i}\mathbf{H}].$ (31)
The matrix $\mathbf{H}$ has elements
$H_{mn}=\int\psi_{m}^{*}(\mathbf{X})H(\mathbf{X})\psi_{n}(\mathbf{X})d\mathbf{X},$
(32)
where $H(\mathbf{X})$ is the effective two-particle Hamiltonian (Eq. 10)
acting on the geminal basis functions $\psi_{i}(\mathbf{X})$. Instead of using
the two-electron Slater determinants of Eq. 23, it will be advantageous to
choose the basis that diagonalizes $H(\mathbf{X})$ by
$H(\mathbf{X})\psi_{j}(\mathbf{X})=E_{j}\psi_{j}(\mathbf{X}).$ (33)
Suppose that we have solved for at least the $N(N-1)/2$ lowest energy eigenstates of Eq. 33 and computed the diagonal matrix $\mathbf{H}$ by Eq. 32.
We may then discover the ground state by finding the $N$-representable matrix
$\mathbf{D}_{i}$ that yields the minimum possible energy in Eq. 31. Dropping
subscript $i$ from the GDM, the minimum energy state that satisfies the rules
of Eq. 21 is
$D_{mn}=\begin{cases}1&\text{if }m=n\text{ and }n\leq\begin{pmatrix}N\\\ 2\end{pmatrix}\\\ 0&\text{otherwise}.\end{cases}$ (34)
Unfortunately, the matrix in Eq. 34 is not guaranteed to be $N$-representable
because it was discovered through a minimization subject to an insufficient
set of constraints. It is nonetheless worthwhile to introduce Eq. 34 for two
reasons, the first being that it is precisely the form of the ground state
matrices postulated by Bopp [19] for his atomic calculations. The second use
is to gain insight into the $N$-representability problem and the path toward
its resolution.
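Given the diagonal matrix $\mathbf{H}$, the minimization leading to Eq. 34 is a one-line bookkeeping exercise. A sketch with made-up geminal energies (the function name and numbers are ours):

```python
import numpy as np

def bopp_energy(E_geminal, N):
    """Energy of the Eq. 34 matrix: occupy the C(N,2) lowest geminal
    eigenvalues and evaluate Eq. 31. Not guaranteed N-representable."""
    n_pairs = N * (N - 1) // 2
    return np.sort(np.asarray(E_geminal))[:n_pairs].sum()

print(bopp_energy([-3.1, -2.5, -2.5, -1.0, 0.4, 1.2], N=3))  # -> -8.1
```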
We may wonder if the correct ground state GDM may be non-diagonal, in contrast to Eq. 34. However, borrowing intuition from single-particle mixed-state density matrices, we expect off-diagonal elements to introduce temporal density oscillations that render the state non-stationary. Another alternative is that the eigenstate matrices are diagonal but the elements may be any real number between 0 and 1. In this case, the GDM formalism seems not to reduce the difficulty of the energy minimization because the optimization occurs over a possibly infinite set of matrix coefficients.
A physical argument hints that stationary states do not exhibit such non-integral occupation numbers. Suppose $N$ electrons begin
in a Slater determinant in a system governed by a Hamiltonian with electron-
electron interactions. The initial GDM is trivially $N$-representable as it is
constructed directly from the wave function, but it is clearly not stationary.
Given enough time, however, we expect the system to relax to the ground state
by processes like the emission of radiation. Using single particle density
matrices as a guide again, we may conjecture that the electron-radiation
interaction proceeds as a unitary transformation of the GDM.
Since $\operatorname{Tr}[\mathbf{D}^{2}]$ is conserved under unitary
transformation, the ground state must also satisfy
$\operatorname{Tr}[\mathbf{D}^{2}]=N(N-1)/2$ and $\mathbf{D}^{2}=\mathbf{D}$.
These conditions are inconsistent with the presence of non-integer occupation
numbers that decrease the trace of the squared GDM. Accordingly, we may
predict that all stationary states are represented by $N(N-1)/2$ ones along
the diagonal but the remaining difficulty is to determine which collection of
occupied states defines an $N$-representable GDM.
The preceding discussion suggests that Bopp's model may be closer to correct than previously thought. After all, his method calculated the ground state of O$^{5+}$ with a remarkably low error of $0.017\%$ (see Tables I and II of Ref. [19]). Although the method fared worse for Be$^{+}$ with an error of $0.86\%$, the matrix designated as the first excited state curiously had energy within $0.040\%$ of the experimental ground state. While it is always possible that these anomalies are the result of coincidence, it is worth investigating whether Bopp's errors originated from a simple improper accounting of two-electron eigenstates.
Of course, it is not immediately obvious that the GDM should actually evolve by unitary matrix transformation. For this reason, the next section carefully
derives the evolution law.
## IV The time evolution equation
Subsection III.2 emphasized the importance of modeling the time evolution of
the GDM for the discovery of many-body eigenstates. In this section we derive
the governing equation, finding that the GDM indeed evolves in unitary fashion by the Liouville–von Neumann equation.
We begin by generalizing the equations of Section II to apply at arbitrary
times. Most importantly, our wave function will be the position representation
of the time-dependent abstract operator $\alpha(t)$
$\Psi(\mathbf{x}_{1},\dots,\mathbf{x}_{N}|t)=\braket{\mathbf{x}_{1},\dots,\mathbf{x}_{N}}{\alpha(t)}.$
(35)
The expectation value of a generally time-dependent linear operator
$\hat{A}(t)$ for the state in Eq. 35 is
$\braket{A}(t)=\braket{\alpha(t)}{\hat{A}(t)}{\alpha(t)}.$ (36)
We can also calculate observables by the equivalent $2$-RDM formulation,
defining the time-dependent $2$-RDM
$\rho(\mathbf{X},\mathbf{X}^{\prime}|t)=\int\Psi^{*}(\mathbf{X},\mathbf{Y}|t)\Psi(\mathbf{X}^{\prime},\mathbf{Y}|t)d\mathbf{Y}$
(37)
following Eq. 13. The expectation value is then given by
$\braket{A}(t)=\int\delta(\mathbf{X}-\mathbf{X}^{\prime})A(\mathbf{X}^{\prime}|t)\rho(\mathbf{X},\mathbf{X}^{\prime}|t)d\mathbf{X}d\mathbf{X}^{\prime}.$
(38)
The question that we need to answer is: How must
$\rho(\mathbf{X},\mathbf{X}^{\prime}|t)$ evolve in time so that the
expectation values computed by Eq. 38 match those given in Eq. 36 by the many-
body wave function? The ability to compute the same observable quantities for
a given time means that $\rho(\mathbf{X},\mathbf{X}^{\prime}|t)$ and its
corresponding $\mathbf{D}(t)$ furnish a complete representation of the quantum
state.
The first step in answering the posed question is to differentiate Eq. 38 to
find
$\frac{d}{dt}\braket{A}(t)=\Braket{\frac{dA}{dt}}+\int d\mathbf{X}d\mathbf{X}^{\prime}\delta(\mathbf{X}-\mathbf{X}^{\prime})A(\mathbf{X}^{\prime})\dot{\rho}(\mathbf{X},\mathbf{X}^{\prime}|t).$ (39)
We must find a matching expression for Eq. 37 in the hope of finding an
equation that connects $\dot{\rho}(\mathbf{X},\mathbf{X}^{\prime}|t)$ to known
quantities. For brevity, we will group all $N$ coordinate sets together into
the symbol
$\overline{\mathbf{X}}=\mathbf{x}_{1},\dots,\mathbf{x}_{N}$ (40)
so that we can express the $N$-electron wave function at some initial time
$t_{0}$ as $\Psi(\overline{\mathbf{X}}|t_{0})$. This wave function evolves
according to the unitary time evolution operator $U(t,t_{0})$ by
$\Psi(\overline{\mathbf{X}}|t)=U(t,t_{0})\Psi(\overline{\mathbf{X}}|t_{0}).$
(41)
Expressing the time-dependent operator $A(\overline{\mathbf{X}}|t)$ as a pair
sum following Eq. 5, we find the position representation of the inner product
in Eq. 36
$\braket{A}(t)=\sum_{i,j>i}\int\Psi^{*}(\overline{\mathbf{X}}|t_{0})U^{\dagger}(t,t_{0})A(\mathbf{x}_{i},\mathbf{x}_{j}|t)U(t,t_{0})\Psi(\overline{\mathbf{X}}|t_{0})d\overline{\mathbf{X}}.$ (42)
Recalling the notation of Section II, we have defined the differential
$d\overline{\mathbf{X}}=d\mathbf{x}_{1}\cdots d\mathbf{x}_{N}$ which includes
spin sums by Eq. 3.
We proceed to differentiate Eq. 42, distributing derivatives by the product rule to $U^{\dagger}(t,t_{0})$, $A(\mathbf{x}_{i},\mathbf{x}_{j}|t)$ and $U(t,t_{0})$. Applying the time-dependent Schrödinger equations
$i\partial_{t}U(t)=H(t)U(t)$ and
$-i\partial_{t}U^{\dagger}(t)=U^{\dagger}(t)H(t)$, we find that
$\displaystyle\frac{d}{dt}\braket{A}(t)=\Braket{\frac{dA}{dt}}-iK(t),$ (43)
where $K(t)$ is the expectation value of the many-body commutator
$K(t)=\int\Psi^{*}(\overline{\mathbf{X}}|t)\sum_{\begin{subarray}{c}i,j>i\\\ k,l>k\end{subarray}}\left[H(\mathbf{x}_{i},\mathbf{x}_{j}|t),A(\mathbf{x}_{k},\mathbf{x}_{l}|t)\right]\Psi(\overline{\mathbf{X}}|t)d\overline{\mathbf{X}}.$ (44)
In Eq. 44 we expressed the many-body Hamiltonian $H(\overline{\mathbf{X}}|t)$ as a pair sum per Eq. 5.
We simplify Eq. 44 by splitting $H(\mathbf{x}_{i},\mathbf{x}_{j}|t)$ and $A(\mathbf{x}_{k},\mathbf{x}_{l}|t)$ into their one- and two-body components following Eq. 6. Defining $K_{\alpha\beta}(t)$ to be the integral involving the commutator between $\alpha$-body terms of the Hamiltonian and $\beta$-body terms of $A$, $K(t)$ decomposes into the sum
$K(t)=K_{11}(t)+K_{12}(t)+K_{21}(t)+K_{22}(t).$ (45)
We will now calculate $K_{1\beta}(t)$, which deals with the commutator between one-body Hamiltonian terms and both one- and two-body portions of $A$. By swapping each $\mathbf{x}_{k}$ to $\mathbf{x}_{1}$ and $\mathbf{x}_{l}$ to $\mathbf{x}_{2}$, the sum over $k$ and $l>k$ in Eq. 44 reduces to $N(N-1)/2$ identical integrals. Continuing to expand $H_{1}(\mathbf{x}_{i},\mathbf{x}_{j})$ by Eq. 7, we can separate $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ contributions to find
$K_{1\beta}(t)=\begin{pmatrix}N\\\ 2\end{pmatrix}\int d\overline{\mathbf{X}}\Psi^{*}(\overline{\mathbf{X}}|t)\Bigg{(}\sum_{i,j>i}\left[\frac{h_{1}(\mathbf{x}_{i}|t)}{N-1},A_{\beta}(\mathbf{x}_{1},\mathbf{x}_{2}|t)\right]+\sum_{i,j>i}\left[\frac{h_{1}(\mathbf{x}_{j}|t)}{N-1},A_{\beta}(\mathbf{x}_{1},\mathbf{x}_{2}|t)\right]\Bigg{)}\Psi(\overline{\mathbf{X}}|t).$ (46)
Most terms in Eq. 46 cancel because operators acting on different coordinates always commute. The only surviving contributions are those in which $\mathbf{x}_{i}$ or $\mathbf{x}_{j}$ equals $\mathbf{x}_{1}$ or $\mathbf{x}_{2}$. From the first sum we pick up $N-1$ copies of the commutator $[h_{1}(\mathbf{x}_{1}),A_{\beta}(\mathbf{x}_{1},\mathbf{x}_{2})]$ by fixing $i=1$ and running over all $N-1$ elements of the $j$ sum, along with $N-2$ copies of the commutator $[h_{1}(\mathbf{x}_{2}),A_{\beta}(\mathbf{x}_{1},\mathbf{x}_{2})]$ by fixing $i=2$ and summing over the $N-2$ values of $j$. The second sum contributes the single remaining instance of $[h_{1}(\mathbf{x}_{2}),A_{\beta}(\mathbf{x}_{1},\mathbf{x}_{2})]$, from $i=1$ and $j=2$, so that each commutator appears a total of $N-1$ times.
Noting that the above argument applies identically to the computation of $K_{\alpha 1}$, we find that
$K_{\alpha\beta}(t)=\begin{pmatrix}N\\\ 2\end{pmatrix}(N-1)\int\Psi^{*}(\overline{\mathbf{X}}|t)\left[H_{\alpha}(\mathbf{X}|t),A_{\beta}(\mathbf{X}|t)\right]\Psi(\overline{\mathbf{X}}|t)d\overline{\mathbf{X}}$ (47)
for $\alpha\beta=11,12,21$. Eq. 47 does not immediately apply to $K_{22}(t)$ because, unlike in Eq. 46, two-body operators cannot be separated into single-coordinate expressions. As a result, three-coordinate terms such as $[H(\mathbf{x}_{1},\mathbf{x}_{2}),A(\mathbf{x}_{1},\mathbf{x}_{3})]$ remain in the equation for $K_{22}(t)$. By restricting the two-body terms of the Hamiltonian and of the operator $A$ to depend only on position, in the form $\sum_{i,j>i}f(|\mathbf{r}_{i}-\mathbf{r}_{j}|\,|t)$, the two-body terms always commute, so that $[H_{2}(\mathbf{X}|t),A_{2}(\mathbf{X}|t)]=0$ and Eq. 47 applies trivially to $\alpha\beta=22$. The consequences of this requirement are discussed in the Conclusion.
We absorb the prefactor $(N-1)$ of Eq. 47 into the Hamiltonian by defining
$H^{\prime}(\mathbf{X}|t)=(N-1)H(\mathbf{X}|t).$ (48)
Summing the $K_{\alpha\beta}$ by Eq. 45 allows us to express $K(t)$ as
$K(t)=\begin{pmatrix}N\\\ 2\end{pmatrix}\int d\mathbf{X}d\mathbf{Y}\Big{\\{}\Big{[}H^{\prime}(\mathbf{X}|t)\Psi^{*}(\mathbf{X},\mathbf{Y}|t)\Big{]}A(\mathbf{X}|t)\Psi(\mathbf{X},\mathbf{Y}|t)-\Psi^{*}(\mathbf{X},\mathbf{Y}|t)A(\mathbf{X}|t)H^{\prime}(\mathbf{X}|t)\Psi(\mathbf{X},\mathbf{Y}|t)\Big{\\}}.$ (49)
In Eq. 49, we separated the terms of the commutator and chose one Hermitian operator $H^{\prime}(\mathbf{X}|t)$ to act on the left copy of the wave function. We can finally substitute Eq. 37 into Eq. 49 to find the derivative of the expectation value by Eq. 43:
$\frac{d}{dt}\braket{A}(t)=\Braket{\frac{dA}{dt}}-i\int d\mathbf{X}d\mathbf{X}^{\prime}\delta(\mathbf{X}-\mathbf{X}^{\prime})A(\mathbf{X}^{\prime})[H^{\prime}(\mathbf{X})-H^{\prime}(\mathbf{X}^{\prime})]\rho(\mathbf{X},\mathbf{X}^{\prime}|t).$ (50)
Note that one copy of the Hamiltonian depends on coordinates $\mathbf{X}$ as a consequence of its acting to the left in the inner product. Comparing Eq. 50 to Eq. 39, we find the desired expression for $\dot{\rho}(\mathbf{X},\mathbf{X}^{\prime}|t)$:
$\dot{\rho}(\mathbf{X},\mathbf{X}^{\prime}|t)=-i[H^{\prime}(\mathbf{X})-H^{\prime}(\mathbf{X}^{\prime})]\rho(\mathbf{X},\mathbf{X}^{\prime}|t).$ (52)
We proceed to derive a more convenient matrix representation of Eq. 52
starting with a time-dependent geminal expansion of the $2$-RDM
$\rho(\mathbf{X},\mathbf{X}^{\prime}|t)=\sum_{mn}D_{mn}(t)\psi_{n}^{*}(\mathbf{X}|t)\psi_{m}(\mathbf{X}^{\prime}|t).$
(53)
Note that Eq. 53 represents the most general case in which both the matrix
elements $D_{mn}(t)$ and the geminal basis functions may vary in time.
Plugging into Eq. 52 gives
$i\frac{d}{dt}\braket{\mathbf{X}^{\prime}}{\hat{D}}{\mathbf{X}}=\sum_{mn}D_{mn}(t)\psi_{m}(\mathbf{X}^{\prime}|t)\left[H^{\prime}(\mathbf{X}|t)\psi_{n}^{*}(\mathbf{X}|t)\right]-\sum_{mn}D_{mn}(t)\left[H^{\prime}(\mathbf{X}^{\prime}|t)\psi_{m}(\mathbf{X}^{\prime}|t)\right]\psi_{n}^{*}(\mathbf{X}|t),$ (54)
where we have rearranged the $\psi_{i}(\mathbf{X})$ into the most convenient order. We substitute into Eq. 54 the following identities
$\psi_{m}(\mathbf{X}^{\prime}|t)=\braket{\mathbf{X}^{\prime}}{m(t)}$ (55)
$\psi_{n}^{*}(\mathbf{X}|t)=\braket{n(t)}{\mathbf{X}}$ (56)
$H^{\prime}(\mathbf{X}^{\prime}|t)\psi_{m}(\mathbf{X}^{\prime}|t)=\braket{\mathbf{X}^{\prime}}{\hat{H}^{\prime}(t)}{m(t)}$ (57)
$H^{\prime}(\mathbf{X}|t)\psi_{n}^{*}(\mathbf{X}|t)=\braket{n(t)}{\hat{H}^{\prime}(t)}{\mathbf{X}},$ (58)
so that everything on the right-hand side sits between $\bra{\mathbf{X}^{\prime}}$ and $\ket{\mathbf{X}}$. Recalling the definition of $\hat{D}(t)$ (Eq. 16), we extract from Eq. 54 the abstract operator equation
$\frac{d}{dt}\hat{D}(t)=-i[\hat{H}^{\prime}(t),\hat{D}(t)].$ (59)
Eq. 59 is the familiar Liouville–von Neumann equation.
We can specialize Eq. 59 to a matrix equation by choosing a time-independent basis $\ket{i}$ with which to expand $\hat{D}(t)=\sum_{mn}D_{mn}(t)\ket{m}\bra{n}$ and $\hat{H}^{\prime}(t)=\sum_{mn}H^{\prime}_{mn}(t)\ket{m}\bra{n}$. The result is that
$\dot{\mathbf{D}}(t)=-i[\mathbf{H}^{\prime}(t),\mathbf{D}(t)],$ (60)
which we find by an alternate derivation in Appendix C. The solution of Eq. 60 evolves through matrix transformation by the operator $\mathbf{U}(t,t_{0})$ as
$\mathbf{D}(t)=\mathbf{U}(t,t_{0})\mathbf{D}(t_{0})\mathbf{U}^{\dagger}(t,t_{0}).$
(61)
Plugging Eq. 61 into Eq. 60 and matching expressions on either side, we find that $\mathbf{U}(t,t_{0})$ must satisfy
$\dot{\mathbf{U}}(t,t_{0})=-i\mathbf{H}^{\prime}(t)\mathbf{U}(t,t_{0}),$ (62)
and it is a routine procedure to successively integrate Eq. 62 to infinite order, giving
$\mathbf{U}(t,t_{0})=\mathcal{T}_{t}\exp\left(-i\int_{t_{0}}^{t}d\tau\mathbf{H}^{\prime}(\tau)\right).$ (63)
$\mathbf{U}(t,t_{0})$, being the time-ordered exponential of skew-Hermitian matrices, is clearly unitary. As mentioned at the end of Section II, we now see that the unitary
transformation-invariant property
$\mathbf{D}^{2}=\mathbf{D}$ (64)
holds for any state connected to a Slater determinant by a time-dependent
Hamiltonian. Since states can only change by such a unitary transformation,
Eq. 64 must be true for any GDM with fixed particle number.
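These invariants are straightforward to verify numerically. The sketch below (a random Hermitian $\mathbf{H}^{\prime}$, assumed constant so that Eq. 63 reduces to an ordinary matrix exponential) propagates an idempotent GDM and confirms that both traces and idempotence survive:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 6, 3    # e.g. N = 3 electrons -> C(3,2) = 3 occupied geminals

# Idempotent initial GDM (a Slater determinant, Eq. 26) and a random
# Hermitian effective Hamiltonian H' standing in for Eq. 48.
D0 = np.diag([1.0] * n_pairs + [0.0] * (dim - n_pairs)).astype(complex)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Hp = (A + A.conj().T) / 2

w, V = np.linalg.eigh(Hp)
U = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.conj().T   # Eq. 63 for constant H'
D = U @ D0 @ U.conj().T                               # Eq. 61

print(np.trace(D).real)        # 3.0 = N(N-1)/2, Eq. 21c conserved
print(np.trace(D @ D).real)    # 3.0, conserved under unitary evolution
print(np.allclose(D @ D, D))   # idempotence (Eq. 64) preserved
```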
## V Switching on the electron-electron interaction
Having confirmed unitary evolution of the GDM, we will now exploit this
property to solve the many-body Schrödinger equation. We do so by studying an
$N$-electron Hamiltonian in which the Coulomb potential is switched on by a
temporal function $\lambda(t)$. Using Hartree atomic units with
$\hbar=m_{0}=|e|=1/(4\pi\epsilon_{0})=1$, the Hamiltonian is written
$H(\overline{\mathbf{X}}|t)=\sum_{i,j>i}\left[H_{1}(\mathbf{x}_{i},\mathbf{x}_{j})+\frac{\lambda(t)}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\right],$
(65)
with $\lambda(0)=0$ and $\lambda(t)=1$ for $t$ larger than some switching time
$T$.
As we saw in Eq. 10, Eq. 65 reduces in the $2$-RDM formalism to the effective
two-electron form
$H(\mathbf{X}|t)=H_{1}(\mathbf{X})+\frac{\lambda(t)}{|\mathbf{r}_{1}-\mathbf{r}_{2}|}.$
(66)
We introduce two natural basis sets for the study of Eq. 66, the first being
the non-interacting basis that diagonalizes the Hamiltonian at $t=0$ by
$H(\mathbf{X}|0)\psi_{i}(\mathbf{X})=E_{i}\psi_{i}(\mathbf{X}).$ (67)
Similarly, we define the interacting basis by fixing $t=T$ and solving
$H(\mathbf{X}|T)\psi_{i}(\mathbf{X})=\left(H_{1}(\mathbf{X})+\frac{1}{|\mathbf{r}_{1}-\mathbf{r}_{2}|}\right)\psi_{i}(\mathbf{X})=E_{i}\psi_{i}(\mathbf{X}).$ (68)
Using the subscript $I$ and $N$ for the interacting and non-interacting bases,
respectively, we can change between the two representations by the unitary
transformation (see Appendix B):
$\mathbf{D}_{I}(t)=\mathbf{U}_{I}^{N}\mathbf{D}_{N}(t)\big{(}\mathbf{U}_{I}^{N}\big{)}^{\dagger}.$
(69)
In the following, we observe the time evolution of a Slater determinant under
the influence of the Hamiltonian in Eqs. 65 and 66. Section V.1 treats the
case in which the Coulomb interaction is quickly switched on and Section V.2
details a slow adiabatic change which allows us to construct fully-interacting
many-body solutions.
### V.1 The sudden approximation
Beginning with the electrons in a Slater determinant state, we instantaneously
turn on the Coulomb interaction by the step function $\lambda(t)=u(t-T)$. The
sudden approximation posits that the electron gas remains unchanged during
switching but starts to evolve according to the Hamiltonian
$H(\overline{\mathbf{X}}|t)$ for $t>T$. Since the Hamiltonian is constant for
$t>T$, the time-dependent GDM is found from Eqs. 61 and 63 to be
$\mathbf{D}(t)=e^{-i\mathbf{H}^{\prime}(T)(t-T)}\mathbf{D}(T)e^{i\mathbf{H}^{\prime}(T)(t-T)}.$ (70)
Selecting the interacting basis, we have that the Hamiltonian $\mathbf{H}$ is
diagonal while $\mathbf{D}(T)$ is non-diagonal by the transformation in Eq.
69. This choice of basis simplifies the matrix equation in Eq. 70 to
$D_{mn}(t)=e^{iE_{nm}^{\prime}(t-T)}D_{mn}(T),$ (71)
with $E^{\prime}_{mn}=(N-1)(E_{m}-E_{n})$ accounting for the multiplicative
constant attached to $H^{\prime}(\mathbf{X})$ in Eq. 48.
With the time-dependent GDM given by Eq. 71, we can calculate the observable
quantity described by the operator with position representation
$\rho(\mathbf{x};\mathbf{X})=\frac{\delta(\mathbf{x}_{1}-\mathbf{x})+\delta(\mathbf{x}_{2}-\mathbf{x})}{N-1}.$
(72)
Recalling that $\mathbf{x}=\mathbf{r}\sigma$, the expectation value
$\braket{\rho(\mathbf{x})}(t)$ gives the electron density at position
$\mathbf{r}$ with spin $\sigma$. For simplicity, we will define the symbol
$\rho(\mathbf{x},t)=\braket{\rho(\mathbf{x})}(t),$ (73)
which we calculate by the trace relation
$\rho(\mathbf{x},t)=\operatorname{Tr}[\mathbf{D}(t)\boldsymbol{\rho}(\mathbf{x})].$
(74)
The matrix elements of $\boldsymbol{\rho}(\mathbf{x})$ take the simplified
form
$\rho_{nm}(\mathbf{x})=\frac{2}{N-1}\int\psi_{n}^{*}(\mathbf{x},\mathbf{x}_{2})\psi_{m}(\mathbf{x},\mathbf{x}_{2})d\mathbf{x}_{2}.$
(75)
Computing the trace in Eq. 74 finally yields the electron density
$\rho(\mathbf{x},t)=\sum_{mn}D_{mn}(T)e^{-iE^{\prime}_{mn}(t-T)}\rho_{nm}(\mathbf{x}).$
(76)
We perform a simple check that particle number is conserved; integrating Eq.
76 over $\mathbf{x}$, we find from orthonormality and Eq. 21c that
$\int\rho(\mathbf{x},t)d\mathbf{x}=N$ as required. In the next subsection we
delve deeper into the implications of the electron density equation.
#### V.1.1 The origin of incoherent quantum fluctuations
We can extract a surprising amount of insight from the simple relation in Eq.
76. Separating the stationary and oscillating terms allows us to express the
density as
$\rho(\mathbf{x},t)=\sum_{l}D_{ll}\rho_{ll}(\mathbf{x})+\sum_{\begin{subarray}{c}m\neq n\\\ E_{m}^{\prime}=E_{n}^{\prime}\end{subarray}}\text{Re}\\{P_{mn}(\mathbf{x})\\}+\sum_{\begin{subarray}{c}p,q\\\ E_{p}^{\prime}\neq E_{q}^{\prime}\end{subarray}}|P_{pq}(\mathbf{x})|\cos\left(E_{pq}^{\prime}t+\theta_{pq}(\mathbf{x})\right),$ (77)
with
$P_{ij}(\mathbf{x})=D_{ij}(T)\rho_{ji}(\mathbf{x})$ (78)
$\theta_{ij}(\mathbf{x})=\arg\left(P_{ij}(\mathbf{x})\right).$ (79)
The first sum in Eq. 77 gives the contribution to the density from diagonal matrix elements, and the second and third are from the degenerate and non-degenerate off-diagonals, respectively.
Defining the time average functional for $t>T$ by
$\overline{f(\mathbf{x},t)}=\lim_{\tau\rightarrow\infty}\frac{1}{\tau}\int_{T}^{T+\tau}f(\mathbf{x},t^{\prime})dt^{\prime},$
(80)
we find the time-averaged density at $\mathbf{x}$ to be determined entirely by
the diagonal and degenerate terms
$\overline{\rho(\mathbf{x},t)}=\sum_{l}D_{ll}\rho_{ll}(\mathbf{x})+\sum_{\begin{subarray}{c}m\neq n\\\ E_{m}^{\prime}=E_{n}^{\prime}\end{subarray}}\text{Re}\\{P_{mn}(\mathbf{x})\\}.$ (81)
On the other hand, the non-degenerate off-diagonals introduce temporal density fluctuations with variance
$\sigma^{2}(\mathbf{x})=\overline{\rho^{2}(\mathbf{x},t)}-\overline{\rho(\mathbf{x},t)}^{2}=\sum_{\begin{subarray}{c}p,q\\\ E_{p}^{\prime}\neq E_{q}^{\prime}\end{subarray}}|P_{pq}(\mathbf{x})|^{2},$ (82)
since the ordered pairs $(p,q)$ and $(q,p)$ in Eq. 77 contribute the same cosine, whose square time-averages to $1/2$.
The fluctuations increase in magnitude as the number of off-diagonal terms increases, an idea which can also be understood directly from Eq. 77: summing an increasing number of out-of-phase cosines leads to peaks in the density that decrease in duration but increase in intensity. A similar argument can be made for the spatial extent of these quasi-random density spikes.
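As a numerical sanity check of Eqs. 77–82, the sketch below builds a toy Hermitian matrix $P_{pq}(\mathbf{x})$ at one fixed point, synthesizes the density time series, and compares its variance against Eq. 82 (all inputs are assumed, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
P = (A + A.conj().T) / 2                      # Hermitian, so the density is real
Ep = np.array([0.0, 1.0, 2.9, 5.2, 7.77])     # nondegenerate scaled energies E'

t = np.linspace(0.0, 5000.0, 500_001)
rho_t = np.zeros_like(t)
for p in range(n):
    for q in range(n):
        rho_t += (P[p, q] * np.exp(-1j * (Ep[p] - Ep[q]) * t)).real

var_eq82 = sum(abs(P[p, q]) ** 2 for p in range(n) for q in range(n) if p != q)
print(rho_t.var(), var_eq82)   # agree up to the finite averaging window
```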
Eqs. 77–82 are best understood in the context of a physical example.
Suppose we have a system of electrons experiencing the potential of a set of
nuclei at positions $\mathbf{R}_{i}(t)$ that are free to move in time. The
resulting effective electronic Hamiltonian is
$H^{\prime}(\mathbf{X}|t)=\sum_{i=1}^{2}\left(-\frac{\nabla_{i}^{2}}{2}-\sum_{j}\frac{1}{|\mathbf{r}_{i}-\mathbf{R}_{j}(t)|}\right)+(N-1)\frac{\lambda(t)}{|\mathbf{r}_{1}-\mathbf{r}_{2}|},$ (83)
noting that we used $H^{\prime}(\mathbf{X}|t)=(N-1)H(\mathbf{X}|t)$. Once
again the presence of $\lambda(t)=u(t-T)$ indicates that we abruptly switch on
the Coulomb potential at time $t=T$.
The electron-nuclei system evolves according to the coupled equations
$\dot{\mathbf{D}}(t)=-i[\mathbf{H}^{\prime}(t),\mathbf{D}(t)]$ (84)
$M_{i}\ddot{\mathbf{R}}_{i}(t)=\sum_{j\neq i}\frac{\mathbf{R}_{i}(t)-\mathbf{R}_{j}(t)}{|\mathbf{R}_{i}(t)-\mathbf{R}_{j}(t)|^{3}}-\sum_{\sigma}\int\rho(\mathbf{r}\sigma,t)\frac{\mathbf{R}_{i}(t)-\mathbf{r}}{|\mathbf{R}_{i}(t)-\mathbf{r}|^{3}}d\mathbf{r},$ (85)
where Eq. 85 is the classical non-relativistic equation of motion for nucleus $i$ with mass $M_{i}$. Each nucleus feels a repulsive Coulomb force induced by the other nuclei and an attractive Coulomb force from the electron gas. The sum over electron spins is explicitly written following Eq. 3. We take the
system to be at rest for $t<T$, meaning that all $\mathbf{R}_{i}(t)$ are fixed
in place and the electrons are in an eigenstate represented by a diagonal
matrix.
When $t>T$, the electron density fluctuates by a series of spikes localized in
space and time. Each peak causes an abrupt change in the electron-nucleus
Coulomb force in Eq. 85 which causes a near-instantaneous scattering of the
nuclei. The resulting motion of $\mathbf{R}_{i}(t)$ affects the Hamiltonian in
Eq. 83 and subsequently the GDM by Eq. 84. The net effect is the transfer
of energy from the electrons to the nuclei. This process continues until an
equilibrium is reached wherein net energy ceases to flow between the
subsystems. There is an alternate picture that expands Eq. 85 as a sum of
normal modes of motion excited by the oscillatory terms of Eq. 77. Summing
the out-of-phase normal mode oscillation amplitudes yields the same
thermalized motion as the scattering picture.
Instead of evolving by the non-physical Hamiltonian in Eq. 83, we could
have fixed $\lambda=1$ and included the effect of a time dependent
electromagnetic potential $A(\mathbf{r},t)$. In this case we would see that
the field excites electron density fluctuations by introducing non-zero off-
diagonal elements to $\mathbf{D}$, which induces motion in the nuclei leading
again to thermalization. This example and the previous one illustrate a major
strength of the GDM formalism. By properly treating all electrons together, we
begin to see the emergence of classical behavior in a quantum system.
### V.2 The degenerate adiabatic theorem
Section IV derived the equation of motion (Eq. 59)
$\frac{d}{dt}\hat{D}(t)=-i[\hat{H}^{\prime}(t),\hat{D}(t)]$ (86)
for the abstract density operator $\hat{D}(t)$. We found the corresponding
matrix equation (Eq. 60) after expanding $\hat{D}(t)$ in a time-independent
geminal basis.
Here we derive the adiabatic theorem, beginning by determining the matrix
equation of motion for a GDM expanded in a time-dependent basis in which the
Hamiltonian is always diagonal. That is, the basis functions satisfy
$\hat{H}(t)\ket{i(t)}=E_{i}(t)\ket{i(t)},$ (87)
where $E_{i}(t)$ is the instantaneous eigenvalue of state $\ket{i(t)}$ at time
$t$. Labeling matrix elements by $\mathcal{D}_{pq}(t)$, we have that
$\hat{D}(t)=\sum_{pq}\mathcal{D}_{pq}(t)\ket{p(t)}\bra{q(t)}.$ (88)
Inserting Eq. 88 into Eq. 86 and picking out the $mn$ component by pre-multiplying $\bra{m(t)}$ and post-multiplying $\ket{n(t)}$, we find that the right-hand side evaluates to $-iE_{mn}^{\prime}(t)\mathcal{D}_{mn}(t)$ after applying Eq. 87. Repeating for the left-hand side, by the product rule we have that
$\Braket{m(t)}{\frac{d}{dt}\hat{D}(t)}{n(t)}=\dot{\mathcal{D}}_{mn}(t)+\sum_{p}\mathcal{D}_{pn}(t)\braket{m(t)}{\dot{p}(t)}+\sum_{q}\mathcal{D}_{mq}(t)\braket{\dot{q}(t)}{n(t)}.$ (89)
Because $(d/dt)\braket{q(t)}{n(t)}=0$ by orthonormality at time $t$, we can simplify Eq. 89 by substituting $\braket{\dot{q}(t)}{n(t)}\rightarrow-\braket{q(t)}{\dot{n}(t)}$ and taking these terms to be the elements of the skew-Hermitian matrix $\mathbf{M}$ with coefficients
$M_{ij}(t)=\Braket{i(t)}{\frac{d}{dt}}{j(t)}.$ (90)
After doing so, the final equation of motion for the $mn$ element is
$\dot{\mathcal{D}}_{mn}(t)=-iE_{mn}^{\prime}(t)\mathcal{D}_{mn}(t)-\sum_{p}\mathcal{D}_{pn}(t)M_{mp}(t)+\sum_{q}\mathcal{D}_{mq}(t)M_{qn}(t).$ (91)
We will now simplify Eq. 91 for a Hamiltonian that varies slowly over a long time interval $T$. In the context of this work, $T$ will be the switching time that appears in the Coulomb potential scaling $\lambda(t)$ of Eq. 66. Closely following Ref. [20], we will discover how $\boldsymbol{\mathcal{D}}(t)$ evolves when $T\rightarrow\infty$. Defining a natural time
$s(t)=\frac{t}{T}$ (92)
scaled by the switching duration, we use the fact that $(d/ds)=T(d/dt)$ to see that
$\frac{d}{ds}\mathcal{D}_{mn}(s)=-iTE_{mn}^{\prime}(s)\mathcal{D}_{mn}(s)+\sum_{q}\mathcal{D}_{mq}(s)M_{qn}(s)-\sum_{p}M_{mp}(s)\mathcal{D}_{pn}(s).$ (93)
In Eq. 93 we used that $M_{ij}(t)=(1/T)M_{ij}(s)$ and chose a more convenient ordering for the sum terms. The first term on the right-hand side represents the dynamical phase, which we factor out by defining $\tilde{\mathcal{D}}_{mn}(s)$ as
$\mathcal{D}_{mn}(s)=\tilde{\mathcal{D}}_{mn}(s)e^{-iT\int_{0}^{s}E_{mn}^{\prime}(s^{\prime})ds^{\prime}},$
(94)
so that
$\frac{d}{ds}\tilde{\mathcal{D}}_{mn}(s)=\sum_{\begin{subarray}{c}q\\\ E_{q}=E_{n}\end{subarray}}\tilde{\mathcal{D}}_{mq}(s)M_{qn}(s)-\sum_{\begin{subarray}{c}p\\\ E_{p}=E_{m}\end{subarray}}M_{mp}(s)\tilde{\mathcal{D}}_{pn}(s)+\sum_{\begin{subarray}{c}q\\\ E_{q}\neq E_{n}\end{subarray}}\frac{d}{ds}\int_{0}^{s}ds^{\prime}\tilde{\mathcal{D}}_{mq}(s^{\prime})M_{qn}(s^{\prime})e^{-iT\int_{0}^{s^{\prime}}E_{nq}^{\prime}(s^{\prime\prime})ds^{\prime\prime}}-\sum_{\begin{subarray}{c}p\\\ E_{p}\neq E_{m}\end{subarray}}\frac{d}{ds}\int_{0}^{s}ds^{\prime}M_{mp}(s^{\prime})\tilde{\mathcal{D}}_{pn}(s^{\prime})e^{-iT\int_{0}^{s^{\prime}}E_{pm}^{\prime}(s^{\prime\prime})ds^{\prime\prime}}.$ (95)
In Eq. 95 we separated the $p$ and $q$ sums into terms that accumulate dynamical phase and those that do not. To the phase-accumulating terms we have applied the identity operator $\hat{1}=(d/ds)\int_{0}^{s}ds^{\prime}$. We will now evaluate the last line of Eq. 95 to understand the implications of the dynamical phase. To begin, we simplify by defining
$F_{p}(s^{\prime})=M_{mp}(s^{\prime})\tilde{\mathcal{D}}_{pn}(s^{\prime})$ (96)
and
$g_{pm}(s^{\prime})=\int_{0}^{s^{\prime}}E_{pm}^{\prime}(s^{\prime\prime})ds^{\prime\prime}$
(97)
so that the integral over $ds^{\prime}$ (which we call $I_{p}(s)$) takes the
form
$I_{p}(s)=\int_{0}^{s}F_{p}(s^{\prime})e^{-iTg_{pm}(s^{\prime})}ds^{\prime}.$
(98)
Using that $\dot{g}_{pm}(s^{\prime})=E^{\prime}_{pm}(s^{\prime})$ by Eq. 97, we multiply the integrand by the identity $1=-iT\dot{g}_{pm}(s^{\prime})/(-iTE^{\prime}_{pm}(s^{\prime}))$ so that
$I_{p}(s)=\int_{0}^{s}\frac{F_{p}(s^{\prime})}{-iTE_{pm}^{\prime}(s^{\prime})}\left(-iT\dot{g}_{pm}(s^{\prime})e^{-iTg_{pm}(s^{\prime})}\right)ds^{\prime}.$ (99)
Because $-iT\dot{g}_{pm}(s^{\prime})e^{-iTg_{pm}(s^{\prime})}=(d/ds^{\prime})e^{-iTg_{pm}(s^{\prime})}$, we are able to integrate Eq. 99 by parts to finally find
$I_{p}(s)=-\frac{1}{iT}\left[\frac{F_{p}(s^{\prime})}{E_{pm}^{\prime}(s^{\prime})}e^{-iTg_{pm}(s^{\prime})}\right]_{0}^{s}+\frac{1}{iT}\int_{0}^{s}\frac{d}{ds^{\prime}}\left(\frac{F_{p}(s^{\prime})}{E^{\prime}_{pm}(s^{\prime})}\right)e^{-iTg_{pm}(s^{\prime})}ds^{\prime}.$ (100)
As long as $F_{p}(s^{\prime})$ is differentiable, we can take the adiabatic approximation by letting $T\rightarrow\infty$ so that $I_{p}(s)\rightarrow 0$. Since the integral in the second line of Eq. 100 carries the same $1/T$ prefactor, we conclude that any contribution that accumulates dynamical phase evaluates to zero. Canceling these integral terms and reverting to the unscaled time $t$ simplifies Eq. 95 to
$\frac{d}{dt}\tilde{\mathcal{D}}_{mn}(t)=\sum_{\begin{subarray}{c}q\\\
E_{q}=E_{n}\end{subarray}}\tilde{\mathcal{D}}_{mq}(t)M_{qn}(t)-\sum_{\begin{subarray}{c}p\\\
E_{p}=E_{m}\end{subarray}}M_{mp}(t)\tilde{\mathcal{D}}_{pn}(t),$ (101)
which we study in two separate cases. First suppose that states $m$ and $n$ are non-degenerate. Under this condition, the sum restriction $E_{q}=E_{n}$ implies that $E_{q}\neq E_{m}$, because the contrary would contradict the assumption of non-degeneracy ($E_{m}\neq E_{n}$). The same applies for the $p$ sum, so that only terms depending on $\mathcal{D}_{ij}(t)$ with $E_{i}\neq E_{j}$ appear on the right-hand side. If we choose matrices with $\mathcal{D}_{ij}(0)=0$ for all states $i,j$ with $E_{i}\neq E_{j}$, these elements remain $\mathcal{D}_{ij}(t)=0$ at all times because they only intermix with each other.
What remains is to evaluate Eq. 101 when $m$ and $n$ are degenerate. In this
case, $p$ and $q$ both run over all states degenerate with $m$ and $n$ so that
the right-hand side reduces to a single sum. We see from Eq. 94 that the
degeneracy condition implies that
$\tilde{\boldsymbol{\mathcal{D}}}(t)=\boldsymbol{\mathcal{D}}(t)$. Taking the
basis to be ordered by increasing $E_{i}(t)$, we can define submatrices
$\boldsymbol{\mathcal{D}}_{\mu}(t)$ for each degenerate subspace $\mu$. Noting
that the terms in Eq. 101 are in the form of matrix multiplications, we find
that the equation of motion separates into the commutation relations
$\dot{\boldsymbol{\mathcal{D}}}_{\mu}(t)=-[\mathbf{M}_{\mu}(t),\boldsymbol{\mathcal{D}}_{\mu}(t)]$
(102)
for each subspace $\mu$. This expression details the accumulation of geometric
phase within a given degenerate subspace as time progresses. The final GDM is
simply the block diagonal direct sum
$\boldsymbol{\mathcal{D}}(t)=\text{diag}[\boldsymbol{\mathcal{D}}_{1}(t),\boldsymbol{\mathcal{D}}_{2}(t),\dots,\boldsymbol{\mathcal{D}}_{\mu}(t),\dots],$
(103)
valid for all times including $t>T$. As a final remark, note that Eq. 102
describes a unitary transformation because the matrix $\mathbf{M}(t)$ is skew-
Hermitian. We may also choose to define a Hermitian matrix
$\tilde{\mathbf{M}}(t)=-i\mathbf{M}(t)$ so that the expression more closely
resembles Eq. 60.
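The unitary character of Eq. 102 can be checked directly. The sketch below steps a toy skew-Hermitian $\mathbf{M}_{\mu}(t)$ (an assumed illustration, not derived from any Hamiltonian) and confirms that the subspace population and idempotence are preserved:

```python
import numpy as np

def unitary_step(M, dt):
    """exp(-M dt) for skew-Hermitian M, computed via the Hermitian matrix iM."""
    w, V = np.linalg.eigh(1j * M)
    return V @ np.diag(np.exp(1j * w * dt)) @ V.conj().T

def M_of_t(t):
    # Toy smooth skew-Hermitian generator (real antisymmetric here).
    return np.array([[0.0, 1.0, 0.0],
                     [-1.0, 0.0, np.sin(t)],
                     [0.0, -np.sin(t), 0.0]], dtype=complex)

D = np.diag([1.0, 1.0, 0.0]).astype(complex)  # populations within one subspace
dt = 1e-2
for k in range(500):                          # Eq. 102: dD/dt = -[M(t), D]
    U = unitary_step(M_of_t(k * dt), dt)
    D = U @ D @ U.conj().T

print(round(np.trace(D).real, 10))            # 2.0, population conserved
print(np.allclose(D @ D, D))                  # idempotence preserved
```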
#### V.2.1 Practical considerations of the degenerate adiabatic theorem
The application of Eqs. 102 and 103 requires knowledge of the matrix
$\mathbf{M}(t)$. Using the relation $(d/dt)=\dot{\lambda}(d/d\lambda)$, we can
calculate the matrix elements from Eq. 90 in the position representation
$M_{ij}(t)=\dot{\lambda}(t)\int\psi_{i}^{*}(\mathbf{X}|\lambda)\frac{\partial}{\partial\lambda}\psi_{j}(\mathbf{X}|\lambda)d\mathbf{X}.$ (104)
However, our only prescription for the instantaneous eigenstates was that they
diagonalized the Hamiltonian at time $t$. We have not fixed the connection
between eigenstates $\psi_{i}(\mathbf{X}|\lambda)$ and
$\psi_{i}(\mathbf{X}|\lambda+\Delta\lambda)$. In fact, an eigenstate solver
may output a different arbitrary rotation for each value of $\lambda$ so that
$\psi_{j}(\mathbf{X}|\lambda)$ is discontinuous in $\lambda$ and the $T\rightarrow\infty$ limit of Eq. 95 is no longer valid. Thus, we must require that $\psi_{j}(\mathbf{X}|\lambda)$ is smooth as a function of
$\lambda$. In Section VI we will perturb the Hamiltonian with an asymmetric
potential to lift the degeneracy so that the connection between time-adjacent
eigenstates is obvious.
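In practice the smooth connection can be enforced numerically: diagonalize on a $\lambda$ grid, rotate each eigenvector's arbitrary phase to align it with its predecessor, and estimate Eq. 104 by finite differences. A sketch, assuming a nondegenerate spectrum so that states keep their ordering between grid points:

```python
import numpy as np

def smooth_eigvecs(H_list):
    """Eigenvectors of each H(lambda_k) on a grid, with every column's
    arbitrary phase rotated to align with the previous grid point."""
    out, V_prev = [], None
    for H in H_list:
        V = np.linalg.eigh(H)[1].astype(complex)
        if V_prev is not None:
            for k in range(V.shape[1]):
                ov = V_prev[:, k].conj() @ V[:, k]
                V[:, k] *= ov.conj() / abs(ov)   # remove the arbitrary phase
        out.append(V)
        V_prev = V
    return out

def M_estimate(V0, V1, dlam, dlam_dt):
    """Finite-difference estimate of Eq. 104, M_ij = lambda_dot <i|d/dlambda|j>."""
    return dlam_dt * (V0.conj().T @ ((V1 - V0) / dlam))
```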
The final issue pertains to the $\lambda$-dependent spectrum of geminal
eigenstates. As eigenvalues approach each other with increasing $\lambda$,
they may cross or anti-cross depending on symmetry. Anti-crossings are handled
perfectly fine by Eqs. 102 and 103, but the derivation of the adiabatic
theorem breaks down at a crossing point. In a sense, the point acts as a pole
where the degenerate submatrices intermix by instantaneously picking up
geometric phase within the expanded degenerate subspace at the intersection.
We determine the behavior around these accidental degeneracies by a physical
argument. First note that the total energy as computed by the many-body wave
function must be differentiable, which we see upon differentiating the energy
expectation value by the Hellmann–Feynman theorem:
$\frac{d\mathcal{E}}{dt}=\dot{\lambda}(t)\sum_{i,j>i}\int\frac{|\Psi(\overline{\mathbf{X}}|t)|^{2}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}d\overline{\mathbf{X}}.$
(105)
It follows that $\mathcal{E}(t)$ is differentiable because the right hand side
of Eq. 105 is finite at all times. The GDM that represents the wave function
computes the energy as
$\mathcal{E}(t)=\sum_{\mu}\operatorname{Tr}[\boldsymbol{\mathcal{D}}_{\mu}(t)]E_{\mu}(\lambda(t)),$
(106)
where $\operatorname{Tr}[\boldsymbol{\mathcal{D}}_{\mu}(t)]$ is the population
in a given degenerate subspace with energy $E_{\mu}(\lambda(t))$. An abrupt
re-distribution of population at a crossing point $t_{0}$ would yield an
$\mathcal{E}(t)$ that is not differentiable at $t_{0}$. Since we know by Eq.
105 that $\mathcal{E}(t)$ must be differentiable, we conclude that no
instantaneous population interchange can occur at a level crossing.
## VI Computing many-body eigenstates using the adiabatic theorem
We are now equipped to solve the many-electron Schrödinger equation
$\left(\sum_{i,j>i}H(\mathbf{x}_{i},\mathbf{x}_{j})\right)\Psi_{n}(\overline{\mathbf{X}})=\mathcal{E}_{n}\Psi_{n}(\overline{\mathbf{X}})$
(107)
for the matrices $\mathbf{D}_{n}$ that represent wave functions
$\Psi_{n}(\overline{\mathbf{X}})$. The Hamiltonian of interest has terms
$H(\mathbf{x}_{i},\mathbf{x}_{j})=H_{1}(\mathbf{x}_{i},\mathbf{x}_{j})+\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|},$ (108)
containing the one-body contribution
$H_{1}(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{1}{N-1}\left[-\frac{\nabla_{i}^{2}}{2}-\frac{\nabla_{j}^{2}}{2}+v(\mathbf{x}_{i})+v(\mathbf{x}_{j})\right]$
(109)
along with the Coulomb potential $1/|\mathbf{r}_{i}-\mathbf{r}_{j}|$. For
reasons that will be explained shortly, we treat Eqs. 107 and 108 as a special
case of the two-parameter Hamiltonian
$H(\epsilon,\lambda)=\sum_{i,j>i}\Bigg{[}H_{1}(\mathbf{x}_{i},\mathbf{x}_{j})+\epsilon\left(\frac{v_{p}(\mathbf{x}_{i})+v_{p}(\mathbf{x}_{j})}{N-1}\right)+\lambda\left(\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\right)\Bigg{]}$ (110)
that reduces to the original operator when $\epsilon=0$ and $\lambda=1$. The
one-body perturbing potential $v_{p}(\mathbf{x})$ is chosen to have symmetry
group containing only the identity so that the spectrum of the auxiliary many-
body system
$H(\overline{\mathbf{X}}|\epsilon,\lambda)\Psi_{n}(\overline{\mathbf{X}}|\epsilon,\lambda)=\mathcal{E}_{n}(\epsilon,\lambda)\Psi_{n}(\overline{\mathbf{X}}|\epsilon,\lambda)$
(111)
is nondegenerate when $\epsilon\neq 0$.
We represent a given eigenstate
$\Psi_{n}(\overline{\mathbf{X}}|\epsilon,\lambda)$ of Eq. 111 by its GDM
$\boldsymbol{\mathcal{D}}(\epsilon,\lambda)$, dropping the subscript $n$. The
electronic energy is then calculated by
$\mathcal{E}_{n}(\epsilon,\lambda)=\sum_{i}\mathcal{D}_{ii}(\epsilon,\lambda)E_{i}(\epsilon,\lambda),$
(112)
where each $E_{i}(\epsilon,\lambda)$ is an eigenvalue of the effective two-
particle Hamiltonian $H(\mathbf{X}|\epsilon,\lambda)$ satisfying the
Schrödinger equation
$H(\mathbf{X}|\epsilon,\lambda)\psi_{i}(\mathbf{X}|\epsilon,\lambda)=E_{i}(\epsilon,\lambda)\psi_{i}(\mathbf{X}|\epsilon,\lambda)$
(113)
for the geminal eigenstate $\psi_{i}(\mathbf{X}|\epsilon,\lambda)$.
As mentioned, the analysis takes parameters $\epsilon$ and $\lambda$ to be
functions of time. The dependence is chosen in such a way that the Hamiltonian
slowly changes from non-interacting to fully interacting over some time
interval. Beginning at $t=0$ with $\epsilon(0)=\lambda(0)=0$, we slowly ramp
the symmetry-breaking parameter $\epsilon(t)$ over time $T_{1}$ so that
$\epsilon(T_{1})=1$ and $\lambda(T_{1})=0$. The nondegenerate eigenstates at
$T_{1}$ remain Slater determinants due to the absence of electron-electron
interactions.
The crucial next step is to smoothly switch on the Coulomb interaction by
increasing $\lambda(t)$ from $0$ at $T_{1}$ to $1$ at $T_{2}$ with
$\epsilon=1$. Assuming adiabatic evolution in which
$T_{2}-T_{1}\rightarrow\infty$, we have by Eq. 102 that the GDM follows the
trivial relation $\dot{\boldsymbol{\mathcal{D}}}(t)=0$ because the two-
electron spectrum is nondegenerate. In simple terms, all elements of the GDM
stay fixed while the geminal eigenstates evolve to satisfy Eq. 113 for each
$\lambda(t)$ with $\epsilon=1$. Fig. 1 is an illustration of how these
parameter-dependent spectra may look. The curves do not represent the
diagonalization of a real Hamiltonian, but they provide insight into how the
technique works.
Figure 1: Example illustration of the eigenvalues of $H(\mathbf{X}|1,\lambda)$
versus $\lambda$. $\lambda=0$ and $\lambda=1$ eigenstate indices are marked by
numbers on the vertical axes. The curves do not represent the diagonalization
of a real Hamiltonian.
For the simplest non-trivial example, consider a 3-electron state constructed
from the geminal eigenstates of Fig. 1. Suppose that the GDM with geminals 1–3
occupied represents a valid Slater determinant at $\lambda(T_{1})=0$. As time
progresses, the matrix elements remain unchanged as each eigenstate evolves to
its $\lambda(T_{2})=1$ counterpart with $E_{i}(1,\lambda)$ varying smoothly
with $\lambda$. The smoothness condition is imposed to define the behavior at
level crossings; the total energy calculated by Eq. 112 should be
differentiable at all times by Eq. 105. As a consequence of the level crossing
between eigenvalues 3 and 4, the $\lambda=1$ (interacting) GDM with the 3
lowest energy eigenstates occupied is not $N$-representable if the
configuration $\\{1,2,4\\}$ is not a valid $\lambda=0$ Slater determinant.
Generalizing to $N$ electrons, the principal task is to compute at least
$N(N-1)/2$ geminal eigenstates on a grid of $\lambda$ points to generate
curves like those in Fig. 1. This number of states is a lower bound due to
constraints on the initial Slater determinant as well as the existence of
level crossings. With sufficiently fine $\lambda$ sampling, it is possible to
resolve all such crossings to connect the $\lambda=0$ and $\lambda=1$
geminals. This two-electron diagonalization can be performed with standard
techniques like the configuration interaction, noting that the adiabatic
theorem only serves to select which states are $N$-representable by virtue of
their being the result of evolution under a time-dependent Hamiltonian.
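The bookkeeping described above, following each geminal curve through crossings on a sufficiently fine $\lambda$ grid, can be automated with an overlap-matching pass. A sketch (a greedy maximum-overlap assignment, adequate when the grid is fine enough that each state overlaps its successor most strongly):

```python
import numpy as np

def connect_curves(H_of_lam, lams):
    """Track eigenvalue curves E_k(lambda) by assigning each state at the
    next grid point to its maximum-overlap partner at the current one."""
    w, V = np.linalg.eigh(H_of_lam(lams[0]))
    curves = [w]
    for lam in lams[1:]:
        w_new, V_new = np.linalg.eigh(H_of_lam(lam))
        overlap = np.abs(V.conj().T @ V_new)   # |<state_i(old)|state_j(new)>|
        perm = overlap.argmax(axis=1)          # follow each old state forward
        curves.append(w_new[perm])
        V = V_new[:, perm]
    return np.array(curves)                    # column k is one connected curve
```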
After determining valid symmetry-broken states by the above procedure, we must
finally ramp $\epsilon$ to zero at time $T_{3}$ to discover solutions to the
target Hamiltonian with $\epsilon=0$ and $\lambda=1$. During this process, the
degenerate subspaces recombine as $\epsilon$ decreases. We can calculate the
exact total energy at $T_{3}$ by
$\mathcal{E}=\sum_{\mu}\operatorname{Tr}[\boldsymbol{\mathcal{D}}_{\mu}(0)]E_{\mu}(0,1),$
(114)
where $\operatorname{Tr}[\boldsymbol{\mathcal{D}}_{\mu}(0)]$ is the time-
invariant population of subspace $\mu$. Even though the Slater determinants
were chosen at $T_{1}$, we may simply back-propagate to determine the
population at the initial time. Repeating for various $t=T_{1}$ Slater
determinants, the ground state of the interacting system is that which yields
the minimum energy by Eq. 114.
For observables other than the energy, the best we can do is calculate the
$\epsilon$-dependent expectation value by
$\braket{A}(\epsilon)=\operatorname{Tr}[\boldsymbol{\mathcal{D}}(T_{1})\mathbf{A}(\epsilon)].$
(115)
The matrix $\mathbf{A}$ can be determined to arbitrary precision by computing
it from the geminal eigenstates with sufficiently small $\epsilon$.
Conceptually, this limiting procedure of switching on and off the perturbation
$\epsilon$ can be considered to account for the geometric phase that would be
picked up in a degenerate adiabatic evolution.
### VI.1 Calculating atomic energy eigenstates
This section details a simple but powerful example of the calculation of many-
body eigenstates using the GDM. We will find that the many-body Schrödinger
equation for an arbitrary atom or ion is analyzable strictly through the
solution of an appropriately-scaled Helium atom problem.
Begin with the Hamiltonian $H_{Z,N}$ for a central potential with nuclear
charge $Z$ and $N$ electrons. The fully-interacting system obeys the effective
two-particle Hamiltonian of Eq. 66 with $\lambda=1$:
$H_{Z,N}(\mathbf{r}_{1},\mathbf{r}_{2}|1)=\frac{1}{N-1}\Bigg{[}-\frac{\nabla_{1}^{2}}{2}-\frac{\nabla_{2}^{2}}{2}-Z\left(\frac{1}{|\mathbf{r}_{1}|}+\frac{1}{|\mathbf{r}_{2}|}\right)+\frac{N-1}{|\mathbf{r}_{1}-\mathbf{r}_{2}|}\Bigg{]}.$ (116)
It will be convenient to reduce the bracketed expression into a form that
resembles the Helium atom Hamiltonian
$H_{2,2}(\mathbf{r}_{1},\mathbf{r}_{2}|\lambda)=-\frac{\nabla_{1}^{2}}{2}-\frac{\nabla_{2}^{2}}{2}-\frac{2}{|\mathbf{r}_{1}|}-\frac{2}{|\mathbf{r}_{2}|}+\frac{\lambda}{|\mathbf{r}_{1}-\mathbf{r}_{2}|}$ (117)
for some choice of $\lambda$. We find this simplification by transforming the
coordinates into a yet-undetermined natural scale
$\overline{\mathbf{r}}=a\mathbf{r}.$ (118)
This transformation changes Eq. 116 to
$H_{Z,N}(\overline{\mathbf{r}}_{1},\overline{\mathbf{r}}_{2}|1)=\frac{a^{2}}{N-1}\Bigg{[}-\frac{\overline{\nabla}_{1}^{2}}{2}-\frac{\overline{\nabla}_{2}^{2}}{2}-\frac{Z}{a}\left(\frac{1}{|\overline{\mathbf{r}}_{1}|}+\frac{1}{|\overline{\mathbf{r}}_{2}|}\right)+\frac{N-1}{a}\frac{1}{|\overline{\mathbf{r}}_{1}-\overline{\mathbf{r}}_{2}|}\Bigg{]}.$ (119)
Eq. 119 reduces to a scaled copy of the Helium Hamiltonian in Eq. 117 if we
choose
$a=\frac{Z}{2},$ (120)
from which it follows that $\lambda=2(N-1)/Z$. In the end we relate the $Z,N$
and $2,2$ (Helium) Hamiltonians by
$H_{Z,N}(\overline{\mathbf{r}}_{1},\overline{\mathbf{r}}_{2}|1)=\left(\frac{Z}{2}\right)^{2}\frac{1}{N-1}H_{2,2}\Bigg{(}\overline{\mathbf{r}}_{1},\overline{\mathbf{r}}_{2}\Bigg{|}\frac{2(N-1)}{Z}\Bigg{)}.$
(121)
Eq. 121 allows us to diagonalize any atomic Hamiltonian by solving for
eigenstates of the Helium atom with the appropriate values of $\lambda$.
Accounting for coordinate scaling, the geminal basis functions are
$\psi_{Z,N}(\mathbf{r}_{1},\mathbf{r}_{2}|1)=\psi_{2,2}\left(\frac{Z\mathbf{r}_{1}}{2},\frac{Z\mathbf{r}_{2}}{2}\right.\left|\frac{2(N-1)}{Z}\right).$ (122)
Eqs. 121 and 122 can be used in conjunction with the two-parameter solution
method outlined in Section VI. Finally, we observe that for any neutral atom
with $Z=N$ it suffices to calculate $\sim N(N-1)/2$ eigenstates of
$H_{2,2}(\overline{\mathbf{r}}_{1},\overline{\mathbf{r}}_{2}|\lambda)$ for
$\lambda$ between $0$ and $2$.
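The scaling relations of Eqs. 120–122 amount to three numbers per system. A sketch (the electron counts for O$^{5+}$ and Be$^{+}$ follow from $Z$ minus the ionic charge):

```python
def helium_mapping(Z, N):
    """Map the (Z, N) geminal problem onto a scaled Helium problem (Eq. 121):
    length scale a, effective coupling lambda, and the energy prefactor."""
    a = Z / 2                            # Eq. 120
    lam = 2 * (N - 1) / Z                # coupling of the effective He problem
    prefactor = (Z / 2) ** 2 / (N - 1)   # multiplies the Helium eigenvalues
    return a, lam, prefactor

print(helium_mapping(Z=8, N=3))  # O^{5+}: lambda = 0.5
print(helium_mapping(Z=4, N=3))  # Be^{+}:  lambda = 1.0
```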
## VII Discussion
We introduced the geminal density matrix as the basic variable to describe a
many-body quantum state. The GDM was found to faithfully represent the
$N$-electron wave function because it reproduces the expectation value of an
arbitrary observable. The GDM is remarkably simple because it derives directly
from the exchange symmetries of the $N$-electron wave function and a given
operator $A$ without postulating the existence of statistical ensembles.
Furthermore, it is analyzed strictly by matrix operations without needing
higher order tensors.
The key behavior of the GDM is that it evolves by unitary matrix
transformation, which we derived by assuming that the two-body contributions
of the Hamiltonian and operator $A$ depended only on position. This likely
imposes no limitations on the applicability of the GDM because it is unclear
whether two-body interactions with derivatives exist. We can certainly solve
for the eigenstates of a system with Coulomb repulsion along with more exotic
interactions like the attractive potential mediated by lattice vibrations in
the theory of superconductivity. In the case that two-body derivative terms
exist, we must restrict the valid operators for which the expectation value is
computable to include only one-body operators and the Hamiltonian itself.
The simple time evolution law allowed us to recover the classical intuition of
electron-nuclei thermalization from an exact quantum-mechanical treatment of
the electron gas. We continued to derive a degenerate adiabatic theorem which
we exploited to calculate the stationary states of an arbitrary many-body
Hamiltonian. We found that the problem of directly minimizing the energy
functional for an $N$-electron wave function reduces to the calculation of
around $N(N-1)/2$ eigenstates of an effective two-electron Hamiltonian on a
grid of electron-electron interaction scaling strengths.
We finally displayed the power of this diagonalization method by applying it
to atomic Hamiltonians, which reduced to an analysis of an effective Helium
atom. So long as we know $\sim Z(Z-1)/2$ eigenstates of the Helium atom with
Coulomb interaction scaled by $\lambda$ from $0$ to $2$, we are able to
compute the exact eigenstates of any atom. While the solution to the two-body
problem is not trivial, it provides an incredible speedup for the solution of
the many-body problem.
###### Acknowledgements.
I thank Jens Biegert for the fruitful conversations that inspired this work
along with his guidance and support throughout its completion. I also thank
Eric Van Stryland and David Hagan for critical readings of the manuscript and
acknowledge funding from the US Fulbright Student Program and the Air Force
Office of Scientific Research grant FA9550-20-1-0322.
## Appendix A Matrix Constraints
This appendix derives constraints on the form of $\mathbf{D}$ that follow
directly from the definitions (Eq. 13)
$\rho(\mathbf{X},\mathbf{X}^{\prime})=\int\Psi^{*}(\mathbf{X},\mathbf{Y})\Psi(\mathbf{X}^{\prime},\mathbf{Y})d\mathbf{Y}$
(123)
and (Eq. 17)
$\rho(\mathbf{X},\mathbf{X}^{\prime})=\sum_{mn}D_{mn}\psi^{*}_{n}(\mathbf{X})\psi_{m}(\mathbf{X}^{\prime}).$
(124)
These restrictions will serve as necessary $N$-representability conditions for
the GDM to represent a valid $N$-electron wave function.
The first rule arises from Eq. 123 which has the property
$\rho(\mathbf{X}^{\prime},\mathbf{X})=\rho^{*}(\mathbf{X},\mathbf{X}^{\prime})$.
Applying this transformation directly to the expansion in Eq. 124 yields
$\displaystyle\sum_{mn}D_{mn}\psi_{n}^{*}(\mathbf{X}^{\prime})\psi_{m}(\mathbf{X})=\sum_{mn}D_{mn}^{*}\psi_{n}(\mathbf{X})\psi_{m}^{*}(\mathbf{X}^{\prime}).$
After swapping the sum index labels on the right hand side, we find the
symmetry to be satisfied when $D_{nm}^{*}=D_{mn}$. This relationship implies
the matrix identity
$\mathbf{D}^{\dagger}=\mathbf{D}.$ (125)
We uncover another $N$-representability requirement by fixing
$\mathbf{X}^{\prime}=\mathbf{X}$ in Eq. 123 and integrating both sides over
the remaining free coordinates $\mathbf{X}$. The integral on the right-hand
side reduces to unity by the normalization condition of the wave function.
Choosing a convenient representation for the left hand side gives
$\int\delta(\mathbf{X}-\mathbf{X}^{\prime})\rho(\mathbf{X},\mathbf{X}^{\prime})d\mathbf{X}d\mathbf{X}^{\prime}=\begin{pmatrix}N\\\
2\end{pmatrix}.$ (126)
Once again expanding $\rho(\mathbf{X},\mathbf{X}^{\prime})$ by Eq. 124, we
find using the orthonormality of the geminal basis that
$\displaystyle\operatorname{Tr}[\mathbf{D}]=\begin{pmatrix}N\\\
2\end{pmatrix}.$ (127)
The requirement for antisymmetry under the exchange
$\mathbf{x}_{1}\leftrightarrow\mathbf{x}_{2}$ (or
$\mathbf{x}_{1}^{\prime}\leftrightarrow\mathbf{x}_{2}^{\prime}$) follows from
that of the many-body wave function, so that
$\rho(\mathbf{x}_{2},\mathbf{x}_{1},\mathbf{X}^{\prime})=-\rho(\mathbf{X},\mathbf{X}^{\prime})$.
Swapping these coordinates in the two-body expansion of Eq. 124 gives
$\rho(\mathbf{x}_{2},\mathbf{x}_{1},\mathbf{X}^{\prime})=\sum_{mn}D_{mn}\psi_{n}^{*}(\mathbf{x}_{2},\mathbf{x}_{1})\psi_{m}(\mathbf{X}^{\prime})=-\rho(\mathbf{X},\mathbf{X}^{\prime}),$ (128)
indicating that the property is inherited from the anti-symmetry of the basis
functions and does not further restrict $\mathbf{D}$.
Unfortunately, we have now found all the $N$-representability conditions that
follow directly from Eqs. 123 and 124. To further understand the
$N$-representability problem we must choose a basis and derive expressions for
the matrix elements $D_{mn}$. We compute these matrix elements by pre-
multiplying Eq. 124 by $\psi_{n}(\mathbf{X})\psi_{m}^{*}(\mathbf{X}^{\prime})$
and integrating over $d\mathbf{X}$ and $d\mathbf{X}^{\prime}$ to find
$D_{mn}=\int\psi_{n}(\mathbf{X})\psi_{m}^{*}(\mathbf{X}^{\prime})\rho(\mathbf{X},\mathbf{X}^{\prime})d\mathbf{X}d\mathbf{X}^{\prime}.$
(129)
Continuing to substitute Eq. 123 into Eq. 129 yields the simple equation
$D_{mn}=\int\Theta_{n}^{*}(\mathbf{Y})\Theta_{m}(\mathbf{Y})d\mathbf{Y},$
(130)
with overlap functions $\Theta_{m}(\mathbf{Y})$ defined to be
$\Theta_{m}(\mathbf{Y})=\int\psi_{m}^{*}(\mathbf{X})\Psi(\mathbf{X},\mathbf{Y})d\mathbf{X}.$
(131)
We choose the geminal basis functions $\psi_{i}(\mathbf{X})$ to be those
formed by the anti-symmetrized product of two single particle spinors
$\phi_{i}(\mathbf{x})$. Grouping the two index labels into the symbol
$\mathbf{n}=\\{n_{1},n_{2}\\}$, the basis functions take the form
$\psi_{\mathbf{n}}(\mathbf{X})=\frac{1}{\sqrt{2}}\left[\phi_{n_{1}}(\mathbf{x}_{1})\phi_{n_{2}}(\mathbf{x}_{2})-\phi_{n_{2}}(\mathbf{x}_{1})\phi_{n_{1}}(\mathbf{x}_{2})\right].$
(132)
Because our basis functions are labeled by two integers, our $2$-RDM expansion
will temporarily take the form of a rank four tensor with components
$D_{\mathbf{mn}}$. It will eventually be necessary to map each $\mathbf{n}$ to
a single integer index to flatten this tensor into a matrix (see the Table 1
and the surrounding discussion).
The most general wave function $\Psi(\mathbf{X},\mathbf{Y})$ that will appear
in Eq. 131 is a possibly infinite linear superposition of $N$-body Slater
determinants
$\Psi(\mathbf{X},\mathbf{Y})=\sum_{\\{\alpha\\}}C_{\\{\alpha\\}}\Psi_{\\{\alpha\\}}(\mathbf{X},\mathbf{Y}).$
(133)
As in the main body of this text, we defined a configuration $\\{\alpha\\}$ to
be an ordered list of single-particle spinors present in a given product of
states. The Slater determinants are formed by the antisymmetrization operator
$\Psi_{\\{\alpha\\}}(\mathbf{x}_{1},\dots,\mathbf{x}_{N})=\hat{S}_{-}\prod_{i=1}^{N}\phi_{\alpha_{i}}(\mathbf{x}_{i})$ (134)
equivalent to the determinant expression
$\Psi_{\\{\alpha\\}}(\mathbf{X},\mathbf{Y})=\frac{1}{\sqrt{N!}}\left|\begin{matrix}\phi_{\alpha_{1}}(\mathbf{x}_{1})&\phi_{\alpha_{2}}(\mathbf{x}_{1})&\dots&\phi_{\alpha_{N}}(\mathbf{x}_{1})\\\
\phi_{\alpha_{1}}(\mathbf{x}_{2})&\phi_{\alpha_{2}}(\mathbf{x}_{2})&\dots&\phi_{\alpha_{N}}(\mathbf{x}_{2})\\\
\vdots&\vdots&\ddots&\vdots\\\
\phi_{\alpha_{1}}(\mathbf{x}_{N})&\phi_{\alpha_{2}}(\mathbf{x}_{N})&\dots&\phi_{\alpha_{N}}(\mathbf{x}_{N})\end{matrix}\right|.$
(135)
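As a small numerical illustration of Eq. 135 (our own sketch; the toy plane-wave spinors are placeholders, not the physical basis), the determinant can be evaluated directly with a generic determinant routine:

```python
import numpy as np
from math import factorial, sqrt

def slater_determinant(orbitals, coords):
    """Psi(x_1, ..., x_N) = det[ phi_{alpha_j}(x_i) ] / sqrt(N!)."""
    A = np.array([[phi(x) for phi in orbitals] for x in coords])
    return np.linalg.det(A) / sqrt(factorial(len(orbitals)))

# Toy example: three 1D "plane-wave" spinors (illustration only).
orbs = [lambda x, k=k: np.exp(1j * k * x) for k in (1.0, 2.0, 3.0)]
x = np.array([0.1, 0.5, 0.9])
psi = slater_determinant(orbs, x)
# Antisymmetry: swapping two particle coordinates flips the sign.
assert np.isclose(slater_determinant(orbs, x[[1, 0, 2]]), -psi)
```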
We now expand each $\Psi_{\\{\alpha\\}}(\mathbf{X},\mathbf{Y})$ in Eq. 133
along minors of the top two rows of Eq. 135 to isolate the $\mathbf{x}_{1}$
and $\mathbf{x}_{2}$ dependence. The result is
$\displaystyle\Psi(\mathbf{X},\mathbf{Y})=\frac{1}{\sqrt{N(N-1)}}\sum_{\\{\alpha\\}}C_{\\{\alpha\\}}\sum_{i,j>i}(-1)^{i+j-1}\left[\phi_{\alpha_{i}}(\mathbf{x}_{1})\phi_{\alpha_{j}}(\mathbf{x}_{2})-\phi_{\alpha_{j}}(\mathbf{x}_{1})\phi_{\alpha_{i}}(\mathbf{x}_{2})\right]\Psi_{\\{\alpha\\}_{ij}}(\mathbf{Y}),$
(136)
where the reduced configuration
$\\{\alpha\\}_{ij}=\\{\alpha\\}\setminus\\{\alpha_{i},\alpha_{j}\\}$ is the
set subtraction of $\alpha_{i}$ and $\alpha_{j}$ from the original list of
states. The state $\Psi_{\\{\alpha\\}_{ij}}(\mathbf{Y})$ is the determinant of
the matrix formed by removing rows $1$ and $2$ and columns $i$ and $j$ from
Eq. 135. As the normalization of this $N-2$ electron state requires the
prefactor $1/\sqrt{(N-2)!}$, we multiplied by its inverse which partially
canceled with the $1/\sqrt{N!}$ prefactor.
Continuing to normalize the $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ dependence
into a two-electron Slater determinant
$\psi_{\alpha_{i}\alpha_{j}}(\mathbf{X})$, we finally have
$\displaystyle\Psi(\mathbf{X},\mathbf{Y})=\sqrt{\frac{2}{N(N-1)}}$
$\displaystyle\sum_{\\{\alpha\\}}C_{\\{\alpha\\}}\sum_{i,j>i}(-1)^{i+j-1}$
$\displaystyle\times\psi_{\alpha_{i}\alpha_{j}}(\mathbf{X})\Psi_{{\\{\alpha\\}}_{ij}}(\mathbf{Y}).$
(137)
Plugging Eq. 137 into Eq. 131 for the overlap $\Theta_{\mathbf{m}}(\mathbf{Y})$,
the integration over $d\mathbf{X}$ reduces the two-electron wave functions to
$\delta_{\alpha_{i},m_{1}}\delta_{\alpha_{j},m_{2}}$ by orthonormality. Thus,
a given configuration that does not contain $\mathbf{m}=\\{m_{1},m_{2}\\}$
will not contribute to $D_{\mathbf{mn}}$. Consequently, we may reduce the
$\\{\alpha\\}$ (configuration) sum into one over
$\\{\alpha\\}\ni\\{m_{1},m_{2}\\}$. The remaining sum then reduces to the
single term with $(\alpha_{i},\alpha_{j})=(m_{1},m_{2})$, so that
$\displaystyle\Theta_{\mathbf{m}}(\mathbf{Y})=\sqrt{\frac{2}{N(N-1)}}\sum_{\\{\alpha\\}\ni\mathbf{m}}$
$\displaystyle
C_{\\{\alpha\\}}\mathcal{S}_{\alpha}[\mathbf{m}]\Psi_{{\\{\alpha\\}}_{\mathbf{m}}}(\mathbf{Y}),$
(138)
where
$\\{\alpha\\}_{\mathbf{m}}=\\{\alpha\\}_{m_{1}m_{2}}=\\{\alpha\\}\setminus\\{m_{1},m_{2}\\}$.
The symbol $\mathcal{S}_{\alpha}[\mathbf{m}]$ is the sign function
$\mathcal{S}_{\alpha}[\mathbf{m}]=(-1)^{I_{\alpha}[m_{1}]+I_{\alpha}[m_{2}]-1}$ (139)
with $I_{\alpha}[p]$ the index of basis function $p$ in configuration
$\\{\alpha\\}$. We absorb this sign into the expansion coefficient by defining
$\mathcal{C}_{\\{\alpha\\}}=C_{\\{\alpha\\}}\mathcal{S}_{\alpha}[\mathbf{m}]$.
Finally, using that $\Theta_{n}^{*}(\mathbf{Y})$ is the complex conjugate of
Eq. 138, we compute $D_{\mathbf{mn}}$ by Eq. 130:
$\displaystyle D_{\mathbf{mn}}$
$\displaystyle=\sum_{\begin{subarray}{c}\\{\alpha\\}\ni\mathbf{m}\\\
\\{\beta\\}\ni\mathbf{n}\end{subarray}}\mathcal{C}^{*}_{\\{\beta\\}}\mathcal{C}_{\\{\alpha\\}}\int\Psi^{*}_{{\\{\beta\\}}_{\mathbf{n}}}(\mathbf{Y})\Psi_{{\\{\alpha\\}}_{\mathbf{m}}}(\mathbf{Y})d\mathbf{Y}.$
(140)
The integral, being the inner product between orthonormal $N-2$ electron
Slater determinants, equals one when
$\\{\alpha\\}_{\mathbf{m}}=\\{\beta\\}_{\mathbf{n}}$ and zero otherwise.
Therefore,
$\displaystyle D_{\mathbf{m}\mathbf{n}}$
$\displaystyle=\sum_{\begin{subarray}{c}\\{\alpha\\}\ni\mathbf{m}\\\
\\{\beta\\}\ni\mathbf{n}\end{subarray}}\mathcal{C}^{*}_{\\{\beta\\}}\mathcal{C}_{\\{\alpha\\}}\delta_{\\{\alpha\\}_{\mathbf{m}},\\{\beta\\}_{\mathbf{n}}}.$
(141)
The diagonal matrix elements are then found from Eq. 141 to take the simple
form
$\displaystyle
D_{\mathbf{n}\mathbf{n}}=\sum_{\\{\alpha\\}\ni\mathbf{n}}\left|C_{\\{\alpha\\}}\right|^{2}.$
(142)
Because Eq. 142 is a sum over the magnitude squared of all expansion
coefficients of configurations containing $\mathbf{n}$, the overall
normalization condition $\sum_{\\{\alpha\\}}|C_{\\{\alpha\\}}|^{2}=1$ implies
that
$0\leq D_{\mathbf{nn}}\leq 1.$ (143)
The maximum diagonal value, $D_{\mathbf{nn}}=1$, occurs when every
configuration $\\{\alpha\\}$ contains $\mathbf{n}$. In this case we encounter
the additional rule that
$D_{\mathbf{nn}}=1\implies\forall\mathbf{m}\neq\mathbf{n},D_{\mathbf{mn}}=D_{\mathbf{nm}}=0,$
(144)
meaning that a $1$ on the diagonal in position $\mathbf{n}$ forces all other
elements in row and column $\mathbf{n}$ to zero.
The proof of Eq. 144 proceeds as follows. Per Eq. 141, any non-zero term must
simultaneously satisfy the conditions $\\{\alpha\\}\ni\\{m_{1},m_{2}\\}$,
$\\{\beta\\}\ni\\{n_{1},n_{2}\\}$ and
$\\{\alpha\\}\setminus\\{m_{1},m_{2}\\}=\\{\beta\\}\setminus\\{n_{1},n_{2}\\}$.
By this equality, $\\{\alpha\\}\setminus\\{m_{1},m_{2}\\}$ does not contain
$\\{n_{1},n_{2}\\}$. Since $\\{\alpha\\}$ is formed by the set addition of
some $\\{m_{1},m_{2}\\}\neq\\{n_{1},n_{2}\\}$, we have that
$\\{\alpha\\}\not\ni\\{n_{1},n_{2}\\}$. Supposing now that
$D_{\mathbf{mn}}\neq 0$ for some $\mathbf{m}\neq\mathbf{n}$ therefore implies
the existence of some $\\{\alpha\\}$ in the state expansion that does not
contain $\\{n_{1},n_{2}\\}$. This contradicts the requirement that must be met
for $D_{\mathbf{nn}}=1$, so we conclude that the existence of a $1$ on the
diagonal implies all other elements in that row and column are $0$.
By the Hermiticity (Eq. 125) of $\mathbf{D}$, it can always be transformed
into diagonal form by a unitary basis transformation (see Appendix B). The
resulting diagonal matrix obeying Eq. 143 has the property
$0\leq\operatorname{Tr}[\mathbf{D}^{2}]\leq\binom{N}{2},$ (145)
which follows trivially from the fact that $a^{2}\leq a$ for any number
$0\leq a\leq 1$. $\operatorname{Tr}[\mathbf{D}^{2}]$ is a basis-independent
quantity as it is invariant under unitary transformation by the cyclic
property of the trace.
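These constraints are easy to verify numerically. The sketch below, our own construction rather than code from this work, builds $\mathbf{D}$ from random normalized CI coefficients via Eq. 141 and checks Eqs. 146a-146c, reporting $\operatorname{Tr}[\mathbf{D}^{2}]$ against the bound of Eq. 146d:

```python
import numpy as np
from itertools import combinations
from math import comb

M, N = 6, 4                                   # spin orbitals, electrons
configs = list(combinations(range(M), N))     # all N-electron configurations
pairs = list(combinations(range(M), 2))       # geminal (pair) labels
index = {c: i for i, c in enumerate(configs)}

rng = np.random.default_rng(0)
C = rng.normal(size=len(configs)) + 1j * rng.normal(size=len(configs))
C /= np.linalg.norm(C)                        # sum_alpha |C_alpha|^2 = 1

def sign(alpha, pair):
    # S_alpha[m] = (-1)^(I_alpha[m1] + I_alpha[m2] - 1), 1-based positions.
    return (-1) ** (alpha.index(pair[0]) + alpha.index(pair[1]) + 1)

D = np.zeros((len(pairs), len(pairs)), dtype=complex)
for a, m in enumerate(pairs):
    for b, n in enumerate(pairs):
        for ia, alpha in enumerate(configs):
            if not set(m) <= set(alpha):
                continue                      # Eq. 141 requires alpha to contain m
            rest = set(alpha) - set(m)        # the reduced configuration {alpha}_m
            if rest & set(n):
                continue                      # no beta exists with {beta}_n = rest
            beta = tuple(sorted(rest | set(n)))
            D[a, b] += sign(alpha, m) * C[ia] * sign(beta, n) * np.conj(C[index[beta]])

assert np.allclose(D, D.conj().T)                        # Eq. 146a
assert np.all(D.diagonal().real <= 1 + 1e-12)            # Eq. 146b
assert np.isclose(np.trace(D).real, comb(N, 2))          # Eq. 146c
print("Tr[D^2] =", np.trace(D @ D).real, "bound:", comb(N, 2))  # Eq. 146d
```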
We finally summarize the necessary $N$-representability constraints on the
matrix $\mathbf{D}$:
$\displaystyle\mathbf{D}$ $\displaystyle=\mathbf{D}^{\dagger}$ (146a)
$\displaystyle 0\leq D_{\mathbf{nn}}$ $\displaystyle\leq 1$ (146b)
$\displaystyle\operatorname{Tr}[\mathbf{D}]$ $\displaystyle=\binom{N}{2}$ (146c)
$\displaystyle 0\leq\operatorname{Tr}[\mathbf{D}^{2}]$ $\displaystyle\leq\binom{N}{2}.$ (146d)
## Appendix B Change of basis
The formula for a change of basis is identical to the transformation for any
density matrix, but we re-derive it here for completeness. We begin with the
two-body density matrix
$\displaystyle\rho(\mathbf{X},\mathbf{X}^{\prime})=\sum_{mn}D_{mn}\psi^{*}_{n}(\mathbf{X})\psi_{m}(\mathbf{X}^{\prime})$
(147)
and introduce a new orthonormal basis with wave functions
$\phi_{i}(\mathbf{X})$. Expanding the initial states $\psi_{n}(\mathbf{X})$ in
terms of the new, we find
$\displaystyle\rho(\mathbf{X},\mathbf{X}^{\prime})$
$\displaystyle=\sum_{mn}D_{mn}\left(\sum_{j}U^{*}_{jn}\phi^{*}_{j}(\mathbf{X})\right)\left(\sum_{i}U_{im}\phi_{i}(\mathbf{X}^{\prime})\right)$
$\displaystyle=\sum_{ij}\left(\sum_{mn}U_{im}D_{mn}U^{*}_{jn}\right)\phi^{*}_{j}(\mathbf{X})\phi_{i}(\mathbf{X}^{\prime}).$
(148)
The parenthesized term gives the expression for $D^{\prime}_{ij}$, the
elements of the transformed matrix $\mathbf{D}^{\prime}$. We finally find the matrix
form for the change of basis
$\mathbf{D}^{\prime}=\mathbf{U}\mathbf{D}\mathbf{U}^{\dagger},$ (149)
where $\mathbf{U}$ is the unitary matrix of coefficients $U_{ij}$.
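As a quick numerical sanity check (our own sketch), Eq. 149 can be verified to preserve both Hermiticity and the trace condition of Eq. 127:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
k = 8                                      # dimension of the geminal basis
A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
D = A @ A.conj().T                         # an arbitrary Hermitian matrix
U = unitary_group.rvs(k, random_state=0)   # a random unitary transformation

D_prime = U @ D @ U.conj().T               # Eq. 149
assert np.allclose(np.trace(D_prime), np.trace(D))   # trace is invariant
assert np.allclose(D_prime, D_prime.conj().T)        # Hermiticity preserved
```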
## Appendix C Alternate derivation of the GDM evolution equation
We found the matrix Liouville-von Neumann equation (Eq. 60) by deriving its
most general operator equivalent then specializing to the case of a time-
independent basis. We can perform an alternate derivation by starting with the
time-independent expansion of the $2$-RDM
$\rho(\mathbf{X},\mathbf{X}^{\prime}|t)=\sum_{mn}D_{mn}(t)\psi_{n}^{*}(\mathbf{X})\psi_{m}(\mathbf{X}^{\prime}).$
(150)
We further assume that our dummy operator $A(\mathbf{X})$ is time independent.
Noting that $(d/dt)\braket{A}(t)=-iK(t)$ for a time-independent
$A(\mathbf{X})$, we restart from Eq. IV,
$\frac{d}{dt}\braket{A}(t)=-i\int\Psi^{*}(\overline{\mathbf{X}}|t)[H(\overline{\mathbf{X}}|t),A(\overline{\mathbf{X}})]\Psi(\overline{\mathbf{X}}|t)d\overline{\mathbf{X}}.$
(151)
Expressing Eq. 151 in terms of the $2$-RDM as
$\displaystyle\frac{d}{dt}\braket{A}(t)$ $\displaystyle=-i\int
d\mathbf{X}d\mathbf{X}^{\prime}\delta(\mathbf{X}-\mathbf{X}^{\prime})$
$\displaystyle\times\left[H^{\prime}(\mathbf{X}|t),A(\mathbf{X})\right]\rho(\mathbf{X},\mathbf{X}^{\prime}|t),$
(152)
we can apply the expansion in Eq. 150 to Eq. 152. In terms of the abstract
effective two-electron operators, the result is
$\frac{d}{dt}\braket{A}(t)=-i\sum_{mn}D_{mn}(t)[\braket{m}{\hat{H}^{\prime}(t)\hat{A}}{n}-\braket{m}{\hat{A}\hat{H}^{\prime}(t)}{n}],$
(153)
into which we can insert the identity $1=\sum_{i}\ket{i}\bra{i}$ to form the
matrix equation
$\frac{d}{dt}\braket{A}(t)=-i\operatorname{Tr}\left[\mathbf{D}(t)[\mathbf{H}^{\prime}(t),\mathbf{A}]\right].$
(154)
For comparison, directly differentiating the trace relation
$\braket{A}(t)=\operatorname{Tr}[\mathbf{D}(t)\mathbf{A}]$ gives
$\frac{d}{dt}\braket{A}(t)=\operatorname{Tr}[\dot{\mathbf{D}}(t)\mathbf{A}],$
(155)
whose result must agree with Eq. 154.
Using the cyclic property and linearity of the trace, we can rearrange Eq. 154
as
$\operatorname{Tr}[\mathbf{D}\mathbf{H}^{\prime}\mathbf{A}-\mathbf{D}\mathbf{A}\mathbf{H}^{\prime}]=\operatorname{Tr}[\mathbf{D}\mathbf{H}^{\prime}\mathbf{A}-\mathbf{H}^{\prime}\mathbf{D}\mathbf{A}]$
so that Eq. 154 takes the form
$\frac{d}{dt}\braket{A}(t)=\operatorname{Tr}\left[-i[\mathbf{D}(t),\mathbf{H}^{\prime}(t)]\mathbf{A}\right].$
(156)
Comparing Eq. 156 to Eq. 155 yields the condition
$\dot{\mathbf{D}}(t)=-i[\mathbf{D}(t),\mathbf{H}^{\prime}(t)]$ (157)
to ensure the GDM gives the same expectation values as the wave function.
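For a time-independent $\mathbf{H}^{\prime}$, Eq. 157 integrates to $\mathbf{D}(t)=e^{i\mathbf{H}^{\prime}t}\mathbf{D}(0)e^{-i\mathbf{H}^{\prime}t}$ (note the $[\mathbf{D},\mathbf{H}^{\prime}]$ ordering of the commutator). The sketch below, our own illustration rather than the paper's code, propagates a random $\mathbf{D}$ this way and confirms that $\operatorname{Tr}[\mathbf{D}]$ and $\operatorname{Tr}[\mathbf{D}^{2}]$ are conserved, as unitary evolution requires:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
k = 6
H = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
H = (H + H.conj().T) / 2                  # Hermitian effective Hamiltonian H'
B = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
D0 = B @ B.conj().T                       # Hermitian initial matrix D(0)

t = 0.37
U = expm(1j * H * t)
Dt = U @ D0 @ U.conj().T                  # closed-form solution of Eq. 157

assert np.allclose(np.trace(Dt), np.trace(D0))            # Eq. 146c preserved
assert np.allclose(np.trace(Dt @ Dt), np.trace(D0 @ D0))  # Eq. 146d preserved
# Dt satisfies dD/dt = -i[D, H'] by construction.
```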
# Physical Artificial Intelligence: The Concept Expansion of Next-Generation
Artificial Intelligence
Yingbo Li Hainan University
<EMAIL_ADDRESS>Yucong Duan *Corresponding author<EMAIL_ADDRESS>Hainan University
<EMAIL_ADDRESS>Anamaria-Beatrice Spulber Visionogy
<EMAIL_ADDRESS>Haoyang Che Zeekr Group,
<EMAIL_ADDRESS>Zakaria Maamar Zayed University
<EMAIL_ADDRESS>Zhao Li Alibaba Group
<EMAIL_ADDRESS>Chen Yang Ghent University
<EMAIL_ADDRESS>Yu Lei Inner Mongolia University
<EMAIL_ADDRESS>
###### Abstract
Artificial Intelligence has been a growth catalyst for our society and is
considered across all industries as a fundamental technology. However, its
development has been limited to the signal processing domain that relies on
the generated and collected data from other sensors. In recent research,
concepts of Digital Artificial Intelligence and Physical Artificial
Intelligence have emerged, and this can be considered a big step in the
theoretical development of Artificial Intelligence. In this paper we explore
the concept of Physical Artificial Intelligence and propose two subdomains:
Integrated Physical Artificial Intelligence and Distributed Physical
Artificial Intelligence. The paper will also examine the trend and governance
of Physical Artificial Intelligence.
###### Index Terms:
Physical Artificial Intelligence, PAI, Artificial Intelligence, DIKW, Deep
learning
## 1 Introduction
Artificial Intelligence (AI) has been one of the most popular topics in the
Information and Communication Technologies (ICT) field. AI powered the
development of many advanced systems such as robotics. AI used to be confined
to digital signal processing such as text processing, image object
recognition, and speech recognition. However, when considering computer
science holistically, signal processing is only a small part of the field.
AI applications have been extended to include robots, the Internet of Things
(IoT), smart cities, etc. In [1], Miriyev and Kovac classified AI into Digital
AI, which processes signals, and Physical AI, which includes physical robots. In this
paper we explore the concept of Physical AI and extend it to Integrated
Physical AI, such as robots, and Distributed Physical AI, such as IoT. In [1],
the authors considered Integrated Physical AI as Physical AI whose components are
located together in a restricted space. We propose Distributed Physical AI as a
kind of Physical AI as well, whose components can be distributed over a wide
space. The analysis of the Physical AI concepts brings the opportunity to
discuss AI and Physical AI from a larger perspective. Additionally, it
enables us to explore further the manifestations of Physical AI.
In this paper, we will begin by reviewing the state of the art of Artificial
Intelligence, and conclude with a discussion about the concept of Physical
Artificial Intelligence and how we can leverage its benefits across
different domains. Throughout the paper we will review the trends in Physical
Artificial Intelligence and the potential governance problem implications.
Additionally, we propose to use Knowledge Graph and Data-Information-Knowledge-
Wisdom (DIKW) to further develop the research on Physical Artificial
Intelligence. The intention of this paper is to advance the theoretical
development of Physical Artificial Intelligence.
## 2 Overview of Artificial Intelligence
AI is well known for outperforming human capabilities in popular
benchmarks such as ImageNet [56]. Its various industrial applications, both in
its own domains such as Natural Language Processing (NLP), speech recognition,
face detection, and image classification, and in other disciplines such as
agriculture, biology, and chemistry, have been widely recognized.
AI originates from the principle of building a Turing machine from neurons, a
concept proposed by McCulloch and Pitts in 1943 [2]. Since 1988, NN milestones
such as the backpropagation algorithm have continued to develop [3]. LeCun
invented the Convolutional Neural Network (CNN) with backpropagation in 1998, and
in 2006 the fast training of Neural Networks (NN) was addressed. Building on
the above, both NN and AI at large began their fast-paced development in 2012
[4]. To succeed, AI needs the support of advanced and affordable computing
hardware such as GPU cards, and machine learning algorithms, especially NN.
The relationships between NN, AI, and other related concepts are illustrated
in Figure 1. NN is essential in powering AI; however, the development of
the AI domain involves various disciplines such as knowledge modelling, a
highly researched discipline that ranges from Knowledge Graphs to DIKW. With the
increasing and fundamental importance of NN in mind, we will start by
reviewing the history and successful algorithms of NN.
The success of deep learning originates from deep NN, especially CNN applied
to image classification [5]. Primarily, most NN algorithms were of supervised
type, such as CNN and Recurrent NN (RNN). CNN and its variants are used for
classification and recognition purposes such as image classification
and face recognition. RNN differs from CNN in that it considers temporal
information in the NN, and as such RNN, including its variant Long Short-Term
Memory (LSTM), has become popular in speech recognition and language
translation. Semi-supervised learning approaches such as Generative Adversarial
Networks (GAN) are often used in image generation, image enhancement, and video
games [6].
Figure 1: Disciplines and techniques associated with AI
Deep learning algorithms can be classified into supervised, semi-supervised,
unsupervised, and reinforcement learning based on the supervision level during the
model training period [7]. At first, most supervised deep learning
algorithms were extensively used in face recognition, text sentiment
classification, speech recognition, and other similar cases. When the training
data is not entirely labelled, variants of supervised deep learning algorithms
such as semi-supervised learning algorithms can be used. Unsupervised
learning, on the other hand, does not rely on training data labelling but
learns from the internal relations induced by the initially defined features,
as in Auto-Encoders (AE), GAN, and Restricted Boltzmann Machines (RBM). In
reinforcement learning, the algorithms can only obtain incremental data
instead of all pre-existing data in each processing step.
Apart from computer science applications, AI has been used in academia and
various industries. For example, it has been used to facilitate the prediction
of the process of catalysis utilization [8]. Other uses involve the financial
market, where AI has been used in dynamic pricing and fraud detection [9].
In the energy domain, AI is used to reduce electricity consumption [10] and
for solar modelling [11]. In agriculture, AI has been used in the detection
of fruit ripening [12].
Although AI has proved to be useful in various domains of research and
industry, it has also encountered a few limitations. Most current AI
applications are limited to individual use cases. One example is that
CNN is useful in image classification and text classification, while RNN is
useful in machine translation and speech recognition. AI still encounters
challenges in managing trivial details and intricate business rules, and some
of these problems have been the focus of researchers [13]. Almost all AI
algorithms need to understand binary codes or numbers; they lack the high-level
logical inference and problem-solving capabilities that humans have, mainly
because not every real problem can be converted into a pure mathematical
problem. For example, AI finds it hard to understand the difference between
the sentences "Macbook can be used as a chopping board" and
"Macbook is a computer" in the architecture concept of DIKW [14]. In addition,
AI has mostly worked until now like a black box, and while researchers know AI works
well, they are not clear about the reasons behind its success for any specific
problem. Therefore, Explainable Artificial Intelligence (XAI) [15] has become a
research domain that is focused on discovering the reasons behind the success
of specific NN algorithms.
## 3 The concept expansion of Physical Artificial Intelligence
Currently the concept of Artificial Intelligence, as described in the above
section, is related to processing the data and signals in the computer system.
Even the hardware that is related to AI only captures the input data and
delivers the output data from the AI system, as illustrated in Figure 2. One
example is the Smart Home [16] supported by Amazon's Alexa speech assistant
[17]. In [1], Miriyev and Kovac proposed the concept of Digital Artificial
Intelligence (DIAI) that refers to the currently popular data-based and data-
processing AI.
Figure 2: The hardware architecture related to AI
Contrary to Digital AI, Miriyev and Kovac [1] have proposed the concept of
Physical Artificial Intelligence (PAI), which refers to nature-like robots
that are driven by intelligence. Miriyev and Kovac used the bee robot to
explain the concept of PAI, a multi-discipline that combines autonomous
robots, materials, structures, and perception. PAI requires a balance
between software (the intelligent system) and hardware (materials, mechanics,
etc.). As illustrated in Figure 3, PAI has its roots in materials
science, mechanical engineering, computer science, chemistry and biology.
Figure 3: Multidisciplinary nature of PAI
In the concept of PAI proposed by Miriyev and Kovac [1][54], PAI refers to the
typical robot and robot system. In this paper we propose to extend the concept
of PAI to all potential applications that exploit the advantages of AI in
both hardware and software. Several examples illustrate the
extended concept of PAI:
* •
PAI in IoT. IoT is a typical mixed application of the cloud, sensors,
software and data analytics [18]. A robot concatenates the hardware and
software in one complete intelligent machine, while IoT can be distributed
either over a small space such as a room or over a wide area such as a city. Since
AI can be used to improve the stability of each node of the IoT, such as a
sensor, or the central data analytics and prediction, IoT is a fertile
application domain for PAI. The nodes of IoT used for sensing and controlling
need the support of science and technologies: materials, chemistry,
mechanics, computer science, and even biology.
* •
PAI in automobiles. The self-driving car can be considered a variant of the
intelligent robot system. The self-driving car has the same necessary features
as a normal robot: sensors, an embedded computing module, a mechanical
system, new materials and so on. The self-driving car is often connected to
the Internet for navigation, and the latter provides the IoT feature to the
self-driving car.
* •
PAI in agriculture. Agriculture is one of the most successful applications
of Physical Artificial Intelligence. Sensors including cameras,
thermometers and hygrometers are used to monitor the growth progress of
plants and predict the best harvest time. Defects are detected
to signal potential risks for an intervention.
* •
PAI in healthcare. Healthcare, especially preventive healthcare, is a
typical usage of Physical Artificial Intelligence. Biological sensors
and chemical sensors are used to monitor the elderly and patients to
predict potential risks such as falls or unstable conditions; the central
server is notified by the edge device when a risk arises. The computing
happens both at the edge side and on the central servers.
* •
PAI in logistics. PAI has been extensively used in multiple aspects of
logistics. The "last mile" is an expensive and hard problem of the logistics
industry that involves parcel and food delivery. Delivery robots and drones
[19] have been used in the delivery market to replace humans. Automatic
sorting robots have been used in the sorting centers of logistics providers
[20].
As the above survey shows, the extended concept of PAI is already used
extensively in multiple industries beyond the robot industry. The concept of
PAI is based on the interdisciplinary research across the five disciplines
shown in Figure 3 [1].
## 4 The Trend of Physical Artificial Intelligence
Until 2012, Digital Artificial Intelligence (DIAI) mimicked the human brain's
capability of logical thinking and induction to process the data and signals
perceived by human eyes and ears. As far as we know, the capabilities of
human beings are not limited to the logical thinking of the brain. The human
brain is only responsible for processing signals and transmitting commands to
other parts of the body, which are responsible for many functions such as
movement, vision perception, sound perception, digestion, etc. Therefore,
DIAI uncovers only a limited part of the powerful potential of AI, while PAI,
which is to DIAI what the whole human body is to the brain, would greatly
extend the application of AI from academia to industry.
PAI has the potential to use deep learning to mimic not only the individual
human but also the human society as a whole. Robots are a typical example of
Integrated PAI (IPAI) that mimics individual humans: it integrates the
perception of the physical world through multiple sensors that collect signals
and data, the induction from multiple indices, and the physical response in
the physical world, as shown in Fig. 4, which illustrates the most important
modules in IPAI. A robot's perception, computing, and mechanical modules are
confined to a limited space, while, similar to human society, Distributed
PAI (DPAI) distributes the perception, computing and response modules
across a wide space, such as a factory or a city, as shown in Fig. 5. An
industrial IoT system is a good illustration of DPAI [22].
Figure 4: Integrated PAI
Figure 5: Distributed PAI
PAI needs to fuse multiple streams of information, including materials,
temperature, vision, sound, etc., from multiple sensors as per Fig. 3.
Therefore, multimodal processing is mandatory to understand the information in
PAI. Through the fusion of multimodal information, PAI can more easily use more
kinds of information to make better decisions with better precision [34, 35].
The data and information sources bring multiple kinds of data, which
outperform a single source of data, for making real-time decisions and
predictions. This is a significant feature of PAI.
We use Fig. 6 to illustrate the components and relations of PAI: IPAI and DPAI.
IPAI will be researched and applied in both home and industry
environments. The home environment [23] will receive service robots such as
household robots, while in the industry environment IPAI will be extensively
used in multiple areas of Industry 4.0 [24], from the automotive sector to
security. DPAI will become more and more popular as edge computing [25]
matures and every device is connected to the network. IoT and edge computing
are typical DPAI subdomains. Since it is common for every intelligent system
to be online, IPAI and DPAI will increasingly overlap, as shown in Fig. 6.
Figure 6: IPAI and DPAI
## 5 The DIKW Supported Physical Artificial Intelligence
Artificial Intelligence needs a large volume of data as the "fuel" to train
models for classification and prediction tasks. Digital Artificial
Intelligence, such as image classification and automatic speech recognition,
is typically the approach of processing signals and data from sources such as
images, sound, text and temporal data. In order to organize the data used in
Digital Artificial Intelligence well, researchers and industry use the
Knowledge Graph [26] to store the ontology from different data. The Knowledge
Graph is a complete and correct approach to associating semantic data. The
Knowledge Graph considers all the data inside it as belonging to the same
hierarchical layer, but this does not work very well in the real world. For
example, the sentence "the spoiled food can not be eaten" represents a piece
of knowledge or a rule, not only the data indicating "food". So the DIKW [14]
architecture is proposed to construct the information architecture. The DIKW
architecture is illustrated in Figure 7. In the DIKW architecture, the $data$ and
$information$ can be used to infer the $knowledge$, while the $wisdom$, as
partial $knowledge$, needs the support of the $data$ and $information$. One
important feature of the DIKW architecture is the representation of the five Ws:
$Who$, $What$, $When$, $Where$, and $Why$. $Knowledge$ can well describe $What$
happens. $Wisdom$ in DIKW represents $How$. $Data$ is related to $Who$,
$When$, and $Where$. And $What$ and $How$ can be inferred from $Information$
and $Knowledge$ too.
Figure 7: DIKW architecture
Digital Artificial Intelligence originates from data and signal processing,
especially text, image and acoustic processing. In the DIKW architecture,
most algorithms of the above categories belong to the $Data$ layer. For example,
image object recognition [27] uses a large volume of object image data
to train a model and then recognizes object names in testing images. Meanwhile,
automatic speech recognition converts the speech in a sound signal to text
data. In research, knowledge extraction in Digital Artificial
Intelligence exists but is not as popular as data extraction. In the
article [28] the authors use multimodal data processing to extract
knowledge from images or video, like "One bird flies in the sky". To the
best of our knowledge, it is rare to find extensive deep learning models
that deal with more advanced knowledge processing. Therefore, Physical
Artificial Intelligence encounters challenging problems, as it needs to process
data, information and knowledge and is not limited to signals as
Digital Artificial Intelligence is. PAI needs to accept and process signals
and data from at least five categories: materials, mechanics, chemistry,
biology, and computer sensors. In order to deal with more categories of signals
and data, PAI has to use the knowledge graph to support processing and
storage, as illustrated in Figure 8.
Figure 8: Knowledge graph supported PAI
As shown in Figure 8, each node of the knowledge graph will contain 5 categories
of data from PAI. All data nodes of the same category are internally and
organically associated with one another. The knowledge graph can handle the
complexity of multi-index data. So in Figure 9 we propose to integrate the 5
categories of data with the 5 $W$s and the 4 layers of DIKW. Thus the semantic
information of PAI can be inferred and stored in the DIKW architecture while
keeping its original relations to the metadata and other basic data.
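As a minimal sketch of this organization (our own illustration; the schema and field names are hypothetical, not from a specific system), each knowledge-graph node can carry the five PAI data categories together with a DIKW-layer tag:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# The five PAI data categories (cf. Figures 3 and 8) and the four DIKW layers.
CATEGORIES = ("materials", "mechanics", "chemistry", "biology", "computer_sensors")
DIKW_LAYERS = ("data", "information", "knowledge", "wisdom")

@dataclass
class PAINode:
    """A knowledge-graph node tagged with a DIKW layer (hypothetical schema)."""
    node_id: str
    dikw_layer: str                                   # one of DIKW_LAYERS
    payload: Dict[str, object] = field(default_factory=dict)  # per-category data
    edges: List[str] = field(default_factory=list)    # ids of associated nodes

# Example: a sensor reading (Data layer: Who/When/Where) linked to an
# inferred rule (Knowledge layer: What).
reading = PAINode("sensor-42", "data",
                  payload={"computer_sensors": {"temp_C": 21.5, "when": "t0"}})
rule = PAINode("rule-7", "knowledge",
               payload={"computer_sensors": "stable temperature implies low risk"})
reading.edges.append(rule.node_id)
```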
## 6 Physical Artificial Intelligence Governance and Sustainable Development
Digital Artificial Intelligence has been facing the challenges of risk and
governance problems [55]. Among the challenges for DIAI, the most important
ones are discussed in this section:
* •
The security of DIAI [29]. The training and prediction of AI models need large
volumes of data, so the security of the data storage is important. Storage
security needs both hardware and software protection. Data masking
[30] is often used to separate the data from its original source at the
software and algorithm level.
* •
The fake data of DIAI. Deepfake [31] attracted much attention when it appeared
on the Internet. Deepfake can replace the human face in a video with a
desired face, and in many situations the converted video looks real. Fake
images and videos cast doubt on "seeing is believing", which can lead to
social and legal problems.
* •
The social privacy of DIAI. Face recognition in public spaces has been
banned and identified as illegal in many countries [32]. DIAI has made the
tracking of our behavior easier than ever. In addition, DIAI can easily
track online data, including social media, and infer the profile of a
person. Therefore, social privacy has been a big focus in the past years.
* •
The bias in DIAI. In our society bias exists even if it is hidden, for
example in data from the Internet. Most of the training data of DIAI comes
from web sources, which means that the trained models of DIAI naturally
contain biases. Such bias has been found in hiring-screening AI systems
[33].
Physical Artificial Intelligence (PAI) has more problems to resolve than DIAI
because of its complexity and ubiquity:
* •
The existence problem. PAI such as IoT needs a more extensive installation of
multiple kinds of sensors. In a limited space such as a factory, this does
not pose many regulatory problems. However, if the space is extended to a
larger area that is not under the same regulation, PAI will face more
regulatory and social problems.
* •
The information organization problem. As discussed in the previous section,
the organization of multiple kinds and multiple layers of data and information
causes problems of complexity. The proposed knowledge graph and DIKW
supported PAI could be a potential solution.
* •
Cannikin Law. The development of PAI depends on at least 5 disciplines:
materials science, mechanical engineering, chemistry, biology and computer
science. Therefore, slower development in one discipline will cause a
cannikin-law (weakest-link) problem and inhibit the development of PAI.
* •
The social acceptance. Similar to the dilemma of DIAI, the ubiquitous
application of PAI will cause societal worries regarding unemployment,
privacy, etc.
We illustrate the above problems of PAI in Figure 9.
Figure 9: PAI governance problems
As the future form of Artificial Intelligence, Physical Artificial
Intelligence will be the next popular research topic following Digital
Artificial Intelligence, because Artificial Intelligence will be increasingly
applied in other industries. Physical Artificial Intelligence will support the
development of mechanics and agriculture because of its hardware
characteristics. Physical Artificial Intelligence will advance AI applications
as a fundamental technology for the world.
## 7 Conclusion
In this paper we have started by reviewing the basic knowledge of artificial
intelligence, including its history, categories and popular algorithms. Then
we reviewed the concept of Physical Artificial Intelligence proposed by Aslan
Miriyev and Mirko Kovac, and discussed the reasons for extending the concept of
Physical Artificial Intelligence into Integrated Physical Artificial
Intelligence and Distributed Physical Artificial Intelligence. After that, we
proposed to use DIKW and the knowledge graph to extend the concept of Physical
Artificial Intelligence. Finally we discussed the governance of Physical
Artificial Intelligence and its sustainable development, compared to the
currently popular topics of Digital Artificial Intelligence governance. We wish
to use this paper to discuss the potential development of Physical Artificial
Intelligence as the next generation of Artificial Intelligence, and to inspire
more research on and applications of Physical Artificial Intelligence with the
discussed theoretical support.
## Acknowledgments
Supported by Natural Science Foundation of China Projects (No. 61662021 and
No. 72062015).
## References
* [1] Miriyev A, Kovač M. Skills for physical artificial intelligence[J]. Nature Machine Intelligence, 2020, 2(11): 658-660.
* [2] Zhang L, Zhang B. A geometrical representation of McCulloch-Pitts neural model and its applications[J]. IEEE Transactions on Neural Networks, 1999, 10(4): 925-929.
* [3] Hecht-Nielsen R. Theory of the backpropagation neural network. Neural networks for perception. Academic Press, 1992: 65-93.
* [4] Yadav N, Yadav A, Kumar M. History of neural networks[M]//An Introduction to Neural Network Methods for Differential Equations. Springer, Dordrecht, 2015: 13-15.
* [5] Alom M Z, Taha T M, Yakopcic C, et al. The history began from alexnet: A comprehensive survey on deep learning approaches[J]. arXiv preprint arXiv:1803.01164, 2018.
* [6] Creswell A, White T, Dumoulin V, et al. Generative adversarial networks: An overview[J]. IEEE Signal Processing Magazine, 2018, 35(1): 53-65.
* [7] Dey A. Machine learning algorithms: a review[J]. International Journal of Computer Science and Information Technologies, 2016, 7(3): 1174-1179.
* [8] Li H, Zhang Z, Liu Z. Application of artificial neural networks for catalysis: a review[J]. Catalysts, 2017, 7(10): 306.
* [9] Ryman-Tubb N F, Krause P, Garn W. How Artificial Intelligence and machine learning research impacts payment card fraud detection: A survey and industry benchmark[J]. Engineering Applications of Artificial Intelligence, 2018, 76: 130-157.
* [10] Cheng L, Yu T. A new generation of AI: A review and perspective on machine learning technologies applied to smart energy and electric power systems[J]. International Journal of Energy Research, 2019, 43(6): 1928-1973.
* [11] Belu R. Artificial intelligence techniques for solar energy and photovoltaic applications[M]//Robotics: Concepts, methodologies, tools, and applications. IGI Global, 2014: 1662-1720.
* [12] May Z, Amaran M H. Automated oil palm fruit grading system using artificial intelligence[J]. Int. J. Eng. Sci, 2011, 11(21): 30-35.
* [13] Xu Y, Shieh C H, van Esch P, et al. AI customer service: Task complexity, problem-solving ability, and usage intention[J]. Australasian Marketing Journal (AMJ), 2020, 28(4): 189-199.
* [14] Frické M. The knowledge pyramid: the DIKW hierarchy[J]. KO KNOWLEDGE ORGANIZATION, 2019, 46(1): 33-46.
* [15] Arrieta A B, Díaz-Rodríguez N, Del Ser J, et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI[J]. Information Fusion, 2020, 58: 82-115.
* [16] Marikyan D, Papagiannidis S, Alamanos E. A systematic review of the smart home literature: A user perspective[J]. Technological Forecasting and Social Change, 2019, 138: 139-154.
* [17] Karppi T, Granata Y. Non-artificial non-intelligence: Amazon’s Alexa and the frictions of AI[J]. AI & SOCIETY, 2019, 34(4): 867-876.
* [18] Srinivasan C R, Rajesh B, Saikalyan P, et al. A review on the different types of Internet of Things (IoT)[J]. Journal of Advanced Research in Dynamical and Control Systems, 2019, 11(1): 154-158.
* [19] Janebäck E, Kristiansson M. Friendly robot delivery: Designing an autonomous delivery droid for collaborative consumption[D]. 2019.
* [20] Dekhne A, Hastings G, Murnane J, et al. Automation in logistics: Big opportunity, bigger uncertainty[J]. McKinsey Q, 2019: 1-12.
* [21] Marechal C, Mikolajewski D, Tyburek K, et al. Survey on AI-Based Multimodal Methods for Emotion Detection[J]. 2019.
* [22] Cheng J, Chen W, Tao F, et al. Industrial IoT in 5G environment towards smart manufacturing[J]. Journal of Industrial Information Integration, 2018, 10: 10-19.
* [23] Wilson G, Pereyda C, Raghunath N, et al. Robot-enabled support of daily activities in smart home environments[J]. Cognitive Systems Research, 2019, 54: 258-272.
* [24] Dalenogare L S, Benitez G B, Ayala N F, et al. The expected contribution of Industry 4.0 technologies for industrial performance[J]. International Journal of Production Economics, 2018, 204: 383-394.
* [25] Yu W, Liang F, He X, et al. A survey on the edge computing for the Internet of Things[J]. IEEE access, 2017, 6: 6900-6919.
* [26] Wang Q, Mao Z, Wang B, et al. Knowledge graph embedding: A survey of approaches and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(12): 2724-2743.
* [27] Sukanya C M, Gokul R, Paul V. A survey on object recognition methods[J]. International Journal of Science, Engineering and Computer Technology, 2016, 6(1): 48.
* [28] He X, Deng L. Deep learning for image-to-text generation: A technical overview[J]. IEEE Signal Processing Magazine, 2017, 34(6): 109-116.
* [29] Gil L, Liska A. Security with AI and Machine Learning[M]. O’Reilly Media, Incorporated, 2019.
* [30] Asenjo J C. Data masking, encryption, and their effect on classification performance: trade-offs between data security and utility[J]. 2017.
* [31] Güera D, Delp E J. Deepfake video detection using recurrent neural networks[C]//2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2018: 1-6.
* [32] Deb D, Wiper S, Gong S, et al. Face recognition: Primates in the wild[C]//2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE, 2018: 1-10.
* [33] Dattner B, Chamorro-Premuzic T, Buchband R, et al. The legal and ethical implications of using AI in hiring[J]. Harvard Business Review, 2019, 25.
* [34] Meyer T, Schmitt M, Dietzek B, et al. Accumulating advantages, reducing limitations: Multimodal nonlinear imaging in biomedical sciences–the synergy of multiple contrast mechanisms[J]. Journal of biophotonics, 2013, 6(11‐12): 887-904.
* [35] Deng L. Deep learning: from speech recognition to language and multimodal processing[J]. APSIPA Transactions on Signal and Information Processing, 2016, 5.
* [36] Gadepally V, Goodwin J, Kepner J, et al. Ai enabling technologies: A survey[J]. arXiv preprint arXiv:1905.03592, 2019.
* [37] Cully, A., Clune, J., Tarapore, D. & Mouret, J.-B. Nature 521, 503–507 (2015).
* [38] Bilodeau, R. A. & Kramer, R. K. Front. Robot. AI 4, 48 (2017).
* [39] Petersen, K. H., Napp, N., Stuart-Smith, R., Rus, D. & Kovac, M. Sci. Robot. 4, eaau8479 (2019).
* [40] Xia, B. et al. Actuators 9, 62 (2020).
* [41] Pena-Francesch, A., Jung, H., Demirel, M. C. & Sitti, M. Nat. Mater. 19, 1230–1235 (2020).
* [42] Sadeghi, A., Mondini, A. & Mazzolai, B. Soft Robot. 4, 211–223 (2017).
* [43] Man, K. & Damasio, A. Nat. Mach. Intell. 1, 446–452 (2019).
* [44] Sol, J. A. H. P. et al. Chem. Commun. 55, 1726–1729 (2019).
* [45] Pfeifer, R., Bongard, J. & Grand, S. How the Body Shapes the Way We Think: A New View of Intelligence (MIT Press, 2007).
* [46] Mengüç, Y., Correll, N., Kramer, R. & Paik, J. Sci. Robot. 2, eaar4527 (2017).
* [47] Lipson, H. & Pollack, J. B. Nature 406, 974–978 (2000).
* [48] Yang, G.-Z. et al. Sci. Robot. 3, eaar7650 (2018).
* [49] Kovac, M. Science 352, 895–896 (2016).
* [50] Hauser, H. Nat. Mach. Intell. 1, 338–339 (2019).
* [51] Howard, D. et al. Nat. Mach. Intell. 1, 12–19 (2019).
* [52] Chrisley, R. & Ziemke, T. In Encyclopedia of Cognitive Science (Wiley, 2006).
* [53] Miriyev, A., Stack, K. & Lipson, H. Nat. Commun. 8, 596 (2017).
* [54] João Paulo Costeira and Pedro Lima (editors), “A simple guide to Physical AI”. Published on the AI4EU platform: http://ai4eu.eu. June 24, 2020.
* [55] Dafoe, A. (2018). AI governance: a research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK.
* [56] Deng J, Dong W, Socher R, et al. Imagenet: A large-scale hierarchical image database[C]//2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009: 248-255.
# vi-mistral-x
James Vo
AI Algorithm Research Team
AgileSoDA Inc.
Seoul, South Korea
<EMAIL_ADDRESS>
http://agilesoda.ai
###### Abstract
The advancement of Large Language Models (LLMs) has significantly transformed
the field of natural language processing, although the focus on English-
centric models has created a noticeable research gap for specific languages,
including Vietnamese. To address this issue, this paper presents vi-mistral-x,
an innovative Large Language Model designed expressly for the Vietnamese
language. It utilizes a unique method of continual pre-training, based on the
Mistral architecture, which incorporates grouped-query attention and sliding
window attention techniques. This model, vi-Mistral-X, marks a significant
step forward in improving the understanding and generation of the Vietnamese
language. It introduces an additional phase of continual pre-training,
specifically adapted for Vietnamese, enhancing the model’s capability in
understanding complex language nuances and generating accurate, context-aware
Vietnamese text. Through comprehensive testing on various benchmarks, vi-
mistral-x has been shown to outperform existing Vietnamese LLMs in several key
areas, including text classification, question answering, and text generation.
Particularly, in the Vietnamese Multitask Language Understanding (VMLU)
benchmark, vi-mistral-x sets a new standard, outperforming other available
models significantly. This paper highlights the critical role of continual
pre-training in advancing language-specific LLMs and opens new avenues for the
development of multilingual models. We aim for vi-mistral-x to not just be an
important asset for processing the Vietnamese language but also to encourage
more advancements in creating large language models for languages that are
less represented.
_Keywords_ Vietnamese $\cdot$ LLM $\cdot$ Pretraining
## 1 Introduction
The field of natural language processing (NLP) has witnessed a paradigm shift
with the advent of Large Language Models (LLMs), which have shown tremendous
potential in understanding and generating human language. LLMs like ChatGPT
and GPT-4 have paved the way for innovations that edge closer to achieving
Artificial General Intelligence (AGI). However, the progress in this domain
has predominantly centered around English, leading to a substantial disparity
in the development and performance of LLMs for other languages. This disparity
not only limits the global applicability of such models but also underscores a
crucial gap in the research and development of language models that cater to
the diverse linguistic landscape of our world.
In particular, the Vietnamese language, with its unique syntactic and semantic
complexities, has not been adequately represented in the current wave of LLM
advancements. This oversight hinders the ability of Vietnamese NLP
applications to achieve the same level of sophistication and effectiveness as
their English counterparts, thereby creating a significant bottleneck in the
development of Vietnamese language technology.
To bridge this gap, our paper introduces vi-mistral-x, an LLM specifically
designed to address the challenges associated with processing and generating
the Vietnamese language. Building on the foundation laid by the innovative
Mistral architecture, vi-mistral-x incorporates advanced techniques such as
grouped-query attention (GQA) and sliding window attention (SWA) Jiang et al.
(2023). These features are part of a unique approach to continual pre-training
that is tailored to the Vietnamese language, enabling the model to capture its
linguistic nuances more accurately.
The development of vi-mistral-x is inspired by recent efforts in the field to
extend the capabilities of existing models to additional languages. This
includes the adaptation of LLaMA for the Chinese language Cui et al. and the
Korean language L. Junbum (2023). By employing a similar methodology of
extending vocabulary and incorporating language-specific pre-training and
fine-tuning phases, we aim to achieve a leap in the quality of text
understanding and generation in Vietnamese. This approach is underscored by
the success of the Mistral 7B model, which has demonstrated the effectiveness
of GQA and SWA in improving performance and efficiency across various NLP
tasks.
Our work on vi-mistral-x represents a critical step toward closing the
research gap for the Vietnamese language within the NLP community. By
detailing our approach and sharing our findings, we hope to not only enhance
the capabilities of language models for Vietnamese but also to encourage
further research and development efforts focused on other underrepresented
languages. This endeavor aligns with our broader goal of promoting inclusivity
and diversity in the advancement of natural language processing technologies,
ensuring that the benefits of these innovations are accessible to a wider
audience around the globe.
## 2 Proposed Method
This section details the methodology employed in developing vi-Mistral-X,
focusing on the adaptation of the Mistral architecture for the Vietnamese
language. The process encompasses five main stages: corpus preparation,
tokenizer training, model initialization, model training, and model alignment.
### 2.1 Effective Corpus Preparation
This stage involves refining a Vietnamese text corpus extracted from CulturaX,
a comprehensive multilingual dataset designed to support the development of
Large Language Models (LLMs) across 167 languages, including Vietnamese Nguyen
et al. (2023a). The primary goal was to reduce the original corpus to a more
manageable size while enhancing its quality, which is critical for the
effective training of language models. We employed a multi-step preprocessing
pipeline with the following components:
##### Random Selection
As an initial step, we used a random selection technique to significantly
reduce the corpus’s size. This method allowed us to manage computational
resources better by focusing on a smaller, yet representative, subset of the
original dataset.
##### N-gram-based Filtering for Deduplication
We applied an n-gram-based filtering method to ensure the dataset’s
uniqueness. This technique analyzes the frequency of contiguous n-gram
sequences in the text to identify and remove duplicate or nearly identical
content. Such deduplication is crucial to reduce the risk of overfitting the
model on repetitive data.
##### BERT-based Binary Classifier for Toxicity Filtering
To further enhance the corpus quality, we used a high-precision BERT-based
binary classifier to filter out toxic content. Deploying this classifier
helped exclude data that could propagate undesirable biases or harmful
expressions within the trained model.
##### Perplexity-based Filtering
The final preprocessing step was perplexity-based filtering. Perplexity, a
measure of a probability model’s predictive accuracy, was used to assess and
filter documents based on their coherence and quality. This criterion is vital
for developing language models, as it ensures that only high-quality and
coherent documents contribute to the training process.
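A minimal sketch of such a pipeline is shown below. It is our own illustration rather than the authors' code: `is_toxic` stands in for the BERT-based classifier, `perplexity` for an LM-based scorer, and all thresholds are placeholder values.

```python
import random
from typing import Iterable, Iterator

def is_toxic(doc: str) -> bool:
    return False          # placeholder for the BERT-based binary classifier

def perplexity(doc: str) -> float:
    return 100.0          # placeholder for a language-model perplexity scorer

def ngrams(text: str, n: int = 5) -> set:
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def filter_corpus(docs: Iterable[str],
                  keep_ratio: float = 0.15,     # placeholder thresholds
                  overlap_thresh: float = 0.5,
                  ppl_thresh: float = 200.0) -> Iterator[str]:
    seen: set = set()
    for doc in docs:
        if random.random() > keep_ratio:        # 1) random selection
            continue
        grams = ngrams(doc)
        if grams and len(grams & seen) / len(grams) > overlap_thresh:
            continue                            # 2) n-gram deduplication
        if is_toxic(doc):                       # 3) toxicity filtering
            continue
        if perplexity(doc) > ppl_thresh:        # 4) perplexity filtering
            continue
        seen |= grams
        yield doc
```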
This comprehensive preprocessing pipeline was designed to enhance the quality
of the Vietnamese corpus from CulturaX. Table 1 presents a detailed comparison
between the original CulturaX corpus and the refined corpus used for training
the vi-mistral-x model. Although a formal evaluation employing quantitative
measures to ascertain the processed data’s specific impact on model training
compared to the original data has not yet been conducted, there is reason to
believe that selecting and refining data to improve the consistency and
quality of each data sample can lead to enhanced computational efficiency.
Specifically, reducing the size of the data by removing noisy and non-uniform
samples can decrease computing costs due to lower resource requirements and
may also improve the training quality of the model by concentrating on high-
quality data, thereby optimizing the learning process.
| | CulturaX/vi | Selected corpus |
|---|---|---|
| No. of documents | 54,988,654 | 7,331,840 |
| Size in GB (parquet) | 150.91 | 20.656 |
| No. of tokens | NA | 8,323,137,536 |

Table 1: Detailed comparison of the original CulturaX/vi and the refined
corpus for vi-mistral-x model training
### 2.2 Effective Tokenizer Training
The second phase in the adaptation of the pretrained Mistral model for
Vietnamese language processing involves the development of a tokenizer capable
of efficiently handling Vietnamese text. Initially, we utilized Google
SentencePiece (https://github.com/google/sentencepiece) to train a new
SentencePiece model (SPM). Subsequently, we performed rule-based token
filtering on the trained SPM, with a focus on Vietnamese character
recognition. The enhanced SPM was then integrated with the original Mistral’s
SPM model. This hybrid tokenizer maintains the ability to process English and
other languages previously supported by Mistral-7B, while also effectively
managing Vietnamese text. This capability is pivotal for facilitating
bilingual or multilingual continual training in the future.
##### SPM Model Training
The new SPM model was developed by employing Google SentencePiece to train a
new model on our refined corpus, which was obtained in the initial stages. The
corpus was significantly reduced to a manageable size (20GB) without
necessitating extra sampling, limiting maximum sentence length, or filtering
character coverage. The vocabulary size was determined by balancing the trade-
off between input complexity and model complexity. A larger vocabulary tends
to decrease the number of tokens passed to the model, thereby reducing input
complexity, but it increases model complexity due to the expansion of the
embedding and language model head dimensions. As illustrated in Figure 1, a
vocabulary size of 8,096 is optimal for our dataset and the Mistral model,
based on our observations.
##### SPM Model Refining
This phase involved the removal of abnormal characters from the trained SPM
model to achieve a high-quality and coherent tokenizer. The refinement rules
were established based on manual definitions and prioritized tokens with the
highest frequency.
##### Model Combination
The refined SPM was integrated with the original Mistral’s SPM model to create
the final tokenizer. This integration process involved several rounds of
tokenizer training and analysis, ensuring the new tokenizer model includes a
comprehensive and relevant vocabulary for our project.
This meticulous approach to tokenizer training and refinement underscores the
importance of adapting language processing tools to efficiently manage
specific linguistic characteristics, thereby enhancing bilingual or
multilingual training capabilities.
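The sketch below illustrates the two steps with the SentencePiece Python API; it is a simplified illustration under our assumptions (file names are hypothetical, and the rule-based character filtering is reduced to a duplicate check), not the authors' exact procedure.

```python
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

# 1) Train a Vietnamese SPM on the refined corpus (file name assumed).
spm.SentencePieceTrainer.train(
    input="vi_refined_corpus.txt", model_prefix="vi_spm",
    vocab_size=8096, model_type="bpe")

# 2) Merge the new pieces into the original Mistral tokenizer model.
base = sp_pb2.ModelProto()
base.ParseFromString(open("mistral_tokenizer.model", "rb").read())
vi = sp_pb2.ModelProto()
vi.ParseFromString(open("vi_spm.model", "rb").read())

existing = {p.piece for p in base.pieces}
for piece in vi.pieces:
    if piece.piece in existing:
        continue                  # skip tokens Mistral already has; rule-based
    new = base.pieces.add()       # filtering of abnormal pieces would go here
    new.piece, new.score = piece.piece, 0.0

with open("mistral_vi_tokenizer.model", "wb") as f:
    f.write(base.SerializeToString())
```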
Figure 1: Visualization of the Mistral SPM model and customized Vietnamese SPM Vocab Size | Relative Input Complexity | Relative Model Embedding Complexity
---|---|---
1000 | 0.841395049 | 1.01640625
2000 | 0.630757254 | 1.04403125
3000 | 0.584167181 | 1.073125
4000 | 0.557951001 | 1.1025
5000 | 0.539983642 | 1.13171875
6000 | 0.526697283 | 1.16078125
7000 | 0.516199365 | 1.18978125
8000 | 0.50747301 | 1.21909375
9000 | 0.500297369 | 1.2479375
10000 | 0.494061494 | 1.27684375
11000 | 0.488440053 | 1.30575
12000 | 0.483667799 | 1.33428125
13000 | 0.479315439 | 1.36334375
14000 | 0.475368937 | 1.39240625
15000 | 0.471816595 | 1.42109375
16000 | 0.468521178 | 1.4500625
17000 | 0.465535533 | 1.4789375
18000 | 0.462752376 | 1.50753125
19000 | 0.460176296 | 1.5363125
20000 | 0.45770208 | 1.5655625
30000 | 0.439826305 | 1.85509375
40000 | 0.422321581 | 2.14471875
80000 | 0.403156539 | 3.31371875
120000 | 0.395356195 | 4.49840625
Table 2: Input Complexity and Model Embedding Complexity by Vocab Size
Figure 2: Relative Input Complexity and Relative Model Embedding Complexity by Vocab Size.
### 2.3 Effective Model Initialization
For model initialization, we adapted the Mistral architecture to accommodate
the newly-generated Vietnamese token embeddings produced by the novel
tokenizer. This adaptation necessitated the expansion of both the model’s
embedding layer and language model head to include the Vietnamese-specific
tokens, whilst preserving the integrity of the original model’s architecture.
Figure 3 illustrates the architectural comparison between the original Mistral
framework and our modified version tailored to accommodate Vietnamese-specific
tokens.
Figure 3: Model Architecture of the Mistral Model and Our Expanded Model
##### Initialization
Let $V=\\{1,\ldots,n\\}$ be the model’s vocabulary, where $n=32000$. Let
$w_{1:T}$ be a sequence of words. Let $p_{\theta}(w_{i}|w_{1:i-1})$ be the LM
parameterized by $\theta$, defined by:
$p_{\theta}(w_{i}|w_{1:i-1})=\frac{\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{w_{i}})}{\sum_{j=1}^{n}\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{j})}$
where $\boldsymbol{h}_{i-1}=\phi_{\theta}(w_{1:i-1})\in\mathbb{R}^{d}$ is the
representation of the prefix, and $\boldsymbol{e}_{i}\in\mathbb{R}^{d}$ is the
embedding for word $i\in V$. The $\boldsymbol{e}_{i}$ are contained in
$\theta$.
When new words with input ids in $[32000,38658]$ (i.e., tokens $n+1\notin V$)
are added to the vocabulary of the pretrained LM, each new embedding
$\boldsymbol{e}_{n+1}$ needs to be initialized. Let
$p_{\theta^{\prime}}(w_{i}|w_{1:i-1})$ be the new LM, which has parameters
$\theta^{\prime}=\theta\cup\\{\boldsymbol{e}_{n+1}\\}$, defined by:
$p_{\theta^{\prime}}(w_{i}|w_{1:i-1})=\frac{\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{w_{i}})}{\sum_{j=1}^{n}\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{j})+\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{n+1})}$
$p_{\theta^{\prime}}(w_{i}|w_{1:i-1})=p_{\theta}(w_{i}|w_{1:i-1})\times\frac{1}{1+\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{n+1})/\sum_{j=1}^{n}\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{j})}$
The updated probability of each original word is thus its original probability
scaled by the multiplicative factor
$\frac{1}{1+\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{n+1})/\sum_{j=1}^{n}\exp(\boldsymbol{h}_{i-1}^{T}\boldsymbol{e}_{j})}$,
which is less than one. If $\boldsymbol{e}_{n+1}$ is initialized poorly (e.g.,
with a large norm), this factor collapses towards zero, making the model
incorrectly assign almost all probability mass to the newly added words and
almost none to the original words.
Therefore, the new embedding is initialized as the mean of the existing
embeddings, $\boldsymbol{e}_{n+1}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{e}_{j}$, so that
$\exp(\boldsymbol{h}_{i}^{T}\boldsymbol{e}_{n+1})=\exp\left(\boldsymbol{h}_{i}^{T}\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{e}_{j}\right)$
stays on the same scale as the existing logits.
Finally, the model’s embedding and language model (LM) head are resized to
$V^{\prime}\times H$ and $H\times V^{\prime}$, respectively, where
$V^{\prime}=38659$.
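A minimal sketch of this resizing and mean initialization, assuming the Hugging Face Transformers API (the paper does not state its exact implementation):

```python
import torch
from transformers import AutoModelForCausalLM

# Resize embeddings and LM head to V' = 38659 and initialize the new
# rows with the mean of the original 32000 embeddings.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
old_vocab = model.get_input_embeddings().weight.shape[0]  # 32000
model.resize_token_embeddings(38659)

with torch.no_grad():
    emb = model.get_input_embeddings().weight
    head = model.get_output_embeddings().weight
    emb[old_vocab:] = emb[:old_vocab].mean(dim=0)
    head[old_vocab:] = head[:old_vocab].mean(dim=0)
```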
### 2.4 Effective Model Training
##### Memory and Computational Efficiency in Training
The core of vi-mistral-x’s development involved continual pre-training on a
Vietnamese corpus. In this stage, our research is driven by the need to
leverage the most effective resources in computational linguistics and machine
learning. We focus on addressing two main challenges in Large Language Models
(LLMs): memory capacity limitations, which lead to Out Of Memory (OOM) errors,
and the requirement for significant computational power, causing long training
times. Our work seeks to overcome these issues by optimizing the model
architecture and training processes.
In our pursuit, we have concentrated on a curated selection of model
architectures, including Llama2, Mistral, and Gemma. These were chosen based
on their potential for high efficiency and compatibility with our objectives.
Additionally, our strategy encompasses the integration of advanced parallelism
techniques, such as Fully Sharded Data Parallelism (FSDP), DeepSpeed ZeRO-3
(DSZERO3), Pipeline Parallelism (PP), and Tensor Parallelism (TP). These
methods are instrumental in distributing the computational load and memory
usage across multiple devices, thereby alleviating the aforementioned
constraints.
Our optimizations have significantly increased training speed, making our
library about twice as fast as similar open-source options. Specifically, it
is 1.6 times faster than both FlashAttention-2 Dao (2023) and the PyTorch
implementation of Scaled Dot Product Attention (SDPA) (2024).
Our findings highlight the possibility of greatly improving the efficiency of
training transformer-based models, advancing artificial intelligence research.
##### Optimization
Let $W\in\mathbb{R}^{m\times n}$ be a weight matrix and let
$G_{t}=-\nabla_{W}\phi_{t}(W_{t})\in\mathbb{R}^{m\times n}$
be the gradient matrix of the loss $\phi_{t}$ at step $t$. The updated weight matrix is computed by:
$W_{T}=W_{0}+\eta\sum_{t=0}^{T-1}\rho_{t}(G_{t})$
where $\eta$ is the learning rate and $\rho_{t}$ is a stateful gradient
regularizer, which is memory-intensive. For instance,
[AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
takes $4\times m\times n$ memory for gradient, variance, momentum, and
parameters.
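A back-of-the-envelope check of this $4\times m\times n$ bound, assuming fp32 states and an illustrative weight shape of our own choosing:

```python
# Memory for AdamW on a single m x n weight matrix:
# parameters + gradient + momentum + variance = 4 copies.
m, n = 4096, 14336          # illustrative projection shape, not from the text
bytes_per_float = 4         # fp32
total = 4 * m * n * bytes_per_float
print(f"{total / 2**30:.3f} GiB")  # ~0.875 GiB for this single matrix
```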
To enhance memory and computational efficiency during training, LoRA and its
derivatives, which perform a low-rank projection in the weight space as
$W_{T}=W_{0}+B_{T}A_{T}$, were not selected due to their inherent low-rank
limitations. Our goal is to achieve a full-rank update that is both memory-
and computation-efficient. Therefore, `Tokenizer`, `Model`, `Trainer`, and
`Optimizer` are imported from our XLLM library Vo (2023), which has the same
interface as those in the
[Transformers](https://github.com/huggingface/transformers.git) library.
Techniques such as learning rate warm-up and learning rate scheduling, which
adjust the learning rate appropriately across layers, are also applied to
optimize the training process.
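A small sketch of such a warm-up-plus-decay schedule using standard Transformers helpers; the schedule type, step counts, and learning rate below are placeholder assumptions, as the text does not specify them:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(8))]      # stand-in parameters
optimizer = torch.optim.AdamW(params, lr=2e-5)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=20000
)
for _ in range(3):                                  # one step per batch
    optimizer.step()
    scheduler.step()
```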
##### Training
The model underwent training on a computational framework consisting of eight
Nvidia H100 80GB SXM5 GPUs. Due to the intentional interruption of the
training process for purposes of advanced evaluation and optimization, an
exact duration of training was not documented. However, a rough estimate
suggests that, under conditions of uninterrupted training, the process would
span approximately 104 hours. Financially, this duration translates to an
approximate expenditure of \$3,902.08, given the operational cost of \$37.52
per hour per node.
### 2.5 Model alignment
Following the pre-training phase, vi-mistral-x underwent a series of fine-
tuning processes aimed at aligning the model with specific NLP tasks. This
alignment involved training the model on task-specific Vietnamese datasets,
such as text classification, question answering, and text generation. Each
task-focused fine-tuning phase allowed vi-mistral-x to adjust its parameters
to optimize performance on that task, thereby ensuring its applicability and
effectiveness across a wide range of NLP applications. This step was crucial
for benchmarking vi-mistral-x against existing Vietnamese LLMs and
demonstrating its superior performance across several key areas.
Through these methodological steps, vi-mistral-x represents a significant
advancement in the development of LLMs for the Vietnamese language, offering
enhanced understanding and generation capabilities that set a new benchmark
for performance in Vietnamese NLP tasks.
## 3 Experimental Results
### 3.1 Pretrained Model
#### 3.1.1 Loss and Accuracy
##### References
* •
anhdungitvn/vi-mistral-x
* •
mistralai/Mistral-7B-v0.1 Jiang et al. (2023)
* •
Viet-Mistral/Vistral-7B-Chat Nguyen et al. (2023b)
* •
vinai/PhoGPT-7B5 Nguyen et al. (2024)
* •
bkai-foundation-models/vietnamese-llama2-7b-120GB
* •
meta-llama/Llama-2-7b-hf Touvron et al. (2023)
* •
meta-llama/Llama-2-13b-hf Touvron et al. (2023)
* •
google/gemma-7b Team et al. (2024)
##### Setting
* •
Task: CLM (next token prediction)
* •
Test data: anhdungitvn/wiki_vi_splitted
* •
Test data selection: random train_test_split
* •
Test data size: 10000 documents
* •
Metrics:
* –
Tokens: smaller is better
* –
Loss: smaller is better
* –
Accuracy: larger is better
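For concreteness, a hedged sketch of how such token-level loss and accuracy are typically computed for next-token prediction (our illustration, not the evaluation code used here):

```python
import torch
import torch.nn.functional as F

def clm_metrics(logits: torch.Tensor, labels: torch.Tensor):
    """logits: [batch, seq, vocab]; labels: [batch, seq] token ids."""
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_labels = labels[:, 1:].reshape(-1)
    loss = F.cross_entropy(shift_logits, shift_labels)
    accuracy = (shift_logits.argmax(dim=-1) == shift_labels).float().mean()
    return loss.item(), accuracy.item()
```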
##### Experimental Results
Model | Type | Length | Tokens | Loss | Accuracy*****
---|---|---|---|---|---
anhdungitvn/vi-mistral-x* | Pretrained | 4096 | 2068480 | 2.1566 | 0.5622
mistralai/Mistral-7B-v0.1 | Pretrained | 4096 | 4517888 | 1.3687 | 0.6813
Viet-Mistral/Vistral-7B-Chat | Finetuned** | 4096 | 2224128 | 1.7354 | 0.6223
vinai/PhoGPT-7B5 | Pretrained | 2048*** | 1982464 | 16.5563**** | 0.0029
bkai…/vietnamese-llama2-7b-120GB | Pretrained | 4096 | 2191360 | 2.4808 | 0.5207
meta-llama/Llama-2-7b | Pretrained | 4096 | 4632576 | 1.1287 | 0.7295
meta-llama/Llama-2-13b | Pretrained | 4096 | 4632576 | 0.9543 | 0.7700
google/gemma-7b | Pretrained | 4096 | 2232320 | … | …
Table 3: Comparison of Pretrained Models
* *
The model vi-mistral-x* is currently under development. The shown results were
obtained by evaluating a checkpoint at epoch 0.08.
* **
The Viet-Mistral/Vistral-7B pretrained model is unpublished, so we evaluated
the Viet-Mistral/Vistral-7B finetuned model.
* ***
The model vinai/PhoGPT-7B5 doesn’t support an input sequence length of 4096. A
RuntimeError occurs in modeling_mpt.py on line 138: "The size of tensor a
(4096) must match the size of tensor b (2048) at non-singleton dimension 3."
* ****
The same evaluation method was applied to all models. The results indicate
that the loss for this particular model is unusually high, suggesting that the
evaluation method employed may not be appropriate for this model. Further
investigation is required.
* *****
Improved accuracy in a Causal Language Model (CLM) for next-token prediction
does not guarantee enhanced performance in other tasks or on different
datasets. Loss and accuracy metrics merely indicate the model’s current
training state and can differ substantially among various models. Therefore,
they cannot be directly compared based solely on loss and accuracy.
#### 3.1.2 Vietnamese Multitask Language Understanding (VMLU)
##### References
* •
VMLU
* •
anhdungitvn/vmlu_v1.5
##### VMLU
VMLU is a benchmark suite aimed at evaluating foundation models’ capabilities,
focusing on the Vietnamese language. It includes 10,880 multiple-choice
questions across 58 subjects within STEM, Humanities, Social Sciences, and
more, covering difficulty levels from basic to advanced.
##### Dataset: anhdungitvn/vmlu_v1.5
The dataset anhdungitvn/vmlu_v1.5 was originally created from vmlu_v1.5 by
formatting it into the Hugging Face datasets format for easier use.
##### Example
Figure 4: Example of VMLU
##### Experimental Results
# | Model | Creator | Access | EvalDate | STEM | SS | Hum | Others | Avg
---|---|---|---|---|---|---|---|---|---
1 | GPT-4 | OpenAI | API | 08/01/2024 | 63.84 | 71.78 | 66.14 | 60.37 | 65.53
2 | gemini | Google | API | 30/01/2024 | 42.8 | 60.31 | 55.35 | 51.30 | 51.03
3 | ChatGPT | OpenAI | API | 08/01/2024 | 43.24 | 51.67 | 46.96 | 46.32 | 46.33
4 | ViGPT-1.6B-v1 | Vin BigData | Private | 08/01/2024 | 35.06 | 48.72 | 47.20 | 42.54 | 42.34
5 | gemma-7b-it | Google | Weight | 22/02/2024 | 39.95 | 44.93 | 43.39 | 40.11 | 41.9
6 | Qwen-7B | Alibaba Cloud | Weight | 08/01/2024 | 30.64 | 35.07 | 34.15 | 32.68 | 32.81
7 | vi-mistral-x* | James | TBD | 15/03/2024 | 24.88 | 34.08 | 35.11 | 29.26 | 30.32
8 | gemma-2b-it | Google | Weight | 22/02/2024 | 24.39 | 29.59 | 31.01 | 26.81 | 27.72
9 | sealion7b | AI Singapore | Weight | 08/01/2024 | 26.28 | 28.57 | 27.66 | 27.34 | 26.73
10 | bloom-1b7 | BigScience | Weight | 08/01/2024 | 25.13 | 25.09 | 26.34 | 25.19 | 25.51
Table 4: Comparison of Pretrained Models on VMLU
The model “vi-mistral-x*” is currently under development. The shown results
were obtained by evaluating a checkpoint at epoch 0.08.
The comparison is shown in Table 4, and the detailed evaluation of “vi-
mistral-x” is presented in Table 5.
Table 5: Detailed Evaluation of VI-Mistral-X* on VMLU
Category_Subcategory | Score
---|---
total | 30.32
stem_applied_informatics | 39.44
stem_computer_architecture | 31.11
stem_computer_network | 34.64
stem_discrete_mathematics | 23.64
stem_electrical_engineering | 22.73
stem_elementary_mathematics | 19.44
stem_elementary_science | 55.00
stem_high_school_biology | 15.00
stem_high_school_chemistry | 22.78
stem_high_school_mathematics | 16.22
stem_high_school_physics | 23.33
stem_introduction_to_chemistry | 14.53
stem_introduction_to_physics | 23.12
stem_introduction_to_programming | 29.05
stem_metrology_engineer | 22.70
stem_middle_school_biology | 31.18
stem_middle_school_chemistry | 18.33
stem_middle_school_mathematics | 17.59
stem_middle_school_physics | 21.67
stem_operating_system | 30.56
stem_statistics_and_probability | 10.34
stem_total | 24.88
other_clinical_pharmacology | 26.11
other_driving_license_certificate | 45.61
other_environmental_engineering | 11.70
other_internal_basic_medicine | 34.50
other_preschool_pedagogy | 34.31
other_tax_accountant | 20.69
other_tax_civil_servant | 41.52
other_total | 29.26
other_accountant | 21.43
other_civil_servant | 27.49
humanity_economic_law | 29.81
humanity_education_law | 33.13
humanity_elementary_history | 49.72
humanity_high_school_history | 31.11
humanity_high_school_literature | 25.56
humanity_history_of_world_civilization | 41.11
humanity_idealogical_and_moral_cultivation | 49.44
humanity_introduction_to_laws | 39.68
humanity_introduction_to_vietnam_culture | 28.33
humanity_logic | 18.97
humanity_middle_school_history | 37.78
humanity_middle_school_literature | 37.36
humanity_revolutionary_policy_of_the_vietnamese_commununist_part | 36.67
humanity_vietnamese_language_and_literature | 17.24
humanity_total | 35.11
humanity_administrative_law | 37.78
humanity_business_law | 39.11
humanity_civil_law | 41.11
humanity_criminal_law | 38.04
social_science_middle_school_geography | 27.21
social_science_principles_of_marxism_and_leninism | 36.67
social_science_sociology | 39.89
social_science_business_administration | 20.69
social_science_high_school_civil_education | 43.89
social_science_high_school_geography | 33.33
social_science_ho_chi_minh_ideology | 41.34
social_science_macroeconomics | 21.67
social_science_microeconomics | 23.89
social_science_middle_school_civil_education | 52.25
social_science_total | 34.08
### 3.2 Finetuned Model
The following section is being updated.
## References
* Jiang et al. [2023] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023.
* [2] Yiming Cui, Ziqing Yang, and Xin Yao. Efficient and effective text encoding for chinese llama and alpaca.
* L. Junbum [2023] L. Junbum. llama-2-ko-7b (revision 4a9993e), 2023. URL https://huggingface.co/beomi/llama-2-ko-7b.
* Nguyen et al. [2023a] Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. Culturax: A cleaned, enormous, and multilingual dataset for large language models in 167 languages, 2023a.
* Dao [2023] Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning, 2023.
* Vo [2023] James Vo. Enhancing memory and computational efficiency in training transformer-based models, 2023.
* Nguyen et al. [2023b] Chien Van Nguyen, Thuat Nguyen, Quan Nguyen, Huy Nguyen, Björn Plüster, Nam Pham, Huu Nguyen, Patrick Schramowski, and Thien Nguyen. Vistral-7b-chat - towards a state-of-the-art large language model for vietnamese. 2023b.
* Nguyen et al. [2024] Dat Quoc Nguyen, Linh The Nguyen, Chi Tran, Dung Ngoc Nguyen, Dinh Phung, and Hung Bui. Phogpt: Generative pre-training for vietnamese, 2024.
* Touvron et al. [2023] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.
* Team et al. [2024] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Pier Giuseppe Sessa, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, and Kathleen Kenealy. Gemma: Open models based on gemini research and technology, 2024.
On the other hand, $\hat{v}^{i}$ satisfies (6.57)-(6.59) with equalities. By
the classical comparison theorem, we deduce that
$\displaystyle\overline{v}_{\kappa}^{i}(t,z)\leq\varphi^{i}(t,z)\leq\hat{v}^{i}(t,z),\,\mbox{
in }[0,T)\times D_{0}.$ (6.60)
Step 2: By definition of $\hat{v}^{i}$, and since against the no-intervention
strategy of player $i$ the best response of player $j$ is also not to make an
intervention, we have $\hat{v}^{i}(t,z)\leq v_{\kappa}^{i}(t,z)$ for all
$(t,z)$ in a neighborhood of $[0,T)\times D_{0}$. From Proposition 6.3, the
function $\hat{v}^{i}$ is continuous. This yields:
$\displaystyle\hat{v}^{i}(t^{{}^{\prime}},z^{{}^{\prime}})\leq\underline{v}_{\kappa}^{i}(t^{{}^{\prime}},z^{{}^{\prime}})\mbox{
for all }(t^{{}^{\prime}},z^{{}^{\prime}})\mbox{ in a neighborhood of
}(t,z)\in[0,T)\times D_{0}.$
Using again the continuity property of $\hat{v}^{i}$ (See Proposition 6.3),
and since $\underline{v}_{\kappa}^{i}$ is lsc, we obtain:
$\displaystyle\hat{v}^{i}(t,z)\leq\displaystyle\liminf_{(t^{{}^{\prime}},z^{{}^{\prime}})\longrightarrow(t,z)}\underline{v}_{\kappa}^{i}(t^{{}^{\prime}},z^{{}^{\prime}})=\underline{v}_{\kappa}^{i}(t,z)\mbox{
for all }(t,z)\in[0,T)\times D_{0}.$ (6.61)
From inequalities (6.60) and (6.61), we deduce inequality (6.53) and the
continuity property of $v_{\kappa}^{i}$ on the boundary (6.54). $\Box$
Finally, combining the previous results, we obtain the following PDE
characterization of the value function.
###### Corollary 6.1
The value function $v_{\kappa}^{i}$ is continuous on $[0,T)\times{\cal S}$ and
is the unique (in $[0,T)\times{\cal S}$) constrained viscosity solution to the
system of QVIs (6.1)-(6.2) lying in the class of functions with linear growth
in $x$ uniformly in $(t,y^{i},y^{j})$ and satisfying the boundary condition:
$\displaystyle\lim_{(t^{\prime},z^{\prime})\rightarrow(t,z)}v_{\kappa}^{i}(t^{\prime},z^{\prime})=\begin{cases}0&\quad\text{if
}(t,z)\in[0,T)\times\partial^{y^{1}}{\cal S}\cup\partial^{y^{2}}{\cal S},\\\
-\frac{x}{2}(\frac{e^{(\mu-\rho^{i})(T-t)}-1}{\mu-\rho^{i}}+e^{(\mu-\rho^{i})(T-t)})&\quad\text{if
}(t,z)\in[0,T)\times\partial^{x}{\cal S},\end{cases}$
and the terminal condition
$\displaystyle v_{\kappa}^{i}(T,z)$ $\displaystyle=$ $\displaystyle
g^{i}(z),\;\;\;\forall z\in\bar{\cal S}.$
Proof. The function $\bar{v}_{\kappa}^{i}$ is a usc viscosity subsolution to
(6.1)-(6.2) in $[0,T)\times\bar{\cal S}$ and $\underline{v}^{i}_{\kappa}$ is a
lsc viscosity supersolution to (6.1)-(6.2) in $[0,T)\times{\cal S}$. Moreover,
by Proposition 6.4 and Proposition 3.3, we have
$\overline{v}_{\kappa}^{i}(t,z)$$\leq$
$\displaystyle\liminf_{(t^{{}^{\prime}},z^{{}^{\prime}})\longrightarrow(t,z)}\underline{v}_{\kappa}^{i}(t^{{}^{\prime}},z^{{}^{\prime}})$,
for all $(t,z)$ $\in$ $[0,T)\times D_{0}$, and
$\overline{v}^{i}_{\kappa}(T,z)$ $=$ $\underline{v}_{\kappa}^{i}(T,z)$ $=$
$g^{i}(z)$ for all $z$ $\in$ $\bar{\cal S}$. Then by Theorem 6.1, we deduce
$\bar{v}_{\kappa}^{i}$ $\leq$ $\underline{v}_{\kappa}^{i}$ on
$[0,T]\times{\cal S}$, which proves the continuity of $v_{\kappa}^{i}$ on
$[0,T)\times{\cal S}$. On the other hand, suppose that $w_{\kappa}^{i}$ is
another constrained viscosity solution to (6.1)-(6.2) with
$\displaystyle\lim_{(t^{\prime},z^{\prime})\rightarrow(t,z)}w_{\kappa}^{i}(t^{\prime},z^{\prime})=w_{\kappa}^{i}(t,z)=v_{\kappa}^{i}(t,z),\;\;\;\mbox{
for all }(t,z)\in[0,T)\times D_{0},$
and $w_{\kappa}^{i}(T,z)$ $=$ $g^{i}(z)$ for $z$ $\in$ $\bar{\cal S}$. Then,
$\bar{w}_{\kappa}^{i}(t,z)$ $=$ $\underline{v}_{\kappa}^{i}(t,z)$ $=$
$\bar{v}_{\kappa}^{i}(t,z)$ $=$ $\underline{w}_{\kappa}^{i}(t,z)$ for $(t,z)$
$\in$ $[0,T)\times D_{0}$ and $\bar{w}_{\kappa}^{i}(T,z)$ $=$
$\underline{v}_{\kappa}^{i}(T,z)$ $=$ $\bar{v}_{\kappa}^{i}(T,z)$ $=$
$\underline{w}_{\kappa}^{i}(T,z)$ for $z$ in $\bar{\cal S}$. We then deduce by
Theorem 6.1 that $\bar{v}_{\kappa}^{i}$ $\leq$ $\underline{w}_{\kappa}^{i}$
$\leq$ $\bar{w}_{\kappa}^{i}$ $\leq$ $\underline{v}_{\kappa}^{i}$ on
$[0,T]\times{\cal S}$. This proves $v_{\kappa}^{i}$ $=$ $w_{\kappa}^{i}$ on
$[0,T]\times{\cal S}$.
$\Box$
## 7 Numerical illustrations
In this section we provide some numerical results describing the value
functions of the players and their optimal policies. A forward computation of
the value function and the optimal strategy is, to our knowledge, impossible due
to the high dimension of the state process and the complexity of our model;
therefore we used a numerical scheme based on a quantization technique (see
[18]). The convergence of the numerical solution towards the real solution can
be shown using consistency, monotonicity and stability arguments and will be
further investigated in a future work. A detailed description of the numerical
algorithm can be found in the Appendix.
Numerical tests are performed on the localized and discretized grid
$[0,T]\times[x_{min},..,x_{max}]\times[y_{min}^{1},..,y_{max}^{1}]\times[y_{min}^{2},..,y_{max}^{2}]$.
We used the following values for the parameters of the model: $T=1$, $\mu=0$,
$\sigma=0.5$, $\zeta_{min}=-2.2$, $\zeta_{max}=1.8$, $x_{min}$ $=$
$y_{min}^{1}$ $=$ $y_{min}^{2}=10$, $x_{max}$ $=$ $y_{max}^{1}$ $=$
$y_{max}^{2}$ $=90$, $\lambda=0.1$ and $g^{1}=f^{1}$ and $g^{2}=f^{2}$,
$\rho_{1}=\rho_{2}=0$, $\phi_{1}=5$, $\phi_{2}=2.5$. Besides, the running
costs are $f^{1}(x,y^{1},y^{2})=(y^{1}-x)Q(y^{1}-y^{2})$ and
$f^{2}(x,y^{1},y^{2})=(y^{2}-x)Q(y^{2}-y^{1})$ where
$\displaystyle
Q(x)=\mathds{1}_{]-\infty,-\Delta]}-\frac{x-\Delta}{2\Delta}\mathds{1}_{[-\Delta,\Delta]},$
with $\Delta=40$. Further, the terminal payoffs are chosen such that
$g^{1}(x,y^{1},y^{2})=f^{1}(x,y^{1},y^{2})$ and
$g^{2}(x,y^{1},y^{2})=f^{2}(x,y^{1},y^{2})$.
Figure 1: _The optimal policies for a fixed $(t,x)=(0.5,50)$ for the first player
(First Line) and the second player (Second Line). Color code: red: concerned
player intervenes, green: concerned player waits, blue: concerned player
endures the intervention of the other player. _
First, Figure 1 presents the optimal transaction policy for the two
players, i.e. the different regions of interventions and continuations in the
plane ($y^{1},y^{2})$ for $t=0.5$ and $x=50$ €/MWh. The first line (resp.
second line) of Figure 1 corresponds to the optimal policy regions and the
corresponding interventions of the player 1 (resp. player 2). In the first
column we can distinguish, for both of the players, three different regions,
represented by three different colors, corresponding to the optimal action
given a state $(y^{1},y^{2})$. Indeed, the blue region represents the states
$(y^{1},y^{2})$ where a player is subject to the intervention of the other
player, the green region represents the states where a player chooses not to
intervene and the red region represents the states where the player makes an
intervention. The second column represents, whenever a player decides to
intervene, the size of the intervention. If the quantity is positive it means
that the price is increased and if it is negative it means that the price is
lowered.
We can see that, as expected, both the players tend to keep the price spread
$|y^{1}-y^{2}|$ as low as possible in order to avoid market share losses. In
fact, for instance, at the state $(y^{1}=85,y^{2}=60)$, player 1 chooses to
push down her price to keep an acceptable market share position. On the other
hand, at the state $(y^{1}=30,y^{2}=70)$, player 1 chooses to push up her
price which allows her to make benefits whilst keeping a reasonable market
share position.
Figure 2: (Left) One path-scenario of the wholesale market price and the
players’ retail prices. (Right) Retail electricity bill compared to wholesale
price in the UK (source Ofgem).
Second, Figure 2 (Left) gives an example of a trajectory of the wholesale
electricity price $X$ together with the corresponding retail prices
trajectories $Y^{1}$ and $Y^{2}$ of the two players, where the initial state
is $(X_{0}=30,Y^{1}_{0}=40,Y^{2}_{0}=35)$. As a matter of comparison, Figure 2
(Right) shows the trajectories of the wholesale price of electricity and
retail prices of the six largest energy providers in the UK from January 2004
to March, 2010. We observe several comparable features of the optimal
retailers' prices resulting from our impulse game and the real-life data.
Increases in the wholesale price are not immediately followed by an increase in
retail prices: there is a delay given by the optimal time to reach the
boundary of the action region. Further, even if our model only involves two
players, we observe that they do not intervene at the same time, as is the
case in the UK market example. However, they appear to follow an almost
synchronised behaviour: an increase by one player is most often followed
by an increase of the second player and not by a decrease. Further, the
optimal trajectories of the retail prices can be increasing while the
wholesale price is decreasing (from $0.2$ to $0.3$ for instance), a phenomenon
which is also observed in the UK case (from April 2006 to March 2007, for
instance). The optimal trajectories can also decrease, even if these decreases
are limited compared to the same reference case of the UK market. Thus,
contrary to the belief of the UK energy regulator Ofgem (the British energy
regulator launched an inquiry on energy retailers in 2014; the headline
findings of the assessment were: "(…) Possible tacit co-ordination: The
assessment has not found evidence of explicit collusion between suppliers.
However, there is evidence of possible tacit coordination reflected in the
timing and size of price announcements and new evidence that prices rise
faster when costs rise than they reduce when costs fall. Although tacit
coordination is not a breach of competition law, it reduces competition and
worsens outcomes for consumers." Published on the Ofgem website on June 26th,
2014, at the address: www.ofgem.gov.uk/press-releases/ofgem-refers-energy-market-full-competition-investigation),
the observed behaviour of almost synchronised
increase and decrease of retailers prices might not be the result of a tacit
collusion mechanism, but is simply the result of optimal decision in a Nash
equilibrium.
Figure 3: _The average trajectory of the market price and the players’
prices._
Finally, Figure 3 shows the average trajectories of the market price and
the players’ optimal retail price processes over ten thousand simulated
trajectories of $X$, $Y^{1}$ and $Y^{2}$ on the horizon $[0,T]$. The initial
state is the same as in Figure 2. We notice that, although the wholesale
price $X$ is a martingale, the retail prices offered by the two players are
increasing. In addition to this observation, we note that the players have
almost the same tendency as they try to keep a balanced market share
configuration until the maturity. With our choice of parameters, we observe
that player 2 starts with a price lower than the player 1’s price and attains
the maturity with a higher price. This is because the interventions for the
player 2 are less expensive making her more dynamic. We can also observe that,
throughout the time period, the price spread between the two players is quite
small, preventing the market shares from becoming imbalanced.
Our model suggests that the players would rather propose increasing prices to
maximize their profit. This result might be surprising as one would expect
that the players would stick to the wholesale price tendency and would propose
prices that are constant in mean. But, in our model, the market shares are split
between the two players only according to the difference in the prices they offer:
consumers do not have an outside option to switch to another energy source, and
no market entry of a competitor can threaten the two players for practicing
increasing prices. What we find remarkable in this result is that, without
assuming any communication device between the two players, we observe
on average a behaviour that looks like tacit collusion.
## Appendix
In the following, we give a detailed description of the numerical procedure
used to compute the value function and the optimal policies associated to the
optimal control problem. We recall that we used a numerical scheme based on a
quantization technique (see [18]) mixed with an iterative procedure. The
convergence of the numerical solution towards the real solution can be shown
using consistency, monotonicity and stability arguments and will be further
investigated in a future work.
For a time step $h>0$ on the interval $[0,T],$ we introduce a numerical
backward scheme that approximates the solution of the HJB-QVI system via the
couple of functions $v^{i}_{h},i=1,2$ through:
$\displaystyle\left\\{\begin{array}[]{rl}\mathcal{M}^{i}v^{i}_{h}(t,z)-v^{i}_{h}(t,z)&\leq 0\\\
v^{i}_{h}(t,z)&=\max\left(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h}(t,z),\mathcal{H}^{i}v^{i}_{h}(t,z)\right)\quad\textrm{in}\quad\overline{\mathcal{I}^{i}}\\\
v^{i}_{h}(t,z)&=\max\left[\mathbb{E}[v^{i}_{h}(t+h,Z^{t,z}_{t+h})]+\Sigma_{i}(t,z),\mathcal{M}^{i}v^{i}_{h}(t,z)\right]\quad\textrm{in}\quad\mathcal{I}^{i}\\\
v^{i}_{h}(T,z)&=g^{i}(z)\quad\textrm{in}\quad\mathcal{S}.\end{array}\right.$
(7.5)
where
$\Sigma_{i}(t,z)=\int^{t+h}_{t}f^{i}(Z_{s}^{t,z})\,ds.$
This approximation scheme seems a priori implicit due to the nonlocal obstacle
terms $\mathcal{M}^{i}$ and $\mathcal{H}^{i}$. This is typically the case in
impulse control problems, and the usual way to circumvent this problem is to
iterate the scheme by considering a sequence of optimal stopping problems:
$\displaystyle\left\\{\begin{array}[]{rl}\mathcal{M}^{i}v^{i}_{h,n}(t,z)-v^{i}_{h,n+1}(t,z)&\leq 0\\\
v^{i}_{h,n+1}(t,z)&=\max\left(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h,n}(t,z),\mathcal{H}^{i}v^{i}_{h,n}(t,z)\right)\quad\textrm{in}\quad\overline{\mathcal{I}^{i}}\\\
v^{i}_{h,n+1}(t,z)&=\max\left[\mathbb{E}[v^{i}_{h,n+1}(t+h,Z^{t,z}_{t+h})]+\Sigma_{i}(t,z),\mathcal{M}^{i}v^{i}_{h,n}(t,z)\right]\quad\textrm{in}\quad\mathcal{I}^{i}\\\
v^{i}_{h,n+1}(T,z)&=g^{i}(z)\quad\textrm{in}\quad\mathcal{S}.\end{array}\right.$
(7.10)
### Time and Space Discretization
$\bullet$ Now let us consider the time grid
$\mathbb{T}:=\\{t_{k}=kh,~{}k=0,\dots,M,~{}h=\frac{T}{M}\\}$ with
$M\in\mathbb{N}\setminus\\{0\\}$ and $z\in\mathcal{S}$, and let us start from a
pair of two fixed vectors $(v^{1}_{0},v^{2}_{0})$.
$\displaystyle\left\\{\begin{array}[]{rl}\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z)-v^{i}_{h,n+1}(t_{k},z)&\leq 0\\\
v^{i}_{h,n+1}(t_{k},z)&=\max\left(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z),\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z)\right)\quad\textrm{in}\quad\overline{\mathcal{I}^{i}}\\\
v^{i}_{h,n+1}(t_{k},z)&=\max\left[\mathbb{E}[v^{i}_{h,n+1}(t_{k+1},Z^{t_{k},z}_{t_{k+1}})]+\Sigma_{i}(t_{k},z),\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z)\right]\quad\textrm{in}\quad\mathcal{I}^{i}\\\
v^{i}_{h,n+1}(T,z)&=g^{i}(z)\quad\textrm{in}\quad\mathcal{S}.\\\
\end{array}\right.$ (7.15)
$\bullet$ Let $\mathbb{X}$ be the uniform grid on $[x_{min},x_{max}]$ of step
$dx=\frac{x_{max}-x_{min}}{N_{x}-1},$ where $x_{min}<x_{max}\in(0,+\infty)$
and $N_{x}>0$. For $j=0,...,N_{x}-1,$ we denote $x_{j}:=x_{min}+j\,dx$.
$\bullet$ For $i\in\\{1,2\\}$, let $\mathbb{Y}_{i}$ be the uniform grid on
$[y_{min}^{i},y_{max}^{i}]$ of step
$dy_{i}=\frac{y_{max}^{i}-y_{min}^{i}}{N_{y}-1},$ where
$y_{min}^{i}<y_{max}^{i}\in(0,+\infty).$ For $j=0,...,N_{y}-1,$ we denote
$y_{j}^{i}:=y_{min}^{i}+j\,dy_{i}$.
For
$z_{j}=(x_{j},y_{j}^{1},y_{j}^{2})\in\mathbb{G}:=\mathbb{X}\times\mathbb{Y}_{1}\times\mathbb{Y}_{2}$,
we define the following problem:
$\displaystyle\left\\{\begin{array}[]{rl}\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z_{j})-v^{i}_{h,n+1}(t_{k},z_{j})&\leq 0\\\
v^{i}_{h,n+1}(t_{k},z_{j})&=\max\left(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j}),\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j})\right)\quad\textrm{in}\quad\overline{\mathcal{I}^{i}}\cap\mathbb{G}\\\
v^{i}_{h,n+1}(t_{k},z_{j})&=\max\left[\mathbb{E}[v^{i}_{h,n+1}(t_{k+1},Z^{t_{k},z_{j}}_{t_{k+1}})]+\Sigma_{i}(t_{k},z_{j}),\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z_{j})\right]\quad\textrm{in}\quad\mathcal{I}^{i}\cap\mathbb{G}\\\
v^{i}_{h,n+1}(T,z_{j})&=g^{i}(z_{j})\quad\textrm{in}\quad\mathcal{S}\cap\mathbb{G}.\\\
\end{array}\right.$ (7.20)
### Quantization of the Brownian Motion
To compute the conditional expectations arising in the numerical backward
scheme, we use the optimal quantization method. The main idea is to use the
quantization theory to construct a suitable approximation of the Brownian
motion.
It is known that there exists a unique strong solution for the SDE,
$\frac{dX_{s}^{t,x}}{X_{s}^{t,x}}=\mu ds+\sigma dW_{s}$. So it suffices to
consider a quantization of the Brownian motion itself.
Recall that the optimal quantization technique consists in approximating the
expectation $\mathbb{E}[f(Z)]$, where $Z$ is a normally distributed variable and
$f$ is a given real function, by
$\displaystyle\mathcal{E}[f(\xi)]$ $\displaystyle=$
$\displaystyle\sum_{k\in\xi(\Omega)}f(k)\mathbb{P}(\xi=k)\;.$
The distribution of the discrete variable $\xi$ is known for a fixed
$N:=card(\xi(\Omega))$ and the approximation is optimal as the $L^{2}$-error
between $\xi$ and $Z$ is of order $1/N$ (see [18]). The optimal grid
$\xi(\Omega)$ and the associated weights $\mathbb{P}(\xi=k)$ can be downloaded
from the website: http://www.quantize.maths-fi.com/downloads.
Let $N$ denote the number of elementary quantizers used to quantize the process
$X_{s}$. We replace $X_{s}$ by its quantized random vector
$\hat{X}_{s}$, the optimal quantization of $X_{s}$, and we obtain the quantized
dynamic programming backward scheme:
$\displaystyle\left\\{\begin{array}[]{rl}\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z_{j})-v^{i}_{h,n+1}(t_{k},z_{j})&\leq 0\\\
v^{i}_{h,n+1}(t_{k},z_{j})&=\max\left(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j}),\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j})\right)\quad\textrm{in}\quad\overline{\mathcal{I}^{i}}\cap\mathbb{G}\\\
v^{i}_{h,n+1}(t_{k},z_{j})&=\max\left[\mathcal{E}[v^{i}_{h,n+1}(t_{k+1},Z^{t_{k},z_{j}}_{t_{k+1}})]+\Sigma_{i}(t_{k},z_{j}),\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z_{j})\right]\quad\textrm{in}\quad\mathcal{I}^{i}\cap\mathbb{G}\\\
v^{i}_{h,n+1}(T,z_{j})&=g^{i}(z_{j})\quad\textrm{in}\quad\mathcal{S}\cap\mathbb{G}.\\\
\end{array}\right.$ (7.25)
Hence, the expectations arising in the backward scheme are approximated by
$\displaystyle\mathcal{E}[v^{i}_{h,n+1}(t+h,Z^{t,z}_{t+h})]=\sum_{l=1}^{N}v^{i}_{h,n+1}(t+h,xe^{(\mu-\frac{\sigma^{2}}{2})h+\sigma\sqrt{h}\hat{u_{l}}},y^{i},y^{j}){P}_{l},$
where $\hat{u}_{l}$ is the $N$-quantizer of the standard normal distribution.
The weight associated to this quantizer is
${P}_{l}=\mathbb{P}(\hat{U}=\hat{u}_{l})$. The optimal grid $\hat{u}_{l}$ and
the associated weights ${P}_{l}$ are downloaded from the website:
http://www.quantize.maths-fi.com/downloads.
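For concreteness, the following self-contained sketch approximates such a quantized expectation; it builds a crude Lloyd-type quantizer by Monte Carlo as a stand-in for the precomputed optimal grids cited above:

```python
import numpy as np

def lloyd_quantizer(N, iters=100, m=200_000, seed=0):
    """Crude N-point quantizer of N(0,1); stand-in for the optimal grids."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(m)
    grid = np.quantile(z, (np.arange(N) + 0.5) / N)   # initial centers
    for _ in range(iters):
        idx = np.abs(z[:, None] - grid[None, :]).argmin(axis=1)
        grid = np.array([z[idx == k].mean() for k in range(N)])
    weights = np.bincount(idx, minlength=N) / m
    return grid, weights

f = lambda u: np.maximum(u, 0.0)          # an example test function
u_hat, P = lloyd_quantizer(20)
approx = np.sum(f(u_hat) * P)             # E[f(xi)] = sum_l f(u_l) P_l
print(approx, "vs exact", 1 / np.sqrt(2 * np.pi))
```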
Finally, to approximate the integral $\Sigma_{i}$, we use the rectangle rule
and we obtain:
$\Sigma_{i}(t_{k},z_{j})=\displaystyle{\int_{t_{k}}^{t_{k+1}}f^{i}(Z_{s}^{t_{k},z_{j}})\,ds}\simeq\displaystyle{h\,f^{i}(z_{j})}.$
### Final Numerical Algorithm
Thus, considering the iterative scheme defined in (7.25), we obtain the
following final backward scheme for
$(t_{k},z_{j})\in\mathbb{T}\times\mathbb{G}$:
Algorithm 1 Policy iteration for system of QVIs (one-player)
0: Set $\varepsilon>0$ (numerical tolerance) and $n_{max}\in\mathbb{N}$
(maximum iterations).
0: Pick initial guess: $v^{i}_{h,0}\in\mathbb{R}.$
0: Let $n=0$ (iteration counter) and $R^{0}=+\infty$.
0: while ${R}^{n}>\varepsilon$ and $n\leq n_{max}$ do
0: $\begin{array}[]{lcl}v^{i}_{h,n+1}(T,z_{j})=g^{i}(z_{j}),\\\
v^{i}_{h,n+1}(t_{k},z_{j})=\max\left[\mathcal{E}[v^{i}_{h,n+1}(t_{k+1},Z^{t_{k},z_{j}}_{t_{k+1}})]+\Sigma_{i},\mathcal{M}^{i}v^{i}_{h,n}(t_{k},z_{j})\right].\\\
\end{array}$
0: Let $R^{n+1}$ be the largest pointwise residual to the QVI, i.e.
$R^{n+1}=||v_{h,n+1}^{i}-v_{h,n}^{i}||.$
0: Let $n=n+1.$
0: end while.
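Schematically, the stopping logic of Algorithm 1 is a residual-controlled fixed-point loop; in the sketch below, `backward_step` stands in for one full backward sweep of scheme (7.25) and is left abstract, since it is problem-specific:

```python
import numpy as np

def policy_iteration(v0, backward_step, eps=1e-6, n_max=100):
    """Iterate v <- backward_step(v) until the residual R drops below eps."""
    v, n, R = v0, 0, np.inf
    while R > eps and n <= n_max:
        v_next = backward_step(v)
        R = np.max(np.abs(v_next - v))   # largest pointwise residual
        v, n = v_next, n + 1
    return v
```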
The final algorithm is as follows:
Algorithm 2 Policy iteration for system of QVIs (two players)
0: Set $\varepsilon>0$ (numerical tolerance) $,0<\alpha<1,$ $~{}r^{0}>0$
(relaxation parameters) and $n_{max}\in\mathbb{N}$ (maximum iterations).
0: Pick initial guess:
$(v^{1}_{h,0},v^{2}_{h,0})\in\mathbb{R}\times\mathbb{R}.$
0: Let $n=0$ (iteration counter) and $R^{0}=+\infty$.
0: while ${R}^{n}>\varepsilon$ and $n\leq n_{max}$ do
0: for $i=1,2$ (player $i$) do
0: $l=3-i$ (player $l.$)
0:
$\mathcal{C}^{n}_{l}:=\\{\mathcal{M}^{l}v^{l}_{n}-v^{l}_{n}<-r^{n}\\}\cap\mathbb{G}.$
0: For $t_{k}\in\mathbb{T}$ and $z_{j}\notin\mathcal{C}^{n}_{l},$ let
$v^{n+1}_{i}(t_{k},z_{j})=\max(\mathcal{M}^{i}\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j}),\mathcal{H}^{i}v^{i}_{h,n}(t_{k},z_{j})).$
0: For $(t_{k},z_{j})\in\mathbb{T}\times\mathcal{C}^{n}_{l},$ solve for
$v_{n+1}^{i}(t_{k},z_{j})$ by applying Algorithm $1$ to
$\min\\{{-\frac{\partial v_{n+1}^{i}}{\partial
t}-\mathcal{L}^{i}v_{n+1}^{i}-f^{i},v_{n+1}^{i}-\mathcal{M}^{i}v_{n+1}^{i}}\\}=0$
0: end for.
0: Let $R^{n+1}$ be the largest pointwise residual to the system of QVIs, i.e.
$R^{n+1}=\max(||v_{h,n+1}^{1}-v_{h,n}^{1}||,||v_{h,n+1}^{2}-v_{h,n}^{2}||).$
0: $r^{n+1}:=\max\\{\alpha R^{n+1},\varepsilon\\}$
0: Let $n=n+1.$
0: end while.
## References
* [1] Akian M., Sulem A. and Taksar M. (2001) : "Dynamic optimization of long term growth rate for a portfolio with transaction costs and logarithm utility", Math. Finance, 11, 153-188.
* [2] Aïd R., F. Bernal, M. Mnif , D. Zabaljauregui and Zubelli J. P. (2019) : "A policy iteration algorithm for nonzero-sum stochastic impulse games", ESAIM, 65, p. 27-45.
* [3] Aïd R., M. Basei, G. Callegaro, L. Campi, T. Vargiolu (2019) : "Nonzero-Sum Stochastic Differential Games with Impulse Controls: A Verification Theorem with Applications", Mathematics of Operations Research, 1-28.
* [4] Barles G. (1994) : Solutions de viscosité des équations d’Hamilton-Jacobi, Math. et Appli., Springer Verlag.
* [5] Basei M. (2019) : "Optimal price management in retail energy markets: an impulse control problem with asymptotic estimates", Math. Methods in Operations Research, 89(3), 355-383.
* [6] Bertsekas D. and S. Shreve (1978) : "Stochastic optimal control; the discrete-time case", Math. in Sci. and Eng., Academic Press.
* [7] Bouchard B. and N. Touzi (2011): "Weak dynamic programming principle for viscosity solutions". SIAM J. Control Optim., 49(3), 948–962
* [8] Crandall M., H. Ishii and P.L. Lions (1992) : "User’s guide to viscosity solutions of second order partial differential equations", Bul. Amer. Math. Soc., 27, 1-67.
* [9] Cosso A. (2013) : "Stochastic differential games involving impulse controls and double-obstacle quasi-variational inequalities", SIAM J. Control Optim. 51(3), 2102–2131.
* [10] Evans L. C., P. E. Souganidis (1984) : "Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations", Indiana Univ. Math. J., 33(5), 773-797.
* [11] Gilbarg D. and N. Trudinger (1977) : Elliptic partial differential equations of second order, Springer Verlag, Berlin.
* [12] Isaacs R. (1965) : Differential games. A mathematical theory with applications to warfare and pursuit, control and optimization, John Wiley and Sons, Inc., New York-London-Sydney.
* [13] Ishii K. (1993): "Viscosity solutions of nonlinear second order elliptic PDEs associated with impulse control problems", Funkcial. Ekvac., 36, 123-141.
* [14] Ly Vath V., M. Mnif and H. Pham (2007) : "A model of optimal portfolio selection under liquidity risk and price impact", Finance and Stochastics, 11, 51-90.
* [15] Øksendal B. (2003) : Stochastic Differential Equations, Springer-Verlag, Berlin.
* [16] Pagès, G. and H. Luschgy (2006) : "Functional quantization of a class of Brownian diffusions: a constructive approach", Stoch. Process. Appl., 116, 310–336.
* [17] Pagès, G., Luschgy, H. (2002) : "Functional quantization of Gaussian Process", J. Funct. Anal., 196, 486–531.
* [18] Pagès G., H. Pham and J. Printems (2004) : "Optimal quantization methods and applications to numerical problems in finance", Handbook on Numerical Methods in Finance (S. Rachev, ed.) 253-298.
* [19] Soner H. (1986) : "Optimal control with state-space constraint, I and II", SIAM J. Cont. Optim., 24, 552-561, and 1110-1122.
* [20] Zariphopoulou T. (1988) : Optimal investment-consumption models with constraints, PhD Thesis, Brown University.
# The use of Octree in point cloud analysis with application to cultural
heritage
Rafał Bieńkowski1 and Krzysztof Rutkowski2
###### Abstract.
In this article we present the effects of our work on the subject of the
technical approach to the 3D point cloud data analysis through the use of the
Octree method to compress, analyse and compute the initial data.
###### Key words and phrases:
octree, 3D point cloud, data classification
1 Systems Research Institute of the Polish Academy of Sciences, 01-447 Warsaw,
Poland, Newelska 6,
2 Cardinal Stefan Wyszyński University, Faculty of Mathematics and Natural
Sciences. School of Exact Sciences, Warsaw, Poland, Dewajtis 5
## 1\. Introduction
3D documentation and renderings are becoming more and more ubiquitous in
numerous fields, such as engineering, architecture, urban planning, large-
scale landscape analysis, and cultural heritage (Art History, Archaeology, and
Museum Studies). With the ongoing improvement of acquisition tools (e.g. laser
scanning, photogrammetry, LiDAR) and methods of 3D model generation (3D
modelling and prototyping software), the accuracy and resolution of widely
available 3D data have greatly improved.
In our article, we address two aspects of handling large 3D point clouds, that
is, size reduction and point classification. For both of these aspects, we
apply the octree approach.
The process of improving the 3D data quality follows a similar development to
the use of 2D images, from small bitmaps to high-resolution images. As in the
2D case, for the purpose of storage, analysis or transfer 3D files should be
reduced in size without any significant loss of quality.
For a 3D point cloud to be useful, in most applications, the points need to be
classified first. In many applications, a large number of points can be
classified as noise. Below, we propose an approach to size reduction in 3D
point clouds, applied to data coming from the field of Archaeology. We focus on
the detection of two types of areas present in point clouds: 1) vegetation, and
2) regions of insufficient point density to produce reliable documentation.
## 2\. State of the art
Topography point cloud analysis is a time and resource consuming process,
especially in terms of manual analysis, like classification. There are a lot
of different methods of point cloud creation such as laser scanning or
Structure from Motion (SfM) [4]. In the case of our experiment, we use the
database on the SfM method, based on collecting 2D images and computing them
to create a 3D object.
The idea of Octree was first published by Donald Meagher in 1980 as a method
to represent and process 3D objects in computer graphics [5]. In modern
scientific work, there are a lot of publications on the application of Octree
in different fields of computer science. Below we would like to mention just a
couple of examples:
1. (1)
Octree Grid Topology – used to segment objects that have a known topology [2],
2. (2)
Nearest neighbour search [3],
3. (3)
Colour Quantization [6].
## 3\. Problem statement
In the present contribution, we investigate the use of the octree method for
the size reduction and classification of 3D point clouds. In the experiments
and analysis, we use numerical data sets representing an area of cultural
heritage interest (an archaeological trench, documented during ongoing
fieldwork and its surroundings - topographical data). The data sets are given
in the form of point clouds based on photogrammetry. Each point in the point
cloud (data set) is represented by its georeferenced position in space and its
colour is given in the RGB system.
In our investigation, we use the Octree method to choose points to be merged
based on a distance criterion. In the 3D Octree method, a space/object is
represented by cuboids of various sizes. If points are “close enough”, i.e.,
they fall into the same cuboid with suitably short edges, the points are merged.
## 4\. Data sets
Below we present a short description of the data sets used in the
investigation. For the preliminary results, presented in Section “Numerical
experiment”, we used three data sets. All sets come from the photogrammetric
documentation (based on image processing) of an archaeological site.
Photogrammetric documentation in our data sets has been created in Agisoft
Metashape based on the orthogonal photos taken from a drone.
The sets are as follows:
Set 1 – a point cloud documenting a cross-section of an archaeological trench
with remains of architecture (stone walls) inside the trench. The point cloud
covers an area of around 2.5 by 6 meters in plan and is ca. 70-100 cm deep.
Set 2 – similar to Set 1, this set represents/documents part of an
archaeological trench, but with a strong focus on its surroundings, not the
contents of the trench. This point cloud covers an area of ca. 3.5 by 6.5
meters. During the acquisition of this data, the vegetation around the trench
was also of interest, hence the vertical measurements registered on the cloud
are from 2m above ground (tree height) to 1 m of depth (inside trench).
Set 3 – a point cloud documenting a part of the archaeological heritage site.
The point cloud covers an area of around 10 by 10 meters and ca. 30 meters in
height. This data set has been chosen because of the high vegetation in the
centre of the documented area - a large tree.
All points have location data in a georeferenced coordinates system. In our
case, it is the UTM coordinate system with data represented as latitude,
longitude and elevation. The UTM zone codes for our data sets are UTM37T and
UTM38T. An example of the location data for one selected point takes the
following form (in the case of sites in Georgia).
## 5\. Data processing
It is characteristic for geographic data to reverse the order of the first two
axes, namely the first values given are from the $y$ axis, the second values
are from the $x$ axis and the third values are from the $z$ axis. In our
application, we decided to work with the geographic order of the axes and
therefore we used this order of data in our algorithm.
From the data set we extract the following information:
* •
$y_{\text{min}}$ – minimal value on y-axis of vertex,
* •
$y_{\text{max}}$ – maximal value on y-axis of vertex,
* •
$x_{\text{min}}$ – minimal value on x-axis of vertex,
* •
$x_{\text{max}}$ – maximal value on x-axis of vertex,
* •
$z_{\text{min}}$ – minimal value on z-axis of vertex,
* •
$z_{\text{max}}$ – maximal value on z-axis of vertex.
and, due to the very small differences of the $yx$-positions of points (which
differ only from the 4th position after the decimal point onward), we perform
the following change of the $yx$-data: for the $y$-values and $x$-values we drop
the digits preceding the 4th position after the decimal point and scale the
result by $10^{4}$. The value of $z$ remains unchanged.
The selected-level cuboids, which contain points, are of dimensions
$\frac{y_{\text{max}}-y_{\text{min}}}{2^{lev}}\times\frac{x_{\text{max}}-x_{\text{min}}}{2^{lev}}\times\frac{z_{\text{max}}-z_{\text{min}}}{2^{lev}},$
where $lev$ represents the maximal level of division in the Octree method
(each level splits every edge in half).
The Octree method is as follows:
1. (1)
The given data is put into a cuboid of dimension
$(y_{\text{max}}-y_{\text{min}})\times(x_{\text{max}}-x_{\text{min}})\times(z_{\text{max}}-z_{\text{min}})$,
2. (2)
If the cuboid contains a vertex, split the cuboid into 8 cuboids of equal
dimensions (by dividing each edge by two),
3. (3)
For each new cuboid we apply step 2, whenever the level of nesting is less than
or equal to $lev$.
The preview of this procedure is illustrated in the following graph.
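A minimal sketch of this recursive subdivision (our Python illustration, not the authors' MATLAB implementation):

```python
import numpy as np

def octree(points, lo, hi, level, lev, leaves):
    """Recursively split the cuboid [lo, hi] while it contains points,
    collecting the occupied cuboids of maximal nesting level `lev`."""
    if len(points) == 0:
        return
    if level == lev:
        leaves.append((lo, hi))
        return
    mid = (lo + hi) / 2.0
    for child in range(8):                    # one octant per bit pattern
        mask = np.ones(len(points), dtype=bool)
        child_lo, child_hi = lo.copy(), hi.copy()
        for axis in range(3):
            if (child >> axis) & 1:
                mask &= points[:, axis] >= mid[axis]
                child_lo[axis] = mid[axis]
            else:
                mask &= points[:, axis] < mid[axis]
                child_hi[axis] = mid[axis]
        octree(points[mask], child_lo, child_hi, level + 1, lev, leaves)

pts = np.random.rand(1000, 3)                 # stand-in for a point cloud
leaves = []
octree(pts, pts.min(axis=0), pts.max(axis=0), 0, 5, leaves)
print(len(leaves), "occupied cuboids at level 5")
```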
The result of this procedure is a set of cuboids nested up to the desired level.
From the cuboids of maximal depth, we extract those which contain
vertices. In our approach, we have chosen to use cuboids. It is, however,
possible to implement a similar method using exclusively cubes by choosing the
initial data to be contained in a cube. We found that cuboids are better
suited to our application: in a real-life application, the whole process
benefits from the use of cuboids instead of cubes, as cuboids fit
better into the investigated shapes.
## 6\. Algorithm
The algorithm to classify the point cloud first sorts the cuboids of maximal
depth which contain vertices with respect to the $z_{k}$ values,
$z_{k}=z_{\text{min}}+k\,\frac{z_{\text{max}}-z_{\text{min}}}{2^{lev}}$,
$k=0,1,\dots,2^{lev}-1$, within each $yx$-cell
$[y_{i},y_{i+1}]\times[x_{j},x_{j+1}]$, $i,j\in\\{0,\dots,2^{lev}-1\\}$. Then,
for each cell $[y_{i},y_{i+1}]\times[x_{j},x_{j+1}]$ we find the connected
cuboids of minimal height $z$. We mark these cuboids as "surface" and the
remaining cuboids of the cell as "above".
The whole process of processing the data is illustrated in the following
graph:
Input: obj data in an $n\times 3$ matrix → read vertex positions (readObj
package) → build the Octree structure (Octree package) → cuboids of the
selected level containing vertices → intersection → classified cuboids
(Algorithm).
The algorithm is presented as follows:
Algorithm 1 Finding surface cuboids
for each cuboid of selected depth level containing vertices do
Sort the cuboid data with respect to the $z$ variable for each $yx$-coordinate
end for
for For each $yx$ coordinate do
for For each following pair of cuboids in the $yx$ coordinate (note that if
there is only one cuboid in the $yx$ coordinate, we mark it as "surface") do
if Distance between two following cuboids in $z$ is greater than $0$ then
Mark the first cuboid as "surface"
Break the loop of "For each cuboid of $yx$ coordinate"
else
Mark the first cuboid as "surface"
end if
end for
end for
The cost of the algorithm is $O((2^{lev})^{3})$, since in the pessimistic case
we need to analyse each of the $yxz$ cuboids of the selected level depth.
Below we present the second algorithm to classify cuboids "surface" and
"above" and also fill the gap cuboids between cuboids "surface"-"above" or
"above"-"above" as "gap" cuboids.
Algorithm 2 Finding surface, above cuboids and gaps
for each cuboid of selected depth level containing vertices do
Sort the cuboid data with respect to the $z$ variable for each $yx$-coordinate
end for
for For each $yx$ coordinate do
for For each following pair of cuboids in the $yx$ coordinate do
if Distance between two following cuboids in $z$ is greater than $0$ then
Mark the second cuboid as "above" and the following cuboids of $yx$
coordinates as "above"
Fill in the cuboid of coordinate $yx$, height $[z_{1},z_{2}]$, where $z_{1}$
is the max height of the first cuboid of the pair, $z_{2}$ is the min height
of the second cuboid of the pair
end if
end for
end for
Mark the cuboids of selected depth level containing vertices which are not
"above" as "surface".
## 7\. Numerical Experiment
We consider data from three sets, as described in Section 4. Sets $1$, $2$,
$3$ are made up of $626831$, $1219669$ and $993802$ points respectively. The
level of depth of cuboids was set to $5$ (starting from the initial level
$0$).
Each of the points has a representation in $x,y,z$ values of the georeferenced
coordinates system and the colour in the RGB system. The numerical experiment
was performed on the computer with the following hardware parameters:
processor AMD Ryzen 9 3950X 16-Core, 128 GB RAM DDR4. The software used for
calculations was MATLAB 2020 with the help of the readObj package (see [1])
and the OCTree package (see [7]). The result of performing the Octree method on
the data, choosing the most nested cuboids which contain points of the data, is
displayed in the following pictures:
For the respective data sets the number of cuboids is as follows:
* •
For set 1 – 12297 cuboids on levels 0-5, where 5003 cuboids of level 5 contain
vertices,
* •
For set 2 – 8281 cuboids on levels 0-5, where 3479 cuboids of level 5 contain
vertices,
* •
For set 3 - 4792 cuboids on levels 0-5, where 1677 cuboids of level 5 contain
vertices.
The results are displayed in Appendix A.
## References
* [1] Bernard Abayowa. readObj. https://www.mathworks.com/matlabcentral/fileexchange/18957-readobj, 2007\. [Online; accessed 3-December-2022].
* [2] Ying Bai, Xiao Han, and Jerry L. Prince. Octree grid topology preserving geometric deformable model for three-dimensional medical image segmentation. In Nico Karssemeijer and Boudewijn Lelieveldt, editors, Information Processing in Medical Imaging, pages 556–568, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg.
* [3] Bertram H. Drost and Slobodan Ilic. Almost constant-time 3d nearest-neighbor lookup using implicit octrees. Machine Vision and Applications, 29(2):299–311, Feb 2018.
* [4] J. Markiewicz, M. Pilarska, S. Łapiński, A. Kaliszewska, R. Bieńkowski, and A. Cena. Quality assessment of the use of a medium format camera in the investigation of wall paintings: An image-based approach. Measurement, 132:224–237, 2019.
* [5] Donald Meagher. Octree encoding: A new technique for the representation, manipulation and display of arbitrary 3-d objects by computer. 10 1980.
* [6] Hyun Park, Kwang Kim, and Eui-Young Cha. An effective color quantization method using octree-based self-organizing maps. Computational Intelligence and Neuroscience, 2016, 01 2016.
* [7] Sven. octree - partitioning 3D points into spatial subvolumes. https://www.mathworks.com/matlabcentral/fileexchange/40732-octree-partitioning-3d-points-into-spatial-subvolumes, 2013\. [Online; accessed 3-December-2022].
## Appendix A Figures
Figure A.1. Data set 1. A - coloured points (presented 5% of the total number
of the points). B - Points without RGB colour information. C - Cuboids of
maximal depth containing points. D - Cuboids of maximal depth containing
points with the coloured regions of interest.
Figure A.2. Data set 2. (a) Coloured points (5% of the total number of points
shown). (b) Points without RGB colour information. (c) Cuboids of maximal
depth containing points. (d) Cuboids of maximal depth containing points with
the coloured regions of interest.
Figure A.3. Data set 3. (a) Coloured points (5% of the total number of points
shown). (b) Points without RGB colour information. (c) Cuboids of maximal
depth containing points. (d) Cuboids of maximal depth containing points with
the coloured regions of interest.
# Round-Robin Beyond Additive Agents: Existence and Fairness of Approximate Equilibria
(This work was supported by the ERC Advanced Grant 788893 AMDROMA “Algorithmic
and Mechanism Design Research in Online Markets”, the MIUR PRIN project
ALGADIMAR “Algorithms, Games, and Digital Markets”, and the NWO Veni project
No. VI.Veni.192.153.)
Georgios Amanatidis, Georgios Birmpas, Philip Lazos (Input Output; London,
UK), Stefano Leonardi, Rebecca Reiffenhäuser (Institute for Logic, Language
and Computation, University of Amsterdam; Amsterdam, The Netherlands)
###### Abstract
Fair allocation of indivisible goods has attracted extensive attention over
the last two decades, yielding numerous elegant algorithmic results and
producing challenging open questions. The problem becomes much harder in the
presence of _strategic_ agents. Ideally, one would want to design _truthful_
mechanisms that produce allocations with fairness guarantees. However, in the
standard setting without monetary transfers, it is generally impossible to
have truthful mechanisms that provide non-trivial fairness guarantees.
Recently, Amanatidis et al. [5] suggested the study of mechanisms that produce
fair allocations in their equilibria. Specifically, when the agents have
additive valuation functions, the simple Round-Robin algorithm always has pure
Nash equilibria and the corresponding allocations are _envy-free up to one
good_ (EF1) with respect to the agents’ _true valuation functions_. Following
this agenda, we show that this outstanding property of the Round-Robin
mechanism extends much beyond the above default assumption of additivity. In
particular, we prove that for agents with _cancelable_ valuation functions (a
natural class that contains, e.g., additive and budget-additive functions),
this simple mechanism always has equilibria and even its approximate
equilibria correspond to approximately EF1 allocations with respect to the
agents’ true valuation functions. Further, we show that the approximate EF1
fairness of approximate equilibria surprisingly holds for the important class
of _submodular_ valuation functions as well, even though exact equilibria fail
to exist!
## 1 Introduction
Fair division refers to the problem of dividing a set of resources among a
group of agents in a way that every agent feels they have received a “fair”
share. The mathematical study of (a continuous version of) the problem dates
back to the work of Banach, Knaster, and Steinhaus [36], who, in a first
attempt to formalize fairness, introduced the notion of _proportionality_ ,
i.e., each of the $n$ agents receives at least $1/n$-th of the total value
from her perspective. Since then, different variants of the problem have been
studied in mathematics, economics, political science, and computer science,
and various fairness notions have been defined. The most prominent fairness
notion is _envy-freeness_ [22, 21, 37], where each agent values her set of
resources at least as much as the set of any other agent. When the available
resources are _indivisible_ items, i.e., items that cannot be split among
agents, notions introduced for infinitely divisible resources, like
proportionality and envy-freeness, are impossible to satisfy, even
approximately. In the last two decades fair allocation of indivisible items
has attracted extensive attention, especially within the theoretical computer
science community, yielding numerous elegant algorithmic results for various
new fairness notions tailored to this discrete version of the problem, such as
_envy-freeness up to one good_ (EF1) [28, 16], _envy-freeness up to any good_
(EFX) [18], and _maximin share fairness_ (MMS) [16]. We refer the interested
reader to the surveys of Procaccia [34], Bouveret et al. [15], Amanatidis et
al. [6].
In this work, we study the problem of fairly allocating indivisible _goods_ ,
i.e., items of non-negative value, to _strategic_ agents, i.e., agents who
might misreport their private information if they have an incentive to do so.
Incentivising strategic agents to truthfully report their valuations is a
central goal—and often a notorious challenge—in mechanism design, in general.
Specifically in fair division, this seems particularly necessary, since any
fairness guarantee on the outcome of a mechanism typically holds with respect
to its input, namely the _reported_ preferences of the agents rather than
their true, private preferences which they may have chosen not to reveal.
Without truthfulness, fairness guarantees seem to become meaningless.
Unfortunately, when monetary transfers are not allowed, as is the standard
assumption in fair division, such _truthful_ mechanisms fail to exist for any
meaningful notion of fairness, even for simple settings with two agents who
have additive valuation functions [2].
As an alternative, Amanatidis et al. [5] initiated the study of _equilibrium
fairness_ : when a mechanism always exhibits stable (i.e., pure Nash
equilibrium) states, each of which corresponds to a fair allocation with
respect to the _true_ valuation functions, the need for extracting agents’
true preferences is mitigated. Surprisingly, they show that for the standard
case of additive valuation functions, the simple _Round-Robin_ routine is such
a mechanism with respect to EF1 fairness. Round-Robin takes as input an
ordering of the goods for each agent, and then cycles through the agents and
allocates the goods one by one, giving to each agent their most preferred
available good. For agents with additive valuation functions, Round-Robin is
known to produce EF1 allocations (see, e.g., [30]). Note that, without
monetary transfers, what distinguishes a mechanism from an algorithm is that
its input is the, possibly misreported, agents’ preferences.
To further explore the interplay between incentives and fairness, we take a
step back and focus solely on this very simple, yet fundamental, allocation
protocol. It should be noted that the Round-Robin algorithm is one of the very
few fundamental procedures one can encounter throughout the discrete fair
division literature. Its central role is illustrated by various prominent
results, besides producing EF1 allocations: it can be modified to produce
approximate MMS allocations [3], as well as EF1 allocations for _mixed goods
and chores_ (i.e., items with negative value) [9]. It produces _envy-free_
allocations with high probability when the values are drawn from distributions
[29], it is used to produce a “nice” initial allocation as a subroutine in the
state-of-the-art approximation algorithms for _pairwise maximin share fair_
(PMMS) allocations [25] and EFX allocations [4], it has the lowest
communication complexity of any known fair division algorithm, and, most
relevant to this work, it is the _only_ algorithm for producing fair
allocations for more than two agents that, when viewed as a mechanism, is
known to even have equilibria [8].
We investigate the existence and the EF1 guarantees of approximate pure Nash
equilibria of the Round-Robin mechanism beyond additive valuation functions,
i.e., when the goods already assigned to an agent potentially change how they
value the remaining goods. In particular, we are interested in whether
anything can be said about classes that largely generalize additive functions,
like _cancelable_ functions, i.e., functions where the marginal values with
respect to any subset maintain the relative ordering of the goods, and
_submodular_ functions, i.e., functions capturing the notion of diminishing
returns. Although the stability and equilibrium fairness properties of Round-
Robin have been visited before [8, 5], to the best of our knowledge, we are
the first to study the problem for non-additive valuation functions and go
beyond exact pure Nash equilibria. Cancelable functions also generalize
budget-additive, unit-demand, and multiplicative valuation functions [12], and
recently have been of interest in the fair division literature as several
results can be extended to this class [12, 1, 19]. For similar reasons,
cancelable functions seem to be a good pairing with Round-Robin as well, at
least in the algorithmic setting (see, e.g., Proposition 2.5).
Nevertheless, non-additive functions seem to be massively harder to analyze in
our setting and come with various obstacles. First, it is immediately clear
that, even without strategic agents, the input of an ordinal mechanism
implemented as a simultaneous-move one-shot game, like the Round-Robin
mechanism we study here, can no longer capture the complexity of a submodular
function (see also the relevant discussion in Our Contributions). As a result,
translating this sequential assignment to an estimate on the value of each
agent’s _bundle_ of goods, is not obvious. Lastly, and this applies to
cancelable functions as well, assuming equilibria do exist and enough can be
shown about the value of the assigned bundles to establish fairness, there is
no reason to expect that any fairness guarantee will hold with respect to the
true valuation functions, as the agents may misreport their preferences in an
arbitrary fashion.
### 1.1 Contribution and Technical Considerations
We study the well-known Round-Robin mechanism (Mechanism 1) for the problem of
fairly allocating a set of indivisible goods to a set of strategic agents. We
explore the existence of approximate equilibria, along with the fairness
guarantees that the corresponding allocations provide with respect to the
agents’ true valuation functions. Qualitatively, we generalize the surprising
connection between the stable states of this simple mechanism and its fairness
properties to all approximate equilibria and for valuation
functions as general as subadditive cancelable and submodular. In more detail,
our main contributions can be summarized as follows:
* •
We show that the natural generalization of the _bluff profile_ of Aziz et al.
[8] is an exact PNE that always corresponds to an EF1 allocation, when agents
have _cancelable_ valuation functions (Theorem 3.2 along with Proposition
2.5). Our proof is simple and intuitive and generalizes the results of Aziz et
al. [8] and Amanatidis et al. [5].
* •
For agents with submodular valuation functions, we show that there are
instances where no $(3/4+\varepsilon)$-approximate PNE exists (Proposition
3.4), thus creating a separation between the cancelable and the submodular
cases. Nevertheless, we prove that an appropriate generalization of the bluff
profile is a $1/2$-approximate PNE (Theorem 3.7) that also produces an
$1/2$-EF1 allocation with respect to the true valuation functions (Theorem
3.8).
* •
We provide a unified proof that connects the factor of an approximate PNE with
the fairness approximation factor of the respective allocation. In particular,
any $\alpha$-approximate PNE results in a ${\alpha}/{2}$-EF1 allocation for
subadditive cancelable agents (Theorem 4.5), and in a ${\alpha}/{3}$-EF1
allocation for submodular agents (Theorem 4.4). We complete the picture by
providing lower bounds in both cases (Theorem 4.3 and Proposition 4.8), which
demonstrate that our results are almost tight.
While this is not the first time Round-Robin is considered for non-additive
agents, see, e.g., [13], to the best of our knowledge, we are the first to
study its fairness guarantees for cancelable and submodular valuation
functions, independently of incentives. As a minor byproduct of our work,
Theorem 3.8 and the definition of the bluff profile imply that, given _value
oracles_ for the submodular functions, we can use Round-Robin as a subroutine
to produce ${1}/{2}$-EF1 allocations.
This also raises the question of whether one should allow a more expressive
bid, e.g., a value oracle. While, of course, this is a viable direction, we
avoid it here as it comes with a number of issues. Allowing the input to be
exponential in the number of goods is already problematic, especially when
simplicity and low communication complexity are two appealing traits of the
original mechanism. Moreover, extracting orderings from value oracles would
essentially result in a mechanism equivalent to ours (if the ordering of an
agent depended only on _her_ function) or to a sequential game (if the
orderings depended on all the functions) which is not what we want to explore
here. Note that less information is not necessarily an advantage towards our
goal. While this results in a richer space of equilibria, fairness guarantees
are increasingly harder to achieve.
As a final remark, all the algorithmic procedures we consider run in
polynomial time, occasionally assuming access to value oracles, e.g.,
Algorithms 2, 3, 4. Although we do not consider computational complexity
questions here, like how agents compute best responses or how they reach
approximate equilibria, we do consider such questions interesting directions
for future work.
### 1.2 Further Related Work
The problem of fairly allocating indivisible goods to additive agents in the
non-strategic setting has been extensively studied; for a recent survey, see
Amanatidis et al. [6]. Although the additivity of the valuation functions is
considered a standard assumption, there are many works that explore richer
classes of valuation functions. Some prominent examples include the
computation of EF1 allocations for agents with general non-decreasing
valuation functions [28], EFX allocations (or relaxations of EFX) under agents
with cancelable valuation functions [12, 1, 19] and subadditive valuation
functions [33, 20], respectively, and approximate MMS allocations for
submodular, XOS, and subadditive agents [11, 23].
Moving to the strategic setting, Caragiannis et al. [17] and Markakis and
Psomas [31] were the first to consider the question of whether it is possible
to have mechanisms that are truthful and fair at the same time, again assuming
additive agents. Amanatidis et al. [2] resolved this question for two agents,
showing there is no truthful mechanism with fairness guarantees under any
meaningful fairness notion. As a result, subsequent papers considered truthful
mechanism design under restricted valuation function classes [24, 10].
The stability of Round-Robin was first studied by Aziz et al. [8], who proved
that it always has PNE by using a special case of a retracted result of Bouveret
and Lang [13] (this did not affect the former though; see [7]). Finally,
besides the work of Amanatidis et al. [5] mentioned earlier, the fairness
properties of Round-Robin under strategic agents have recently been studied by
Psomas and Verma [35]. Therein it is shown that Round-Robin, despite being
non-truthful, satisfies a relaxation of truthfulness, as it is _not obviously
manipulable_.
## 2 Preliminaries
For $a\in\mathbb{N}$, let $[a]$ denote the set $\\{1,2,\ldots,a\\}$. We will
use $N=[n]$ to denote the set of agents and $M=\\{g_{1},\ldots,g_{m}\\}$ to
denote the set of goods. Each agent $i\in N$ has a valuation function
$v_{i}:2^{M}\to\mathbb{R}_{\geq 0}$ over the subsets of goods. We assume that
all $v_{i}$ are _normalized_ , i.e., $v_{i}(\emptyset)=0$. We also adopt the
shortcut $v_{i}(T\,|\,S)$ for the _marginal value_ of a set $T$ with respect
to a set $S$, i.e., $v_{i}(T\,|\,S)=v_{i}(T\cup S)-v_{i}(S)$. If $T=\\{g\\}$, we
write $v_{i}(g\,|\,S)$ instead of $v_{i}(\\{g\\}\,|\,S)$. For each agent $i\in N$,
we say that $v_{i}$ is
* •
_non-decreasing_ (often referred to as _monotone_), if $v_{i}(S)\leq v_{i}(T)$
for any $S\subseteq T\subseteq M$.
* •
_submodular_ , if $v_{i}(g\,|\,S)\geq v_{i}(g\,|\,T)$ for any $S\subseteq
T\subseteq M$ and $g\notin T$.
* •
_cancelable_ , if $v_{i}(S\cup\\{g\\})>v_{i}(T\cup\\{g\\})\Rightarrow
v_{i}(S)>v_{i}(T)$ for any $S,T\subseteq M$ and $g\in M\setminus(S\cup T)$.
* •
_additive_ , if $v_{i}(S\cup T)=v_{i}(S)+v_{i}(T)$ for every $S,T\subseteq M$
with $S\cap T=\emptyset$.
* •
_subadditive_ , if $v_{i}(S\cup T)\leq v_{i}(S)+v_{i}(T)$ for every
$S,T\subseteq M$.
Throughout this work, we only consider non-decreasing valuation functions,
e.g., when we refer to submodular functions, we mean non-decreasing submodular
functions. Note that although both submodular and (subadditive) cancelable
functions are strict superclasses of additive functions, neither one is a
superclass of the other.
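To make these definitions concrete, the following brute-force sketch (ours, not the paper's; the helper names `powerset`, `is_submodular`, and `is_cancelable` are our own) checks the two key properties exhaustively on a small ground set `M`:

```python
from itertools import chain, combinations

def powerset(M):
    """All subsets of M, as tuples."""
    return chain.from_iterable(combinations(M, r) for r in range(len(M) + 1))

def is_submodular(v, M):
    """v(g | S) >= v(g | T) for all S ⊆ T ⊆ M and g ∉ T (diminishing returns)."""
    for S in map(set, powerset(M)):
        for T in map(set, powerset(M)):
            if S <= T:
                for g in M - T:
                    if v(S | {g}) - v(S) < v(T | {g}) - v(T):
                        return False
    return True

def is_cancelable(v, M):
    """v(S ∪ {g}) > v(T ∪ {g}) implies v(S) > v(T), for any g ∉ S ∪ T."""
    for S in map(set, powerset(M)):
        for T in map(set, powerset(M)):
            for g in M - (S | T):
                if v(S | {g}) > v(T | {g}) and not v(S) > v(T):
                    return False
    return True

M = {1, 2, 3}
additive = lambda S: sum(S)  # additive, hence both submodular and cancelable
assert is_submodular(additive, M) and is_cancelable(additive, M)
```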
We will occasionally need an alternative characterization of submodular
functions due to Nemhauser et al. [32].
###### Theorem 2.1 (Nemhauser et al. [32]).
A function $v:2^{M}\rightarrow\mathbb{R}_{\geq 0}$ is (non-decreasing)
submodular if and only if we have $v(T)\leq v(S)+\sum_{i\in T\setminus
S}v(i\,|\,S)$, for all $S,T\subseteq M$.
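The characterization can be tested the same way; the sketch below (again ours, reusing `powerset` and `additive` from the previous snippet) verifies the inequality of Theorem 2.1 exhaustively:

```python
def satisfies_nemhauser_bound(v, M):
    """v(T) <= v(S) + Σ_{g ∈ T\S} v(g | S) for all S, T ⊆ M (Theorem 2.1)."""
    for S in map(set, powerset(M)):
        for T in map(set, powerset(M)):
            if v(T) > v(S) + sum(v(S | {g}) - v(S) for g in T - S) + 1e-9:
                return False  # tolerance guards against float round-off
    return True

assert satisfies_nemhauser_bound(additive, M)  # additive ⇒ submodular
```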
Also, the following lemma summarizes some easy observations about cancelable
functions.
###### Lemma 2.2.
If $v:2^{M}\rightarrow\mathbb{R}_{\geq 0}$ is cancelable, then $v(S\cup
R)>v(T\cup R)\Rightarrow v(S)>v(T)$, implying that $v(S)\geq
v(T)\Rightarrow v(S\cup R)\geq v(T\cup R)$, for any
$S,T,R\subseteq M$, such that $R\subseteq M\setminus(S\cup T)$. In particular,
$v(S)=v(T)\Rightarrow v(S\cup R)=v(T\cup R)$.
Note that, for $S,T\subseteq M$, Lemma 2.2 directly implies that
$\operatorname*{arg\,max}_{g\in T}v(g)\subseteq\operatorname*{arg\,max}_{g\in
T}v(g\,|\,S)$.
Despite the fact that the agents have valuation functions, the mechanism we
study (Mechanism 1) is _ordinal_ , i.e., it only takes as input a _preference
ranking_ from each agent. Formally, the preference ranking $\succ_{i}$, which
agent $i$ reports, defines a total order on $M$, i.e., $g\succ_{i}g^{\prime}$
implies that good $g$ precedes good $g^{\prime}$ in agent $i$'s declared
preference ranking.111See the discussion after the statement of Mechanism 1
about why assuming that the reported preference rankings are total (rather
than partial) orders is without loss of generality. We call the vector of the
agents’ declared preference rankings,
$\bm{\succ}\,=(\succ_{1},\ldots,\succ_{n})$, the _reported profile_ for the
instance. So, while an instance to our problem is an ordered triple
$(N,M,\mathbf{v})$, where $\mathbf{v}=(v_{1},\ldots,v_{n})$ is a vector of the
agents’ valuation functions, the input to Mechanism 1 is $(N,M,\bm{\succ})$
instead.
Note that $\succ_{i}$ may not reflect the actual underlying values, i.e.,
$g\succ_{i}g^{\prime}$ does not necessarily mean that
$v_{i}(g)>v_{i}(g^{\prime})$ or, more generally,
$v_{i}(g\,|\,S)>v_{i}(g^{\prime}\,|\,S)$ for a given $S\subseteq M$. This
might be due to agent $i$ misreporting her preference ranking, or due to the
fact that any single preference ranking is not expressive enough to fully
capture all the partial orders induced by a submodular function. Nevertheless,
a valuation function $v_{i}$ does induce a _true preference ranking_
$\succcurlyeq^{*}_{i|S}$ for each set $S\subseteq M$, which is a partial
order, i.e., $g\succcurlyeq^{*}_{i|S}g^{\prime}\Leftrightarrow
v_{i}(g\,|\,S)\geq v_{i}(g^{\prime}\,|\,S)$ for all $g,g^{\prime}\in M$. We
use $\succ^{*}_{i|S}$ if the corresponding preference ranking is _strict_ ,
i.e., when
$g\succcurlyeq^{*}_{i|S}g^{\prime}\,\wedge\,g^{\prime}\succcurlyeq^{*}_{i|S}g\,\Rightarrow\,g=g^{\prime}$,
for all $g,g^{\prime}\in M\setminus S$. For additive (and more generally, for
cancelable) valuations, we drop $S$ from the notation and simply write
$\succcurlyeq^{*}_{i}$ or $\succ^{*}_{i}$. Finally, for a total order $\succ$
on $M$ and a set $T\subseteq M$, we use $\mathrm{top}(\succ,T)$ to denote the
“largest” element of $T$ with respect to $\succ$.
### 2.1 Fairness Notions
A fair division mechanism produces an _allocation_ $(A_{1},\ldots,A_{n})$,
i.e., a partition of $M$, where $A_{i}$ is the _bundle_ of agent $i$.
Requiring a partition corresponds to assuming no free disposal, namely all
the goods must be allocated.
There are several different notions which attempt to capture which allocations
are “fair”. The most prominent such notion in the fair division literature has
been _envy-freeness_ (EF) [22, 21, 37], which has been the starting point for
other relaxed notions, more appropriate for the indivisible goods setting we
study here, as _envy-freeness up to one good_ (EF1) [28, 16] and _envy-
freeness up to any good_ (EFX) [18]. Here we focus on EF1.
###### Definition 2.3.
An allocation $(A_{1},\ldots,A_{n})$ is
* •
$\alpha$-envy-free ($\alpha$-EF), if for every $i,j\in N$,
$v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j})$.
* •
$\alpha$-envy-free up to one good ($\alpha$-EF1), if for every pair of agents
$i,j\in N$, with $A_{j}\neq\emptyset$, there exists a good $g\in A_{j}$, such
that $v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j}\setminus\\{g\\})$.
When for every agent $j\in N$ with $A_{j}\neq\emptyset$, we have
$v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j}\setminus\\{g\\})$ for some good $g\in
A_{j}$, we say that $(A_{1},\ldots,A_{n})$ is $\alpha$-EF1 _from agent $i$’s
perspective_, even when the allocation is not $\alpha$-EF1!
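As an illustration of Definition 2.3 (ours, not part of the paper), the following sketch checks $\alpha$-EF1 exhaustively; `alloc` is a list of bundles (sets) and each `valuations[i]` is a value oracle on sets:

```python
def is_alpha_ef1(alloc, valuations, alpha=1.0):
    """Check α-EF1: for every pair (i, j) with A_j nonempty, some good
    g ∈ A_j satisfies v_i(A_i) >= α · v_i(A_j \ {g})."""
    n = len(alloc)
    for i in range(n):
        v = valuations[i]
        for j in range(n):
            if i == j or not alloc[j]:
                continue
            if all(v(alloc[i]) < alpha * v(alloc[j] - {g}) for g in alloc[j]):
                return False  # i's envy toward j survives removing any one good
    return True
```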
### 2.2 Mechanisms and Equilibria
We are interested in _mechanisms_ that produce allocations with EF1
guarantees. When no payments are allowed, like in our setting, an allocation
mechanism $\mathcal{M}$ is just an allocation algorithm that takes as input
the agents’ reported preferences. In particular, Round-Robin, the mechanism of
interest here, takes as input the reported profile $\bm{\succ}$ and produces
an allocation of all the goods. This distinction in terminology is necessary
as the reported input may not be consistent with the actual valuation
functions due to the agents’ incentives. When the allocation returned by
$\mathcal{M}(\bm{\succ})$ has some fairness guarantee, e.g., it is $0.5$-EF1,
we will attribute the same guarantee to the reported profile itself, i.e., we
will say that $\bm{\succ}$ is $0.5$-EF1.
We study the fairness guarantees of the (approximate) pure Nash equilibria of
Round-Robin. Given a preference profile
$\bm{\succ}\,=({\succ}_{1},\ldots,{\succ}_{n})$, we write $\bm{\succ}_{-i}$ to
denote
$({\succ}_{1},\ldots,{\succ}_{i-1},\allowbreak{\succ}_{i+1},\ldots,{\succ}_{n})$
and given a preference ranking ${\succ}^{\prime}_{i}$ we use
$({\succ}^{\prime}_{i},\bm{\succ}_{-i})$ to denote the profile
$({\succ}_{1},\ldots,{\succ}_{i-1},\allowbreak{\succ}^{\prime}_{i},\allowbreak{\succ}_{i+1},\ldots,{\succ}_{n})$.
For the next definition we abuse the notation slightly: given an allocation
$(A_{1},\ldots,\allowbreak A_{n})$ produced by $\mathcal{M}(\bm{\succ})$, we
write $v_{i}(\mathcal{M}(\bm{\succ}))$ to denote $v_{i}({A}_{i})$; similarly
for $\mathcal{M}({\succ}^{\prime}_{i},\bm{\succ}_{-i})$.
###### Definition 2.4.
Let $\mathcal{M}$ be an allocation mechanism and consider a preference profile
$\bm{\succ}\,=({\succ}_{1},\ldots,\allowbreak{\succ}_{n})$. We say that the
total order ${\succ}_{i}$ is an _$\alpha$ -approximate best response_ to
$\bm{\succ}_{-i}$ if for every total order, i.e., permutation
${\succ}^{\prime}_{i}$ of $M$, we have $\alpha\cdot
v_{i}(\mathcal{M}({\succ}^{\prime}_{i},\bm{\succ}_{-i}))\leq
v_{i}(\mathcal{M}(\bm{\succ}))$. The profile $\bm{\succ}$ is an _$\alpha$
-approximate pure Nash equilibrium_ (PNE) if, for each $i\in N$, ${\succ}_{i}$
is an $\alpha$-approximate best response to $\bm{\succ}_{-i}$.
When $\alpha=1$, we simply refer to best responses and exact PNE.
### 2.3 The Round-Robin Mechanism
We state Round-Robin as a mechanism (Mechanism 1) that takes as input a
reported profile $({\succ}_{1},\ldots,{\succ}_{n})$. For the sake of
presentation, we assume that the agents in each _round_ (lines 3–6) are always
considered according to their “name”, i.e., agent $1$ is considered first,
agent $2$ second, and so on, instead of having a permutation determining the
priority of the agents as an extra argument of the input. This is without loss
of generality, as it only requires renaming the agents accordingly. We often
refer to the process of allocating a good to an agent (lines 4–6) as a _step_
of the mechanism.
Mechanism 1 Round-Robin$({\succ}_{1},\ldots,{\succ}_{n})$ // For $i\in N$,
${\succ}_{i}$ is the reported preference ranking of agent $i$.
1:$S=M$; $(A_{1},\dots,A_{n})=(\emptyset,\ldots,\emptyset)$; $k=\lceil
m/n\rceil$
2:for $r=1,\dots,k$ do // Each value of $r$ determines the corresponding
round.
3: for $i=1,\dots,n$ do // The combination of $r$ and $i$ determines the
corresponding step.
4: $g=\mathrm{top}(\succ_{i},S)$
5: $A_{i}=A_{i}\cup\\{g\\}$ // The current agent receives (what appears to be)
her favorite available good.
6: $S=S\setminus\\{g\\}$ // The good is no longer available.
7:return $(A_{1},\dots,A_{n})$
Note that there is no need for a tie-breaking rule here, as the reported
preference rankings are assumed to be total orders. Equivalently, one could
allow for partial orders (either directly or via cardinal bids as it is done
in [5]) paired with a deterministic tie-breaking rule, e.g., lexicographic
tie-breaking, a priori known to the agents.
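For concreteness, a direct Python transcription of Mechanism 1 might look as follows; this is a sketch under the paper's conventions (agents served in the order $1,\ldots,n$, each reported ranking given as a list of all goods, most preferred first), not the authors' code:

```python
def round_robin(prefs, goods):
    """Mechanism 1: Round-Robin on the reported profile `prefs`."""
    n = len(prefs)
    available = set(goods)
    bundles = [set() for _ in range(n)]
    while available:
        for i in range(n):  # one round: each agent picks once
            if not available:
                break
            # top(≻_i, S): agent i's highest-ranked good still available
            g = next(g for g in prefs[i] if g in available)
            bundles[i].add(g)
            available.remove(g)
    return bundles
```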
In the rest of the paper, we will assume that $m=kn$ for some
$k\in\mathbb{N}$, for simplicity. Note that this is without loss of
generality, as we may introduce at most $n-1$ dummy goods that have marginal
value of $0$ with respect to any set for everyone and append them at the end
of the reported preference rankings to be allocated during the last steps of
the mechanism.
We have already mentioned that Round-Robin as an algorithm produces EF1
allocations for additive agents, where the input is assumed to be any strict
variant
$\bm{\succ}^{*}\,=(\succ^{*}_{1|\emptyset},\succ^{*}_{2|\emptyset},\ldots,\succ^{*}_{n|\emptyset})$
of the truthful profile
$(\succcurlyeq^{*}_{1|\emptyset},\succcurlyeq^{*}_{2|\emptyset},\ldots,\succcurlyeq^{*}_{n|\emptyset})$,
i.e., the profile where each agent ranks the goods according to their
singleton value. This property fully extends to cancelable valuation functions
as well. The proof of Proposition 2.5 is rather simple, but not as
straightforward as the additive case; note that it requires Lemma 3.3 from the
next section.
###### Proposition 2.5.
Let $\bm{\succ}^{*}$ be as described above. When all agents have cancelable
valuation functions, the allocation returned by Round-Robin$(\bm{\succ}^{*})$
is EF1.
###### Proof.
Let $(A_{1},\dots,A_{n})$ be the allocation returned by Round-
Robin$(\bm{\succ}^{*})$. Fix two agents, $i$ and $j$, and let
$A_{i}=\\{x_{1},x_{2},\ldots,x_{k}\\}$ and
$A_{j}=\\{y_{1},y_{2},\ldots,y_{k}\\}$, where the goods in both sets are
indexed according to the round in which they were allocated to $i$ and $j$,
respectively. By the way Mechanism 1 is defined, we have
$x_{r}\succ^{*}_{i|\emptyset}y_{r+1}$, for all $r\in[k-1]$. Therefore,
$x_{r}\succcurlyeq^{*}_{i|\emptyset}y_{r+1}$, or equivalently,
$v_{i}(x_{r})\geq v_{i}(y_{r+1})$, for all $r\in[k-1]$. Thus, by Lemma 3.3, we
get $v_{i}(A_{i}\setminus\\{x_{k}\\})\geq v_{i}(A_{j}\setminus\\{y_{1}\\})$,
and using the fact that $v_{i}$ is non-decreasing, $v_{i}(A_{i})\geq
v_{i}(A_{j}\setminus\\{y_{1}\\})$. ∎
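Combining the earlier sketches, one can observe Proposition 2.5 in action on a random additive (hence cancelable) instance; the snippet below (ours) reuses `round_robin` and `is_alpha_ef1` from the previous blocks and builds the truthful profile $\bm{\succ}^{*}$ by sorting goods by singleton value:

```python
import random

random.seed(1)
goods = list(range(9))                      # m = 9 goods, n = 3 agents, k = 3
weights = [[random.randint(1, 100) for _ in goods] for _ in range(3)]
vals = [lambda S, w=w: sum(w[g] for g in S) for w in weights]   # additive v_i
prefs = [sorted(goods, key=lambda g, w=w: -w[g]) for w in weights]  # profile ≻*

alloc = round_robin(prefs, goods)
assert is_alpha_ef1(alloc, vals, alpha=1.0)  # EF1, as Proposition 2.5 predicts
```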
## 3 Existence of approximate PNE
At first glance, it is not clear why Mechanism 1 has any pure Nash equilibria,
even approximate ones for a constant approximation factor. For additive
valuation functions, however, it is known that for any instance we can
construct a simple preference profile, called the _bluff profile_ , which is
an exact PNE. While the proof of this fact, in its full generality, is
fragmented over three papers [8, 14, 5], we give here a simple proof that
generalizes the existence of exact PNE to cancelable valuation functions. As
we shall see later, extending this result to submodular functions is not
possible and even defining a generalization of the bluff profile which is a
$0.5$-approximate PNE is not straightforward.
### 3.1 Cancelable valuations
Defining the bluff profile for cancelable agents, we will start from a strict
variant of the truthful profile
$(\succcurlyeq^{*}_{1|\emptyset},\succcurlyeq^{*}_{2|\emptyset},\ldots,\succcurlyeq^{*}_{n|\emptyset})$,
i.e., the profile where each agent ranks the goods according to their value
(as singletons) in descending order, as we did for Proposition 2.5. Assume
that any ties are broken deterministically to get the strict version
$\bm{\succ}^{*}\,=(\succ^{*}_{1|\emptyset},\succ^{*}_{2|\emptyset},\ldots,\succ^{*}_{n|\emptyset})$.
Now, consider $\textrm{Round-Robin}(\bm{\succ}^{*})$ and let
$h_{1},h_{2},\ldots,h_{m}$ be a renaming of the goods according to the order
in which they were allocated and $\succ^{\mathrm{b}}$ be the corresponding
total order (i.e.,
$h_{1}\succ^{\mathrm{b}}h_{2}\succ^{\mathrm{b}}\ldots\succ^{\mathrm{b}}h_{m}$).
The _bluff profile_ is the preference profile
$\bm{\succ}^{\mathrm{b}}\,=(\succ^{\mathrm{b}},\succ^{\mathrm{b}},\ldots,\succ^{\mathrm{b}})$,
where everyone ranks the goods in the order they were allocated in Round-
Robin$(\bm{\succ}^{*})$. The following fact follows directly from the
definition of the bluff profile and the description of Round-Robin.
###### Fact 3.1.
If $\bm{\succ}^{*}$ is a strict version of the truthful preference profile
and $\bm{\succ}^{\mathrm{b}}$ is the corresponding bluff profile, then
$\mathrm{Round\text{-}Robin}(\bm{\succ}^{\mathrm{b}})$ and
$\mathrm{Round\text{-}Robin}(\bm{\succ}^{*})$ both return the same allocation.
An interesting observation about this fact is that, combined with Proposition
2.5 and Theorem 3.2, it implies that there is at least one PNE of Mechanism 1
which is EF1! Of course, it is now known that all exact PNE of Round-Robin are
EF1 for agents with _additive_ valuation functions and, as we will see later
on, even approximate PNE have (approximate) EF1 guarantees for much more
general instances, including the case of _subadditive cancelable_ valuation
functions.
###### Theorem 3.2.
When all agents have cancelable valuation functions, the bluff profile is an
exact PNE of Mechanism 1.
We first need to prove the following lemma that generalizes a straightforward
property of additive functions to cancelable functions.
###### Lemma 3.3.
Suppose that $v(\cdot)$ is a cancelable valuation function. Consider sets
$X=\\{x_{1},x_{2},\ldots,x_{k}\\}$ and $Y=\\{y_{1},y_{2},\ldots,y_{k}\\}$. If
for every $j\in[k]$, we have that $v(x_{j})\geq v(y_{j})$, then $v(X)\geq
v(Y)$.
###### Proof.
We begin by arguing that it is without loss of generality to first assume that
the elements of $X$ are ordered by non-increasing value with respect to $v$
and then also assume that $y_{j}\notin\\{x_{1},x_{2},\ldots,x_{j-1}\\}$, for
any $j\in[k]$. The former is indeed a matter of reindexing, if necessary, the
elements of $X$ and consistently reindexing the corresponding elements of $Y$.
For the latter, suppose that there exists $j$ such that $y_{j}=x_{t}$ for
$t\leq j-1$ and consider the smallest $t$ for which this happens. We have
$v(x_{t})\geq v(x_{t+1})\geq\ldots\geq v(x_{j})$ by the assumption on the
ordering of the elements of $X$, $v(x_{j})\geq v(y_{j})$ by hypothesis, and
$v(y_{j})=v(x_{t})$. Thus, $v(x_{t})=v(x_{t+1})=\ldots=v(x_{j})$. Now we may
rename the elements of $Y$ to $\\{y^{\prime}_{1},\ldots,y^{\prime}_{k}\\}$ by
inserting $y_{j}$ to the $t$-th position, i.e., $y^{\prime}_{t}=y_{j}$,
$y^{\prime}_{s}=y_{s-1}$, for $t+1\leq s\leq j$, and $y^{\prime}_{s}=y_{s}$,
for $s<t$ or $s>j$. Since only $y_{t},y_{t+1},\ldots,y_{j}$ changed indices
but $v(x_{t})=v(x_{t+1})=\ldots=v(x_{j})$, we again have that $v(x_{j})\geq
v(y^{\prime}_{j})$ for every $j\in[k]$. Moreover, now the smallest $\ell$ for
which there exists $j>\ell$ such that $y_{j}=x_{\ell}$ is strictly larger than
$t$. By repeating this renaming of the elements of $Y$ we end up with a
renaming $\\{y^{*}_{1},\ldots,y^{*}_{k}\\}$ such that for every $j\in[k]$,
$v(x_{j})\geq v(y^{*}_{j})$ and
$y^{*}_{j}\notin\\{x_{1},x_{2},\ldots,x_{j-1}\\}$.
So, assuming that the elements of $X$ are ordered in non-increasing value with
respect to $v$ and that $y_{j}\notin\\{x_{1},x_{2},\ldots,x_{j-1}\\}$, for any
$j\in[k]$, suppose towards a contradiction that $v(X)<v(Y)$. That is,
$v(\\{x_{1},x_{2},\ldots,x_{k}\\})\allowbreak<v(\\{y_{1},y_{2},\allowbreak\ldots,y_{k}\\})$.
Observe that if $v(\\{x_{1},x_{2},\ldots,x_{k-1}\\})\geq
v(\\{y_{1},y_{2},\ldots,y_{k-1}\\})$, this would imply that
$v(\\{x_{1},\ldots,x_{k-1},y_{k}\\})\geq v(\\{y_{1},\ldots,y_{k-1},y_{k}\\})$,
by the definition of cancelable valuations and the fact that
$y_{k}\notin\\{x_{1},\ldots,x_{k-1}\\}\cup\\{y_{1},\ldots,y_{k-1}\\}$. This
leads to
$v(\\{x_{1},\ldots,x_{k-1},x_{k}\\})\geq
v(\\{x_{1},\ldots,x_{k-1},y_{k}\\})\geq v(\\{y_{1},\ldots,y_{k-1},\allowbreak
y_{k}\\})\,,$
where the first inequality follows from $v(x_{k})\geq v(y_{k})$ and Lemma 2.2,
contradicting our initial assumption. Therefore,
$v(\\{x_{1},\ldots,x_{k-1}\\})<v(\\{y_{1},\ldots,y_{k-1}\\})$. By repeating
the same argument $k-2$ more times, we end up with $v(x_{1})<v(y_{1})$, a
contradiction. ∎
###### Proof of Theorem 3.2.
Now we show that the bluff profile for cancelable valuations is an exact PNE.
Consider the goods named $h_{1},\dots,h_{m}$ as in the bluff profile, i.e., by
the order in which they are picked when each agent reports their preference
order to be the one induced by all singleton good values. Consider agent $i$.
Her assigned set of goods under the bluff profile is
$A_{i}^{\mathrm{b}}=\\{h_{i},h_{n+i},\dots,h_{(k-1)n+i}\\}$, where $k=m/n$.
Assume now that she deviates from $\succ^{\mathrm{b}}$ to $\succ_{i}$,
resulting in some allocated set $A_{i}=\\{y_{1},y_{2},\dots,y_{k}\\}$, where
we assume $y_{r}$ to be allocated in round $r$. We need to show
$v_{i}(A_{i}^{\mathrm{b}})\geq v_{i}(A_{i})$.
To this end, we compare the goods allocated to agent $i$ in both reports, one
by one. If $v_{i}(y_{r})\leq v_{i}(h_{(r-1)n+i})$ for every $r\in[k]$, then we
are done by applying Lemma 3.3 with $A_{i}^{\mathrm{b}}$ and $A_{i}$. If some
of these inequalities fail, let $r$ denote the latest round such that
$v_{i}(y_{r})>v_{i}(h_{(r-1)n+i})$. Therefore, in the execution of Mechanism 1
with the bluff profile as input, $y_{r}$ was no longer available in round $r$.
However, $y_{r}$ becomes available in round $r$ once agent $i$ deviates. This
can only stem from the fact that at some point before round $r$, a good
$h_{t}$ with $t>(r-1)n+i$ was picked (since the overall number of goods picked
per round always stays the same). Clearly, the only agent who could have done
so (since she is the only one deviating from the common bluff order) is agent
$i$. Therefore, it holds that $h_{t}=y_{j}$ for some $j<r$. Now, we replace
the ordered set $Y=(y_{1},y_{2},\dots,y_{k})$ by
$Y^{\prime}=(y_{1},\dots,y_{j-1},y_{r},y_{j+1},\dots,y_{r-1},y_{j},y_{r+1},\dots,y_{k})$,
i.e., we simply exchange $y_{r}$ and $y_{j}$. It will be convenient to rename
$y_{1},\ldots,y_{k}$ so that
$Y^{\prime}=(y^{\prime}_{1},y^{\prime}_{2},\dots,y^{\prime}_{k})$.
We claim that if agent $i$ reports a preference ranking
$\succ^{\prime}_{i}$ that starts with all goods in $Y^{\prime}$, in that
specific order, followed by everything else, in any order, she still gets
$A_{i}$ but the goods are allocated in the order suggested by $Y^{\prime}$.
Indeed, first notice that the first $j-1$ rounds of Round-Robin will be the
same as in the run with the original deviation $\succ_{i}$. Further,
$y^{\prime}_{j}=y_{r}$ is allocated earlier under $\succ^{\prime}_{i}$ than
under $\succ_{i}$, and thus it surely is available at the time. After that,
rounds $j-1$ to $r-1$ will be the same as in the run with the deviation
$\succ_{i}$. Now $y^{\prime}_{r}=y_{j}$ is allocated later than before, namely
in round $r$, but it is not among the first $(r-1)n+i$ goods in the bluff
order, as noted above, which means it is not allocated to any other agent in
any round before the $r$-th under $\succ^{\prime}_{i}$. Finally, rounds $r+1$
to $k$ will be the same as in the run with $\succ_{i}$.
Although agent $i$ still is assigned the same set $A_{i}$ by deviating to
$\succ^{\prime}_{i}$, we now have $v_{i}(y^{\prime}_{r})=v_{i}(y_{j})\leq
v_{i}(h_{(r-1)n+i})$, where the inequality holds because both goods are
available in round $r$ of the bluff run, and agent $i$ prefers $h_{(r-1)n+i}$.
Also, all later goods in $Y^{\prime}$ remain unchanged, i.e.,
$y^{\prime}_{s}=y_{s}$ for $s>r$. Therefore, the latest occurrence of some
$v_{i}(y^{\prime}_{\ell})>v_{i}(h_{(\ell-1)n+i})$ now happens at an earlier point in the
sequence, if at all. Repeating this process until no such occurrence is left
yields an ordering $Y^{*}=(y^{*}_{1},y^{*}_{2},\dots,y^{*}_{k})$ of $A_{i}$
such that for all $r\in[k]$, $v_{i}(y^{*}_{r})\leq v_{i}(h_{(r-1)n+i})$. Now
using Lemma 3.3 completes the proof. ∎
### 3.2 Submodular valuations
We move on to the much more general class of submodular valuations. In order
to define the bluff profile in this case, we again would like to start from
the truthful profile. However, recall that Round-Robin restricts each agent’s
report to specifying an ordering on the good set $M$ and these preference
rankings are not expressive enough to fully capture submodular valuation
functions. In fact, it is not obvious what ‘truthful’ means here without
further assumptions on what information is known by the agents. Still, we
define a truthfully greedy allocation and use this as our starting point.
Imagine that, instead of having a full preference profile from the beginning,
we only ask the active agent $i$ (i.e., the agent to which we are about to
allocate a new good) for the good with the largest marginal value with respect
to her current set of goods $A_{i}$ and give this to her. Let
$h_{1},h_{2},\ldots,h_{m}$ be a renaming of the goods according to the order
in which they would be allocated in this hypothetical truthfully greedy
scenario and $\succ^{\mathrm{b}}$ be the corresponding total order. Like in
the cancelable case, the bluff profile is the preference profile
$\bm{\succ}^{\mathrm{b}}\,=(\succ^{\mathrm{b}},\succ^{\mathrm{b}},\ldots,\succ^{\mathrm{b}})$.
Formally, the renaming of the goods is performed as described in Algorithm 2
below. It should be noted that this definition of the bluff profile is
consistent with the definition for cancelable functions, assuming that all
ties are resolved lexicographically.
Algorithm 2 Greedy renaming of goods for defining the bluff profile
Input: $N$, $M$, value oracles for $v_{1}(\cdot),\ldots,v_{n}(\cdot)$
1:$X_{i}=\emptyset$ for $i\in[n]$
2:for $j=1,\dots,m$ do
3: $i=((j-1)\bmod n)+1$
4: $h_{j}=\displaystyle\operatorname*{arg\,max}_{g\in
M\setminus\bigcup_{\ell}X_{\ell}}v_{i}(g\,|\,X_{i})$ // Ties are broken
lexicographically.
5: $X_{i}=X_{i}\cup\\{h_{j}\\}$
6:return $(h_{1},h_{2},\ldots,h_{m})$
Also notice that the allocation
$\mathrm{Round\text{-}Robin}(\bm{\succ}^{\mathrm{b}})$ produced under the
bluff profile is exactly $(X_{1},X_{2},\allowbreak\ldots,\allowbreak X_{n})$,
as described in Algorithm 2, i.e.,
$X_{i}=A_{i}^{\mathrm{b}}=\\{h_{i},h_{n+i},\dots,h_{(k-1)n+i}\\}$, where
recall that $k=m/n$.
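In the same spirit, here is a sketch of Algorithm 2 (ours; `v[i]` is assumed to be a value oracle mapping a frozenset of goods to a number, and ties are broken by the natural order on the goods rather than lexicographically on names, an inessential choice):

```python
def bluff_order(v, goods, n):
    """Algorithm 2: greedy renaming h_1, ..., h_m defining the bluff profile."""
    X = [frozenset() for _ in range(n)]  # X_i: goods greedily given to agent i
    remaining = set(goods)
    order = []
    for j in range(len(goods)):
        i = j % n  # the agent whose turn it is (0-indexed)
        # good of maximum marginal value for agent i w.r.t. X[i];
        # sorting first makes the tie-breaking deterministic
        h = max(sorted(remaining), key=lambda g: v[i](X[i] | {g}) - v[i](X[i]))
        X[i] |= {h}
        remaining.remove(h)
        order.append(h)
    return order, X  # X is also the allocation Round-Robin(≻^b) produces
```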
The main result of this section is Theorem 3.7 stating that the bluff profile
is a $\frac{1}{2}$-approximate PNE when agents have submodular valuation
functions. While this sounds weaker than Theorem 3.2, it should be noted that
for submodular agents Mechanism 1 does not have PNE in general, even for
relatively simple instances, as stated in Proposition 3.4. In fact, even the
existence of approximate equilibria can be seen as rather surprising, given
the generality of the underlying valuation functions.
###### Proposition 3.4.
There exists an instance where all agents have submodular valuation functions
such that Mechanism 1 has no $(\frac{3}{4}+\varepsilon)$-approximate PNE.
###### Proof.
Consider an instance with 2 agents and 4 goods
$M=\\{g_{1},g_{2},g_{3},g_{4}\\}$, with the following valuations for all
possible 2-sets:
$v_{1}(\\{g_{1},g_{2}\\})=3\quad v_{1}(\\{g_{1},g_{3}\\})=3\quad v_{1}(\\{g_{1},g_{4}\\})=4\quad v_{1}(\\{g_{2},g_{3}\\})=4\quad v_{1}(\\{g_{2},g_{4}\\})=3\quad v_{1}(\\{g_{3},g_{4}\\})=3$
$v_{2}(\\{g_{1},g_{2}\\})=4\quad v_{2}(\\{g_{1},g_{3}\\})=4\quad v_{2}(\\{g_{1},g_{4}\\})=3\quad v_{2}(\\{g_{2},g_{3}\\})=3\quad v_{2}(\\{g_{2},g_{4}\\})=4\quad v_{2}(\\{g_{3},g_{4}\\})=4$
In addition, all individual goods have the same value: $v_{1}(x)=v_{2}(x)=2$
for $x\in M$, while all $3$-sets and $4$-sets have value $4$, for both agents.
We begin by establishing that these valuation functions are indeed submodular.
Observe that for any set $S\subseteq M$, any $i\in[2]$, and any $j\in[4]$ with
$g_{j}\notin S$, we have:
$|S|=0\Rightarrow v_{i}(g_{j}\,|\,S)=2$
$|S|=1\Rightarrow v_{i}(g_{j}\,|\,S)\in\\{1,2\\}$
$|S|=2\Rightarrow v_{i}(g_{j}\,|\,S)\in\\{0,1\\}$
$|S|=3\Rightarrow v_{i}(g_{j}\,|\,S)=0\,,$
which immediately implies that both valuation functions are indeed submodular.
Notice that for any reported preferences ${\succ}_{1},{\succ}_{2}$, one of the
two agents will receive goods leading to a value of $3$. If this is the agent
$1$, she can easily deviate and get $4$ instead. In particular, if agent $2$
has good $g_{2}$ or $g_{3}$ first in their preferences then agent $1$ can get
$\\{g_{1},g_{4}\\}$, and if agent $2$ has good $g_{1}$ or $g_{4}$ as first
then agent $1$ can get $\\{g_{2},g_{3}\\}$ instead. On the other hand, if
agent $2$ received a value of $3$, she can also always deviate to get $4$.
Notice that for any $g_{a}$, agent $2$ always has two different sets
$\\{g_{a},g_{b}\\},\\{g_{a},g_{c}\\}$ with value $4$ and one
$\\{g_{a},g_{d}\\}$ with value 3. Thus, for any preference of agent $1$ with
$g_{\hat{a}}\succ_{1}g_{\hat{b}}\succ_{1}g_{\hat{c}}\succ_{1}g_{\hat{d}}$,
agent 2 can deviate and get either $\\{g_{\hat{b}},g_{\hat{d}}\\}$ or
$\\{g_{\hat{c}},g_{\hat{d}}\\}$, one of which must have value $4$. Therefore,
in every outcome there exists an agent that can deviate to improve their value
from $3$ to $4$. ∎
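The instance above is small enough to verify the claim exhaustively. The sketch below (our check, not part of the paper) enumerates all $4!\times 4!$ reported profiles, runs Round-Robin (re-implemented compactly here for self-containment), and confirms that no profile is an $\alpha$-approximate PNE for any $\alpha>3/4$:

```python
from itertools import permutations

goods = (1, 2, 3, 4)
pair_values = {frozenset({1, 2}): (3, 4), frozenset({1, 3}): (3, 4),
               frozenset({1, 4}): (4, 3), frozenset({2, 3}): (4, 3),
               frozenset({2, 4}): (3, 4), frozenset({3, 4}): (3, 4)}

def value(i, S):
    """v_i(S) for the instance of Proposition 3.4, agents i ∈ {0, 1}."""
    if len(S) <= 1:
        return 2 * len(S)                 # singletons are worth 2
    if len(S) == 2:
        return pair_values[frozenset(S)][i]
    return 4                              # all 3- and 4-sets are worth 4

def round_robin(prefs):
    available, bundles = set(goods), [set(), set()]
    while available:
        for i in range(2):
            g = next(g for g in prefs[i] if g in available)
            bundles[i].add(g)
            available.remove(g)
    return bundles

def best_deviation(i, prefs):
    """Agent i's best value achievable against the other agent's report."""
    return max(value(i, round_robin([d, prefs[1]] if i == 0 else [prefs[0], d])[i])
               for d in permutations(goods))

best_alpha = max(
    min(value(i, round_robin([p1, p2])[i]) / best_deviation(i, (p1, p2))
        for i in range(2))
    for p1 in permutations(goods) for p2 in permutations(goods))
assert best_alpha <= 3 / 4  # hence no (3/4 + ε)-approximate PNE exists
```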
Moving towards the proof of Theorem 3.7 for the submodular case, we note that
although it is very different from that of Theorem 3.2, we will still need an
analog of the main property therein, i.e., the existence of a good-wise
comparison between the goods an agent gets under the bluff profile and the
ones she gets by deviating. As expected, the corresponding property here (see
Lemma 3.5) is more nuanced and does not immediately imply Theorem 3.7 as we
are now missing the analog of Lemma 3.3.
Throughout this section, we are going to argue about an arbitrary agent $i$.
To simplify the notation, let us rename
$X_{i}=A_{i}^{\mathrm{b}}=\\{h_{i},h_{n+i},\dots,h_{(k-1)n+i}\\}$ to simply
$X=\\{x_{1},x_{2},\ldots,x_{k}\\}$, where we have kept the order of indices
the same, i.e., $x_{j}=h_{(j-1)n+i}$. This way, the goods in $X$ are ordered
according to how they were allocated to agent $i$ in the run of Mechanism 1
with the bluff profile as input.
We also need to define the ordering of the goods agent $i$ gets when she
deviates from the bluff bid $\succ^{\mathrm{b}}$ to another preference ranking
$\succ_{i}$. Let $A_{i}=Y=\\{y_{1},y_{2},\ldots,y_{k}\\}$ be this set of
goods. Instead of renaming the elements of $Y$ in a generic fashion like in
the proof of Theorem 3.2, doing so becomes significantly more complicated, and
we need to do it in a more systematic way, see Algorithm 3.
Algorithm 3 Greedy renaming of goods for the deviating agent $i$
Input: $X=\\{x_{1},x_{2},\ldots,x_{k}\\}$, $Y$, and a value oracle for
$v_{i}(\cdot)$
1:$Z=Y$
2:for $j=|Y|,\dots,1$ do
3: $y^{\prime}_{j}=\displaystyle\operatorname*{arg\,min}_{g\in
Z}v_{i}(g\,|\,\\{x_{1},\ldots,x_{j-1}\\})$ // Ties are broken
lexicographically.
4: $Z=Z\setminus\\{y^{\prime}_{j}\\}$
5:return $(y^{\prime}_{1},y^{\prime}_{2},\ldots,y^{\prime}_{|Y|})$
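For completeness, a transcription of Algorithm 3 (ours): the positions of $Y$ are filled from last to first, each time removing the good of minimum marginal value with respect to the corresponding prefix of $X$:

```python
def rename_deviation_bundle(v_i, X, Y):
    """Algorithm 3: order Y against the ordered list X = [x_1, ..., x_k].
    `v_i` is a value oracle taking a frozenset of goods."""
    Z = set(Y)
    renamed = [None] * len(Y)
    for j in range(len(Y), 0, -1):       # j = |Y|, ..., 1
        prefix = frozenset(X[:j - 1])    # {x_1, ..., x_{j-1}}
        # good of minimum marginal value w.r.t. the prefix; sorting first
        # makes the tie-breaking deterministic
        y = min(sorted(Z), key=lambda g: v_i(prefix | {g}) - v_i(prefix))
        renamed[j - 1] = y               # this good becomes y'_j
        Z.remove(y)
    return renamed
```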
In what follows, we assume that the indexing $y_{1},y_{2},\ldots,y_{k}$ is
already the result of Algorithm 3. This renaming is crucial and it will be
used repeatedly. In particular, we need this particular ordering in order to
prove that $v_{i}(x_{j}\,|\,\\{x_{1},\ldots,x_{j-1}\\})\geq
v_{i}(y_{j}\,|\,\\{x_{1},\ldots,x_{j-1}\\})$, for all $j\in[k]$, in Lemma 3.5
below. Towards that, we need to fix some notation for the sake of readability.
For $j\in[k]$, we use $X^{j}_{-}$ and $X^{j}_{+}$ to denote the sets
$\\{x_{1},x_{2},\ldots,x_{j}\\}$ and $\\{x_{j},x_{j+1},\ldots,x_{k}\\}$,
respectively. The sets $Y^{j}_{-}$ and $Y^{j}_{+}$, for $j\in[k]$, are defined
analogously. We also use $X^{0}_{-}=Y^{0}_{-}=\emptyset$. The main high-level
idea of the proof is that if
$v_{i}(y_{\ell}\,|\,X^{\ell-1}_{-})>v_{i}(x_{\ell}\,|\,X^{\ell-1}_{-})$ for
some $\ell$, then it must be the case that during the execution of Round-
Robin$(\bm{\succ}^{\mathrm{b}})$ every good in
$Y^{\ell}_{-}=\\{y_{1},\ldots,y_{\ell}\\}$ is allocated before the turn of
agent $i$ in round $\ell$. Then, using a simple counting argument, we show
that agent $i$ cannot receive all the goods in $Y^{\ell}_{-}$ when deviating,
leading to a contradiction.
###### Lemma 3.5.
Let $X=\\{x_{1},x_{2},\ldots,x_{k}\\}$ be agent $i$’s bundle in Round-
Robin$(\bm{\succ}^{\mathrm{b}})$, where goods are indexed in the order they
were allocated, and $Y=\\{y_{1},y_{2},\ldots,y_{k}\\}$ be $i$’s bundle in
Round-Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$, where goods are
indexed by Algorithm 3. Then, for every $j\in[k]$, we have
$v_{i}(x_{j}\,|\,X^{j-1}_{-})\geq v_{i}(y_{j}\,|\,X^{j-1}_{-})$.
###### Proof.
The way goods in $X$ are indexed, we have that $x_{j}$ is the good allocated
to agent $i$ in round $j$ of Round-Robin$(\bm{\succ}^{\mathrm{b}})$. Suppose,
towards a contradiction, that there is some ${\ell}\in[k]$, for which we have
$v_{i}(y_{\ell}\,|\,X^{\ell-1}_{-})>v_{i}(x_{\ell}\,|\,X^{\ell-1}_{-})$. First
notice that ${\ell}\neq 1$, as $x_{1}$ is, by the definition of the bluff
profile, a singleton of maximum value for agent $i$ excluding the goods
allocated to agents $1$ through $i-1$ in round $1$, regardless of agent $i$’s
bid. Thus, ${\ell}\geq 2$.
Let $B\subseteq M$ and $D\subseteq M$ be the sets of goods allocated (to any
agent) up to right before a good is allocated to agent $i$ in round $\ell$ in
Round-Robin$(\bm{\succ}^{\mathrm{b}})$ and Round-
Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$, respectively. Clearly,
$|B|=|D|=(\ell-1)n+i-1$. In fact, we claim that in this case the two sets are
equal.
###### Claim 3.6.
It holds that $B=D$. Moreover, $\\{y_{1},\ldots,y_{\ell}\\}\subseteq B$.
###### Proof of the claim.
We first observe that $v_{i}(y_{j}\,|\,X^{\ell-1}_{-})\geq
v_{i}(y_{\ell}\,|\,X^{\ell-1}_{-})>v_{i}(x_{\ell}\,|\,X^{\ell-1}_{-})$, for
every $j\in[\ell-1]$, where the first inequality follows from the way Algorithm 3
ordered the elements of $Y$. Now consider the execution of Round-
Robin$(\bm{\succ}^{\mathrm{b}})$. Since $x_{\ell}$ was the good allocated to
agent $i$ in round $\ell$, $x_{\ell}$ had maximum marginal value for agent $i$
with respect to $X^{\ell-1}_{-}$ among the available goods. Thus, none of the
goods $y_{1},\ldots,y_{\ell}$ were available at the time. That is,
$y_{1},\ldots,y_{\ell}$ were all already allocated to some of the agents
(possibly including agent $i$ herself). We conclude that
$\\{y_{1},\ldots,y_{\ell}\\}\subseteq B$.
Now suppose for a contradiction that $D\neq B$ and consider the execution of
Round-Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$. Recall that the goods
in $B$ are still the $(\ell-1)n+i-1$ most preferable goods for every agent in
$N\setminus\\{i\\}$ according to the profile
$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$. Therefore, all agents in
$N\setminus\\{i\\}$ will get goods from $B$ allocated to them up to the point
when a good is allocated to agent $i$ in round $\ell$, regardless of what
${\succ}_{i}$ is. If agent $i$ also got only goods from $B$ allocated to her
in the first $\ell-1$ rounds of Round-
Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$, then $D$ would be equal to
$B$. Thus, at least one good which is not in $B$ (and thus, not in
$\\{y_{1},\ldots,y_{\ell}\\}$) must have been allocated to agent $i$ in the
first $\ell-1$ rounds. As a result, at the end of round $\ell-1$, there are at
least two goods in $\\{y_{1},\ldots,y_{\ell}\\}$ that have not yet been
allocated to $i$.
However, we claim that up to right before a good is allocated to agent $i$ in
round $\ell+1$, all goods in $B$ (and thus in $\\{y_{1},\ldots,y_{\ell}\\}$ as
well) will have been allocated, leaving $i$ with at most $\ell-1$ goods from
$\\{y_{1},\ldots,y_{\ell}\\}$ in her final bundle and leading to a
contradiction. Indeed, this follows from a simple counting argument. Right
before a good is allocated to agent $i$ in round $\ell+1$, the goods allocated
to agents in $N\setminus\\{i\\}$ are exactly
$\ell(n-1)+i-1\geq(\ell-1)n+i-1=|B|$. As noted above, agents in
$N\setminus\\{i\\}$ will get goods from $B$ allocated to them as long as they
are available. Thus, no goods from $B$, or from $\\{y_{1},\ldots,y_{\ell}\\}$
in particular, remain unallocated right before a good is allocated to agent
$i$ in round $\ell+1$. Therefore, agent $i$ may get at most $\ell-1$ goods
from $\\{y_{1},\ldots,y_{\ell}\\}$ (at most $\ell-2$ in the first $\ell-1$
rounds and one in round $\ell$), contradicting the definition of the set $Y$.
We conclude that $D=B$. ∎
Given the claim, it is now easy to complete the proof. Clearly, in the first
$\ell-1$ rounds of Round-Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$ at
most $\ell-1$ goods from $\\{y_{1},\ldots,y_{\ell}\\}$ have been allocated to
agent $i$. However, when it is $i$’s turn in round $\ell$, only goods in
$M\setminus D$ are available, by the definition of $D$. By Claim 3.6, we have
$\\{y_{1},\ldots,y_{\ell}\\}\subseteq D$, and thus there is at least one good
in $\\{y_{1},\ldots,y_{\ell}\\}$ that is allocated to another agent, which
contradicts the definition of $Y$. ∎
We are now ready to state and prove the main result of this section.
###### Theorem 3.7.
When all agents have submodular valuation functions, the bluff profile is a
$\frac{1}{2}$-approximate PNE of Mechanism 1. Moreover, this is tight, i.e.,
for any $\varepsilon>0$, there are instances where the bluff profile is not a
$\big{(}\frac{1}{2}+\varepsilon\big{)}$-approximate PNE.
###### Proof.
We are going to use the notation used so far in the section and consider the
possible deviation of an arbitrary agent $i$. Like in the statement of Lemma
3.5, $X=\\{x_{1},\ldots,x_{k}\\}$ is agent $i$’s bundle in Round-
Robin$(\bm{\succ}^{\mathrm{b}})$, with goods indexed in the order they were
allocated, and $Y=\\{y_{1},y_{2},\ldots,y_{k}\\}$ is $i$’s bundle in Round-
Robin$({\succ}_{i},\bm{\succ}^{\mathrm{b}}_{-i})$, with goods indexed by
Algorithm 3. Also, recall that $X^{j}_{-}=\\{x_{1},\ldots,x_{j}\\}$ and
$X^{j}_{+}=\\{x_{j},\ldots,x_{k}\\}$ (and similarly for $Y^{j}_{-}$ and
$Y^{j}_{+}$). We also use the convention that $Y_{+}^{k+1}=\emptyset$. For any
$j\in[k]$, we have
$\displaystyle v_{i}(X_{-}^{j})-v_{i}(X_{-}^{j-1})$
$\displaystyle\qquad=v_{i}(x_{j}\,|\,X_{-}^{j-1})$
$\displaystyle\qquad\geq v_{i}(y_{j}\,|\,X_{-}^{j-1})$
$\displaystyle\qquad\geq v_{i}(y_{j}\,|\,X_{-}^{j-1}\cup Y_{+}^{j+1})$
$\displaystyle\qquad=v_{i}(X_{-}^{j-1}\cup Y_{+}^{j+1}\cup\\{y_{j}\\})-v_{i}(X_{-}^{j-1}\cup Y_{+}^{j+1})$
$\displaystyle\qquad=v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_{-}^{j-1}\cup Y_{+}^{j+1})$
$\displaystyle\qquad\geq v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_{-}^{j}\cup Y_{+}^{j+1})\,.$
The first inequality holds because Lemma 3.5 applies to $X$ and $Y$, whereas
the second inequality holds because of submodularity. Finally, the last
inequality holds since $X_{-}^{j-1}\subseteq X_{-}^{j}$ and $v_{i}(\cdot)$ is
non-decreasing, for every $i\in N$. Using these inequalities along with a
standard expression of the value of a set as a sum of marginals, we have
$\displaystyle v_{i}(X)$
$\displaystyle\qquad=v_{i}(X_{-}^{k})-v_{i}(X_{-}^{0})$
$\displaystyle\qquad=\sum_{j=1}^{k}\left(v_{i}(X_{-}^{j})-v_{i}(X_{-}^{j-1})\right)$
$\displaystyle\qquad\geq\sum_{j=1}^{k}\left(v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_{-}^{j}\cup Y_{+}^{j+1})\right)$
$\displaystyle\qquad=v_{i}(X_{-}^{0}\cup Y_{+}^{1})-v_{i}(X_{-}^{k}\cup Y_{+}^{k+1})$
$\displaystyle\qquad=v_{i}(Y)-v_{i}(X)\,.$
Thus, we have $v_{i}(X)\geq\frac{1}{2}\cdot v_{i}(Y)$, and we conclude that
$\bm{\succ}^{\mathrm{b}}$ is a $\frac{1}{2}$-approximate PNE of Mechanism 1.
To show that the result is tight, consider an example with two agents and five
goods. The valuation function of agent $1$ is additive and defined as follows
on the singletons:
$v_{1}(g_{1})=2\quad v_{1}(g_{2})=1\quad v_{1}(g_{3})=1-\varepsilon_{1}\quad
v_{1}(g_{4})=1-\varepsilon_{2}\quad v_{1}(g_{5})=1-\varepsilon_{3}\,,$
where $1\gg\varepsilon_{3}>\varepsilon_{2}>\varepsilon_{1}>0$.
The valuation function of agent $2$ is OXS222Roughly speaking, OXS functions
generalize unit-demand functions. The set of OXS functions is a strict
superset of additive functions and a strict subset of submodular functions.
See [26, 27]. and defined by the maximum matchings in the bipartite graph
below, e.g., $v_{2}(\\{g_{1},g_{2}\\})=2+1=3$ and
$v_{2}(\\{g_{1},g_{4},g_{5}\\})=2+1-\varepsilon_{2}=3-\varepsilon_{2}$.
[Figure: the bipartite graph defining $v_{2}$; the edges incident to $g_{1},g_{2},g_{3},g_{4},g_{5}$ carry weights $2$, $1$, $1-\varepsilon_{1}$, $1-\varepsilon_{2}$, $1-\varepsilon_{3}$, respectively.]
It is not hard to see that the bluff profile for this instance consists of the
following declared ordering by both agents: $g_{1}>g_{2}>g_{3}>g_{4}>g_{5}$.
The allocation produced by Mechanism 1 for the bluff profile is then
$A=(A_{1},A_{2})$, where $A_{1}=\\{g_{1},g_{3},g_{5}\\}$, and
$A_{2}=\\{g_{2},g_{4}\\}$. Observe that
$v_{1}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}$ and $v_{2}(A_{2})=1$. It is
easy to see that there is no profitable deviation for agent $1$, while the
maximum value that agent $2$ can attain by deviating is
$2-\varepsilon_{1}-\varepsilon_{2}$. Agent $2$ achieves this by reporting the
preference ranking: $g_{3}>g_{4}>g_{1}>g_{2}>g_{5}$ and getting goods
$\\{g_{3},g_{4}\\}$. This implies that for any $\varepsilon>0$ one can choose
appropriately small $\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}$ so that
the bluff profile is not a $\big{(}\frac{1}{2}+\varepsilon\big{)}$-approximate
PNE. ∎
In Section 4, we show that every approximate PNE of Mechanism 1 results in an
approximately EF1 allocation. Here, as a warm-up, we start this endeavor with
an easy result which holds specifically for the bluff profile (and can be
extended to approximate PNE where all agents submit the same preference
ranking) but shows a better fairness guarantee than our general Theorem 4.4.
###### Theorem 3.8.
When all agents have submodular valuation functions $v_{1},\ldots,v_{n}$, the
allocation returned by Round-Robin$(\bm{\succ}^{\mathrm{b}})$ is
$\frac{1}{2}$-EF1 with respect to $v_{1},\ldots,v_{n}$. Moreover, this is
tight, i.e., for any $\varepsilon>0$, there are instances where this
allocation is not $\big{(}\frac{1}{2}+\varepsilon\big{)}$-EF1.
###### Proof.
In order to obtain a contradiction, suppose that the allocation
$(A_{1}^{\mathrm{b}},A_{2}^{\mathrm{b}},\ldots,A_{n}^{\mathrm{b}})$ returned
by Round-Robin$(\bm{\succ}^{\mathrm{b}})$ is not $\frac{1}{2}$-EF1. That is,
there exist agents $i$ and $j$ such that $v_{i}(A_{i}^{\mathrm{b}})<0.5\cdot
v_{i}(A_{j}^{\mathrm{b}}\setminus\\{g\\})$, for all $g\in A_{j}^{\mathrm{b}}$.
We are going to show that this allows us to construct a deviation for agent
$i$ where she gets value more than $2v_{i}(A_{i}^{\mathrm{b}})$, contradicting
the fact that $\bm{\succ}^{\mathrm{b}}$ is a $\frac{1}{2}$-approximate PNE.
Recall that using the renaming $h_{1},h_{2},\ldots$ produced by Algorithm 2,
we have $A_{i}^{\mathrm{b}}=\\{h_{i},h_{n+i},\dots,h_{(k-1)n+i}\\}$ and
$A_{j}^{\mathrm{b}}=\\{h_{j},h_{n+j},\dots,h_{(k-1)n+j}\\}$.
Let $\delta$ be the indicator variable of the event $j<i$, i.e., $\delta$ is
$1$ if $j<i$ and $0$ otherwise. We will show that it is possible for agent $i$
to get the set $\\{h_{\delta
n+j},h_{(1+\delta)n+j},h_{(2+\delta)n+j},\dots,h_{(k-1)n+j}\\}$, which is
either the entire $A_{j}^{\mathrm{b}}$ (when $i<j$) or
$A_{j}^{\mathrm{b}}\setminus\\{h_{j}\\}$ (when $j<i$). In particular, let
$\succ_{i}$ be a preference ranking that starts with all goods in
$A_{j}^{\mathrm{b}}$ in the same order as they were allocated to agent $j$ in
Round-Robin$(\bm{\succ}^{\mathrm{b}})$, followed by everything else, in any
order.
Consider the execution of Round-
Robin$(\succ_{i},\bm{\succ}_{-i}^{\mathrm{b}})$. The crucial, yet simple,
observation (that makes an inductive argument work) is that the first $i-1$
goods $h_{1},\ldots,h_{i-1}$ are allocated as before, then good $h_{\delta
n+j}$ (rather than $h_{i}$) is allocated to agent $i$, and after that the
$n-1$ top goods for all agents in $N\setminus\\{i\\}$ according to
$\bm{\succ}_{-i}^{\mathrm{b}}$ are $h_{i},h_{i+1},\dots,h_{\delta
n+j-1},h_{\delta n+j+1},\dots,h_{n+i-1}$, and these are allocated in the next
$n-1$ steps of the algorithm. As a result, right before a second good is
allocated to agent $i$, the available goods are
$h_{n+i},h_{n+i+1},\dots,h_{m}$ exactly as in the execution of Round-
Robin$(\bm{\succ}^{\mathrm{b}})$.
More generally, right before an $r$-th good is allocated to $i$, her bundle is
$\\{h_{\delta
n+j},h_{(1+\delta)n+j},h_{(2+\delta)n+j},\allowbreak\dots,h_{(r-2+\delta)n+j}\\}$,
and the available goods are $h_{(r-1)n+i},h_{(r-1)n+i+1},\dots,h_{m}$ (as they
were in the execution of Round-Robin$(\bm{\succ}^{\mathrm{b}})$). Then good
$h_{(r-1+\delta)n+j}$ (rather than $h_{(r-1)n+i}$) is allocated to agent $i$,
and after that the $n-1$ top goods for all agents according to
$\bm{\succ}_{-i}^{\mathrm{b}}$ are
$h_{(r-1)n+i},h_{(r-1)n+i+1},\dots,h_{(r-1+\delta)n+j-1},h_{(r-1+\delta)n+j+1},\dots,h_{rn+i-1}\,,$
and they are allocated in the next $n-1$ steps of the algorithm. At the end,
agent $i$ gets the entire $A_{j}^{\mathrm{b}}$ or
$A_{j}^{\mathrm{b}}\setminus\\{h_{j}\\}$ plus some arbitrary good, depending
on whether $i<j$ or $j<i$. In either case, by monotonicity, agent $i$’s value
for her bundle is at least
$v_{i}(A_{j}^{\mathrm{b}}\setminus\\{h_{j}\\})>2v_{i}(A_{i}^{\mathrm{b}})$,
where the last inequality follows from our assumption that
$(A_{1}^{\mathrm{b}},A_{2}^{\mathrm{b}},\ldots,A_{n}^{\mathrm{b}})$ is not
$\frac{1}{2}$-EF1. Therefore, by deviating from $\succ^{\mathrm{b}}$ to
$\succ_{i}$, agent $i$ increases her value by a factor strictly greater than
$2$, contradicting Theorem 3.7.
To show that this factor is tight, we again turn to the example given within
the proof of Theorem 3.7. Recall the allocation produced by Mechanism 1 for
the bluff profile is $A=(A_{1},A_{2})$, with $A_{1}=\\{g_{1},g_{3},g_{5}\\}$
and $A_{2}=\\{g_{2},g_{4}\\}$. Observe that agent $1$ is envy-free towards
agent $2$ as
$v_{1}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}>2-\varepsilon_{2}=v_{1}(A_{2})$.
On the other hand, $v_{2}(A_{2})=1$, whereas
$v_{2}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}$ and
$v_{2}(A_{1}\setminus\\{g_{1}\\})=2-\varepsilon_{1}-\varepsilon_{3}$. The
latter implies that for any $\varepsilon>0$ one can choose appropriately small
$\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}$ so that the bluff profile
does not result in a $\big{(}\frac{1}{2}+\varepsilon\big{)}$-EF1 allocation
with respect to the true valuation functions of the agents. ∎
## 4 Fairness properties of PNE
In Section 2.3, Proposition 2.5, we state the fairness guarantees of Round-
Robin—viewed as an algorithm—when all agents have cancelable valuation
functions. So far, we have not discussed this matter for the submodular case.
It is not hard to see, however, that Theorem 3.8 and the definition of the
bluff profile via Algorithm 2 imply that when we have (value oracles for) the
valuation functions, then we can use Round-Robin to algorithmically produce
$\frac{1}{2}$-EF1 allocations. Using similar arguments, we show next that for
any preference profile $\bm{\succ}\,=(\succ_{1},\ldots,\succ_{n})$ and any
$i\in N$, there is always a response $\succ^{\prime}_{i}$ of agent $i$ to
$\bm{\succ}_{-i}$, such that the allocation returned by Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$ is $\frac{1}{2}$-EF1 _from agent
$i$’s perspective_.
Towards this, we first need a variant of Algorithm 2 that considers everyone
in $N\setminus\\{i\\}$ fixed to their report in $\bm{\succ}_{-i}$ and greedily
determines a “good” response for agent $i$. An intuitive interpretation of
what Algorithm 4 below is doing, can be given if one sees Mechanism 1 as a
sequential game. Then, given that everyone else stays consistent with
$\bm{\succ}_{-i}$, agent $i$ _picks_ a good of maximum marginal value every
time her turn is up.
Algorithm 4 Greedy response of agent $i$ to $\bm{\succ}_{-i}$
Input: $N$, $M$, $\bm{\succ}_{-i}$, value oracle for $v_{i}$
1:$S=M$; $X=\emptyset$
2:for $j=1,\dots,m$ do
3: $\ell=(j-1)\\!\pmod{n}+1$
4: if $\ell=i$ then
5: $x_{\lceil j/n\rceil}=\displaystyle\operatorname*{arg\,max}_{g\in
S}v_{i}(g\,|\,X)$ // Ties are broken lexicographically.
6: $X=X\cup\\{x_{\lceil j/n\rceil}\\}$
7: $S=S\setminus\\{x_{\lceil j/n\rceil}\\}$
8: else
9: $g=\mathrm{top}(\succ_{\ell},S)$
10: $S=S\setminus\\{g\\}$
11:return
$x_{1}\succ^{\prime}_{i}x_{2}\succ^{\prime}_{i}\ldots\succ^{\prime}_{i}x_{k}\succ^{\prime}_{i}\ldots$
// Arbitrarily complete $\succ^{\prime}_{i}$ with goods in $M\setminus X$.
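A short Python sketch of Algorithm 4 follows, assuming a value oracle `v_i` that accepts a list of goods and returns agent $i$'s value for that bundle; the dictionary `rankings_minus_i` (mapping each agent $j\neq i$ to her declared ranking) and the 1-indexed turn order are illustrative conventions, not part of the paper's notation.

```python
def greedy_response(goods, i, rankings_minus_i, v_i, n):
    """Simulate Mechanism 1 with the reports of the agents other than i
    fixed; whenever it is agent i's turn, greedily pick an available good
    of maximum marginal value with respect to her current bundle."""
    available = set(goods)
    bundle = []                        # the set X built by Algorithm 4
    for step in range(len(goods)):
        agent = step % n + 1           # agents are 1-indexed, as in the paper
        if agent == i:
            # arg max of the marginal value; ties broken lexicographically,
            # since sorted() fixes the iteration order seen by max().
            g = max(sorted(available),
                    key=lambda x: v_i(bundle + [x]) - v_i(bundle))
            bundle.append(g)
        else:
            g = next(x for x in rankings_minus_i[agent] if x in available)
        available.discard(g)
    # Declared ranking: x_1, x_2, ... first, then the rest of M arbitrarily.
    return bundle + [g for g in goods if g not in bundle]
```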
Proving the next lemma closely follows the proof of Theorem 3.7 but without
the need of an analog of Lemma 3.5, as we get this for free from the way the
greedy preference profile $\succ^{\prime}_{i}$ is constructed.
###### Lemma 4.1.
Assume that agent $i$ has a submodular valuation function $v_{i}$. If
$\succ^{\prime}_{i}$ is the ranking returned by Algorithm 4 when given $N$,
$M$, $\bm{\succ}_{-i}$, $v_{i}$, then the allocation
$(A^{\prime}_{1},A^{\prime}_{2},\ldots,A^{\prime}_{n})$ returned by Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$ is such that for every $j\in N$,
with $A^{\prime}_{j}\neq\emptyset$, there exists a good $g\in A^{\prime}_{j}$,
so that $v_{i}(A^{\prime}_{i})\geq\frac{1}{2}\cdot
v_{i}(A^{\prime}_{j}\setminus\\{g\\})$.
###### Proof.
First, it is straightforward to see that $A^{\prime}_{i}=X$, as computed in
Algorithm 4. Indeed, Algorithm 4 simulates Mechanism 1 for all $j\in
N\setminus\\{i\\}$ and iteratively builds $\succ^{\prime}_{i}$, so that in
every turn of Round-Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$ the good
allocated to agent $i$ is one of maximum marginal value. As a result, the
goods in $A^{\prime}_{i}=X=\\{x_{1},x_{2},\ldots,x_{k}\\}$ are already indexed
in the order they are allocated.
Now consider an arbitrary $j\in N\setminus\\{i\\}$ and let
$A^{\prime}_{j}=Y=\\{y_{1},y_{2},\ldots,y_{k}\\}$, where goods are again
indexed in the order they are allocated in Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$. Notice that when good $x_{r}$ is
allocated to agent $i$ in round $r$, goods $y_{r+1},y_{r+2},\ldots$ are still
available and, by construction of $X$, their marginal value with respect to
the set $\\{x_{1},x_{2},\ldots,x_{r-1}\\}$ is no better than the marginal
value of $x_{r}$. In particular,
$v_{i}(x_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\geq
v_{i}(y_{r+1}\,|\,\\{x_{1},\ldots,x_{r-1}\\})$.
Also, recall the use of $X^{r}_{-}$, $X^{r}_{+}$, $Y^{r}_{-}$, $Y^{r}_{+}$
notation from the proof of Theorem 3.7. We will use a similar calculation here
as well, but we will omit the first element of $Y$. For any $r\in[k]$, we have
$\displaystyle v_{i}(X_{-}^{r})-v_{i}(X_{-}^{r-1})$
$\displaystyle=v_{i}(x_{r}\,|\,X_{-}^{r-1})$ $\displaystyle\geq
v_{i}(y_{r+1}\,|\,X_{-}^{r-1})$ $\displaystyle\geq
v_{i}(y_{r+1}\,|\,X_{-}^{r-1}\cup Y_{+}^{r+2})$
$\displaystyle=v_{i}(X_{-}^{r-1}\cup
Y_{+}^{r+2}\cup\\{y_{r+1}\\})-v_{i}(X_{-}^{r-1}\cup Y_{+}^{r+2})$
$\displaystyle=v_{i}(X_{-}^{r-1}\cup Y_{+}^{r+1})-v_{i}(X_{-}^{r-1}\cup
Y_{+}^{r+2})$ $\displaystyle\geq v_{i}(X_{-}^{r-1}\cup
Y_{+}^{r+1})-v_{i}(X_{-}^{r}\cup Y_{+}^{r+2})\,,$
where we used the convention that $Y_{+}^{k+1}=Y_{+}^{k+2}=\emptyset$. The
first inequality holds by the construction of $X$ as discussed above, the
second inequality follows from submodularity, and the last inequality holds
because $v_{i}(\cdot)$ is non-decreasing. Using these inequalities and a
standard expression of the value of a set as a sum of marginals, we have
$\displaystyle v_{i}(X)$ $\displaystyle=v_{i}(X_{-}^{k})-v_{i}(X_{-}^{0})$
$\displaystyle=\sum_{r=1}^{k}\left(v_{i}(X_{-}^{r})-v_{i}(X_{-}^{r-1})\right)$
$\displaystyle\geq\sum_{r=1}^{k}\left(v_{i}(X_{-}^{r-1}\cup
Y_{+}^{r+1})-v_{i}(X_{-}^{r}\cup Y_{+}^{r+2})\right)$
$\displaystyle=v_{i}(X_{-}^{0}\cup Y_{+}^{2})-v_{i}(X_{-}^{k}\cup
Y_{+}^{k+2})$ $\displaystyle=v_{i}(Y\setminus\\{y_{1}\\})-v_{i}(X)\,.$
Thus, we have $v_{i}(A^{\prime}_{i})=v_{i}(X)\geq\frac{1}{2}\cdot
v_{i}(Y\setminus\\{y_{1}\\})=\frac{1}{2}\cdot
v_{i}(A^{\prime}_{j}\setminus\\{y_{1}\\})$. ∎
### 4.1 The Case of Two Agents
As a warm-up, we begin with the easier case of $n=2$. Not only are the proofs
of our main results for submodular and additive functions much simpler here,
but the fairness guarantees are stronger as well.
###### Theorem 4.2.
Let $\alpha\in(0,1]$. Assume we have a fair division instance with two agents,
whose valuation functions $v_{1},v_{2}$ are submodular. Then any allocation
that corresponds to an $\alpha$-approximate PNE of the Round-Robin mechanism is
$\frac{\alpha}{2}$-EF1 with respect to $v_{1},v_{2}$.
###### Proof.
Let $\bm{\succ}\,=(\succ_{1},\succ_{2})$ be an $\alpha$-approximate PNE of
Mechanism 1 for a given instance, and let $(A_{1},A_{2})$ be the allocation
returned by Round-Robin$(\bm{\succ})$. Consider one of the two agents; we call
this agent $i\in[2]$ and the other agent $j$. We are going to show that
$v_{i}(A_{i})\geq\frac{\alpha}{2}\cdot v_{i}(A_{j}\setminus\\{g\\})$ for some
good $g\in A_{j}$.
Suppose that agent $i$ deviates to $\succ^{\prime}_{i}$ produced by Algorithm
4 when given $\bm{\succ}_{-i}\,=(\succ_{j})$ and $v_{i}$, and let
$(A^{\prime}_{1},A^{\prime}_{2})$ be the allocation returned by Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$. Let
$A^{\prime}_{i}=\\{x_{1},x_{2},\ldots,x_{k}\\}$ and $A_{j}\setminus
A^{\prime}_{i}=\\{y_{t_{1}},y_{t_{2}},\ldots,y_{t_{\ell}}\\}$, where in both
sets goods are indexed by the round in which they were allocated in the run of
Round-Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$. Note that all indices in
$A_{j}\setminus A^{\prime}_{i}$ are distinct exactly because $n=2$ and, thus,
all these goods are allocated to agent $j$. This indexing guarantees that when
$x_{t_{\lambda}-1}$ gets allocated, $y_{t_{\lambda}}$ is still available for
$2\leq\lambda\leq\ell$ and, thus,
$v(x_{t_{\lambda}-1}\,|\,\\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\\})\geq
v(y_{t_{\lambda}}\,|\,\\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\\})\,,$ (1)
by the way $\succ^{\prime}_{i}$ is constructed (see also the proof of Lemma
4.1). Using Theorem 2.1, we have
$\displaystyle v_{i}(A_{j}\setminus\\{y_{t_{1}}\\})$ $\displaystyle\leq
v_{i}(A^{\prime}_{i})+\\!\\!\sum_{g\in(A_{j}\setminus\\{y_{t_{1}}\\})\setminus
A^{\prime}_{i}}\\!\\!\\!\\!\\!v(g\,|\,A^{\prime}_{i})$
$\displaystyle=v_{i}(A^{\prime}_{i})+\sum_{\lambda=2}^{\ell}v(y_{t_{\lambda}}\,|\,A^{\prime}_{i})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+\sum_{\lambda=2}^{\ell}v(y_{t_{\lambda}}\,|\,\\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\\})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+\sum_{\lambda=2}^{\ell}v(x_{t_{\lambda}-1}\,|\,\\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\\})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+\sum_{\lambda=1}^{k}v(x_{\lambda}\,|\,\\{x_{1},x_{2},\ldots,x_{\lambda-1}\\})$
$\displaystyle=v_{i}(A^{\prime}_{i})+v_{i}(A^{\prime}_{i})$
$\displaystyle\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})\,,$
where the first inequality follows directly from Theorem 2.1, the second one
follows from submodularity, the third inequality holds because of (1), the
fourth one follows from the monotonicity of $v_{i}$, and the last inequality
follows from the fact that $\bm{\succ}$ is an $\alpha$-approximate PNE and thus
$v_{i}(A_{i})\geq\alpha\cdot v_{i}(A^{\prime}_{i})$. We conclude that
$(A_{1},A_{2})$ is $\frac{\alpha}{2}$-EF1 with respect to the underlying
valuation functions. ∎
For additive valuation functions we can get a slightly stronger fairness
guarantee, which we show is also tight for any $\alpha$, with an even
easier proof. Note that this reproduces the result of Amanatidis et al. [5]
for exact PNE in the case of two agents.
###### Theorem 4.3.
Let $\alpha\in(0,1]$. Assume we have a fair division instance with two agents,
whose valuation functions $v_{1},v_{2}$ are additive. Then any allocation that
corresponds to an $\alpha$-approximate PNE of the Round-Robin mechanism is
$\frac{\alpha}{2-\alpha}$-EF1 with respect to $v_{1},v_{2}$. This is tight,
i.e., for any $\varepsilon>0$, there are instances where an
$\alpha$-approximate PNE does not correspond to a
$(\frac{\alpha}{2-\alpha}+\varepsilon)$-EF1 allocation.
###### Proof.
Let $\bm{\succ}\,=(\succ_{1},\succ_{2})$, $A_{1}$, $A_{2}$ be as in the proof
of Theorem 4.2, but now consider the deviation of agent $i$ to
$\succ^{\prime}_{i}$ which is a strict version of her true preference ranking
$\succcurlyeq^{*}_{i}$. Again, let $(A^{\prime}_{1},A^{\prime}_{2})$ be the
allocation returned by Round-Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$.
Let $g$ be a good of maximum value in $A^{\prime}_{j}$ according to $v_{i}$.
Since $\succ^{\prime}_{i}$ is a true preference ranking of agent $i$,
according to Proposition 2.5 $(A^{\prime}_{1},A^{\prime}_{2})$ is EF1 from the
point of view of agent $i$. That is, we have $v_{i}(A^{\prime}_{i})\geq
v_{i}(A^{\prime}_{j}\setminus\\{g\\})$ and, thus, since $A^{\prime}_{i}$ and
$A^{\prime}_{j}$ partition $M$ and $v_{i}$ is additive,
$v_{i}(A^{\prime}_{i})\geq\frac{1}{2}\cdot v_{i}(M\setminus\\{g\\})$.
Therefore,
$\displaystyle v_{i}(A_{j}\setminus\\{g\\})$
$\displaystyle=v_{i}(M\setminus\\{g\\})-v_{i}(A_{i})$ $\displaystyle\leq
2\cdot v_{i}(A^{\prime}_{i})-v_{i}(A_{i})$
$\displaystyle\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})-v_{i}(A_{i})$
$\displaystyle=\frac{2-\alpha}{\alpha}\cdot v_{i}(A_{i})\,,$
where the second inequality follows from the fact that $\bm{\succ}$ is an
$\alpha$-approximate PNE and thus $v_{i}(A_{i})\geq\alpha\cdot$
v_{i}(A^{\prime}_{i})$. We conclude that $(A_{1},A_{2})$ is
$\frac{\alpha}{2-\alpha}$-EF1 with respect to $v_{1},v_{2}$.
To see that this guarantee is tight, consider an instance with two agents, and
a set of five goods $\\{g_{1},g_{2},\ldots,g_{5}\\}$. In addition, let the
valuation functions of the agents be additive and defined by:
$v_{1}(g_{j})=\begin{cases}6,&\text{if $j=1$}\\\ 3+\delta,&\text{if $j=2$}\\\
3,&\text{if $j=3$}\\\ 0.5+\delta,&\text{if $j=4$}\\\ 0.5,&\text{if
$j=5$}\end{cases}$
$v_{2}(g_{j})=\begin{cases}6\beta,&\text{if $j=1$}\\\ 3\beta+\delta,&\text{if
$j=2$}\\\ 3\beta,&\text{if $j=3$}\\\ 0.5+\delta,&\text{if $j=4$}\\\
0.5,&\text{if $j=5$}\end{cases}$
where $0.5\gg\delta$, and $\beta>\frac{1}{6}+\delta$. Now suppose that the
agents bid as follows: Agent $1$ bids truthfully (i.e., an ordering
$\succ_{1}$ that is consistent with her true valuation function), while agent
$2$ bids $g_{5}\succ_{2}g_{4}\succ_{2}g_{1}\succ_{2}g_{2}\succ_{2}g_{3}$. It
is easy to confirm that the produced allocation is
$A=(A_{1},A_{2})=(\\{g_{1},g_{2},g_{3}\\},\\{g_{4},g_{5}\\})$. Regarding agent
1, she takes her three most desirable goods in this allocation so there is no
profitable deviation for her. For the same reason, she is envy-free towards
agent 2.
Moving to agent 2, by observing her valuation function, we immediately derive
that she is $\frac{1+\delta}{6\beta+\delta}$-EF1 towards agent 1. The only
thing that remains, is to check how much agent $2$ can improve her utility
through deviating. Initially notice that agent $2$ cannot get good $g_{1}$
regardless of her bid as this good is taken by agent $1$ in round 1. At the
same time, it is easy to verify that she cannot get both goods $g_{2}$ and
$g_{3}$ due to the declared ordering of agent 1. Thus, the best bundle of
goods that she can acquire is $\\{g_{2},g_{4}\\}$ by deviating to the bid:
$g_{2}\succ^{\prime}_{2}g_{4}\succ^{\prime}_{2}g_{1}\succ^{\prime}_{2}g_{3}\succ^{\prime}_{2}g_{5}$
and attain a value of $3\beta+0.5+2\delta$.
By setting $\alpha=\frac{1+\delta}{3\beta+0.5+2\delta}$ we trivially have that
$(\succ_{1},\succ_{2})$ is an $\alpha$-approximate PNE. On the other hand, for
a given $\varepsilon>0$, we have
$\frac{\alpha}{2-\alpha}+\varepsilon=\frac{1+\delta}{6\beta+3\delta}+\varepsilon$
which is strictly larger than $\frac{1+\delta}{6\beta+\delta}$ for
sufficiently small $\delta$. That is, there is a choice of $\delta$ so that
the $\alpha$-approximate PNE $(\succ_{1},\succ_{2})$ is not
$\big{(}\frac{\alpha}{2-\alpha}+\varepsilon\big{)}$-EF1. ∎
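A quick numeric sanity check of this construction, reusing `round_robin` from the earlier sketch: the concrete values of `beta` and `delta` below are illustrative choices satisfying the stated constraints, and the two printed quantities nearly coincide, as the tightness argument predicts.

```python
beta, delta = 0.5, 1e-6            # beta > 1/6 + delta and delta << 0.5
v1 = {"g1": 6, "g2": 3 + delta, "g3": 3, "g4": 0.5 + delta, "g5": 0.5}
v2 = {"g1": 6 * beta, "g2": 3 * beta + delta, "g3": 3 * beta,
      "g4": 0.5 + delta, "g5": 0.5}

truthful_1 = sorted(v1, key=v1.get, reverse=True)   # g1 > g2 > g3 > g4 > g5
A1, A2 = round_robin([truthful_1, ["g5", "g4", "g1", "g2", "g3"]])
assert set(A1) == {"g1", "g2", "g3"} and set(A2) == {"g4", "g5"}

val = lambda v, S: sum(v[g] for g in S)             # additive valuations
best_dev = val(v2, {"g2", "g4"})                    # 3*beta + 0.5 + 2*delta
alpha = val(v2, A2) / best_dev                      # (1+d)/(3b+0.5+2d)
ef1 = val(v2, A2) / val(v2, set(A1) - {"g1"})       # (1+d)/(6b+d)
print(alpha / (2 - alpha), ef1)   # nearly equal for small delta
```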
### 4.2 The Case of $n$ Agents
Looking back at the proofs of Theorems 4.2 and 4.3, the obvious fact that
everything not in $A_{i}$ or $A^{\prime}_{i}$ was allocated to agent $j$
played a key role in proving our sharp bounds. Moving to the general case of
$n$ agents, there is no reason to expect that we have some control on how the
goods are redistributed between agents in $N\setminus\\{i\\}$ when agent $i$
deviates from an (approximate) equilibrium. Surprisingly, we show that this
redistribution does not favor any agent too much from $i$’s perspective when
the valuation functions are submodular or subadditive cancelable (Lemmata 4.6
and 4.7). Consequently, the main results of this section have a similar
flavor, not only with respect to their statements, but with respect to their
proofs as well.
###### Theorem 4.4.
Let $\alpha\in(0,1]$. For instances with submodular valuation functions
$\\{v_{i}\\}_{i\in N}$, any $\alpha$-approximate PNE of the Round-Robin
mechanism is $\frac{\alpha}{3}$-EF1 with respect to $\\{v_{i}\\}_{i\in N}$.
###### Theorem 4.5.
Let $\alpha\in(0,1]$. For instances with subadditive cancelable valuation
functions $\\{v_{i}\\}_{i\in N}$, any $\alpha$-approximate PNE of the Round-
Robin mechanism is $\frac{\alpha}{2}$-EF1 with respect to $\\{v_{i}\\}_{i\in
N}$.
As the proofs of both theorems have the same general structure and share
Lemmata 4.6 and 4.7, we begin with some common wording and notation,
consistent with our proofs for two agents. Given any instance, we use
$\bm{\succ}\,=(\succ_{1},\ldots,\succ_{n})$ for an arbitrary
$\alpha$-approximate PNE of Mechanism 1. We then consider the deviation of
some agent $i$ to a preference ranking $\succ^{\prime}_{i}$; in the submodular
case $\succ^{\prime}_{i}$ is the output of Algorithm 4 when given
$\bm{\succ}_{-i}$ and $v_{i}$, whereas in the cancelable case
$\succ^{\prime}_{i}$ is a strict version of $i$’s true preference ranking
$\succcurlyeq^{*}_{i}$. We use $(A_{1},\ldots,A_{n})$ and
$(A^{\prime}_{1},\ldots,A^{\prime}_{n})$ to denote the allocations returned by
Round-Robin$(\bm{\succ})$ and Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$, respectively.
In order to show that $(A_{1},\ldots,A_{n})$ is $\frac{\alpha}{\kappa}$-EF1
from agent $i$’s perspective (where $\kappa$ is $3$ for submodular and $2$ for
cancelable functions), we use the stronger EF1 guarantees that
$(A^{\prime}_{1},\ldots,A^{\prime}_{n})$ has from her perspective. To this
end, we use $h_{r}^{\ell}$ to denote the good that was allocated to an agent
$\ell\in N$ in round $r$ of Round-Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$.
In particular, $A^{\prime}_{i}=\\{h^{i}_{1},h^{i}_{2},\ldots,h^{i}_{k}\\}$;
recall that $k=m/n$. Further, given that we have fixed agent $i$, we use
$S_{r}$ and $S^{\prime}_{r}$, for $0\leq r\leq k-1$, to denote the set of
goods that had been allocated up to right before a good was allocated to $i$
in round $r+1$ of Round-Robin$(\bm{\succ})$ and Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$, respectively. That is, for $0\leq
r\leq k-1$, $S_{r}$ and $S^{\prime}_{r}$ contain the goods allocated in steps
$1$ through $rn+i-1$ of Round-Robin$(\bm{\succ})$ and Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$, respectively.
For the next technical lemma we assume that the valuation functions are either
submodular or cancelable and, in each case, we use the corresponding
$\succ^{\prime}_{i}$ as described above.
###### Lemma 4.6.
For any $r\in[k]$, right before an $r$-th good is allocated to agent $i$ in
Round-Robin$(\bm{\succ})$, there are at most $r-1$ goods from
$S^{\prime}_{r-1}$ that are still unallocated, i.e.,
$\left|S^{\prime}_{r-1}\setminus S_{r-1}\right|\leq r-1$.
###### Proof.
We will prove the statement using induction on $r$. For $r=1$, it is
straightforward that $S_{0}=S^{\prime}_{0}$, as the preference rankings of
agents $1$ through $i-1$ are the same in the two runs of the mechanism and,
thus, the first goods allocated to them are exactly the same.
Now suppose that the statement is true for every round up to round $r$; we
will show that it is true for round $r+1$ as well. Initially, observe that if
the number of unallocated goods from $S^{\prime}_{r-1}$ is $r-1$ right before
a good is allocated to agent $i$ in round $r$, it will trivially be at most
$r-1$ right before a good is allocated to agent $i$ in round $r+1$ (as the
number of unallocated goods from any set cannot increase as the allocation
progresses). That is, $\left|S^{\prime}_{r-1}\setminus S_{r}\right|\leq r-1$.
Notice that the goods that might cause $S^{\prime}_{r}\setminus S_{r}$ to
increase are the elements of
$S^{\prime}_{r}\setminus
S^{\prime}_{r-1}=\\{h^{i}_{r},h^{i+1}_{r},\ldots,h^{n}_{r},h^{1}_{r+1},h^{2}_{r+1},\ldots,h^{i-1}_{r+1}\\}\,,$
and suppose that there are $\lambda$ goods therein which are still unallocated
right before a good is allocated to agent $i$ in round $r+1$ of Round-
Robin$(\bm{\succ})$. Clearly, if $\lambda\leq 1$, we are done. So, assume that
$\lambda\geq 2$. This means that there are $\lambda-1\geq 1$ unallocated goods
in $(S^{\prime}_{r}\setminus S^{\prime}_{r-1})\setminus\\{h^{i}_{r}\\}$. Let
$g$ be one of these goods and let $j$ be the agent to whom $g$ was given,
i.e., $g=h^{j}_{\bar{r}}$, where $\bar{r}=r$, if $j>i$, and $\bar{r}=r+1$, if
$j<i$. In either case, notice that according to $\succ_{j}$ the good $g$ is
better than any good in $M\setminus S^{\prime}_{r}$ or else it would not have
been allocated to $j$ at round $\bar{r}$ of Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$ when everything in $M\setminus
S^{\prime}_{r}$ is still available. We claim that $g$ does not increase the
number of elements in $S^{\prime}_{r}\setminus S_{r}$. Indeed, given that $g$
was available during step $(\bar{r}-1)n+j$ of Round-Robin$(\bm{\succ})$ and
that $j$’s declared preference ranking is still $\succ_{j}$, the only
possibility is that during that step one of the unallocated goods from
$S^{\prime}_{r-1}\cup\\{h^{i}_{r},h^{i+1}_{r},\ldots,h^{j-1}_{\bar{r}}\\}$ was
allocated to $j$ instead.
Therefore, the only good out of the $\lambda$ candidate goods of
$S^{\prime}_{r}\setminus S^{\prime}_{r-1}$ which might count towards the
number of elements in $S^{\prime}_{r}\setminus S_{r}$ is $h^{i}_{r}$. We
conclude that $\left|S^{\prime}_{r}\setminus S_{r}\right|\leq(r-1)+1=r$. ∎
Lemma 4.6 is global, illustrating that the sets $S_{r}$ and $S^{\prime}_{r}$
cannot differ in more than a $1/n$-th of their elements. The next lemma shows
that no agent can accumulate too many goods from $S^{\prime}_{r}$, for any
$0\leq r\leq k-1$. Again, we assume that the valuation functions are either
submodular or cancelable and, in each case, the appropriate
$\succ^{\prime}_{i}$ is used as discussed after the statements of Theorems 4.4
and 4.5. Note that $S^{\prime}_{0}$ in the lemma's statement contains exactly
those goods which we will exclude when showing the EF1 guarantee for our two
theorems.
###### Lemma 4.7.
For any $r\in[k]$ and any $j\in N$, agent $j$ gets at most $2(r-1)$ goods from
$S^{\prime}_{r-1}\setminus S^{\prime}_{0}$ in the allocation
$(A_{1},\ldots,A_{n})$ returned by Round-Robin$(\bm{\succ})$, i.e.,
$|A_{j}\cap(S^{\prime}_{r-1}\setminus S^{\prime}_{0})|\leq 2(r-1)$.
###### Proof.
Fix an $r\in[k]$ and a $j\in N$. Consider the end of step $(r-1)n+i-1$ of
Round-Robin$(\bm{\succ})$, i.e., right before an $r$-th good is allocated to
agent $i$. Ignoring all the goods allocated before $i$ got her first good,
agent $j$ has received exactly $r-1$ goods up to this point. As a result, the
number of goods allocated to $j$ from $S^{\prime}_{r-1}\setminus
S^{\prime}_{0}$ at this point is at most $r-1$.
At the same time, the number of goods from $S^{\prime}_{r-1}\setminus
S^{\prime}_{0}$ that might end up in $A_{j}$ in any future steps of Round-
Robin$(\bm{\succ})$ are at most as many as the goods from $S^{\prime}_{r-1}$
that are still unallocated at the end of step $(r-1)n+i-1$. The latter, by
Lemma 4.6, are also at most $r-1$.
From these two observations, we have that the final bundle $A_{j}$ of agent
$j$ may contain at most $2(r-1)$ goods from $S^{\prime}_{r-1}\setminus
S^{\prime}_{0}$. ∎
With Lemma 4.7 at hand, we are now ready to prove Theorems 4.4 and 4.5.
###### Proof of Theorem 4.4.
We, of course, adopt the notation that has been used throughout this section,
focusing on an arbitrary agent $i\in N$ and assuming that her deviation
$\succ^{\prime}_{i}$ has been the output of Algorithm 4 with input
$\bm{\succ}_{-i}$ and $v_{i}$. In particular, $(A_{1},\ldots,A_{n})$ and
$(A^{\prime}_{1},\ldots,A^{\prime}_{n})$ are the allocations returned by
Round-Robin$(\bm{\succ})$ and Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$, respectively.
Consider another agent $j\in N\setminus\\{i\\}$. Let
$A^{\prime}_{i}=\\{x_{1},x_{2},\ldots,x_{k}\\}$ and
$A_{j}=\\{y_{1},y_{2},\ldots,y_{k}\\}$, where in both sets goods are indexed
in the order in which they were allocated in the run of Round-
Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$. For $A^{\prime}_{i}$, this means
that $x_{r}$ was allocated in round $r$ for all $r\in[k]$. For $A_{j}$, this
indexing guarantees that for every $0\leq\ell<r\leq k-1$, the goods in
$A_{j}\cap(S^{\prime}_{\ell}\setminus S^{\prime}_{\ell-1})$ all have smaller
indices than the goods in $A_{j}\cap(S^{\prime}_{r}\setminus
S^{\prime}_{r-1})$ (where we use the convention that
$S^{\prime}_{-1}=\emptyset$). We further partition $A_{j}\setminus\\{y_{1}\\}$
into $Y_{1}=\\{y^{1}_{1},\ldots,y^{1}_{\tau_{1}}\\}$ and
$Y_{2}=\\{y^{2}_{1},\ldots,y^{2}_{\tau_{2}}\\}$ which contain the goods of
$A_{j}\setminus\\{y_{1}\\}$ with odd and even indices, respectively, and are
both renamed according to Algorithm 3 with inputs $A^{\prime}_{i}$, $Y_{1}$,
$v_{i}$, and $A^{\prime}_{i}$, $Y_{2}$, $v_{i}$, respectively. Clearly,
$\tau_{1}=\lfloor\frac{k-1}{2}\rfloor$ and
$\tau_{2}=\lceil\frac{k-1}{2}\rceil$.
By Lemma 4.7, we have that $A_{j}$ contains at most $2(r-1)$ goods from
$S^{\prime}_{r-1}\setminus S^{\prime}_{0}$, for any $r\in[k]$. The original
ordering $y_{1},y_{2},\ldots$ of the goods in $A_{j}$ and the way
$A_{j}\setminus\\{y_{1}\\}$ was partitioned into $Y_{1}$ and $Y_{2}$ imply
that $\left||Y_{1}\cap(S^{\prime}_{r-1}\setminus
S^{\prime}_{0})|-|Y_{2}\cap(S^{\prime}_{r-1}\setminus
S^{\prime}_{0})|\right|\leq 1$ and, thus, each of $Y_{1}$ and $Y_{2}$ contains
at most $r-1$ goods from $S^{\prime}_{r-1}\setminus S^{\prime}_{0}$.
We also claim that, for $\ell\in\\{1,2\\}$ and $r\in[\tau_{\ell}]$, we have
$v_{i}(x_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\geq
v_{i}(y^{\ell}_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\,.$ (2)
Suppose not. That is, there are $\ell\in\\{1,2\\}$ and $r\in[\tau_{\ell}]$ so
that (2) is violated. Note that, by the way Algorithm 3 ordered the elements
of $Y_{1}$ and $Y_{2}$, this implies
$v_{i}(x_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})<v_{i}(y^{\ell}_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\leq
v_{i}(y^{\ell}_{t}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\,,$
for all $t\in[r]$. Since $x_{r}$ was the good allocated to agent $i$ at step
$(r-1)n+i$ of Round-Robin$(\succ^{\prime}_{i},\bm{\succ}_{-i})$, $x_{r}$ had
maximum marginal value for $i$ with respect to $\\{x_{1},\ldots,x_{r-1}\\}$
among the available goods. Thus, none of the goods
$y^{\ell}_{1},\ldots,y^{\ell}_{r}$ were available at the time, i.e.,
$y^{\ell}_{1},\ldots,y^{\ell}_{r}\in S^{\prime}_{r-1}$. However, the only
good of $A_{j}$ that could possibly be in $S^{\prime}_{0}=S_{0}$ is $y_{1}$,
which is not in $Y_{1}\cup Y_{2}$. Therefore,
$y^{\ell}_{1},\ldots,y^{\ell}_{r}\in S^{\prime}_{r-1}\setminus
S^{\prime}_{0}$, which contradicts the fact that
$|Y_{\ell}\cap(S^{\prime}_{r-1}\setminus S^{\prime}_{0})|\leq r-1$. We
conclude that (2) holds for all $\ell\in\\{1,2\\}$ and $r\in[\tau_{\ell}]$.
We are now ready to apply Theorem 2.1 to bound the value of
$A_{j}\setminus\\{y_{1}\\}$. We have
$\displaystyle v_{i}(A_{j}\setminus\\{y_{1}\\})$ $\displaystyle\leq
v_{i}(A^{\prime}_{i})+\\!\\!\sum_{g\in(A_{j}\setminus\\{y_{1}\\})\setminus
A^{\prime}_{i}}\\!\\!\\!\\!\\!v(g\,|\,A^{\prime}_{i})$
$\displaystyle=v_{i}(A^{\prime}_{i})+\\!\sum_{g\in Y_{1}\setminus
A^{\prime}_{i}}\\!\\!v(g\,|\,A^{\prime}_{i})+\\!\sum_{g\in Y_{2}\setminus
A^{\prime}_{i}}\\!\\!v(g\,|\,A^{\prime}_{i})$
$\displaystyle=v_{i}(A^{\prime}_{i})+\sum_{\ell=1}^{\tau_{1}}v(y^{1}_{{\ell}}\,|\,A^{\prime}_{i})+\sum_{\ell=1}^{\tau_{2}}v(y^{2}_{{\ell}}\,|\,A^{\prime}_{i})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+\sum_{\ell=1}^{\tau_{1}}v(y^{1}_{{\ell}}\,|\,\\{x_{1},\ldots,x_{\ell-1}\\})+\sum_{\ell=1}^{\tau_{2}}v(y^{2}_{{\ell}}\,|\,\\{x_{1},\ldots,x_{\ell-1}\\})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+\sum_{\ell=1}^{\tau_{1}}v(x_{{\ell}}\,|\,\\{x_{1},\ldots,x_{\ell-1}\\})+\sum_{\ell=1}^{\tau_{2}}v(x_{{\ell}}\,|\,\\{x_{1},\ldots,x_{\ell-1}\\})$
$\displaystyle\leq
v_{i}(A^{\prime}_{i})+2\cdot\\!\sum_{\ell=1}^{k}v(x_{\ell}\,|\,\\{x_{1},x_{2},\ldots,x_{\ell-1}\\})$
$\displaystyle=v_{i}(A^{\prime}_{i})+2\cdot v_{i}(A^{\prime}_{i})$
$\displaystyle\leq\frac{3}{\alpha}\cdot v_{i}(A_{i})\,,$
where the first inequality follows directly from Theorem 2.1, the second one
follows from submodularity, the third inequality holds because of (2), the
fourth one follows from the monotonicity of $v_{i}$, and the last inequality
follows from the fact that $\bm{\succ}$ is an $\alpha$-approximate PNE and thus
$v_{i}(A_{i})\geq\alpha\cdot v_{i}(A^{\prime}_{i})$. We conclude that
$(A_{1},A_{2},\ldots,A_{n})$ is $\frac{\alpha}{3}$-EF1 with respect to the
underlying valuation functions. ∎
###### Proof of Theorem 4.5.
Note that in the proof of Theorem 4.4, the submodularity of $v_{i}$ is not
used until the final bounding of $A_{j}\setminus\\{y_{1}\\}$. Up to that
point, the proof here is essentially identical (the only difference being that
now $\succ^{\prime}_{i}$ is a strict version of $i$’s true preference ranking
$\succcurlyeq^{*}_{i}$ but this does not change any of the arguments). In
particular, for $A^{\prime}_{i}=\\{x_{1},x_{2},\ldots,x_{k}\\}$,
$A_{j}=\\{y_{1},y_{2},\ldots,y_{k}\\}$,
$Y_{1}=\\{y^{1}_{1},\ldots,y^{1}_{\tau_{1}}\\}$, and
$Y_{2}=\\{y^{2}_{1},\ldots,y^{2}_{\tau_{2}}\\}$, as in the proof of Theorem
4.4, we still have (2), for any $\ell\in\\{1,2\\}$ and $r\in[\tau_{\ell}]$,
i.e., $v_{i}(x_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})\geq
v_{i}(y^{\ell}_{r}\,|\,\\{x_{1},\ldots,x_{r-1}\\})$.
Notice that (2) can be rewritten as
$v_{i}(\\{x_{1},\ldots,x_{r-1},x_{r}\\})\geq
v_{i}(\\{x_{1},\ldots,x_{r-1},y^{\ell}_{r}\\})$. Since $v_{i}$ is cancelable,
the latter implies that $v_{i}(x_{r})\geq v_{i}(y^{\ell}_{r})$, for
$\ell\in\\{1,2\\}$ and $r\in[\tau_{\ell}]$. Now we apply Lemma 3.3 to get
$v_{i}(\\{x_{1},x_{2},\ldots,x_{\tau_{\ell}}\\})\geq v_{i}(Y_{\ell})$, for
$\ell\in\\{1,2\\}$. At this point, we can easily bound the value of
$A_{j}\setminus\\{y_{1}\\}$. We have
$\displaystyle v_{i}(A_{j}\setminus\\{y_{1}\\})$
$\displaystyle=v_{i}(Y_{1}\cup Y_{2})$ $\displaystyle\leq
v_{i}(Y_{1})+v_{i}(Y_{2})$ $\displaystyle\leq
v_{i}(\\{x_{1},x_{2},\ldots,x_{\tau_{1}}\\})+v_{i}(\\{x_{1},x_{2},\ldots,x_{\tau_{2}}\\})$
$\displaystyle\leq v_{i}(A^{\prime}_{i})+v_{i}(A^{\prime}_{i})$
$\displaystyle\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})\,,$
where the first inequality follows from subadditivity, the third one follows
from the monotonicity of $v_{i}$, and the last inequality follows from the
fact that $\bm{\succ}$ is an $\alpha$-approximate PNE. We conclude that
$(A_{1},\ldots,A_{n})$ is $\frac{\alpha}{2}$-EF1 with respect to the
underlying valuation functions. ∎
The ${\alpha}/({2-\alpha})$ upper bound of Theorem 4.3 for the additive case
applies to both submodular and subadditive cancelable valuation functions,
leaving a very small gap for the latter. For the submodular case, we improve
this upper bound to ${\alpha}/{2}$.
###### Proposition 4.8.
Let $\alpha,\varepsilon\in(0,1]$. For instances with submodular valuation
functions $\\{v_{i}\\}_{i\in N}$, an $\alpha$-approximate PNE of the Round-
Robin mechanism may not be $(\frac{\alpha}{2}+\varepsilon)$-EF1 with respect
to $\\{v_{i}\\}_{i\in N}$.
###### Proof.
We construct an instance with four agents and nine goods, i.e., $N=[4]$ and
$M=\\{g_{1},g_{2},\ldots,g_{9}\\}$. Let
$1\gg\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{3}>\varepsilon_{4}>\varepsilon_{5}>\varepsilon_{6}$
and $\beta>({1+\varepsilon_{4}})/{2}$. The first three agents have additive
valuation functions, defined as follows:
$v_{1}(g_{j})=\begin{cases}5,&\text{if $j=1$}\\\ \varepsilon_{5},&\text{if
$j=2$}\\\ \varepsilon_{6},&\text{if $j=3$}\\\ 1,&\text{if $j=4$}\\\
2,&\text{if $j=5$}\\\ \varepsilon_{1},&\text{if $j=6$}\\\
\varepsilon_{2},&\text{if $j=7$}\\\ \varepsilon_{3},&\text{if $j=8$}\\\
\varepsilon_{4},&\text{if $j=9$}\end{cases}$
$v_{2}(g_{j})=\begin{cases}\varepsilon_{5},&\text{if $j=1$}\\\ 5,&\text{if
$j=2$}\\\ \varepsilon_{6},&\text{if $j=3$}\\\ 1,&\text{if $j=4$}\\\
\varepsilon_{1},&\text{if $j=5$}\\\ \varepsilon_{2},&\text{if $j=6$}\\\
2,&\text{if $j=7$}\\\ \varepsilon_{3},&\text{if $j=8$}\\\
\varepsilon_{4},&\text{if $j=9$}\end{cases}$
$v_{3}(g_{j})=\begin{cases}\varepsilon_{5},&\text{if $j=1$}\\\
\varepsilon_{6},&\text{if $j=2$}\\\ 5,&\text{if $j=3$}\\\
\varepsilon_{1},&\text{if $j=4$}\\\ \varepsilon_{2},&\text{if $j=5$}\\\
2,&\text{if $j=6$}\\\ \varepsilon_{3},&\text{if $j=7$}\\\
\varepsilon_{4},&\text{if $j=8$}\\\ 1,&\text{if $j=9$}.\end{cases}$
Agent $4$ has an OXS (and, thus, submodular) valuation function that is
defined by the maximum weight matchings in the bipartite graph below.
[Figure: bipartite graph defining $v_{4}$, with goods $g_{1},\ldots,g_{9}$ on one side and edge weights $5\beta$, $4\beta$, $3\beta$, $2\beta$, $2\beta-\varepsilon_{4}$, $1$, $1-\varepsilon_{3}$, $\varepsilon_{1}$, $\varepsilon_{2}$.]
Now consider a bidding profile where the first three agents bid truthfully
(i.e., they bid the strict preference rankings
$\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3}$ which are consistent with
$v_{1},v_{2},v_{3}$), while the fourth agent bids the preference ranking
$\succ_{4}$:
$g_{3}\succ_{4}g_{6}\succ_{4}g_{8}\succ_{4}g_{1}\succ_{4}g_{2}\succ_{4}g_{4}\succ_{4}g_{5}\succ_{4}g_{7}\succ_{4}g_{9}$.
It is easy to confirm that the produced allocation is
$(A_{1},A_{2},A_{3},A_{4})=(\\{g_{1},g_{4},g_{5}\\},\\{g_{2},g_{7}\\},\\{g_{3},g_{9}\\},\\{g_{6},g_{8}\\})$.
We first examine the first three agents. Agents $1$ and $2$ get their most
valuable goods in this allocation, which implies that there is no
profitable deviation for them. For the same reason they are also envy-free
towards the other agents. Regarding agent $3$, the only bundle that improves
her utility is $\\{g_{3},g_{6}\\}$. However, there is no bid that she can
report and get these two goods. The reason for this is that if she does not
get good $g_{3}$ in round 1 of Mechanism 1 (by not declaring it as her best
good among the available ones), then $g_{3}$ is lost to agent $4$. If, on the
other hand, she gets good $g_{3}$ in round 1 (by declaring it as her best good
among the available ones), then good $g_{6}$ is lost to agent $4$. Therefore,
there is no profitable deviation for her. Finally, it is easy to see that she
is also envy-free towards the other agents.
Moving to agent $4$, we have that
$v_{4}(A_{i})=\begin{cases}v_{4}(g_{1})+4\beta-\varepsilon_{4},&\text{if
$i=1$}\\\ v_{4}(g_{2})+1-\varepsilon_{3},&\text{if $i=2$}\\\
v_{4}(g_{3})+\varepsilon_{2},&\text{if $i=3$}\\\ 1+\varepsilon_{1},&\text{if
$i=4$},\end{cases}$
where $g_{1},g_{2},g_{3}$ are the most valuable goods from sets
$A_{1},A_{2},A_{3}$, respectively, according to agent $4$. Therefore,
$v_{4}(A_{1}\setminus\\{g_{1}\\})>v_{4}(A_{2}\setminus\\{g_{2}\\})>v_{4}(A_{3}\setminus\\{g_{3}\\})$,
and by comparing $v_{4}(A_{4})$ with $v_{4}(A_{1}\setminus\\{g_{1}\\})$ we get
that agent $4$ is $\frac{1+\varepsilon_{1}}{4\beta-\varepsilon_{4}}$-EF1
towards agent 1. The only thing that remains is to explore the possible
deviations of agent 4. Initially, notice that regardless of what agent $4$
declares, she cannot get goods $g_{1},g_{2},g_{3}$ as these are taken in round
1 by the agents that precede her. With that in mind, we will examine what is
the best attainable value through deviating, based on what she gets in round
1. Take note that she can get any goods from $\\{g_{4},g_{5},\ldots,g_{9}\\}$
in round 1 as they are available when her turn comes:
* •
Agent $4$ gets good $g_{4}$ in round 1. Based on the reported preferences
$\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3}$ of the other agents, in round 2 we
have the following: Good $g_{5}$ is lost to agent 1, good $g_{7}$ is lost to
agent 2, and good $g_{6}$ to agent 3. Therefore, only goods $g_{8}$ and
$g_{9}$ remain available for agent 4, and she can get only one of them. Thus,
the maximum attainable value for her is $2\beta+\varepsilon_{1}$.
* •
Agent $4$ gets good $g_{5}$ in round 1. In that case, based on the declaration
of the rest of the agents, in round 2 we have the following: Good $g_{4}$ is
lost to agent 1, good $g_{7}$ is lost to agent 2, and good $g_{6}$ to agent 3.
Therefore, only goods $g_{8}$ and $g_{9}$ remain available for agent 4, and
once more she can get only one of them. Thus, the maximum attainable value for
her is $2\beta-\varepsilon_{4}+\varepsilon_{1}$.
* •
Agent $4$ gets good $g_{6}$ in round 1. Based on the reported preferences
$\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3}$ of the other agents, in round 2 we
have the following: Good $g_{5}$ is lost to agent 1, good $g_{7}$ is lost to
agent 2, and good $g_{9}$ to agent 3. Therefore, only goods $g_{4}$ and
$g_{9}$ remain available for agent 4. Now observe that
$v_{4}(g_{4},g_{6})=2\beta$ (as this is the value of the maximum matching),
while $v_{4}(g_{9},g_{6})=1+\varepsilon_{2}$. Thus, the maximum attainable
value for her is $2\beta$.
* •
Agent $4$ gets good $g_{7}$ in round 1. Based on the reported preferences
$\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3}$ of the other agents, in round 2 we
have the following: Good $g_{5}$ is lost to agent 1, good $g_{4}$ is lost to
agent 2, and good $g_{6}$ to agent 3. Therefore, only goods $g_{8}$ and
$g_{9}$ remain available for agent 4, and once more she can get only one of
them. Thus, the maximum attainable value for her is
$1-\varepsilon_{3}+\varepsilon_{1}$.
* •
Agent $4$ gets good $g_{8}$ in round 1. Based on the reported preferences
$\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3}$ of the other agents, in round 2 we
have the following: Good $g_{5}$ is lost to agent 1, good $g_{7}$ is lost to
agent 2, and good $g_{6}$ to agent 3. Therefore, only goods $g_{4}$ and
$g_{9}$ remain available for agent 4, and once more she can get only one of
them. Thus, the maximum attainable value for her is $2\beta+\varepsilon_{1}$.
* •
Agent $4$ gets good $g_{9}$ in round 1. In that case, based on the declaration
of the rest of the agents, in round 2 we have the following: Good $g_{5}$ is
lost to agent 1, good $g_{7}$ is lost to agent 2, and good $g_{6}$ to agent 3.
Therefore, only goods $g_{4}$ and $g_{8}$ remain available for agent 4, and
once more she can get only one of them. Thus, the maximum attainable value for
her is $2\beta+\varepsilon_{2}$.
From the above discussion we get that the maximum value that agent $4$ can
attain through a deviation is $2\cdot\beta+\varepsilon_{1}$. At the same time
$v_{4}(A_{4})=1+\varepsilon_{1}$. By setting
$\alpha=\frac{1+\varepsilon_{1}}{2\cdot\beta+\varepsilon_{1}}$ we trivially
have that $(\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3},\succ_{4})$ is an
$\alpha$-approximate PNE. On the other hand, for a given $\varepsilon>0$, we
have that
$\frac{\alpha}{2}+\varepsilon=\frac{1+\varepsilon_{1}}{4\beta+2\varepsilon_{1}}+\varepsilon$
is strictly larger than $\frac{1+\varepsilon_{1}}{4\beta-\varepsilon_{4}}$ for
sufficiently small $\varepsilon_{1}$. That is, there is a choice of
$\varepsilon_{1},\ldots,\varepsilon_{6}$ so that the $\alpha$-approximate PNE
$(\succ^{*}_{1},\succ^{*}_{2},\succ^{*}_{3},\succ_{4})$ is not
$\big{(}\frac{\alpha}{2}+\varepsilon\big{)}$-EF1. ∎
## 5 Discussion and Future Directions
In this work we studied the existence and fairness guarantees of the
approximate pure Nash equilibria of the Round-Robin mechanism for agents with
cancelable and submodular valuation functions. In both cases, we generalized
the surprising connection between the stable states of the mechanism and its
fairness properties, a connection that was only known for exact equilibria and
additive valuation functions. For the function classes considered, we provide
tight or almost tight bounds, thus giving a complete picture of the strengths
and the limitations of the Round-Robin mechanism for these scenarios. There
are several interesting related directions, some of which we discuss below.
An obvious first direction is to explore function classes beyond the ones
studied here, with XOS or subadditive functions being prominent candidates.
Since our results heavily rely on the properties of cancelable and submodular
functions, it is likely that different approaches are needed for this
endeavor. As we mention in the introduction, a second interesting direction,
related to this one, is the study of the stability and fairness properties of
variants of the Round-Robin mechanism that allow the agents to be more
expressive. Analyzing mechanisms that take as an input value oracles seems to
be highly non-trivial, and although some of our results might transfer in this
setting, we suspect that, in general, strong impossibility results hold
regarding the fairness guarantees of approximate PNE.
Finally, although here we focused on Round-Robin and EF1, most fair division
algorithms have not been considered in the strategic setting. One promising
such algorithm, which is both fundamental in a number of variants of the
problem and simple enough, is the Envy-Cycle-Elimination algorithm of Lipton
et al. [28] which is known to compute EF1 allocations for general non-
decreasing valuation functions. An appealing alternative here is studying the
existence of equilibria of approximation algorithms for MMS allocations. An
important advantage in this case is that once the existence of an approximate
PNE is shown, the corresponding MMS guarantee comes for free (see also the
related discussion in Remark 2.9 of Amanatidis et al. [5]).
## References
* Akrami et al. [2022] H. Akrami, B. R. Chaudhury, J. Garg, K. Mehlhorn, and R. Mehta. EFX allocations: Simplifications and improvements. _CoRR_ , abs/2205.07638, 2022. doi: 10.48550/arXiv.2205.07638. URL https://doi.org/10.48550/arXiv.2205.07638.
* Amanatidis et al. [2017a] G. Amanatidis, G. Birmpas, G. Christodoulou, and E. Markakis. Truthful allocation mechanisms without payments: Characterization and implications on fairness. In _Proceedings of the 2017 ACM Conference on Economics and Computation, EC’ 17_ , pages 545–562. ACM, 2017a.
* Amanatidis et al. [2017b] G. Amanatidis, E. Markakis, A. Nikzad, and A. Saberi. Approximation algorithms for computing maximin share allocations. _ACM Trans. Algorithms_ , 13(4):52:1–52:28, 2017b.
* Amanatidis et al. [2020] G. Amanatidis, E. Markakis, and A. Ntokos. Multiple birds with one stone: Beating 1/2 for EFX and GMMS via envy cycle elimination. _Theor. Comput. Sci._ , 841:94–109, 2020.
* Amanatidis et al. [2021] G. Amanatidis, G. Birmpas, F. Fusco, P. Lazos, S. Leonardi, and R. Reiffenhäuser. Allocating indivisible goods to strategic agents: Pure Nash equilibria and fairness. In _Proceedings of the 17th International Conference on Web and Internet Economics, WINE 2021_ , volume 13112 of _LNCS_ , pages 149–166, 2021.
* Amanatidis et al. [2022] G. Amanatidis, H. Aziz, G. Birmpas, A. Filos-Ratsikas, B. Li, H. Moulin, A. A. Voudouris, and X. Wu. Fair division of indivisible goods: A survey. _CoRR_ , abs/2208.08782, 2022. doi: 10.48550/arXiv.2208.08782. URL https://doi.org/10.48550/arXiv.2208.08782.
* Aziz et al. [2017a] H. Aziz, S. Bouveret, J. Lang, and S. Mackenzie. Complexity of manipulating sequential allocation. In _Proceedings of the 31st AAAI Conference on Artificial Intelligence, AAAI ’17_ , pages 328–334. AAAI Press, 2017a.
* Aziz et al. [2017b] H. Aziz, P. Goldberg, and T. Walsh. Equilibria in sequential allocation. In _Proceedings of the 5th International Conference on Algorithmic Decision Theory, ADT ’17_ , volume 10576 of _LNCS_ , pages 270–283. Springer, 2017b.
* Aziz et al. [2022] H. Aziz, I. Caragiannis, A. Igarashi, and T. Walsh. Fair allocation of indivisible goods and chores. _Autonomous Agents and Multi Agent Systems_ , 36(1):3, 2022.
* Babaioff et al. [2021] M. Babaioff, T. Ezra, and U. Feige. Fair and truthful mechanisms for dichotomous valuations. In _Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI 2021_ , pages 5119–5126. AAAI Press, 2021.
* Barman and Krishnamurthy [2020] S. Barman and S. K. Krishnamurthy. Approximation algorithms for maximin fair division. _ACM Trans. Economics and Comput._ , 8(1):5:1–5:28, 2020.
* Berger et al. [2022] B. Berger, A. Cohen, M. Feldman, and A. Fiat. Almost full EFX exists for four agents. In _Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022_ , pages 4826–4833. AAAI Press, 2022.
* Bouveret and Lang [2014] S. Bouveret and J. Lang. Manipulating picking sequences. In _Proceedings of the 21st European Conference on Artificial Intelligence - ECAI 2014_ , volume 263, pages 141–146. IOS Press, 2014.
* Bouveret and Lemaître [2016] S. Bouveret and M. Lemaître. Characterizing conflicts in fair division of indivisible goods using a scale of criteria. _Autonomous Agents and Multi-Agent Systems_ , 30(2):259–290, 2016.
* Bouveret et al. [2016] S. Bouveret, Y. Chevaleyre, and N. Maudet. Fair allocation of indivisible goods. In _Handbook of Computational Social Choice_ , pages 284–310. Cambridge University Press, 2016.
* Budish [2011] E. Budish. The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. _Journal of Political Economy_ , 119(6):1061–1103, 2011.
* Caragiannis et al. [2009] I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, and M. Kyropoulou. On low-envy truthful allocations. In _Proceedings of the 1st International Conference on Algorithmic Decision Theory, ADT 2009_ , volume 5783 of _LNCS_ , pages 111–119. Springer, 2009.
* Caragiannis et al. [2019] I. Caragiannis, D. Kurokawa, H. Moulin, A. D. Procaccia, N. Shah, and J. Wang. The unreasonable fairness of maximum Nash welfare. _ACM Trans. Economics and Comput._ , 7(3):12:1–12:32, 2019.
* Caragiannis et al. [2022] I. Caragiannis, J. Garg, N. Rathi, E. Sharma, and G. Varricchio. Existence and computation of epistemic efx allocations. _CoRR_ , abs/2206.01710, 2022. doi: 10.48550/arXiv.2206.01710. URL https://doi.org/10.48550/arXiv.2206.01710.
* Chaudhury et al. [2021] B. R. Chaudhury, J. Garg, and R. Mehta. Fair and efficient allocations under subadditive valuations. In _Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI 2021_ , pages 5269–5276. AAAI Press, 2021.
* Foley [1967] D. K. Foley. Resource allocation and the public sector. _Yale Economics Essays_ , 7:45–98, 1967.
* Gamow and Stern [1958] G. Gamow and M. Stern. _Puzzle-Math_. Viking press, 1958.
* Ghodsi et al. [2022] M. Ghodsi, M. T. Hajiaghayi, M. Seddighin, S. Seddighin, and H. Yami. Fair allocation of indivisible goods: Beyond additive valuations. _Artif. Intell._ , 303:103633, 2022.
* Halpern et al. [2020] D. Halpern, A. D. Procaccia, A. Psomas, and N. Shah. Fair division with binary valuations: One rule to rule them all. In _Proceedings of the 16th International Conference on Web and Internet Economics, WINE 2020_ , volume 12495 of _LNCS_ , pages 370–383. Springer, 2020.
* Kurokawa [2017] D. Kurokawa. _Fair Division in Game Theoretic Settings_. PhD thesis, Carnegie Mellon University, 2017.
* Lehmann et al. [2006] B. Lehmann, D. Lehmann, and N. Nisan. Combinatorial auctions with decreasing marginal utilities. _Games Econ. Behav._ , 55(2):270–296, 2006.
* Leme [2017] R. P. Leme. Gross substitutability: An algorithmic survey. _Games Econ. Behav._ , 106:294–316, 2017.
* Lipton et al. [2004] R. J. Lipton, E. Markakis, E. Mossel, and A. Saberi. On approximately fair allocations of indivisible goods. In _Proceedings of the 5th ACM Conference on Electronic Commerce, EC ’04_ , pages 125–131. ACM, 2004.
* Manurangsi and Suksompong [2021] P. Manurangsi and W. Suksompong. Closing gaps in asymptotic fair division. _SIAM Journal on Discrete Mathematics_ , 35(2):668–706, 2021.
* Markakis [2017] E. Markakis. Approximation algorithms and hardness results for fair division with indivisible goods. In _Trends in Computational Social Choice_ , chapter 12. AI Access, 2017.
* Markakis and Psomas [2011] E. Markakis and C. Psomas. On worst-case allocations in the presence of indivisible goods. In _Proceedings of the 7th International Conference on Web and Internet Economics, WINE 2011_ , volume 7090 of _LNCS_ , pages 278–289. Springer, 2011.
* Nemhauser et al. [1978] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. _Math. Program._ , 14(1):265–294, 1978.
* Plaut and Roughgarden [2020] B. Plaut and T. Roughgarden. Almost envy-freeness with general valuations. _SIAM J. Discret. Math._ , 34(2):1039–1068, 2020.
* Procaccia [2016] A. D. Procaccia. Cake cutting algorithms. In _Handbook of Computational Social Choice_ , pages 311–330. Cambridge University Press, 2016.
* Psomas and Verma [2022] A. Psomas and P. Verma. Fair and efficient allocations without obvious manipulations. In _Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022_ , 2022.
* Steinhaus [1949] H. Steinhaus. Sur la division pragmatique. _Econometrica_ , 17 (Supplement):315–319, 1949.
* Varian [1974] H. R. Varian. Equity, envy and efficiency. _Journal of Economic Theory_ , 9:63–91, 1974.
# Deterministic Finite-Memory Bias Estimation

Tomer Berg (School of Electrical Engineering, Tel Aviv University), Or Ordentlich (School of Computer Science and Engineering, Hebrew University of Jerusalem), and Ofer Shayevitz (School of Electrical Engineering, Tel Aviv University)
###### Abstract
In this paper we consider the problem of estimating a Bernoulli parameter
using finite memory. Let $X_{1},X_{2},\ldots$ be a sequence of independent
identically distributed Bernoulli random variables with expectation $\theta$,
where $\theta\in[0,1]$. Consider a finite-memory deterministic machine with
$S$ states, that updates its state $M_{n}\in\\{1,2,\ldots,S\\}$ at each time
according to the rule $M_{n}=f(M_{n-1},X_{n})$, where $f$ is a deterministic
time-invariant function. Assume that the machine outputs an estimate at each
time point according to some fixed mapping from the state space to the unit
interval. The quality of the estimation procedure is measured by the
asymptotic risk, which is the long-term average of the instantaneous quadratic
risk. The main contribution of this paper is an upper bound on the smallest
worst-case asymptotic risk any such machine can attain. This bound coincides
with a lower bound derived by Leighton and Rivest, implying that $\Theta(1/S)$
is the minimax asymptotic risk for deterministic $S$-state machines. In
particular, our result disproves a longstanding $\Theta(\log S/S)$ conjecture
for this quantity, also posed by Leighton and Rivest.
###### keywords:
Learning with Memory Constraints, Parametric Estimation, Minimax Estimation.
## 1 Introduction
The statistical hardness of a parametric estimation problem has been
traditionally characterized by the number of independent samples from the
distribution $P_{\theta}$ one needs to see in order to accurately estimate
$\theta$. However, as the amount of available data is constantly increasing,
collecting enough samples for accurate estimation is becoming less of a
problem, provided that the parameter $\theta$ is of a relatively low
dimension. In this regime, it is the computational resources dedicated to the
estimation task, rather than the number of samples, that constitute the main
bottleneck determining the quality of estimation one can attain.
As a result, the topic of estimation / learning under computational
constraints is currently drawing considerable attention; in particular, the
problem of estimation / learning under memory constraints has been recently
studied in various different setups, as we further elaborate in Subsection
1.1. Despite this ongoing effort, there are still substantial gaps in the
understanding of the effects memory limitations can have on the minimal
possible estimation error. This work addresses such a gap in arguably the
simplest setup possible: estimation of a single parameter $\theta\in[0,1]$
from an infinite number of independent samples from $P_{\theta}$, using a
finite-memory learning algorithm.
Specifically, we consider the bias estimation problem, defined as follows:
$X_{1},X_{2},\ldots$ is a sequence of independent identically distributed
random variables drawn according to the $\mathsf{Bern}(\theta)$ distribution.
An $S$-state estimation procedure for this problem consists of two functions:
$f$, and $\hat{\theta}$, where $f:[S]\times\\{0,1\\}\rightarrow[S]$ is a
deterministic state transition (or memory update) function (here
$[S]=\\{1,\ldots,S\\}$), and $\hat{\theta}:[S]\rightarrow[0,1]$ is the
estimate function. Letting $M_{n}$ denote the state of the memory at time $n$,
this finite-state machine evolves according to the rule
$\displaystyle M_{0}$ $\displaystyle=s_{\text{init}},$ (1) $\displaystyle
M_{n}$ $\displaystyle=f(M_{n-1},X_{n})\in[S],$ (2)
for some predetermined initial state $s_{\text{init}}\in[S]$. If the machine
is stopped at time $n$, it outputs the estimation $\hat{\theta}(M_{n})$. We
define the (asymptotic) quadratic risk attained by this estimation procedure,
given that the true value of the parameter is $\theta$, to be (it is not
difficult to show that the limit exists due to the underlying finite-state
structure and the independence of the samples)
$\displaystyle
R_{\theta}(f,\hat{\theta})=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}\operatorname{\mathbb{E}}\left(\hat{\theta}(M_{i})-\theta\right)^{2}.$
(3)
We are interested in the _minimax risk_ of the estimation problem, defined as
$\displaystyle
R(S)\triangleq\min_{f,\hat{\theta}}\max_{\theta\in[0,1]}R_{\theta}(f,\hat{\theta}),$
(4)
where the minimum is taken over all $S$-state estimation procedures. This
paper derives an upper bound on the minimax risk, which together with a known
lower bound, establishes the exact behavior of the minimax risk with $S$.
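For intuition, note that a pair $(f,\theta)$ induces a finite Markov chain on the state space, so the asymptotic risk (3) can be evaluated numerically as a stationary expectation. The Python sketch below does exactly that for an arbitrary machine; the saturating-counter machine and the estimate map $\hat{\theta}(s)=s/(S-1)$ in the demo are hypothetical illustrations (they are not the construction behind Theorem 1.1, and this naive machine has poor worst-case risk).

```python
import numpy as np

def asymptotic_risk(S, f, theta_hat, theta, s_init=0, horizon=10**5):
    """Approximate the asymptotic risk (3) of a deterministic S-state
    machine: build the Markov chain induced by (f, theta) on states
    0, ..., S-1 and take the occupancy after many steps (this converges
    to the limiting occupancy when the induced chain is aperiodic)."""
    P = np.zeros((S, S))
    for s in range(S):
        P[s, f(s, 0)] += 1.0 - theta   # X_n = 0 with probability 1 - theta
        P[s, f(s, 1)] += theta         # X_n = 1 with probability theta
    dist = np.zeros(S)
    dist[s_init] = 1.0
    dist = dist @ np.linalg.matrix_power(P, horizon)
    est = np.array([theta_hat(s) for s in range(S)])
    return float(dist @ (est - theta) ** 2)

# Hypothetical demo machine: a saturating counter (states are 0-indexed here).
S = 32
f = lambda s, x: min(s + 1, S - 1) if x == 1 else max(s - 1, 0)
print(asymptotic_risk(S, f, lambda s: s / (S - 1), theta=0.3))
```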
Note that here the memory update function $f$ is not allowed to depend on
time. First, as our focus here is on upper bounds, it is always desirable to
use the weakest possible model. Moreover, the restriction to time-invariant
algorithms is operationally appealing, since storing the time index
necessarily incurs a memory cost. Furthermore, since the number of samples is
unbounded, just storing the code generating a time-varying algorithm may
require unbounded memory.
Besides memory, another resource that plays an important role here is
randomness. While allowing the use of randomization in the estimation
procedure may certainly help, this resource has a cost. Even if one has access
to unlimited randomness (which is the case in our setting, since randomness
can be easily extracted from the i.i.d. sequence $X_{1},X_{2},\ldots$),
storing this randomness takes a toll on one’s memory budget, which needs to
be taken into account in our deterministic setup. One can nevertheless define
the randomized minimax risk of the estimation problem, to be the smallest
asymptotic risk that can be uniformly guaranteed when randomized state-
transition functions $f$ are allowed, i.e.,
$\displaystyle R_{\mathsf{rand}}(S)\triangleq\min_{\text{randomized }f,\hat{\theta}}\,\max_{\theta\in[0,1]}R_{\theta}(f,\hat{\theta}).$ (5)
We emphasize that in the above, in contrast to the deterministic setup we
consider in this paper, randomness is “free” and not counted toward the memory
budget. Our main result is that, in contrast to what was conjectured by
[Leighton and Rivest(1986)], $R(S)$ and $R_{\mathsf{rand}}(S)$ are equal up to
constants independent of $S$.
Let us be more precise. Prior to this work, it was known that
$R_{\mathsf{rand}}(S)=\Theta(1/S)$. The upper bound was proved by
[Samaniego(1973)], who constructed an $S$-state randomized estimation
procedure that asymptotically induces a $\mathrm{Binomial}(S-1,\theta)$
stationary distribution on the memory state space. The lower bound was
established a decade later by [Leighton and Rivest(1986)], using the Markov
chain tree theorem. In the same paper, [Leighton and Rivest(1986)] further
constructed a deterministic $S$-state estimation procedure by de-randomizing
Samaniego’s construction, and as a result showed that $R(S)=O(\log{S}/S)$.
They then conjectured that this is the best possible asymptotic minimax risk
any deterministic $S$-state estimation procedure can attain, and further
stated the problem of proving or disproving this conjecture as the first out
of five open problems left for future research. A nice interpretation of their
conjecture is the naturally appealing claim that an optimal deterministic
algorithm can be obtained by de-randomizing the optimal random algorithm. In
their deterministic algorithm, which they believed to be optimal, randomness
is extracted from the measurements by augmenting each state with $O(\log(S))$
additional states, which increases the overall MSE (see Section III of
[Leighton and Rivest(1986)]). Surprisingly, we show that such a de-
randomization is suboptimal, thereby disproving the conjecture of Leighton and
Rivest.
###### Theorem 1.1.
$\displaystyle R(S)=O\left(\frac{1}{S}\right).$ (6)
Since deterministic $S$-state estimation procedures are a subset of the class
of $S$-state randomized estimation procedures, we clearly have $R(S)\geq
R_{\mathsf{rand}}(S)=\Omega(1/S)$, where the lower bound is due to [Leighton
and Rivest(1986)]. Consequently:
###### Corollary 1.2.
$\displaystyle R(S)=\Theta\left(\frac{1}{S}\right).$ (7)
### 1.1 Related work
The study of learning and estimation under memory constraints was initiated
in the late 1960s by Cover and Hellman (with a precursor by [Robbins(1956)])
and remained an active research area for a decade or so. It was then largely
abandoned, but has recently been enjoying renewed attention, for the reasons
described above, and many works have addressed different aspects of the
learning-under-memory-constraints problem over the last few years. See, e.g.,
[Steinhardt and Duchi(2015), Steinhardt et
al.(2016)Steinhardt, Valiant, and Wager, Raz(2018), Dagan and Shamir(2018),
Dagan et al.(2019)Dagan, Kur, and Shamir, Sharan et al.(2019)Sharan, Sidford,
and Valiant] for a far from exhaustive list of recent works.
Most of the old work on learning with finite memory has been focused on the
hypothesis testing problem. For the problem of deciding whether an i.i.d.
sequence was drawn from $\mathsf{Bern}(p)$ or $\mathsf{Bern}(q)$,
[Cover(1969)] described a time-varying finite-state machine with only $S=4$
states, whose error probability approaches zero with the sequence length. As
time-varying procedures suffer from the shortcomings described above, [Hellman
and Cover(1970)] addressed the same binary hypothesis testing problem within
the class of time-invariant randomized procedures. They found an _exact_
characterization of the smallest attainable error probability for this
problem. To demonstrate the important role randomization plays in approaching
this value, the same authors show in [Hellman and Cover(1971)] that for any
memory size $S<\infty$ and $\delta>0$, there exist problems such that any
$S$-state deterministic procedure has probability of error
$\operatorname{\mathsf{P_{e}}}\geq\frac{1}{2}-\delta$, while their randomized
procedure from [Hellman and Cover(1970)] has
$\operatorname{\mathsf{P_{e}}}\leq\delta$. Note that one can simulate a
randomized procedure with a deterministic one by using some of the samples of
$\\{X_{n}\\}$ for randomness extraction, e.g., using [Von Neumann(1951)]
extraction. However, the extracted random bits must be stored, which could
result in a substantial increase in memory; see [Chandrasekaran(1970)]. In a
recent paper, [Berg et al.(2020)Berg, Ordentlich, and Shayevitz] derived a
lower bound on the error probability attained by any $S$-state deterministic
procedure, showing that while the smallest attainable error probability
decreases exponentially fast with $S$ in both the randomized and the
deterministic setups, the base of the exponent can be arbitrarily larger in
the deterministic case.
One of the earlier works on estimation with finite memory is due to [Roberts
and Tooley(1970)], who tackled the problem of estimation under quadratic risk
for a random variable with additive noise. [Hellman(1974)] studied the problem
of estimating the mean $\theta$ of a Gaussian distribution and discovered an
$S$-state estimation procedure that asymptotically achieves the same Bayesian
quadratic risk as the optimal $S$-level quantizer $Q(\theta)$ for $\theta$,
where $Q:\mathbb{R}\to[S]$. As already described above, [Samaniego(1973)] and
[Leighton and Rivest(1986)] showed that
$R_{\mathsf{rand}}(S)=\Theta(1/S)$. [Meron and Feder(2004), Ingber and
Feder(2006), Dar and Feder(2014)] studied the subject of finite-memory
universal prediction of sequences using randomized/deterministic machines.
More recently, [Jain and Tyagi(2018)] studied the shrinkage in memory between
the hypothesis testing and the estimation problem, namely the interesting fact
that a machine with $S$ states can distinguish between two coins with biases
that differ by $1/S$, whereas the best additive accuracy it can achieve in
estimating the bias is only $1/\sqrt{S}$. We further note that the problem of
estimating statistics with bounded memory is attracting considerable attention
in the machine learning literature lately, see, e.g., [Chien et
al.(2010)Chien, Ligett, and McGregor, Kontorovich(2012), McGregor et
al.(2012)McGregor, Pavan, Tirthapura, and Woodruff, Steinhardt and
Duchi(2015), Steinhardt et al.(2016)Steinhardt, Valiant, and Wager, Raz(2018),
Dagan and Shamir(2018), Dagan et al.(2019)Dagan, Kur, and Shamir, Sharan et
al.(2019)Sharan, Sidford, and Valiant]. Another closely related active line of
work is that of estimating statistics under limited communication, e.g.,
[Zhang et al.(2013)Zhang, Duchi, Jordan, and Wainwright, Garg et
al.(2014)Garg, Ma, and Nguyen, Braverman et al.(2016)Braverman, Garg, Ma,
Nguyen, and Woodruff, Xu and Raginsky(2017), Jordan et al.(2018)Jordan, Lee,
and Yang, Han et al.(2018a)Han, Özgür, and Weissman, Han et al.(2018b)Han,
Ozgur, and Weissman, Barnes et al.(2018)Barnes, Han, and Özgür, Acharya et
al.(2018)Acharya, Canonne, and Tyagi, Hadar et al.(2019)Hadar, Liu,
Polyanskiy, and Shayevitz, Hadar and Shayevitz(2019), Acharya et
al.(2020)Acharya, Canonne, and Tyagi].
## 2 Proof of Theorem 1.1
We now proceed to prove Theorem 1.1. We will describe our deterministic
$S$-state estimation procedure and show that it attains quadratic risk of
$O(1/S)$ uniformly for all $\theta\in[0,1]$. In this section we provide the
entire proof, but for clarity we rely on several technical claims whose proofs
are relegated to the next section or to the Appendix.
Recall from (1) and (2) that any deterministic $S$-state estimation procedure
corresponds to a finite-state machine with $S$ states, with at most two
outgoing edges from each state, one for $X_{i}=0$ and one for $X_{i}=1$.
Running this machine on an i.i.d. $\mathsf{Bern}{(\theta)}$ input sequence
$X_{1},X_{2},\ldots$, generates a Markov chain $\\{M_{n}\\}_{n=1}^{\infty}$,
where $M_{n}$ denotes the state of the machine at time $n$. We emphasize that
the distribution of the process $\\{M_{n}\\}$ depends on $\theta$, which is
the parameter we are trying to estimate. To lighten notation, we nevertheless
leave this dependence implicit. The construction we describe below trivially
achieves $R_{\theta}(f,\hat{\theta})=O(1/S)$ for $\theta=0$ and $\theta=1$,
and thus in the remainder of the paper we assume without loss of generality
that $\theta\in(0,1)$.
The high-level idea underlying our scheme is to break down the memory-
constrained estimation task into a sequence of memory-constrained (composite)
binary hypothesis testing sub-problems. In each such sub-problem, the goal is
to decide whether the true underlying parameter $\theta$ satisfies
$\\{\theta<q\\}$ or $\\{\theta>p\\}$, for some $0<q<p<1$. Those decisions are
then used in order to traverse an induced Markov chain in a way that enables
us to accurately estimate $\theta$.
Let us now describe the particular structure of the proposed machine. In our
construction, the state space $[S]$ is partitioned into $K$ disjoint sets
denoted by $\mathcal{S}_{1},\ldots,\mathcal{S}_{K}$, where the estimation
function value is the same inside each $\mathcal{S}_{k}$, i.e.,
$\displaystyle\hat{\theta}(s)=\hat{\theta}_{k},\quad\forall s\in\mathcal{S}_{k},\;k\in[K].$ (8)
The goal is to design a machine for which the stationary distribution of
$\\{M_{n}\\}$ corresponding to the parameter $\theta$ will concentrate on
states that belong to classes $\mathcal{S}_{k}$ for which
$(\theta-\hat{\theta}_{k})^{2}$ is the smallest. This goal is in general
easier to achieve when each set consists of a large number of states, which
corresponds to small $K$ (as the total number of states $S$ is fixed). On the
other hand, the quadratic risk such a machine can attain is obviously limited
by the number of different sets $K$, and in particular is $\Omega(1/K^{2})$,
as there must exist some $\theta\in[0,1]$ at distance $\Omega(1/K)$ from all
points $\hat{\theta}_{1},\ldots,\hat{\theta}_{K}$. Thus, the choice of $K$
should balance the tension between these two contrasting goals; specifically,
we will see that the choice $K=\Theta(\sqrt{S})$ is suitable to that end.
Since the estimator $\hat{\theta}$ depends on $\\{M_{n}\\}$ only through its
class, it is natural to define the _quantized process_
$\\{Q_{n}\\}_{n=1}^{\infty}$ obtained by the deterministic scalar mapping
$\displaystyle Q_{n}=\phi(M_{n}),\quad n=1,2,\ldots,$ (9)
where $\phi:[S]\to[K]$ maps each state to its set label (namely: $\phi(s)=k$
iff $s\in\mathcal{S}_{k}$). The process $\\{Q_{n}\\}$, as well as any process
on a finite alphabet, consists of _runs_ of the same letter. We can therefore
decompose it as $\\{S_{1},\tau_{1}\\},\\{S_{2},\tau_{2}\\},\ldots$, where
$S_{i}$ denotes the first letter in the $i$th run, and $\tau_{i}$ denotes its
length. We refer to the process $\\{S_{i}\\}_{i=1}^{\infty}$, supported on
$[K]$ as the _sampled process_ , and to $\\{\tau_{i}\\}_{i=1}^{\infty}$,
supported on $\mathbb{N}$, as the _holding times_ process. Note that both
$\\{S_{i}\\}$ and $\\{\tau_{i}\\}$ are deterministically determined by
$\\{Q_{n}\\}$ and hence, by the original process $\\{M_{n}\\}$. In general,
the sampled process can be complicated; however, in our construction, we
impose a particular structure that ensures that the sampled process
$\\{S_{n}\\}$ is also a Markov process. Specifically, for each $k\in[K]$ there
is an entry state $s_{k}\in\mathcal{S}_{k}$, such that all edges going out of
a state $\ell\notin\mathcal{S}_{k}$ to the set $\mathcal{S}_{k}$, go into the
entry state $s_{k}\in\mathcal{S}_{k}$. In other words, whenever $M_{n}$ enters
the set $\mathcal{S}_{k}$ from a different set, it does so through the
designated entry state only. This feature guarantees that at the beginning of
the $i$th run, the state of the original process $\\{M_{n}\\}$ is determined
by $S_{i}$, and consequently $\\{S_{i}\\}$ is indeed a Markov process itself.
Furthermore, conditioned on $S_{i}$, the holding time $\tau_{i}$ is
independent of the entire past. We denote the conditional distribution of
$\tau_{i}$ conditioned on the event $S_{i}=k$, by $P_{T_{k}}$. It will be
convenient to also define the random variables $T_{k}\sim P_{T_{k}}$, for
$k\in[K]$. In our construction, we further guarantee that any set
$\mathcal{S}_{k}$ is accessible from any other set $\mathcal{S}_{j}$, $j\neq
k$. This ensures that the underlying Markov process $\\{M_{n}\\}$ is ergodic,
and as a result, so is the sampled process $\\{S_{n}\\}$. We refer to the
structure described here, i.e., all sets are accessible from one another and
have entry states, as a _nested Markov structure_.
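The run decomposition used above is purely mechanical; a short sketch (a hypothetical helper, not the authors' code) that recovers the sampled process $\\{S_{i}\\}$ and the holding times $\\{\tau_{i}\\}$ from a realization of $\\{Q_{n}\\}$:

def decompose_runs(q):
    # Split a sequence (Q_1, Q_2, ...) into runs: return the first letter
    # of each run (the sampled process) and each run length (holding times).
    sampled, holding = [], []
    for symbol in q:
        if sampled and sampled[-1] == symbol:
            holding[-1] += 1          # the current run continues
        else:
            sampled.append(symbol)    # a new run begins
            holding.append(1)
    return sampled, holding

# Example: decompose_runs([1, 1, 2, 2, 2, 1, 3]) gives ([1, 2, 1, 3], [2, 3, 1, 1]).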
The ergodicity of $\\{M_{n}\\}$ immediately implies the ergodicity of the
quantized process $\\{Q_{n}\\}$, by (9). Denote by $\pi_{k}$ the stationary
probability of state $k$ for the process $\\{Q_{n}\\}$. We therefore have that
for a machine $f,\hat{\theta}$ of the type described above,
$\displaystyle
R_{\theta}=R_{\theta}(f,\hat{\theta})=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}\operatorname{\mathbb{E}}\left(\hat{\theta}(M_{i})-\theta\right)^{2}=\sum_{k=1}^{K}\pi_{k}\left(\hat{\theta}_{k}-\theta\right)^{2}.$
(10)
The next lemma determines the stationary distribution
$\\{\pi_{k}\\}_{k\in[K]}$ of the quantized process $\\{Q_{n}\\}$, in terms of
the stationary distribution $\\{\mu_{k}\\}_{k\in[K]}$ of the sampled process
$\\{S_{n}\\}$ and the expected holding times
$\\{\operatorname{\mathbb{E}}[T_{k}]\\}_{k\in[K]}$.
###### Lemma 2.1.
The unique stationary probability of state $k$ under the process $\\{Q_{n}\\}$
is
$\displaystyle\pi_{k}=\frac{\operatorname{\mathbb{E}}[T_{k}]\mu_{k}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}.$
(11)
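As a quick sanity check of (11) (our toy illustration, not part of the argument): if the sampled chain deterministically alternates between two sets, so that $\mu_{1}=\mu_{2}=1/2$, with constant holding times $T_{1}=2$ and $T_{2}=3$, then (11) predicts $\pi_{1}=\frac{2\cdot\frac{1}{2}}{2\cdot\frac{1}{2}+3\cdot\frac{1}{2}}=\frac{2}{5}$, which matches the empirical occupancy:

# The quantized process repeats the pattern 1,1,2,2,2, so the sampled
# chain alternates (mu_1 = mu_2 = 1/2) with holding times T_1 = 2, T_2 = 3.
q = [1, 1, 2, 2, 2] * 10_000
print(q.count(1) / len(q))   # 0.4, as predicted by (11)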
Combining Lemma 2.1 with (10), we have that the risk of such machine is
$\displaystyle R_{\theta}$
$\displaystyle=\sum_{k=1}^{K}\frac{\operatorname{\mathbb{E}}[T_{k}]\mu_{k}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\hat{\theta}_{k}-\theta\right)^{2}.$
(12)
It is immediately evident from (12) that the asymptotic risk attained by a
machine with the nested Markov structure defined above depends only on the
stationary distribution of the sampled process $\\{S_{n}\\}$ and the expected
holding times. Ideally, we would like to construct this machine such that two
things would happen for every $\theta$:
1. (i)
$\\{\mu_{k}\\}$ would be concentrated on states whose corresponding estimate
$\hat{\theta}_{k}$ is close to $\theta$;
2. (ii)
The expected holding times for these states will be at least as large as those
of other states.
We now describe how our machine is designed to achieve the desired behaviour
(i) of $\\{\mu_{k}\\}$. Later, we will see that the desired behavior (ii) of
$\\{\mathbb{E}[T_{k}]\\}$ follows more or less automatically.
First, we set our estimators to be (the denominator is set to $K+2$ rather
than $K$ for minor technical reasons, in order to avoid dealing with
probabilities on the boundary of the simplex in the analysis)
$\displaystyle\hat{\theta}_{k}=\frac{k}{K+2},\quad k\in[K].$ (13)
We then design our machine such that the sampled process $\\{S_{n}\\}$ is a
random walk, that moves either one state left or one state right from each
state (except for the extreme states $1$ and $K$ that behave slightly
differently). In particular, the $k$th state in $\\{S_{n}\\}$ is connected
only to states $k+1$ and $k-1$ for all $k\in\\{2,\ldots,K-1\\}$. The precise
diagram for the sampled process $\\{S_{n}\\}$ is shown in Figure 1, where the
transition probabilities $\\{p_{k},q_{k}=1-p_{k}\\}_{k\in[K]}$ will depend on
$\theta$ through the construction of the original machine generating the
original Markov chain $\\{M_{n}\\}$. We design the machine in a way that
guarantees that the random walk $\\{S_{n}\\}$ has a strong _drift_ towards the
state $k$ whose corresponding estimator is closest to $\theta$. In particular,
if $\theta>\frac{k+1}{K+2}$ then $p_{k}>1-\epsilon$ and conversely, if
$\theta<\frac{k}{K+2}$ then $p_{k}<\epsilon$, for some $\epsilon<1/2$ and all
states $k\in\\{2,\ldots,K-1\\}$.
Figure 1: A sampled chain of $K$ states (a birth-death chain: state $i$ moves to $i+1$ with probability $p_{i}$ and to $i-1$ with probability $q_{i}=1-p_{i}$, with $p_{1}=1$ and $q_{K}=1$).
The desired drift behavior is enabled by constructing the sets
$\mathcal{S}_{1},\ldots,\mathcal{S}_{K}$ as _mini-chains_ , where the $k$th
mini-chain consists of $N_{k}$ states, and is designed to solve the composite
binary hypothesis testing problem:
$\mathcal{H}_{0}:\left\\{\theta>\frac{k+1}{K+2}\right\\}$ vs.
$\mathcal{H}_{1}:\left\\{\theta<\frac{k}{K+2}\right\\}$. Each mini-chain
$\mathcal{S}_{k}$ is initialized in its entry state $s_{k}$, and eventually
moves to the entry state $s_{k+1}$ of mini-chain $\mathcal{S}_{k+1}$ if it
decided in favor of hypothesis $\mathcal{H}_{0}$, or to the entry state
$s_{k-1}$ of mini-chain $\mathcal{S}_{k-1}$ if it decided in favor of
hypothesis $\mathcal{H}_{1}$. The time it takes it to “make a decision” is the
random holding time with some distribution $P_{T_{k}}$. Note that if the error
probability of the machine is smaller than $\epsilon<1/2$ under both
hypotheses, we will indeed attain the desired drift behavior. Our goal now is
to design mini-chains that attain small error probabilities with as few states
as possible. To that end, we appeal to [Berg et al.(2020)Berg, Ordentlich, and
Shayevitz], where the authors defined the following machine (their machine
was designed to solve the _simple_ binary hypothesis test
$\mathcal{H}_{0}:\\{\theta=p\\}$ vs. $\mathcal{H}_{1}:\\{\theta=q\\}$, but as
our analysis demonstrates, the difference between the two problems is not
significant).
###### Definition 2.2.
$\operatorname{\mathsf{RUNS}}(N,p,q)$ is the machine with $N\geq 4$ states
depicted in Figure 2, designed to decide between the hypotheses
$\mathcal{H}_{0}:\\{\theta>p\\}$ vs. $\mathcal{H}_{1}:\\{\theta<q\\}$, for
some $0<q<p<1$. The machine is initialized at state $s$ and evolves according
to the sequence of input bits $X_{1},X_{2},\ldots$. If the machine observes a
run of $N-s$ ones before observing a run of $s-1$ zeros, it decides
$\mathcal{H}_{0}$ and exits right. Otherwise, it decides $\mathcal{H}_{1}$
and exits left. The initial state of the machine is $s=f(N,p,q)$, where
$\displaystyle f(N,p,q)\triangleq 2+\left\lfloor\frac{\log pq}{\log p(1-p)+\log q(1-q)}(N-3)\right\rceil,$ (14)
is an integer between $2$ and $N-1$. We denote the (worst case) error
probability of the machine by
$\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}(N,p,q)}=\max\left\\{p^{0}_{1},p^{1}_{0}\right\\}$,
where
$\displaystyle p^{0}_{1}$ $\displaystyle=\sup_{\theta<q}\;\Pr_{X_{1},X_{2},\ldots\stackrel{{\scriptstyle i.i.d.}}{{\sim}}\mathsf{Bern}(\theta)}\left(\operatorname{\mathsf{RUNS}}(N,p,q)\text{ decides }\mathcal{H}_{0}\right),$ (15)
$\displaystyle p^{1}_{0}$ $\displaystyle=\sup_{\theta>p}\;\Pr_{X_{1},X_{2},\ldots\stackrel{{\scriptstyle i.i.d.}}{{\sim}}\mathsf{Bern}(\theta)}\left(\operatorname{\mathsf{RUNS}}(N,p,q)\text{ decides }\mathcal{H}_{1}\right).$ (16)
Figure 2: $\operatorname{\mathsf{RUNS}}(N,p,q)$, the deterministic binary hypothesis testing machine ($N$ states, initialized at state $s$; transitions are driven by the input bits, which are $1$ with probability $\theta$ and $0$ with probability $1-\theta$; exiting left decides $\mathcal{H}_{1}$, exiting right decides $\mathcal{H}_{0}$).
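Operationally, the decision rule of Definition 2.2 is a race between a run of ones and a run of zeros; a minimal sketch (hypothetical code; the per-state bookkeeping of Figure 2 is collapsed into two run counters, which realizes the same decision rule):

import math

def initial_state(N, p, q):
    # Eq. (14); the ratio of logarithms is base-invariant, so natural
    # logs may be used, and round() realizes the round-to-nearest.
    r = math.log(p * q) / (math.log(p * (1 - p)) + math.log(q * (1 - q)))
    return 2 + round(r * (N - 3))

def runs_decide(N, p, q, bits):
    # Decide H0 if a run of N - s ones occurs before a run of s - 1
    # zeros; decide H1 in the opposite case; None if bits run out first.
    s = initial_state(N, p, q)
    ones = zeros = 0
    for x in bits:
        if x == 1:
            ones, zeros = ones + 1, 0
            if ones == N - s:
                return "H0"   # exit right
        else:
            zeros, ones = zeros + 1, 0
            if zeros == s - 1:
                return "H1"   # exit left
    return None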
The next lemma demonstrates that with $N=O(K)$ states, the machine
$\operatorname{\mathsf{RUNS}}(N,p,q)$ can decide whether $\theta>p$ or
$\theta<q=p-1/K$ with constant error probability $\epsilon<1/2$. Thus, the
desired drift can be attained by mini-chains of $O(K)$ states.
###### Lemma 2.3.
For any $\frac{2}{K}\leq p\leq 1-\frac{1}{K}$, $q=p-\frac{1}{K}$ and
$0<\epsilon<1/2$, let (logarithms in this paper are taken to base $2$)
$\displaystyle N=N(\epsilon,p,K)\triangleq 3+\left\lceil K\cdot
6\log\frac{2}{\epsilon\cdot\left(p-\frac{1}{K}\right)(1-p)}\right\rceil.$ (17)
Then
$\displaystyle\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}(N,p,q)}<\epsilon.$
(18)
We therefore take the $k$th mini-chain $\mathcal{S}_{k}$ as the machine
$\operatorname{\mathsf{RUNS}}(N_{k},p,q)$ with $q=\frac{k}{K+2}$,
$p=q+\frac{1}{K+2}=\frac{k+1}{K+2}$, and
$N_{k}=N(\epsilon,\frac{k+1}{K+2},K+2)$. The total number of states in our
machine is therefore (see calculation in the appendix)
$\displaystyle S$
$\displaystyle=\sum_{k=1}^{K}N_{k}=\sum_{k=1}^{K}N\left(\epsilon,\frac{k+1}{K+2},K+2\right)\leq
6(K+2)^{2}\log\left(\frac{2e}{\epsilon}\right),$ (19)
and the sampled chain $\\{S_{n}\\}$ indeed satisfies the desired drift
property: for all $2\leq k\leq K-1$ we have that if $\theta>\frac{k+1}{K+2}$
then $p_{k}>1-\epsilon$ whereas if $\theta<\frac{k}{K+2}$ then
$p_{k}<\epsilon$. Note that we did not quantify $p_{k}$ for the case where
$\theta\in\left[\frac{k}{K+2},\frac{k+1}{K+2}\right]$, but as will become
apparent below, it is indeed not needed for our analysis. Also note that
whenever the sampled chain reaches state $1$ it will immediately move back to
state $2$, and whenever it reaches state $K$ it will immediately move back to
state $K-1$ (that is, $p_{1}=1$ and $p_{K}=0$), but the holding times in those
states are nevertheless random (and may be very large if the underlying
$\theta$ is very close to $0$ or $1$, and dictated by the time it takes for
the corresponding $\operatorname{\mathsf{RUNS}}(N,p,q)$ mini-chains
$\mathcal{S}_{1}$ and $\mathcal{S}_{K}$ to reach a decision). The next lemma
shows that the drift property implies that if
$\theta\in\left[\frac{k}{K+2},\frac{k+1}{K+2}\right]$, then the stationary
probability $\mu_{j}$ of the $j$th state in the sampled chain decreases
exponentially with the “distance” $|j-k|$.
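To make the scaling in (19) concrete (our arithmetic, not the paper's): with $\epsilon=1/100$ the bound reads $S\leq 6(K+2)^{2}\log(200e)\approx 54.5\,(K+2)^{2}$, so, for instance, $K=100$ estimate levels cost roughly $5.7\times 10^{5}$ states; conversely, a budget of $S$ states supports $K+2\geq\sqrt{S/(6\log(2e/\epsilon))}$, i.e., $K=\Theta(\sqrt{S})$ levels, matching the choice anticipated at the beginning of this section.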
###### Lemma 2.4.
Assume that $\theta\in\left[\frac{k}{K+2},\frac{k+1}{K+2}\right]$. Then, the
stationary distribution of the sampled process $\\{S_{n}\\}$ induced by the
machine described above satisfies
$\displaystyle\mu_{k-i}\leq\mu_{k-1}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}$
(20)
for $1\leq i\leq k-1$, and
$\displaystyle\mu_{k+i}\leq\mu_{k+1}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}$
(21)
for $1\leq i\leq K-k$.
This shows that the stationary distribution of the sampled chain $\\{S_{n}\\}$
is indeed concentrated on the desired states. The next lemma deals with the
expected holding times, and lower bounds the ratio between the expected
holding time in the “correct state” $k$ and the expected holding time in any
other state of the sampled chain.
###### Lemma 2.5.
If $\theta<\frac{j}{K+2}$, then the expected holding time in state $i$
satisfies
$\displaystyle\operatorname{\mathbb{E}}[T_{j}]\geq(1-\epsilon)\operatorname{\mathbb{E}}[T_{i}]$
(22)
for all $i>j$. Similarly, if $\theta>\frac{j+1}{K+2}$, then the expected
holding time in state $i$ satisfies
$\displaystyle\operatorname{\mathbb{E}}[T_{j}]\geq(1-\epsilon)\operatorname{\mathbb{E}}[T_{i}]$
(23)
for all $i<j$.
We now combine (12) with Lemma 2.4 and Lemma 2.5 in order to upper bound the
asymptotic risk attained by our machine, and establish the claim
$R_{\theta}=O(1/S)$ for all $\theta\in(\frac{1}{K+2},\frac{K+1}{K+2})$. The
cases where $\theta\in[0,\frac{1}{K+2})$ and $\theta\in(\frac{K+1}{K+2},1]$
then follow easily from minor adjustments, and are treated in the appendix.
Assume that $\frac{k}{K+2}\leq\theta\leq\frac{k+1}{K+2}$ for some $k\in[K]$.
From (12), the asymptotic risk is then
$\displaystyle R_{\theta}$
$\displaystyle=\sum_{i=1}^{K}\frac{\operatorname{\mathbb{E}}[T_{i}]\mu_{i}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(24)
$\displaystyle=\sum_{i=1}^{k-1}\frac{\operatorname{\mathbb{E}}[T_{i}]\mu_{i}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{i}{K+2}-\theta\right)^{2}+\frac{\operatorname{\mathbb{E}}[T_{k}]\mu_{k}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{k}{K+2}-\theta\right)^{2}$
$\displaystyle+\sum_{i=k+1}^{K}\frac{\operatorname{\mathbb{E}}[T_{i}]\mu_{i}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(25)
$\displaystyle\leq\frac{1}{1-\epsilon}\sum_{i=1}^{k-1}\frac{\operatorname{\mathbb{E}}[T_{k-1}]\mu_{k-1}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\frac{\mu_{i}}{\mu_{k-1}}\left(\frac{i}{K+2}-\theta\right)^{2}+\frac{1}{(K+2)^{2}}$
$\displaystyle+\frac{1}{1-\epsilon}\sum_{i=k+1}^{K}\frac{\operatorname{\mathbb{E}}[T_{k+1}]\mu_{k+1}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\frac{\mu_{i}}{\mu_{k+1}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(26)
$\displaystyle\leq\frac{1}{1-\epsilon}\sum_{i=1}^{k-1}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\left(\frac{i+1}{K+2}\right)^{2}+\frac{1}{(K+2)^{2}}+\frac{1}{1-\epsilon}\sum_{i=1}^{K-k}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\left(\frac{i+1}{K+2}\right)^{2}$
(27)
$\displaystyle\leq\frac{1}{(K+2)^{2}}\cdot\frac{1}{1-\epsilon}\left(2\cdot\sum_{i=1}^{\infty}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}(i+1)^{2}+1\right)$
(28)
$\displaystyle\leq\frac{6\log\left(\frac{2e}{\epsilon}\right)}{S}\left(\frac{2\epsilon}{(1-2\epsilon)^{3}}+\frac{8(1-\epsilon)}{(1-2\epsilon)^{2}}+\frac{1}{1-\epsilon}\right),$
(29)
where (26) follows from Lemma 2.5, (27) follows from Lemma 2.4 and since
$\frac{\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}{\sum_{k=1}^{K}\operatorname{\mathbb{E}}[T_{k}]\mu_{k}}\leq
1$, (28) is since we only add positive terms, and (29) is due to the identity
$\sum_{i=0}^{\infty}q^{i}(i+2)^{2}=\frac{q(1+q)+4(1-q)}{(1-q)^{3}}$ and by
substituting (19). Finally, substituting $\epsilon=1/100$ into (29) gives
$R_{\theta}<\frac{600}{S}$.
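For completeness, the entire construction can be simulated end to end. The sketch below (hypothetical code, not the authors') glues the run-counter mini-chains into the birth-death sampled chain of Figure 1, with the boundary behavior $p_{1}=1$, $p_{K}=0$. One caveat: the mini-chain lengths prescribed by (17) make interior holding times astronomically long (that is what pins the chain to the correct state asymptotically), so the demo uses a short fixed mini-chain length, which corresponds to a large effective $\epsilon$:

import math, random

def finite_memory_estimates(bits, K, chain_len=24):
    # Mini-chain k tests H0: theta > (k+1)/(K+2) vs H1: theta < k/(K+2);
    # deciding H0 moves the sampled chain right, H1 moves it left.
    def entry(k):
        q, p = k / (K + 2), (k + 1) / (K + 2)
        r = math.log(p * q) / (math.log(p * (1 - p)) + math.log(q * (1 - q)))
        return 2 + round(r * (chain_len - 3))           # eq. (14)
    k = (K + 1) // 2                                    # arbitrary start
    s = entry(k)
    ones = zeros = 0
    out = []
    for x in bits:
        out.append(k / (K + 2))                         # hat(theta)_k, eq. (13)
        if x == 1:
            ones, zeros = ones + 1, 0
        else:
            zeros, ones = zeros + 1, 0
        decided = "H0" if ones >= chain_len - s else ("H1" if zeros >= s - 1 else None)
        if decided:
            if k == 1:
                k = 2                                   # p_1 = 1
            elif k == K:
                k = K - 1                               # p_K = 0
            else:
                k = k + 1 if decided == "H0" else k - 1
            s = entry(k)
            ones = zeros = 0
    return out

bits = [1 if random.random() < 0.37 else 0 for _ in range(200_000)]
est = finite_memory_estimates(bits, K=20)
print(sum(est[-100_000:]) / 100_000)   # the time average drifts toward 0.37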
## 3 Proofs of Technical Claims
The following simple lemma will be useful for the proofs of Lemma 2.1 and
Lemma 2.4.
###### Lemma 3.1.
Let $\\{X_{n}\\}$ be a stationary process over some alphabet $\mathcal{S}$.
Then for any disjoint partition
$\mathcal{C}\cup\mathcal{C}^{\prime}=\mathcal{S}$, it holds that
$\displaystyle\Pr(X_{n}\in\mathcal{C},X_{n+1}\in\mathcal{C}^{\prime})=\Pr(X_{n}\in\mathcal{C}^{\prime},X_{n+1}\in\mathcal{C}).$
(30)
###### Proof 3.2.
For any disjoint partition $\mathcal{C}\cup\mathcal{C}^{\prime}=\mathcal{S}$
we have
$\displaystyle\Pr(X_{n+1}\in\mathcal{C}^{\prime})=\Pr(X_{n}\in\mathcal{C}^{\prime})=\Pr(X_{n}\in\mathcal{C}^{\prime},X_{n+1}\in\mathcal{C}^{\prime})+\Pr(X_{n}\in\mathcal{C}^{\prime},X_{n+1}\in\mathcal{C}).$
(31)
Subtracting $\Pr(X_{n}\in\mathcal{C}^{\prime},X_{n+1}\in\mathcal{C}^{\prime})$
from both sides, establishes the claim.
###### Proof 3.3 (of Lemma 2.1).
The proof is very similar to the derivation of the invariant
measure of a continuous-time Markov chain. Let $\\{M^{\prime}_{n}\\}$ be the
process defined as follows:
1. 1.
Draw $M^{\prime}_{0}$ according to the stationary distribution of $M_{n}$.
2. 2.
For $n>0$, draw $M^{\prime}_{n+1}|M^{\prime}_{n}\sim W$, where $W$ is the
Markov kernel of our chain.
3. 3.
For $n<0$, draw $M^{\prime}_{n-1}|M^{\prime}_{n}\sim W^{\prime}$, where
$W^{\prime}$ is the reverse Markov kernel corresponding to the stationary
distribution.
Clearly, $\\{M^{\prime}_{n}\\}$ is a stationary ergodic process with marginal
distribution equal to the stationary distribution of $\\{M_{n}\\}$. Let
$Q^{\prime}_{n}=\phi(M^{\prime}_{n})$ where $\phi$ is the mapping to the set
label (similar to $Q_{n}$). Clearly, $\\{Q^{\prime}_{n}\\}$ is a stationary
process as well, and $\\{Q_{n}\\}$ converges to the marginal distribution of
$\\{Q^{\prime}_{n}\\}$. Recall that $\\{Q^{\prime}_{n}\\}$ is composed of runs
of consecutive letters of $[K]$, and that the length of each run is
independent of all past runs. The run-length random variables do depend on the
letter $k\in[K]$ of the run, and we denote by $T_{k}\sim P_{T_{k}}$ a generic
random variable corresponding to a run of the letter $k$. Furthermore, we
denote by $A_{k}(t)$ the event that a new run in $\\{Q^{\prime}_{n}\\}$ of
letters $k$ started at time $t$, and let the integer random variable
$Z_{t}\geq 1$ denote the number of symbols left in the current run at time $t$
(including the one at time $t$). If $Q^{\prime}_{0}=k$, this means that a run
of letters $k$ started at some time $-t$, and its corresponding $Z_{-t}$ was
greater than $t$. We can therefore write
$\displaystyle\pi_{k}$ $\displaystyle=\Pr\left(Q^{\prime}_{0}=k\right)$ (32)
$\displaystyle=\sum_{t=0}^{\infty}\Pr\left(A_{k}(-t),Z_{-t}>t\right)$ (33)
$\displaystyle=\sum_{t=0}^{\infty}\Pr\left(A_{k}(-t)\right)\Pr\left(T_{k}>t\right)$
(34)
$\displaystyle=\Pr\left(A_{k}(0)\right)\sum_{t=0}^{\infty}\Pr\left(T_{k}>t\right)$
(35)
$\displaystyle=\Pr\left(A_{k}(0)\right)\operatorname{\mathbb{E}}\left(T_{k}\right),$
(36)
where (34) follows since given that $A_{k}(t)$ occurred, $Z_{t}$ is
independent of everything that happened before this run began and has the
distribution $P_{T_{k}}$, (35) is from stationarity, and (36) is due to the
identity
$\sum_{t=0}^{\infty}\Pr\left(T_{k}>t\right)=\operatorname{\mathbb{E}}\left(T_{k}\right)$
for a non-negative random variable. Thus, from stationarity, for each $t$ we
have
$\displaystyle\Pr\left(A_{k}(t)\right)=\Pr\left(A_{k}(0)\right)=\frac{\pi_{k}}{\operatorname{\mathbb{E}}\left(T_{k}\right)}.$
(37)
Now, denote by $B_{k}(t)$ the event that a run of letters $k$ ended at time
$t$. Note that since $\\{Q^{\prime}_{n}\\}$ is stationary, Lemma 3.1 suggests
that the probability it enters a state $k$ is equal to the probability it
leaves a state $k$ at any given time, namely
$\displaystyle\Pr(B_{k}(t))=\Pr(A_{k}(t))=\frac{\pi_{k}}{\operatorname{\mathbb{E}}\left(T_{k}\right)},\quad k\in[K].$ (38)
Now consider the sampled Markov chain $\\{S_{n}\\}$, and denote its stationary
distribution for state $j$ by $\mu_{j}$, and its transition probability from
state $j$ to state $k$ by $P_{jk}$. We have
$\displaystyle\Pr\left(A_{k}(t+1)\right)$ $\displaystyle=\sum_{j\neq
k}\Pr\left(B_{j}(t)\right)P_{jk}.$ (39)
Substituting (37) into (39), we have
$\displaystyle\frac{\pi_{k}}{\operatorname{\mathbb{E}}\left(T_{k}\right)}=\sum_{j\neq
k}\frac{\pi_{j}}{\operatorname{\mathbb{E}}\left(T_{j}\right)}P_{jk}.$ (40)
Thus, the stationary distribution $\\{\pi_{k}\\}_{k\in[K]}$ of $\\{Q_{n}\\}$
must satisfy (40). Since $\\{\mu_{k}\\}_{k\in[K]}$ is the unique stationary
distribution of $\\{S_{n}\\}$, we have that
$\displaystyle\pi_{j}^{*}=\frac{\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}{\sum_{k=1}^{K}\operatorname{\mathbb{E}}[T_{k}]\mu_{k}},\quad j\in[K],$ (41)
is the unique distribution satisfying (40), and is consequently the stationary
distribution of $\\{Q_{n}\\}$, as claimed.
###### Proof 3.4 (of Lemma 2.4).
By construction, $\\{S_{n}\\}$ follows the transition
probability law plotted in Figure 1. For all $i\in\\{2,\ldots,K-2\\}$, we have
from Lemma 3.1 that by choosing the partition
$\mathcal{C}=\\{1,\ldots,i-1\\},\mathcal{C}^{\prime}=\\{i,\ldots,K\\}$ and
noting from Figure 1 that only adjacent states are connected,
$\mu_{i-1}p_{i-1}=\mu_{i}q_{i}$, or equivalently
$\displaystyle\mu_{i-1}$ $\displaystyle=\frac{q_{i}}{p_{i-1}}\mu_{i}.$ (42)
By construction of the mini-chains $\mathcal{S}_{i}$ and by Lemma 2.3, we have
that $q_{i}<\epsilon$ and $p_{i}>1-\epsilon$ for $i<k$. Thus, repeated
application of (42) yields
$\displaystyle\mu_{k-i}=\prod_{j=1}^{i}\frac{q_{k-j+1}}{p_{k-j}}\mu_{k}\leq\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\mu_{k-1},$
(43)
for $2\leq i\leq k-1$. Similarly, since $p_{i}<\epsilon$ and
$q_{i}>1-\epsilon$ for $i>k$, we have
$\displaystyle\mu_{k+i}=\prod_{j=1}^{i}\frac{p_{k+j-1}}{q_{k+j}}\mu_{k}\leq\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\mu_{k+1},$
(44)
for $1\leq i\leq K-1-k$. For the extreme states $1$ and $K$, by appealing to
Lemma 3.1 and recalling that $p_{1}=1$ and $q_{K}=1$, we have
$\displaystyle\mu_{1}=q_{2}\mu_{2}\leq\epsilon\mu_{2}<\frac{\epsilon}{1-\epsilon}\cdot\mu_{2},$
(45)
and
$\displaystyle\mu_{K}=p_{K-1}\mu_{K-1}\leq\epsilon\mu_{K-1}<\frac{\epsilon}{1-\epsilon}\cdot\mu_{K-1}.$
(46)
###### Proof 3.5 (of Lemma 2.5).
Fix $\theta$, and recall that each state $i$ in the sampled chain corresponds
to a
$\operatorname{\mathsf{RUNS}}\left(N_{i},\frac{i+1}{K+2},\frac{i}{K+2}\right)$
mini-chain in the original chain, where
$N_{i}=N(\epsilon,\frac{i+1}{K+2},K+2)$
is as defined in (17). Restricting our attention to that mini-chain, denote by
$s_{i}=f\left(N_{i},\frac{i+1}{K+2},\frac{i}{K+2}\right)$ its initial state,
and denote by $T_{i}^{1}$ the first time a run of $N_{i}-s_{i}$ consecutive
ones is observed, and $T_{i}^{0}$ as the first time a run of $s_{i}-1$
consecutive zeros is observed. We exit the mini-chain when either a run of
$N_{i}-s_{i}$ consecutive ones or a run of $s_{i}-1$ consecutive zeros is
observed, so we have that the exit time $T_{i}$ satisfies $T_{i}\leq
T_{i}^{1}$ and $T_{i}\leq T_{i}^{0}$, which implies
$\displaystyle\operatorname{\mathbb{E}}[T_{i}]\leq\operatorname{\mathbb{E}}[T_{i}^{1}],$
(47)
$\displaystyle\operatorname{\mathbb{E}}[T_{i}]\leq\operatorname{\mathbb{E}}[T_{i}^{0}].$
(48)
Next, we observe that $i\mapsto s_{i}$ is monotonically non-increasing and
$i\mapsto N_{i}-s_{i}$ is monotonically non-decreasing. These facts can be
verified from the formulas (17) and (14) for $N(\epsilon,\frac{i}{K+2},K+2)$
and $f\left(N_{i},\frac{i}{K+2},\frac{i+1}{K+2}\right)$, respectively. Thus
the expected time to observe a run of $N_{i}-s_{i}$ consecutive ones is also
non-decreasing and we have
$\displaystyle\operatorname{\mathbb{E}}\left[T_{1}^{1}\right]\leq\operatorname{\mathbb{E}}\left[T_{2}^{1}\right]\leq\ldots\leq\operatorname{\mathbb{E}}\left[T_{j}^{1}\right],$
(49)
and similarly
$\displaystyle\operatorname{\mathbb{E}}\left[T_{K}^{0}\right]\leq\operatorname{\mathbb{E}}\left[T_{K-1}^{0}\right]\leq\ldots\leq\operatorname{\mathbb{E}}\left[T_{j}^{0}\right].$ (50)
(50)
Let $\\{W_{n}^{j}(\theta)\\}$ be a random walk in
$\operatorname{\mathsf{RUNS}}\left(N_{j},\frac{j+1}{K+2},\frac{j}{K+2}\right)$
under $\theta$, and let $W_{n}^{j}(\theta)\rightarrow 1$ (resp.
$W_{n}^{j}(\theta)\rightarrow 0$) denote the event that
$\\{W_{n}^{j}(\theta)\\}$ exits right (resp. exits left). We have
$\displaystyle
T_{j}^{1}=T_{j}+\left(T_{j}^{1}-T_{j}\right)\operatorname{\mathds{1}}(W_{n}^{j}(\theta)\rightarrow
0).$ (51)
By taking the expectation of both sides, we have
$\displaystyle\operatorname{\mathbb{E}}\left[T_{j}^{1}\right]$
$\displaystyle=\operatorname{\mathbb{E}}\left[T_{j}\right]+\operatorname{\mathbb{E}}\left[\left(T_{j}^{1}-T_{j}\right)\operatorname{\mathds{1}}(W_{n}^{j}(\theta)\rightarrow
0)\right]$ (52)
$\displaystyle=\operatorname{\mathbb{E}}\left[T_{j}\right]+\Pr(W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{1}-T_{j}|W_{n}^{j}(\theta)\rightarrow
0\right]$ (53)
$\displaystyle=\operatorname{\mathbb{E}}\left[T_{j}\right]+\Pr(W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{1}\right],$ (54)
due to
$\displaystyle\operatorname{\mathbb{E}}\left[T_{j}^{1}-T_{j}|W_{n}^{j}(\theta)\rightarrow
0\right]$
$\displaystyle=\sum_{t=1}^{\infty}\Pr(T_{j}=t|W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{1}-T_{j}|T_{j}=t,W_{n}^{j}(\theta)\rightarrow
0\right]$ (55)
$\displaystyle=\sum_{t=1}^{\infty}\Pr(T_{j}=t|W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{1}-t|T_{j}^{1}>t,W_{t}^{j}(\theta)=1\right]$
(56)
$\displaystyle=\sum_{t=1}^{\infty}\Pr(T_{j}=t|W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{1}\right]$ (57)
$\displaystyle=\operatorname{\mathbb{E}}\left[T_{j}^{1}\right],$ (58)
where (56) is since no run of $N_{j}-s_{j}$ ones was observed until time $t$
and the last bit was $X_{t}=0$, and (57) follows from the memoryless property
of the chain. Thus,
$\displaystyle\operatorname{\mathbb{E}}\left[T_{j}\right]$
$\displaystyle=\Pr(W_{n}^{j}(\theta)\rightarrow
1)\operatorname{\mathbb{E}}\left[T_{j}^{1}\right],$ (59)
$\displaystyle\operatorname{\mathbb{E}}\left[T_{j}\right]$
$\displaystyle=\Pr(W_{n}^{j}(\theta)\rightarrow
0)\operatorname{\mathbb{E}}\left[T_{j}^{0}\right].$ (60)
Equation (22) now follows by recalling that $\Pr(W_{n}^{j}(\theta)\rightarrow
0)\geq 1-\epsilon$ for $\theta<\frac{j}{K+2}$ and by appealing to (50) and
(48). Equation (23) is proven similarly by appealing to (49).
This work was supported by the ISF under Grants 1791/17 and 1495/18.
## References
* [Acharya et al.(2018)Acharya, Canonne, and Tyagi] Jayadev Acharya, Clément L Canonne, and Himanshu Tyagi. Distributed simulation and distributed inference. _arXiv preprint arXiv:1804.06952_ , 2018.
* [Acharya et al.(2020)Acharya, Canonne, and Tyagi] Jayadev Acharya, Clément L Canonne, and Himanshu Tyagi. Inference under information constraints II: Communication constraints and shared randomness. _IEEE Transactions on Information Theory_ , 66(12):7856–7877, 2020.
* [Barnes et al.(2018)Barnes, Han, and Özgür] Leighton Pate Barnes, Yanjun Han, and Ayfer Özgür. A geometric characterization of Fisher information from quantized samples with applications to distributed statistical estimation. In _2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , pages 16–23. IEEE, 2018.
* [Berg et al.(2020)Berg, Ordentlich, and Shayevitz] Tomer Berg, Or Ordentlich, and Ofer Shayevitz. Binary hypothesis testing with deterministic finite-memory decision rules. In _2020 IEEE International Symposium on Information Theory (ISIT)_ , pages 1259–1264. IEEE, 2020.
* [Braverman et al.(2016)Braverman, Garg, Ma, Nguyen, and Woodruff] Mark Braverman, Ankit Garg, Tengyu Ma, Huy L Nguyen, and David P Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In _Proceedings of the forty-eighth annual ACM symposium on Theory of Computing_ , pages 1011–1020. ACM, 2016.
* [Chandrasekaran(1970)] Balakrishnan Chandrasekaran. Finite-memory hypothesis testing–a critique (corresp.). _IEEE Transactions on Information Theory_ , 16(4):494–496, 1970.
* [Chien et al.(2010)Chien, Ligett, and McGregor] Steve Chien, Katrina Ligett, and Andrew McGregor. Space-efficient estimation of robust statistics and distribution testing. In _ICS_ , pages 251–265. Citeseer, 2010.
* [Cover(1969)] Thomas M Cover. Hypothesis testing with finite statistics. _The Annals of Mathematical Statistics_ , 40(3):828–835, 1969.
* [Dagan and Shamir(2018)] Yuval Dagan and Ohad Shamir. Detecting correlations with little memory and communication. In _Conference On Learning Theory_ , pages 1145–1198, 2018.
* [Dagan et al.(2019)Dagan, Kur, and Shamir] Yuval Dagan, Gil Kur, and Ohad Shamir. Space lower bounds for linear prediction in the streaming model. In _Conference on Learning Theory_ , pages 929–954, 2019.
* [Dar and Feder(2014)] Ronen Dar and Meir Feder. Finite-memory prediction as well as the empirical mean. _IEEE transactions on information theory_ , 60(8):4526–4543, 2014.
* [Garg et al.(2014)Garg, Ma, and Nguyen] Ankit Garg, Tengyu Ma, and Huy L Nguyen. On communication cost of distributed statistical estimation and dimensionality. _arXiv preprint arXiv:1405.1665_ , 2014.
* [Hadar and Shayevitz(2019)] Uri Hadar and Ofer Shayevitz. Distributed estimation of gaussian correlations. _IEEE Transactions on Information Theory_ , 65(9):5323–5338, 2019.
* [Hadar et al.(2019)Hadar, Liu, Polyanskiy, and Shayevitz] Uri Hadar, Jingbo Liu, Yury Polyanskiy, and Ofer Shayevitz. Communication complexity of estimating correlations. In _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_ , pages 792–803, 2019.
* [Han et al.(2018a)Han, Özgür, and Weissman] Yanjun Han, Ayfer Özgür, and Tsachy Weissman. Geometric lower bounds for distributed parameter estimation under communication constraints. In _Conference On Learning Theory_ , pages 3163–3188. PMLR, 2018a.
* [Han et al.(2018b)Han, Ozgur, and Weissman] Yanjun Han, Ayfer Ozgur, and Tsachy Weissman. Geometric lower bounds for distributed parameter estimation under communication constraints. _Proceedings of Machine Learning Research_ , 75, 2018b.
* [Hellman(1974)] Martin E Hellman. Finite-memory algorithms for estimating the mean of a gaussian distribution (corresp.). _IEEE Transactions on Information Theory_ , 20(3):382–384, 1974.
* [Hellman and Cover(1970)] Martin E Hellman and Thomas M Cover. Learning with finite memory. _The Annals of Mathematical Statistics_ , pages 765–782, 1970.
* [Hellman and Cover(1971)] Martin E Hellman and Thomas M Cover. On memory saved by randomization. _The Annals of Mathematical Statistics_ , 42(3):1075–1078, 1971.
* [Ingber and Feder(2006)] Amir Ingber and Meir Feder. Prediction of individual sequences using universal deterministic finite state machines. In _2006 IEEE International Symposium on Information Theory_ , pages 421–425. IEEE, 2006.
* [Jain and Tyagi(2018)] Ayush Jain and Himanshu Tyagi. Effective memory shrinkage in estimation. In _2018 IEEE International Symposium on Information Theory (ISIT)_ , pages 1071–1075. IEEE, 2018.
* [Jordan et al.(2018)Jordan, Lee, and Yang] Michael I Jordan, Jason D Lee, and Yun Yang. Communication-efficient distributed statistical inference. _Journal of the American Statistical Association_ , 2018.
* [Kontorovich(2012)] Leonid (Aryeh) Kontorovich. Statistical estimation with bounded memory. _Statistics and Computing_ , 22(5):1155–1164, 2012.
* [Leighton and Rivest(1986)] F. Thomson Leighton and Ronald Rivest. Estimating a probability using finite memory. _IEEE Transactions on Information Theory_ , 32(6):733–742, 1986.
* [McGregor et al.(2012)McGregor, Pavan, Tirthapura, and Woodruff] Andrew McGregor, A Pavan, Srikanta Tirthapura, and David Woodruff. Space-efficient estimation of statistics over sub-sampled streams. In _Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems_ , pages 273–282, 2012.
* [Meron and Feder(2004)] Eado Meron and Meir Feder. Finite-memory universal prediction of individual sequences. _IEEE Transactions on Information Theory_ , 50(7):1506–1523, 2004.
* [Raz(2018)] Ran Raz. Fast learning requires good memory: A time-space lower bound for parity learning. _Journal of the ACM (JACM)_ , 66(1):3, 2018.
* [Robbins(1956)] Herbert Robbins. A sequential decision problem with a finite memory. _Proceedings of the National Academy of Sciences of the United States of America_ , 42(12):920, 1956.
* [Roberts and Tooley(1970)] Richard Roberts and J Tooley. Estimation with finite memory. _IEEE Transactions on Information Theory_ , 16(6):685–691, 1970.
* [Samaniego(1973)] Francisco J. Samaniego. Estimating a binomial parameter with finite memory. _IEEE Transactions on Information Theory_ , 19(5):636–643, 1973.
* [Sharan et al.(2019)Sharan, Sidford, and Valiant] Vatsal Sharan, Aaron Sidford, and Gregory Valiant. Memory-sample tradeoffs for linear regression with small error. In _Symposium on Theory of Computing (STOC)_ , 2019.
* [Steinhardt and Duchi(2015)] Jacob Steinhardt and John Duchi. Minimax rates for memory-bounded sparse linear regression. In _Conference on Learning Theory_ , pages 1564–1587, 2015.
* [Steinhardt et al.(2016)Steinhardt, Valiant, and Wager] Jacob Steinhardt, Gregory Valiant, and Stefan Wager. Memory, communication, and statistical queries. In _Conference on Learning Theory_ , pages 1490–1516, 2016.
* [Von Neumann(1951)] John von Neumann. Various techniques used in connection with random digits. _Appl. Math Ser._ , 12:36–38, 1951.
* [Xu and Raginsky(2017)] A. Xu and M. Raginsky. Information-theoretic lower bounds on Bayes risk in decentralized estimation. _IEEE Transactions on Information Theory_ , 63(3):1580–1600, March 2017.
* [Zhang et al.(2013)Zhang, Duchi, Jordan, and Wainwright] Yuchen Zhang, John Duchi, Michael I Jordan, and Martin J Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In _Advances in Neural Information Processing Systems_ , pages 2328–2336, 2013.
## Appendix A Bound on the error probability
To establish Lemma 2.3, we need to prove two supporting lemmas. First, we show
that $p^{1}_{0}$ is achieved by $\theta=q$, and $p^{0}_{1}$ is achieved by
$\theta=p$. Denote the probability of deciding $\mathcal{H}_{0}$ under
$\theta$ as $p_{0}(\theta)$, and the probability of deciding $\mathcal{H}_{1}$
under $\theta$ as $p_{1}(\theta)$.
###### Lemma A.1.
For $\operatorname{\mathsf{RUNS}}(N,p,q)$, if $\theta>p$, then
$p_{1}(\theta)\leq p_{1}(p)$. Similarly, if $\theta<q$, then
$p_{0}(\theta)\leq p_{0}(q)$.
###### Proof A.2.
We prove the first part of the claim, and the second follows symmetrically. To
that end, we use a coupling argument. Denote by $\\{W_{n}^{p}\\}$ the random
walk on $\operatorname{\mathsf{RUNS}}(N,p,q)$ under $p$ and
$\\{W_{n}^{\theta}\\}$ as the random walk on
$\operatorname{\mathsf{RUNS}}(N,p,q)$ under $\theta$, where here we assume the
extreme states $1$ and $N$ are absorbing, such that once the random walk
reaches one of these states, it stays there forever. We couple the two
processes using the following joint distribution for
$\left(\\{W_{n}^{p}\\},\\{W_{n}^{\theta}\\}\right)$: Let $\\{W_{n}^{p}\\}$ be
the standard walk on the chain under the $\mathsf{Bern}(p)$ sequence. For any
$n$, if $W_{n}^{p}$ goes one step to the right, $W_{n}^{\theta}$ goes one step
to the right as well. If $W_{n}^{p}$ goes one step to the left, we flip an
independent $\mathsf{Bern}\left(\frac{\theta-p}{1-p}\right)$ coin, and
$W_{n}^{\theta}$ goes one step to the right upon seeing $1$ or one step to the
left upon seeing $0$. It is easy to see that the marginal distribution under
$\\{W_{n}^{\theta}\\}$ corresponds to the chain under the
$\mathsf{Bern}(\theta)$ distribution, and this is therefore a valid coupling.
Our claim now immediately follows from the observation that under this
coupling, $W_{n}^{\theta}$ is never to the left of $W_{n}^{p}$.
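The coupling is also easy to check numerically; a toy sketch (hypothetical code; for brevity the walks are run as free $\pm 1$ walks, ignoring the absorbing boundary states, which suffices to exhibit the domination):

import random

def coupled_step(w_p, w_theta, p, theta, rng):
    # One step of the coupling from the proof above: the theta-walk copies
    # every right-move of the p-walk, and when the p-walk moves left it
    # still moves right with probability (theta - p)/(1 - p), so that its
    # marginal right-move probability is p + (1 - p)(theta - p)/(1 - p) = theta.
    if rng.random() < p:
        return w_p + 1, w_theta + 1
    if rng.random() < (theta - p) / (1 - p):
        return w_p - 1, w_theta + 1
    return w_p - 1, w_theta - 1

rng = random.Random(1)
w_p = w_t = 0
for _ in range(10_000):
    w_p, w_t = coupled_step(w_p, w_t, p=0.4, theta=0.6, rng=rng)
    assert w_t >= w_p   # the theta-walk is never to the left of the p-walk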
Second, we prove the following lemma, which bounds the error probability of
$\operatorname{\mathsf{RUNS}}(N,p,q)$ when the hypotheses are $\frac{1}{K}$
apart.
###### Lemma A.3.
For any $\frac{2}{K}\leq p\leq 1-\frac{1}{K}$, $q=p-\frac{1}{K}$ and $N\geq
3+\left\lceil K\cdot 6\log\frac{2}{p_{\min}}\right\rceil$, it holds that
$\displaystyle\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}(N,p,q)}\leq\frac{2}{p_{\min}}\cdot\exp_{2}\left\\{-\frac{\left(1-\frac{1}{K\cdot
p}\right)\left(\frac{1}{K}H_{b}(p)-\frac{1}{K^{2}}\log
p\right)\left(N-3\right)}{\frac{1}{K}\left(1-2\left(p-\frac{1}{K}\right)\right)-2\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\log
p(1-p)}\right\\},$ (61)
where $p_{\min}=\min\\{p(1-p),q(1-q)\\}$ and $H_{b}(p)\triangleq-p\log
p-(1-p)\log(1-p)$ is the binary entropy of $p$.
###### Proof A.4.
In [Berg et al.(2020)Berg, Ordentlich, and Shayevitz], the authors showed that
for initial state $s$, we have
$\displaystyle p_{1}(p)$
$\displaystyle=\frac{1-p^{N-s}}{1+\frac{p^{N-s-1}}{(1-p)^{s-2}}-p^{N-s-1}}$
(62)
$\displaystyle\leq\frac{(1-p)^{s-2}}{p^{N-s-1}}\cdot\frac{1}{1-(1-p)^{s-2}},$
(63)
and
$\displaystyle p_{0}(q)$
$\displaystyle=\frac{1-(1-q)^{s-1}}{1+\frac{(1-q)^{s-2}}{q^{N-s-1}}-(1-q)^{s-2}}$
(64)
$\displaystyle\leq\frac{q^{N-s-1}}{(1-q)^{s-2}}\cdot\frac{1}{1-q^{N-s-1}}.$
(65)
Choosing $s=s^{*}$, where $s^{*}$ is
$\displaystyle 2+\frac{\log pq}{\log p(1-p)+\log q(1-q)}(N-3),$ (66)
we get
$\displaystyle\frac{(1-p)^{s^{*}-2}}{p^{N-s^{*}-1}}=\frac{q^{N-s^{*}-1}}{(1-q)^{s^{*}-2}}=2^{-r(p,q)(N-3)},$
(67)
where
$\displaystyle r(p,q)\triangleq\frac{\log p\log(1-q)-\log q\log(1-p)}{\log
p(1-p)+\log q(1-q)}.$ (68)
We therefore have, for $s=s^{*}$,
$\displaystyle\max\left\\{p_{0}(q),p_{1}(p)\right\\}\leq
2^{-r(p,q)(N-3)}\cdot\max\left\\{\frac{1}{1-(1-p)^{s^{*}-2}},\frac{1}{1-q^{N-s^{*}-1}}\right\\}.$
(69)
Recall that $s$ is a state in the chain so it must be an integer, whereas
$s^{*}$ may not be. Thus, we need to round $s^{*}$ either up or down, in which
case, both ratios in (67) $\frac{(1-p)^{s^{*}-2}}{p^{N-s^{*}-1}}$, and
$\frac{q^{N-s^{*}-1}}{(1-q)^{s^{*}-2}}$, will increase by at most
$\frac{1}{p_{\min}}$, where $p_{\min}=\min\\{p(1-p),q(1-q)\\}$. Furthermore,
for our choice of $N$, $\frac{2}{K}\leq p\leq 1-\frac{1}{K}$ and
q=p-\frac{1}{K}$, we have that $3<s^{*}<N-2$ and the rightmost part of (69) is
always upper bounded by $2$. Combining this with Lemma A.1, we therefore get
the bound
$\displaystyle\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}(N,p,q)}=\max\left\\{p^{0}_{1},p^{1}_{0}\right\\}=\max\left\\{p_{0}(q),p_{1}(p)\right\\}\leq\frac{2}{p_{\min}}\cdot
2^{-r(p,q)(N-3)}.$ (70)
Setting $p-q=\delta>0$, we have
$\displaystyle r\left(p,p-\delta\right)$ $\displaystyle=\frac{\log
p\log(1-p+\delta)-\log(p-\delta)\log(1-p)}{\log
p(1-p)+\log(p-\delta)(1-p+\delta)}$ (71) $\displaystyle=\frac{\log
p\left(\log(1-p)+\log\left(1+\frac{\delta}{1-p}\right)\right)-\left(\log
p+\log\left(1-\frac{\delta}{p}\right)\right)\log(1-p)}{\log
p(1-p)+\log(1-p)+\log\left(1+\frac{\delta}{1-p}\right)+\log
p+\log\left(1-\frac{\delta}{p}\right)}$ (72)
$\displaystyle\geq\frac{\frac{\delta}{1-p+\delta}\log
p+\frac{\delta}{p}\log(1-p)}{2\log
p(1-p)+\frac{\delta}{1-p+\delta}-\frac{\delta}{p-\delta}}$ (73)
$\displaystyle=-\frac{p-\delta}{p}\cdot\frac{\delta p\log
p+\delta(1-p+\delta)\log(1-p)}{2(p-\delta)(1-p+\delta)\log
p(1-p)-\delta(1-2(p-\delta))}$ (74)
$\displaystyle=\left(1-\frac{\delta}{p}\right)\cdot\frac{\delta
H_{b}(p)-\delta^{2}\log(1-p)}{\delta(1-2(p-\delta))-2(p-\delta)(1-p+\delta)\log
p(1-p)},$ (75)
where (73) follows from $\frac{x}{x+1}\leq\log(1+x)\leq x$ and (75) follows
from the definition of the binary entropy. The claim follows by substituting
$\delta=\frac{1}{K}$.
###### Proof A.5 (of Lemma 2.3).
Let $N=3+\lceil c\cdot K\rceil$, for some $c\geq
6\log\frac{2}{p_{\min}}$. From Lemma A.3,
$\displaystyle\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}\left(N,p,p-\frac{1}{K}\right)}\leq\frac{2}{p_{\min}}\cdot\exp_{2}\left\\{-\frac{c\left(1-\frac{1}{K\cdot
p}\right)\left(H_{b}(p)-\frac{1}{K}\log(1-p)\right)}{\frac{1}{K}\left(1-2\left(p-\frac{1}{K}\right)\right)-2\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\log
p(1-p)}\right\\}$ (76)
In order to guarantee
$\operatorname{\mathsf{P_{e}}}^{\operatorname{\mathsf{RUNS}}\left(N,p,p-\frac{1}{K}\right)}\leq\epsilon$,
it is sufficient to choose $c$ to be
$\displaystyle\frac{\frac{1}{K}\left(1-2\left(p-\frac{1}{K}\right)\right)-2\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\log
p(1-p)}{\left(1-\frac{1}{K\cdot
p}\right)\left(H_{b}(p)-\frac{1}{K}\log(1-p)\right)}\cdot\log\frac{2}{\epsilon
p_{\min}}.$ (77)
Upper bounding the first term in the brackets, we get
$\displaystyle\frac{\frac{1}{K}\left(1-2\left(p-\frac{1}{K}\right)\right)-2\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\log
p(1-p)}{\left(1-\frac{1}{K\cdot
p}\right)\left(H_{b}(p)-\frac{1}{K}\log(1-p)\right)}$ (78)
$\displaystyle\leq\frac{1}{1-\frac{1}{K\cdot
p}}\cdot\frac{\frac{1}{K}+2\left(H_{b}(p)-\frac{1}{K}\log(1-p)\right)}{H_{b}(p)-\frac{1}{K}\log(1-p)}$
(79) $\displaystyle=\frac{1}{1-\frac{1}{K\cdot p}}\left(2+\frac{1}{K\cdot
H_{b}(p)-\log(1-p)}\right)$ (80) $\displaystyle\leq\frac{3}{1-\frac{1}{K\cdot
p}}$ (81) $\displaystyle\leq 6,$ (82)
where (79), (81) and (82) follow since $p\geq\frac{2}{K}$ implies
1. (i)
$H_{b}(p)-\frac{1}{K}\log(1-p)\geq-\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\log
p(1-p)$,
2. (ii)
$K\cdot H_{b}(p)-\log(1-p)\geq 1$,
3. (iii)
$\frac{1}{1-\frac{1}{K\cdot p}}\leq 2$.
Combining (82) and (77), noting that
$\min\left\\{p(1-p),\left(p-\frac{1}{K}\right)\left(1-p+\frac{1}{K}\right)\right\\}\geq\left(p-\frac{1}{K}\right)(1-p)$,
and choosing
$\displaystyle
c=c_{\epsilon,p}=6\log\frac{2}{\epsilon\left(p-\frac{1}{K}\right)(1-p)},$ (83)
the proof is concluded.
## Appendix B Calculation of number of states $S$ in (19)
Using the expression in (17) for $N(\epsilon,p,K)$ we obtain
$\displaystyle S$ $\displaystyle=\sum_{k=1}^{K}N_{k}$ (84)
$\displaystyle=\sum_{k=1}^{K}N\left(\epsilon,\frac{k+1}{K+2},K+2\right)$ (85)
$\displaystyle\leq
4K+6(K+2)\sum_{k=1}^{K}\log\frac{2}{\epsilon\left(\frac{k}{K+2}\cdot\frac{K-k+1}{K+2}\right)}$
(86) $\displaystyle=4K+6K(K+2)\log\left(\frac{2}{\epsilon}\right)-6(K+2)\cdot
2\sum_{k=1}^{\frac{K}{2}}\log\left(\frac{k}{K+2}\cdot\frac{K-k+1}{K+2}\right)$
(87) $\displaystyle\leq
4K+6K(K+2)\log\left(\frac{2}{\epsilon}\right)-6(K+2)\cdot
2\sum_{k=1}^{\frac{K}{2}}\log\left(\frac{k}{K+2}\right)-6K(K+2)$ (88)
$\displaystyle\leq
4K+6K(K+2)\log\left(\frac{2}{\epsilon}\right)-6K(K+2)\log\left(\frac{K}{2e(K+2)}\right)-6K(K+2)$
(89) $\displaystyle\leq
4K+6K(K+2)\log\left(\frac{2e}{\epsilon}\right)+12(K+2)\leq
6(K+2)^{2}\log\left(\frac{2e}{\epsilon}\right),$ (90)
where (87) follows from the symmetry of
$\left(\frac{k}{K+2}\cdot\frac{K-k+1}{K+2}\right)$ around $k=\frac{K}{2}$,
(88) from $\frac{K-k+1}{K+2}\geq\frac{1}{2}$ for all $1\leq k\leq\frac{K}{2}$,
(89) is from $n!\geq\left(\frac{n}{e}\right)^{n}$ and (90) follows from
$\log(1+x)\geq\frac{x}{x+1}$.
## Appendix C Proof of $R_{\theta}=O(1/S)$ for
$\theta\in\left[0,\frac{1}{K+2}\right)$ and
$\theta\in\left(\frac{K+1}{K+2},1\right]$
We shall prove the case $\theta\leq\frac{1}{K+2}$. The case of $\theta\geq
1-\frac{1}{K+2}$ follows from a symmetric argument. We show how previous
results imply that for very small $\theta$ the stationary distribution is
concentrated on the two leftmost states of the sampled chain. From there, the
proof is similar (yet not identical) to the proof of the general case. Let us
go step by step:
* •
Firstly, Lemma 2.3 implies that $p_{k}<\epsilon$ for all $k>1$ in the chain of
Figure 1.
* •
Now, a simplified (one-sided) version of Lemma 2.4 shows the stationary
distribution is exponentially decreasing for all states $\geq 2$. This follows
since eq. (44) still holds with $k=1$,
$\displaystyle\mu_{i+1}\leq\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\mu_{2},$
(91)
for $1\leq i\leq K-1$.
* •
Applying Lemma 2.5, eq. (22) states that
$\operatorname{\mathbb{E}}[T_{j}]\geq(1-\epsilon)\operatorname{\mathbb{E}}[T_{i}]$
for all $j\in[K]$ and $i>j$.
* •
Calculate the risk $R_{\theta}$.
$\displaystyle R_{\theta}$
$\displaystyle=\sum_{i=1}^{K}\frac{\operatorname{\mathbb{E}}[T_{i}]\mu_{i}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(92)
$\displaystyle=\frac{\operatorname{\mathbb{E}}[T_{1}]\mu_{1}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{1}{K+2}-\theta\right)^{2}+\sum_{i=2}^{K}\frac{\operatorname{\mathbb{E}}[T_{i}]\mu_{i}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(93)
$\displaystyle\leq\frac{1}{(K+2)^{2}}+\frac{1}{1-\epsilon}\sum_{i=2}^{K}\frac{\operatorname{\mathbb{E}}[T_{2}]\mu_{2}}{\sum_{j=1}^{K}\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}\frac{\mu_{i}}{\mu_{2}}\left(\frac{i}{K+2}-\theta\right)^{2}$
(94)
$\displaystyle\leq\frac{1}{(K+2)^{2}}+\frac{1}{1-\epsilon}\sum_{i=1}^{K-1}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}\left(\frac{i+1}{K+2}\right)^{2}$
(95)
$\displaystyle\leq\frac{1}{(K+2)^{2}}\cdot\frac{1}{1-\epsilon}\left(\sum_{i=1}^{\infty}\left(\frac{\epsilon}{1-\epsilon}\right)^{i-1}(i+1)^{2}+1\right)$
(96)
$\displaystyle\leq\frac{6\log\left(\frac{2e}{\epsilon}\right)}{S}\left(\frac{\epsilon}{(1-2\epsilon)^{3}}+\frac{4(1-\epsilon)}{(1-2\epsilon)^{2}}+\frac{1}{1-\epsilon}\right),$
(97)
where (94) follows from Lemma 2.5, (95) follows from Lemma 2.4 and since
$\frac{\operatorname{\mathbb{E}}[T_{j}]\mu_{j}}{\sum_{k=1}^{K}\operatorname{\mathbb{E}}[T_{k}]\mu_{k}}\leq
1$, (96) is since we only add positive terms, and (97) is due to the identity
$\sum_{i=0}^{\infty}q^{i}(i+2)^{2}=\frac{q(1+q)+4(1-q)}{(1-q)^{3}}$ and by
substituting (19). Finally, substituting $\epsilon=1/100$ into (97) gives
$R_{\theta}<\frac{300}{S}$.
# Verifiable Learned Behaviors via Motion Primitive Composition:
Applications to Scooping of Granular Media
Andrew Benton, Eugen Solowjow, Prithvi Akella1
1All authors are with Siemens Corporation, {andrew.benton, prithvi.akella<EMAIL_ADDRESS>
###### Abstract
A robotic behavior model that can reliably generate behaviors from natural
language inputs in real time would substantially expedite the adoption of
industrial robots due to enhanced system flexibility. To facilitate these
efforts, we construct a framework in which learned behaviors, created by a
natural language abstractor, are verifiable by construction. Leveraging recent
advancements in motion primitives and probabilistic verification, we construct
a natural-language behavior abstractor that generates behaviors by
synthesizing a directed graph over the provided motion primitives. If these
component motion primitives are constructed according to the criteria we
specify, the resulting behaviors are probabilistically verifiable. We
demonstrate this verifiable behavior generation capacity in both simulation on
an exploration task and on hardware with a robot scooping granular media.
## I Introduction
In recent years, learning from human demonstrations has proven tremendously
successful at imitating intricate, human-like motion on robotic systems [1, 2,
3]. This has allowed for improvements in robotic grasping [4, 5, 6], assembly
[3, 7, 8], and even robotic surgery [9, 10, 11]. However, these methods often
require prohibitive amounts of precisely labeled data [12]. Additionally,
these learned behaviors are typically not transferable to tasks that are
similar but not identical, prompting further research into task-transferable
learning [13, 14, 15]. However, works in this vein exhibit similar, if not
heightened, requirements on the amount of data available to the learning
procedure.
Despite these challenges, more comprehensive learned models that incorporate
streams of multimodal data have shown tremendous success at learning
generalized, intricate behaviors. For example, the recently developed Palm-E
model has successfully translated natural language user commands to control
policies for a $6$-DOF arm, realizing the intended tasks even when they were
not explicitly learned [16]. Building on the success of Palm-E and other
foundational robotic models [17, 18, 19], recent work also aims to codify
effective design principles for these models [20].
Figure 1: A graphical representation of our natural-language-based behavior
generalizer and verification scheme. By ensuring that the language model only
composes behaviors as a directed graphical abstraction over the provided
motion primitives, we show that any such generated behavior has an associated
certificate list that we can exploit to verify the learned behavior’s ability
to realize the user’s desired task.
Conceptually, however, both the Palm-E model and the other learning paradigms
mentioned above hinge on a notion of composing generalized behavior from a
finite set of learned behaviors. Prior work in controls and robotics has shown
that generalizing from this initial behavior set, termed motion primitives in
the existing literature, yields robust, and more importantly, verifiable
generalized behavior provided the primitives and subsequent behaviors are
constructed with care [21, 22, 23]. Consequently, inspired by the previous
attempts at codifying design principles for these learned models [20], we
posit that by leveraging these prior works in motion primitives and black-box
risk-aware verification, we can synthesize verifiable learned behaviors over a
provided set of carefully constructed motion primitives.
Our Contribution: Leveraging recent work in risk-aware verification [24, 25],
we take steps towards constructing a framework for verifying learned,
generalized behaviors composed from a set of motion primitives. Specifically,
if the input/output spaces of the motion primitives satisfy certain conditions
that permit their verification, and the behavior is constructed as a directed
graph over these primitives, then the resulting behavior is similarly
verifiable. We showcase this verifiability in both simulation and on hardware,
focusing on exploration and reconnaissance for the former and a granular media
scooping task for the latter.
Structure: We review black-box risk-aware verification and motion primitives
in Section II before formally stating the problem under study in Section II-C.
Section III details our behavior generation scheme and states our main
contribution regarding the verifiability of the resulting generated behaviors.
Finally, Section IV showcases our behavior generation scheme by developing an
exploratory behavior (Section IV-A) and a scooping motion for granular
media (Section IV-B). Both behaviors are also verified in the same sections
according to the provided verification scheme.
## II Terminology and Formal Problem Statement
### II-A Black-Box Risk-Aware Verification
The information in this section is adapted from [24, 25]. Black-box risk-aware
verification assumes the existence of a discrete-time controlled system of the
following form, with system state $x\in\mathcal{X}$, control input
$u\in\mathcal{U}$, environment state $d\in\mathcal{D}$ and potentially unknown
dynamics $f$:
$x_{k+1}=f(x_{k},u_{k},d),~{}\forall~{}k=0,1,2,\dots.$ (1)
As verification measures the robustness of a controlled system’s ability to
realize a behavior of interest, work in this vein assumes the existence of a
feedback controller $U:\mathcal{X}\times\mathcal{D}\to\mathcal{U}$. The
system’s evolution when steered by this controller $U$ will be denoted as
$\Sigma$, a function mapping an initial system and environment state to the
system state evolution as prescribed by (1), i.e.
$\Sigma(x_{0},d)=\\{x_{0},x_{1},\dots,x_{K}\\}\in\mathcal{X}^{K}$ for some
$K>0$. Finally, a robustness measure $\rho$ maps this state evolution
$\Sigma(x_{0},d)$ and environment state $d$ to the reals, i.e.
$\rho:\mathcal{X}^{K}\times\mathcal{D}\to\mathbb{R}$. For context, these
robustness measures can be those coming from temporal logic [26] or the
minimum value of a control barrier function over a time horizon [27] among
other methods. A positive outcome of this robustness measure indicates that
the corresponding state evolution realized the desired behavior, i.e.
$\rho(\Sigma(x_{0},d),d)\geq 0$ implies the state evolution $\Sigma(x_{0},d)$
realized the behavior of interest.
Black-box risk-aware verification employs this robustness measure to provide a
probabilistic statement on the system’s ability to realize the desired
behavior for all permissible initial conditions and environment states. This
is formally expressed in the following theorem:
###### Theorem 1.
Let $\\{r^{i}=\rho(\Sigma(x_{0}^{i},d^{i}),d^{i})\\}_{i=1}^{N}$ be a set of
$N$ robustness evaluations of trajectories whose initial conditions and
environments $(x_{0}^{i},d^{i})$ were sampled via $\pi$ over
$\mathcal{X}\times\mathcal{D}$, and let
$r^{*}=\min\\{r^{1},r^{2},\dots,r^{N}\\}$. Then, both the probability of
sampling an initial condition and environment pair whose robustness
is lower bounded by $r^{*}$ and the confidence in the associated probability
statement are functions only of the number of samples $N$ and a scalar
$\epsilon\in[0,1]$, i.e.
$\operatorname{\mathbb{P}}^{N}_{\pi}\left[\operatorname{\mathbb{P}}_{\pi}[\rho(\Sigma(x_{0},d),d)\geq
r^{*}]\geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N}.$ (2)
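For intuition, eq. (2) is straightforward to evaluate numerically. The following minimal sketch is illustrative only; the function name is ours, and the robustness samples would in practice come from simulating the closed-loop system:

```python
import numpy as np

def verification_bound(robustness_samples, epsilon):
    """Sample-based bound of eq. (2): returns the cutoff r* and the
    confidence that P[rho(Sigma(x_0, d), d) >= r*] >= 1 - epsilon."""
    r = np.asarray(robustness_samples, dtype=float)
    r_star = r.min()                              # worst observed robustness
    confidence = 1.0 - (1.0 - epsilon) ** r.size  # 1 - (1 - eps)^N
    return r_star, confidence

# Hypothetical usage with N = 100 sampled trajectories:
rng = np.random.default_rng(0)
r_star, conf = verification_bound(rng.uniform(0.1, 1.0, size=100), epsilon=0.05)
print(r_star, conf)  # conf ~= 0.9941
```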
### II-B Motion Primitives
Motion primitives are well studied in the controls and robotics literature,
though we provide a slight variant on existing definitions to align with our
notation.
###### Definition 1.
A Motion Primitive is a $4$-tuple $\mathcal{P}=(\Xi,A,U,R)$ with the following
definitions for the tuple:
* $(\Xi)$
The complete set of parameters for this primitive, i.e.
$\Xi\subseteq\mathbb{R}^{p}$ for an appropriate dimension $p\geq 0$.
* $(A)$
A function taking a system and environment state $(x,d)$ as per (1) and
outputting the subset of valid parameters $P$ for this pair, i.e.
$A(x,d)=P\subseteq\Xi$.
* $(U)$
The parameterized controller for this primitive, mapping states, environments,
and the parameter to inputs, i.e.
$U:\mathcal{X}\times\mathcal{D}\times\Xi\to\mathcal{U}$.
* $(R)$
A parameterized function outputting the subset of the state space the system
will occupy upon completion of the primitive, i.e. for $\xi\in\Xi$ and with
environment state $d$, $R(\xi,d)=X_{f}\subseteq\mathcal{X}$.
As an example consistent with the simulations to follow, consider the
system as per (1) to be a single-integrator system on the plane required to
navigate in a finite-sized grid. A feasible motion primitive $\mathcal{P}$
would be moving the system to an adjacent cell. For simplicity’s sake, assume
there are no obstacles, and as such, the environment state space
$\mathcal{D}=\varnothing$. Then, the complete set of parameters $\Xi$ would be
the labels for all the cells in this grid, the accepting function $A$ outputs
all cells adjacent to the cell containing the current system state $x$, $U$
could be a proportional controller sending the system to the appropriate cell,
and $R$ would output the subset of the state space encompassed by the cell to
which the system was required to move.
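To make Definition 1 concrete, the grid example can be encoded as follows. This is a minimal sketch with names of our own choosing; in particular, the proportional controller is abstracted to the state it reaches, and the environment argument is omitted since $\mathcal{D}=\varnothing$ here:

```python
from dataclasses import dataclass
from typing import Callable, Set, Tuple

State = Tuple[float, float]   # planar position in [-5, 5]^2
Param = Tuple[int, int]       # a cell label in the 10 x 10 grid

def cell_of(x: State) -> Param:
    """Map a planar position to the cell containing it (cell 0 spans [-5, -4))."""
    return (int(x[0] + 5), int(x[1] + 5))

def adjacent_cells(c: Param) -> Set[Param]:
    """Cells sharing an edge with c, clipped to the grid."""
    i, j = c
    cand = {(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)}
    return {(a, b) for a, b in cand if 0 <= a < 10 and 0 <= b < 10}

@dataclass
class MotionPrimitive:
    """A primitive P = (Xi, A, U, R) as per Definition 1."""
    params: Set[Param]                           # Xi: complete parameter set
    accept: Callable[[State], Set[Param]]        # A(x): valid parameters
    controller: Callable[[State, Param], State]  # U, abstracted to its end state
    terminal_set: Callable[[Param], Set[Param]]  # R(xi), as a set of cells

# The "move to an adjacent cell" primitive from the text.
move_adjacent = MotionPrimitive(
    params={(i, j) for i in range(10) for j in range(10)},
    accept=lambda x: adjacent_cells(cell_of(x)),
    controller=lambda x, xi: (xi[0] - 4.5, xi[1] - 4.5),  # center of target cell
    terminal_set=lambda xi: {xi},
)
```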
### II-C Problem Statement
Our goal is to develop a framework by which behaviors learned over these
primitives can be verified. As such, we define a behavior $B$ as a directed
graph of primitives, with edges from a primitive $\mathcal{P}$ indicating the
primitive $\mathcal{P}^{\prime}$ to be run upon completion of $\mathcal{P}$.
For examples of such behaviors, see the sketch provided in Figure 1 and the
resulting behavior for our simulation example in Figure 3. The formal
definition of these behaviors follows.
###### Definition 2.
A behavior $B$ is a directed graph defined as a $4$-tuple, i.e. $B=(N,E,S,T)$
with the following definitions:
* $(N)$
The finite set of nodes for the graph, where each node is a primitive as per
Definition 1, i.e.
$N=\\{\mathcal{P}_{1},\mathcal{P}_{2},\dots,\mathcal{P}_{|N|}\\}$.
* $(E)$
The set of directed edges connecting nodes in the graph. Each edge identifies
a method to choose parameters for the successive primitive. If multiple edges
emanate from a node, then a method exists such that at runtime, only one edge
is chosen.
* $(S)$
A start function taking as input the system and environment state $(x,d)$ as
per (1) and outputting both the starting primitive and its parameter, i.e.
$S(x,d)=(\xi,\mathcal{P})$ where $\mathcal{P}\in N$ and $\xi\in
A_{\mathcal{P}}(x,d)$.
* $(T)$
The set of terminal nodes, i.e. $T\subseteq N$.
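Definition 2 admits an equally compact encoding. The sketch below is one of several reasonable representations (reusing the MotionPrimitive sketch from Section II-B) and stores edges as parameter-selection functions:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Behavior:
    """A behavior B = (N, E, S, T) as per Definition 2."""
    nodes: List[object]  # N: the primitives, e.g. MotionPrimitive instances
    # E: per node index, a function mapping the current state to the successor
    # (node index, parameter), or None if execution stops at this node.
    edges: Dict[int, Callable[[object], Optional[Tuple[int, object]]]]
    start: Callable[[object], Tuple[object, int]]  # S(x) -> (xi, node index)
    terminals: List[int]                           # T: terminal node indices
```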
Our goals are twofold: first, to determine whether we can verify the behaviors
generated by Algorithm 1, and second, if the behaviors are verifiable, to
determine a framework by which we can verify any behavior generated by this
method. The problem is stated formally below.
###### Problem 1.
Determine if the behaviors generated by Algorithm 1 are verifiable, and if
they are verifiable, determine a method to verify any such generated behavior.
Algorithm 1 Natural Language-based Behavior Generalizer
A set of primitives as per Definition 1 and their descriptions
$\mathbb{D}=\\{(\mathcal{P}_{i},$ description of primitive $i)\\}_{i=1}^{M}$,
a list of existing behaviors $\mathbb{B}=\\{B_{1},B_{2},\dots\\}$ with
behaviors $B$ as per Definition 2, and a natural language abstractor $A$
taking as input a string $s$ defining a desired behavior, a string $I$
defining any useful, non-primitive information available for behavior
generation, and the primitive list $\mathbb{D}$ and outputting behaviors $B$.
while True do
$c\leftarrow$ desired behavior $B$
if $c\not\in\mathbb{B}$ then
$s\leftarrow$ description of desired behavior
$I\leftarrow$ helpful non-primitive information
$\mathbb{B}\leftarrow\mathbb{B}\cup\\{A(s,I,\mathbb{D})\\}$
end if
end while
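In code, Algorithm 1 is essentially a cache-and-query loop around the abstractor. The sketch below is a schematic rendering; the callables stand in for the LLM-based abstractor and the user interface and are not an interface from this work:

```python
def behavior_generalizer(abstractor, descriptions, behaviors, next_request):
    """Algorithm 1: only query the abstractor for behaviors not yet known.

    abstractor(s, I, D) -> Behavior; descriptions is the primitive list D;
    next_request() returns (name c, behavior description s, extra info I);
    behaviors is the dict of existing behaviors, playing the role of B.
    """
    while True:
        name, s, info = next_request()   # c <- desired behavior
        if name not in behaviors:        # if c not in B
            # B <- B u {A(s, I, D)}
            behaviors[name] = abstractor(s, info, descriptions)
        yield behaviors[name]
```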
## III Verifying Learned Behaviors
We will provide a solution to both aspects of Problem 1 simultaneously, by
constructing the framework for verifying any behavior as per Definition 2. To
construct this framework, we first note that there exist two outcomes to
executing any behavior from any initial system and environment state - it
either terminates successfully or it does not. In the event it terminates
successfully, we can record the set of all primitives run over the course of
the behavior, their corresponding parameters, and the system states upon
termination of the corresponding primitive, i.e.
$\mathbb{D}=\\{(\xi_{1},\mathcal{P}_{1},x^{f}_{1}),(\xi_{2},\mathcal{P}_{2},x^{f}_{2}),\dots\\}$.
If the behavior fails due to reasons such as an intermediary controller
failure or an error in the behavior’s graph construction leading to a runtime
error, we can record the failure.
This permits us to construct a robustness measure for a verification scheme
aligned with the method described in Section II-A. First, for each tuple in the
dataset $\mathbb{D}$ generated by running the behavior, we can define a
certificate function checking whether the terminal state laid in the terminal
set prescribed by the primitive, parameter, and environment:
$C\left(\xi,\mathcal{P},x^{f},d\right)=x^{f}\in R_{\mathcal{P}}(\xi,d).$ (3)
Here, we note that we are implicitly associating boolean outcomes with $\pm
1$. The robustness measure $\rho$ would check the validity of each of these
certificates over the run of a behavior and output $1$ if all certificates
were satisfied and $-1$ if the system failed or any certificate was not
satisfied. Specifically then, let $(x_{0},d)$ be the initial system and
environment state, let $\Sigma$ be the trajectory function as described in
Section II-A, and let $\mathbb{D}$ be the dataset of tuples collected over the
course of a successfully run behavior. Then the robustness measure
$\rho_{B}(\Sigma(x_{0},d),d)=\begin{cases}\min\limits_{\gamma\in\mathbb{D}}~{}C(\gamma,d)&\mbox{if~{}behavior~{}finished},\\\
-1&\mbox{else}.\end{cases}$ (4)
Here, we have abbreviated the tuples in $\mathbb{D}$ with the variable
$\gamma$ to ease notation. In words, the robustness measure $\rho_{B}$
in (4) evaluates to a positive number if and only if the behavior successfully
terminated and all component primitives exhibited their desired behaviors.
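Equations (3) and (4) translate directly into code. The sketch below assumes the grid-world MotionPrimitive encoding from Section II-B, with terminal sets expressed at cell resolution:

```python
def certificate(xi, primitive, x_f):
    """Eq. (3): +1 if the terminal state lies in R_P(xi), else -1."""
    return 1 if cell_of(x_f) in primitive.terminal_set(xi) else -1

def behavior_robustness(run_log, finished):
    """Eq. (4): the minimum certificate over the run if the behavior
    finished, and -1 on any failure. run_log holds (xi, primitive, x_f)."""
    if not finished:
        return -1
    return min(certificate(xi, p, x_f) for (xi, p, x_f) in run_log)
```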
Figure 2: Examples of the environments considered for the example in Section
IV-A. The blue circle represents the agent, the blue square represents the
agent’s starting cell, the green squares are goals, the black squares are
obstacles, and the gold region is the region explored by the learned behavior.
Using the robustness measure in (4), we can verify any behavior as per
Definition 2. To ease the formal exposition of the results, we will first
denote via $\mathcal{B}$ the subset of the system and environment state spaces
that have a valid starting point for the behavior $B$ to be verified. This is
to ensure that in the verification procedure to follow, we do not sample and
evaluate the behavior’s performance from initial conditions and environment
states that disallow the behavior from the start. Formally then,
$\mathcal{B}=\\{(x,d)\in\mathcal{X}\times\mathcal{D}~{}|~{}S_{B}(x,d)\neq\varnothing\\}.$
(5)
With these definitions we have the following theorem identifying a framework
to verify behaviors, though to simplify exposition, we will express the
assumptions separately:
###### Assumption 1.
Let $\\{r^{i}=\rho_{B}(\Sigma(x^{i}_{0},d^{i}),d^{i})\\}_{i=1}^{N}$ be the
behavioral robustness of $N$ attempts at executing behavior $B$ from uniformly
sampled initial conditions and states $(x_{0},d)$ over the allowable space
$\mathcal{B}$ as per (5) with robustness measure $\rho$ as per (4), and let
$r^{*}=\min_{i}r^{i}$.
###### Theorem 2.
Let Assumption 1 hold. If $r^{*}=1$, then $\forall~{}\epsilon\in[0,1]$, the
behavior $B$ will execute successfully for at least $100(1-\epsilon)\%$ of the
initial condition and environment pairs in $\mathcal{B}$ and the confidence in
this statement is $1-(1-\epsilon)^{N}$.
Proof: As Assumption 1 satisfies the conditions for Theorem 1, we can employ
the same theorem and get the following result $\forall~{}\epsilon\in[0,1]$ and
substituting $r^{*}=1$:
$\displaystyle\mathbb{C}1$
$\displaystyle\triangleq\operatorname{\mathbb{P}}_{\operatorname{\mathrm{U}}[\mathcal{B}]}[\rho_{B}(\Sigma(x_{0},d),d)\geq
1]\geq 1-\epsilon,$ (6) $\displaystyle\mathbb{C}2$
$\displaystyle\triangleq\operatorname{\mathbb{P}}^{N}_{\operatorname{\mathrm{U}}[\mathcal{B}]}[\mathbb{C}1]\geq
1-(1-\epsilon)^{N}.$
Here, $\operatorname{\mathrm{U}}[\mathcal{B}]$ denotes the uniform
distribution over $\mathcal{B}$. We will analyze $\mathbb{C}1$ first. Note
that in order for $\rho_{B}(\Sigma(x_{0},d),d)\geq 1$, all certificate
functions over the dataset $\mathbb{D}$ generated by running behavior $B$ must
evaluate to $1$, a consequence of equations (4) and (3). As a result,
$\rho_{B}(\Sigma(x_{0},d),d)=1\iff\mathrm{the~behavior~executes~successfully}.$ (7)
Therefore, we can define the subset of the feasible joint state space
corresponding to initial conditions and environment states from which the
behavior executes successfully:
$\mathbb{V}=\\{(x,d)\in\mathcal{B}~{}|~{}\rho_{B}(\Sigma(x,d),d)=1\\}.$ (8)
Similarly, we can define a volume fraction function over the allowable joint
state space:
$\mathcal{V}(Q)=\frac{\int_{Q}1ds}{\int_{\mathcal{B}}1ds}.$ (9)
Finally, since the uniform distribution assigns probabilistic weight to a
subset of events equivalent to their volume fraction in the sample space,
$\mathbb{C}1$ resolves to the following:
$\mathbb{C}1\equiv\mathcal{V}(\mathbb{V})\geq 1-\epsilon.$ (10)
Substituting this equivalence into $\mathbb{C}2$ completes the proof.
$\blacksquare$
### III-A Extending to Non-Deterministic Behaviors
In the prior sections, we only considered deterministic system evolution and
behavior graph resolution. However, it may be the case that either the system
evolves or the behavior graph resolves non-deterministically. Our proposed
verification framework should account for this non-determinism, and this
section details how the prior procedure extends to this case. We will
formalize this non-determinism directly in the context of verification.
Specifically, we assume that we have a distribution by which we can draw
robustness evaluations of system trajectories, i.e.
$\rho(\Sigma(x_{0},d),d)~{}\mathrm{is~{}sampled~{}from~{}}\pi_{V}.$ (11)
Note that this accounts for both cases where the initial system and
environment states are potentially sampled randomly via a distribution
$\pi_{X}$ over the allowable space $\mathcal{B}$ as per (5) and the ensuing
trajectories $\Sigma(x_{0},d)$ are also randomly sampled from some unknown
trajectory-level distribution $\pi_{S}$, arising from the aforementioned non-
deterministic system evolution or behavior graph resolution.
As a result, we can follow the same verification method as in Theorem 1,
though we cannot identify trajectories via initial conditions as we did in
Assumption 1. The following assumption and corollary express this notion
formally:
###### Assumption 2.
Let $\rho_{B}$ be the robustness measure for the behavior $B$ as per equation
(4), let $\\{r^{i}=\rho_{B}(\Sigma(x^{i}_{0},d^{i}),d^{i})\\}_{i=1}^{N}$ be
the robustnesses of $N$ trajectories sampled via the (unknown) distribution
$\pi_{V}$, and let $r^{*}=\min_{i}r^{i}$.
###### Corollary 1.
Let Assumption 2 hold. If $r^{*}=1$, then $\forall~{}\epsilon\in[0,1]$, the
non-deterministic system $\Sigma$ successfully executes the behavior $B$ with
minimum probability $1-\epsilon$ and confidence $1-(1-\epsilon)^{N}$, i.e.:
$\operatorname{\mathbb{P}}^{N}_{\pi_{V}}\left[\operatorname{\mathbb{P}}_{\pi_{V}}[\rho(\Sigma(x_{0},d),d)\geq
r^{*}]\geq 1-\epsilon\right]\geq 1-(1-\epsilon)^{N}.$ (12)
Proof: This is a direct result of Theorem 1.
$\blacksquare$
## IV Demonstrations
### IV-A Exploratory Behavior Generation
To illustrate the verifiability of behaviors generated via Algorithm 1, this
section will detail our efforts at using a natural language abstractor built
on ChatGPT to construct an exploratory behavior.
System and Environment Description: To that end, the simulations to follow
feature an agent idealized as a single-integrator system on the plane,
navigating within a $10\times 10$ grid with obstacles and a few goals. The
system state $x$ is its planar position and its labels for each of the cells,
i.e. $x\in[-5,5]^{2}\times\\{$empty, obstacle, unexplored,
goal$\\}^{100}\triangleq\mathcal{X}$. The environment, i.e. obstacle and goal
cells, is the subset of the overall label space where there exist $30$
obstacles and $3$ goals with no overlaps, i.e. $\mathcal{D}\subset$
$\\{$empty, obstacle, goal$\\}^{100}$. The system dynamics as per (1) are
known in this case, with single-integrator dynamics for the planar dimension
and label updates when specifically provided by a controller - otherwise,
labels remain constant.
Motion Primitives: The system has two primitives upon which the natural-
language behavior generalizer can build behaviors. Their descriptions follow:
* $\mathcal{P}^{s}_{1}:$
A label update function that updates the labels in the state $x$ to match the
labels of the cells immediately surrounding the agent, i.e. if the agent were
in cell $(2,3)$ the function updates the labels of cells
$\\{(2,3),(3,3),(1,3),(2,4),(2,2)\\}$.
* $\Xi:$
The set of all cells, i.e. $\Xi=\\{0,1,2,\dots,9\\}^{2}$.
* $A:$
A function outputting the cell the system currently occupies, i.e. if the
system’s planar position were $[-4.5,-3.5]$, the only valid parameter is cell
$(0,1)$.
* $U:$
Updates the state to reflect the environment labels of all adjacent cells.
* $R:$
A function outputting the portion of the state space where the labels for the
agent’s current and adjacent cells align with those of the environment. All
other cell labels are unconstrained, i.e. if the agent’s current and adjacent
cells were all empty, then $R(\xi,d)$ would output the subset of the state
space containing label vectors whose elements for those cells all read “empty”
with no constraints on other elements.
* $\mathcal{P}^{s}_{2}:$
A navigation function that steers the agent to a desired cell while avoiding
obstacles.
* $\Xi:$
The set of all cells, i.e. $\Xi=\\{0,1,2,\dots,9\\}^{2}$.
* $A:$
A function outputting the portion of the parameter space where the cell is
reachable by the agent in the provided environment.
* $U:$
A Markov-decision-process-based planner tracked by a PD controller that steers the
agent to the desired cell while avoiding obstacles.
* $R:$
Outputs the portion of the planar state space encompassed by the desired cell,
i.e. if the agent could reach cell $(2,2)$, then $R(\xi=(2,2),d)=[-3,-2]^{2}$.
Algorithm Information: We desired an exploratory behavior whereby the system
searches the grid for a goal and after identifying a goal, oscillates back and
forth between the goal and its starting location at least $5$ times. As useful
information for the task-following algorithm, the inputted information
(string $I$ in Algorithm 1) indicated that the language model could use the
following functions when determining edges in the outputted behavior graph:
* $\mathcal{E}^{s}_{1}:$
A function that outputs as a list, all the cells that have been explored by
the agent thus far, i.e. all cells that have a label other than “unexplored”
in the current state.
* $\mathcal{E}^{s}_{2}:$
A function that outputs as a list all cells immediately adjacent to the
agent’s currently occupied cell.
* $\mathcal{E}^{s}_{3}:$
A function that determines whether a goal has been found and outputs the
corresponding cell.
Figure 3: Depiction of the directed behavior graph generated by Algorithm 1
for the example detailed in Section IV-A. The first behavior’s graph is
highlighted in green; the second behavior incorporates the first, and the extra
information is the unhighlighted part of the graph.
Figure 4: Depiction of the learned scooping behavior. In this case, the motion
was coded previously, but contingent on the arm’s ability to sense the cups in
its environment. As such, the LLM interface only asked the end-user to provide
the initial positioning (1) wherein the arm had a high likelihood of sensing
both cups. Then the LLM behavior first moves to the desired sensing position
(2), calls the scooping primitive as seen in (3)-(4), and returns to the
instructed sensing position in (5) in case any of the cups shifted during the
procedure. Then the process repeats.
Behavior 1: For the first step, we asked the algorithm to devise a behavior
that explored the grid until it identified a goal. Specifically, the inputted
behavior string $s$ was as follows: “Please construct a function that performs
the following tasks in sequence. First, it searches over all explored cells
that are not obstacles to find the explored cell with the highest number of
unexplored neighbors. Let’s call this identified cell, cell A. Second, it
sends the agent to cell A and identifies the labels of all adjacent cells.
Three, it repeats steps one and two until a goal has been found, at which
point, it stops.” The part of the graph highlighted in green in Figure 3 shows
the generated behavior graph. As part of this generation procedure, it used
two of the provided functions $\mathcal{E}^{s}_{1},\mathcal{E}^{s}_{2}$ to
construct the edge decision function $\mathcal{E}^{s}_{4}$, described as
follows:
* $\mathcal{E}^{s}_{4}:$
A function that searches over all explored cells (the list of explored cells
is provided by $\mathcal{E}^{s}_{1}$) and assigns to each cell the number of
its adjacent cells that are unexplored (the list of adjacent cells is
provided by $\mathcal{E}^{s}_{2}$). It reports the first cell in the list with
the maximum number of unexplored neighbors.
Behavior 2: We wanted to build on the prior behavior for the latter half of
our goal, and as such, informed the LLM of the existence of this prior
behavior in the list of existing behaviors denoted as $\mathbb{B}$ in
Algorithm 1. Then, as the user, we requested the following from the LLM:
“Please construct a function that performs the following tasks in sequence.
First, it finds a goal. Second, it moves between the goal and its starting
location 5 times.” The behavior graph for this second behavior is the
unhighlighted graph in Figure 3.
Verification Procedure and Remarks: As Behavior $2$ utilized Behavior $1$,
verifying both amounts to verifying the former. Following the results of
Theorem 2, we recorded a data set $\mathbb{D}$ of parameters, primitives, and
terminal states while running the second behavior. The certificates per
equation (3) amount to checking that updated labels matched their true labels
after running primitive $\mathcal{P}^{s}_{1}$ and checking that the system
occupied the desired cell after running primitive $\mathcal{P}^{s}_{2}$. The
allowable joint state space $\mathcal{B}$ as per (5) was the portion of the
joint space where the system starts in a state $x$ such that at least one goal
is reachable in the corresponding environment $d$. Finally, the verification
procedure uniformly randomly sampled state pairs $(x,d)\in\mathcal{B}$ and
checked the corresponding certificates for each run of the behavior.
After running the second behavior from $100$ randomly sampled initial state
pairs, the behavior terminated successfully every time. As such, by Theorem 2
we expect that the second behavior will run successfully for at least $95\%$ of
possible state pairs, and we are $99.4\%$ confident in this statement - we
generated these numbers by substituting $\epsilon=0.05$ and $N=100$ for
Theorem 2. To validate these statements, we ran the second behavior in $2000$
more sampled environments, and it terminated successfully every time. If we
were incorrect in our prior statement that the behavior would run successfully
for at least $95\%$ of feasible state pairs $(x,d)\in\mathcal{B}$, then we
would have been effectively guaranteed to identify such a failure over the
$2000$ subsequent runs. As we did not, we are confident in our corresponding
statement. Furthermore, while the synthesized behaviors seem rudimentary, they
suffice to indicate that our behavior synthesis scheme produces effective and
verifiable behaviors.
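The quoted figures follow directly from the bound in Theorem 2 and are easy to reproduce; a minimal check:

```python
# Confidence from Theorem 2 with N = 100 successful runs and epsilon = 0.05:
print(1 - (1 - 0.05) ** 100)   # ~0.99408, the quoted 99.4%

# If the true success rate were below 95%, the probability of nevertheless
# observing 2000 further successes in a row would be at most:
print(0.95 ** 2000)            # ~2.8e-45, hence "effectively guaranteed"
```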
### IV-B Scooping of Granular Media
Our second demonstration, focusing on granular media scooping, illustrates the
framework’s utility in helping end-users set up repetitive, verifiable tasks.
System and Environment Description: The scooping problem consists of picking
up material from a donor container and depositing it into a receiver container
using a UR5e $6$-DOF robot arm with a wrist-mounted RealSense depth camera.
While a rudimentary scooping motion has been programmed a priori, it does not
know the environment in which it will be performing this motion - similar to
the situation when a pre-programmed robot has to be initialized for specific
use. The robot’s state $x\in\mathbb{R}^{6}$ is the full pose of the end-
effector, the control input $u$ corresponds to joint rotations, and the
environment $d$ corresponds to the locations and orientations of the donor and
receiver containers and the level and distribution of sand in the donor
container.
Motion Primitives: In this case, the system only has one primitive, the
scooping primitive, described as follows:
* $\mathcal{P}^{r}:$
A primitive performing a scooping motion from a donor container to a receiver
container.
* $\Xi:$
The space of feasible end-effector poses where a parameter $\xi\in\Xi$ denotes
the pose in which the robot will sense all objects in the environment to start
the scooping motion.
* $A:$
A function outputting the space of end-effector poses from which all
containers are in view of the onboard vision system.
* $U:$
A controller that performs the scooping motion.
* $R:$
A function that outputs a ball around the provided parameter within which the
end-effector’s pose will lie upon the termination of the scooping motion.
Note that the acceptance function $A$ is implicitly defined and
impossible to know a priori. Here, we intend for the algorithm to assist the
end-user in selecting a parameter $\xi$ whose validity, i.e. existence in
$A(x,d)~{}\forall~{}(x,d)\in\mathcal{X}\times\mathcal{D}$, can be checked
through the ensuing verification procedure.
Algorithm Information: To assist the user in picking such a parameter $\xi$,
the algorithm was provided an information string $I$ describing a helper
function $\mathcal{E}^{r}_{1}$ that translated and rotated the end-effector a
desired amount. This string also included several examples of natural language
translations to inputs for this function $\mathcal{E}^{r}_{1}$. Additionally,
the string included another function $\mathcal{E}^{r}_{2}$ that saved the end-
effector pose for future reference, and the LLM was told to call this function
if the user deemed the current end-effector pose satisfactory.
Behavior Generation and Verification: The task-model repeatedly queried the
user for end-effector translations and rotations, and asked whether or not the
user deemed the current pose sufficient for sensing any placement of
containers. As such, there was no singular behavior prompt $s$. However, as
the resulting behavior repetitively executes the scooping primitive with the
user-provided sensing pose parameter $\xi$, this behavior can be verified by
the results of Corollary 1. To do so, before every scooping motion, we placed
the containers at a computer-generated randomly chosen distance from a pre-
determined set-point. As we are manually placing containers at the pre-
determined locations, there will be noise affecting this placement, though we
assume this noise is independent for successive placements. We will denote
this distribution of container placements via $\pi$. As there is no need to
sample over initial robot states - the system always starts and ends at the
parameterized sensing pose $\xi$ every iteration - we can draw independent
environments - container placements - via our distribution $\pi$ and record
the robot’s ability to perform its scooping motion in each placement. Doing so
for $59$ sampled environments, with successful trials each time, indicates
according to Corollary 1 that if we continued to sample environments and test
the system accordingly, the system would succeed at least $95\%$ of the time,
and we are at least $95\%$ confident in that statement.
## V Conclusion
We propose a framework by which a natural language abstractor can synthesize
verifiable behaviors as a directed graph over provided motion primitives. To
showcase the increased flexibility and verifiability of the synthesized
behaviors, we instructed the task-following model to construct an exploratory
behavior for a simulated planar agent and a scooping behavior for a robotic
arm. In both cases, the generated behavior was verifiable via the
aforementioned method, and we were able to validate our probabilistic
verification statements in simulation.
## References
* [1] C. G. Atkeson and S. Schaal, “Robot learning from demonstration,” in _ICML_ , vol. 97. Citeseer, 1997, pp. 12–20.
* [2] H. Ravichandar, A. S. Polydoros, S. Chernova, and A. Billard, “Recent advances in robot learning from demonstration,” _Annual review of control, robotics, and autonomous systems_ , vol. 3, pp. 297–330, 2020.
* [3] Z. Zhu and H. Hu, “Robot learning from demonstration in robotic assembly: A survey,” _Robotics_ , vol. 7, no. 2, p. 17, 2018.
* [4] Y. Lin, S. Ren, M. Clevenger, and Y. Sun, “Learning grasping force from demonstration,” in _2012 IEEE International Conference on Robotics and Automation_. IEEE, 2012, pp. 1526–1531.
* [5] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in _2009 IEEE International Conference on Robotics and Automation_. IEEE, 2009, pp. 763–768.
* [6] E. Misimi, A. Olofsson, A. Eilertsen, E. R. Øye, and J. R. Mathiassen, “Robotic handling of compliant food objects by robust learning from demonstration,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 6972–6979.
* [7] O. M. Manyar, Z. McNulty, S. Nikolaidis, and S. K. Gupta, “Inverse reinforcement learning framework for transferring task sequencing policies from humans to robots in manufacturing applications,” in _2023 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2023, pp. 849–856.
* [8] Y. Wang, R. Xiong, L. Shen, K. Sun, J. Zhang, and L. Qi, “Towards learning from demonstration system for parts assembly: A graph based representation for knowledge,” in _The 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent_. IEEE, 2014, pp. 174–179.
* [9] B. Keller, M. Draelos, K. Zhou, R. Qian, A. N. Kuo, G. Konidaris, K. Hauser, and J. A. Izatt, “Optical coherence tomography-guided robotic ophthalmic microsurgery via reinforcement learning from demonstration,” _IEEE Transactions on Robotics_ , vol. 36, no. 4, pp. 1207–1218, 2020.
* [10] T. Osa, K. Harada, N. Sugita, and M. Mitsuishi, “Trajectory planning under different initial conditions for surgical task automation by learning from demonstration,” in _2014 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2014, pp. 6507–6513.
* [11] J. W. Kim, C. He, M. Urias, P. Gehlbach, G. D. Hager, I. Iordachita, and M. Kobilarov, “Autonomously navigating a surgical tool inside the eye by learning from demonstration,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2020, pp. 7351–7357.
* [12] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne, “Imitation learning: A survey of learning methods,” _ACM Comput. Surv._ , vol. 50, no. 2, apr 2017. [Online]. Available: https://doi.org/10.1145/3054912
* [13] C. Chao, M. Cakmak, and A. L. Thomaz, “Towards grounding concepts for transfer in goal learning from demonstration,” in _2011 IEEE International Conference on Development and Learning (ICDL)_ , vol. 2. IEEE, 2011, pp. 1–6.
* [14] K. Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. J. Lim, “Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [15] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine, “One-shot visual imitation learning via meta-learning,” in _Conference on robot learning_. PMLR, 2017, pp. 357–368.
* [16] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, _et al._ , “Palm-e: An embodied multimodal language model,” _arXiv preprint arXiv:2303.03378_ , 2023.
* [17] Y. Cui, S. Niekum, A. Gupta, V. Kumar, and A. Rajeswaran, “Can foundation models perform zero-shot task specification for robot manipulation?” in _Learning for Dynamics and Control Conference_. PMLR, 2022, pp. 893–905.
* [18] D. Shah, A. Sridhar, N. Dashora, K. Stachowicz, K. Black, N. Hirose, and S. Levine, “Vint: A large-scale, multi-task visual navigation backbone with cross-robot generalization,” in _7th Annual Conference on Robot Learning_ , 2023.
* [19] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, _et al._ , “On the opportunities and risks of foundation models,” _arXiv preprint arXiv:2108.07258_ , 2021.
* [20] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, “Chatgpt for robotics: Design principles and model abilities,” _Microsoft Auton. Syst. Robot. Res_ , vol. 2, p. 20, 2023.
* [21] F. Stulp, E. Theodorou, M. Kalakrishnan, P. Pastor, L. Righetti, and S. Schaal, “Learning motion primitive goals for robust manipulation,” in _2011 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2011, pp. 325–331.
* [22] W. Ubellacker and A. D. Ames, “Robust locomotion on legged robots through planning on motion primitive graphs,” in _2023 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2023, pp. 12142–12148.
* [23] P. Akella, W. Ubellacker, and A. D. Ames, “Probabilistic guarantees for nonlinear safety-critical optimal control,” _arXiv preprint arXiv:2303.06258_ , 2023.
* [24] P. Akella, A. Dixit, M. Ahmadi, J. W. Burdick, and A. D. Ames, “Sample-based bounds for coherent risk measures: Applications to policy synthesis and verification,” _arXiv preprint arXiv:2204.09833_ , 2022.
* [25] P. Akella, M. Ahmadi, and A. D. Ames, “A scenario approach to risk-aware safety-critical system verification,” _arXiv preprint arXiv:2203.02595_ , 2022.
* [26] A. Donzé and O. Maler, “Robust satisfaction of temporal logic over real-valued signals,” in _International Conference on Formal Modeling and Analysis of Timed Systems_. Springer, 2010, pp. 92–106.
* [27] A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, “Control barrier function based quadratic programs for safety critical systems,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 8, pp. 3861–3876, 2016.
|
⋆ P. Godau and P. Kalinowski contributed equally to this paper.
1 Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany, email: <EMAIL_ADDRESS>
2 National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and university medical center Heidelberg
3 Faculty of Mathematics and Computer Science, Heidelberg University, Germany
4 HIDSS4Health - Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
5 Helmholtz Imaging, German Cancer Research Center (DKFZ), Germany
6 Instituto de Ciencias de la Computación, UBA-CONICET, Argentina
7 Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Germany
8 Medical Faculty, Heidelberg University, Germany
# Deployment of Image Analysis Algorithms
under Prevalence Shifts
Patrick Godau 1,2,3,4,⋆ Piotr Kalinowski 1,4,⋆ Evangelia Christodoulou 1 Annika Reinke 1,3,5 Minu Tizabi 1,5 Luciana Ferrer 6 Paul Jäger 5,7 Lena Maier-Hein 1,2,3,5,8
###### Abstract
Domain gaps are among the most relevant roadblocks in the clinical translation
of machine learning (ML)-based solutions for medical image analysis. While
current research focuses on new training paradigms and network architectures,
little attention is given to the specific effect of prevalence shifts on an
algorithm deployed in practice. Such discrepancies between class frequencies
in the data used for a method’s development/validation and that in its
deployment environment(s) are of great importance, for example in the context
of artificial intelligence (AI) democratization, as disease prevalences may
vary widely across time and location. Our contribution is twofold. First, we
empirically demonstrate the potentially severe consequences of missing
prevalence handling by analyzing (i) the extent of miscalibration, (ii) the
deviation of the decision threshold from the optimum, and (iii) the ability of
validation metrics to reflect neural network performance on the deployment
population as a function of the discrepancy between development and deployment
prevalence. Second, we propose a workflow for prevalence-aware image
classification that uses estimated deployment prevalences to adjust a trained
classifier to a new environment, without requiring additional annotated
deployment data. Comprehensive experiments based on a diverse set of 30
medical classification tasks showcase the benefit of the proposed workflow in
generating better classifier decisions and more reliable performance estimates
compared to current practice.
###### Keywords:
Prevalence shift Medical image classification Generalization Domain Gap.
## 1 Introduction
Figure 1: Summary of contributions. (a) Based on a dataset comprising 30
medical image classification tasks, we show that prevalence shifts between
development data and deployment data engender various problems. (b) Our
workflow for prevalence-aware medical image classification addresses all of
these issues.
Machine learning (ML) has begun revolutionizing many fields of imaging
research and practice. The field of medical image analysis, however, suffers
from a substantial translational gap that sees a large number of
methodological developments fail to reach (clinical) practice and thus fall
short of generating (patient) benefit. A major roadblock is posed by dataset
shifts, situations in which the distributions of data used for algorithm
development/validation and its deployment differ due to exogenous factors
such as dissimilar cohorts or differences in the acquisition process [8, 40].
In the following, we focus on prevalence shifts, which are highly relevant in
the context of global artificial intelligence (AI) [37]. Common causes for
prevalence shifts include sample selection bias and variations in
environmental factors like season or geography [8, 11, 40]. According to prior
work [11] as well as our own analyses, prevalence handling is especially
crucial in the following steps related to model deployment:
Model re-calibration: After a prevalence shift, models need to be re-
calibrated. This has important implications for the decisions made based on
predicted class scores (see next point). Note in this context that deep neural
networks tend not to be calibrated after training in the first place [15].
Decision rule: A decision rule is a strategy transforming continuous predicted
class scores into a single classification decision. Simply using the argmax
operator ignores the theoretical boundary conditions derived from Bayes
theory. Importantly, argmax relies on the predicted class scores to be
calibrated and is thus highly sensitive to prevalence shifts [13].
Furthermore, it only yields the optimal decision for specific metrics.
Analogously, tuned decision rules may not be invariant to prevalence shifts.
Performance assessment: Class frequencies observed in one test set are in
general not representative of those encountered in practice. This implies that
the scores for widely used prevalence-dependent metrics, such as Accuracy, F1
Score, and Matthews Correlation Coefficient (MCC), would substantially differ
when assessed under the prevalence shift towards clinical practice [27].
This importance, however, is not reflected in common image analysis practice.
Through a literature analysis, we found that out of a total of 53 research
works published between 01/2020 and the beginning of 03/2023 that used any of the
data included in our study, only one explicitly mentioned re-calibration.
Regarding the most frequently implemented decision rules, roughly three
quarters of publications did not report any strategy, which we strongly assume
to imply use of the default argmax operator. Moreover, both our analysis and
previous work show Accuracy and F1 Score to be among the most frequently used
metrics for assessing classification performance in comparative medical image
analysis [26, 27], indicating that severe performance deviations under
potential prevalence shifts are a widespread threat.
Striving to bridge the translational gap in AI-based medical imaging research
caused by prevalence shifts, our work provides two main contributions: First,
we demonstrate the potential consequences of ignoring prevalence shifts on a
diverse set of medical classification tasks. Second, we assemble a
comprehensive workflow for image classification, which is robust to prevalence
shifts. As a key advantage, our proposal requires only an estimate of the
expected prevalences rather than annotated deployment data and can be applied
to any given black box model.
Figure 2: Medical image classification tasks used in this study. The number of
samples (red) and classes (green) ranges from 1,200 to 121,583 and two to
eight, respectively. The imbalance ratio (blue) varies between 1 and 10.9.
## 2 Methods
### 2.1 Workflow for prevalence-aware image classification
Our workflow combines existing components of validation in a novel manner. As
illustrated in Fig. 1, it leverages estimated deployment prevalences to adjust
an already trained model to a new environment. We use the following
terminology.
Fundamentals: We define a dataset $D:=\\{(x_{i},y_{i})|1\leq i\leq N\\}$ by a
set of $N$ images $x_{i}\in X$ and labels $y_{i}\in Y$ with
$Y=\\{1,\ldots,C\\}$. $P_{D}$ is a $C$-dimensional vector called the
prevalences of $D$, where $P_{D}(k):=|\\{(x_{i},y_{i})\in D|y_{i}=k\\}|/N$ is
the prevalence of class $k\in Y$. The fraction
$\max_{k}\\{P_{D}(k)\\}/\min_{k}\\{P_{D}(k)\\}$ is named the imbalance ratio
(IR) of $D$.
Re-calibration: We refer to the output of a model
$\varphi:X\rightarrow\mathbb{R}^{C}$ before applying the softmax activation as
$\varphi(x)$. It can be re-calibrated by applying a transformation $f$. Taking
the softmax of $\varphi(x)$ (no re-calibration) or of $f(\varphi(x))$, we
obtain predicted class scores $s_{x}$. Probably the most popular re-
calibration approach is referred to as “temperature scaling” [15]; it requires
only a single parameter $t\in\mathbb{R}$ to be estimated:
$f_{\mathrm{temp}}(\varphi(x))=\varphi(x)/t$. The transformation parameter(s)
is/are learned by minimizing the cross-entropy loss.
Decision rule: A decision rule $d$ is a deterministic algorithm that maps
predicted class scores $s_{x}$ to a final prediction $d(s_{x})\in Y$. The most
widely used decision rule is the argmax operator, although various
alternatives exist [27].
To overcome problems caused by prevalence shifts, we propose the following
workflow (Fig. 1b):
Step 1: Estimate the deployment prevalences: The first step is to estimate the
prevalences in the deployment data $D_{dep}$, e.g., based on medical records,
epidemiological research, or a data-driven approach [23, 32]. The workflow
requires an underlying anticausal connection between image and label, i.e., the
label $y$ causes the image $x$ (e.g., the presence of a disease has a visual
effect) [8, 11]; this should be verified at this point.
Step 2: Perform prevalence-aware re-calibration: Given an external factor that
varies $P_{D}$ between the development calibration dataset $D_{cal}$ and the
deployment dataset $D_{dep}$, we can assume the likelihoods $P(x|y=k)$ to stay
identical for an anticausal problem, ignoring potential manifestation and
acquisition shifts during deployment [8]. Under mild assumptions [23, 41],
weight adaptation in the loss function optimally solves the prevalence shift
for a classifier. In the presence of prevalence shifts, we therefore argue for
adaptation of weights in the cross-entropy loss
$\sum_{i}-w(y_{i})\log(s_{i}(y_{i}))$ according to the expected prevalences;
more precisely, for class $k$ we use the weight
$w(k)=P_{D_{dep}}(k)/P_{D_{cal}}(k)$ during the learning of the transformation
parameters [11, 34, 41]. Furthermore, since temperature scaling’s single
parameter $t$ is incapable of correcting the shift produced by a mismatch in
prevalences, we add a bias term $b\in\mathbb{R}^{C}$ to be estimated alongside
$t$ as suggested by [2, 6, 29]. We refer to this re-calibration approach as
“affine scaling”: $f_{\mathrm{aff}}(\varphi(x))=\varphi(x)/t+b$.
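To illustrate Step 2, a minimal PyTorch sketch of the described re-calibration might look as follows; the function name, the optimizer choice, and the iteration count are our own illustrative choices rather than part of the cited methods:

```python
import torch
import torch.nn.functional as F

def fit_affine_scaling(logits, labels, p_dep, p_cal, steps=500, lr=0.01):
    """Learn f_aff(z) = z / t + b (Step 2) on held-out calibration data.

    logits: (N, C) tensor of raw model outputs phi(x) on D_cal
    labels: (N,) tensor of integer class labels
    p_dep:  (C,) tensor of estimated deployment prevalences P_{D_dep}
    p_cal:  (C,) tensor of calibration prevalences P_{D_cal}
    """
    t = torch.ones(1, requires_grad=True)
    b = torch.zeros(logits.shape[1], requires_grad=True)
    w = p_dep / p_cal  # w(k) = P_{D_dep}(k) / P_{D_cal}(k)
    opt = torch.optim.Adam([t, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # weighted cross-entropy, sum_i -w(y_i) log s_i(y_i)
        loss = F.cross_entropy(logits / t + b, labels, weight=w)
        loss.backward()
        opt.step()
    return t.detach(), b.detach()
```

The deployment-time predicted class scores are then obtained as the softmax of `logits / t + b`.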
Step 3: Configure validation metric with deployment prevalences: Prevalence-
dependent metrics, such as Accuracy, MCC, or the F1 Score, are widely used in
image analysis due to their many advantages [27]. However, they reflect a
model’s performance only with respect to the specific, currently given
prevalence. This problem can be overcome with the metric Expected Cost (EC)
[13]. In its most general form, we can express EC as
$\mathrm{EC}=\sum_{k}P_{D}(k)\sum_{j}c_{kj}R_{kj}$, where $c_{kj}$ refers to
the “costs” we assign to the decision of classifying a sample of class $k$ as
$j$ and $R_{kj}$ is the fraction of all samples with reference class $k$ that
have been predicted as $j$. Note that with the standard 0-1 costs ($c_{kk}=0$
for all $k$ and $c_{kj}=1$ for $k\neq j$), EC reduces to 1 minus Accuracy. To
use EC as a robust estimator of performance, we propose replacing the
prevalences $P_{D}(k)$ with those previously estimated in step 1 [13].
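A direct implementation of EC with estimated deployment prevalences might look as follows (a sketch under the notation above; `costs[k, j]` encodes $c_{kj}$, and the inputs are NumPy integer arrays):

```python
import numpy as np

def expected_cost(y_true, y_pred, p_dep, costs):
    """EC = sum_k P_dep(k) sum_j c_kj R_kj, where R_kj is the fraction of
    samples with reference class k that were predicted as j (Step 3)."""
    ec = 0.0
    for k in range(len(p_dep)):
        mask = y_true == k
        if not mask.any():
            continue
        for j in range(len(p_dep)):
            r_kj = np.mean(y_pred[mask] == j)  # fraction of class k predicted as j
            ec += p_dep[k] * costs[k, j] * r_kj
    return ec

# With 0-1 costs, EC reduces to 1 minus Accuracy under the prevalences p_dep:
costs_01 = 1.0 - np.eye(3)
```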
Step 4: Set prevalence-aware decision rule: Most counting metrics [27] require
some tuning of the decision rule during model development, as the argmax
operator is generally not the optimal option. This tuning relies on data from
the development phase and the resulting decision rule is likely dependent on
development prevalences and does not generalize (see Sec. 3). On the other
hand, EC, as long as the predicted class scores are calibrated, yields the
optimal decision rule $\mathrm{argmin}_{k}\sum_{j}c_{jk}s_{x}(j)$ [3, 16]. For
standard 0-1 costs, this simplifies to the argmax operator.
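The corresponding Bayes-optimal decision rule is equally compact; given calibrated scores it requires no tuning data (a sketch; for 0-1 costs it reduces to argmax):

```python
import numpy as np

def optimal_decision(scores, costs):
    """argmin_k sum_j c_jk s_x(j) for calibrated class scores (Step 4).

    scores: (N, C) calibrated predicted class scores s_x
    costs:  (C, C) matrix with costs[j, k] = c_jk
    """
    return (scores @ costs).argmin(axis=1)  # entry (n, k) = sum_j s_x(j) c_jk
```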
Step 5: External validation: The proposed steps for prevalence-aware image
classification have strong theoretical guarantees, but additional validation
on the actual data of the new environment is indispensable for monitoring
[33].
### 2.2 Experimental design
The purpose of our experiments was twofold: (1) to quantify the effect of
ignoring prevalence shifts when validating and deploying models and (2) to
show the value of the proposed workflow. The code for our experiments is
available at https://github.com/IMSY-DKFZ/prevalence-shifts.
#### 2.2.1 Medical image classification tasks
To gather a wide range of image classification tasks for our study, we
identified medical image analysis tasks that are publicly available and
provide at least 1000 samples. This resulted in 30 tasks covering the
modalities laparoscopy [22, 38], gastroscopy/colonoscopy [5, 30], magnetic
resonance imaging (MRI) [4, 9], X-ray [1, 18, 20, 31], fundus photography [24],
capsule endoscopy [35], and microscopy [14] (Fig. 2). We split each task as
follows: 30% of the data – referred to as “deployment test set” $D_{dep}$ –
was used as a hold-out split to sample subsets $D_{dep}(r)$ representing a
deployment scenario with IR r. The remaining data set made up the “development
data“, comprising the “development test set” $D_{test}$ (10%; class-balanced)
, the “training set” (50%) and the “validation set” (10%; also used for
calibration).
#### 2.2.2 Experiments
For all experiments, the same neural network models served as the basis. To
mimic a prevalence shift, we sub-sampled datasets $D_{dep}(r)$ from the
deployment test sets $D_{dep}$ according to IRs $r\in[1,10]$ with a step size
of $0.5$. The experiments were performed with the popular prevalence-dependent
metrics Accuracy, MCC, and F1 Score, as the well as EC with 0-1 costs. For our
empirical analyses, we trained neural networks (specifications: see Tab.
LABEL:tab:models Suppl.) for all 30 classification tasks introduced in Sec.
2.2.1. In the interest of better reproducibility and interpretability, we
focused on a homogeneous workflow (e.g., by fixing hyperparameters across
tasks) rather than aiming to achieve the best possible Accuracy for each
individual task. The following three experiments were performed. (1) To assess
the effects of prevalence shifts on model calibration, we measured
miscalibration on the deployment test set $D_{dep}(r)$ as a function of the
increasing IR r for five scenarios: no re-calibration, temperature scaling,
and affine scaling (the latter two with and without weight adaptation).
Furthermore, (2) to assess the effects of prevalence shifts on the decision
rule, for the 24 binary tasks, we computed – with and without re-calibration
and for varying IR r – the differences between the metric scores on
$D_{dep}(r)$ corresponding to an optimal decision rule and two other decision
rules: argmax and a cutoff that was tuned on $D_{test}$. Lastly, (3) to assess
the effects of prevalence shifts on the generalizability of validation
results, we measured the absolute difference between the metric scores
obtained on the development test data $D_{test}$ and those obtained on the
deployment test data $D_{dep}(r)$ with varying IR r. The scores were computed
for the argmax decision rule for both non-re-calibrated and re-calibrated
predicted class scores. To account for potential uncertainty in estimating
deployment prevalences, we repeated all experiments with slight perturbation
of the true prevalences. To this end, we drew the prevalence for each class
from a normal distribution with a mean equal to the real class prevalence and
fixed standard deviation (std). We then set a minimal score of 0.01 for each
class and normalized the resulting distribution.
## 3 Results
Figure 3: Effect of prevalence shifts on the calibration. The class-wise
calibration error (CWCE) generally increases with an increasing prevalence
shift from development (balanced) to deployment test set. Left: Mean (line)
and standard deviation (shaded area) obtained from n = 30 medical
classification tasks. Right: CWCE values for all tasks at imbalance ratio 10.
Figure 4: Effect of prevalence shifts on the decision rule. The difference
between the actual metric score and the optimal metric score (optimal decision
rule) is shown as a function of the imbalance ratio for non-re-calibrated
(left) and re-calibrated (right) models for two decision rule strategies:
argmax (top) and threshold optimization on the development test set (bottom).
Mean (lines) and standard deviation (transparent area) obtained from n=24
binary tasks.
Effects of prevalence shifts on model calibration In general, the calibration
error increases with an increasing discrepancy between the class prevalences
in the development and the deployment setting (Fig. 3). The results clearly
demonstrate that a simple accuracy-preserving temperature scaling-based method
is not sufficient under prevalence shifts. Only our proposed method, which
combines an affine transformation with a prevalence-driven weight adjustment,
consistently features good calibration performance. This also holds true when
perturbing the deployment prevalences, as demonstrated in Fig.
LABEL:fig:sup:calibration_perturb (Suppl.). For the inspected range (up to
r=10), miscalibration can be kept constantly close to 0. Note that CWCE is a
biased estimator of the canonical calibration error [27], which is why we
additionally report the Brier Score (BS) as an overall performance measure
(Fig. LABEL:fig:sup:calibration_metrics Suppl.).
Effects of prevalence shifts on the decision rule Fig. 4 supports our
proposal: An argmax-based decision informed by calibrated predicted class
scores (top right) and assessed with the Expected Cost (EC) metric (identical
to the blue Accuracy line in this case) yields optimal results irrespective of
prevalence shifts. In fact, this approach substantially increases the quality
of the decisions when compared to a baseline without re-calibration, as
indicated by an average relative decrease of EC by 25%. This holds true in a
similar fashion for perturbed versions of the re-calibration (Fig.
LABEL:fig:sup:threshold Suppl.). The results further show that argmax is not
the best decision rule for F1 Score and MCC (Fig. 4 top). Importantly,
decision rules optimized on a development dataset do not generalize to unseen
data under prevalence shifts (Fig. 4 bottom).
Effects of prevalence shifts on the generalizability of validation results As
shown in Fig. 5, large deviations from the metric score obtained on the
development test data of up to 0.41/0.18 (Accuracy), 0.35/0.46 (F1 Score), and
0.27/0.32 (MCC), can be observed for the re-calibrated/non-re-calibrated case.
In contrast, the proposed variation of Expected Cost (EC) enables a reliable
estimation of performance irrespective of prevalence shifts, even when the
prevalences are not known exactly (Fig. LABEL:fig:sup:assessment Suppl.). The
same holds naturally true for the prevalence-independent metrics Balanced
Accuracy (BA) and Area under the Receiver Operating Curve (AUROC) (Fig.
LABEL:fig:sup:assessment Suppl.).
Figure 5: Effect of prevalence shifts on the generalizability of validation
results. The absolute difference of the metric score computed on the
deployment data to that computed on the development test set is shown as a
function of the imbalance ratio (IR) for non-re-calibrated (top) and re-
calibrated (bottom) models. The dot- and boxplots show the results for all
n=30 tasks at a fixed IR of 10.
## 4 Discussion
Important findings, some of which are experimental confirmations of theory,
are:
1. Prevalence shifts lead to miscalibration. A weight-adjusted affine re-calibration based on estimated deployment prevalences compensates for this effect.
2. Argmax should not be used indiscriminately as a decision rule. For the metric EC and specializations thereof (e.g., Accuracy), optimal decision rules may be derived from theory, provided that the predicted class scores are calibrated (a minimal binary sketch follows this list). This derived rule may coincide with argmax, but for other common metrics (F1 Score, MCC) argmax does not lead to optimal results.
3. An optimal decision rule, tuned on a development dataset, does not generalize to datasets with different prevalences. Prevalence-aware setting of the decision rule requires data-driven adjustment or the selection of a metric with a Bayes-theory-driven optimal decision rule.
4. Common prevalence-dependent metrics, such as MCC and F1 Score, do not give robust estimations of performance under prevalence shifts. EC, with adjusted prevalences, can be used in these scenarios.
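To make item 2 concrete, the following is a minimal sketch of the Bayes-optimal binary decision rule under Expected Cost, assuming calibrated class scores; the cost values are illustrative and not taken from our experiments:

```python
import numpy as np

def ec_optimal_decision(p1, cost_fp=1.0, cost_fn=1.0):
    """Bayes-optimal binary decision minimizing Expected Cost.

    p1      : calibrated probability of class 1 per sample.
    cost_fp : cost of predicting 1 when the true class is 0.
    cost_fn : cost of predicting 0 when the true class is 1.
    Predict 1 whenever p1 * cost_fn > (1 - p1) * cost_fp, i.e. above the
    threshold cost_fp / (cost_fp + cost_fn); with equal costs this reduces
    to argmax (threshold 0.5), matching Accuracy.
    """
    threshold = cost_fp / (cost_fp + cost_fn)
    return (np.asarray(p1) > threshold).astype(int)

# Asymmetric costs lower the threshold to 0.2, so all three samples map to 1.
print(ec_optimal_decision([0.3, 0.6, 0.9], cost_fp=1.0, cost_fn=4.0))
```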
These findings have been confirmed by repeated experiments using multiple
random seeds for dataset splitting and model training. Overall, we present
strong evidence that the so far uncommon metric EC offers key advantages over
established metrics. Due to its strong theoretical foundation and configuration
flexibility, it should, from our perspective, evolve into a default metric
in image classification. Note in this context that while our study clearly
demonstrates the advantages of prevalence-independent metrics, prevalence-
dependent metrics can be much better suited to reflect the clinical interest
[27].
In conclusion, our results clearly demonstrate that ignoring potential
prevalence shifts may lead to suboptimal decisions and poor performance
assessment. In contrast to prior work [25], our proposed workflow solely
requires an estimation of the deployment prevalences – and no actual
deployment data or model modification. It is thus ideally suited for
widespread adoption as a common practice in prevalence-aware image
classification.
#### 4.0.1 Acknowledgements
This project has been funded by (i) the German Federal Ministry of Health
under the reference number 2520DAT0P1 as part of the pAItient (Protected
Artificial Intelligence Innovation Environment for Patient Oriented Digital
Health Solutions for developing, testing and evidence based evaluation of
clinical value) project, (ii) HELMHOLTZ IMAGING, a platform of the Helmholtz
Information & Data Science Incubator and (iii) the Helmholtz Association under
the joint research school “HIDSS4Health – Helmholtz Information and Data
Science School for Health” and (iv) state funds approved by the State Parliament
of Baden-Württemberg for the Innovation Campus Health + Life Science Alliance
Heidelberg Mannheim.
## References
* [1] Covid19 x-ray classification dataset on kaggle. https://www.kaggle.com/ahemateja19bec1025/covid-xray-dataset, accessed: 2022-01-13
* [2] Alexandari, A.M., et al.: Maximum likelihood with bias-corrected calibration is hard-to-beat at label shift adaptation. In: International Conference on Machine Learning (2020)
* [3] Bishop, C.M.: Pattern recognition and machine learning (information science and statistics) (2006)
* [4] Bohaju, J.: Brain tumor (2020). https://doi.org/10.34740/KAGGLE/DSV/1370629
* [5] Borgli, H., et al.: Hyper-kvasir: A comprehensive multi-class image and video dataset for gastrointestinal endoscopy (Dec 2019). https://doi.org/10.31219/osf.io/mkzcq
* [6] Brummer, N., et al.: On calibration of language recognition scores. 2006 IEEE Odyssey - The Speaker and Language Recognition Workshop pp. 1–8 (2006)
* [7] Buslaev, A., et al.: Albumentations: Fast and flexible image augmentations. Information 11(2) (2020). https://doi.org/10.3390/info11020125
* [8] de Castro, D.C., et al.: Causality matters in medical imaging. Nature Communications 11 (2019)
* [9] Cheng, J.: brain tumor dataset (Apr 2017). https://doi.org/10.6084/m9.figshare.1512427.v5
* [10] Deng, J., et al.: Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition (2009)
* [11] Dockes, J., et al.: Preventing dataset shift from breaking machine-learning biomarkers. GigaScience 10 (2021)
* [12] Falcon, W., et al.: PyTorch Lightning (Mar 2019). https://doi.org/10.5281/zenodo.3828935, https://github.com/Lightning-AI/lightning
* [13] Ferrer, L.: Analysis and comparison of classification metrics. ArXiv abs/2209.05355 (2022)
* [14] Ghamsarian, N., et al.: Relevance-based compression of cataract surgery videos using convolutional neural networks (Oct 2020). https://doi.org/10.1145/3394171.3413658
* [15] Guo, C., et al.: On calibration of modern neural networks. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. p. 1321–1330. ICML’17, JMLR.org (2017)
* [16] Hastie, T.J., et al.: The elements of statistical learning (2001)
* [17] He, K., et al.: Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
* [18] Irvin, J., et al.: Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison (2019)
* [19] Johnson, J.M., et al.: Survey on deep learning with class imbalance. Journal of Big Data 6, 1–54 (2019)
* [20] Kermany, D.S., et al.: Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5), 1122–1131.e9 (2018). https://doi.org/10.1016/j.cell.2018.02.010
* [21] Kingma, D.P., et al.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2015)
* [22] Leibetseder, A., et al.: Lapgyn4: a dataset for 4 automatic content analysis problems in the domain of laparoscopic gynecology. In: Proceedings of the 9th ACM Multimedia Systems Conference, MMSys 2018, Amsterdam, The Netherlands, June 12-15, 2018. pp. 357–362. ACM (2018). https://doi.org/10.1145/3204949.3208127
* [23] Lipton, Z.C., et al.: Detecting and correcting for label shift with black box predictors. In: Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018. Proceedings of Machine Learning Research, vol. 80, pp. 3128–3136. PMLR (2018), http://proceedings.mlr.press/v80/lipton18a.html
* [24] Liu, R., et al.: Deepdrid: Diabetic retinopathy—grading and image quality estimation challenge. Patterns p. 100512 (2022). https://doi.org/10.1016/j.patter.2022.100512
* [25] Ma, W., et al.: Test-time adaptation with calibration of medical image classification nets for label distribution shift. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part III. p. 313–323. Springer-Verlag, Berlin, Heidelberg (2022), https://doi.org/10.1007/978-3-031-16437-8_30
* [26] Maier-Hein, L., et al.: Why rankings of biomedical image analysis competitions should be interpreted with care. Nature Communications 9 (2018)
* [27] Maier-Hein, L., et al.: Metrics reloaded: Pitfalls and recommendations for image analysis validation. ArXiv abs/2206.01653 (2022)
* [28] Paszke, A., et al.: PyTorch: An Imperative Style, High-Performance Deep Learning Library. In: Advances in Neural Information Processing Systems 32. pp. 8024–8035. Curran Associates, Inc. (2019)
* [29] Platt, J.: Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods (1999)
* [30] Pogorelov, K., et al.: Nerthus: A bowel preparation quality video dataset. In: Proceedings of the 8th ACM on Multimedia Systems Conference. pp. 170–174. MMSys’17, ACM, New York, NY, USA (2017). https://doi.org/10.1145/3083187.3083216
* [31] Rajpurkar, P., et al.: MURA dataset: Towards radiologist-level abnormality detection in musculoskeletal radiographs. In: Medical Imaging with Deep Learning (2018), https://openreview.net/forum?id=r1Q98pjiG
* [32] Saerens, M., et al.: Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation 14, 21–41 (2002)
* [33] Saria, S., et al.: Tutorial: Safe and reliable machine learning. ArXiv abs/1904.07204 (2019)
* [34] Shimodaira, H.: Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90, 227–244 (2000)
* [35] Smedsrud, P.H., et al.: Kvasir-Capsule, a video capsule endoscopy dataset. Scientific Data 8(1), 142 (2021). https://doi.org/10.1038/s41597-021-00920-z
* [36] Smith, L.N.: Cyclical learning rates for training neural networks. 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) pp. 464–472 (2015)
* [37] Subbaswamy, A., et al.: From development to deployment: dataset shift, causality, and shift-stable models in health ai. Biostatistics (2019)
* [38] Twinanda, A.P., et al.: Endonet: A deep architecture for recognition tasks on laparoscopic videos. IEEE Transactions on Medical Imaging 36, 86–97 (2017)
* [39] Wightman, R.: Pytorch image models. https://github.com/rwightman/pytorch-image-models (2019). https://doi.org/10.5281/zenodo.4414861
* [40] Zhang, A., et al.: Shifting machine learning for healthcare from development to deployment and from models to data. Nature Biomedical Engineering 6, 1330 – 1345 (2022)
* [41] Zhang, K., et al.: Domain adaptation under target and conditional shift. In: International Conference on Machine Learning (2013)
# Morphological Study of Granular-Granular Impact Craters through Time-of-
Flight Cameras: from Concept to Automation in Python
F. Corrales-Machín <EMAIL_ADDRESS>, G. Viera-López, R. Bartali, Y. Nahmad-Molinari
Universidad Autónoma de San Luis Potosí, Instituto de Física, Av. Parque Chapultepec 1570, San Luis Potosí 78295, México; Gran Sasso Science Institute, Viale Francesco Crispi, 7, L’Aquila 67100, Italy; Universidad Autónoma de San Luis Potosí, Facultad de Ciencias, Av. Parque Chapultepec 1570, San Luis Potosí, 78295, México
###### Abstract
Laboratory-made granular-granular impact craters have been used as model
analogues of planetary impact craters. These kinds of craters have been
observed and studied using profilometry techniques that make it possible to
retrieve important morphological features of the impacted surface. In this
work, we propose to use a Time-of-Flight camera (Microsoft Kinect One) for the
acquisition of depth data. We show comparisons between the typically used
technique and the analysis derived from the Time-of-Flight data. We also
release _craterslab_, a Python library developed to automate most of the
tasks involved in studying impact craters produced by granular projectiles
impacting the surface of granular targets. The library is able to acquire,
identify, and measure morphological features of impacted surfaces through the
reconstruction of 3D topographic maps. Our results show that using a
Time-of-Flight camera and automating the data processing with a software
library for the systematic study of impact craters can produce very accurate
results while reducing the time spent on the different stages of the process.
###### keywords:
Depth sensor, Kinect, Crater morphology, Python library
Introduces a robust technique based on ToF sensors for studying the morphology
of experimental, man-made craters.
Compares the results obtained with ToF sensors against the established
technique.
Proposes a software library for automating morphological and morphometric
measurements of impact craters.
## 1 Introduction
Determining unknown distances to objects or their spatial dimensions by
measuring the angles they form from known points of observation is an ancient
technique known as triangulation, which is still used in modern instruments.
In the sixth century BC, Thales of Miletus measured the height of pyramids by
comparing the ratio of their shadow length to height with his own shadow
length to height ratio at the same time, using the Thales theorem of
corresponding proportions. Shortly after, Eratosthenes measured the radius of
the Earth, while Aristarchus of Samos calculated the sizes of the Sun and
Moon, as well as their distances from the Earth in Earth radii, based on the
same geometric principles. This led to the development of a heliocentric model
of our solar system, utilizing a simple yet powerful set of geometric
theorems, thousands of years ago (James, 1953).
After Galileo invented the astronomical telescope in $1609$ and discovered
craters on the Moon’s surface, various hypotheses were proposed regarding the
origin of these geological structures, including coralline reefs, volcanic
activity, and the idea of an impact origin proposed by Hooke, which was
initially rejected. It
was not until the twentieth century that the impact origin theory was revived
by Grove Gilbert for explaining lunar craters, and found to align well with
Laplace’s protoplanetary cloud theory of solar system formation (Gilbert,
1979).
The importance of meteorite impacts for the evolution of Earth and of life on
it was recognized in $1980$, when the Chicxulub crater in the Yucatán peninsula
was identified as the scar of a colossal impact that caused the mass extinction
event at the Cretaceous - Paleogene (K-Pg) boundary $65$ Ma ago (Alvarez et
al., 1980, 1995). Initially, projected shadow length was used to determine the
depth of craters and the height of their rims in early studies of lunar
geophysical features (Chappelow, 2013). Subsequently, satellite radar
altimetry using real-time of flight techniques (Davis, 1992) was employed to
explore topographic features and create elevation maps. Eventually, phase-
change Time-of-Flight techniques, such as LiDAR, were introduced for
atmospheric, terrestrial, and planetary science prospecting.
Currently, there exists a well-established understanding of the processes
involved in impact crater formation, which has been derived from geophysical
exploration of terrestrial impact craters, computer simulations, and
hypervelocity experiments. These processes can be categorized into three main
stages: contact and compression, excavation of a transient crater, and
modification through avalanching and deposition of debris (Melosh, 1989;
Osinski and Pierazzo, 2013). However, due to their rarity and the immense
energy involved, impacts that form planetary craters are difficult to observe
directly. Consequently, it is challenging to gather experimental or
observational evidence with which to directly compare and validate the
theoretical understanding of impact crater formation.
Again, using proportionality laws or scaled systems, the Scottish geologist
and geographer Henry Cadell played a pivotal role in advancing the field of
analog model studies through sandbox experiments. His work focused on
investigating the formation of thrust and fold systems in the Scottish
Highlands. Subsequently, scaled analogue modeling has become a commonly
employed technique for studying the geometric, kinematic, and dynamic
evolution of various geological structures. This powerful tool allows for a
comprehensive understanding of the geometric and kinematic development of
extensional, inverted fault systems, as well as strike-slip fault systems.
The remarkable resemblance between the scaled models and the natural
geological examples described in the literature highlights the effectiveness
of this method in accurately replicating real-world geological structures
(McClay, 1996). However, this technique has only very recently been
incorporated for understanding impact craters as geologic processes (Bartali
et al., 2015) by considering equal adimensional numbers (e.g., Reynolds and
comminution numbers), regardless of the fact that man-made laboratory craters
and observed planetary craters are produced by events differing by six or more
orders of magnitude (González-Gutiérrez et al., 2014; Pacheco-Vázquez and
Ruiz-Suárez, 2011).
In order to investigate the influence of impact collision energy on the final
shape of craters, various techniques, such as laser profilometry or direct
measurements, have been employed for the characterization of morphological
features of craters (De Vet and de Bruyn, 2007). These
techniques provide valuable insights into the characteristics and behavior of
impact craters, aiding in the understanding of the relationship between the
energy involved in the event and the resulting crater morphology and
sedimentologic features.
In 2010, Microsoft released a structured light-based range detection camera,
Kinect, which provides depth images (RGB-D) along with an RGB color camera.
Although the Kinect sensor was originally intended for natural user
interaction in body-based video games, the release of its source code by
Microsoft has led to the development of numerous applications in robotics (El-
laithy et al., 2012), 3D reconstruction (Keller et al., 2013; Newcombe et al.,
2011; Nießner et al., 2013), medicine (Mousavi Hondori and Khademi, 2014),
augmented reality and interaction (Vera et al., 2011), geophysics (Rincón et
al., 2022; Tortini et al., 2014), among others.
In 2013, Microsoft announced an update to Kinect based on the Time-of-Flight
(ToF) principle. This new version includes additional improvements compared to
its predecessor.
In the study of craters, the Kinect system has been employed to automatically
measure grain size distribution across a range from pebbles to blocks in
outcrops within the Joya Honda crater in Mexico (Chávez et al., 2014).
However, the increasing utilization and affordability of LiDAR and Time-of-
Flight instruments for rapid surface topography measurement have prompted us
to develop a versatile methodology specifically designed for acquiring and
processing topographic data in the study of impact crater formation.
As part of this work, we release a Python library we developed to automate our
methodology and determine the morphological characteristics of excavated
craters in laboratory settings. We expect that both our library and our
approach of using Time-of-Flight cameras may enable novel studies on granular-
granular impact craters serving as model analogues for observed planetary
craters.
## 2 Mapping of surfaces
Three-dimensional measurement and reconstruction of surfaces is a significant
topic in various fields of research, with diverse applications such as range
scanning (Zhang and Yau, 2009), industrial inspection of manufactured parts
(Graebling et al., 2002), reverse engineering (digitization of complex, free-
form surfaces) (Lu and Wang, 2015; Carbone et al., 2001), object recognition
and 3D mapping (Stein et al., 1992; Rogers et al., 2011). Currently, several
techniques are implemented for these measurements, benefiting from significant
technological advancements that enable high resolutions and software with
multiple domain-specific features. However, access to such software often
comes at a high financial cost.
In the context of mapping granular-type impact craters, the scientific
community primarily relies on profilometry as the preferred technique for
obtaining morphological characteristics. However, the idea of implementing
depth measurement techniques based on range sensors, such as LiDAR, in this
field of research is highly appealing. In this section, we will explain the
operating principle of both techniques and their general limitations, with a
deeper focus on their application to the study of crater morphology.
### 2.1 Profilometry-based Methods
With the current technological advances in acquiring three-dimensional surface
maps, different profilometry techniques have been refined to obtain more
reliable results in shorter time (Van der Jeught and Dirckx, 2016; Salvi et
al., 2010; Su and Chen, 2001). Despite these advancements, most of these
techniques are challenging to implement and have limitations such as complex
image analysis.
As mentioned earlier, laser profilometry is commonly used to obtain
morphological characteristics of craters. This method is based on the
principle of triangulation, where a laser projects a beam of light onto the
surface of interest, and a sensor records the position and angle of the
reflected beam. With this information, the distance between the sensor and the
surface can be calculated, allowing for the reconstruction of a three-
dimensional profile.
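As a hedged illustration of this triangulation principle (the geometry and pixel scale below are assumed values, not those of a specific instrument), the local depth follows from the lateral shift of the projected laser line:

```python
import numpy as np

def depth_from_line_shift(pixel_shift, mm_per_pixel, laser_angle_deg=45.0):
    """Triangulation depth from the lateral shift of a projected laser line.

    For a camera viewing perpendicular to the reference plane and a laser
    inclined at `laser_angle_deg` from the vertical, a lateral image shift
    dx (in physical units) maps to a depth change dz = dx / tan(angle).
    """
    dx = np.asarray(pixel_shift) * mm_per_pixel   # shift in mm
    return dx / np.tan(np.radians(laser_angle_deg))

# A 12-pixel shift at 0.5 mm/pixel with a 45-degree laser -> 6 mm of depth.
print(depth_from_line_shift(12, mm_per_pixel=0.5))
```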
In addition to laser profilometry, another technique used for measuring depths
on crater surfaces is structured light profilometry (Geng, 2011). In this
method, a pattern of structured light, such as stripes or lines, is projected
onto the surface, and an image of the illuminated surface is captured.
Analyzing the deformations of the light pattern in the image allows for the
calculation of local depths of the surface. Structured light profilometry is
based on the principle of interferometry, where variations in the surface
shape cause changes in the phase and intensity of the reflected light. These
changes are captured by a camera and processed to obtain a depth map of the
crater’s surface.
While laser profilometry and structured light profilometry are widely used
techniques for obtaining data for the morphological characterization of
granular impact surfaces, they also have certain limitations that are
important to consider. The following are some of these limitations:
* Resolution. Both laser and structured light profilometry are limited by factors such as sensor-to-surface distance, pixel size, and laser precision, which can impede capturing fine surface details, especially in areas with small features.
* Transparent or translucent surfaces. Light may pass through or be absorbed instead of being reflected, resulting in inconsistent measurements.
* Reflective surfaces. Intense reflections can interfere with measurements and generate inaccurate data.
* Shadows and obstructed areas. These can hinder data capture by causing variations in reflected light intensity or by blocking the projection of the light pattern.
* Noise and artifacts. Measurements are susceptible to errors or distortions arising from fluctuations in light intensity, environmental interference, or device calibration issues.
* Acquisition time. This can be a limitation, particularly when high resolution is required or large areas must be sampled efficiently, impacting situations that demand fast response times.
In summary, laser profilometry and structured light profilometry are valuable
techniques for measuring depths and obtaining three-dimensional surface
information. While they have seen improvements in recent years, they still
have limitations in terms of implementation complexity and specific challenges
related to image analysis. These techniques, nevertheless, offer valuable
insights into the morphology of granular impact craters and contribute to the
understanding of physical phenomena.
### 2.2 Methods based on LiDAR Sensors
In the last decade, new affordable range detection devices have been
developed. Light Detection and Ranging (LiDAR), which emerged in the 1960s with
the advent of lasers, has been a pioneer in this field, enabling multiple
applications (Dong and Chen, 2017; Pittman et al., 2013). LiDAR technology is
based on the Time-of-Flight principle. It measures the time it takes for light
emitted by a device to travel to the surface of an object and return to the
sensor of the unit. The precision of this time measurement is determined by
the switching speed of the sensor’s microelectronics. Time-of-Flight
cameras employ a continuous-wave intensity-modulation approach, where the
surface of interest is illuminated with near-infrared, intensity-modulated
periodic light. Considering the finite speed of light ($c$) and the distance
$d$ between the camera and the surface (assuming the sensor and illumination
are in the same location), the optical signal experiences a temporal shift,
which corresponds to a phase shift $\phi[d]$ in the periodic signal. The
phase shift is calculated from the charge accumulated in the sensor due to the
reflected light when the synchronous shutter turns off the light sampling.
Transforming the phase shift into the sensor-object distance yields
$d=\frac{c\,\phi}{4\pi f_{m}}$, where $f_{m}$ is the modulation frequency of
the illumination. It is important to note that intermittent illumination of
the scene at high modulation frequencies and rapid switching speeds are
crucial for achieving high depth resolution.
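A minimal sketch of this phase-to-distance conversion; the 80 MHz modulation frequency is an assumption chosen for illustration, not the exact Kinect value:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz=80e6):
    """Distance from the measured phase shift of a CW-ToF camera.

    Light travelling to the surface and back accumulates a phase shift
    phi = 4*pi*f*d/c in the modulated signal, so d = c*phi / (4*pi*f).
    The range is unambiguous only for phi < 2*pi, i.e. d < c / (2*f).
    """
    return C * np.asarray(phase_shift_rad) / (4.0 * np.pi * mod_freq_hz)

# A phase shift of pi at 80 MHz corresponds to roughly 0.94 m.
print(tof_distance(np.pi))
```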
Among the various LiDAR devices based on the Time-of-Flight principle, the
second generation of Microsoft Kinect (KinectToF) stands out. It offers
several improvements over its predecessor, which utilizes the structured-light
(SL) method for depth information acquisition. In the first generation of
Kinect, the structured light method involves projecting a sequence of known
patterns onto an object, which deform based on the object’s shape. The
deformed patterns are then captured by a camera, and by analyzing the
distortion using triangulation, depth information is derived.
Both SL and ToF principles for range detection are susceptible to various
sources of error. Several studies have compared these methods and explored
different calibration techniques for the Kinect camera. These studies include
(Sarbolandi et al., 2015; Wasenmüller and Stricker, 2017; Pagliari and Pinto,
2015; Yang et al., 2015; Lachat et al., 2015; Essmaeel et al., 2012; Zhang,
2000). Considering the benefits and limitations of the two different Kinect
principles of operation, it has been determined that the second generation,
utilizing ToF technology, is superior (Kadambi et al., 2014). To our
knowledge, ToF sensors have not yet been used for the study of morphological
signatures of experimental impact craters in the laboratory.
## 3 Materials and Methods
An experimental system was designed to recreate the formation of impact
craters by using, for the first time, a KinectToF sensor for the data
acquisition. Considering that laser profilometry is the typically used
technique for this purpose, we added it to the experimental setup in order to
validate the results obtained by our approach.
We constructed a square-based sandbox with dimensions of $45$ cm per side and
$15$ cm in height as the surface or granular bed in which the crater forms
after the impact of a sand lump projectile. Sand grains with a diameter of
$d\leq 1.0$ mm were deposited inside the box as the granular medium. The
granular bed is loose packed or compacted in order to observe how the
morphologies of the craters vary for different impact energies.
Impacts were carried out by releasing a granular projectile from heights
ranging from $0.1$ m to $20$ m. The granular projectiles were composed of
$250$ g of the same granular material as the impact surface, mixed with
$50$ ml of water and $5.0$ g of hydraulic Portland cement as an adhesive. The
mixture was compacted into a spherical mold and left to dry at room
temperature, forming weakly consolidated granular spheres with a diameter of
$7.0$ cm and packing fractions ranging from $\eta=0.40$ to $\eta=0.62$.
For retrieving depth maps from the granular surface, we attached a Microsoft
Kinect One to a mobile system placed over the sandbox, allowing the sensor to
move along one horizontal axis (the $Y$ axis) above the free sand surface
during experiments. Depth data were acquired with the sensor stationary at a
height of $102.7$ cm, perpendicular to the impact surface. Two depth maps are
acquired using
Kinect, the first one containing all projectile fragments that may be present
on the surface, and for the second one, the interfering projectile fragments
are removed to facilitate the morphological analysis of the impacted surface.
Once the depth data are acquired using the Kinect sensor, custom software is
used to process them and retrieve valuable information about the surface.
The laser profilometry technique is performed as well in order to compare and
discuss the results of both methods. It is conducted without a sensor for
automated data acquisition. Instead, a laser beam is used to project five
lines onto the granular bed at a $45$-degree angle. Scanning is performed at
different points on the surface, and images are captured for each position.
Subsequently, these images are processed using _ImageJ_ software (Bartali et
al., 2013), employing the principle of triangulation to obtain depth and
diameter measurements of the crater under study. The procedure of using laser
profilometry to obtain morphological characteristics is well-known and
established in the field.
Next, we will address some definitions related to morphological observables
from impact craters.
## 4 Main Crater Observables
Craters can be classified into two groups: simple and complex craters.
Examples from both types can be inspected in Figure 1(a) and Figure 1(b)
respectively.
Simple craters are bowl-shaped depressions with raised rims and approximately
parabolic interior profiles. They present a straightforward structure with a
circular or elliptical rim; for a large sample of simple craters on the Moon,
rim-to-floor depths are roughly $1:5$ of the rim-to-rim diameter.
Complex craters possess a variety of features and a more complicated structure
than simple craters. They often exhibit a central structure in their interior
(central peak or dome) which may protrude above the crater rim. Images from
Lunar Reconnaissance Orbiter Camera (LROC) shows craters with single or
multiple central peaks, concentric rims and flat inner floors. The depths of
complex craters increase with increasing diameter, but they increase much more
slowly than the depths of simple craters (Melosh and Ivanov, 1999).
Figure 1: Crater classification. (a) Simple crater Steinheil. (b) Complex crater Tycho. Images taken from (Lunar QuickMap, 2023d, b).
In both cases, we will establish the original ground surface as the zero
reference for heights and depths (Osinski and Pierazzo, 2013). From this
reference point, we will consider the positive $Z$ axis as an increase in
height above the surface, and the negative axis as a decrease in surface
level.
For both morphologies depicted in Figure 2 there may be deposits of granular
material in the interior of the crater, which are remnants of the impacting
granular projectile. Therefore, the maximum observable depth $d_{max}$ may be
smaller than the actual crater depth $d_{t}$. Both measurements of depth are
typically below the original ground surface.
Figure 2: Definition of some crater observables. (a) For simple craters. (b) For complex craters.
The height of the central peak $H_{cp}$ can be defined as the difference
between its maximum depth value $d_{cp}$ and the maximum observable depth
$d_{max}$.
From Figure 1 we can notice that craters generally have an elliptical
geometry, where a circular approximation of their surface is a special case of
an ellipse. To determine the type of geometric approximation that best fits
their surface, Equation (1) can be fitted to samples from the distribution of
maximum height values around the crater rim $h_{rim}$.
Once this ellipse is obtained, the values of $a$ and $b$ are fixed, which
represent the major and minor radii of the ellipse, respectively. In the case
of a circular fit, these values will be equal. Additionally, $x_{c}$ and
$y_{c}$ represent the $x$ and $y$ coordinates of the center of the ellipse.
Finally, Equation (1) fitted over the rim provides the variables $\theta$ and
$\varepsilon=\dfrac{\sqrt{a^{2}-b^{2}}}{a}$, the rotation angle and the
eccentricity of the ellipse, which quantify how circular or elliptical the
crater outline is as a function of its diameters.
$\frac{[(x-x_{c})\cos\theta+(y-y_{c})\sin\theta]^{2}}{a^{2}}+\frac{[(x-x_{c})\sin\theta-(y-y_{c})\cos\theta]^{2}}{b^{2}}=1\qquad(1)$
The diameter of both simple and complex craters is defined as the major axis
of the ellipse that best fits the rim, i.e., $D=2a$. Having an elliptical
approximation for a crater simplifies the computation of several observables.
For example, by transforming Equation (1) into an inequality, it is possible
to quickly determine whether an arbitrary coordinate $(x,y)$ corresponds to
the interior of the crater or not (a minimal sketch of this test is given
below). This can be used to speed up some costly computations, such as finding
the maximum observable depth $d_{max}$ or computing the crater concavity’s
volume $V_{in}$.
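A minimal sketch of the interior test implied by Equation (1), together with the diameter and eccentricity derived from the fitted axes (all numeric values below are illustrative):

```python
import numpy as np

def inside_ellipse(x, y, xc, yc, a, b, theta):
    """True where (x, y) satisfies Equation (1) as an inequality."""
    u = (x - xc) * np.cos(theta) + (y - yc) * np.sin(theta)
    v = (x - xc) * np.sin(theta) - (y - yc) * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

# Diameter and eccentricity follow directly from the fitted axes (toy values).
a, b = 120.0, 95.0
print("D =", 2 * a, " eccentricity =", np.sqrt(a**2 - b**2) / a)
print(inside_ellipse(10.0, -5.0, xc=0.0, yc=0.0, a=a, b=b, theta=0.3))
```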
The concavity’s volume $V_{in}$ is the volume contained inside the crater
limited to the average value of $h_{rim}$. This volume is equivalent to the
amount of water that can be contained within the crater’s concavity if it
could be filled up to the average rim’s height without being spilled out,
considering the rim’s height to be uniform all around the fitting ellipse.
The excavated volume $V_{ex}$ is the volume of the hollow under the surface
reference ground level within the crater rims. This excavated volume only
accounts for the amount of material of the target that has been removed or
compressed, but not substituted by the projectile material. As the impact
energy increases, the excavated volume becomes larger, and less material from
the projectile remains within the crater.
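Given these definitions, both volumes follow from direct numerical integration of the depth map. The following is a minimal sketch under our sign convention (heights relative to the original ground level); array and parameter names are our own, not craterslab's:

```python
import numpy as np

def crater_volumes(z, mask, h_rim_mean, cell_area):
    """Integrate V_in and V_ex from a depth map z (0 = original ground level).

    mask       : boolean array, True inside the fitted rim ellipse.
    h_rim_mean : mean rim height above the ground reference.
    cell_area  : physical area of one pixel (e.g. in mm^2).
    """
    zi = z[mask]
    v_in = np.sum(np.clip(h_rim_mean - zi, 0.0, None)) * cell_area  # water fill
    v_ex = np.sum(np.clip(-zi, 0.0, None)) * cell_area              # hollow below z = 0
    return v_in, v_ex

# Toy bowl-shaped crater, 20 mm deep, on a 1 mm^2-per-pixel grid.
y, x = np.mgrid[-50:50, -50:50]
r2 = x**2 + y**2
z = np.where(r2 < 40**2, -20.0 * (1 - r2 / 40**2), 0.0)
print(crater_volumes(z, r2 < 45**2, h_rim_mean=2.0, cell_area=1.0))
```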
For complex craters with uplifted central structures, such as central peaks or
domes, the crater’s depression forms an annular trough. The lowest points of
this annular depression delineate a ring-shaped valley, marking the beginning
of the uplifted central structure. The volume of the central peak ($V_{cp}$)
corresponds to the space enclosed within the inner ring-shaped valley and is
determined based on the average depth of this valley.
On the other hand, we define the deposit excess volume $V_{exc}$ as the volume
found above the ground surface. This variable is related to the amount of
material ejected and expanded (Reynolds dilatancy (Andreotti et al., 2013)) or
displaced by the shock wave during the impact and includes all material
deposited on the surface, including that of the crater rim. It is important to
note that in the case of complex craters, central peaks may protrude above the
zero reference of the ground surface, and that fraction of the central peak
volume $V_{cp}$ is included in the calculation of $V_{exc}$.
Now, let’s delve into a particular case of morphologies observed for impacts
with very low energies, where no penetration occurs in the original and
compacted ground surface (See Figure 3).
Figure 3: Impacts at very low energies with no penetration of the original ground surface. (a) View perpendicular to the original ground surface. (b) Oblique view.
These particular morphologies do not meet the definitions of craters, since no
depression is formed, and will be considered as sand mounds formed by the
remnants of the projectile on the impact surface. These mounds may or may not
approximate central peaks. For these cases, only the variables of deposit
excess volume $V_{exc}$ and maximum mound height $H_{m}$ for $z>0$ values are
taken into account.
## 5 Craterslab Software
The study of planetary geology and impact craters encouraged the development
of various software tools that aid in the analysis of planetary craters. These
software tools provide valuable insights into the formation and evolution of
celestial bodies, helping us to better understand the history and structure of
our solar system.
Some of the most recent software tools for analyzing planetary craters include
craterstats (Michael, Greg and Annex, Andrew and Aye, Michael, ), CSFD Tools
(Riedel et al., 2018), mvtk (McDonald and Montgomery, 2015) and PyNAPLE
(Sheward et al., 2022). These software offer a range of features, from 3D
visualization and topographic mapping to data analysis and modeling tools.
They are widely used by planetary scientists and researchers to study the
morphology and history of craters on various celestial bodies.
However, most crater-related software has been crafted with a strong focus on
planetary craters. While man-made craters have been shown to be useful models
for studying the rare events occurring during impact crater formation,
specific software tools are required to help process data from
these experiments. To address this need, we have developed _craterslab_ , a
software library that is able to automate data acquisition from Time-of-Flight
sensors and process the data to retrieve the main crater morphologic features.
The library is open source and packaged for distribution via pypi (Viera-
López, Gustavo and Corrales-Machín, Frank, ).
The software is designed to simplify the data acquisition and analysis
process, allowing researchers to focus on the interpretation of results rather
than spending time on data processing. It offers a range of features,
including automatic data acquisition, real-time data processing, and the
ability to visualize and analyze data in a variety of ways. Sample plots
produced with _craterslab_ can be seen in Figure 4.
Figure 4: Visualization of the impacted surfaces using _craterslab_. (a) 2D
view of the impact surface in the $X,Y$ plane. The fitted ellipse is observed
over the distribution of maximum height values around the crater rim. The
diameter, which coincides with the major axis of the fitted ellipse, is also
represented. (b) 3D visualization of the cavity volume, which can be
interpreted as the amount of water that can be contained within the crater.
This provides a visual interpretation of the numerical value of $V_{in}$.
The workflow of the software can be summarized as: (1) fetching surface
mapping data directly from sensors or locally stored files; (2) classifying
the surface based on the observed formation: simple crater, complex crater or
sand mound; (3) computing the shape of the crater by fitting an ellipse to the
crater rim; (4) determining morphological crater features; and (5) visualizing
the results (a deliberately simplified sketch of step (2) is given below).
However, the software is built with flexibility, allowing for independent
usage of some of its functionalities.
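As a hedged, toy illustration of step (2) alone, the classifier below labels a depth map from two simple cues; the thresholds and function name are ours and do not reproduce craterslab's actual rules or API (see the repository for those):

```python
import numpy as np

def classify_formation(z):
    """Toy version of workflow step (2): label a depth map (0 = ground level).

    Heuristics (illustrative only): no depression below ground -> sand mound;
    a central structure rising well above the floor -> complex crater;
    otherwise -> simple crater. Assumes the formation is roughly centered.
    """
    if z.min() >= 0:
        return "sand mound"
    cy, cx = np.array(z.shape) // 2
    central_rise = z[cy, cx] - z.min()   # central peak height above the floor
    return "complex crater" if central_rise > 0.25 * abs(z.min()) else "simple crater"

# A parabolic bowl 20 mm deep classifies as a simple crater.
y, x = np.mgrid[-50:50, -50:50]
r2 = x**2 + y**2
bowl = np.where(r2 < 40**2, -20.0 * (1 - r2 / 40**2), 0.0)
print(classify_formation(bowl))
```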
The different crater observables computed by the software, described in
Section 4, allow for various analyses of experimental crater morphology.
Variables such as diameter and depth can be more accurately correlated with
each other. Others, like cavity volume, can now be determined precisely with
numerical integration rather than geometric approximations. For example, it is
now possible to calculate the volume of the cavity in the craters represented
in Figure 5(d) and Figure 5(e).
The software is also able to compute the interior slopes of the craters, which
makes it possible to determine the incoming direction of the projectile in oblique
impacts; the excavated and excess volume, which are related to the amount of
material deposited inside the crater, compression and expansion of the
terrain, and the ejecta deposited outside the crater after impact.
In the following section, we will illustrate the usage of the software by
processing the data obtained following the procedure described in Section 3.
## 6 Results and Discussion
In order to validate both the methodology for studying crater morphologies
through a ToF sensor and our software library for automating the process, we
conducted a set of experiments at different launching heights on a compacted
or loose packed sand bed as described in Section 3, producing a wide range of
impact crater types and sand mounds. Figure 5 shows the outcomes of three
different types of morphologies produced experimentally.
Figure 5(a) shows the resulting data gathered and visualized using our
software for the case of a simple crater; similarly, Figure 5(b) and Figure
5(c) show the data for a complex crater and a sand mound, respectively.
For all three cases, we included an image of the surface taken after the
impact. Those images can be seen in Figures 5(d), 5(e) and 5(f) respectively.
When comparing the images in the first and second rows of Figure 5, the
remarkable similarities between the experimental craters and their three-
dimensional visualizations by _craterslab_ are evident.
The images in the third row (Figures 5(g), 5(h), 5(i)) depict natural craters
on the Moon, Mercury, and Mars, respectively (Lunar QuickMap, 2023c; Mercury
QuickMap, 2023; Mars QuickMap, 2023). They were included to highlight the
similarities found between our experimental craters and those in our solar
system. The insets of these images represent the cross-sectional profiles
obtained from the platforms provided by Applied Coherent Technology (ACT)
Corporation. Upon comparing the profiles of the images in the second row,
obtained by _craterslab_, with those in the third row of Figure 5, the
similarity between granular analog craters and natural craters is striking.
In order to expand the evaluation, we proceeded to use our software for
analyzing the depth map of the King crater, a well-known lunar crater. The
results are presented in Figure 6, where a three-dimensional visualization of
the crater surface with the fitted ellipse on the crater rims (Figure 6(b)) is
displayed. In addition to reproducing natural craters in three dimensions and
enabling visual analysis, _craterslab_ is also capable of extracting the main
observables that allow for the analysis of their morphological
characteristics.
For the King crater (Figure 6(a)) (Lunar QuickMap, 2023a), our software
provides results that can be directly compared with those from the LROC
platform, such as cross-sectional profile and interior slopes. However,
_craterslab_ can obtain and analyze additional observables from natural
craters, for example the cavity volume ($V_{in}$).
Figure 5: Three-dimensional visualization of the morphological surface of
granular impact craters using _craterslab_ with Kinect depth data.
Experimental and natural crater images are shown for comparison. (a) A simple
crater obtained at a height of $7.0$ m above a loose packed granular bed with
$V_{in}=442966.88$ mm3. (b) A complex crater obtained at a height of $9$ m
above a loose packed granular bed with $V_{in}=550837.34$ mm3. The morphology
inside the crater changes compared to (a) due to a slight variation in
potential energy. (c) Sand mound formed by the remnants of the projectile on
the compacted impact surface at a height of $2$ m with $V_{exc}=191267.39$
mm3. The experimental images (d), (e), and (f) correspond to the reconstructed
three-dimensional models and serve as a visual comparison, showcasing the
similarities and differences between the experimental craters and their
reconstructions using ToF sensors. The insets correspond to the cross-
sectional profile obtained by _craterslab_. Similarly, images (g), (h), and
(i) display natural craters alongside the insets of their cross-sectional
profiles. (g) Simple crater Bernoullie C on the Moon, inset extracted from the
LROC platform using LOLA (Lunar Orbiter Laser Altimeter). (h) Complex crater
Debussy on Mercury, inset obtained from the USGS DEM (United States Geological
Survey Digital Elevation Model). (i) Mound formation on Mars without
nomenclature, coordinates: Latitude: $-45.47963$, Longitude: $55.10807$, inset
extracted from the MOLA DEM (Mars Orbiter Laser Altimeter Digital Elevation
Model).
Additionally, a comparison is shown between the profile view of the crater
along the ellipse’s major axis obtained by _craterslab_ and the profile view
from LROC (Figure 6(c)). The profile for the King crater obtained by the software
is similar to the one obtained by the LROC tool. The slight differences can be
attributed to the manual selection of the profile with the LROC tool, which
does not allow for obtaining the largest profile from the crater
automatically.
Once the 3D data from the experimental or natural craters is obtained, the
software can compute the main observables automatically, eliminating the need
for manual calculations or laborious image analysis procedures. This
automation not only saves time but also ensures a more reliable and consistent
analysis, leading to a deeper understanding of the crater morphology and how
it correlates with the associated launching parameters.
Figure 6: Depth map analysis of the King crater using _craterslab_. (a) King
Crater, with a diameter of $77$ km and a depth of $5$ km, is one of the
youngest craters on the far side of the Moon and serves as an excellent
example of a Copernican-aged complex impact crater. (b) Three-dimensional
projection of King Crater. The volume of its cavity is $V_{in}=5392.65$ km3.
(c) Comparison of cross-sectional profiles of King Crater obtained from the
LROC platform using LOLA (Lunar Orbiter Laser Altimeter) and _craterslab_.
Next, we obtain and characterize the morphological variations of the impact
surface using our library and a KinectToF as the depth sensor. All craters
produced by the collision events were characterized by both techniques:
profilometry and software-aided depth map analysis. The morphological
characteristics of craters were measured manually from upper view pictures for
profilometry, but both manually and automatically from the depth maps provided
by the Kinect sensor, for comparison purposes.
For the Kinect measurements in the $X,Y$ plane, we first determined the
crater diameters manually, mimicking the processing conducted with
profilometry, and then we automatically fitted a rotated ellipse using our
software in order to compare both methods for measuring the diameter of the
crater. Results are shown in Figure 7(b) and Figure 7(c), respectively. It is
evident that both methods are equivalent for diameter measurements, at least
for the eccentricity values of these normal-incidence impact craters.
Figure 7: Diameter vs. potential energy for impacts on a loose packed granular
bed. The insets display logarithmic-scale plots accompanied by linear fits.
All linear fits exhibit a slope of $0.23$. This preliminary result is
close to the exponent found in the relationship $D\propto E^{1/4}$ for natural
craters in our solar system. (a) Diameters obtained using the profilometry
method. (b) Diameters estimated manually using Kinect data. (c) Diameters
computed automatically using _craterslab_.
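The exponent of such a power law $D\propto E^{k}$ is obtained from a linear fit in log-log space; the following minimal sketch uses synthetic data (the prefactor and noise level are assumptions, not our measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.logspace(0, 2, 15)                            # potential energy (arb. units)
D = 3.0 * E**0.23 * rng.lognormal(0, 0.03, E.size)   # noisy synthetic diameters

# Fitting log D = k * log E + log c recovers the exponent k (~0.23 here).
k, log_c = np.polyfit(np.log(E), np.log(D), 1)
print(f"fitted exponent: {k:.2f}")
```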
Comparing the results from the profilometry technique (see Figure 7(a)) with
the manual processing of the depth map (see Figure 7(b)), a standard deviation
of $0.028$ mm is obtained for distance values. This indicates that, under our
working conditions, the resolution of the Kinect camera is equivalent to the
profilometry method in the $X,Y$ plane.
Consequently, the differences in diameter size obtained from the software,
manual estimation from the depth map, and profilometry are nearly
indistinguishable, as depicted in Figure 7.
In morphological characterizations involving the $Z$ axis, both techniques
are equivalent for obtaining depth data but profilometry exhibits a higher
margin of error compared to KinectToF. The increased errors in profilometry
occur within the interior of the fitted ellipse. This is attributed to the
granular nature of the surface and the lighting conditions on the impact
surface, which cause the thickness of the laser lines to increase within the
crater. This introduces greater uncertainty in the measurement of depth
values, as depicted in Figure 8. The average error in the thickness of the
laser lines within the crater is $5.09$ mm.
In contrast, Kinect depth data exhibits an offset of $\pm 1$ mm in our
measurements. This offset represents the correction applied to align the depth
measurements with the true surface positions, compensating for any systematic
errors introduced by the sensor or experimental setup.
As a result, Kinect provides higher precision in the three-dimensional
reconstruction of granular impact surfaces.
## 7 Conclusions
We propose a methodology for studying impact craters based on Time-of-Flight
sensors. We validate our approach by comparing it with the established
technique which relies on profilometry. Surface topographic data are gathered
using a KinectToF camera for different impacting energies and compaction of
the target terrain, producing a variety of crater shapes whose main
morphological features are recognized and automatically measured, including
shape, depth, local slope, excess and excavated volumes, as well as central
peak volume and height. KinectToF exhibits high precision for the task,
outperforming laser profilometry.
A software for automating the process is released as part of this work. It is
designed for data acquisition and analysis of granular impact craters and
natural craters. Time spent on both acquisition and analysis of data is
considerably minimized when compared with previous methods due to the usage of
this software.
LiDAR sensors, specifically the second generation of Microsoft Kinect, when
combined with our software, are able to provide very accurate results on
crater morphology. As a consequence, quantities previously obtained through
geometric approximations, such as the cavity volume, can now be calculated
numerically with greater precision. The automatic computation of other observables
performed by the software, such as excess volume, excavated volume and inner
slope of the crater, may be pivotal for advancing various research topics in
the field, for instance, determining the projectile’s penetration angle in craters
formed by oblique impacts and analyzing the amount of material remaining as a
deposit or ejected matter after impacts.
Figure 8: Lines of the laser light beam inside the crater depending on the lighting conditions. (a) Thickness $6.96$ mm. (b) Thickness $3.36$ mm.
Code availability section
Name of the library: craterslab
Contact<EMAIL_ADDRESS>
Hardware requirements: Any system compatible with Windows, Linux or MacOS
Program language: Python 3.10+
Software required: Python
Program size: Scripts size – 34.2 kB, pretrained model – 26.3 MB
The source code is available for downloading at the link:
https://github.com/gvieralopez/craterslab
Data availability
The data used for this work is available for downloading at the link:
https://github.com/gvieralopez/craters-data
## References
* Alvarez et al. (1980) Alvarez, L.W., Alvarez, W., Asaro, F., Michel, H.V., 1980. Extraterrestrial cause for the cretaceous-tertiary extinction. Science 208, 1095–1108.
* Alvarez et al. (1995) Alvarez, W., Claeys, P., Kieffer, S.W., 1995. Emplacement of cretaceous-tertiary boundary shocked quartz from chicxulub crater. Science 269, 930–935.
* Andreotti et al. (2013) Andreotti, B., Forterre, Y., Pouliquen, O., 2013. Granular media: between fluid and solid. Cambridge University Press.
* Bartali et al. (2015) Bartali, R., Nahmad-Molinari, Y., Rodríguez-Liñán, G.M., 2015. Low speed granular–granular impact crater opening mechanism in 2d experiments. Earth, Moon, and Planets 116, 115–138.
* Bartali et al. (2013) Bartali, R., Rodríguez-Liñán, G.M., Nahmad-Molinari, Y., Sarocchi, D., Ruiz-Suárez, J.C., 2013. Role of the granular nature of meteoritic projectiles in impact crater morphogenesis. arXiv:1302.0259.
* Carbone et al. (2001) Carbone, V., Carocci, M., Savio, E., Sansoni, G., De Chiffre, L., 2001. Combination of a vision system and a coordinate measuring machine for the reverse engineering of freeform surfaces. The International Journal of Advanced Manufacturing Technology 17, 263--271.
* Chappelow (2013) Chappelow, J., 2013. Simple impact crater shape determination from shadows. Meteoritics & Planetary Science 48, 1863--1872.
* Chávez et al. (2014) Chávez, G.M., Sarocchi, D., Santana, E.A., Borselli, L., Rodríguez-Sedano, L., 2014. Using kinect to analyze pebble to block-sized clasts in sedimentology. Computers & Geosciences 72, 18--32.
* Davis (1992) Davis, C.H., 1992. Satellite radar altimetry. IEEE transactions on microwave theory and techniques 40, 1070--1076.
* De Vet and de Bruyn (2007) De Vet, S.J., de Bruyn, J.R., 2007. Shape of impact craters in granular media. Physical Review E 76, 041306.
* Dong and Chen (2017) Dong, P., Chen, Q., 2017. LiDAR remote sensing and applications. CRC Press.
* El-laithy et al. (2012) El-laithy, R.A., Huang, J., Yeh, M., 2012. Study on the use of microsoft kinect for robotics applications, in: Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, IEEE. pp. 1280--1288.
* Essmaeel et al. (2012) Essmaeel, K., Gallo, L., Damiani, E., De Pietro, G., Dipanda, A., 2012. Temporal denoising of kinect depth data, in: 2012 Eighth International Conference on Signal Image Technology and Internet Based Systems, IEEE. pp. 47--52.
* Geng (2011) Geng, J., 2011. Structured-light 3d surface imaging: a tutorial. Advances in Optics and Photonics 3, 128--160.
General Covariance from the Viewpoint of Stacks
Filip Dul (Department of Mathematics, Rutgers – New Brunswick, Piscataway NJ 08854, United States of America. ORCID: 0000-0001-8623-0293. Author email address<EMAIL_ADDRESS>)
###### Abstract
General covariance is a crucial notion in the study of field theories on
curved spacetimes. A field theory defined with respect to a semi-Riemannian
metric is generally covariant if two metrics on the underlying manifold $X$ which are related by a
diffeomorphism produce equivalent physics. From a purely mathematical
perspective, this suggests that we try to understand the quotient stack of
metrics modulo diffeomorphism: we will use the language of groupoids to do
this concretely. Then we will inspect the tangent complex of this stack at a
fixed metric, which when shifted up by one defines a differential graded Lie
algebra. By considering the action of this Lie algebra on the observables for
a Batalin-Vilkovisky scalar field theory, we recover a novel expression of the
stress-energy tensor for that example, while describing how this works for a
general class of theories. We will describe how this construction nicely
encapsulates but also broadens the usual presentation in the physics
literature and discuss applications of the formalism.
Key words: Stacks, Formal Derived Geometry, Curved Spacetimes,
Batalin-Vilkovisky Formalism, Gravitation, Conserved Quantities.
###### Contents
1. 1 Introduction
1. 1.1 Overview
2. 1.2 Future Directions
2. 2 Bundles of Batalin-Vilkovisky field theories
1. 2.1 Introduction to the BV Formalism
2. 2.2 Equivariant Vector Bundles
3. 2.3 General Covariance
1. 2.3.1 An important remark on functoriality
4. 2.4 Groupoids and Stacks
3. 3 Formal Stacks
1. 3.1 Tangent Complexes
2. 3.2 Chevalley-Eilenberg Cochains as Rings of Functions
3. 3.3 Vector Bundles over a Formal Stack
4. 4 Field Theories as Bundles over Formal Stacks
1. 4.1 Equivariant Observables
2. 4.2 The Stress-Energy Tensor
5. 5 Appendix
1. 5.1 A detailed example
2. 5.2 A remark on higher orders
3. 5.3 Acknowledgements
4. 5.4 Statements and Declarations
## 1 Introduction
### 1.1 Overview
Over a hundred years ago, when Albert Einstein and a group of others were
laying the foundations of general relativity, general covariance became an
essential ingredient in formulating physics in curved spacetimes. Roughly, a
field theory coupled to a background metric on a spacetime $X$ is said to be
generally covariant if the diffeomorphism group of $X$ is a symmetry of the
theory. Physicists usually interpret diffeomorphisms as coordinate changes, so
they may say that a theory exhibits general covariance if it is coordinate-invariant: a theory may look superficially different after a change of coordinates, but if it is generally covariant, then those “two” theories are equivalent in a way we will make rigorous. Although general
covariance can be understood in the context of all field theories, it is often
considered in the context of field theories coupled to semi-Riemannian
metrics: this is the particular case we will focus on in this paper, and we will moreover argue that it is of central importance.
The primary aim of this paper is to package general covariance in the Batalin-
Vilkovisky formalism for classical field theories, especially as it is
presented in [8]. Equivariant vector bundles are an appropriate geometric tool
for understanding families of field theories parameterized by a space of
Riemannian or Lorentzian metrics. We therefore review the global theory of
such bundles, explain its relevance for general covariance in Section 2.3, and
then cast much of it in the language of stacks in Section 2.4. Stacks provide
a generalized notion of space which allows us to deal with quotient spaces
that may be singular or otherwise forget interesting information about the
original space. We will describe how a bundle of generally covariant classical
Batalin-Vilkovisky (BV) field theories over the space of metrics on $X$,
denoted $\mathscr{M}$, descends to a bundle over the quotient stack
$[\mathscr{M}/\mathscr{D}]$ of the metrics modulo the diffeomorphism group of
$X$, denoted $\mathscr{D}$. Our definition for general covariance is
equivalent to the following:
###### Definition 1.1.
A bundle $\pi:(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ of Batalin-Vilkovisky
field theories on a compact manifold $X$ is generally covariant if it descends
to a bundle of stacks
$\pi:([\mathscr{F/D}],\\{S,-\\})\to[\mathscr{M/D}].$
Before and after introducing this definition, we discuss in detail how both
scalar field theory and Yang-Mills theory are generally covariant in our sense
as BV field theories. Indeed, one of our central results is Theorem 2.70, in which we show that Yang-Mills theory is generally covariant; this result serves as a nexus for views on covariance from other sources in the literature, as we discuss in the surrounding commentary. We make remarks on functorial aspects of our work (mostly in Subsection 2.3.1) which are important in their own right and also serve as points of comparison to the prevailing literature: both to the factorization algebra framework (as seen in [7] and [8]) in which the author was trained, and to the AQFT framework (as seen in [19]) which the author desires to better understand.
Most of our concrete computations are in the regime of perturbative field
theories, so we consider formal neighborhoods in the quotient stack
$[\mathscr{M}/\mathscr{D}]$, which in turn lead us to understanding the field
theories as examples in derived deformation theory: in brief, we associate to
a generally covariant theory a formal moduli problem, as defined in [15], by
pulling back the above bundle of stacks over the formal neighborhood of a
metric $[g]\in[\mathscr{M/D}]$. We then compute the function ring for this
pullback over a formal stack and show that it gives us a ring of equivariant
classical observables, as defined in [8]: Proposition 4.3 is thus one of the
primary results of this paper, in that it explicitly links the stacky geometry
presented earlier with the usual factorization algebra framework of Noether’s
Theorem presented in Costello and Gwilliam’s books. This perspective is a
beautiful fusion of Emmy Noether’s foundational work in both homological
algebra and symmetries in physics: homological algebra allows us to put
external symmetries (perturbations) and internal symmetries (isometries) on
equal footing, so that we can state a more fully encompassing form of
Noether’s Theorem.
In Remark 4.9 and Section 4.2, we consider the conservation of the stress-
energy tensor $T_{\mu\nu}$, the conserved quantity associated to general
covariance via Noether’s Theorem, in derived deformation theoretic terms:
essentially, $T_{\mu\nu}$ tells us how the aforementioned formal stack acts on
the field theories it is coupled to, in the language of $L_{\infty}$ algebras.
In Theorem 4.24, we expound on the above by computing a perturbative
equivalence of observables when the theory is deformed by a vector field, and
make a few remarks on how this might be relevant at higher orders in
perturbation theory in Appendix 5.2. One of the objectives of Section 4.2 is
to provide a potential pathway for physicists to link their tools with ours.
### 1.2 Future Directions
Much of this paper serves as a set-up for a few distinct projects. My primary
motivation looking forward is the subject of anomalies in perturbative quantum
field theory. Anomalies arise when the quantization of families of field
theories over a parameter space with some classical symmetry does not
necessarily respect that symmetry. In [17], Rabinovich computes the BV
quantization of families of free fermions parameterized by gauge fields and by
this process recovers the axial anomaly. The anomaly is then explicitly
quantified cohomologically by viewing the background gauge fields
perturbatively (much in the way we consider metrics modulo diffeomorphisms
perturbatively in Section 4): computations from BV quantization allow
Rabinovich to equate the anomaly with the index of the original Dirac
operator, as per the Atiyah-Singer Families Index Theorem.
Much of the work in the current paper is motivated by the desire to reproduce
similar computations to those in [17] when replacing connections modulo gauge
by metrics modulo diffeomorphism. Before diving into quantization, we must
first understand both the global and perturbative nature of the stack
$[\mathscr{M/D}]$ of metrics modulo diffeomorphism as it parameterizes
classical theories. In the case of free fermions parameterized by
$[\mathscr{M/D}]$, we hope to reproduce a version of results stated in [18]:
there, Rabinovich connects his work in [17] to defining a determinant line
bundle $\mathrm{Det}(D)$ (à la Quillen) over some parameter space $B$ via BV
quantization. In our case, we would let $B=[\mathscr{M/D}]$ and then the
anomaly would constitute the first Chern class of the determinant line bundle
over this.
Another goal is to understand how Wald’s results on viewing black hole entropy
as Noether charge (as in [24]) might be feasible within the BV framework. The
study and physical interpretation of black hole thermodynamics is as popular
now as it has ever been; however, it remains elusive in many regards. Wald’s
work connecting black hole entropy to Noether’s Theorem may well serve as a
point of connection to our work: in particular, the BV version of Noether’s Theorem is developed in detail and applied in the second half of [8], and explicit computations in the case of metrics modulo diffeomorphism (a central object in Wald’s paper) are provided in this article.
that Wald focuses on structural and algebraic aspects, so that porting it all
over into the BV framework might be somewhat natural.
## 2 Bundles of Batalin-Vilkovisky field theories
We will begin by introducing the Batalin-Vilkovisky (BV) formalism: the
purpose of the following narrative is to show how classical field theory is
very naturally expressed in this formalism, especially in the context of
diffeomorphism equivariance. The basic ingredients required from the outset
are a space of fields, which define the kinematics of a physical model, and an
action functional, which fixes the dynamics of that model. The fields on a
space (or spacetime) $X$ are sections of some bundle $F\to X$ (usually a
vector bundle), denoted $\mathscr{F}:=\Gamma(X,F)$. The action functional is a
function $S:\mathscr{F}\to\mathbf{R}$ whose critical locus $\mathrm{Crit}(S)$ (computed via variational calculus, as described for example in Appendix E of [23]) is the set of $\phi\in\mathscr{F}$ that satisfy the Euler-Lagrange equations associated to $S$ via functional differentiation. To begin a discussion of the BV formalism in earnest, we must first make precise the notion of a functional.
###### Remark 2.1.
So far, our bundle $F\to X$ is not graded: as we unfold what it means to
define a BV theory, $F$ will be replaced by a differential graded bundle, but
the notation will not change.
### 2.1 Introduction to the BV Formalism
The space of fields is usually infinite dimensional, which means we cannot
take the usual algebraic symmetric powers of $\mathscr{F}$ to define its space of functions. Thus, we have the following definitions which play an
identical role, but for the infinite dimensional case. Much of what follows is
from Chapter 5, Section 3 of [6] and Chapter 3, Section 5 and Appendix B of
[7].
###### Definition 2.2.
(Defined in Section 3.5.7 of [7]) The algebra of functionals on $\mathscr{F}$
is
$\mathscr{O}(\mathscr{F}):=\prod_{k\geq 0}\mathrm{Hom}(\mathscr{F}^{\otimes
k},\mathbf{R})_{S_{k}}.$
We may sometimes denote this ring as $\mathrm{Sym}(\mathscr{F}^{\vee})$.
###### Remark 2.3.
To be fully precise, if $X$ is compact, $\mathscr{F}=\Gamma(X,F)$ is a nuclear
Fréchet space, where $\otimes$ denotes the completed projective tensor
product, so that
$\mathscr{F}^{\otimes k}:=\Gamma(X\times\cdots\times
X,F\boxtimes\cdots\boxtimes F),$
meaning each $\mathrm{Hom}(\mathscr{F}^{\otimes k},\mathbf{R})_{S_{k}}$ is a
space of continuous multilinear functionals endowed with the strong dual
topology: i.e. a space of distributions. The literature mentioned above
defines all of this for a slightly broader class of spaces than Fréchet
spaces, but Fréchet spaces are enough for us. In particular, we have the following fact,
from page 1 of [22]:
###### Example 2.4.
Let $X$ be a smooth, compact, finite dimensional manifold, and let $F\to X$ be
a vector bundle with space of sections $\Gamma(X,F)=:\mathscr{F}$. Choose
Riemannian metrics and connections on $TX$ and $F$, let $\nabla^{i}\phi$
denote the $i^{th}$ covariant derivative of $\phi\in\mathscr{F}$, and set
$||\phi||_{n}:=\sum_{i=0}^{n}\sup_{x\in X}|\nabla^{i}\phi(x)|.$
By means of the topology defined by the sequence of norms $\\{||-||_{n}\\}$,
$\mathscr{F}$ is a Fréchet space. Clearly, we can define differential graded
Fréchet spaces as well, as will soon be relevant.
###### Definition 2.5.
The space of local functionals, denoted
$\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$, is the linear subspace of
$\mathscr{O}(\mathscr{F})$ spanned by elements of the form
$F_{k}(\phi)=\int_{X}(D_{1}\phi)(D_{2}\phi)\ldots(D_{k}\phi)\textrm{vol},$
for fields $\phi\in\mathscr{F}$ and differential operators $D_{i}$ on $X$.
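For instance (our illustration): the mass term $F(\varphi)=\int_{X}\varphi^{2}\,\mathrm{vol}$ is local, with $k=2$ and $D_{1}=D_{2}=\mathrm{id}$, whereas a product of integrals such as $G(\varphi)=\big(\int_{X}\varphi\,\mathrm{vol}\big)^{2}$ lies in $\mathscr{O}(\mathscr{F})$ but not in $\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$, since it is not the integral of a single density built pointwise from $\varphi$ and finitely many of its derivatives.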
###### Lemma 2.6.
([6], Ch. 5, Lemma 6.6.1) There is an isomorphism of cochain complexes
$\mathscr{O}_{\mathrm{loc}}(\mathscr{F})\cong\mathrm{Dens}_{X}\otimes_{D_{X}}\mathscr{O}_{\mathrm{red}}(\mathscr{J}(F)),$
where $\mathscr{J}(F)$ denotes sections of the $\infty$-jet bundle
$\mathrm{Jet}(F)\to X$, and $\mathscr{O}_{\mathrm{red}}(\mathscr{J}(F))$ is
the quotient of
$\mathscr{O}(\mathscr{J}(F))=\mathrm{Sym}(\mathscr{J}(F)^{\vee})$ by the
constant polynomial functions.
###### Remark 2.7.
Elements of $\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$ are exactly those of
the preceding form, and integration defines a natural inclusion:
$\iota:\mathscr{O}_{\mathrm{loc}}(\mathscr{F})\to\mathscr{O}_{\mathrm{red}}(\mathscr{F}).$
This lemma shows that $\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$ is the space
of Lagrangian densities modulo total derivatives: this is desirable because
adding a total derivative to a Lagrangian density does not affect the dynamics
described in the equations of motion. Local functionals are also more
manageable in terms of functional analysis; for example, the action functional
$S$ is always an element of $\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$, and
local functionals are key in defining the Poisson bracket, as we will see
below.
###### Definition 2.8.
For $F\to X$ a graded vector bundle, a constant coefficient $k$-shifted
symplectic structure is an isomorphism
$F\cong_{\omega}F^{!}[k]:=(\mathrm{Dens}_{X}\otimes F^{\vee})[k]$
of graded vector spaces that is graded antisymmetric.
###### Remark 2.9.
It stands to reason that a symplectic structure on a space defines a Poisson
bracket on its space of functions: this is indeed the case for
$\mathscr{O}_{\mathrm{loc}}(\mathscr{F})\subset\mathscr{O}(\mathscr{F})$. This
is not the case, however, for all of $\mathscr{O}(\mathscr{F})$, for functional analytic reasons which are outside the scope of this paper (details can be found in Chapter 4 of [8]). We will denote the Poisson (anti-)bracket induced by $\omega$ as $\\{-,-\\}$.
###### Definition 2.10.
A Batalin-Vilkovisky classical field theory $(\mathscr{F},\omega,S)$ on a
smooth manifold $X$ is a differential $\mathbf{Z}$-graded vector bundle $F\to
X$ equipped with a $-1$-shifted symplectic structure $\omega$ and an action
functional $S\in\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$ such that:
(1) $S$ satisfies the classical master equation (CME): $\\{S,S\\}=0$.
(2) $S$ is at least quadratic, so that it can be written uniquely as
$S(\varphi)=\omega(\varphi,Q\varphi)+I(\varphi)$, where $Q$ is a linear
differential operator and $I\in\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$ is at
least cubic.
A free theory is one in which $I=0$: i.e. the action functional $S$ is purely
quadratic.
###### Remark 2.11.
Although $\\{-,-\\}$ is not a Poisson bracket on $\mathscr{O}(\mathscr{F})$,
bracketing with a local functional like
$S\in\mathscr{O}_{\mathrm{loc}}(\mathscr{F})$ defines a derivation
$\\{S,-\\}:\mathscr{O}(\mathscr{F})\to\mathscr{O}(\mathscr{F})[1]$
regardless of whether or not the BV theory is free. For a free theory, it can
be shown that $\\{S,-\\}=Q$ on $\mathscr{O}(\mathscr{F})$, where the
differential $Q$ on $\mathscr{F}$ is extended to $\mathscr{O}(\mathscr{F})$ as
a derivation. For an interacting theory, $\\{S,-\\}$ is prescribed by an
$L_{\infty}$ algebra structure on $\mathscr{F}$, which we will describe in
Definition 2.17 and provide examples of thereafter. The ellipticity or
hyperbolicity of $(\mathscr{F},Q)$ is sometimes assumed: this will be
commented on later.
###### Definition 2.12.
Let $F\to X$ be a differential graded vector bundle with differential $Q$ on
its sheaf of sections $\mathscr{F}$. Then the dg commutative ring of (global)
classical observables for the theory defined by $(\mathscr{F},Q)$ is
$\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}):=(\mathscr{O}(\mathscr{F}),\\{S,-\\}).$
###### Remark 2.13.
We will briefly describe how $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F})$ is to
be understood as the dg ring of functions on the derived critical locus of the
action functional $S:\mathscr{F}^{0}\to\mathbf{R}$, where the degree zero part
of the dg fields is the “naïve” original space of fields, following [8].
The ordinary critical locus $\mathrm{Crit}(S)$ is the intersection of the
graph
$\Gamma(dS)\subset T^{\vee}\mathscr{F}^{0}$
with the zero section $\mathscr{F}^{0}\subset T^{\vee}\mathscr{F}^{0}$. We
thus get its commutative algebra of functions to be
$\mathscr{O}(\mathrm{Crit}(S))=\mathscr{O}(\Gamma(dS))\otimes_{\mathscr{O}(T^{\vee}\mathscr{F}^{0})}\mathscr{O}(\mathscr{F}^{0}).$
However, $\mathrm{Crit}(S)$ can be singular (e.g. it may be a non-transverse
intersection), so we follow the philosophy of derived (algebraic) geometry and
replace the above critical locus with the derived critical locus
$\mathrm{Crit}^{h}(S)$, which has ring of functions
$\mathscr{O}(\mathrm{Crit}^{h}(S))=\mathscr{O}(\Gamma(dS))\otimes_{\mathscr{O}(T^{\vee}\mathscr{F}^{0})}^{\mathbb{L}}\mathscr{O}(\mathscr{F}^{0}).$
This is now a commutative dg algebra instead of an ordinary commutative
algebra, and it can be realized as the complex
$\mathscr{O}(T^{\vee}[-1]\mathscr{F}^{0})=\Gamma(\mathscr{F}^{0},\Lambda^{\bullet}T\mathscr{F}^{0}).$
Now we see how the Batalin-Vilkovisky dg fields $(\mathscr{F},Q)$ arise
naturally from a derived geometric perspective as
$T^{\vee}[-1]\mathscr{F}^{0}$; moreover, the differential on
$\Gamma(\mathscr{F}^{0},\Lambda^{\bullet}T\mathscr{F}^{0})$ is contracting
with the 1-form $dS\in\Omega^{1}(\mathscr{F}^{0})$, and this can be shown to
be equivalent to $\\{S,-\\}$, as we would expect.
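To orient the reader, here is a finite dimensional toy model of the derived critical locus (our own illustration, not drawn from [8]). Take $\mathscr{F}^{0}=\mathbf{R}$ with coordinate $x$ and $S(x)=x^{3}/3$, so that $dS=x^{2}\,dx$. Then
$\mathscr{O}(T^{\vee}[-1]\mathbf{R})=\mathbf{R}[x]\otimes\Lambda[\xi],\qquad\deg(\xi)=-1,\qquad\iota_{dS}\,\xi=x^{2},$
so the derived critical locus is computed by the two-term Koszul complex
$\mathbf{R}[x]\cdot\xi\xrightarrow{\ \cdot x^{2}\ }\mathbf{R}[x],$
whose zeroth cohomology $\mathbf{R}[x]/(x^{2})$ consists of functions on the critical locus, remembering that the critical point at $x=0$ is degenerate, and whose $(-1)^{\mathrm{st}}$ cohomology vanishes because $x^{2}$ is a nonzerodivisor.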
###### Example 2.14.
A running example through much of this text will be scalar field theory. We
will consider the free case first. Fix a semi-Riemannian manifold $(X,g)$ and
consider its space of smooth functions
$\Gamma(X,\underline{\mathbf{R}})=C^{\infty}(X)$: these are the a priori
fields. The action functional is
(1)
$S_{g}(\varphi)=\frac{-1}{2}\int_{X}\varphi\Delta_{g}\varphi\mathrm{vol}_{g},$
where $\varphi\in C^{\infty}(X)$, $\mathrm{vol}_{g}$ is the volume form
associated to the metric $g$, written in coordinates as $\sqrt{\det
g}dx_{1}\wedge\ldots\wedge dx_{n}$, and the Laplace-Beltrami operator
$\Delta_{g}$ associated to $g$ should not be mistaken for the BV Laplacian
discussed in related literature. The Euler-Lagrange equation here is Laplace’s
equation, $\Delta_{g}\varphi=0$, so that $\mathrm{Crit}(S)$ is the set of
harmonic functions.
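For completeness, the variational computation (standard; written here assuming $X$ is closed, so that $\Delta_{g}$ is formally self-adjoint and no boundary terms arise) runs as follows: for a test function $\psi\in C^{\infty}(X)$,
$\frac{d}{dt}\Big|_{t=0}S_{g}(\varphi+t\psi)=\frac{-1}{2}\int_{X}\big(\psi\Delta_{g}\varphi+\varphi\Delta_{g}\psi\big)\mathrm{vol}_{g}=-\int_{X}\psi\,\Delta_{g}\varphi\,\mathrm{vol}_{g},$
which vanishes for all $\psi$ precisely when $\Delta_{g}\varphi=0$.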
By the above, the derived critical locus is then
(2) $\mathscr{F}_{g}=C^{\infty}(X)\xrightarrow{Q_{g}}\mathrm{Dens}(X)[-1],$
where $\mathrm{Dens}(X)$ is the appropriate dual to $C^{\infty}(X)$ and
$Q_{g}\varphi=\Delta_{g}\varphi\mathrm{vol}_{g}$ is the differential, which
imposes the Euler-Lagrange equations: it is written so as to take values in
$\mathrm{Dens}(X)$ but also to capture all of the dependence on
$g\in\mathrm{Met}(X)$ in the action functional. The symplectic structure
$\omega$ on
$\mathscr{F}_{g}=C^{\infty}(X)\xrightarrow{Q_{g}}\textrm{Dens}(X)[-1]$ is
$\omega(\varphi,\mu)=\int_{X}\varphi\mu,$
for $\varphi$ and $\mu$ in degrees 0 and 1, respectively. We can thus write
$S_{g}(\varphi)$ as $\omega(\varphi,Q_{g}\varphi)$.
For $\mathscr{F}_{g}=C^{\infty}(X)\xrightarrow{Q_{g}}\mathrm{Dens}(X)[-1]$,
the underlying graded ring of $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g})$
is $\mathscr{O}(\mathscr{F}_{g})$, so that it is concentrated in nonpositive
degrees, as Definition 2.2 implies. The action functional $S_{g}(\varphi)$
defined in Equation (1) is a degree 0 element of
$\mathscr{O}(\mathscr{F}_{g})$, but also defines a degree 1 differential on
$\mathscr{O}(\mathscr{F}_{g})$ as $\\{S_{g},-\\}$: thus, $\\{S_{g},S_{g}\\}$
must be a degree 1 element of $\mathscr{O}(\mathscr{F}_{g})$. Since in this
example $\mathscr{O}(\mathscr{F}_{g})$ is concentrated in nonpositive degrees,
the classical master equation holds vacuously. Thus, the free massless scalar
field with metric background $g$ defines a free BV classical field theory,
since the other requirements are easily satisfied.
###### Remark 2.15.
It is implied here that $g$ is a Riemannian metric, because the associated
partial differential operator is the elliptic Laplace-Beltrami operator. If
$g$ were Lorentzian, then we would instead have the hyperbolic d’Alembertian
$\Box_{g}$. For further details comparing these two regimes for the free
scalar field, one should consult the thorough reference [12].
###### Remark 2.16.
An advantage of shifting from the “ordinary” fields $C^{\infty}(X)$ to the derived critical locus $\mathscr{F}_{g}$ is that the fields now depend explicitly on the metric $g\in\mathrm{Met}(X)=:\mathscr{M}$ (we will denote $\mathrm{Met}(X)$ as $\mathscr{M}$ when $X$ is implicit). This will allow us to define a differential graded vector bundle $\pi:\mathscr{F}\to\mathscr{M}$: the base space is the space of all (semi-)Riemannian metrics on $X$ and the fibers $\pi^{-1}(g)=\mathscr{F}_{g}$ are field theories depending on the fixed $g$. This opens up the possibility of seeing how varying the background metric affects the field theory. We have such a dg vector bundle only when the theory is free (i.e. $S$ is quadratic in $\varphi$): for an interacting theory, we will require the notion of an $L_{\infty}$ algebra and bundles thereof.
###### Definition 2.17.
An $L_{\infty}$ algebra over $R$ is a $\mathbf{Z}$-graded, projective
$R$-module $\mathfrak{g}$ with a sequence of multilinear maps of cohomological
degree $2-n$:
$\ell_{n}:\mathfrak{g}\otimes_{R}\ldots\otimes_{R}\mathfrak{g}\to\mathfrak{g},$
where $n\in\mathbf{N}$, such that all $\ell_{n}$ are (1) graded antisymmetric and (2) satisfy the $n$-Jacobi rule (we are sweeping the details of this rule under the rug: Definition A.1.2 in [8] is the whole megillah).
###### Example 2.18.
The most natural example of an $L_{\infty}$ algebra for us comes from encoding
nonlinear partial differential equations: i.e. those associated to an
interacting field theory, with a degree three or higher action functional.
For example, say we want to encode
$\Delta_{g}\varphi+\frac{1}{3!}\varphi^{3}=0$, the Euler-Lagrange equation
associated to the action functional
$S_{g}(\varphi)=\frac{-1}{2}\int_{X}\varphi\Delta_{g}\varphi\mathrm{vol}_{g}+\frac{1}{4!}\int_{X}\varphi^{4}\mathrm{vol}_{g}.$
The pertinent $L_{\infty}$ algebra has underlying cochain complex
$\mathscr{L}=C^{\infty}(X)[-1]\to\mathrm{Dens}(X)[-2],$
where the differential is $Q_{g}\varphi=\Delta_{g}\varphi\mathrm{vol}_{g}$ and
the only higher bracket is $\ell_{3}:C^{\infty}(X)^{\otimes
3}\to\mathrm{Dens}(X)$, defined as
$\ell_{3}:\varphi_{1}\otimes\varphi_{2}\otimes\varphi_{3}\mapsto\varphi_{1}\varphi_{2}\varphi_{3}\mathrm{vol}_{g}$.
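For orientation, the Maurer-Cartan equation for a degree $1$ element $\varphi$ of $\mathscr{L}\otimes\mathfrak{m}_{R}$ takes the usual form (our normalization of the factorials; see Definition A.1.2 in [8] for the signs):
$\sum_{n\geq 1}\frac{1}{n!}\ell_{n}(\varphi,\ldots,\varphi)=\ell_{1}(\varphi)+\frac{1}{3!}\ell_{3}(\varphi,\varphi,\varphi)=0,$
since $\ell_{2}$ and all $\ell_{n}$ with $n\geq 4$ vanish in this example.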
Letting $(R,\mathfrak{m}_{R})$ be a nilpotent Artinian ring in degree $0$, we get that $\varphi\in C^{\infty}(X)\otimes\mathfrak{m}_{R}$ satisfies the Maurer-Cartan equation of $\mathscr{L}$ if and only if
$Q_{g}\varphi+\frac{1}{3!}\varphi^{3}\mathrm{vol}_{g}=0,$
which recovers the desired partial differential equation (with values in
densities). Thus we see how an $L_{\infty}$ algebra quantifies how a given
equation fails to be linear (a free theory has only nontrivial $\ell_{1}$, and
so only requires a dg structure to be described). Moreover, $\mathscr{L}$ is
an even more particular object, which we now define.
###### Definition 2.19.
A local $L_{\infty}$ algebra on a manifold $X$ is:
(1) A graded vector bundle $L\to X$, where we denote the sections as
$\mathscr{L}$,
(2) a differential operator $d:\mathscr{L}\to\mathscr{L}$ of cohomological
degree 1 such that $d^{2}=0$, and
(3) a collection of polydifferential operators $\ell_{n}:\mathscr{L}^{\otimes
n}\to\mathscr{L}$ for $n\geq 2$ which are alternating, of cohomological degree
$2-n$, and which make $\mathscr{L}$ an $L_{\infty}$ algebra.
If the local $L_{\infty}$ algebra $(\mathscr{L},d)$ is an elliptic complex, we
call it an elliptic $L_{\infty}$ algebra.
The $L_{\infty}$ algebra $\mathscr{L}$ of our ongoing example is an elliptic
$L_{\infty}$ algebra. One advantage of introducing this notion is the
following definition of observables for perturbative field theories, which we
will employ in Section 4 when discussing formal computations:
###### Definition 2.20 (Definition 5.1.1 in [8]).
The observables with support in the open subset $U$ are the commutative dg
algebra
(3) $\mathrm{Obs}^{\mathrm{cl}}(U):=C^{\bullet}(\mathscr{L}(U)),$
where $C^{\bullet}(\mathscr{L})$ denotes Chevalley-Eilenberg cochains. The
factorization algebra of observables for this classical field theory, denoted
$\mathrm{Obs^{cl}}$, assigns $\mathrm{Obs}^{\mathrm{cl}}(U)$ to each open
$U\subset X$.
###### Remark 2.21.
The computations in Example 2.18 work just fine if we replace the elliptic
operator $\Delta_{g}$ with the hyperbolic wave operator $\Box_{g}$, so it
would be convenient to specify the dynamics of the Lorentzian analogue of
$\varphi^{4}$ theory with an $L_{\infty}$ algebra, too. However, as commentary
in Gwilliam and Rejzner’s paper [12] suggests, comparisons between the
Lorentzian and Riemannian settings get stickier when considering interacting
theories. A precise definition of the correct notion of a hyperbolic
$L_{\infty}$ algebra is presented in the recent paper [2].
###### Remark 2.22.
In Example 2.14 of the free scalar field, both components of the graded space
of fields have an action by the diffeomorphism group of $X$ (denoted $\mathscr{D}$ when unambiguous) via pullback: for $f\in\mathscr{D},\varphi\in
C^{\infty}(X),$ and $\mu\in\textrm{Dens}(X)$,
$f\cdot\varphi=f^{*}\varphi=\varphi\circ f$ and $f\cdot\mu=f^{*}\mu$.
Additionally, $\mathscr{D}$ acts on $\textrm{Met}(X)$ via pullback: $f\cdot
g=f^{*}g$. What is special about this example is that the differential $Q_{g}$
commutes with diffeomorphisms in the following sense:
$f^{*}(Q_{g}\varphi)=f^{*}(\Delta_{g}\varphi\textrm{vol}_{g})=\Delta_{f^{*}g}(f^{*}\varphi)\textrm{vol}_{f^{*}g}=Q_{f^{*}g}(f^{*}\varphi)$.
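A quick way to see the middle equality (a standard naturality argument, sketched here for convenience): the gradient and divergence are characterized by
$g(\mathrm{grad}_{g}\varphi,-)=d\varphi\qquad\text{and}\qquad\mathcal{L}_{V}\,\mathrm{vol}_{g}=(\mathrm{div}_{g}V)\,\mathrm{vol}_{g},$
and pullback along $f$ intertwines $d$, the Lie derivative, and the metric pairing, so that $\Delta_{f^{*}g}(f^{*}\varphi)=f^{*}(\Delta_{g}\varphi)$ and $f^{*}\mathrm{vol}_{g}=\mathrm{vol}_{f^{*}g}$ (as densities, so no orientability hypothesis is needed).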
This result is equivalent to the fact that the Laplacian commutes with diffeomorphisms (this is computed by expressing $\Delta_{g}=\mathrm{div}_{g}\mathrm{grad}$, and is done explicitly in notes by Y. Canzani, available at https://www.math.mcgill.ca/toth/spectralgeometry.pdf). This suggests that if
we parameterize families of free scalar BV theories by $\textrm{Met}(X)$, the
result will be a “$\mathscr{D}$-equivariant bundle”. In fact, we can show how
this can work for interacting theories: but first, we must make precise the
idea of a differential graded equivariant bundle.
### 2.2 Equivariant Vector Bundles
To discuss general covariance, we must first understand what an equivariant
vector bundle is and how to use one to specify a family of field theories
parameterized by semi-Riemannian metrics. Once that is done and we make the
connection with general covariance, we will see how groupoids and stacks
naturally arise in this context and provide additional advantages.
###### Definition 2.23 (Definition 1.5 of [4]).
Let $G$ be a Lie group. A smooth fiber bundle $\pi:E\to M$ is said to be
$G$-equivariant if: (i) both $E$ and $M$ are left $G$-spaces, and (ii)
$\pi:E\to M$ is a $G$-equivariant map. If $E=V$ is a vector bundle, we also
require that for all $g\in G$ and $p\in M$, $g:V_{p}\to V_{g\cdot p}$ is a
linear transformation, where $V_{p}:=\pi^{-1}(p)$.
One must be mindful that within the vector bundle part of this definition is
packaged the information that the fibers $V_{p}$ of $V\to M$ could themselves
be $G$-spaces over fixed point sets; however, if there are no fixed points of
$G$ acting on $M$, we immediately get the following.
###### Theorem 2.24.
For $M$ a smooth $G$-space on which the Lie group $G$ acts freely, there is an
equivalence of categories between vector bundles on $M/G$ and $G$-equivariant
vector bundles:
(4) $\textup{VectBun}(M/G)\xrightarrow{\cong}\textup{VectBun}_{G}(M).$
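Concretely, the standard correspondence (sketched under the freeness hypothesis) runs as follows: a vector bundle $W\to M/G$ pulls back along the quotient map $q:M\to M/G$ to the bundle $q^{*}W\to M$ with its evident $G$-action, while a $G$-equivariant vector bundle $V\to M$ descends to $V/G\to M/G$, whose fibers $(V/G)_{[p]}\cong V_{p}$ are well defined linear spaces precisely because no point of $M$ is stabilized.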
###### Remark 2.25.
This theorem nicely encapsulates how we might keep track of linear data
parameterized by an underlying space which has some symmetries: we simply
quotient the underlying space out by its symmetries and look at the vector
bundle over that. However, this is where the problem becomes apparent: if
there are any points in $M$ which are stabilized by $G$ or any of its
nontrivial subgroups, then $M/G$ is no longer a smooth manifold at those
points. This makes it more difficult to associate to it any structures which
depended on the differentiability of $M$, like its ring of smooth functions or
sections of certain bundles. Stacks deal with those issues nicely, and provide
an analogous theorem in the case where the action of $G$ on $M$ is not free.
There is one more distinction which is significant for this work, which we
will define here: the fibers of the vector bundles we want to consider are
differential graded as well as equivariant.
###### Definition 2.26.
A differential graded vector bundle is a vector bundle $V\to M$ whose fibers
$V_{p}$ are $\mathbf{Z}$-graded vector spaces with a smoothly varying
differential $Q_{p}^{i}:V_{p}^{i}\to V_{p}^{i+1}$.
We will usually abbreviate “differential graded” as “dg”, and such a vector
bundle may sometimes be denoted $V^{\bullet}\to M$ or $(V^{\bullet}\to M,Q)$,
depending on what we would like to emphasize within certain contexts. We have
a similar definition when the fibers are $L_{\infty}$ algebras:
###### Definition 2.27.
A bundle of (elliptic) $L_{\infty}$ algebras is a $\mathbf{Z}$-graded vector
bundle $\pi:(V,\ell)\to M$ (we may sometimes drop the pair notation if the $L_{\infty}$ structure is implicit) whose fibers
$(V_{p},\ell^{p}):=\pi^{-1}(p)$ are (elliptic) $L_{\infty}$ algebras, such
that the $L_{\infty}$ structure varies smoothly over $M$.
###### Definition 2.28.
A dg vector bundle $(V^{\bullet}\to M,Q)$ is $G$-equivariant if: (i) each of
the $V^{i}\to M$ is $G$-equivariant in the usual sense and (ii) the action by
$G$ induces a cochain map between fibers, i.e. for $g\in G$ and for
$i\in\mathbf{Z}$, the following square commutes:
$\begin{array}{ccc}V^{i}_{p}&\xrightarrow{\;Q_{p}^{i}\;}&V^{i+1}_{p}\\[2pt]{\scriptstyle g\cdot}\big\downarrow&&\big\downarrow{\scriptstyle g\cdot}\\[2pt]V^{i}_{g\cdot p}&\xrightarrow{\;Q_{g\cdot p}^{i}\;}&V^{i+1}_{g\cdot p}.\end{array}$
A totally analogous definition holds for $G$-equivariant bundles of
$L_{\infty}$ algebras.
###### Remark 2.29.
It might be the case that the differential (or $L_{\infty}$ structure) does
not depend on $p\in M$ for trivial $V\to M$: in this case, the bundle is still
$G$-equivariant, in a rather trivial way. However, it is easy to find examples
in which there is such a dependence, as is the case for our version of
general covariance.
### 2.3 General Covariance
To state general covariance rigorously in our sense, we must first introduce a
few facts about Fréchet manifolds. The space of metrics on a smooth manifold
and its group of diffeomorphisms are infinite dimensional, so defining vector
bundles or other structures which depend on the differentiability of
$\textrm{Met}(X)$ will require us to consider a special class of manifolds
called Fréchet manifolds, which we now define (much of what we state is
adapted from [22]; another helpful reference is https://ncatlab.org/nlab/show/Fr%C3%A9chet+manifold).
###### Definition 2.30 (Adapted from Definition 1.3 in [22]).
A Fréchet manifold is a Hausdorff topological space with an atlas of
coordinate charts taking values in Fréchet spaces (i.e. complete, Hausdorff,
metrizable, locally convex vector spaces) such that the transition functions
are smooth maps between Fréchet spaces.
###### Example 2.31.
$\Gamma(X,\mathrm{Sym}^{2}(T^{\vee}_{X}))$, of which $\mathrm{Met}(X)$ is an
open submanifold, is a Fréchet manifold if $X$ is compact, and similarly, the
diffeomorphism group $\mathrm{Diff}(X)$ is a Fréchet Lie group as long as $X$
is compact [22]. Thus, we will usually assume that $X$ is compact or even
closed in much of what follows, even though in the Lorentzian case, $X$ is
usually not compact. However, many physically relevant Lorentzian manifolds
are assumed to have the simple form $\Sigma\times\mathbf{R}$, for $\Sigma$ a
spacelike compact submanifold and $\mathbf{R}$ the time direction. This is the
path through which many Riemannian results are translated into the Lorentzian
regime.
###### Definition 2.32 (Adapted from Definition 1.7 of [22]).
The space $\mathrm{Met}(X)=\mathscr{M}$ of all Riemannian metrics on a compact
$X$ is the Fréchet manifold defined as the subspace of
$\Gamma(X,\mathrm{Sym}^{2}(T^{\vee}_{X}))$ consisting of all sections which
are Riemannian metrics on $X$, equipped with the smooth topology of uniform
convergence on compact subsets.
Since $\mathscr{M}$ is a Fréchet manifold and since any space of fields
$\mathscr{F}$ on a compact $X$ for a BV classical field theory is a dg Fréchet
manifold by means of being a dg Fréchet space (by Definition 2.10 and Example
2.4), we can bring to fruition Remark 2.16:
###### Proposition 2.33.
Any BV classical field theory for which the action functional $S$ depends on
the metric $g\in\mathscr{M}$ defines a dg Fréchet vector bundle
$\pi:(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ for a free theory or a dg Fréchet
bundle of $L_{\infty}$ algebras for an interacting theory, with fibers
$\pi^{-1}(g)=(\mathscr{F}_{g},\\{S_{g},-\\})$.
###### Proof.
Because $\mathscr{M}$ is always contractible, the underlying graded vector
bundle is $\mathscr{F}\times\mathscr{M}$, where $\mathscr{F}$ is Fréchet by
Example 2.4. A product of Fréchet manifolds is once again Fréchet, and the
assignment of a dg or $L_{\infty}$ structure is smooth. ∎
The computation in Remark 2.22 along with Proposition 2.33 allow us to state
the following:
###### Lemma 2.34.
For the free scalar BV theory defined in Example 2.14, any diffeomorphism
$f\in\mathscr{D}$ defines a cochain map between fibers of the dg Fréchet
vector bundle $(\mathscr{F},Q)\to\mathscr{M}$:
$\begin{array}{ccc}\mathscr{F}_{g}=C^{\infty}(X)&\xrightarrow{\;Q_{g}\;}&\mathrm{Dens}(X)[-1]\\[2pt]{\scriptstyle f^{*}}\big\downarrow&&\big\downarrow{\scriptstyle f^{*}}\\[2pt]\mathscr{F}_{f^{*}g}=C^{\infty}(X)&\xrightarrow{\;Q_{f^{*}g}\;}&\mathrm{Dens}(X)[-1],\end{array}$
which implies that $(\mathscr{F},Q)\to\mathscr{M}$ is a
$\mathscr{D}$-equivariant differential graded vector bundle.
For the remainder of this article, we will sometimes drop the term “Fréchet”
when it is contextually implied, unless attention is otherwise drawn to it.
This result also implies a significant and useful corollary:
###### Corollary 2.35.
If $g\in\mathscr{M}$ is a fixed point of $f\in\mathscr{D}$ (i.e. if $f$ is an
isometry of $g$) and if $Q_{g}\varphi=0$, then $Q_{g}(f^{*}\varphi)=0$. In
other words, isometries of the metric $g$ act on the space of solutions to
$\Delta_{g}\varphi=0$ (Laplace’s equation).
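Indeed, the argument is one line: if $f^{*}g=g$, then
$Q_{g}(f^{*}\varphi)=Q_{f^{*}g}(f^{*}\varphi)=f^{*}(Q_{g}\varphi)=0,$
where the first equality uses $f^{*}g=g$ and the second is the commuting square above.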
Of course, this corollary holds for any generally covariant BV field theory:
we bring special attention to it in this case because it is a “gold standard”
result when learning PDE for the first time, and thus serves as a touchstone
for the value of the preceding perspective.
###### Remark 2.36.
The differential $Q_{g}$ of the differential graded scalar fields has a very
clear dependence on the base space $\mathscr{M}$. In fact, as a topological space, the bundle is trivial, as it is $(C^{\infty}(X)\oplus\textrm{Dens}(X)[-1])\times\mathscr{M}$: all of the nontriviality of the differential graded vector bundle resides in the differential $Q_{g}$.
###### Example 2.37.
Lemma 2.34 holds for a particular case in which the BV theory is both free and
non-perturbative: i.e. the Euler-Lagrange equations are linear in the fields
$\phi\in\mathscr{F}_{g}$ and we are not choosing a fixed solution to perturb
around, so that the observables are polynomial functions of the fields as
opposed to Taylor series.
We will now consider an example of an interacting theory. The bundle
$(\mathscr{L},\\{S,-\\})\to\mathscr{M}$ representing the family of theories (note that the notation has changed, since the perturbative space of fields is $\mathscr{L}=\mathscr{F}[-1]$) is no longer just a dg vector bundle, but a bundle of elliptic $L_{\infty}$ algebras
over $\mathscr{M}$. Heuristically speaking, we will no longer view the family
as a collection of vector spaces varying over $\mathscr{M}$, but rather as a
collection of formal neighborhoods varying over $\mathscr{M}$: although the
underlying graded structure is still a vector bundle, the geometry encoded in
the $L_{\infty}$ structures on distinct fibers implies this shift in
perspective.
Let us return to Example 2.18. Recall that the equation of motion in that
instance is:
$Q_{g}\varphi+\frac{1}{3!}\varphi^{3}\mathrm{vol}_{g}=0.$
If we fix a diffeomorphism $f\in\mathscr{D}$, we see that the Euler-Lagrange
form satisfies:
(5)
$f^{*}(Q_{g}\varphi+\frac{1}{3!}\varphi^{3}\mathrm{vol}_{g})=Q_{f^{*}g}(f^{*}\varphi)+\frac{1}{3!}(f^{*}\varphi)^{3}\mathrm{vol}_{f^{*}g}.$
The equivariance property for the first summand is precisely what is shown in
Lemma 2.34, and the second summand (the interaction term) is equivariant
because polynomial functions of the fields are patently equivariant in this
way. Equation (5) can then be reformulated in terms of the brackets on the
elliptic $L_{\infty}$ algebra of Example 2.18 as:
(6)
$f^{*}\big{(}\ell_{1}^{g}(\varphi)+\frac{1}{3!}\ell_{3}^{g}(\varphi,\varphi,\varphi)\big{)}=\ell_{1}^{f^{*}g}(f^{*}\varphi)+\frac{1}{3!}\ell_{3}^{f^{*}g}(f^{*}\varphi,f^{*}\varphi,f^{*}\varphi),$
where we have included the dependence of the brackets $\ell_{k}$ on the
underlying metric $g\in\mathscr{M}$ as a superscript. The above equation is
the $\mathscr{D}$-equivariance property we desire in the Euler-Lagrange term
which implies that the family of theories defined by $\varphi^{4}$ theory as
in Example 2.18 is generally covariant.
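The pointwise identity underlying this (a standard fact about densities, recorded for emphasis) is
$f^{*}\big(\varphi^{n}\,\mathrm{vol}_{g}\big)=(f^{*}\varphi)^{n}\,\mathrm{vol}_{f^{*}g}\qquad\text{for all }n\geq 1,$
since pullback is a ring map on functions and $f^{*}\mathrm{vol}_{g}=\mathrm{vol}_{f^{*}g}$.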
This generalizes naturally to the case in which the interaction term is any
polynomial in $\varphi$ times $\mathrm{vol}_{g}$. In that case,
$\ell_{1}=Q_{g}$ and $\ell_{n}:C^{\infty}(X)[-1]^{\otimes
n}\to\mathrm{Dens}(X)[-2]$ for $n\geq 2$ is:
$\ell_{n}:\varphi_{1}\otimes\ldots\otimes\varphi_{n}\mapsto\lambda_{n}\varphi_{1}\ldots\varphi_{n}\mathrm{vol}_{g},$
where the $\lambda_{n}$ are constants. Similarly to Equation (5), it is quick
to show that:
(7) $f^{*}\big{(}\ell_{1}^{g}(\varphi)+\sum_{n\geq
2}\frac{\lambda_{n}}{n!}\ell_{n}^{g}(\varphi,\ldots,\varphi)\big{)}=\ell_{1}^{f^{*}g}(f^{*}\varphi)+\sum_{n\geq
2}\frac{\lambda_{n}}{n!}\ell_{n}^{f^{*}g}(f^{*}\varphi,\ldots,f^{*}\varphi).$
Thus, any scalar field theory with action functional
$S_{g}(\varphi)=\int_{X}(\frac{-1}{2}\varphi\Delta_{g}\varphi+V(\varphi))\mathrm{vol}_{g},$
where $V(\varphi)$ is a polynomial “potential” in $\varphi$, is generally
covariant.
###### Lemma 2.38.
Let $\pi:(\mathscr{L},\\{S,-\\})\to\mathscr{M}$ be a family of perturbative
Batalin-Vilkovisky classical scalar field theories with polynomial potential.
Any $f\in\mathscr{D}$ defines an $L_{\infty}$ map between fibers of
$\pi:(\mathscr{L},\\{S,-\\})\to\mathscr{M}$:
$\begin{array}{ccc}\mathscr{L}_{g}=C^{\infty}(X)[-1]&\xrightarrow{\;\ell^{g}\;}&\mathrm{Dens}(X)[-2]\\[2pt]{\scriptstyle f^{*}}\big\downarrow&&\big\downarrow{\scriptstyle f^{*}}\\[2pt]\mathscr{L}_{f^{*}g}=C^{\infty}(X)[-1]&\xrightarrow{\;\ell^{f^{*}g}\;}&\mathrm{Dens}(X)[-2].\end{array}$
In other words, $\pi:(\mathscr{L},\\{S,-\\})\to\mathscr{M}$ is a
$\mathscr{D}$-equivariant bundle of $L_{\infty}$ algebras.
An analogous version of Corollary 2.35 holds here, and follows by a nearly
identical computation. We can now state a first definition for general
covariance:
###### Definition 2.39.
Let $\pi:(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ define a family of BV field
theories on $X$ parameterized by the space of metrics on $X$. If it is
$\textrm{Diff}(X)=:\mathscr{D}$-equivariant as a differential graded vector
bundle or as a bundle of $L_{\infty}$ algebras (depending on whether the
theories are free or perturbative/interacting), we call the theory generally
covariant.
###### Remark 2.40.
Field theories which satisfy general covariance are therefore not sensitive to
all of $\mathscr{M}$, but only to the moduli space of metrics modulo
diffeomorphism, $\mathscr{M}/\mathscr{D}$. Although many physically relevant
metrics have many isometries, the coarse quotient $\mathscr{M}/\mathscr{D}$
“forgets” them: a more general concept of a space which remembers them is desirable, and this is where stacks will become useful.
###### Example 2.41.
A tangible example of $\mathscr{M}/\mathscr{D}$ with such a singular point is
the Riemannian manifold $X=\mathbf{R}^{n}$ along with the flat metric $\eta$.
It is well known that the isometry group of $(\mathbf{R}^{n},\eta)$ is
$O(n)\ltimes\mathbf{R}^{n}$, where the $\mathbf{R}^{n}$ in the semidirect
product is the additive group of translations of $\mathbf{R}^{n}$. In
particular, $O(n)\ltimes\mathbf{R}^{n}$ is a subgroup of
$\mathrm{Diff}(\mathbf{R}^{n})$ which stabilizes
$\eta\in\mathscr{M}(\mathbf{R}^{n})$, meaning that the corresponding point in
the quotient is singular. Moreover, $O(n)\ltimes\mathbf{R}^{n}$ acts on the
space of solutions to any generally covariant theory on
$(\mathbf{R}^{n},\eta)$. Definition 2.39 therefore “enlarges” our usual idea
of equivalence beyond isometries.
#### 2.3.1 An important remark on functoriality
Strictly speaking, the claims of Lemmas 2.34 and 2.38 are true in a broader
sense. Instead of assuming that $f:X\to X$ is a diffeomorphism, we let $f:U\to
X$ be an isometric embedding. In other words, consider the category
$\mathbf{Riem}_{n}$ whose objects are Riemannian $n$-folds and whose morphisms
are isometric embeddings: $f:(U,g^{\prime})\to(X,g)$ so that
$f^{*}g=g^{\prime}$. In the case of the free scalar field in Lemma 2.34, the
commutative square is replaced by
$\begin{array}{ccc}C^{\infty}(X)&\xrightarrow{\;Q_{g}\;}&\mathrm{Dens}(X)[-1]\\[2pt]{\scriptstyle f^{*}}\big\downarrow&&\big\downarrow{\scriptstyle f^{*}}\\[2pt]C^{\infty}(U)&\xrightarrow{\;Q_{g^{\prime}}=Q_{f^{*}g}\;}&\mathrm{Dens}(U)[-1],\end{array}$
which commutes by the very same computation. This implies that the assignment
of a free BV theory is a contravariant functor
$(\mathscr{F},Q):\mathbf{Riem}_{n}\to\mathbf{dgVect}$ from the site of
Riemannian $n$-folds to the category of cochain complexes. We can call this
more general notion “very general covariance” or keep it simply as “general
covariance”. The computation from Lemma 2.38 implies that the above works out
just as well for interacting theories: in that case, the target must be
$L_{\infty}\mathbf{Alg}$, the category of $L_{\infty}$ algebras (we will stick with this broader category for the rest of this section). This suggests something deeper about the physics: the computations are invariant not only with respect to coordinate choices, but also with respect to “manifold choices” more broadly.
We can compose the preceding functor with the functor
$L_{\infty}\mathbf{Alg}\to\mathbf{dgAlg}$ (as it stands, the target category could be $\mathbf{dgCAlg}$, dg commutative algebras, but we leave it as is because we may lose commutativity after quantization) which takes an
$L_{\infty}$ algebra $\mathscr{L}$ and outputs its Chevalley-Eilenberg
cochains $C^{\bullet}(\mathscr{L})$. Then the composite functor
(8) $\mathrm{Obs}^{\mathrm{cl}}:\mathbf{Riem}_{n}\to\mathbf{dgAlg}$
is covariant, as indeed it should be if we would like to make a factorization
algebra from it (as is done in [7] and [8]). This is a point of connection
with the definition of covariance presented in [10]. In that work, Fewster
outlines a broad framework to understand the idea of “same physics in all
spacetimes” (SPAS) in which he defines (Definition 3.1) a locally covariant
theory to be a covariant functor
$\mathfrak{A}:\mathbf{BkGrnd}\to\mathbf{Phys}$ from some category of
“background geometries” to an appropriate category of “physical quantities”,
like observables. Our preceding construction clearly falls into this class of
objects.
In the study of Algebraic Quantum Field Theory (AQFT), a common choice for
$\mathfrak{A}$ is
(9) $\mathfrak{A}:\mathbf{Loc}_{n}\to C^{*}\text{-}\mathbf{Alg}.$
$\mathbf{Loc}_{n}$ is the category of oriented, time-oriented, and globally
hyperbolic $n$-dimensional Lorentzian manifolds whose morphisms are isometric
embeddings which respect orientations and time-orientations, and
$C^{*}$-$\mathbf{Alg}$ is the category of $C^{*}$ algebras, to which the
observables of a quantum field theory (usually) belong. The work of this
article pertains to classical observables, and much of the focus is on the
first part of the composite functor $\mathrm{Obs^{cl}}$: but once the full
composition is made, the comparison with AQFT is apparent. Further details
concerning this subject are provided in great detail in [19] (in particular
Section 2.5).
###### Remark 2.42.
To summarize, the contents of this paper are presented for a fixed smooth
manifold $X$, its space of metrics, its diffeomorphism group, and various
fields defined on it because focusing on the “smaller problem” made it easier
to manage the functional analytic constructions presented earlier and invoked
later on.
Because the aforementioned fields and groups are sheaves on $X$, it is already
apparent that all of the work lifts to the level of the slice category
$\mathbf{Riem}_{n}/X$ (this is a “little site” built from the site $\mathbf{Riem}_{n}$; note that once $X$ is fixed, $\mathrm{Diff}(X)$ acts on it), whose objects isometrically embed into $X$ and whose morphisms $f$
are specified by commutative triangles
$\begin{array}{ccc}U&\xrightarrow{\;f\;}&V\\[2pt]&{\scriptstyle\iota_{U}}\!\searrow&\big\downarrow{\scriptstyle\iota_{V}}\\[2pt]&&X.\end{array}$
From here, it is not a stretch to see that our constructions lift to
$\mathbf{Riem}_{n}$. In particular, this means we have the composition of
functors
(10)
$\mathbf{Riem}_{n}\xrightarrow{\mathscr{F}}L_{\infty}\mathbf{Alg}\xrightarrow{\mathrm{Obs^{cl}}}\mathbf{dgAlg}.$
Even better, since we take for granted Costello and Gwilliam’s result that
$\mathrm{Obs^{cl}}:\mathbf{Disj}_{X}\to\mathbf{dgAlg}$ defines a factorization
algebra for a fixed $X$, the above composition ultimately allows us to state
the following:
###### Proposition 2.43.
Any generally covariant BV field theory $(\mathscr{F},\omega,S)$ defines a
functor
(11) $\mathrm{Obs^{cl}}(-,\mathscr{F}):\mathbf{Riem}_{n}\to\mathbf{dgAlg}$
which constitutes a factorization algebra on the site $\mathbf{Riem}_{n}$.
###### Remark 2.44.
Concretely, once we input some $X\in\mathbf{Riem}_{n}$, the output is a
factorization algebra on $X$. Roughly, a prefactorization algebra
$\mathcal{F}$ on $X$ is a functor which takes disjoint opens $U_{i}$ as
subsets of some larger open $V\subseteq X$ and outputs multiplication maps
$\bigotimes_{i}\mathcal{F}(U_{i})\to\mathcal{F}(V)$. A factorization algebra
is a prefactorization algebra which satisfies a particular (co)descent axiom.
Further details can be found in Sections 3.1 and 6.1 of [7].
For the rest of the paper, we make constructions and compute results within
the framework of stacks and formal stacks: one of the ultimate motivations is
to step back and notice that all of the results hold at the level of
generality specified in this subsection. An eventual goal is to make
connections with other literature on functorial perspectives in field theory.
An example of such literature linking AQFTs and factorization algebras is [3]
(Theorem 4.7 in particular).
### 2.4 Groupoids and Stacks
Stacks provide a wonderful packaging of a quotient space, but before diving
into them, we must quickly review groupoids, which are the cornerstone of the
theory of stacks and allow us to do concrete computations with them. For the
most part, we follow the constructions outlined in [5] and [13].
###### Definition 2.45.
A groupoid $\mathcal{G}$ is a small category in which all arrows are
invertible. Common notation is
$\mathcal{G}=\mathcal{G}_{1}\mathrel{\mathop{\vbox{
\offinterlineskip\halign{\hbox to\dimexpr\@tempdima+1em{#}\cr
0.28127pt{\rightarrowfill\cr\kern 1.50694pt\cr
0.28127pt{\rightarrowfill\cr}}}\limits^{\\!s}_{\\!t}}\mathcal{G}_{0}}}$, where
$\mathcal{G}_{1}$ is the set of arrows and $\mathcal{G}_{0}$ the set of
objects; $s$ sends an arrow to its source object, and $t$ sends it to its
target. Every such $\mathcal{G}$ has an identity map
$e:\mathcal{G}_{0}\to\mathcal{G}_{1}$ sending an object to its identity arrow,
an inverse map $i:\mathcal{G}_{1}\to\mathcal{G}_{1}$ sending an arrow to its
inverse, and a multiplication map
$m:\mathcal{G}_{1}\times_{\mathcal{G}_{0}}\mathcal{G}_{1}\to\mathcal{G}_{1}$
that concatenates arrows. $s,t,e,i,$ and $m$ are called the structure maps of
$\mathcal{G}$.
###### Example 2.46.
A premier example of a groupoid is the action groupoid which can be associated
to any smooth $G$-space $M$. Its set of objects is $\mathcal{G}_{0}=M$ and its
set of arrows is $\mathcal{G}_{1}=M\times G$, so that we can write it as
$M\times G\rightrightarrows M=:\mathcal{M}_{\mathcal{G}}.$
In this case, $s(p,g)=p$ and $t(p,g)=g\cdot p$. Common notation for the action
groupoid is $M//G$: the action groupoid is defined as a first step toward
understanding coarse quotients which forget stabilizers or may not even be
smooth, in the sense that the action of $G$ could fix certain points in $M$
and so $M/G$ could be singular.
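A minimal illustration (ours, to fix ideas): let $G=\mathbf{Z}/2$ act on $M=\mathbf{R}$ by $x\mapsto-x$. The coarse quotient $\mathbf{R}/(\mathbf{Z}/2)\cong[0,\infty)$ has forgotten that the origin is fixed, whereas the action groupoid
$\mathbf{R}\times\mathbf{Z}/2\rightrightarrows\mathbf{R}$
remembers the stabilizer: the object $0$ carries the nontrivial automorphism $(0,-1)$.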
###### Definition 2.47.
A Lie groupoid $\mathcal{G}=\big(\mathcal{G}_{1}\overset{s}{\underset{t}{\rightrightarrows}}\mathcal{G}_{0}\big)$ is a
groupoid such that both the space of arrows $\mathcal{G}_{1}$ and space of
objects $\mathcal{G}_{0}$ are smooth manifolds, all structure maps are smooth,
and the source and target maps $s,t:\mathcal{G}_{1}\to\mathcal{G}_{0}$ are
surjective submersions. (In other words, a Lie groupoid is a groupoid internal
to the category of smooth manifolds.)
###### Remark 2.48.
Moreover, if $\pi:V\to M$ is a $G$-equivariant vector bundle, then we could
also define its action groupoid $V\times G\rightrightarrows V=:\mathcal{V}_{\mathcal{G}}$.
Both $\mathcal{V}_{\mathcal{G}}$ and $\mathcal{M}_{\mathcal{G}}$ are in fact
Lie groupoids, and $\mathcal{V}_{\mathcal{G}}$ is a vector space object over
$\mathcal{M}_{\mathcal{G}}$ in the category of Lie groupoids. Thus, by some
abuse of notation, we can view
$\pi:\mathcal{V}_{\mathcal{G}}\to\mathcal{M}_{\mathcal{G}}$ as a vector
bundle.
To consider the above definitions for infinite dimensional manifolds, we need
to choose the right category: in our case, it is the category of Fréchet
manifolds.
###### Definition 2.49.
A Fréchet Lie groupoid is a groupoid internal to the category of Fréchet
manifolds: in particular, $\mathcal{G}_{1}$ and $\mathcal{G}_{0}$ are Fréchet
manifolds and all of the structure maps are smooth maps of Fréchet manifolds. We denote
their associated category as $\mathrm{FrLieGpd}$.
###### Example 2.50.
Combining Proposition 2.33 with the above definition implies that for a
compact smooth manifold $X$,
(12) $\mathrm{Met}(X)\times\mathrm{Diff}(X)\rightrightarrows\mathrm{Met}(X)=:\mathscr{M}//\mathscr{D}$
is a Fréchet Lie groupoid. By Definition 2.39 and the preceding remark, any
generally covariant BV field theory constitutes a dg vector bundle or
$L_{\infty}$ bundle of Fréchet Lie groupoids:
(13) $\pi:(\mathscr{F}//\mathscr{D},\\{S,-\\})\to\mathscr{M}//\mathscr{D}.$
###### Remark 2.51.
Groupoids are a cornerstone in the definition of stacks, the spaces with which
we eventually would like to replace ordinary manifolds in order to include
quotient data in the space itself. The soul of the matter lies
within the definition of a prestack, which is motivated by the functor of
points perspective. The difference is that instead of a functor from
$\mathbf{Mfd}^{op}$ (or $\mathbf{CRing}$ for an algebraic geometer) to
$\mathbf{Set}$, the functor lands in $\mathbf{Gpd}$, which contains any
“equivalence data” specific to the model at hand. We will define a stack and
then quickly move onto the key example, to avoid unnecessary generalities.
###### Definition 2.52 (Definition 1.1 in [13]).
A prestack is a 2-functor $\mathfrak{X}:\mathbf{Mfd}^{op}\to\mathbf{Gpd}$.
A prestack $\mathfrak{X}$ over $\mathbf{Mfd}^{op}$ is a stack if for any
$N\in\mathbf{Mfd}^{op}$ and open cover $\\{U_{i}\\}$ of $N$, it satisfies
descent, in other words: (1) Given objects $P_{i}\in\mathfrak{X}(U_{i})$ and
isomorphisms $\varphi_{ij}:P_{i}|_{U_{i}\cap U_{j}}\to P_{j}|_{U_{i}\cap
U_{j}}$ such that $\varphi_{jk}\circ\varphi_{ij}=\varphi_{ik}|_{U_{i}\cap
U_{j}\cap U_{k}}$, there is an object $P\in\mathfrak{X}(N)$ and isomorphisms
$\varphi_{i}:P|_{U_{i}}\to P_{i}$ such that
$\varphi_{ij}=\varphi_{j}\circ\varphi_{i}^{-1}$. This is called effective
descent data.
(2) Given $P,P^{\prime}\in\mathfrak{X}(N)$ and isomorphisms
$\varphi_{i}:P|_{U_{i}}\to P^{\prime}|_{U_{i}}$ such that
$\varphi_{i}|_{U_{i}\cap U_{j}}=\varphi_{j}|_{U_{i}\cap U_{j}}$, there is a
unique map $\varphi:P\to P^{\prime}$ such that $\varphi_{i}=\varphi|_{U_{i}}$.
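For intuition, conditions (1) and (2) can be exercised numerically in the simplest case, the sheaf of functions (a stack whose groupoids have only identity arrows): compatible local data on a cover glues to a unique global section. The cover, grid, and sections below are our own illustrative choices.

```python
# A toy instance of the descent conditions in Definition 2.52 for the sheaf
# of functions on [0, 1]: local sections agreeing on overlaps glue uniquely.
import numpy as np

xs = np.linspace(0.0, 1.0, 101)
cover = [(0.0, 0.6), (0.4, 1.0)]  # an open cover {U_i} of [0, 1]
sections = [np.sin, np.sin]       # local sections P_i on U_i (agreeing on overlaps)

def glue(cover, sections, grid):
    """Glue local sections into a global one, checking agreement on overlaps."""
    glued = np.full_like(grid, np.nan)
    for (lo, hi), sec in zip(cover, sections):
        mask = (grid >= lo) & (grid <= hi)
        vals = sec(grid[mask])
        seen = ~np.isnan(glued[mask])
        assert np.allclose(glued[mask][seen], vals[seen]), "descent data incompatible"
        glued[mask] = vals
    assert not np.isnan(glued).any(), "the given sets do not cover the interval"
    return glued

P = glue(cover, sections, xs)
assert np.allclose(P, np.sin(xs))  # the glued section is the expected global one
```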
###### Remark 2.53.
As we have defined it, the above is actually a differentiable stack; however,
because this is the only kind of stack we need, we usually drop the adjective.
###### Example 2.54.
A fundamental example of a stack over $\mathbf{Mfd}^{op}$ is an ordinary
manifold. For such a manifold $M$, we can define the stack $\underline{M}$ as
$\underline{M}(N):=\mathrm{Map}(N,M)=C^{\infty}(N,M)$ for $N\in\mathbf{Mfd}$.
This stack is presented by the “discrete groupoid”
$M\times\\{1\\}\rightrightarrows M$, whose objects are the
points of $M$ and the only morphisms are the identities for those points. This
embeds $\mathbf{Mfd}$ into the category $\mathbf{Stk}$ of differentiable
stacks. We also get the following essential lemma.
###### Theorem 2.55 (Yoneda Lemma for Stacks: Lemma 1.3 in [13]).
Let $\mathfrak{X}$ be a stack and let $M$ be a manifold. We have the following
equivalence of categories:
$\mathfrak{X}(M)\cong\mathrm{Mor}_{\mathbf{Stk}}(\underline{M},\mathfrak{X}).$
Since stacks are designed to generalize the notion of an ordinary manifold,
there should be an analogous notion of an atlas for stacks. (One could even
define stacks by first constructing atlases, as is done for manifolds.) We may
sometimes denote $\underline{M}$ simply as $M$ when it is implicit in context.
###### Definition 2.56.
An atlas (or covering) for a stack $\mathfrak{X}$ is a manifold $X$ and map
$p:X\to\mathfrak{X}$ such that (1) for any manifold $Y$ and
$Y\to\mathfrak{X}$, the stack $X\times_{\mathfrak{X}}Y$ is a manifold, and (2)
$p$ is a submersion, i.e. for all $Y\to\mathfrak{X}$, the projection
$Y\times_{\mathfrak{X}}X\to Y$ is a submersion.
We now specify the most important example of a stack for this paper.
###### Definition 2.57.
Given a smooth $G$-manifold $M$, the associated quotient stack is the functor
$[M/G]:\mathbf{Mfd}^{op}\to\mathbf{Gpd}$ such that the objects of $[M/G](N)$
are pairs $(P\xrightarrow{\pi}N,P\xrightarrow{\alpha}M)$, $\pi$ being a
principal $G$-bundle over $N$ and $\alpha$ being a $G$-equivariant map, and
the arrows are isomorphisms of principal $G$-bundles over $N$ commuting with
the $G$-equivariant maps to $M$.
Note that $[M/G]$ evaluated on a point recovers $M//G$, so that $[M/G]$
rightly gives a natural generalization of $M//G$. An atlas for $[M/G]$ is
$M\to[M/G]$. Much like how we use atlases to define principal and vector
bundles over an ordinary manifold, we use atlases to define such bundles over
stacks, as follows.
###### Definition 2.58.
A principal $G$-bundle $\mathscr{P}\to\mathfrak{X}$ is given by a $G$-bundle
$\mathscr{P}_{X}$ over an atlas $X\to\mathfrak{X}$ together with an isomorphism
$p_{1}^{*}\mathscr{P}_{X}\xrightarrow{\simeq}p_{2}^{*}\mathscr{P}_{X}$ of the
two pullbacks along the projections $p_{1},p_{2}:X\times_{\mathfrak{X}}X\to X$,
satisfying the cocycle condition on
$X\times_{\mathfrak{X}}X\times_{\mathfrak{X}}X$.
###### Remark 2.59.
The definition of a vector bundle over a stack $\mathfrak{X}$ is completely
analogous to this. Of course, one could instead invoke that a vector bundle
$V\to\mathfrak{X}$ of rank $n$ is equivalent to a principal
$GL(n,\mathbf{R})$-bundle and then use the preceding definition, anyway.
###### Example 2.60.
An essential example derived from the above definitions is that of
$[\mathrm{pt}/G]$, for $\mathrm{pt}$ a point. Applying the definition shows
that $[\mathrm{pt}/G](X)$ is precisely $\textrm{Bun}_{G}(X)$, the category of
principal $G$-bundles over $X$ (any morphism of $G$-bundles over the same base
space is necessarily an isomorphism). Because of this, it is common to
identify $[\mathrm{pt}/G]$ with $BG$, since $[X,BG]$ is equivalent to
$\textrm{Bun}_{G}(X)$ modulo bundle isomorphisms.
Notice moreover that defining a vector bundle $V\to[\mathrm{pt}/G]$ amounts to
fixing the vector space $V$ (a vector bundle over the point) as well as a
representation $\rho:G\to\mathrm{End}(V)$. In other words, we have an
equivalence of categories:
$\mathrm{VectBun}([\mathrm{pt}/G])\cong\mathrm{Rep}(G).$
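This equivalence is concrete enough to demonstrate in a few lines. The sketch below (our illustration, with all names ours) records a rank 2 vector bundle over $[\mathrm{pt}/(\mathbf{Z}/2)]$ as nothing more than a vector space with a representation, and checks the homomorphism property.

```python
# VectBun([pt/G]) ~ Rep(G) in the smallest interesting case, G = Z/2:
# a vector bundle over [pt/G] is just a vector space V with a G-action.
import numpy as np

rho = {0: np.eye(2),                           # rho(identity)
       1: np.array([[0.0, 1.0], [1.0, 0.0]])}  # rho(swap): coordinate-swap rep on R^2

# rho: G -> End(V) is a homomorphism: rho(g) rho(h) = rho(g + h mod 2).
for g in (0, 1):
    for h in (0, 1):
        assert np.allclose(rho[g] @ rho[h], rho[(g + h) % 2])
print("rho is a representation of Z/2, i.e. a vector bundle over [pt/(Z/2)]")
```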
###### Remark 2.61.
The preceding example is a simple but beautiful illustration of how specifying
a vector bundle $V$ over a quotient stack $[M/G]$ is equivalent to specifying
a $G$-equivariant vector bundle over a $G$-manifold $M$. We thus get the
following fact.
###### Theorem 2.62 (Adapted from Example 4.5 in [13]).
For $M$ a smooth $G$-space, we have the following equivalence of categories:
$\mathrm{VectBun}([M/G])\cong\mathrm{VectBun}_{G}(M).$
This is stated in [13] for cartesian sheaves on $[M/G]$ and we are
representing a vector bundle by its space of sections to deduce the above, so
the statement in [13] holds for a larger class of objects. Additionally,
although it is outside the scope of this paper, it is worth mentioning that a
quotient stack $[M/G]$ contains “all possible ways” in which it could have
been defined starting with an action groupoid: more than one groupoid could
present a stack (this is described and expounded on in [5]), so it is
reassuring that the stack itself contains this data.
A deeper level of care must be taken for our motivating example
$[\textrm{Met}(X)/\textrm{Diff}(X)]=[\mathscr{M/D}]$, which is presented by
the Fréchet Lie groupoid $\mathscr{M//D}$. Smooth maps from an ordinary finite
dimensional manifold to a Fréchet manifold are well-defined, and so the
associated maps from one to the other when viewed as their respective discrete
groupoids are also well-defined. This allows us to formulate the following
definition:
###### Definition 2.63.
For a compact manifold $X$, let
$[\textrm{Met}(X)/\textrm{Diff}(X)]=[\mathscr{M/D}]:\mathbf{Mfd}^{op}\to\mathbf{Gpd}$
be the functor such that the objects of $[\mathscr{M/D}](N)$ are pairs
$(P\xrightarrow{\pi}N,P\xrightarrow{\alpha}\mathscr{M})$, $\pi$ being a
principal $\mathscr{D}$-bundle over $N$ and $\alpha$ being a
$\mathscr{D}$-equivariant map, and the arrows are isomorphisms of principal
$\mathscr{D}$-bundles over $N$ commuting with the $\mathscr{D}$-equivariant
maps to $\mathscr{M}$: $[\mathscr{M/D}]$ is the moduli stack of metrics modulo
diffeomorphism.
###### Lemma 2.64.
For a compact manifold $X$, the Fréchet Lie groupoid $\mathscr{M//D}$ presents
the Fréchet moduli stack $[\mathscr{M/D}]$.
###### Remark 2.65.
Definition 2.63 is stated in the “ordinary” sense, so that we don’t specify
the Fréchet nature of the manifolds. Then, Lemma 2.64 implies that
$[\mathscr{M/D}]$ is represented via the canonical functor
$\mathrm{FrLieGpd}\to\mathbf{Stk}$ sending a Fréchet Lie groupoid to its
associated differentiable stack. The very detailed paper [20] (in particular
Sections 2 and 5) provides additional details and examples for these
definitions, and is what we primarily relied on above.
In light of Definition 2.39 and Theorem 2.62 as well as the preceding
definition, we have:
###### Proposition 2.66.
Any generally covariant family $\pi:(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ of
BV field theories descends to a Fréchet dg vector bundle or $L_{\infty}$
bundle of stacks:
(14) $\pi:([\mathscr{F/D}],\\{S,-\\})\to[\mathscr{M/D}].$
###### Remark 2.67.
Conversely, any such bundle defines a generally covariant theory: in this
sense, Proposition 2.66 can be taken as the definition of a generally
covariant theory. Moreover, we mindfully dropped the notation of a fixed
smooth manifold $X$ in the statement of this proposition: in the long run, we
would like to better understand what kind of functor from $\mathbf{Riem}_{n}$
to the category of $L_{\infty}$ bundles over stacks the assignment
$([\mathscr{F/D}],\\{S,-\\})\to[\mathscr{M/D}]$ constitutes. More
will be said on this in the following example.
###### Example 2.68 (Perturbative Yang-Mills Theory).
The advantages of the stacky formulation of general covariance may be more
convincing when considering theories which have more data involved; e.g. those
with local symmetries. As an example, let us consider Yang-Mills theory: to
begin, let $(X,g)$ be an oriented, $n$-dimensional Riemannian manifold, and
let $G$ be a compact Lie group whose Lie algebra $\mathfrak{g}$ has a
nondegenerate invariant pairing, $\langle-,-\rangle_{\mathfrak{g}}$. To
minimize any topological complications, fix a trivial principal $G$-bundle
$P\to X$.
In this instance, the fields for Yang-Mills theory are the connection one-
forms $A\in\Omega^{1}(X,\mathfrak{g})=\Omega^{1}(X)\otimes\mathfrak{g}$
associated to $P$, which constitute an affine Fréchet space. To such a field,
we can associate its curvature form
$F_{A}:=dA+\frac{1}{2}[A,A]\in\Omega^{2}(X,\mathfrak{g})$. Letting
$\langle-,-\rangle$ denote the pairing on $\Omega^{\bullet}(X,\mathfrak{g})$
defined by
(15) $\langle\omega_{1}\otimes E_{1},\omega_{2}\otimes
E_{2}\rangle:=\int_{X}\omega_{1}\wedge\omega_{2}\langle
E_{1},E_{2}\rangle_{\mathfrak{g}},$
the Yang-Mills action functional can be written as
(16) $S_{YM}(A)=\frac{1}{2}\langle F_{A},\star F_{A}\rangle,$
where $\star$ denotes the Hodge star operator. The Euler-Lagrange equations
for this action are
(17) $d_{A}\star F_{A}=0,$
where $d_{A}=d+A$ is the exterior covariant derivative associated to $A$.
Alternatively, this can be written as $(d_{A}\star d_{A})A=0$.
To move toward the derived-geometric set up in the BV formalism, we must also
consider that there is an action of the gauge group $C^{\infty}(X,G)$ on the
fields $\Omega^{1}(X,\mathfrak{g})$ defined such that for $g\in
C^{\infty}(X,G)$, $g\cdot A$ is $A^{g}:=g^{-1}Ag+g^{-1}dg$. $S_{YM}(A)$ is
invariant under this action, and so the Yang-Mills equations are covariant
with respect to it. Moreover, the infinitesimal gauge action is: for
$\alpha\in C^{\infty}(X)\otimes\mathfrak{g}=\Omega^{0}(X,\mathfrak{g})$,
$A\mapsto d_{A}\alpha=d\alpha+[A,\alpha]\in
T_{A}\Omega^{1}(X,\mathfrak{g})\cong\Omega^{1}(X,\mathfrak{g})$, the tangent
space to the space of connection one-forms at $A$. This action suggests that
we consider the tangent complex (this will be defined precisely later, in
Section 3.1) to $A$ as a point in the stack of connections modulo gauge:
(18)
$\mathbf{T}_{A}[\Omega^{1}(X,\mathfrak{g})/\Omega^{0}(X,\mathfrak{g})]\cong\Omega^{0}(X,\mathfrak{g})[1]\xrightarrow{d_{A}}\Omega^{1}(X,\mathfrak{g}).$
We can begin to define a BV theory for perturbative Yang-Mills about a fixed
solution $A$ by computing the $-1$-shifted cotangent bundle of the above:
(19)
$\Omega^{0}(X,\mathfrak{g})[1]\xrightarrow{d_{A}}\Omega^{1}(X,\mathfrak{g})\xrightarrow{d_{A}\star_{g}d_{A}}\Omega^{n-1}(X,\mathfrak{g})[-1]\xrightarrow{d_{A}}\Omega^{n}(X,\mathfrak{g})[-2]=:\mathscr{E}_{(g,A)}.$
The shifted symplectic pairing comes from (15) and the differential between
$\Omega^{1}(X,\mathfrak{g})$ and $\Omega^{n-1}(X,\mathfrak{g})$ comes from the
equations of motion (17). Also, we are being pedantic in that we are labeling
the Hodge star with the metric used to define it.
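One can verify directly that (19) is a cochain complex. As a hedged sanity check, the sketch below specializes to the abelian case $G=U(1)$ on flat $\mathbf{R}^{2}$ (so all brackets vanish and the Hodge star is explicit) and confirms that consecutive differentials compose to zero; the component conventions and names are ours.

```python
# A sanity check that (19) really is a cochain complex in the abelian case
# (G = U(1), so all brackets vanish), for n = 2 with the flat metric.
# Conventions: 0- and 2-forms are scalars, 1-forms are pairs of components.
import sympy as sp

x, y = sp.symbols('x y')

def d0(f):    # d: Omega^0 -> Omega^1
    return (sp.diff(f, x), sp.diff(f, y))

def d1(w):    # d: Omega^1 -> Omega^2, w1 dx + w2 dy |-> (dw2/dx - dw1/dy) dx^dy
    return sp.diff(w[1], x) - sp.diff(w[0], y)

def star2(f): # Hodge star Omega^2 -> Omega^0 (flat metric, standard orientation)
    return f

def ym(w):    # the middle differential d star d : Omega^1 -> Omega^{n-1} = Omega^1
    return d0(star2(d1(w)))

f = sp.Function('f')(x, y)
w = (sp.Function('w1')(x, y), sp.Function('w2')(x, y))

assert all(sp.simplify(c) == 0 for c in ym(d0(f)))  # (d star d) o d = 0
assert sp.simplify(d1(ym(w))) == 0                  # d o (d star d) = 0
print("consecutive differentials in (19) compose to zero")
```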
###### Remark 2.69.
There is a dependence on the metric in the middle differential (the Yang-Mills
term), but we could in principle compute whether or not the entire complex is
diffeomorphism equivariant: this amounts to checking whether or not the
infinitesimal gauge invariance–described by the differential $d_{A}$ between
$\Omega^{0}(X,\mathfrak{g})[1]$ and $\Omega^{1}(X,\mathfrak{g})$ and also
between $\Omega^{n-1}(X,\mathfrak{g})[-1]$ and
$\Omega^{n}(X,\mathfrak{g})[-2]$–is also diffeomorphism equivariant. Put
plainly, showing the diffeomorphism equivariance of $\mathscr{E}_{(g,A)}$
proves that perturbative Yang-Mills theory is generally covariant as a theory
which also depends on connections modulo (infinitesimal) gauge.
To show that $\mathscr{E}_{(g,A)}$ is $\mathscr{D}$-equivariant, we must show
that the diagram
$\begin{array}{ccccccc}
\Omega^{0}(X,\mathfrak{g})[1] & \xrightarrow{d_{A}} & \Omega^{1}(X,\mathfrak{g}) & \xrightarrow{d_{A}\star_{g}d_{A}} & \Omega^{n-1}(X,\mathfrak{g})[-1] & \xrightarrow{d_{A}} & \Omega^{n}(X,\mathfrak{g})[-2]\\
\downarrow{\scriptstyle f^{*}} & & \downarrow{\scriptstyle f^{*}} & & \downarrow{\scriptstyle f^{*}} & & \downarrow{\scriptstyle f^{*}}\\
\Omega^{0}(X,\mathfrak{g})[1] & \xrightarrow{d_{f^{*}A}} & \Omega^{1}(X,\mathfrak{g}) & \xrightarrow{d_{f^{*}A}\star_{f^{*}g}d_{f^{*}A}} & \Omega^{n-1}(X,\mathfrak{g})[-1] & \xrightarrow{d_{f^{*}A}} & \Omega^{n}(X,\mathfrak{g})[-2]
\end{array}$
commutes, for any diffeomorphism $f\in\mathscr{D}$. Notice that in the lower
complex, the Hodge star is defined by the metric $f^{*}g$ and the fixed
connection form is $f^{*}A$. To begin, let
$\alpha\in\Omega^{0}(X,\mathfrak{g})[1]$. We get
(20)
$f^{*}(d_{A}\alpha)=f^{*}(d\alpha+A\wedge\alpha)=d(f^{*}\alpha)+f^{*}A\wedge
f^{*}\alpha,$
because the exterior derivative $d$ is manifestly covariant and pullbacks
commute with wedge products, even if the forms have $\mathfrak{g}$
coefficients: this expression is then equal to
$(d+f^{*}A)(f^{*}\alpha)=d_{f^{*}A}(f^{*}\alpha)$, which proves that the first
square commutes. Moreover, this same computation shows that the last square
commutes, too.
Next, let $\omega\in\Omega^{1}(X,\mathfrak{g})$. Then:
(21)-(23)
$\begin{aligned}
(d_{A}\star_{g}d_{A})\omega &= d_{A}\star_{g}(d\omega+A\wedge\omega)\\
&= (d+A)\big(\star_{g}d\omega+\star_{g}(A\wedge\omega)\big)\\
&= d\star_{g}d\omega+d\star_{g}(A\wedge\omega)+A\wedge\star_{g}d\omega+A\wedge\star_{g}(A\wedge\omega).
\end{aligned}$
Before we consider the diffeomorphism action, notice that the $L_{\infty}$
structure can be read off from the last expression. Pulling the above back
along $f\in\mathscr{D}$ results in
(24)
$f^{*}((d_{A}\star_{g}d_{A})\omega)=f^{*}(d\star_{g}d\omega)+f^{*}(d\star_{g}(A\wedge\omega))+f^{*}(A\wedge\star_{g}d\omega)+f^{*}(A\wedge\star_{g}(A\wedge\omega)),$
which, when considering that the pullback commutes with the wedge product and
the manifest covariance of the Hodge star, is equal to
(25) $d\star_{f^{*}g}d(f^{*}\omega)+d\star_{f^{*}g}(f^{*}A\wedge
f^{*}\omega)+f^{*}A\wedge\star_{f^{*}g}d(f^{*}\omega)+f^{*}A\wedge\star_{f^{*}g}(f^{*}A\wedge
f^{*}\omega).$
From here, we see that this is equal to
$(d+f^{*}A)\star_{f^{*}g}(d+f^{*}A)(f^{*}\omega)$, which is what we wanted. We
have therefore shown the following:
###### Theorem 2.70.
The bundle of $L_{\infty}$ algebras
$\mathscr{E}(X)\to\mathrm{Met}(X)\times\Omega^{1}(X,\mathfrak{g})$ with fibers
$\mathscr{E}_{(g,A)}(X)=\Omega^{0}(X,\mathfrak{g})[1]\xrightarrow{d_{A}}\Omega^{1}(X,\mathfrak{g})\xrightarrow{d_{A}\star_{g}d_{A}}\Omega^{n-1}(X,\mathfrak{g})[-1]\xrightarrow{d_{A}}\Omega^{n}(X,\mathfrak{g})[-2],$
is $\mathrm{Diff}(X)$-equivariant. In other words, perturbative Yang-Mills
theory is generally covariant, and the above bundle descends to a bundle of
stacks:
(26)
$\mathscr{E}(X)\to[(\mathrm{Met}(X)\times\Omega^{1}(X,\mathfrak{g}))/\mathrm{Diff}(X)].$
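Theorem 2.70 ultimately rests on the naturality of $d$, $\wedge$, and $\star$ under pullback, as used in (20)-(25). As a machine-checkable illustration (ours, with an arbitrary symbolic metric and an orientation-preserving diffeomorphism of our choosing), the sketch below verifies $f^{*}(\star_{g}\omega)=\star_{f^{*}g}(f^{*}\omega)$ on 1-forms in two dimensions.

```python
# Naturality of the 2d Hodge star on 1-forms under pullback. All names ours.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
g11, g12, g22 = [sp.Function(s) for s in ('g11', 'g12', 'g22')]
w1, w2 = [sp.Function(s) for s in ('w1', 'w2')]

def metric(p, q):
    return sp.Matrix([[g11(p, q), g12(p, q)], [g12(p, q), g22(p, q)]])

def omega(p, q):
    return sp.Matrix([w1(p, q), w2(p, q)])

def star(gm, w):
    # (star w)_mu = sqrt(det g) * eps_{mu nu} (g^{-1} w)^nu, with eps_{12} = 1
    wup = gm.inv() * w
    s = sp.sqrt(gm.det())
    return sp.Matrix([s * wup[1], -s * wup[0]])

# A concrete orientation-preserving diffeomorphism of R^2 (det J = 1).
f = sp.Matrix([x, y + x**2])
J = f.jacobian([x, y])

pull_form = lambda w_at: J.T * w_at            # (f^* w) = J^T (w o f)
pull_metric = lambda g_at: J.T * g_at * J      # f^* g  = J^T (g o f) J

lhs = pull_form(star(metric(u, v), omega(u, v)).subs({u: f[0], v: f[1]}))
rhs = star(pull_metric(metric(*f)), pull_form(omega(*f)))
print((lhs - rhs).applyfunc(sp.simplify))      # expected: the zero matrix
```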
###### Remark 2.71.
We would like to remind the reader that in the style of Subsection 2.3.1, we
can abandon the specific choice of $X$ in (26) and the result is a functor
from $\mathbf{Riem}_{n}$ to the category of bundles of $L_{\infty}$ algebras
over stacks. In this case, what replaces the post-composed functor
$\mathrm{Obs^{cl}}:\mathbf{Riem}_{n}\to\mathbf{dgAlg}$ of Proposition 2.43? To
be more specific, what happens when we input a bundle of $L_{\infty}$ algebras
over a stack to output something in $\mathbf{dgAlg}$ instead of a lone
$L_{\infty}$ algebra? It is of great interest to elaborate more on these
functors in future work.
###### Remark 2.72.
Each of the fibers $\mathscr{E}_{(g,A)}$ describes the formal neighborhood of
a solution $A$ to the Yang-Mills equations as an element of the stack
$[\Omega^{1}(X,\mathfrak{g})/C^{\infty}(X,G)]$ of connections modulo gauge
transformation, and with background metric $g\in\mathscr{M}$: in this sense,
we can view the preceding bundle as parameterizing formal stacks describing
solutions to Yang-Mills modulo gauge living over the stack of metrics modulo
diffeomorphism. More on formal neighborhoods in
$[\Omega^{1}(X,\mathfrak{g})/C^{\infty}(X,G)]$ can be found in the paper [9]
on spontaneous symmetry breaking.
However, we should note that the preceding equivariance computations work out
perfectly well if we don’t treat them perturbatively: after all, the Yang-
Mills term in (17) is diffeomorphism equivariant with respect to both
connection one-forms and metrics. The caveat is that by using $L_{\infty}$
algebras, we are implicitly invoking Theorem 2.0.2 in [15], in which the
correspondence between $L_{\infty}$ algebras and formal moduli spaces is
specified; however, if we formally substitute $\omega=A$ in the above
computations, the equivariance property holds for what is evidently the
nonperturbative case. I am interested to see how this can be remedied further
to have a globalized version of Theorem 2.70.
###### Remark 2.73.
Theorem 2.70 states a version of general covariance in which additional
physical fields are inextricably linked to $\mathrm{Met}(X)$ in the moduli
stack. Indeed, any tensor field (even when taking values in some Lie algebra,
for example) is defined with regard to its behavior under diffeomorphisms. So
then why do we state general covariance in terms of metrics modulo
diffeomorphism? The answer is that in the development of the theory of general
relativity, a key observation was Einstein’s equivalence principle.
In general relativity, metrics represent the dynamical variables of the
gravitational field, but any freely falling observer in a gravitational field
can choose coordinates so that they are in an inertial frame: in other words,
any metric can be altered by some diffeomorphism to be locally Euclidean or
Lorentzian. In this sense, metrics and diffeomorphisms are intimately related
when specifying gravitational dynamics, and so we use the associated stack as
a baseline for quantifying general covariance. Further details are provided in
Section 3 of [16].
## 3 Formal Stacks
In Section 4, we will make a connection to the version of the classical
Noether’s Theorem as described in Chapter 12 of [8]. However, we must first
cross the bridge from the world of global stacks as we defined them in Section
2.4 to the world of formal moduli spaces, which are examples of formal stacks.
A key step is to associate to a (differential graded) equivariant vector
bundle a vector bundle over a formal moduli space (formal moduli spaces are
alternatively named “formal moduli problems”): in our case, this formal
moduli space is a formal neighborhood in a quotient stack. In Section 4, this
formal moduli space will be the formal neighborhood of a fixed metric in the
moduli stack of metrics modulo diffeomorphism.
### 3.1 Tangent Complexes
The goal of the next portion is to understand what object we can associate to
a point in a stack that plays a role analogous to the tangent space of an
ordinary manifold. These “tangent complexes” are necessary to compute function
rings on formal neighborhoods in stacks, making them locally ringed spaces.
###### Construction 3.1.
Let $M$ be a smooth $G$-space and let $\textrm{Stab}(p)\subseteq G$ be the
stabilizer subgroup of $p\in M$. The $G$-orbit of $p$ thus looks like a copy
of $G/\textrm{Stab}(p)$ lying in $M$. If we consider the map $t_{p}:G\to M$
defined as $t_{p}(g)=g\cdot p$ ($t_{p}$ is in fact the target map for the
Lie groupoid $M\times G\rightrightarrows M$ with $p\in M$ fixed in $M\times
G$), then its differential $dt_{p}$ can be used to define a 2-term cochain
complex of vector spaces:
(27) $0\to\mathfrak{g}[1]\xrightarrow{dt_{p}}T_{p}M\to
0=:\mathbf{T}_{p}[M/G],$
where $\mathfrak{g}$ is in cohomological degree $-1$ and $T_{p}M$ is in degree
$0$. Alternative notation is $\mathbf{T}_{p}[M/G]=(\mathfrak{g}[1]\oplus
T_{p}M,dt_{p})$. Note that $\textrm{Stab}(p)$ could be discrete here, although
that is not seen in $\mathbf{T}_{p}[M/G]$. We can also compute
$\mathrm{ker}(dt_{p})=H^{-1}(\mathbf{T}_{p}[M/G])=\mathrm{Lie}(\mathrm{Stab}(p)).$
Thus, if $H^{-1}(\mathbf{T}_{p}[M/G])=0$, then the coarse quotient $M/G$ is an
ordinary manifold at that point, since the action is free nearby it.
$H^{0}(\mathbf{T}_{p}[M/G])$ is the quotient of $T_{p}M$ by
$\textrm{im}(dt_{p})$: it is the usual tangent space of the coarse quotient at
points $p\in M$ where the action is free. As it turns out, this is exactly the
tangent object we are looking for, as the notation suggests: a precise
statement and further details can be found in [1].
###### Proposition 3.2.
The tangent complex to the quotient stack $[M/G]$ at a point $[p]$ is exactly
$\mathbf{T}_{p}[M/G]$ as defined in equation (27).
This inspired the saying that “smooth stacks are geometric spaces whose
tangent spaces are complexes concentrated in nonpositive cohomological
degree”. In the case of quotient stacks, we’re lucky to have a concrete way of
realizing their associated tangent complexes.
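To see Construction 3.1 in the simplest nontrivial case, let $SO(2)$ act on $M=\mathbf{R}^{2}$ by rotation. The short sketch below (ours) computes $dt_{p}$ symbolically and exhibits how $H^{-1}(\mathbf{T}_{p}[M/G])$ jumps at the fixed point.

```python
# The tangent complex of [R^2 / SO(2)] at a point p, following Construction 3.1.
import sympy as sp

theta, p1, p2 = sp.symbols('theta p1 p2')
R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])
p = sp.Matrix([p1, p2])

# t_p : SO(2) -> R^2 is t_p(theta) = R(theta) p; dt_p is its derivative at theta = 0.
dt_p = sp.diff(R * p, theta).subs(theta, 0)
print(dt_p.T)  # Matrix([[-p2, p1]]): the rotation generator applied to p

# At p = 0 the stabilizer is all of SO(2): dt_p = 0, so H^{-1} = so(2) is nonzero
# and the coarse quotient is singular there; at p != 0 the action is free and
# dt_p is injective, so H^{-1} = 0.
print(dt_p.subs({p1: 0, p2: 0}).T)  # Matrix([[0, 0]])
```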
###### Remark 3.3.
If we take the union of all the complexes $\mathbf{T}_{p}[M/G]$ over all $p\in
M$, we get a complex of vector bundles over $M$:
$0\to\underline{\mathfrak{g}}\xrightarrow{dt}TM\to 0,$
where $\underline{\mathfrak{g}}=M\times\mathfrak{g}$, considering that the
base space $M$ is implicit. $\underline{\mathfrak{g}}$ is called the Lie
algebroid associated to the action Lie groupoid $M//G$, and $dt$ is called the
anchor map of the Lie algebroid. This is a primordial example of a Lie
algebroid.
###### Example 3.4.
Consider the natural action of the group of diffeomorphisms $\mathscr{D}$ of a
manifold $X$ on the space of Riemannian metrics $\mathscr{M}$:
$t_{g}(f)=f^{*}g$. According to [14], the Lie algebra of $\mathscr{D}$ at the
identity diffeomorphism is $\mathrm{Vect}(X)$, the set of vector fields: this
will be the degree $-1$ part of our tangent complex.
We know that $T_{g}\mathscr{M}\cong\Gamma(X,\textrm{Sym}^{2}(T_{X}^{\vee}))$,
so that we can compute
(28)
$\mathbf{T}_{g}[\mathscr{M}/\mathscr{D}]=(\Gamma(X,T_{X})[1]\oplus\Gamma(X,\textrm{Sym}^{2}(T_{X}^{\vee})),dt_{g}).$
Then, given $V\in\Gamma(T_{X})$, $dt_{g}(V)=L_{V}g$, where $L_{V}g$ is the Lie
derivative of $g$ along $V$: one can see this by considering the one-parameter
family of diffeomorphisms $f=\textrm{exp}(tV)$ (i.e. letting $V$ be the
infinitesimal generator of $f$) and computing the derivative at $t=0$ of the
action of $f$ on $g$. Not all diffeomorphisms can be written this way: after
all, $\mathscr{D}$ is not even a simply connected Lie group. Even worse, there
are diffeomorphisms which are infinitesimally close to the identity
diffeomorphism which cannot be written as $\textrm{exp}(tV)$ for some $V$
[14]; however, we need not worry about this in what is to come, as will be
explained in Section 4.
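The identification $dt_{g}(V)=L_{V}g$ can be checked symbolically for an explicit flow. In the sketch below (our example) we take the rotation flow on $\mathbf{R}^{2}$, whose generator is $V=(-y,x)$, differentiate the pullback $f_{t}^{*}g$ of an arbitrary symbolic metric at $t=0$, and compare with the coordinate formula for $L_{V}g$.

```python
# Checking dt_g(V) = L_V g from Example 3.4 for the rotation flow on R^2.
import sympy as sp

x, y, t = sp.symbols('x y t')
g11, g12, g22 = [sp.Function(s) for s in ('g11', 'g12', 'g22')]

def metric(p, q):
    return sp.Matrix([[g11(p, q), g12(p, q)], [g12(p, q), g22(p, q)]])

f = sp.Matrix([x*sp.cos(t) - y*sp.sin(t), x*sp.sin(t) + y*sp.cos(t)])  # exp(tV)
V = sp.Matrix([-y, x])                                                  # generator
J = f.jacobian([x, y])

# Left side: (d/dt) f_t^* g at t = 0, with f^* g = J^T (g o f) J.
pullback = J.T * metric(f[0], f[1]) * J
lhs = sp.diff(pullback, t).subs(t, 0).applyfunc(lambda e: e.doit())

# Right side: the coordinate formula for the Lie derivative L_V g.
g, coords = metric(x, y), (x, y)
LVg = sp.Matrix(2, 2, lambda m, n:
    sum(V[r]*sp.diff(g[m, n], coords[r]) for r in range(2))
    + sum(g[r, n]*sp.diff(V[r], coords[m]) for r in range(2))
    + sum(g[m, r]*sp.diff(V[r], coords[n]) for r in range(2)))

print((lhs - LVg).applyfunc(sp.simplify))  # expected: the zero matrix
```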
###### Remark 3.5.
We will now show the relevance of the above for field theories by introducing
notation and a lemma: we will perform the relevant computations for the
example of the $\mathscr{D}$-equivariant differential graded vector bundle
$(\mathscr{F},Q)\to\mathscr{M}$ with fibers
$\mathscr{F}_{g}=C^{\infty}(X)\xrightarrow{Q_{g}}\textrm{Dens}(X)[-1]$ and
differential $Q_{g}\varphi=\Delta_{g}\varphi\mathrm{vol}_{g}$. Let us call
actions described in Lemma 2.34
$\tau_{\mathscr{M}}:\mathscr{D}\to\mathrm{Diff}(\mathscr{M})$ and
$\tau_{\mathscr{F}}:\mathscr{D}\to\mathrm{Diff}(\mathscr{F})$. There is also
an “action” on the differential, sending $Q_{g}$ to $Q_{f^{*}g}$ for
$f\in\mathscr{D}$: it clearly comes from the action of $\mathscr{D}$ on
$\mathscr{M}$, but also nicely intertwines with the input and output of the
differential, as described in general covariance.
To get the infinitesimal version of these actions we use computations similar
to those in Example 3.4, keeping in mind that
$\mathrm{Lie}(\mathscr{D})\cong\mathrm{Vect}(X)$. The map
$\tau_{\mathscr{M}_{g}}:\mathrm{Vect}(X)\to T_{g}\mathscr{M}$ is what we
already considered earlier, namely $V\mapsto L_{V}g$, and the fibers are
similar:
###### Lemma 3.6.
The action of $\mathscr{D}$ on the underlying graded vector space of
$\mathscr{F}_{g}$ defines an action of $\mathrm{Vect}(X)$ on
$\mathscr{F}_{g}$. It has a degree 0 part
$\tau_{\mathscr{F}_{g}}:\mathrm{Vect}(X)\to T_{\varphi}C^{\infty}(X)\cong
C^{\infty}(X)$ and a degree 1 part $\tau_{\mathscr{F}_{g}}:\mathrm{Vect}(X)\to
T_{\mu}\mathrm{Dens}(X)\cong\mathrm{Dens}(X)$; they are, respectively,
$V\mapsto L_{V}\varphi$ and $V\mapsto L_{V}\mu$.
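In coordinates, the degree 0 part acts on a function by $L_{V}\varphi=V^{\mu}\partial_{\mu}\varphi$ and the degree 1 part acts on the coefficient of a density $\mu=m\,d^{n}x$ by $L_{V}\mu=\partial_{\mu}(V^{\mu}m)\,d^{n}x$. A quick symbolic check of their compatibility (the Leibniz rule) in two dimensions, with all names ours:

```python
# Leibniz rule L_V(phi mu) = (L_V phi) mu + phi (L_V mu) for the actions in
# Lemma 3.6, in two dimensions; densities are recorded by their coefficients.
import sympy as sp

x, y = sp.symbols('x y')
v1, v2, phi, m = [sp.Function(s)(x, y) for s in ('v1', 'v2', 'phi', 'm')]
V = (v1, v2)

def LV_fn(f):       # Lie derivative of a function: V . grad(f)
    return V[0]*sp.diff(f, x) + V[1]*sp.diff(f, y)

def LV_dens(dens):  # Lie derivative of a density coefficient: div(V * dens)
    return sp.diff(V[0]*dens, x) + sp.diff(V[1]*dens, y)

lhs = LV_dens(phi*m)
rhs = LV_fn(phi)*m + phi*LV_dens(m)
print(sp.simplify(lhs - rhs))  # expected output: 0
```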
Although we provided this example for clarity, such a lemma holds for any
generally covariant BV field theory as we defined it in Definition 2.39, since
tangent vectors can be defined for a Fréchet manifold by means of it being
locally modeled by Fréchet spaces. The most interesting and physically
relevant detail which must be addressed is what happens to the differential
$\\{S_{g},-\\}$ under this infinitesimal action: this will be the content of
Section 4.
### 3.2 Chevalley-Eilenberg Cochains as Rings of Functions
We start with an action of a finite dimensional Lie group $G$ on a finite
dimensional manifold $M$, and then specialize to the case of
$M=\mathbf{R}^{n}$ to consider some concrete computations. In the example of
diffeomorphisms of a manifold $X$ acting on the space of Riemannian metrics on
$X$, $\textrm{Met}(X)=\mathscr{M}$ is a convex cone in
$\Gamma(X,\textrm{Sym}^{2}(T^{\vee}X))$, so that we will be eventually
specializing these constructions to vector spaces or “nice” subsets thereof
anyway.
###### Construction 3.7.
Let $\widehat{M}_{p}$ denote the formal neighborhood of $p\in M$, defined so
that its ring of functions $\mathscr{O}(\widehat{M}_{p})$ is the jets of
$\mathscr{O}(M):=C^{\infty}(M)$ at $p$, and denote the inclusion map
$\hat{p}:\widehat{M}_{p}\to M$: this is equivalent to the restriction map
$\mathscr{O}(M)\to\mathscr{O}(\widehat{M}_{p})$. It is known that
$\mathscr{O}(\widehat{M}_{p})\cong\widehat{\textrm{Sym}}(T_{p}^{\vee}M)$, the
Taylor series ring around $p\in M$, although this isomorphism is not
canonical. We will use the latter, and call the Taylor series ring
$\widehat{\mathscr{O}}_{p}$ when unambiguous.
The action of $G$ on $M$ is defined by a map $P:G\to\textrm{Diff}(M)$. Taking
its total derivative gives us a map $\rho:\mathfrak{g}\to\textrm{Vect}(M)$ of
Lie algebras, where we choose to view $\textrm{Vect}(M)$ as derivations of
$\mathscr{O}(M)$. We then restrict the action of $\textrm{Vect}(M)$ on
$C^{\infty}(M)$ to get an action of $\textrm{Vect}(\widehat{M}_{p})$ on
$C^{\infty}(\widehat{M}_{p})\cong\widehat{\mathscr{O}}_{p}$. The differential
on $\mathbf{T}_{p}[M/G]$ encodes $\rho:\mathfrak{g}\to\textrm{Vect}(M)$ at the
point $p$ and thus on the formal neighborhood $\widehat{M}_{p}$ of $p$ since
$\rho$ is a map of Lie algebras: this gives us
$\mathfrak{g}\to\textrm{Vect}(\widehat{M}_{p})$. Noting that
$\textrm{Vect}(\widehat{M}_{p})\cong\textrm{Der}(\widehat{\mathscr{O}}_{p})$
recovers the action of $\mathfrak{g}$ on $\widehat{\mathscr{O}}_{p}$ via
derivations, this allows us to define
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p})\cong
C^{\bullet}(\mathfrak{g})\otimes\widehat{\mathscr{O}}_{p}$ in the traditional
way.
###### Lemma 3.8.
Chevalley-Eilenberg (CE) cochains of the differential graded Lie algebra
defined by shifting $\mathbf{T}_{p}[M/G]$ up one degree, denoted
$C^{\bullet}(\mathfrak{g}\xrightarrow{dt_{p}}T_{p}M[-1])$, and
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p})$ are isomorphic as
differential graded commutative algebras. Moreover,
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p})$ is the ring of functions
on the formal neighborhood of $[p]\in[M/G]$.
###### Proof.
It is a quick exercise to show that the underlying graded commutative algebras
of $C^{\bullet}(\mathfrak{g}\xrightarrow{dt_{p}}T_{p}M[-1])$ and
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p})$ are identical, as long
as one is careful to employ the noncanonical isomorphism
$\mathscr{O}(\widehat{M}_{p})\cong\widehat{\textrm{Sym}}(T_{p}^{\vee}M)$. From
there, it is sufficient to show that the Chevalley-Eilenberg differentials are
equivalent, which is also left as a brief exercise. ∎
###### Remark 3.9.
This lemma implies that the dg Lie algebra
(29) $\mathfrak{g}_{p}:=(\mathfrak{g}\oplus
T_{p}M[-1],dt_{p},[-,-]_{\mathfrak{g}})$
is of utmost importance. To say more about this, we must state a definition:
###### Definition 3.10 (Definition 3.1.2 in [8]).
A formal (pointed) moduli problem over $k$ is a functor of simplicially
enriched categories
$F:\mathbf{dgArt}_{k}\to\mathbf{sSets},$
where $\mathbf{dgArt}_{k}$ is the category of (local) Artinian dg algebras
over $k$ and $\mathbf{sSets}$ the category of simplicial sets, which
satisfies: (1) $F(k)$ is contractible. (2) $F$ takes surjective maps in
$\mathbf{dgArt}_{k}$ to fibrations in $\mathbf{sSets}$. (3) For
$A,B,C\in\mathbf{dgArt}_{k}$ and surjections $B\to A$ and $C\to A$ (meaning we
can define the fiber product $B\times_{A}C$), we require that the following
natural map is a weak equivalence:
$F(B\times_{A}C)\to F(B)\times_{F(A)}F(C).$
Clearly, this can be viewed as a “localization” of the traditional algebro-
geometric definition of a stack as a functor $\mathbf{CRing}\to\mathbf{Gpd}$
satisfying descent. What follows in the rest of this section and in Section 4
depends on the following theorem, which allows us to connect the above objects
to the more concrete dg Lie algebras and $L_{\infty}$ algebras we use for
computations:
###### Theorem 3.11 (Theorem 2.0.2 in [15]).
There is an equivalence of $(\infty,1)$-categories between the category
$\mathbf{Lie}_{k}$ of differential graded Lie algebras over a characteristic
zero field $k$ and the category $\mathbf{Moduli}_{k}$ of formal pointed moduli
problems over $k$.
The homotopy category of $L_{\infty}$ algebras is equivalent to the homotopy
category of dg Lie algebras, so that the above remains true in that case (as
is relevant for us). Theorem 3.11 confirms that the dg Lie algebra
$\mathfrak{g}_{p}$ completely defines the data of the formal neighborhood of
$[p]$ in $[M/G]$, as we suspected from Lemma 3.8.
###### Remark 3.12.
Much like how a quotient stack “builds in” group action data into its
definition, functions on a formal neighborhood $\widehat{[M/G]}_{p}$ in the
stack, namely $C^{\bullet}(\mathfrak{g},\hat{\mathscr{O}}_{p})$, have “built
into” them all of the $\mathfrak{g}$-invariant data. Concretely,
$C^{\bullet}(\mathfrak{g},\hat{\mathscr{O}}_{p})$ has the usual ring of
functions $\hat{\mathscr{O}}_{p}$ as a subset: tensoring with
$\textrm{Sym}(\mathfrak{g}^{\vee}[-1])$ and imposing the Chevalley-Eilenberg
differential remembers the data of $\mathfrak{g}$ acting on $\widehat{M}_{p}$,
and therefore on $\mathscr{O}(\widehat{M}_{p})\cong\hat{\mathscr{O}}_{p}$ as
well. To see how these ideas unfold in action, we refer the reader to Appendix
5.1.
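Before moving on, here is a finite-dimensional shadow of these constructions (our toy example, in the spirit of Appendix 5.1): the one-dimensional abelian Lie algebra acting on truncated Taylor series by differentiation, for which $H^{0}$ of the CE cochains recovers the invariant (constant) functions.

```python
# A finite-dimensional shadow of Lemma 3.8 / Remark 3.12: g = R.(d/dx) acting
# on truncated Taylor series O_hat = R[x]/(x^N). The CE complex is
# O_hat --d/dx--> O_hat (x) g^dual, and H^0 recovers the invariants.
# The truncation order N and all names are ours.
import sympy as sp

N = 6
x = sp.symbols('x')
basis = [x**k for k in range(N)]

# Matrix of the CE differential d(f) = xi^dual (x) f' in the monomial basis.
D = sp.zeros(N, N)
for j, b in enumerate(basis):
    db = sp.Poly(sp.diff(b, x), x)
    for k in range(N):
        D[k, j] = db.coeff_monomial(x**k)

H0 = D.nullspace()       # invariants: ker(d/dx)
print(len(H0), H0[0].T)  # 1-dimensional, spanned by the constant function 1
print(N - D.rank())      # dim coker = 1 here, a truncation artifact: on honest
                         # power series every series is a derivative
```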
###### Example 3.13.
In light of this lemma and Example 3.4, the (Fréchet) dg Lie algebra we must
consider in the context of general covariance is thus
(30)
$\mathfrak{g}_{g}:=\mathfrak{g}_{g}(X):=\Gamma(X,T_{X})\xrightarrow{L_{\bullet}g}\Gamma(X,\mathrm{Sym}^{2}(T^{\vee}_{X}))[-1].$
Note that if don’t specify evaluation of $\mathfrak{g}_{g}$ on all of $X$, it
becomes a sheaf on the site $\mathbf{Riem}_{n}$ introduced in Subsection
2.3.1. The metric $g$ should also not be specified in that case, but we leave
it in the notation to distinguish the above from a generic dg Lie algebra.
By applying Lemma 3.8, we see that the ring of functions on the formal
neighborhood of $[g]\in[\mathscr{M}/\mathscr{D}]$ is
$C^{\bullet}(\mathrm{Vect}(X),\mathscr{O}(\widehat{\mathscr{M}}_{g}))\cong
C^{\bullet}(\mathrm{Vect}(X))\otimes\widehat{\mathrm{Sym}}(T_{g}^{\vee}\mathscr{M})$,
which we interpret as the derived invariants of
$\mathscr{O}(\widehat{\mathscr{M}}_{g})$ with respect to the
$\mathscr{D}$-action. Our definition of general covariance from earlier when
properly “localized” would imply that the observables of such a field theory
$\mathscr{F}_{g}$ over $g\in\mathscr{M}$ form a module over
$C^{\bullet}(\mathrm{Vect}(X),\mathscr{O}(\widehat{\mathscr{M}}_{g}))=C^{\bullet}(\mathfrak{g}_{g})$:
this is exactly what is shown in Proposition 4.3.
###### Remark 3.14.
It should be noted that because $\mathrm{Vect}(X)$ and
$T_{g}^{\vee}\mathscr{M}$ are infinite dimensional, the definition of
$C^{\bullet}(\mathfrak{g}_{g})\cong
C^{\bullet}(\mathrm{Vect}(X))\otimes\widehat{\mathrm{Sym}}(T_{g}^{\vee}\mathscr{M})$
is not precisely the one from the finite dimensional case: in particular, we
have shown that $\mathrm{Vect}(X)$ is a Fréchet Lie algebra and
$T_{g}^{\vee}\mathscr{M}$ is a Fréchet vector space, and so the tensor product
used for the preceding objects is the completed projective tensor product used
in Definition 2.2. In this way, $C^{\bullet}(\mathfrak{g}_{g})$ represents the
same data as it would if its inputs were finite dimensional, but we are just a
bit more careful with functional analytic issues to ensure that all of the
rings are well defined.
### 3.3 Vector Bundles over a Formal Stack
Now that we have made things concrete with an example, we’d like to understand
vector bundles in this context. We’re primarily concerned with perturbative
computations (those in the style of Construction 3.7); however, we will
present the global picture first, since general covariance is first presented
in such a context.
###### Construction 3.15.
Let $V$ be a $G$-equivariant vector bundle over $M$, for which the action
$\tau_{M}:G\to\mathrm{Diff}(M)$ is not necessarily free. Call the action on
the total space of $V\to M$ by $\tau_{V}:G\to\mathrm{Diff}(V)$. Recall from
Example 2.46 that we get the pair of Lie groupoids $\mathcal{V}_{\mathcal{G}}$
and $\mathcal{M}_{\mathcal{G}}$ with a map
$\pi:\mathcal{V}_{\mathcal{G}}\to\mathcal{M}_{\mathcal{G}}$ between them. This
information in turn presents a pair of stacks, and the projection gives us a
map $\pi:[V/G]\to[M/G]$ between those stacks. Here, $[V/G]$ is a vector space
object in the category of stacks over the stack $[M/G]$, much like how
$\mathcal{V}_{\mathcal{G}}$ is a vector space object in the category of Lie
groupoids over the Lie groupoid $\mathcal{M}_{\mathcal{G}}$.
The action of a finite dimensional Lie group $G$ on a finite dimensional $M$
restricts to an action of the formal group
$\widehat{G}\overset{\textrm{exp}}{\cong}\mathfrak{g}$ (defined as the formal
neighborhood of the identity in $G$) on $\widehat{M}_{p}$, the formal
neighborhood of $p\in M$. This defines a formal Lie groupoid which then
presents the stack $[\widehat{M}_{p}/\widehat{G}]\cong\widehat{[M/G]}_{p}$,
whose rings of functions we computed earlier to be
$C^{\bullet}(\mathfrak{g}_{p})$, so that $\mathfrak{g}_{p}$ is the dg Lie
algebra associated to the formal moduli problem $\widehat{[M/G]}_{p}$.
We can pull back the $G$-equivariant vector bundle $V\to M$ along
$\hat{p}:\widehat{M}_{p}\to M$ to get a $\mathfrak{g}$-equivariant vector
bundle $\hat{p}^{*}V\to\widehat{M}_{p}$. Topologically, the total space of
$\hat{p}^{*}V$ is the formal neighborhood of the entire fiber
$\pi^{-1}(p)=V_{p}$, which we can think of heuristically as
$V_{p}\times\widehat{M}_{p}$. Both parts of this product have an action of
$\widehat{G}$, even though one of the directions is a formal space and the
other a vector space which is not necessarily viewed as formal (i.e. its ring
of functions may be polynomials, and not power series). Thus, we can consider
the associated formal Lie groupoid here as well, and it presents the stack
$[(\hat{p}^{*}V)/\widehat{G}]$.
The vector bundle which plays the local role of the global stack
$[V/G]\to[M/G]$ is therefore
$[(\hat{p}^{*}V)/\widehat{G}]\to[\widehat{M}_{p}/\widehat{G}].$
On account of $C^{\bullet}(\mathfrak{g}_{p})$ being the space of functions on
$[\widehat{M}_{p}/\widehat{G}]$, we see that a section
$\sigma:[\widehat{M}_{p}/\widehat{G}]\to[(\hat{p}^{*}V)/\widehat{G}]$ is an
element of $C^{\bullet}(\mathfrak{g}_{p})\otimes V_{p}$. This is the
stackified and deformation-theoretic version of a section of $V\to M$ being an
element of $C^{\infty}(M)\otimes V_{p}$ in local coordinates near $p$.
Moreover, this reasoning results in the following lemma.
###### Lemma 3.16.
The ring of functions on
$[(\hat{p}^{*}V)/\widehat{G}]\cong[(\hat{p}^{*}V)/\mathfrak{g}]$ is
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p}\otimes\mathrm{Sym}(V_{p}^{\vee}))\cong
C^{\bullet}(\mathfrak{g}_{p},\mathrm{Sym}(V_{p}^{\vee}))$, which is isomorphic
as a graded ring to
$C^{\bullet}(\mathfrak{g}_{p})\otimes\mathrm{Sym}(V_{p}^{\vee})$.
###### Proof.
The definition of $[(\hat{p}^{*}V)/\mathfrak{g}]$ implies that
$\mathscr{O}([(\hat{p}^{*}V)/\mathfrak{g}])$ must be the derived
$\mathfrak{g}$-invariant functions on the space $\widehat{M}_{p}\times V_{p}$.
Given that $\mathscr{O}(\widehat{M}_{p}\times
V_{p})=\widehat{\mathscr{O}}_{p}\otimes\mathrm{Sym}(V_{p}^{\vee})$ and that
both parts of this tensor product are $\mathfrak{g}$-modules, we can define
the CE cochains
$C^{\bullet}(\mathfrak{g},\widehat{\mathscr{O}}_{p}\otimes\mathrm{Sym}(V_{p}^{\vee}))$.
In conjunction with Lemma 3.8, these are the derived $\mathfrak{g}$-invariant
functions we are looking for. To see that the differential graded rings are
isomorphic, we simply note that the CE differential on both is
$d_{CE}=[-,-]^{\vee}_{\mathfrak{g}}+\tau_{M_{p}}^{\vee}+\tau_{V_{p}}^{\vee},$
where $\tau_{M_{p}}^{\vee}$ and $\tau_{V_{p}}^{\vee}$ are the “duals” (as in
Appendix 5.1) to the induced actions $\tau_{M_{p}}$ and $\tau_{V_{p}}$ on
$\widehat{\mathscr{O}}_{p}$ and $\mathrm{Sym}(V_{p}^{\vee})$, respectively. ∎
## 4 Field Theories as Bundles over Formal Stacks
### 4.1 Equivariant Observables
Next we shall consider how $Q_{g}$ and more generally $\\{S_{g},-\\}$ behave
under arbitrary perturbations $g+\varepsilon h$, for $h\in T_{g}\mathscr{M}$,
and then invoke that $h=L_{V}g$ comes from a vector field $V$ to see what the
effect is. But we already have the machinery to do this! The preceding
sentence amounts to pulling back the dg vector or $L_{\infty}$ bundle
$(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ to be over the formal neighborhood of
$g\in\mathscr{M}$, and seeing what the “full differential”
$\\{S_{g+\varepsilon h},-\\}$ looks like over this formal neighborhood.
###### Remark 4.1.
Although in finite dimensions we have
$\widehat{G}\overset{\textrm{exp}}{\cong}\mathfrak{g}$, we mentioned in
Example 3.4 that it was no longer the case that there is a bijection between
the formal neighborhood of the identity diffeomorphism in $\mathrm{Diff}(X)$
and its Lie algebra $\mathrm{Vect}(X)$ of vector fields: this could ostensibly
be cause for alarm. However, by Lurie’s Theorem 3.11 it remains true even in
the infinite dimensional case that the dg Lie algebra $\mathfrak{g}_{g}$
introduced earlier is in correspondence with the formal neighborhood of
$[g]\in[\mathscr{M/D}]$. This will be an assurance in what follows.
###### Construction 4.2.
A family $(\mathscr{F},\\{S,-\\})$ of BV field theories defined as a dg vector
or $L_{\infty}$ bundle over $\mathscr{M}$ pulls back to an appropriate bundle
over the formal neighborhood of $g\in\mathscr{M}$, denoted
$\widehat{\mathscr{M}}_{g}$, where
$\mathscr{O}(\widehat{\mathscr{M}}_{g})=\widehat{\mathscr{O}}_{g}\cong\widehat{\mathrm{Sym}}(T_{g}^{\vee}\mathscr{M})$.
Heuristically, this pullback looks like
$\widehat{\mathscr{M}}_{g}\times\mathscr{F}_{g}$.
We get an analogous pullback of stacks when the theory is generally covariant.
In this case, the $\mathscr{D}$-equivariant bundle
$(\mathscr{F},\\{S,-\\})\to\mathscr{M}$ is equivalent to a bundle of stacks
$([\mathscr{F}/\mathscr{D}],\\{S,-\\})\to[\mathscr{M}/\mathscr{D}]$. If we
consider an equivalence class of metrics $[g]\in[\mathscr{M}/\mathscr{D}]$ and
fix its formal neighborhood, we can pull back
$([\mathscr{F}/\mathscr{D}],\\{S,-\\})$ over this formal neighborhood. We
denote the total space of this pullback as
$\widehat{[\mathscr{F}/\mathscr{D}]}_{g}$. We can then conclude:
###### Proposition 4.3.
For a generally covariant family
$\pi:([\mathscr{F}/\mathscr{D}],\\{S,-\\})\to[\mathscr{M}/\mathscr{D}]$ of BV
classical field theories and for a fixed $[g]\in[\mathscr{M}/\mathscr{D}]$, we
have
(31) $\mathscr{O}(\widehat{[\mathscr{F}/\mathscr{D}]}_{g})\cong
C^{\bullet}(\mathfrak{g}_{g},\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g})).$
###### Proof.
By the equivalence of categories in Theorem 3.11 which we are taking for
granted, the formal moduli space defined by the formal neighborhood of
$[g]\in[\mathscr{M/D}]$ is equivalent to the dg Lie algebra
$\mathfrak{g}_{g}=\Gamma(X,T_{X})\xrightarrow{L_{\bullet}g}\Gamma(X,\mathrm{Sym}^{2}(T^{\vee}_{X}))[-1].$
The dg algebra of functions on this formal neighborhood is thus
$C^{\bullet}(\mathfrak{g}_{g})\cong
C^{\bullet}(\mathrm{Vect}(X),\widehat{\mathrm{Sym}}(T_{g}^{\vee}\mathscr{M})).$
The ring of functions on the fiber part of the pullback is simply
$C^{\bullet}(\mathrm{Vect}(X),\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g}))$,
since it is $\mathscr{O}(\mathscr{F}_{g})$ with the differential
$\\{S_{g},-\\}$ and the implicit action of $\mathrm{Vect}(X)$ on the theory
and thus on its observables. Hence, the underlying dg ring of functions on
$\widehat{[\mathscr{F}/\mathscr{D}]}_{g}$ is the underlying dg ring of
$C^{\bullet}(\mathfrak{g}_{g},\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g}))$.
Both dg rings have Chevalley-Eilenberg differential
(32)
$d_{CE}=[-,-]^{\vee}_{\mathrm{Vect}(X)}+\tau_{\mathscr{M}_{g}}^{\vee}+\tau_{\mathscr{F}_{g}}^{\vee}+\\{S_{g},-\\}.$
Here, the first three terms are the usual Chevalley-Eilenberg differential
concerned with the dual of the bracket on $\mathrm{Vect}(X)$ and the actions
of $\mathrm{Vect}(X)$ on $\widehat{\mathrm{Sym}}(T_{g}^{\vee}\mathscr{M})$ and
$\mathscr{O}(\mathscr{F}_{g})$, and the fourth term is the differential on the
free field theory over $[\widehat{\mathscr{M}}_{g}/\mathrm{Vect}(X)]$. Since
the underlying rings agree and the CE differentials do, too, this gives the
result. ∎
###### Remark 4.4.
As in the case of ordinary manifolds, the ring of functions on the bundle is a
module over the ring of functions on the base space. In fact, the veracity of
the above claim can almost be taken as a definition: in the case where we
treat the BV field theory $\mathscr{F}$ perturbatively, Proposition 4.3 simply
computes the function ring of the formal moduli problem representing
perturbative fields parameterized by a formal neighborhood in
$[\mathscr{M/D}]$. As it stands, the statement also includes polynomial
functions of nonperturbative free fields along the fibers (the fibers thus
constitute a “non-formal” part of the total moduli problem).
Proposition 4.3 implicitly supplies a natural action of $\mathfrak{g}_{g}$ on
$\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g})$: this allows us to conclude
that Noether’s Theorem as presented in Theorem 12.4.1 of [8] applies. We have
not given the precise details of a “full” $L_{\infty}$ action of
$\mathfrak{g}_{g}$ on $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g})$, but its
existence is implicit in the theorem: $\\{S_{g},-\\}$ as part of $d_{CE}$
above contains information about the formal neighborhood of
$[g]\in[\mathscr{M/D}]$. We will provide a thorough description of how this
goes momentarily.
###### Remark 4.5 (Further remarks on functoriality).
Before getting into explicit computations, we would like to mention in the
vein of Subsection 2.3.1 that because $\mathfrak{g}_{g}$ is a sheaf on
$\mathbf{Riem}_{n}$ (its diffeomorphism equivariance can quickly be checked),
the equivariant observables similarly define a factorization algebra, as in
Proposition 2.43:
$C^{\bullet}(\mathfrak{g}_{g}(-),\mathrm{Obs^{cl}}(-,\mathscr{F})):\mathbf{Riem}_{n}\to\mathbf{dgVect}.$
This provides yet another factorization algebra when evaluated on the site of
Riemannian manifolds. Thus, considering the stacky geometry of
$[\mathscr{M/D}]$ for a fixed $X$ and invoking $\mathbf{Riem}_{n}$-naturality
after the fact once again provides an interesting construction (and
generalization) of objects introduced, for example, in [8], while
simultaneously opening avenues of comparison to prevailing literature.
Note: From here on out, we will be treating both free (any dg Lie algebra is
an $L_{\infty}$ algebra where the only nontrivial bracket is $\ell_{1}$) and
interacting theories as $L_{\infty}$ algebras, since the reliance on
$L_{\infty}$ structures for defining actions becomes more important. In
practice, this means we will be using $\mathscr{L}$ (alias $\mathscr{F}[-1]$)
to denote the fields.
###### Remark 4.6.
Recall that when $\mathfrak{g}$ is a Lie algebra and $R$ is a
$\mathfrak{g}$-module, $H^{0}(C^{\bullet}(\mathfrak{g},R))=R^{\mathfrak{g}}$,
the $\mathfrak{g}$-invariants of $R$. Analogously, albeit with slightly more
care to compute, we have:
(33)
$H^{0}(C^{\bullet}(\mathfrak{g}_{g},\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g})))=\\{F\in\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g}):F(g+\varepsilon
L_{V}g)-F(g)=0\\}.$
A prime example of such an $F$ is the action functional $S_{g}$ of any
generally covariant theory. Moreover, if $V$ is a Killing field for $g$, the
equality in the conditions on the right side above holds trivially: this is a
shadow of the fact that a moduli stack “remembers” stabilizers where the
coarse quotient would not.
Although it is meaningful (and a good sanity check) to compute cohomology
groups, we stick to Noether’s philosophy of focusing on the cochain complexes
themselves. In our case, this means understanding what the equivariant
observables are providing. There is a guiding definition which, when unpacked
carefully, tells us the value of what we found above:
###### Definition 4.7 (Definition 12.2.12 in [8]).
For $\mathfrak{g}$ a dg Lie or $L_{\infty}$ algebra and $\mathscr{L}$ an
(elliptic) $L_{\infty}$ algebra encoding a Batalin-Vilkovisky classical field
theory, an action of $\mathfrak{g}$ on $\mathscr{L}$ is any of the following
data: (i) An $L_{\infty}$ structure on $\mathfrak{g}\oplus\mathscr{L}$ such
that the exact sequence
(34) $\mathscr{L}\to\mathfrak{g}\ltimes\mathscr{L}\to\mathfrak{g}$
is a sequence of maps of $L_{\infty}$ algebras, the structure maps
$\mathfrak{g}^{\otimes n}\otimes\mathscr{L}^{\otimes m}\to\mathscr{L}$ are
polydifferential operators on the $\mathscr{L}$-variables, and the action
preserves the pairing $\omega$.
(ii) An $L_{\infty}$ morphism $\mathfrak{g}\to
C^{\bullet}_{\mathrm{loc}}(\mathscr{L})[-1]$.
(iii) A degree 1 element $S^{\mathfrak{g}}$ in the dg Lie algebra
$\mathrm{Act}(\mathfrak{g},\mathscr{L}):=C^{\bullet}_{\mathrm{red}}(\mathfrak{g})\otimes
C^{\bullet}_{\mathrm{loc}}(\mathscr{L})[-1]$
which satisfies the Maurer-Cartan equation
(35)
$d_{\mathfrak{g}}S^{\mathfrak{g}}+d_{\mathscr{L}}S^{\mathfrak{g}}+\frac{1}{2}\\{S^{\mathfrak{g}},S^{\mathfrak{g}}\\}=0:$
this can be viewed as an equivariant classical master equation.
###### Remark 4.8.
By $C^{\bullet}_{\mathrm{loc}}(\mathscr{L})$ above we mean observables for
$\mathscr{L}$ that are local in the sense of Definition 2.5:
$C^{\bullet}_{\mathrm{loc}}(\mathscr{L})[-1]$ is the formal moduli version of
symplectic vector fields, which control symmetries and deformations of a
classical field theory. $C^{\bullet}_{\mathrm{red}}(\mathfrak{g})$ is defined
as the kernel of the augmentation map
$C^{\bullet}(\mathfrak{g})\to\mathbf{R}$ (thorough details concerning
these two rings are provided in Chapters 3 and 4 of [8]).
Moreover, since $S^{\mathfrak{g}}$ is local in the fields $\mathscr{L}$, it
defines a derivation of $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L})$ via
$\\{S^{\mathfrak{g}},-\\}$: this is precisely what is used to define the
action of $\mathfrak{g}_{g}$ on
$\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g})$ when computing the equivariant
classical observables in Proposition 4.3.
###### Remark 4.9.
The facet of the preceding definition we will home in on is the third one:
finding a functional $S^{\mathfrak{g}}$ which satisfies an equivariant
classical master equation provides a concrete computational representation of
the action of $\mathfrak{g}$ on $\mathscr{L}$ and a more complete picture of
the Chevalley-Eilenberg description of how the formal moduli stack acts on the
theory.
We would like to encode both deformations by $h\in
T_{g}\mathscr{M}=\Gamma(X,\mathrm{Sym}^{2}(T_{X}^{\vee}))$ and an action of
vector fields $V\in\mathrm{Vect}(X)$ on the BV field theory, since these are
the degree 1 and 0 parts (respectively) of the dg Lie algebra
$\mathfrak{g}_{g}$ representing the formal neighborhood of $g$ as an element
of the stack $[\mathscr{M/D}]$. Any action functional $S_{g}$ for a BV theory
can be written as
$S_{g}(\phi)=\int_{X}\phi D_{g}(\phi),$
where the differential operator may be a nonlinear function in $\phi$.
Denoting the antifields for the theory as $\psi$, we thus posit the following:
(36) $S^{\mathfrak{g}_{g}}=\int_{X}\phi\big{(}D_{g+\varepsilon
h}-D_{g}\big{)}(\phi)+\int_{X}(L_{V}\phi)\psi.$
On the right side, we interpret $D_{g+\varepsilon h}$ as a formal power series
in the metric perturbation $h$, and by $L_{V}$ we mean the “natural action” of
vector fields on the fields, which are usually tensorial in nature (hence the
notation). In accordance with (ii) in Definition 4.7, the $L_{\infty}$
morphism $\mathfrak{g}\to C^{\bullet}_{\mathrm{loc}}(\mathscr{L})[-1]$ is thus
given by sending $(V,h)\in\mathfrak{g}_{g}$ to:
$\\{S^{\mathfrak{g}_{g}},-\\}\in C^{\bullet}_{\mathrm{loc}}(\mathscr{L})[-1]\subset\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g})[-1],$
where the latter is interpreted as symplectic vector fields on $B\mathscr{L}$,
the formal derived critical locus as seen in Remark 2.13. By means of general
covariance, which implies that either the dg or $L_{\infty}$ structure
prescribed by $D_{g}(\phi)$ is diffeomorphism equivariant,
$S^{\mathfrak{g}_{g}}$ satisfies the necessary classical master equation.
Strictly speaking, what we need in the preceding is equivariance with respect
to the action by vector fields: in the case of the free scalar field, this
comes from the fact that the Laplace-Beltrami operator satisfies (modulo
$\varepsilon^{2}$)
$\Delta_{g+\varepsilon L_{V}g}=\Delta_{g}+\varepsilon[L_{V},\Delta_{g}].$
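This commutator identity is easy to verify by machine. The following sketch (ours) works on flat $\mathbf{R}^{2}$ with a symbolic vector field $V$: it computes the first-order term of $\Delta_{g+\varepsilon L_{V}g}\varphi$ directly and compares it with $[L_{V},\Delta_{g}]\varphi$.

```python
# Infinitesimal diffeomorphism equivariance of the Laplace-Beltrami operator,
# checked on flat R^2 for a symbolic vector field V. All names are ours.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')
v1, v2, phi = [sp.Function(s)(x, y) for s in ('v1', 'v2', 'phi')]
coords, V = (x, y), sp.Matrix([sp.Function('v1')(x, y), sp.Function('v2')(x, y)])

def laplacian(gm, f):  # Laplace-Beltrami operator in coordinates
    ginv, sqrtg = gm.inv(), sp.sqrt(gm.det())
    return sp.Add(*[sp.diff(sqrtg*ginv[m, n]*sp.diff(f, coords[n]), coords[m])
                    for m in range(2) for n in range(2)]) / sqrtg

# h = L_V g for the flat metric: h_{mu nu} = d_mu V_nu + d_nu V_mu
h = sp.Matrix(2, 2, lambda m, n: sp.diff(V[n], coords[m]) + sp.diff(V[m], coords[n]))
g_eps = sp.eye(2) + eps*h

# First-order term of Delta_{g + eps L_V g} phi.
first_order = sp.diff(laplacian(g_eps, phi), eps).subs(eps, 0)

# [L_V, Delta_g] phi for the flat Laplacian.
LV = lambda f: sum(V[r]*sp.diff(f, coords[r]) for r in range(2))
flat_lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
commutator = LV(flat_lap(phi)) - flat_lap(LV(phi))

print(sp.simplify(first_order - commutator))  # expected output: 0
```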
###### Remark 4.10.
What we have presented so far allows us to provide a more precise and
meaningful expression for the Chevalley-Eilenberg differential for
$C^{\bullet}(\mathfrak{g}_{g},\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g}))$
in Proposition 4.3:
(37)
$d_{CE}=[-,-]^{\vee}_{\mathrm{Vect}(X)}+\\{S^{\mathfrak{g}_{g}},-\\}+\\{S_{g},-\\}.$
This provides the usual dual to the action of vector fields on themselves;
however, we have here the $L_{\infty}$ action of $\mathfrak{g}_{g}$ on the
observables as well as the usual differential $\\{S_{g},-\\}$ of the
observables on their own. If we consider, for example, the interacting scalar
field with polynomial potential as in Example 2.37, the latter two brackets
above would combine to result in bracketing with:
(38) $S^{\mathrm{tot}}:=S_{g}+S^{\mathfrak{g}_{g}}=\int_{X}\varphi
Q_{g+\varepsilon h}\varphi+\sum_{n\geq
2}\int_{X}\frac{\lambda_{n}}{n!}\varphi^{n}\mathrm{vol}_{g+\varepsilon
h}+\int_{X}(L_{V}\varphi)\psi.$
In Definitions 1 and 2 of [11], Getzler defines covariance by supplying
something like $S^{\mathrm{tot}}$ above and demanding that it satisfy a
Maurer-Cartan equation for a curved Lie (super) algebra: I would be curious to
see how the connection between the two could be made completely precise.
To expound more on all of the above, we must introduce the stress-energy
tensor.
### 4.2 The Stress-Energy Tensor
We’d like to consider the stress-energy tensor for the free scalar BV theory:
its generalization to the polynomial self-interaction in Lemma 2.38 is
computationally simple. This section is intended to show how an example of
Definition 4.7 plays out, as well as to connect the above work to how
things are “usually done” in physics.
To begin, let us consider an arbitrary first order deformation of the
Laplacian $\Delta_{g}$ on a Riemannian manifold $(X,g)$: in other words, let
$g_{t}$ be a one-parameter family of metrics such that $g_{0}=g$ and let us
compute
$\frac{d}{dt}\Delta_{g_{t}}\varphi\Big{|}_{t=0}.$
Writing $\Delta_{g_{t}}$ in coordinates and not evaluating at $t=0$ for now,
we have:
(39)
$\frac{d}{dt}\Big{(}\frac{1}{\sqrt{\mathrm{det}g_{t}}}\partial_{\mu}(\sqrt{\mathrm{det}g_{t}}g_{t}^{\mu\nu}\partial_{\nu}\varphi)\Big{)}.$
Recall that for a one-parameter family of invertible matrices $A(t)$, we have
$\frac{d}{dt}\mathrm{det}A(t)=\operatorname{Tr}(A(t)^{-1}A^{\prime}(t))\mathrm{det}A(t).$
Using this and a few other manipulations, expression (39) reduces to
(40)
$\frac{-1}{2}\operatorname{Tr}(g_{t}^{-1}\partial_{t}g_{t})\Delta_{g_{t}}\varphi+\frac{1}{\sqrt{\mathrm{det}g_{t}}}\partial_{\mu}\Big{(}\frac{\sqrt{\mathrm{det}g_{t}}}{2}\operatorname{Tr}(g_{t}^{-1}\partial_{t}g_{t})g_{t}^{\mu\nu}\partial_{\nu}\varphi+\sqrt{\mathrm{det}g_{t}}\partial_{t}g_{t}^{\mu\nu}\partial_{\nu}\varphi\Big{)}.$
Denote the derivative of $g_{t}$ at $g_{0}=g$ as $\delta
g:=\partial_{t}g_{t}|_{t=0}$ (this is the traditional notation in physics,
although we could call it $h$ as a degree 1 element of $\mathfrak{g}_{g}$).
Evaluating at $t=0$ gives:
###### Lemma 4.11.
The first order deformation of the Laplacian $\Delta_{g}$ with respect to the
metric $g$ is
(41)
$\frac{d}{dt}\Delta_{g_{t}}\varphi\Big{|}_{t=0}=\frac{-1}{2}\operatorname{Tr}(g^{-1}\delta
g)\Delta_{g}\varphi+\frac{1}{\sqrt{\mathrm{det}g}}\partial_{\mu}\Big{(}\sqrt{\mathrm{det}g}\big{(}\frac{1}{2}\operatorname{Tr}(g^{-1}\delta
g)g^{\mu\nu}\partial_{\nu}\varphi+\delta
g^{\mu\nu}\partial_{\nu}\varphi\big{)}\Big{)}.$
Moreover, if we assume the deformation $\delta g\in T_{g}\mathscr{M}$ is
induced by an isometry of $g$, then the first order deformation of the
Laplacian is identically zero.
###### Remark 4.12.
Often in the physics literature, we write
$\operatorname{Tr}(g^{-1}\delta g)=g^{\mu\nu}\delta g_{\mu\nu},$
and we may occasionally adopt that notation. Moreover, the above computation
is done with the action functional (1) in mind. Thus, the stress-energy tensor computed here and the one computed using the functional
(42)
$\int_{X}g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi\mathrm{vol}_{g}=\int_{X}g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi\sqrt{\mathrm{det}g}d^{n}x,$
which is just as common in the physics literature, differ only by boundary terms.
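As a quick numerical sanity check of the determinant identity invoked above (not part of the original computation), the following minimal Python sketch compares a central finite difference of $\mathrm{det}\,A(t)$ against $\operatorname{Tr}(A(t)^{-1}A^{\prime}(t))\,\mathrm{det}\,A(t)$ for an arbitrary smooth family $A(t)=A_{0}+tB$; the matrices are random illustrative choices.

```python
import numpy as np

# Check Jacobi's formula d/dt det A(t) = Tr(A(t)^{-1} A'(t)) det A(t)
# on the family A(t) = A0 + t*B with random A0 (kept invertible) and B.
rng = np.random.default_rng(0)
n = 4
A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonal shift keeps A0 invertible
B = rng.normal(size=(n, n))
A = lambda t: A0 + t * B                       # so A'(t) = B

t, eps = 0.3, 1e-6
lhs = (np.linalg.det(A(t + eps)) - np.linalg.det(A(t - eps))) / (2 * eps)
rhs = np.trace(np.linalg.solve(A(t), B)) * np.linalg.det(A(t))
assert np.isclose(lhs, rhs, rtol=1e-4)         # agree to finite-difference accuracy
```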
First, we shall provide a general definition of the stress-energy tensor for
any theory.
###### Construction 4.13.
Let $S_{g}\in C^{\bullet}(\mathscr{L}_{g})$ be an action functional for the dg
space of BV fields $\mathscr{L}_{g}$ which depends on a fixed background
metric $g\in\mathscr{M}$. It can thus be written as
$S_{g}(\phi)=\int_{X}L_{g}(\phi),$
where $\phi\in\mathscr{L}$ and $L_{g}(\phi)$ is a Lagrangian density. If we
let $g_{t}$ be a one-parameter family of metrics such that $g_{0}=g$, we can
perform computations similar to those in Lemma 4.11 to compute:
(43) $\frac{\delta}{\delta
g}S_{g}(\phi):=\frac{d}{dt}S_{g_{t}}(\phi)\Big{|}_{t=0}=\int_{X}\frac{d}{dt}L_{g_{t}}(\phi)\Big{|}_{t=0}.$
The notation invoked on the left hand side is common in physics literature,
and defined this way in [23]. Up to boundary terms which we can safely ignore,
(43) can be written as
(44) $\int_{X}\delta g^{\mu\nu}T_{\mu\nu}(g,\phi)\mathrm{vol}_{g},$
for some $T_{\mu\nu}(g,\phi)$ (or simply $T_{\mu\nu}$) which depends on the fields $\phi$ and on the metric $g$.
###### Definition 4.14.
$T_{\mu\nu}$ is the stress-energy (or energy-momentum) tensor of a field
theory on $X$ with fields $\phi\in\mathscr{F}$ and action functional $S_{g}$
depending on $g\in\mathrm{Met}(X)$.
###### Remark 4.15.
Before moving on, we must take note of an important fact: the stress-energy
tensor coupled to the metric perturbation as in Equation (44) is precisely the
first order in $\varepsilon$ part of the power series in $h$ found in Equation
(36).
###### Example 4.16.
To compute the stress-energy tensor of the free massless scalar field, we begin by noting that, according to the definition, we must compute
$\int_{X}\varphi\frac{d}{dt}Q_{g_{t}}\varphi\Big{|}_{t=0},$
where $Q_{g}\varphi=\Delta_{g}\varphi\mathrm{vol}_{g}$. Lemma 4.11 is useful,
since we have already done the necessary work on the first piece. However,
note that $Q_{g}\varphi$ is written in coordinates as
$\partial_{\mu}(\sqrt{\mathrm{det}g}g^{\mu\nu}\partial_{\nu}\varphi)d^{n}x,$
so that we have in fact stripped away some of the complexity of the
computation by pairing with the Riemannian volume form. Hence, we can use
Lemma 4.11 and toss away the first term to get
(45)
$\int_{X}\varphi\frac{d}{dt}Q_{g_{t}}\varphi\Big{|}_{t=0}=\int_{X}\varphi\partial_{\mu}\Big{(}\sqrt{\mathrm{det}g}\big{(}\frac{1}{2}\operatorname{Tr}(g^{-1}\delta
g)g^{\mu\nu}\partial_{\nu}\varphi+\delta
g^{\mu\nu}\partial_{\nu}\varphi\big{)}\Big{)}d^{n}x.$
This is not yet in the preferred form in (44), but if we integrate by parts
and invoke that $\operatorname{Tr}(g^{-1}\delta g)=g^{\mu\nu}\delta
g_{\mu\nu}=g_{\mu\nu}\delta g^{\mu\nu},$ the above becomes
(46) $\int_{X}\delta
g^{\mu\nu}\big{(}-\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}(g^{\rho\sigma}\partial_{\rho}\varphi\partial_{\sigma}\varphi)\big{)}\mathrm{vol}_{g},$
where we relabelled the indices in the second term to avoid confusion. Thus, the stress-energy tensor for our example is
$T_{\mu\nu}=-\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}(g^{\rho\sigma}\partial_{\rho}\varphi\partial_{\sigma}\varphi)$.
We would have computed this without any by-parts maneuvers had we started with
the action functional (42) more common in physics literature, but it is a good
exercise to see how these agree.
The above is the traditional route one takes to finding the stress-energy tensor; however, since our theory is generally covariant, we can use facts about equivariant vector bundles to simplify things, so let us consider what that buys us. To begin, let $f_{t}$ be a one-parameter subgroup of
diffeomorphisms. General covariance implies that
(47)
$\frac{d}{dt}\int_{X}(f_{t}^{*}\varphi)\Delta_{f_{t}^{*}g}(f_{t}^{*}\varphi)\mathrm{vol}_{f_{t}^{*}g}\Big{|}_{t=0}=0.$
Unfolding the left hand side, this equation becomes
$\int_{X}(L_{V}\varphi)\Delta_{g}\varphi\mathrm{vol}_{g}+\int_{X}\varphi\Delta_{g}(L_{V}\varphi)\mathrm{vol}_{g}+\int_{X}\varphi(\frac{d}{dt}\Delta_{f^{*}_{t}g}\big{|}_{t=0})\varphi\mathrm{vol}_{g}+\int_{X}\varphi\Delta_{g}\varphi(\frac{d}{dt}\mathrm{vol}_{f^{*}_{t}g}\big{|}_{t=0})=0.$
Here, we assumed that $V$ generates the flow $f_{t}$, and used the fact that
$\frac{d}{dt}(f^{*}_{t}\varphi)|_{t=0}=L_{V}\varphi$. This equation is an
integrated linear approximation to the equivariance property computed in Lemma
2.34: it states concretely that a simultaneous first order perturbation along
the $\mathscr{D}$-orbit in $\mathscr{M}$ and in $\mathscr{L}_{g}$ is trivial.
###### Remark 4.17.
The third and fourth terms on the left hand side are exactly those that
comprise the integral of the stress-energy tensor in the special case that the
derivative is computed in the direction of the $\mathscr{D}$-orbit. This
grants us two key insights:
(1) Computationally, the above amounts to the metric perturbation (an element
of $T_{g}\mathscr{M}$) coming from an infinitesimal diffeomorphism (i.e. a
vector field). But we have seen this before: this is saying that $\delta g\in
T_{g}\mathscr{M}$ is in the image of the differential in the dg Lie algebra
$\mathfrak{g}_{g}$ in Example 3.13. Hence, $\delta g^{\mu\nu}=L_{V}g^{\mu\nu}$
(the computation works fine even though $g^{\mu\nu}$ is technically the
inverse). With this, Equation (46) becomes:
$\int_{X}L_{V}g^{\mu\nu}T_{\mu\nu}\mathrm{vol}_{g}.$
A standard result from Riemannian geometry is that
$L_{V}g^{\mu\nu}=\nabla^{\mu}V^{\nu}+\nabla^{\nu}V^{\mu}$, and since
$T_{\mu\nu}$ is symmetric by definition, the above must be
$\int_{X}(\nabla^{\mu}V^{\nu})T_{\mu\nu}\mathrm{vol}_{g}=-\int_{X}V^{\nu}(\nabla^{\mu}T_{\mu\nu})\mathrm{vol}_{g},$
where we invoked integration by parts and the fact that
$\nabla^{\mu}\mathrm{vol}_{g}=0$ in the equality. Then, standard computations
for generally covariant theories (which can be found in Appendix E of [23])
show that for on-shell fields (here meaning $\varphi$ such that
$\Delta_{g}\varphi=0$), the above integral is identically zero. For this to hold for every vector field $V$, it must be the case that
(48) $\nabla^{\mu}T_{\mu\nu}=0.$
In the language of Noether’s Theorem, the stress-energy tensor $T_{\mu\nu}$ is
the conserved current associated to general covariance, a symmetry of a field
theory coupled to a metric.
In our regime, this implies that the conservation law
$\nabla^{\mu}T_{\mu\nu}=0$ is what is ultimately responsible for allowing us
to define an $L_{\infty}$ action of the dg Lie algebra $\mathfrak{g}_{g}$
associated to the formal neighborhood of $[g]\in[\mathscr{M/D}]$ on
observables $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{L}_{g})$ for our generally
covariant BV field theory in the sense of Definition 4.7. However, what we
have shown above is only up to first order in the metric perturbation! The
differential $\{S^{\mathfrak{g}},-\}$ we defined previously in principle
contains “higher conservation laws” associated to higher $L_{\infty}$ brackets
read off from higher order terms in the power series $h\in T_{g}\mathscr{M}$.
The author would be interested in assigning a physical interpretation to this.
(2) Additionally, since the third and fourth terms are (up to a sign) the same
as the first two, this means considering the first two alone should give us
all the relevant data of the stress-energy tensor for a generally covariant
field theory: we could even find a second order vector field equivariance
property analogous to the one stated at the end of Remark 4.9 (we do just that
in Section 5.2 of the Appendix).
###### Construction 4.18.
Let us consider the “infinitesimal general covariance” property more formally.
Insight (1) suggests that the action functionals $S_{g}(\varphi)$ and
$S_{g+\varepsilon
L_{V}g}(\varphi)=\frac{-1}{2}\int_{X}\varphi\Delta_{g}\varphi\mathrm{vol}_{g}-\frac{\varepsilon}{2}\int_{X}L_{V}g^{\mu\nu}T_{\mu\nu}\mathrm{vol}_{g}=:S_{g}(\varphi)+\varepsilon
I_{g}(L_{V}g,\varphi),$
where this equality holds modulo $\varepsilon^{2}$, should produce the same
dynamics: this is true because for on-shell fields, the second term is zero.
In other words, if we were to make sense of the differential $Q_{g+\varepsilon
L_{V}g}$ for the BV space of fields, it should be appropriately equivalent to
$Q_{g}$. Moreover, $Q_{g}$ induces the differential $\{S_{g},-\}$ on $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g})$, so that we would like $\{S_{g+\varepsilon L_{V}g},-\}=\{S_{g},-\}+\varepsilon\{I_{g}(L_{V}g),-\}$, the induced
differential on $\mathrm{Obs}^{\mathrm{cl}}(X,\mathscr{F}_{g+\varepsilon
L_{V}g})$ from $Q_{g+\varepsilon L_{V}g}$, to be similarly equivalent. To give
all of the above hands and legs, we must rigorously define
$\mathscr{F}_{g+\varepsilon L_{V}g}$ and its observables in the first place.
Let $\mathbb{D}_{2}=\mathbf{R}[\varepsilon]/(\varepsilon^{2})$ denote the
(real) dual numbers. We can tensor
$\mathscr{F}_{g}=C^{\infty}(X)\xrightarrow{Q_{g}}\mathrm{Dens}(X)[-1]$ with
$\mathbb{D}_{2}$ to get $\mathscr{F}_{g}\otimes\mathbb{D}_{2}$, whose elements
can be written as $\varphi_{0}+\varepsilon\varphi_{1}$ in degree 0 and
similarly for degree 1. The differential $Q_{g+\varepsilon L_{V}g}$ looks like
$Q_{g}+\varepsilon D.$
It remains only to find $D$, which will depend on $g$ and $V$ and must be so
that
$\begin{CD}
\mathscr{F}_{g}\otimes\mathbb{D}_{2}=C^{\infty}(X)\otimes\mathbb{D}_{2} @>{Q_{g}+\varepsilon 0}>> \mathrm{Dens}(X)[-1]\otimes\mathbb{D}_{2}\\
@V{\mathrm{Id}+\varepsilon L_{V}}VV @VV{\mathrm{Id}+\varepsilon L_{V}}V\\
\mathscr{F}_{g}\otimes\mathbb{D}_{2}=C^{\infty}(X)\otimes\mathbb{D}_{2} @>{Q_{g}+\varepsilon D}>> \mathrm{Dens}(X)[-1]\otimes\mathbb{D}_{2}
\end{CD}$
commutes. The downward-pointing arrows are $\mathrm{Id}+\varepsilon L_{V}$
since we are still assuming the diffeomorphism $f$ is generated by the vector
field $V$: concretely, this is the first order approximation to the commuting
square in Lemma 2.34. Thus, we are trying to suss out a neat form of the
first-order perturbation of $Q_{g}$ with respect to the metric when the
perturbation is along a diffeomorphism orbit. Our computations from Equation
(47) suggest that we try $D=[L_{V},Q_{g}]$.
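Indeed, a one-line computation (recorded here for convenience) confirms this guess: over $\mathbb{D}_{2}$, where $\varepsilon^{2}=0$,
$(Q_{g}+\varepsilon[L_{V},Q_{g}])\circ(\mathrm{Id}+\varepsilon L_{V})=Q_{g}+\varepsilon Q_{g}L_{V}+\varepsilon(L_{V}Q_{g}-Q_{g}L_{V})=Q_{g}+\varepsilon L_{V}Q_{g}=(\mathrm{Id}+\varepsilon L_{V})\circ Q_{g},$
which is exactly the commutativity of the square above.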
###### Lemma 4.19.
Let
$\widetilde{\mathscr{F}}_{g}:=(\mathscr{F}_{g}\otimes\mathbb{D}_{2},Q_{g})$
and $\widetilde{\mathscr{F}}_{g+\varepsilon
L_{V}g}:=(\mathscr{F}_{g}\otimes\mathbb{D}_{2},Q_{g}+\varepsilon[L_{V},Q_{g}])$.
Then the map $\mathrm{Id}+\varepsilon
L_{V}:\widetilde{\mathscr{F}}_{g}\to\widetilde{\mathscr{F}}_{g+\varepsilon
L_{V}g}$ is a cochain isomorphism (i.e. it is an equivalence of free BV field
theories).
###### Remark 4.20.
We omit the proof: it is straightforward, albeit tedious. The above is the
perturbative realization of general covariance: intuitively, the free BV
scalar field coupled to a metric is equivalent to the free BV scalar coupled
to an infinitesimally close metric in the same diffeomorphism orbit. This
lemma also states that for the free scalar field with differential $Q_{g}$ on
its BV space of fields, the first order deformation of $Q_{g}$ along the
$\mathscr{D}$-orbit starting at $g\in\mathscr{M}$ is exactly
$D=[L_{V},Q_{g}].$
This provides a nice coordinate-free form of the stress-energy tensor.
###### Remark 4.21.
Such a lemma holds for any BV theory which is generally covariant by our definition; the only caveat is that the bookkeeping required to prove lemmas like those above may be more painstaking. The Lie derivative $L_{V}$ manifests differently on different choices of fields, so one must be careful, and higher $L_{\infty}$ terms add further bookkeeping. Additionally, certain BV theories have more than two terms in their cochain complexes: the computations in that case are more cumbersome, but only in the sense of needing to check that multiple squares commute. This happens, for example, in Example 2.68.
Our goal was not only to show that these two “infinitesimally close” spaces of
fields were equivalent, but to show that their associated observables were
similarly equivalent. This is what we do next. We need the following lemma:
###### Lemma 4.22.
If $\alpha:(V,d_{V})\to(W,d_{W})$ is an isomorphism of cochain complexes, then
there is an induced isomorphism
$\alpha:(\mathrm{Sym}(V),d_{V})\to(\mathrm{Sym}(W),d_{W})$ of cochain
complexes, where the differentials $d_{V}$ and $d_{W}$ are extended to the
respective symmetric algebras as derivations.
###### Remark 4.23.
It is similarly true that $\mathrm{Sym}(V^{\vee})$ and
$\mathrm{Sym}(W^{\vee})$ are isomorphic cochain complexes: the differentials
on $V^{\vee}$ and $W^{\vee}$ are induced by those on $V$ and $W$, and using
this lemma once more gives
$(\mathrm{Sym}(V^{\vee}),d_{V})\cong(\mathrm{Sym}(W^{\vee}),d_{W})$. (We abuse
notation so that $d_{V}$ and $d_{W}$ are the differentials induced from those
on $V$ and $W$, respectively.)
One might worry that, because the naïve algebraic symmetric powers of $\mathscr{F}_{g}$ are not what we use to define observables, these constructions do not carry over; however, the completed projective tensor product we used to define functionals is precisely what is needed in the infinite-dimensional case. We can now state a key theorem:
###### Theorem 4.24.
We have the following isomorphism of classical observables:
(49)
$\mathrm{Obs}^{\mathrm{cl}}(X,\widetilde{\mathscr{F}}_{g})\cong\mathrm{Obs}^{\mathrm{cl}}(X,\widetilde{\mathscr{F}}_{g+\varepsilon
L_{V}g}),$
where the isomorphism is induced by the isomorphism $\mathrm{Id}+\varepsilon
L_{V}$ from Lemma 4.19.
###### Proof.
Since Lemma 4.22 holds for infinite dimensional cochain complexes with the
definition of $\mathrm{Sym}(\widetilde{\mathscr{F}}_{g})$ as in Definition 2.2
(i.e. with the completed projective tensor product), we indeed have that the
isomorphism $\mathrm{Id}+\varepsilon L_{V}$ from Lemma 4.19 induces an
isomorphism of $(\mathscr{O}(\widetilde{\mathscr{F}}_{g}),\{S_{g},-\})$ and $(\mathscr{O}(\widetilde{\mathscr{F}}_{g+\varepsilon L_{V}g}),\{S_{g},-\}+\varepsilon\{I_{g}(L_{V}g),-\})$. This is the result.
∎
Recall that although we have done the precise computations in the case of the
massless free scalar field, the same statement holds in the case of any BV
theory with differential $Q_{g}$.
###### Remark 4.25.
As mentioned earlier, $\mathfrak{g}_{g}$ is a sheaf on $\mathbf{Riem}_{n}$,
and so identical computations as in Theorem 4.24 imply the analogous
equivalence of factorization algebras for the equivariant observables of
Proposition 4.3.
###### Remark 4.26.
This result follows almost directly from a theory exhibiting general
covariance; however, having isomorphisms written down explicitly and
recognizing their naturality when compared to the non-perturbative definition
of general covariance provides a sanity check, not to mention an enhanced
perspective on quantities like the stress-energy tensor.
Checking this theorem over a fixed $g\in\mathscr{M}$ and invoking $\mathscr{D}$-equivariance implies that $\{S_{g},-\}$ is the differential over the entire diffeomorphism orbit $\mathscr{D}\cdot g\subset\mathscr{M}$. Similarly, seeing how $\{S_{g},-\}$ varies over a formal neighborhood of $g$ (i.e. expanding $\{S_{g+\varepsilon h},-\}$ in consecutive orders of
$\varepsilon h$) really grants us a view of the formal neighborhood of all of
$\mathscr{D}\cdot g$: this is precisely equivalent to considering the formal
neighborhood of $g$ as an element of the quotient stack $[\mathscr{M/D}]$.
###### Remark 4.27.
Note that although Theorem 4.24 is computed on a fixed $X$ for simplicity, it
holds at the level of factorization algebras, in the sense that
$\mathrm{Obs}^{\mathrm{cl}}(-,\widetilde{\mathscr{F}}_{g})$ and
$\mathrm{Obs}^{\mathrm{cl}}(-,\widetilde{\mathscr{F}}_{g+\varepsilon L_{V}g})$
define equivalent factorization algebras
$\mathbf{Riem}_{n}\to\mathbf{dgVect}$.
A few remarks on higher order versions of this theorem are made in Appendix
5.2.
## 5 Appendix
### 5.1 A detailed example
The following is a detailed example of how Chevalley-Eilenberg cochains arise
as functions on a formal neighborhood around a point in a stack: it is meant
to supplement what was shown in Lemma 3.8. Fix coordinates
$(x_{1},\ldots,x_{n})$ on $\mathbf{R}^{n}$ and consider an action
$P:G\to\textrm{Diff}(\mathbf{R}^{n})$ for a finite-dimensional Lie group $G$.
The total derivative of this map is
$\rho:\mathfrak{g}\to\textrm{Vect}(\mathbf{R}^{n})\cong
C^{\infty}(\mathbf{R}^{n})\otimes\mathbf{R}^{n}$, which for
$\alpha\in\mathfrak{g}$ has some coordinate expression:
$\alpha\mapsto\sum_{i=1}^{n}f(x_{i},\alpha)\partial_{i},$
where we use the shorthand $\partial/\partial x_{i}=\partial_{i}$. If we
restrict to a formal neighborhood of the origin,
$\widehat{\mathbf{R}}^{n}_{0}$, and compute its space of functions, we get the
usual Taylor series of functions about the origin,
$C^{\infty}(\widehat{\mathbf{R}}^{n}_{0})\cong\widehat{\textrm{Sym}}(T_{0}^{\vee}\mathbf{R}^{n})\cong\mathbf{R}\llbracket
x_{1},\ldots,x_{n}\rrbracket$, which we will denote
$\mathbf{R}\llbracket\mathbf{x}\rrbracket$ when convenient. Thus, restricting
the preceding derivative to the formal neighborhood of $0$ gives us
$\rho_{0}:\mathfrak{g}\to\textrm{Vect}(\widehat{\mathbf{R}}^{n}_{0})\cong\mathbf{R}\llbracket\mathbf{x}\rrbracket\otimes\mathbf{R}^{n}$,
which looks like:
$\alpha\mapsto\sum_{i=1}^{n}\hat{f}_{0}(x_{i},\alpha)\partial_{i},$
where $\hat{f}_{0}$ denotes the Taylor expansion of $f$ at $0$. This defines
an action of $\mathfrak{g}$ on $\mathbf{R}\llbracket\mathbf{x}\rrbracket$ by
derivations, and so we can thus define
$C^{\bullet}(\mathfrak{g},\mathbf{R}\llbracket\mathbf{x}\rrbracket)$.
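For a concrete illustration (not in the original text, but immediate from the definitions): take $n=m=1$ with $G=\mathbf{R}$ acting on $\mathbf{R}$ by translation, so $\rho_{0}(\alpha)=\alpha\partial_{x}$ on $\mathbf{R}\llbracket x\rrbracket$. Then $C^{\bullet}(\mathfrak{g},\mathbf{R}\llbracket x\rrbracket)=\mathbf{R}\llbracket x\rrbracket\otimes\Lambda^{\bullet}\mathfrak{g}^{\vee}$ has differential $f\mapsto f^{\prime}\otimes\alpha^{\vee}$, whose kernel is the constants and whose image is all of $\mathbf{R}\llbracket x\rrbracket$ (formal power series admit formal antiderivatives), so $H^{0}=\mathbf{R}$ and $H^{1}=0$: the only translation-invariant formal functions near $0$ are the constants.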
Fixing a basis $\{\alpha_{1},\ldots,\alpha_{m}\}$ for $\mathfrak{g}$ (assuming finite dimension $m$), denote the dual basis for $\mathfrak{g}^{\vee}$ as $\{\alpha^{1},\ldots,\alpha^{m}\}$. With these |
# Delving into the Scale Variance Problem in Object Detection
Junliang Chen, Xiaodong Zhao and Linlin Shen *: Corresponding author Computer
Vision Institute, School of Computer Science and Software Engineering,
Shenzhen University, China, and Shenzhen Institute of Artificial Intelligence
of Robotics of Society, Shenzhen, China, and Guangdong Key Laboratory of
Intelligent Information Processing, Shenzhen University, Shenzhen 518060,
China
{chenjunliang2016<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Object detection has made substantial progress in the last decade, due to the capability of convolution in extracting the local context of objects. However, the scales of objects are diverse, and conventional convolution can only process single-scale input. The capability of traditional convolution with a fixed receptive field to deal with such scale variance is thus limited. Multi-scale feature representation has been proven to be an effective way to mitigate the scale variance problem. Recent research mainly adopts partial connections between certain scales, or aggregates features from all scales while focusing on the global information across scales. However, the information across the spatial and depth dimensions is ignored. Motivated by this, we propose the multi-scale convolution (MSConv) to handle this problem. Taking scale, spatial and depth information into consideration at the same time, MSConv is able to process multi-scale input more comprehensively. MSConv is effective and computationally efficient, with only a small increase of computational cost. For most single-stage object detectors, replacing the traditional convolutions with MSConvs in the detection head brings more than 2.5% improvement in AP (on the COCO 2017 dataset), with only a 3% increase in FLOPs. MSConv is also flexible and effective for two-stage object detectors. When extended to mainstream two-stage object detectors, MSConv can bring up to 3.0% improvement in AP. Our best model under single-scale testing achieves 48.9% AP on the COCO 2017 test-dev split, surpassing many state-of-the-art methods.
###### Index Terms:
object detection, scale variance, multi-scale convolution
Figure 1: Performance on COCO val-2017 split of multi-scale convolution in
various single-stage detectors including anchor-based FreeAnchor [1] and
anchor-free RepPoints [2]. Two-stage detectors like Faster R-CNN (Faster) [3],
Mask R-CNN (Mask) [4], and Cascade R-CNN (Cascade) [5] are provided for
reference. Our MSConv can significantly improve the APs of different
detectors. All the models are trained with a ResNet-50 [6] backbone at a resolution of $640\times 640$.
## I Introduction
Object detection is a fundamental challenge in computer vision, comprising two subtasks: localization and classification. For any input image, the object detector is supposed to find the position and category of all the objects present in the image. Unlike object recognition, which only requires classification information, object detection needs features containing accurate scale information to locate objects. The scale of different objects may vary over a wide range, making it difficult to represent large and small objects at the same scale. To alleviate the scale variance problem, researchers have made many attempts.
In earlier research, the image pyramid was an effective idea for handling scale variance. The input image is resized to different resolutions, resulting in an image pyramid. Hand-engineered features are then extracted from the image pyramid. In image recognition, with the development of convolutional neural networks (CNNs), hand-engineered features have gradually been replaced by features computed by CNNs. CNNs are more robust to scale variance and translation, and thus improve recognition performance on a single image. Many recent top methods in the ImageNet [7] and COCO [8] detection challenges have utilized multi-scale testing to extract features from an image pyramid using a CNN. Each level of the image pyramid is passed through the same CNN to generate features at different scales. Using a CNN to extract features from an image pyramid gives a multi-scale feature representation of the original image. As all levels of the image pyramid are passed through the whole network, the resulting features are semantically strong, including the finest features with the highest resolution.
The image pyramid is a simple and good solution for representing an image at different scales. However, it is time-consuming, due to repeated network forward passes on the different levels of the pyramid. To alleviate this problem, researchers have tried to make full use of the inherent characteristics of deep CNNs. Modern deep CNNs usually consist of many layers, including down-sampling layers which generate features with decreased resolutions, such as poolings and strided convolutions. Given an input image, a deep CNN generates features at different scales, i.e., a feature hierarchy. As the network goes deeper, the corresponding layers become semantically stronger. Therefore, the last layer of the network is representative and widely used for object detection (e.g., YOLOv1 [9]).
However, the semantically strongest features from the CNN have the lowest resolution, which reduces their spatial representation capability. Besides, the intrinsic feature hierarchy brings a large semantic gap between the highest and lowest levels. Thus, using only the semantically strongest features for object classification and localization is limited to some extent. To address this issue, the Single Shot MultiBox Detector (SSD) [10] first attempted to utilize the existing feature hierarchy generated by the CNN. SSD takes the pyramidal features as inputs, and conducts prediction independently on each level of the pyramid. In consideration of real-time processing, SSD builds its detection heads from the high-level layers (e.g., the conv4_3 layer of the VGG-16 network [11]) with low resolutions, which cannot represent small objects well.
To make better use of the diverse semantics of features from different scales, Feature Pyramid Networks (FPN) [18] explore the connection patterns between the multiple layers. FPN proposes the lateral connection of two adjacent layers in a top-down manner, and takes advantage of the representation capability of the high-resolution layers (e.g., the last layer of the conv2_x block of ResNets [6]). FPN gives an effective solution for exploring the characteristics and advantages of the feature pyramid. Nevertheless, FPN only passes information from top to bottom, so the features at high levels lack the semantics available at low-level ones. The perspective of FPN inspired follow-up research on building better architectures to deal with multi-scale features. [12, 13] introduce extra but limited information from other scales beyond the top-down path of FPN. To a certain degree, the above methods mitigate the scale variance problem existing in FPN, but ignore the channel and spatial semantic differences between the multi-scale features.
Inspired by the above researches, we propose the multi-scale convolution
(MSConv) to effectively solve the mentioned problem. MSConv is an extension of
traditional convolution to process multi-scale input. MSConv is
computationally efficient and effective to improve the detection performance
of the object detectors.
In this paper, we mainly make the following contributions.
* 1.
We propose the multi-scale convolution (MSConv), to extend the traditional
convolution to accept multi-scale input. MSConv can deal with multi-scale features across the scale, channel and spatial dimensions at the same time.
* 2.
By replacing the convolutions with MSConvs in the detection head, mainstream
single-stage object detectors can get more than 2.5% improvement in AP. Our
best model based on FreeAnchor [1] detector achieves 48.9% AP on COCO test-dev
under single-scale testing, surpassing many state-of-the-art methods.
* 3.
The proposed MSConv is computationally efficient, i.e., only a small increase in computation cost is required.
## II Related Works
### II-A Object Detectors
State-of-the-art object detection methods can usually be divided into two
categories: single-stage detectors and two-stage ones.
Two-stage Detectors. To locate the objects with different scales, R-CNN
systems [14, 15, 3, 5, 16] first generate region proposals at different
scales, then extract the features within the region proposals for further
classification or regression. Though the region proposals are at different
scales, the extracted features are resized to the same spatial size
(e.g., $7\times 7$) using the ROI pooling [15] or ROIAlign [4] resizing operation.
However, the scale variance problem still exists. The features extracted by
the region proposals are from objects with different scales, containing the
rich information on the position and category of the objects. Larger objects with larger areas have more spatial information than smaller ones, so the resizing operation may cause unequal information loss for objects at different scales. As the features of any object are resized to the same spatial size, the information loss for larger objects is greater than that for small ones. Therefore, the scale variance problem still exists in two-stage detectors.
Single-stage Detectors. Given multi-scale input features, single-stage detectors (e.g., [10]) usually generate predictions directly. Anchor-based RetinaNet [17] has made great progress in solving the scale variance problem. While dense anchors with different scales and aspect ratios are used by RetinaNet to cover different objects, FPN [18] is used to enhance the features with stronger semantics from higher levels. RetinaNet has better performance than Faster R-CNN [3] and comparable performance with most two-stage detectors. Anchor-free detectors [19, 20] adopt per-pixel prediction of location and classification. To avoid the object ambiguity problem, each pixel is related to only a single object. Objects are assigned to the corresponding levels according to their scales: larger objects are assigned to higher levels for their larger receptive fields, while smaller objects are assigned to lower levels for their finer spatial features. Therefore, each level is in charge of predicting objects at similar scales. However, this fixed assignment strategy only associates an object with a single scale, which limits object representation. It may be better to predict the object using features from more scales (e.g., features that are semantically stronger or spatially finer), instead of only the fixed ones.
### II-B Methods Dealing with Scale Variance Problem
Recently, many researchers have been exploring methods to overcome the scale variance problem. We can roughly divide them into two categories: methods fusing features from a subset of scales (partial connections for short) and methods fusing features from all scales (full connections for short).
Partial Connections. FPN [18] first proposes lateral connection to merge
features from adjacent scales in a top-down path. PANet [12] brings an
additional bottom-up path on the basis of FPN to supplement the missing finer
spatial information from smaller scales. NAS-FPN [13] introduces neural
architecture search (NAS) to discover a better scheme that merges cross-scale
features from any scales instead of only adjacent ones. These methods enhance
the original features with semantics from other scales, but can only obtain
information from limited ones.
Full Connections. Beyond obtaining information from certain scales, many studies explore methods that aggregate features from all scales. Kong et al. [21] gather features from all levels to a medium level, followed by concatenation along the channel dimension. For each level, they use global
attention and local configuration to enhance and refine the combined features.
The refined features are finally resized to the corresponding level and then
element-wisely summed up with the original input of this level. Libra R-CNN
[22] first gathers features from all levels to a medium level and does
element-wise summation. After that, a non-local [23] module is applied to
enhance the merged features. The enhanced features are then scattered to each
level and element-wisely summed up with the original input. The above methods
obtain features from all scales by gathering them to a medium level and then merging them. However, a feature representation at a medium scale is ill-suited to describing objects at other scales. Therefore, the generated features fit the scale of this level well, but may not fit other scales.
Besides, most of the above methods adopt a simple way to merge the features (often element-wise summation), which lacks nonlinearity and gives the features from different scales the same weight. Instead, we should let the network learn proper weights to combine the features from different scales, as illustrated in the sketch below.
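To make this point concrete, here is a minimal PyTorch sketch of learned fusion weights in place of plain summation; the normalized-scalar scheme (in the spirit of BiFPN's weighted fusion [26]) and all names are illustrative choices, not taken from the cited implementations.

```python
import torch
import torch.nn as nn

class WeightedSum(nn.Module):
    """Fuse same-shape features with learned, normalized scalar weights."""
    def __init__(self, num_inputs):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))

    def forward(self, feats):             # feats: list of same-shape tensors
        w = torch.softmax(self.w, dim=0)  # weights learned during training
        return sum(wi * f for wi, f in zip(w, feats))
```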
## III Multi-Scale Convolution
### III-A Overview
In this section, we give the overview of multi-scale convolution. Multi-scale
convolution consists of two steps: feature gathering and feature blending. Let
the input features from $L$ different levels be $X=\{X^{1},\dots,X^{L}\}$. In the feature gathering step, the multi-scale features will be gathered to each level. The output of the feature gathering step, $Q=\{Q^{1},\dots,Q^{L}\}$, is obtained in a gather-scatter manner:
$\Phi=Gather(X,l_{gl}),\ l_{gl}\in\{1,\dots,L\}$ (1)
$Q=Scatter(\Phi,\{l\}_{l=1}^{L})$ (2)
where $\Phi$ denotes the gathering result over all levels and $l_{gl}$ denotes the gathering level (set to 1 if not specified). $Gather$ and $Scatter$ denote the gathering and scattering processes, respectively.
In the feature blending step, at each level, the gathered features are passed
to two modules: scale alignment (SA) and context attention (CA), to further
blend the multi-scale and original features. The output
$O=\{O^{1},\dots,O^{L}\}$ after CA can be computed as:
$O=\{CA(W_{m}\ast SA(Q^{l},X^{l}))\}_{l=1}^{L}$ (3)
where $W_{m}$ denotes the weight of the $1\times 1$ convolution to make the channel number of the output of SA and $X^{l}$ equal. $\ast$ denotes the convolution operation.
The final output $Y=\{Y^{1},\dots,Y^{L}\}$ of MSConv can be computed as:
$Y=\{W_{Y}\ast(O^{l}\oplus X^{l})\}_{l=1}^{L}$ (4)
where $\oplus$ denotes element-wise summation, $W_{Y}$ denotes the weight of the $3\times 3$ convolution to generate the final output.
Figure 2: Architecture of the multi-scale convolution.
### III-B Detailed Architecture
Figure 2 shows the architecture of our multi-scale convolution. In the feature
gathering step, we first reduce the channels of the input features and resize
them to the lowest level. Then we concatenate the gathered multi-scale
features and scatter them to each level. In the feature blending step, the
features at each level are then passed to a shared block for further
processing. In the shared block, the multi-scale features with the original
input features are passed to a scale alignment module to align the spatial
scale of the multi-scale features. The scale-aligned features are then passed
to a $1\times 1$ convolution to merge the channels. The merged features are
then rescaled by the context attention module with attention across scale,
channel and spatial dimension. After that, a $3\times 3$ convolution is
applied in the element-wise summation of the merged features and the original
input to generate the final output for each level.
#### III-B1 Feature Gathering
(a) Full connection. (b) Gather-scatter connection.
Figure 3: Different connections of multi-scale features.
Before merging the multi-scale features from each scale, we should find a
proper way of multi-scale feature representation for each level. The best
solution is to gather features from all scales to each level, by full
connection, shown in Figure 3 (a). In this way, the multi-scale features turn
into the same spatial representation for each level. However, the full
connection manner introduces too many additional operations including
upsampling and downsampling, with a complexity of $\mathcal{O}(CL^{2})$. As an
alternative, we adopt a gather-scatter connection to approximate full
connection.
To reduce the computation cost, we first separately use a $1\times 1$
convolution to reduce the number of channels of each input to $C_{r}$
($C_{r}\leq C$). The output of the $l$-th level is denoted as
$D^{l}\in\mathbb{R}^{C_{r}\times H^{l}\times W^{l}}$, where $H^{l}$ and $W^{l}$ denote the resolution of the input at the $l$-th level. $C_{r}$ is set to 64 in our experiments if not specified.
After that, to simultaneously process the multi-scale features, we gather all
the features to the same level $l_{gl}$, and concatenate them along the
channel dimension. $l_{gl}$ is set to 1 so that the gathered features can keep
the largest resolution to avoid information loss. The gathering process to
generate output $\Phi\in\mathbb{R}^{LC_{r}\times H^{l_{gl}}\times W^{l_{gl}}}$
is:
$E=\{Resize(D^{l},(H^{l_{gl}},W^{l_{gl}}))\}_{l=1}^{L}$ (5)
$\Phi=Concat(E)$ (6)
where $Resize$ denotes the resizing function, $Concat$ denotes concatenation
along channels.
Then we generate the features for further processing by scattering $\Phi$ to each level through resizing. The scattering process to generate $Q$ is:
$Q=\{Resize(\Phi,(H^{l},W^{l}))\}_{l=1}^{L}$ (7)
In this way, for each level, the features from all scales are brought to the spatial scale of the current level, so that the detector gets a proper multi-scale feature representation that can be processed across the scale, channel and spatial dimensions. The complexity of our connection manner is $\mathcal{O}(C_{r}L)$, far less than the $\mathcal{O}(CL^{2})$ of the full connection manner.
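The following is a minimal PyTorch sketch of this gather-scatter step (Eqs. (1)-(2) and (5)-(7)). The module name, the use of adaptive max pooling for downsizing, and nearest-neighbour interpolation for upsizing are our assumptions; the paper only states that pooling is used during feature preparation and that features are resized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def resize(t, size):
    # downsample by pooling (the paper mentions pooling during feature
    # preparation); upsample by nearest-neighbour interpolation
    if size[0] <= t.shape[-2]:
        return F.adaptive_max_pool2d(t, size)
    return F.interpolate(t, size=tuple(size), mode="nearest")

class FeatureGathering(nn.Module):
    """Gather-scatter step of MSConv, Eqs. (1)-(2) and (5)-(7)."""
    def __init__(self, in_channels, c_r=64, num_levels=5):
        super().__init__()
        # per-level 1x1 convs reducing channels to C_r
        self.reduce = nn.ModuleList(
            nn.Conv2d(in_channels, c_r, 1) for _ in range(num_levels))

    def forward(self, xs):                       # xs: list of (N, C, H_l, W_l)
        gather_size = xs[0].shape[-2:]           # l_gl = 1: largest resolution
        d = [r(x) for r, x in zip(self.reduce, xs)]
        e = [resize(t, gather_size) for t in d]          # Eq. (5)
        phi = torch.cat(e, dim=1)                # Eq. (6): (N, L*C_r, H^1, W^1)
        return [resize(phi, x.shape[-2:]) for x in xs]   # Eq. (7)
```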
#### III-B2 Feature Blending
After feature gathering, we blend the features for each level. We first apply
the scale alignment module to neutralize the spatial offset generated by the
pooling operation during feature preparation. Then we merge the multi-scale
features through a $1\times 1$ convolution and use the context attention
module to dynamically rescale the weight of each channel of the merged
features. The network can thus dynamically select the useful features and
suppress the useless features. At each level, the rescaled features are then
element-wisely summed up with the original input and passed to a $3\times 3$
convolution to generate the final output.
Figure 4: The scale alignment module. Each rectangle represents a feature, labelled with its number of channels. "k" denotes the kernel size.
Scale Alignment. The pooling operation we use during feature preparation has
translation invariance, and is not sensitive to position variation. Therefore,
each pixel of the multi-scale features after feature gathering for each level
has a spatial offset from the concatenated features $\Phi$. To deal with this
problem, we propose the scale alignment (SA) module to neutralize the spatial
offset. Figure 4 shows the architecture of SA module. Firstly, we use a
$k\times k$ convolution on the original single-scale input to generate the
deformable offset and mask on the basis of the current scale. The multi-scale
features, together with the offset and mask, are then passed to a $k\times k$ modulated deformable convolution [24] with groups=$L$ to generate the scale-aligned features at level $l$ ($k$ is set to 1 if not specified). A $1\times 1$ convolution is then applied to the scale-aligned features to merge the channels. The merged features $M^{l}$ have the same channel number as $X^{l}$.
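Below is a sketch of the SA module for the default $k=1$, using torchvision's modulated deformable convolution (this requires a torchvision version that supports the mask argument) in place of the mmcv operator [24] used by the authors; collapsing the paper's groups=$L$ setting to a single deformable group is a simplification.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ScaleAlignment(nn.Module):
    def __init__(self, in_channels, ms_channels, k=1):
        super().__init__()
        # offsets (2*k*k channels) and modulation mask (k*k channels),
        # predicted from the original single-scale input X^l (Fig. 4)
        self.offset_mask = nn.Conv2d(in_channels, 3 * k * k, k, padding=k // 2)
        self.dcn = DeformConv2d(ms_channels, ms_channels, k, padding=k // 2)
        self.k = k

    def forward(self, q, x):                     # q = Q^l, x = X^l
        om = self.offset_mask(x)
        n = 2 * self.k * self.k
        offset, mask = om[:, :n], torch.sigmoid(om[:, n:])
        return self.dcn(q, offset, mask)         # scale-aligned features
```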
Figure 5: The context attention module.
Context Attention. The merged features $M^{l}$ contain different context in both the spatial and depth dimensions. As unnecessary context may harm the final feature representation, we should keep the useful features and suppress the useless ones. To achieve this goal, we propose the context attention (CA) module, which rescales the features by an attention across the depth and spatial dimensions. The architecture of the CA module is shown in Figure 5. We first independently use a $3\times 3$ local average pooling (LAP) and a global average pooling (GAP) to extract local features $L^{l}$ and global features $G^{l}$:
$L^{l}=LAP(M^{l}),\ G^{l}=GAP(M^{l})$ (8)
Then independent $1\times 1$ convolutions are separately applied to $L^{l}$ and $G^{l}$ to extract features across channels:
$\mathcal{L}^{l}=W_{\mathcal{L}}\ast L^{l},\
\mathcal{G}^{l}=W_{\mathcal{G}}\ast G^{l}$ (9)
where $\mathcal{L}^{l}$ and $\mathcal{G}^{l}$ are the features generated from
$L^{l}$ and $G^{l}$, respectively. $W_{\mathcal{L}}$ and $W_{\mathcal{G}}$ are
the weights of the convolution of $L^{l}$ and $G^{l}$ shared across all
levels, respectively.
To generate the attention $S^{l}$ across depth (exactly across scale and
channel) and spatial dimension, we apply a $1\times 1$ convolution followed by
a sigmoid function on the element-wise summation of $\mathcal{L}^{l}$ and
$\mathcal{G}^{l}$:
$S^{l}=\sigma(W_{\sigma}\ast(\mathcal{L}^{l}\oplus\mathcal{G}^{l}))$ (10)
where $W_{\sigma}$ denotes the weight of the convolution before sigmoid
function, and $\sigma$ denotes the sigmoid function.
The output of CA module $O^{l}$ is an element-wise product of the merged
features $M^{l}$ and the attention $S^{l}$:
$O^{l}=M^{l}\otimes S^{l}$ (11)
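A minimal sketch of the CA module (Eqs. (8)-(11)) follows; the padding choices and the broadcast of the $1\times 1$ GAP output over the LAP output in the element-wise summation are our reading of the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_local = nn.Conv2d(channels, channels, 1)   # W_L in Eq. (9)
        self.conv_global = nn.Conv2d(channels, channels, 1)  # W_G in Eq. (9)
        self.conv_sigma = nn.Conv2d(channels, channels, 1)   # W_sigma in Eq. (10)

    def forward(self, m):                                    # m = M^l
        local_feat = F.avg_pool2d(m, 3, stride=1, padding=1) # LAP, Eq. (8)
        global_feat = F.adaptive_avg_pool2d(m, 1)            # GAP, Eq. (8)
        l_feat = self.conv_local(local_feat)                 # Eq. (9)
        g_feat = self.conv_global(global_feat)               # Eq. (9)
        # element-wise sum broadcasts the 1x1 global features, Eq. (10)
        s = torch.sigmoid(self.conv_sigma(l_feat + g_feat))
        return m * s                                         # Eq. (11)
```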
(a) RetinaNet head with traditional convolutions.
(b) RetinaNet head with multi-scale convolutions.
Figure 6: RetinaNet head with different convolutions.
TABLE I: Ablation studies of component effectiveness on COCO val-2017. "SA" and "CA" denote the scale alignment and context attention modules, respectively. The number in [] is the relative improvement over the RetinaNet baseline.
SA | CA | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$ | Params (M) | FLOPs (G)
---|---|---|---|---|---|---|---|---|---
RetinaNet baseline | | 33.2 | 52.5 | 35.4 | 15.8 | 37.1 | 46.5 | 37.74 | 95.56
Ours (plain) | | 34.8[+1.6] | 55.1 | 37.2 | 17.0 | 38.8 | 48.8 | 37.60 | 93.20
✓ | | 35.6[+2.4] | 56.1 | 38.2 | 18.7 | 39.9 | 50.3 | 37.70 | 94.13
| ✓ | 35.6[+2.4] | 56.2 | 38.1 | 18.3 | 39.9 | 49.2 | 38.39 | 97.69
✓ | ✓ | 35.9[+2.7] | 56.4 | 38.9 | 18.4 | 40.0 | 50.6 | 38.49 | 98.61
(a) Original image.
(b) Score map w/o SA.
(c) Score map w/ SA.
Figure 7: Visualization of the confidence score maps without and with the scale alignment module.
TABLE II: The effect of different gathering levels on COCO val-2017.
Gathering Level | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---
P7 | 34.6 | 55.2 | 37.0 | 17.6 | 38.6 | 48.1
P5 | 35.2 | 55.7 | 37.7 | 17.4 | 39.1 | 49.2
P3 | 35.9 | 56.4 | 38.9 | 18.4 | 40.0 | 50.6
TABLE III: Comparisons with other pyramid architectures based on the RetinaNet detector on COCO val-2017. The number in [] is the relative improvement over FPN [18].
Method | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$ | Params (M) | FLOPs (G)
---|---|---|---|---|---|---|---|---
FPN [18] | 33.2 | 52.5 | 35.4 | 15.8 | 37.1 | 46.5 | 37.74(1.00x) | 95.56(1.00x)
PANet [12] | 33.4[+0.2] | 52.5 | 35.4 | 16.0 | 37.7 | 46.9 | 40.10(1.06x) | 97.92(1.02x)
PConv [25] | 33.8[+0.6] | 53.8 | 36.1 | 16.7 | 38.1 | 46.9 | 41.28(1.09x) | 96.78(1.01x)
Libra [22] | 33.9[+0.7] | 53.8 | 36.1 | 16.5 | 38.0 | 47.6 | 38.01(1.01x) | 95.75(1.00x)
NAS-FPN [13] | 35.1[+1.9] | 53.9 | 37.3 | 17.1 | 39.7 | 49.8 | 59.72(1.58x) | 138.60(1.45x)
BiFPN [26] | 35.2[+2.0] | 54.4 | 37.7 | 17.5 | 39.5 | 49.3 | 55.60(1.47x) | 122.34(1.28x)
SEPC-Lite [25] | 35.3[+2.1] | 55.3 | 37.6 | 17.6 | 39.4 | 49.9 | 41.37(1.10x) | 96.99(1.01x)
MSConv | 35.9[+2.7] | 56.4 | 38.9 | 18.4 | 40.0 | 50.6 | 38.49(1.02x) | 98.61(1.03x)
#### III-B3 Head Design
In this section, we introduce how to integrate our MSConv into single-stage
detectors. We take RetinaNet as an example to elaborate how to replace the
traditional convolution used in the detection head of single-stage detectors.
Figure 6 shows the difference between the head design with traditional convolutions and that with our MSConvs.
In the original RetinaNet, the multi-scale inputs are separately processed by
a shared head with two branches: classification and regression. The two
branches have independent weights but share the same input. At each branch,
the input features are passed through a fully convolutional network (FCN)
consisting of several (4 by default) stacked convolutions to extract features
specially for classification or regression. Finally, a $3\times 3$ convolution
is applied to the extracted features to generate the final prediction.
It is easy to replace the traditional convolutions with MSConvs at each branch. However, each MSConv still brings additional computation. As a compromise, the two branches share the same MSConvs. To keep the difference between classification and regression, we introduce an extra $3\times 3$ convolution for each branch after the shared MSConvs. The final prediction at each branch is generated by a $3\times 3$ convolution; a sketch of this head design follows.
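Putting the earlier sketches together, the following shows how MSConv (Eqs. (1)-(4)) and the shared head of Figure 6 (b) could be wired up. FeatureGathering, ScaleAlignment and ContextAttention refer to the sketches above; details such as the activation placement and the anchor count are illustrative assumptions, not the authors' code.

```python
import torch.nn as nn
import torch.nn.functional as F

class MSConv(nn.Module):
    """Composition of the sketches above, following Eqs. (1)-(4)."""
    def __init__(self, channels, c_r=64, num_levels=5):
        super().__init__()
        ms = num_levels * c_r
        self.gather = FeatureGathering(channels, c_r, num_levels)
        self.sa = ScaleAlignment(channels, ms)
        self.merge = nn.Conv2d(ms, channels, 1)                 # W_m in Eq. (3)
        self.ca = ContextAttention(channels)
        self.out = nn.Conv2d(channels, channels, 3, padding=1)  # W_Y in Eq. (4)

    def forward(self, xs):
        qs = self.gather(xs)                                    # Eqs. (1)-(2)
        outs = []
        for q, x in zip(qs, xs):
            o = self.ca(self.merge(self.sa(q, x)))              # Eq. (3)
            outs.append(self.out(o + x))                        # Eq. (4)
        return outs

class SharedMSConvHead(nn.Module):
    """Fig. 6(b): a shared MSConv stack, then per-branch 3x3 convs."""
    def __init__(self, channels=256, num_stacks=4, num_anchors=9, num_classes=80):
        super().__init__()
        self.shared = nn.ModuleList(MSConv(channels) for _ in range(num_stacks))
        self.cls_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.reg_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.cls_pred = nn.Conv2d(channels, num_anchors * num_classes, 3, padding=1)
        self.reg_pred = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)

    def forward(self, xs):
        for m in self.shared:                   # shared by both branches
            xs = m(xs)
        cls = [self.cls_pred(F.relu(self.cls_conv(x))) for x in xs]
        reg = [self.reg_pred(F.relu(self.reg_conv(x))) for x in xs]
        return cls, reg
```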
## IV Experiments
### IV-A Dataset and Evaluation Metrics
We carry out our experiments on COCO [8] dataset. We use the data in
train-2017 split containing around 115k images to train our models, and
evaluate the performance for ablation studies on val-2017 split with about 5k
images. Main results are reported on the test-dev split (20k images without
available public annotations). We report all the results in the standard COCO-style average precision (AP).
### IV-B Experimental Settings
For fair comparisons, we conduct our experiments on the MMDetection [27] platform in the PyTorch [28] framework. If not specified, all settings are the same as described in MMDetection [27]. Modulated deformable convolution [24] is applied.
Training Settings. We adopt ResNet-50 [6] as our default backbone network, and RetinaNet [17] as our default object detector. The backbone networks are
initialized with the weight of the models pretrained on ImageNet [7]. Our
models are trained using stochastic gradient descent (SGD) optimizer for 12
epochs with an initial learning rate of 0.01, which is divided by 10 after 8
and 11 epochs. Due to memory limitation, the batchsize (16 by default) will be
adjusted with a linearly scaled learning rate. Momentum and weight decay are
set to 0.9 and $1e^{-4}$, respectively. The resolution of the input images is
set to $640\times 640$.
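The optimization schedule above, written as a PyTorch sketch (the placeholder model stands in for the detector, and the loop body is elided):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, 3)   # placeholder; the real model is the detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[8, 11], gamma=0.1)

for epoch in range(12):
    # ... one training epoch over COCO train-2017 ...
    optimizer.step()         # stands in for the per-iteration updates
    scheduler.step()         # lr is divided by 10 after epochs 8 and 11
```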
Inference Settings. For each input image, we execute the following steps to get the predictions. We collect the predictions with the top 1000 confidences from each prediction layer and use a confidence threshold of 0.05 for each class to filter out low-confidence predictions. For each class, we apply non-maximum suppression (NMS) with a threshold of 0.5 to filter out low-quality predictions. After NMS, we select the predictions with the top 100 confidences as the final results.
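A sketch of this post-processing pipeline using torchvision's NMS follows; the function assumes the per-level top-1000 predictions for one image have already been concatenated into boxes, scores and labels.

```python
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, labels, score_thr=0.05, iou_thr=0.5, max_det=100):
    keep = scores > score_thr                          # confidence filtering
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    kept = []
    for c in labels.unique():                          # class-wise NMS
        idx = (labels == c).nonzero(as_tuple=True)[0]
        kept.append(idx[nms(boxes[idx], scores[idx], iou_thr)])
    kept = torch.cat(kept) if kept else torch.empty(0, dtype=torch.long)
    order = scores[kept].argsort(descending=True)[:max_det]  # top 100
    kept = kept[order]
    return boxes[kept], scores[kept], labels[kept]
```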
### IV-C Ablation Studies
#### IV-C1 The effectiveness of each component
We analyze whether each component of our model is effective in improving detection. The experimental results are listed in Table I. The performance of RetinaNet is shown in the first group, while the performance of our methods is shown in the second group.
The plain-model row of Table I (the first of our results) shows that our plain model without any extra module already achieves better performance than the RetinaNet baseline (+1.6% AP), even with fewer parameters and less computational cost.
If we only introduce the scale alignment module, the detector gets a 0.8% AP improvement over our plain model, which is 2.4% higher than the original RetinaNet. As shown in Figure 7, combined with the SA module, the foreground regions of the objects are more accurate. Introducing the context attention module, the detector gets an improvement of 2.4% AP over the RetinaNet baseline, with a small increase in computation.
The last line of the table reveals that applying both SA and CA module can
boost the performance of the detector to 35.9% AP, which is 2.7% higher than
that of the baseline.
#### IV-C2 The effectiveness of different gathering levels
In this section, we analyze the effect of different gathering levels of MSConv. As shown in Table II, as the gathering level goes down (from P7 to P3), the performance under all metrics improves. The results show that the level with the largest scale (P3) keeps the most information with the least loss, which justifies the effectiveness of gathering multi-scale features to the level with the largest resolution.
TABLE IV: MSConv applied to single-stage and two-stage detectors on COCO val-2017. The number in [] is the relative improvement over the original detector.
Detector | MSConv | $AP$ | $AP_{50}$ | $AP_{75}$
---|---|---|---|---
single-stages | | | |
RetinaNet | | 33.2 | 52.5 | 35.4
RetinaNet | ✓ | 35.9[+2.7] | 56.4 | 38.9
FreeAnchor | | 36.4 | 54.6 | 39.1
FreeAnchor | ✓ | 39.0[+2.6] | 58.1 | 42.1
RepPoints | | 35.8 | 56.2 | 37.9
RepPoints | ✓ | 37.5[+1.7] | 57.6 | 40.4
two-stages | | | |
Faster R-CNN | | 34.6 | 55.7 | 36.9
Faster R-CNN | ✓ | 37.2[+2.6] | 57.8 | 39.9
Mask R-CNN | | 35.2 | 56.4 | 37.9
Mask R-CNN | ✓ | 38.2[+3.0] | 57.6 | 43.0
Cascade R-CNN | | 38.1 | 55.9 | 41.1
Cascade R-CNN | ✓ | 39.7[+1.6] | 57.6 | 43.0
TABLE V: Comparisons with state-of-the-art methods on COCO test-dev under single-model and single-scale testing settings.
Method | Backbone | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---|---
Two-stage methods | | | | | | |
Faster R-CNN w/ FPN [18] | ResNet-101 | 36.2 | 59.1 | 39.0 | 18.2 | 39.0 | 48.2
Mask R-CNN [4] | ResNet-101 | 38.2 | 60.3 | 41.7 | 20.1 | 41.1 | 50.2
Mask R-CNN [4] | ResNeXt-101 | 39.8 | 62.3 | 43.4 | 22.1 | 43.2 | 51.2
LH R-CNN [29] | ResNet-101 | 41.5 | - | - | 25.2 | 45.3 | 53.1
Cascade R-CNN [5] | ResNet-101 | 42.8 | 62.1 | 46.3 | 23.7 | 45.5 | 55.2
TridentNet [30] | ResNet-101 | 42.7 | 63.6 | 46.5 | 23.9 | 46.6 | 56.6
TridentNet [30] | ResNet-101-DCN | 46.8 | 67.6 | 51.5 | 28.0 | 51.2 | 60.5
TSD [31] | ResNet-101 | 43.2 | 64.0 | 46.9 | 24.0 | 46.3 | 55.8
One-stage methods | | | | | | |
RetinaNet [17] | ResNet-101 | 39.1 | 59.1 | 42.3 | 21.8 | 42.7 | 50.2
RetinaNet [17] | ResNeXt-101 | 40.8 | 61.1 | 44.1 | 24.1 | 44.2 | 51.2
FreeAnchor [1] | ResNet-101 | 43.1 | 62.2 | 46.4 | 24.5 | 46.1 | 54.8
FreeAnchor [1] | ResNeXt-101 | 44.9 | 64.3 | 48.5 | 26.8 | 48.3 | 55.9
FCOS [19] | ResNet-101 | 41.5 | 60.7 | 45.0 | 24.4 | 44.8 | 51.6
FCOS [19] | ResNeXt-101 | 44.7 | 64.1 | 48.4 | 27.6 | 47.5 | 55.6
ATSS [32] | ResNet-101 | 43.6 | 62.1 | 47.4 | 26.1 | 47.0 | 53.6
ATSS [32] | ResNet-101-DCN | 46.3 | 64.7 | 50.4 | 27.7 | 49.8 | 58.4
ATSS [32] | ResNeXt-101-DCN | 47.7 | 66.6 | 52.1 | 29.3 | 50.8 | 59.7
SAPD [33] | ResNet-101 | 43.5 | 63.6 | 46.5 | 24.9 | 46.8 | 54.6
SAPD [33] | ResNet-101-DCN | 46.0 | 65.9 | 49.6 | 26.3 | 49.2 | 59.6
SAPD [33] | ResNeXt-101-DCN | 46.6 | 66.6 | 50.0 | 27.3 | 49.7 | 60.7
RepPoints v2 [34] | ResNeXt-101 | 47.8 | 67.3 | 51.7 | 29.3 | 50.7 | 59.5
RepPoints v2 [34] | ResNet-101-DCN | 48.1 | 67.5 | 51.8 | 28.7 | 50.9 | 60.8
GFL [35] | ResNet-101 | 45.0 | 63.7 | 48.9 | 27.2 | 48.8 | 54.5
GFL [35] | ResNet-101-DCN | 47.3 | 66.3 | 51.4 | 28.0 | 51.1 | 59.2
GFL [35] | ResNeXt-101-DCN | 48.2 | 67.4 | 52.6 | 29.2 | 51.7 | 60.2
FreeAnchor w/ MSConv | ResNet-101-DCN | 47.7 | 67.5 | 52.2 | 29.6 | 51.2 | 58.5
FreeAnchor w/ MSConv | ResNeXt-101-DCN | 48.9 | 68.8 | 53.4 | 31.8 | 51.8 | 60.2
### IV-D Comparisons with other Pyramid Architectures
To justify that our model is more effective and efficient, we compare the performance of our method with state-of-the-art pyramid architectures based on the RetinaNet detector in Table III. Though NAS-FPN [13] and BiFPN [26] achieve large improvements over FPN [18], they require much more computational cost. The last line of the table shows that our model achieves the best performance among the state-of-the-art methods, with only a small increase of parameters and FLOPs.
### IV-E Application in Single-stage and Two-stage Detectors
In this section, we conduct experiments to evaluate the effectiveness of our model on single-stage detectors, including anchor-based and anchor-free ones such as FreeAnchor [1] and RepPoints [2], as well as on two-stage detectors.
As shown in the first group of Table IV, when combined with our method, the
single-stage detectors can get a significant improvement in AP. MSConv can
provide an increase of larger than 2.5% AP for the single-stage detectors. The
comparison of MSConv in various single-stage detectors is also shown in Figure
1.
In addition to single-stage detectors, MSConv is also effective for two-stage detectors. The second group in Table IV lists the experimental results of MSConv applied to two-stage detectors. When combined with MSConvs, two-stage detectors get up to 3.0% improvement in AP, with Mask R-CNN benefiting the most.
### IV-F Comparison with State-of-the-art Methods
In this section, we compare the performance of our method on COCO test-dev
split with state-of-the-art methods under single model single-scale testing
settings, which are shown in Table V. We use FreeAnchor as our detector. For
training, we adopt 2$\times$ learning schedule with scale-jitter. With
ResNet-101-DCN backbone, the AP (47.7%) of our method surpasses most of the
state-of-the-art methods using the same backbone. The backbone of
ResNeXt-101-DCN further improves our AP to 48.9%, which surpasses the AP of
all other competitors.
## V Conclusions
In this paper, we propose the multi-scale convolution (MSConv), an extension of the traditional convolution that accepts and deals with multi-scale input. The proposed MSConv can simultaneously process multi-scale input across the channel, spatial and scale dimensions. MSConv can dramatically boost the detection performance of single-stage object detectors with only a small increase of computation. The results also suggest that MSConv is flexible and brings considerable improvement to two-stage object detectors as well.
## References
* [1] X. Zhang, F. Wan, C. Liu, R. Ji, and Q. Ye, “FreeAnchor: Learning to match anchors for visual object detection,” in _NeurIPS_ , 2019.
* [2] Z. Yang, S. Liu, H. Hu, L. Wang, and S. Lin, “Reppoints: Point set representation for object detection,” in _ICCV_ , 2019.
* [3] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in _NeurIPS_ , 2015.
* [4] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in _ICCV_ , 2017.
* [5] Z. Cai and N. Vasconcelos, “Cascade r-cnn: Delving into high quality object detection,” in _CVPR_ , 2018.
* [6] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _CVPR_ , 2016.
* [7] J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2009.
* [8] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in _ECCV_ , 2014.
* [9] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in _CVPR_ , 2016.
* [10] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in _ECCV_ , 2016.
* [11] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in _ICLR_ , 2015.
* [12] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in _CVPR_ , 2018.
* [13] G. Ghiasi, T.-Y. Lin, and Q. V. Le, “NAS-FPN: Learning scalable feature pyramid architecture for object detection,” in _CVPR_ , 2019.
* [14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in _CVPR_ , 2014.
* [15] R. Girshick, “Fast r-cnn,” in _ICCV_ , 2015.
* [16] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, “Hybrid task cascade for instance segmentation,” in _CVPR_ , 2019.
* [17] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in _ICCV_ , 2017.
* [18] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in _CVPR_ , 2017.
* [19] Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: Fully convolutional one-stage object detection,” in _ICCV_ , 2019.
* [20] T. Kong, F. Sun, H. Liu, Y. Jiang, L. Li, and J. Shi, “Foveabox: Beyond anchor-based object detector,” _TIP_ , 2020.
* [21] T. Kong, F. Sun, C. Tan, H. Liu, and W. Huang, “Deep feature pyramid reconfiguration for object detection,” in _ECCV_ , 2018.
* [22] J. Pang, K. Chen, J. Shi, H. Feng, W. Ouyang, and D. Lin, “Libra R-CNN: Towards balanced learning for object detection,” in _CVPR_ , 2019.
* [23] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in _CVPR_ , 2018.
* [24] X. Zhu, H. Hu, S. Lin, and J. Dai, “Deformable convnets v2: More deformable, better results,” in _CVPR_ , 2019.
* [25] X. Wang, S. Zhang, Z. Yu, L. Feng, and W. Zhang, “Scale-equalizing pyramid convolution for object detection,” in _CVPR_ , 2020.
* [26] M. Tan, R. Pang, and Q. V. Le, “Efficientdet: Scalable and efficient object detection,” in _CVPR_ , 2020.
* [27] K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu _et al._ , “MMDetection: Open mmlab detection toolbox and benchmark,” _arXiv:1906.07155_ , 2019.
* [28] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga _et al._ , “Pytorch: An imperative style, high-performance deep learning library,” in _NeurIPS_ , 2019.
* [29] Z. Li, C. Peng, G. Yu, X. Zhang, Y. Deng, and J. Sun, “Light-head R-CNN: In defense of two-stage object detector,” _arXiv:1711.07264_ , 2017.
* [30] Y. Li, Y. Chen, N. Wang, and Z. Zhang, “Scale-aware trident networks for object detection,” in _ICCV_ , 2019.
* [31] G. Song, Y. Liu, and X. Wang, “Revisiting the sibling head in object detector,” in _CVPR_ , 2020.
* [32] S. Zhang, C. Chi, Y. Yao, Z. Lei, and S. Z. Li, “Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection,” in _CVPR_ , 2020.
* [33] C. Zhu, F. Chen, Z. Shen, and M. Savvides, “Soft anchor-point object detection,” in _ECCV_ , 2020.
* [34] Y. Chen, Z. Zhang, Y. Cao, L. Wang, S. Lin, and H. Hu, “Reppoints v2: Verification meets regression for object detection,” in _NeurIPS_ , 2020\.
* [35] X. Li, W. Wang, L. Wu, S. Chen, X. Hu, J. Li, J. Tang, and J. Yang, “Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection,” in _NeurIPS_ , 2020.
|
# Applying Machine Learning to Study Fluid Mechanics
Steven L. Brunton1∗
1 Department of Mechanical Engineering, University of Washington, Seattle, WA
98195, United States
###### Abstract
This paper provides a short overview of how to use machine learning to build
data-driven models in fluid mechanics. The process of machine learning is
broken down into five stages: (1) formulating a problem to model, (2)
collecting and curating training data to inform the model, (3) choosing an
architecture with which to represent the model, (4) designing a loss function
to assess the performance of the model, and (5) selecting and implementing an
optimization algorithm to train the model. At each stage, we discuss how prior
physical knowledge may be embedded into the process, with specific examples
from the field of fluid mechanics.
_Keywords–_ Machine learning, fluid mechanics, physics-informed machine
learning, neural networks, deep learning
∗ Corresponding author (sbrunton@uw.edu).
## 1 Introduction
The field of fluid mechanics is rich with data and rife with problems, which
is to say that it is a perfect playground for machine learning. Machine
learning is the art of building models from data using optimization and
regression algorithms. Many of the challenges in fluid mechanics may be posed
as optimization problems, such as designing a wing to maximize lift while
minimizing drag at cruise velocities, estimating a flow field from limited
measurements, controlling turbulence for mixing enhancement in a chemical
plant or drag reduction behind a vehicle, among myriad others. These
optimization tasks fit well with machine learning algorithms, which are
designed to handle nonlinear and high-dimensional problems. In fact, machine
learning and fluid mechanics both tend to rely on the same assumption that
there are patterns that can be exploited, even in high-dimensional systems
[1]. Often, the machine learning algorithm will model some aspect of the
fluid, such as the lift profile given a particular airfoil geometry, providing
a _surrogate_ that may be optimized over. Machine learning may also be used to
directly solve the fluid optimization task, such as designing a machine
learning model to manipulate the behavior of the fluid for some engineering
objective with active control [2, 3, 4].
In either case, it is important to realize that machine learning is _not_ an
automatic or turn-key procedure for extracting models from data. Instead, it
requires expert human guidance at every stage of the process, from deciding on
the problem, to collecting and curating data that might inform the model, to
selecting the machine learning architecture best capable of representing or
modeling the data, to designing custom loss functions to quantify performance
and guide the optimization, to implementing specific optimization algorithms
to train the machine learning model to minimize the loss function over the
data. A better name for machine learning might be “expert humans teaching
machines how to learn a model to fit some data,” although this is not as
catchy. Particularly skilled (or lucky!) experts may design a learner or a
learning framework that is capable of learning a variety of tasks,
generalizing beyond the training data, and mimicking other aspects of
intelligence. However, such artificial intelligence is rare, even more so than
human intelligence. The majority of machine learning models are just that,
models, which should fit directly into the decades old practice of model-based
design, optimization, and control [5].
With its unprecedented success on many challenging problems in computer vision
and natural language processing, machine learning is rapidly entering the
physical sciences, and fluid mechanics is no exception. The simultaneous
promise, and over-promise, of machine learning is causing many researchers to
have a healthy mixture of optimism and skepticism. In both cases, there is a
strong desire to understand the uses and limitations of machine learning, as
well as best practices for how to incorporate it into existing research and
development workflows. It is also important to realize that while it is now
relatively simple to train a machine learning model for a well-defined task,
it is still quite difficult to create a new model that outperforms traditional
numerical algorithms and physics-based models. Incorporating partially known
physics into the machine learning pipeline will tend to improve model
generalization as well as interpretability and explainability, which are key
elements of modern machine learning [6, 7].
Figure 1: Schematic of the five stages of machine learning on an example of
reduced-order modeling. In this case, the goal is to learn a low dimensional
coordinate system $\mathbf{z}=\bm{f}_{1}(\mathbf{x},\bm{\theta}_{1})$ from
data in a high-dimensional representation $\mathbf{x}$, along with a dynamical
system model $\dot{\mathbf{z}}=\bm{f}_{2}(\mathbf{z},\bm{\theta}_{2})$ for how
the state $\mathbf{z}$ evolves in time. Finally, this latent state derivative
$\dot{\mathbf{z}}$ must be able to approximate the high dimensional derivative
$\dot{\mathbf{x}}$ through the decoder
$\dot{\mathbf{x}}\approx\bm{f}_{3}(\dot{\mathbf{z}},\bm{\theta}_{3})$. The
loss function $\mathcal{L}(\bm{\theta},\mathbf{X})$ defines how well the model
performs, averaged over the data $\mathbf{X}$. Finally, the parameters
$\bm{\theta}=\{\bm{\theta}_{1},\bm{\theta}_{2},\bm{\theta}_{3}\}$ are found
through optimization.
## 2 Physics Informed Machine Learning for Fluid Mechanics
Applied machine learning may be separated into a few canonical steps, each of
which provides an opportunity to embed prior physical knowledge: (1) choosing
the problem to model or the question to answer; (2) choosing and curating the
data used to train the model; (3) deciding on a machine learning architecture
to best represent or model this data; (4) designing loss functions to quantify
performance and to guide the learning process; and (5) implementing an
optimization algorithm to train the model to minimize the loss function over
the training data. See Fig. 1 for a schematic of this process on the example
of reduced-order modeling. This organization of steps is only approximate, and
there are considerable overlaps and tight interconnections between each stage.
For example, choosing the problem to model and choosing the data to inform
this model are two closely related decisions. Similarly, designing a custom
loss function and implementing an optimization algorithm to minimize this loss
function are tightly coupled. In most modern machine learning workflows, it is
common to iteratively revisit earlier stages based on the outcome at later
stages, so that the machine learning researcher is constantly asking new
questions and revising the data, the architecture, the loss functions, and the
optimization algorithm to improve performance. Here, we discuss these
canonical stages of machine learning, investigate how to incorporate physics,
and review examples in the field of fluid mechanics. This discussion is
largely meant to be a high-level overview, and many more details can be found
in recent reviews [8, 9, 5, 10].
### 2.1 The problem
Data science is the art of asking and answering questions with data. The sub-
field of machine learning is concerned with leveraging historical data to
build models that may be deployed to automatically answer these questions,
ideally in real-time, given new data. It is critical to select the right
system to model, motivated by a problem that is both important and tractable.
Choosing a problem involves deciding on input data that will be readily
available in the future, and output data that will represent the desired
output, or prediction, of the model. The output data should be determinable
from the inputs, and the functional relationship between these is precisely
what the machine learning model will be trained to capture.
The nature of the problem, specifically what outputs will be modeled given
what inputs, determines the large classes of machine learning algorithms:
_supervised_ , _unsupervised_ , and _reinforcement learning_. In supervised
learning, the training data will have expert labels that should be predicted
or modeled with the machine learning algorithm. These output labels may be
discrete, such as a categorical label of a ‘dog’ or a ‘cat’ given an input
image, in which case the task is one of _classification_. If the labels are
continuous, such as the average value of lift or drag given a specified
airfoil geometry, then the task is one of _regression_. In unsupervised
learning, there are no expert labels, and structure must be extracted from the
input data alone; thus, this is often referred to as _data mining_ , and
constitutes a particularly challenging field of machine learning. Again, if
the structure in the data is assumed to be discrete, then the task is
_clustering_. After the clusters are identified and characterized, these
groupings may be used as proxy labels to then classify new data. If the
structure in the data is assumed to be continuously varying, then the task is
typically thought of as an _embedding_ or _dimensionality reduction_ task.
Principal component analysis (PCA) or proper orthogonal decomposition (POD)
may be thought of as unsupervised learning tasks that seek a continuous
embedding of reduced dimension [11]. Reinforcement learning is a third, large
branch of machine learning research, in which an _agent_ learns to make
control decisions to interact with an environment for some high level
objective [12]. Examples include learning how to play games [13, 14], such as
chess and go.
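To make the unsupervised embedding task concrete, the following is a minimal sketch of POD/PCA computed via the singular value decomposition. The snapshot matrix below is random data standing in for real flow fields, and the shapes and number of retained modes are illustrative choices, not prescriptions.

```python
import numpy as np

# Synthetic "flow" data: 1000 spatial points, 200 snapshots in time.
# In practice X would hold velocity or vorticity fields from simulation
# or experiment.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 200))

# Subtract the temporal mean (the "base flow") before computing modes.
X_fluct = X - X.mean(axis=1, keepdims=True)

# POD/PCA via the economy-size SVD: columns of U are spatial modes,
# singular values S measure the energy captured by each mode.
U, S, Vt = np.linalg.svd(X_fluct, full_matrices=False)

# Keep the r most energetic modes and project snapshots into the
# low-dimensional coordinates z (the continuous embedding).
r = 10
Phi = U[:, :r]            # spatial POD modes
z = Phi.T @ X_fluct       # reduced coordinates, shape (r, 200)
energy = (S[:r] ** 2).sum() / (S ** 2).sum()
print(f"Fraction of energy captured by {r} modes: {energy:.3f}")
```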
#### Embedding physics:
Deciding on what phenomena to model with machine learning is often inherently
related to the underlying physics. Although classical machine learning has
been largely applied to “static” tasks, such as image classification and the
placement of advertisements, increasingly it is possible to apply these
techniques to model physical systems that evolve in time according to some
_rules_ or _physics_. For example, we may formulate a learning problem to find
and represent a conserved quantity, such as a Hamiltonian, purely from data
[15]. Alternatively, the machine learning task may be to model time-series
data as a differential equation, with the learning algorithm representing the
dynamical system [16, 17, 18, 19, 20]. Similarly, the task may involve
learning a coordinate transformation where these dynamics become simplified in
some _physical_ way; i.e., coordinate transformations to linearize or
diagonalize/decouple dynamics [21, 22, 23, 24, 25, 26, 27, 28].
#### Examples in fluid mechanics:
There are many _physical_ modeling tasks in fluid mechanics that are
benefiting from machine learning [9, 5]. A large field of study focuses on
formulating turbulence closure modeling as a machine learning problem [8, 29],
such as learning models for the Reynolds stresses [30, 31] or sub-gridscale
turbulence [32, 33]. Designing useful input features is also an important way
that prior physical knowledge is incorporated into turbulence closure modeling
[34, 35, 36]. Similarly, machine learning has recently been focused on the
problem of improving computational fluid dynamics (CFD) solvers [37, 38, 39,
40]. Other important problems in fluid mechanics that benefit from machine
learning include super-resolution [41, 42], robust modal decompositions [1,
43, 44], network and cluster modeling [45, 46, 47], control [48, 4] and
reinforcement learning [49, 50], and design of experiments in cyberphysical
systems [51]. Aerodynamics is a large related field with significant data-
driven advances [52]. The very nature of these problems embeds the learning
process into a larger physics-based framework, so that the models are more
physically relevant by construction.
### 2.2 The data
Data is the lifeblood of machine learning, and our ability to build effective
models relies on what data is available or may be collected. As discussed
earlier, choosing data to inform a model is closely related to choosing what
to model in the first place, and therefore this stage cannot be strictly
separated from the choice of a problem above. The quality and quantity of data
directly affects the resulting machine learning model. Many machine learning
architectures, such as deep neural networks, are essentially sophisticated
interpolation engines, and so having a diversity of training data is essential
to these models being useful on unseen data. For example, modern deep
convolutional neural networks rose to prominence with their unprecedented
classification accuracy [53] on the ImageNet data base [54], which contains
over $14$ million labeled images with over $20,000$ categories, providing a
sufficiently large and rich set of examples for training. This pairing of a
vast labeled data set with a novel deep learning architecture is widely
regarded as the beginning of the modern era of deep learning [55].
#### Embedding physics:
The training data provides several opportunities to embed prior physical
knowledge. If a system is known to exhibit a symmetry, such as translational or
rotational invariance, then it is possible to augment and enrich the training
data with shifted or rotated examples. More generally, it is often assumed
that with an abundance of training data, these physical invariances will
automatically be learned by a sufficiently expressive architecture. However,
this approach tends to require considerable resources, both to collect and
curate the data, as well as to train increasingly large models, making it more
appropriate for industrial scale, rather than academic scale, research. In
contrast, it is also possible to use physical intuition to craft new features
from the training data, for example by applying a coordinate transformation
that may simplify the representation or training. Physical data often comes
from multiple sources with different fidelity, such as from numerical
simulations, laboratory experiments, and in-flight tests. This is an important
area of research for flight testing and unsteady aerodynamics [52], and
recently physics informed neural networks have been used with multifidelity
data to approximate PDEs [56].
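As a minimal sketch of the augmentation idea, one might enrich a set of scalar flow snapshots with shifted and reflected copies before training, so that a system assumed to be translation- and reflection-invariant is represented as such in the data. The shifts, the reflection axis, and the array shapes here are illustrative assumptions, and sign conventions for vector quantities would need additional care.

```python
import numpy as np

def augment_with_symmetries(snapshots: np.ndarray) -> np.ndarray:
    """Enrich training data with symmetry-transformed copies.

    snapshots: array of shape (n_samples, ny, nx), e.g. scalar vorticity
    fields assumed invariant under periodic horizontal shifts and
    up-down reflection. Returns the augmented data set.
    """
    augmented = [snapshots]
    # Translational invariance: periodic shifts along x.
    for shift in (8, 16, 24):
        augmented.append(np.roll(snapshots, shift, axis=2))
    # Reflection invariance: flip in y (scalars flip directly; vector
    # components would also need a sign change).
    augmented.append(snapshots[:, ::-1, :])
    return np.concatenate(augmented, axis=0)

data = np.random.default_rng(1).standard_normal((100, 64, 64))
print(augment_with_symmetries(data).shape)  # (500, 64, 64)
```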
#### Examples in fluid mechanics:
Fluids data is notoriously vast and high-dimensional, with individual flow
fields often requiring millions (or more!) degrees of freedom to characterize.
Moreover, these flow fields typically evolve in time, resulting in a time
series of multiple snapshots. Although vast in the spatial and/or temporal
dimensions, data is often rather sparse in parameter space, as it is expensive
to numerically or experimentally investigate multiple geometries, Reynolds
numbers, etc. Thus there are many algorithms designed for both rich and sparse
data. Other considerations involve exciting transients and observing how the
system evolves when it is away from its natural state. In many other cases,
fluids data might be quite limited, for example given by time-series data from
a few pressure measurements on the surface of an airfoil, or from force
recordings on an experimental turbine.
### 2.3 The architecture
Once a problem has been identified, and data is collected and curated, it is
necessary to choose an architecture with which to represent the machine
learning model. Typically, a machine learning model is a function that maps
inputs to outputs
$\displaystyle\mathbf{y}=\mathbf{f}(\mathbf{x};\bm{\theta})$ (1)
and this function is generally represented within a specified family of
functions parameterized by values in $\bm{\theta}$. For example, a linear
regression model would model outputs as a linear function of the inputs, with
$\bm{\theta}$ parameterizing this linear map, or matrix. Neural networks have
emerged as a particularly powerful and flexible class of models to represent
functional relationships between data, and they have been shown to be able to
approximate arbitrarily complex functions with sufficient data and depth [57,
58]. There is a tremendous variety of potential neural network architectures
[11], limited only by the imagination of the human designer. The most common
architecture is a simple feedforward network, in which data enters through an
input layer and maps sequentially through a number of computational layers
until an output layer. Each layer consists of nodes, where data from nodes in
the previous layer are combined in a weighted sum and processed through an
activation function, which is typically nonlinear. In this way, neural
networks are fundamentally compositional in nature. The parameters
$\bm{\theta}$ determine the network weights for how data is passed from one
layer to the next, i.e. the weighted connectivity matrices for how nodes are
connected in adjacent layers. The overarching network topology (i.e., how many
layers, how large, what type of activation functions, etc.) is specified by
the architect or determined in a meta-optimization, thus determining the
family of functions that may be approximated by that class of network. Then,
the network weights for the specific architecture are optimized over the data
to minimize a given loss function; these stages are described next.
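To make Eq. (1) concrete, the following sketch defines a small feedforward network in PyTorch whose weights play the role of the parameters $\bm{\theta}$; the layer widths, activation choice, and the interpretation of inputs and outputs are illustrative assumptions rather than a recommended design.

```python
import torch
import torch.nn as nn

# y = f(x; theta): a feedforward network mapping a batch of inputs through
# weighted sums and nonlinear activations. Layer widths are placeholders.
model = nn.Sequential(
    nn.Linear(8, 64),   # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 64),  # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 2),   # output layer, e.g. two quantities of interest
)

x = torch.randn(32, 8)  # a batch of 32 input vectors
y = model(x)            # shape (32, 2)
print(sum(p.numel() for p in model.parameters()), "parameters in theta")
```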
It is important to note that not all machine learning architectures are neural
networks, although they are one of the most powerful and expressive modern
architectures, powered by increasingly big data and high performance
computing. Before the success of deep convolutional networks on the ImageNet
dataset, neural networks were not even mentioned in the list of top ten
machine learning algorithms [59]. Random forests [60] and support vector
machines [61] are two other leading architectures for supervised learning.
Bayesian methods are also widely used, especially for dynamical systems [62].
Genetic programming has also been widely used to learn human-interpretable,
yet flexible representations of data for modeling [63, 16, 64, 65] and control
[4]. In addition, standard linear regression and generalized linear regression
are still widely used for modeling time-series data, especially in fluids. The
dynamic mode decomposition (DMD) [17, 66, 1] employs linear regression with a
low-rank constraint in the optimization to find dominant spatiotemporal
coherent structures that evolve linearly in time. The sparse identification of
nonlinear dynamics (SINDy) [18] algorithm employs generalized linear
regression, with either a sparsity promoting loss function [67] or a sparse
optimization algorithm [18, 68], to identify a differential equation model
with as few model terms as are necessary to fit the data.
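The following is a minimal sketch of the sequentially thresholded least-squares idea behind SINDy, applied to data generated from a known linear system so the recovered coefficients can be checked against the truth. The candidate library, threshold value, and integration scheme are illustrative choices, not the reference implementation.

```python
import numpy as np

# Generate data from a damped oscillator dx/dt = A x.
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
dt, n = 0.01, 5000
X = np.zeros((n, 2)); X[0] = [2.0, 0.0]
for k in range(n - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])    # forward Euler (illustrative)
dX = np.gradient(X, dt, axis=0)          # numerical time derivatives

# Candidate library Theta(X) = [1, x1, x2, x1^2, x1*x2, x2^2].
x1, x2 = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones(n), x1, x2, x1**2, x1*x2, x2**2])

def stlsq(Theta, dX, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: regress, zero out small
    coefficients, and re-fit on the surviving library terms."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dX.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j],
                                             rcond=None)[0]
    return Xi

print(np.round(stlsq(Theta, dX), 3))  # approximately recovers the entries of A
```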
#### Embedding physics:
Choosing a machine learning architecture with which to model the training data
is one of the most intriguing opportunities to embed physical knowledge into
the learning process. Among the simplest choices are convolutional networks
for translationally invariant systems, and recurrent networks, such as long-
short-time memory (LSTM) networks [20] or reservoir computing [19, 69], for
systems that evolve in time. LSTMs have recently been used to predict
aeroelastic responses across a range of Mach numbers [70]. More generally,
equivariant networks seek to encode various symmetries by construction, which
should improve accuracy and reduce data requirements for physical systems [71,
72, 73, 74]. Autoencoder networks enforce the physical notion that there
should be low-dimensional structure, even for high-dimensional data, by
imposing an information bottleneck, given by a constriction of the number of
nodes in one or more layers of the network. Such networks uncover nonlinear
manifolds where the data is compactly represented, generalizing the linear
dimensionality reduction obtained by PCA and POD. It is also possible to embed
physics more directly into the architecture, for example by incorporating
Hamiltonian [75, 76] or Lagrangian [77, 78] structure. There are numerous
successful examples of physics-informed neural networks (PINNs) [79, 80, 81,
82, 83], which solve supervised learning problems while being constrained to
satisfy a governing physical law. Graph neural networks have also shown the
ability to learn generalizable physics in a range of challenging domains [84,
64, 85]. Deep operator networks [86] are able to learn continuous operators,
such as governing partial differential equations, from relatively limited
training data.
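As a minimal sketch of the information-bottleneck idea (all dimensions here are illustrative), an autoencoder constricts the network to a low-dimensional latent state and asks that the input be reconstructed from it:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Encoder-bottleneck-decoder; the narrow latent layer enforces the
    physical prior that the data live near a low-dimensional manifold."""
    def __init__(self, n_state=1024, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_state, 128), nn.ReLU(), nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(), nn.Linear(128, n_state))

    def forward(self, x):
        z = self.encoder(x)            # low-dimensional coordinates
        return self.decoder(z), z

model = Autoencoder()
x = torch.randn(16, 1024)              # batch of flow snapshots (illustrative)
x_hat, z = model(x)
print(z.shape, float(nn.functional.mse_loss(x_hat, x)))
```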
#### Examples in fluid mechanics:
There are numerous examples of custom neural network architectures being used
to enforce physical solutions for applications in fluid mechanics. For example,
Ling et al. [30] designed a custom neural network layer that enforced Galilean
invariance in the Reynolds stress tensors that they were modeling. Related
Reynolds stress models have been developed using the SINDy sparse modeling
approach [87, 88, 89]. Hybrid models that combine linear system identification
and nonlinear neural networks have been used to model complex aeroelastic
systems [90]. The hidden fluid mechanics (HFM) approach is a physics-informed
neural network strategy that encodes the Navier-Stokes equations while being
flexible to the boundary conditions and geometry of the problem, enabling
impressive physically quantifiable flow field estimations from limited data
[91]. Sparse sensing has also been used to recover pressure distributions
around airfoils [92]. The Fourier neural operator is a novel operator network
that performs super-resolution upscaling and simulation modeling tasks [93].
Equivariant convolutional networks have been designed and applied to enforce
symmetries in high-dimensional complex systems from fluid dynamics [73].
Physical invariances have also been incorporated into neural networks for
subgrid-scale scalar flux modeling [94]. Lee and Carlberg [95] recently showed
how to incorporate deep convolutional autoencoder networks into the broader
reduced-order modeling framework [96, 97, 98], taking advantage of the
superior dimensionality reduction capabilities of deep autoencoders.
### 2.4 The loss function
The loss function is how we quantify how well the model is performing, often
on a variety of tasks. For example, the $L_{2}$ error between the model output
and the true output, averaged over the input data, is a common term in the
loss function. In addition, other terms may be added to regularize the
optimization (e.g., the $L_{1}$ or $L_{2}$ norm of the parameters
$\bm{\theta}$ to promote parsimony and prevent overfitting). Thus, the loss
function typically balances multiple competing objectives, such as model
performance and model complexity. The loss function may also incorporate terms
used to promote a specific behavior across different sub-networks in a neural
network architecture. Importantly, the loss function will provide valuable
information used to approximate gradients required to optimize the parameters.
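A minimal sketch of such a composite loss, balancing an $L_{2}$ data-fit term against an $L_{1}$ penalty on the parameters $\bm{\theta}$; the regularization weight is an arbitrary illustrative value.

```python
import torch
import torch.nn as nn

def composite_loss(y_pred, y_true, model, l1_weight=1e-3):
    """L2 data misfit plus an L1 penalty on the parameters theta,
    balancing model performance against model complexity."""
    data_term = torch.mean((y_pred - y_true) ** 2)
    reg_term = sum(p.abs().sum() for p in model.parameters())
    return data_term + l1_weight * reg_term

model = nn.Linear(4, 1)
x, y = torch.randn(10, 4), torch.randn(10, 1)
print(float(composite_loss(model(x), y, model)))
```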
#### Embedding physics:
Most of the physics-informed architectures described above involve custom loss
functions to promote the efficient training of accurate models. It is also
possible to incorporate physical priors, such as sparsity, by adding $L_{1}$
or $L_{0}$ regularizing loss terms on the parameters in $\bm{\theta}$. In
fact, parsimony has been a central theme in physical modeling for centuries,
where it is believed that balancing model complexity with descriptive
capability is essential in developing models that generalize. The sparse
identification of nonlinear dynamics algorithm [18] learns dynamical systems
models with as few terms from a library of candidate terms as are needed to
describe the training data. There are several formulations involving different
loss terms and optimization algorithms that promote additional physical
notions, such as stability [99] and energy conservation [100]. Stability
promoting loss functions based on notions of Lyapunov stability have also been
incorporated into autoencoders, with impressive results on fluid systems
[101].
#### Examples in fluid mechanics:
Sparse nonlinear modeling has been used extensively in fluid mechanics, adding
sparsity-promoting loss terms to learn parsimonious models that prevent
overfitting and generalize to new scenarios. SINDy has been used to generate
reduced-order models for how dominant coherent structures evolve in a flow for
a range of configurations [100, 102, 103, 104, 105]. These models have also
been extended to develop compact closure models [87, 88, 89]. Recently, the
physical notion of _boundedness_ of solutions, which is a fundamental concept
in reduced-order models of fluids [106], has been incorporated into the SINDy
modeling framework as a novel loss function. Other physical loss functions may
be added, such as adding the divergence of a flow field as a loss term to
promote solutions that are incompressible [107].
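As a sketch of this last idea (the grid, finite-difference stencil, and penalty weight are illustrative assumptions), a divergence penalty on a predicted two-dimensional velocity field can be computed with central differences and added to the loss:

```python
import torch

def divergence_penalty(u, v, dx=1.0, dy=1.0):
    """Mean-squared divergence of a predicted 2D velocity field.

    u, v: tensors of shape (batch, ny, nx). Central finite differences are
    used in the interior; adding the result to the loss promotes
    (approximately) divergence-free, i.e. incompressible, predictions.
    """
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / (2 * dx)   # (batch, ny, nx-2)
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / (2 * dy)   # (batch, ny-2, nx)
    div = du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]       # common interior points
    return torch.mean(div ** 2)

u, v = torch.randn(4, 32, 32), torch.randn(4, 32, 32)
data_term = torch.tensor(0.0)   # stand-in for the usual data-fit loss
total_loss = data_term + 0.1 * divergence_penalty(u, v)
print(float(total_loss))
```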
### 2.5 The optimization algorithm
Ultimately, machine learning models are trained using optimization algorithms
to find the parameters $\bm{\theta}$ that best fit the training data.
Typically, these optimization problems are both high-dimensional and non-
convex, leading to extremely challenging optimization landscapes with many
local minima. While there are powerful and generic techniques for convex
optimization problems [108, 109], there are few generic guarantees for
convergence or global optimality in non-convex optimization. Modern deep
neural networks have particularly high-dimensional parameters $\bm{\theta}$
and require large training data sets, which necessitate stochastic gradient
descent algorithms. In a sense, the optimization algorithm is the engine
powering machine learning, and as such, it is often abstracted from the
decision process. However, developing advanced optimization algorithms is the
focus of intense research efforts. It is also often necessary to explicitly
consider the optimization algorithm when designing a new architecture or
incorporating a novel loss term.
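For concreteness, here is a minimal stochastic gradient descent training loop in PyTorch on synthetic regression data; the data, model size, and hyperparameters are all arbitrary illustrative values.

```python
import torch
import torch.nn as nn

# Synthetic regression data and a small model.
torch.manual_seed(0)
X = torch.randn(512, 8)
y = X @ torch.randn(8, 1) + 0.01 * torch.randn(512, 1)
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for epoch in range(100):
    perm = torch.randperm(X.shape[0])      # reshuffle each epoch
    for i in range(0, X.shape[0], 64):     # minibatches of 64 samples
        idx = perm[i:i + 64]
        loss = nn.functional.mse_loss(model(X[idx]), y[idx])
        opt.zero_grad()
        loss.backward()                    # gradient of the loss w.r.t. theta
        opt.step()                         # one stochastic gradient step
print(float(loss))                         # final minibatch loss
```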
#### Embedding physics:
There are several ways that the optimization algorithm may be customized or
modified to incorporate prior physical knowledge. One approach is to
explicitly add constraints to the optimization, for example that certain
coefficients must be non-negative, or that other coefficients must satisfy a
specified algebraic relationship with each other. Depending on the given
machine learning architecture, it may be possible to enforce energy
conservation [100] or stability constraints [99] in this way. Another approach
involves employing custom optimization algorithms required to minimize the
physically motivated loss functions above, which are often non-convex. In this
way, the line between loss function and optimization algorithm is often
blurred, as they are typically tightly coupled. For example, promoting
sparsity with the $L_{0}$ norm is non-convex, and several relaxed optimization
formulations have been developed to approximately solve this problem. The
sparse relaxed regularized regression (SR3) optimization framework [68] has
been developed specifically to handle challenging non-convex loss terms that
arise in physically motivated problems.
#### Examples in fluid mechanics:
Loiseau [100] showed that it is possible to enforce energy conservation for
incompressible fluid flows directly by imposing skew-symmetry constraints on
the quadratic terms of a sparse generalized linear regression (i.e. SINDy)
model. These constraints manifest as equality constraints on the sparse
coefficients $\bm{\theta}$ of the SINDy model. Because the standard SINDy
optimization procedure is based on a sequentially thresholded least-squares
procedure, it is possible to enforce these equality constraints at every stage
of the regression, using the Karush–Kuhn–Tucker (KKT) conditions. The SR3
optimization package [68] was developed to generalize and extend these
constrained optimization problems to more challenging constraints, and to more
generic optimization problems. This is only one of many examples of custom
optimization algorithms being developed to train machine learning models with
novel loss functions or architectures.
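As a small sketch of how equality constraints can be enforced at the regression stage, the following solves a generic equality-constrained least-squares problem through its KKT system. This is a generic illustration, not the actual constrained-SINDy or SR3 implementation, and the constraint matrix below is a made-up example.

```python
import numpy as np

def constrained_lstsq(Theta, b, C, d):
    """Solve min ||Theta @ xi - b||^2 subject to C @ xi = d via the KKT
    system [[2 Theta^T Theta, C^T], [C, 0]] [xi; lam] = [2 Theta^T b; d]."""
    n, m = Theta.shape[1], C.shape[0]
    KKT = np.block([
        [2 * Theta.T @ Theta, C.T],
        [C, np.zeros((m, m))],
    ])
    rhs = np.concatenate([2 * Theta.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]   # discard the Lagrange multipliers

rng = np.random.default_rng(2)
Theta = rng.standard_normal((50, 4))
b = rng.standard_normal(50)
C = np.array([[1.0, 1.0, 0.0, 0.0]])   # e.g. require xi_0 + xi_1 = 0
d = np.array([0.0])
xi = constrained_lstsq(Theta, b, C, d)
print(xi, xi[0] + xi[1])               # constraint holds to machine precision
```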
## 3 Parting Thoughts
This brief paper has attempted to provide a high level overview of the various
stages of machine learning, how physics can be incorporated at each stage, and
how these techniques are being applied today in fluid mechanics. Machine
learning for physical systems requires careful consideration in each of these
steps, as every stage provides an opportunity to incorporate prior knowledge
about the physics. A working definition of physics is the part of a model that
generalizes, and this is one of the central goals of machine learning models
for physical systems. It is also important to note that machine learning is
fundamentally a collaborative effort, as it is nearly impossible to master
every stage of this process.
The nature of this topic is mercurial, as new innovations are being introduced
every day that improve our capabilities and challenge our previous
assumptions. Much of this work has deliberately oversimplified the process of
machine learning and the field of fluid mechanics. Machine learning is largely
concerned with fitting functions from data, and so it is important to pick the
right functions to fit. The inputs to the function are the variables and
parameters that we have access to or control over, and the outputs are
quantities of interest that we would like to accurately and efficiently
approximate in the future. It is a fruitful exercise to revisit classically
important problems where progress was limited by our ability to represent
complex functions. For example, Ling et al. [30] had great success revisiting
the classical Reynolds stress models of Pope [110] with powerful modern
techniques. More fundamentally, machine learning is about asking and answering
questions with data. We can’t forget why we are asking these questions in the
first place: because we are curious, and there is value in knowing the answer.
## Disclaimer
Any omission or oversight was the result of either ignorance, forgetfulness,
hastiness, or lack of imagination on my part. These notes are not meant to be
exhaustive, but rather to provide a few concrete examples from the literature
to guide researchers getting started in this field. This field is growing at
an incredible rate, and these examples provide a tiny glimpse into a much
larger effort. I have tried to sample from what I consider some of the most
relevant and accessible literature. However, a disproportionate number of
references are to work by my close collaborators, as this is the work I am
most familiar with. If I have missed any important references or connections,
or mis-characterized any works cited here, please let me know and I’ll try to
incorporate corrections in future versions of these notes.
## Acknowledgments
SLB acknowledges many valuable discussions and perspectives gained from
collaborators and coauthors Petros Koumoutsakos, J. Nathan Kutz, Jean-
Christophe Loiseau, and Bernd Noack.
## References
* [1] Kunihiko Taira, Steven L Brunton, Scott Dawson, Clarence W Rowley, Tim Colonius, Beverley J McKeon, Oliver T Schmidt, Stanislav Gordeyev, Vassilios Theofilis, and Lawrence S Ukeiley. Modal analysis of fluid flows: An overview. AIAA Journal, 55(12):4013–4041, 2017.
* [2] Jean Rabault, Miroslav Kuchta, Atle Jensen, Ulysse Réglade, and Nicolas Cerardi. Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control. Journal of fluid mechanics, 865:281–302, 2019.
* [3] Feng Ren, Hai-bao Hu, and Hui Tang. Active flow control using machine learning: A brief review. Journal of Hydrodynamics, 32(2):247–253, 2020.
* [4] Yu Zhou, Dewei Fan, Bingfu Zhang, Ruiying Li, and Bernd R Noack. Artificial intelligence control of a turbulent jet. Journal of Fluid Mechanics, 897, 2020.
* [5] Steven L. Brunton, Bernd R. Noack, and Petros Koumoutsakos. Machine learning for fluid mechanics. Annual Review of Fluid Mechanics, 52:477–508, 2020.
* [6] Mengnan Du, Ninghao Liu, and Xia Hu. Techniques for interpretable machine learning. Communications of the ACM, 63(1):68–77, 2019.
* [7] Christoph Molnar. Interpretable machine learning. Lulu. com, 2020.
* [8] Karthik Duraisamy, Gianluca Iaccarino, and Heng Xiao. Turbulence modeling in the age of data. Annual Reviews of Fluid Mechanics, 51:357–377, 2019.
* [9] MP Brenner, JD Eldredge, and JB Freund. Perspective on machine learning for advancing fluid mechanics. Physical Review Fluids, 4(10):100501, 2019.
* [10] Michael P Brenner and Petros Koumoutsakos. Machine learning and physical review fluids: An editorial perspective. Physical Review Fluids, 6(7):070001, 2021.
* [11] S. L. Brunton and J. N. Kutz. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press, 2019.
* [12] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
* [13] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
* [14] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354–359, 2017.
* [15] Eurika Kaiser, J Nathan Kutz, and Steven L Brunton. Discovering conservation laws from data for control. In 2018 IEEE Conference on Decision and Control (CDC), pages 6415–6421. IEEE, 2018.
* [16] Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.
* [17] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, August 2010.
* [18] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
* [19] Jaideep Pathak, Zhixin Lu, Brian R Hunt, Michelle Girvan, and Edward Ott. Using machine learning to replicate chaotic attractors and calculate lyapunov exponents from data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(12):121102, 2017.
* [20] Pantelis R Vlachas, Wonmin Byeon, Zhong Y Wan, Themistoklis P Sapsis, and Petros Koumoutsakos. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proc. R. Soc. A, 474(2213):20170844, 2018.
* [21] Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9(1):4950, 2018.
* [22] Christoph Wehmeyer and Frank Noé. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The Journal of Chemical Physics, 148(241703):1–9, 2018.
* [23] Andreas Mardt, Luca Pasquali, Hao Wu, and Frank Noé. VAMPnets: Deep learning of molecular kinetics. Nature Communications, 9(5), 2018.
* [24] Naoya Takeishi, Yoshinobu Kawahara, and Takehisa Yairi. Learning koopman invariant subspaces for dynamic mode decomposition. In Advances in Neural Information Processing Systems, pages 1130–1140, 2017.
* [25] Qianxiao Li, Felix Dietrich, Erik M Bollt, and Ioannis G Kevrekidis. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(10):103111, 2017.
* [26] Enoch Yeung, Soumya Kundu, and Nathan Hodas. Learning deep neural network representations for koopman operators of nonlinear dynamical systems. arXiv preprint arXiv:1708.06850, 2017.
* [27] Samuel E Otto and Clarence W Rowley. Linearly-recurrent autoencoder networks for learning dynamics. SIAM Journal on Applied Dynamical Systems, 18(1):558–593, 2019.
* [28] K. Champion, B. Lusch, J. Nathan Kutz, and Steven L. Brunton. Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45):22445–22451, 2019.
* [29] Shady E Ahmed, Suraj Pawar, Omer San, Adil Rasheed, Traian Iliescu, and Bernd R Noack. On closures for reduced order models $-$ a spectrum of first-principle to machine-learned avenues. arXiv preprint arXiv:2106.14954, 2021.
* [30] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 807:155–166, 2016.
* [31] J Nathan Kutz. Deep learning in fluid dynamics. Journal of Fluid Mechanics, 814:1–4, 2017.
* [32] Romit Maulik, Omer San, Adil Rasheed, and Prakash Vedula. Subgrid modelling for two-dimensional turbulence using neural networks. Journal of Fluid Mechanics, 858:122–144, 2019.
* [33] Guido Novati, Hugues Lascombes de Laroussilhe, and Petros Koumoutsakos. Automating turbulence modelling by multi-agent reinforcement learning. Nature Machine Intelligence, 3(1):87–96, 2021.
* [34] Jian-Xun Wang, Jin-Long Wu, and Heng Xiao. Physics-informed machine learning approach for reconstructing reynolds stress modeling discrepancies based on dns data. Physical Review Fluids, 2(3):034603, 2017.
* [35] Linyang Zhu, Weiwei Zhang, Jiaqing Kou, and Yilang Liu. Machine learning methods for turbulence modeling in subsonic flows around airfoils. Physics of Fluids, 31(1):015105, 2019.
* [36] Linyang Zhu, Weiwei Zhang, Xuxiang Sun, Yilang Liu, and Xianxu Yuan. Turbulence closure for high reynolds number airfoil flows by deep neural networks. Aerospace Science and Technology, 110:106452, 2021.
* [37] Yohai Bar-Sinai, Stephan Hoyer, Jason Hickey, and Michael P Brenner. Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences, 116(31):15344–15349, 2019.
* [38] Stephan Thaler, Ludger Paehler, and Nikolaus A Adams. Sparse identification of truncation errors. Journal of Computational Physics, 397:108851, 2019.
* [39] Ben Stevens and Tim Colonius. Enhancement of shock-capturing methods via machine learning. Theoretical and Computational Fluid Dynamics, 34:483–496, 2020.
* [40] Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan Hoyer. Machine learning accelerated computational fluid dynamics. arXiv preprint arXiv:2102.01010, 2021.
* [41] N Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L Brunton, Michael W Mahoney, and J Nathan Kutz. Shallow neural networks for fluid flow reconstruction with limited sensors. Proceedings of the Royal Society A, 476(2238):20200097, 2020.
* [42] Kai Fukami, Koji Fukagata, and Kunihiko Taira. Super-resolution reconstruction of turbulent flows with machine learning. Journal of Fluid Mechanics, 870:106–120, 2019.
* [43] Kunihiko Taira, Maziar S Hemati, Steven L Brunton, Yiyang Sun, Karthik Duraisamy, Shervin Bagheri, Scott Dawson, and Chi-An Yeh. Modal analysis of fluid flows: Applications and outlook. AIAA Journal, 58(3):998–1022, 2020.
* [44] Isabel Scherl, Benjamin Strom, Jessica K Shang, Owen Williams, Brian L Polagye, and Steven L Brunton. Robust principal component analysis for particle image velocimetry. Physical Review Fluids, 5(054401), 2020.
* [45] Aditya G Nair and Kunihiko Taira. Network-theoretic approach to sparsified discrete vortex dynamics. Journal of Fluid Mechanics, 768:549–571, 2015.
* [46] E. Kaiser, B. R. Noack, L. Cordier, A. Spohn, M. Segond, M. Abel, G. Daviller, J. Osth, S. Krajnovic, and R. K. Niven. Cluster-based reduced-order modelling of a mixing layer. J. Fluid Mech., 754:365–414, 2014.
* [47] Daniel Fernex, Bernd R Noack, and Richard Semaan. Cluster-based network modeling—from snapshots to complex dynamical systems. Science Advances, 7(25):eabf5006, 2021.
* [48] Guy Y Cornejo Maceda, Yiqing Li, François Lusseyran, Marek Morzyński, and Bernd R Noack. Stabilization of the fluidic pinball with gradient-enriched machine learning control. Journal of Fluid Mechanics, 917, 2021.
* [49] Dixia Fan, Liu Yang, Zhicheng Wang, Michael S Triantafyllou, and George Em Karniadakis. Reinforcement learning for bluff body active flow control in experiments and simulations. Proceedings of the National Academy of Sciences, 117(42):26091–26098, 2020.
* [50] Siddhartha Verma, Guido Novati, and Petros Koumoutsakos. Efficient collective swimming by harnessing vortices through deep reinforcement learning. Proceedings of the National Academy of Sciences, 115(23):5849–5854, 2018.
* [51] Dixia Fan, Gurvan Jodin, TR Consi, L Bonfiglio, Y Ma, LR Keyes, George E Karniadakis, and Michael S Triantafyllou. A robotic intelligent towing tank for learning complex fluid-structure dynamics. Science Robotics, 4(36), 2019.
* [52] Jiaqing Kou and Weiwei Zhang. Data-driven modeling for unsteady aerodynamics and aeroelasticity. Progress in Aerospace Sciences, 125:100725, 2021.
* [53] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
* [54] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [55] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
* [56] Xuhui Meng and George Em Karniadakis. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse pde problems. Journal of Computational Physics, 401:109020, 2020.
* [57] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
* [58] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
* [59] Xindong Wu, Vipin Kumar, J Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J McLachlan, Angus Ng, Bing Liu, S Yu Philip, et al. Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1–37, 2008.
* [60] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
* [61] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002.
* [62] Antoine Blanchard and Themistoklis Sapsis. Bayesian optimization with output-weighted optimal sampling. Journal of Computational Physics, 425:109901, 2021.
* [63] Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 104(24):9943–9948, 2007.
* [64] Miles D Cranmer, Rui Xu, Peter Battaglia, and Shirley Ho. Learning symbolic physics with graph networks. arXiv preprint arXiv:1909.05862, 2019.
* [65] Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho. Discovering symbolic models from deep learning with inductive biases. arXiv preprint arXiv:2006.11287, 2020.
* [66] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems. SIAM, 2016.
* [67] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
* [68] Peng Zheng, Travis Askham, Steven L Brunton, J Nathan Kutz, and Aleksandr Y Aravkin. Sparse relaxed regularized regression: SR3. IEEE Access, 7(1):1404–1423, 2019.
* [69] Jaideep Pathak, Brian Hunt, Michelle Girvan, Zhixin Lu, and Edward Ott. Model-free prediction of large spatiotemporally chaotic systems from data: a reservoir computing approach. Physical review letters, 120(2):024102, 2018.
* [70] Kai Li, Jiaqing Kou, and Weiwei Zhang. Deep neural network for unsteady aerodynamic and aeroelastic modeling across multiple mach numbers. Nonlinear Dynamics, 96(3):2157–2177, 2019.
* [71] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.
* [72] Benjamin Kurt Miller, Mario Geiger, Tess E Smidt, and Frank Noé. Relevance of rotationally equivariant convolutions for predicting molecular properties. arXiv preprint arXiv:2008.08461, 2020.
* [73] Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization. arXiv preprint arXiv:2002.03061, 2020.
* [74] Simon Batzner, Tess E Smidt, Lixin Sun, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. Se (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. arXiv preprint arXiv:2101.03164, 2021.
* [75] Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. Advances in Neural Information Processing Systems, 32:15379–15389, 2019.
* [76] Marc Finzi, Ke Alexander Wang, and Andrew Gordon Wilson. Simplifying hamiltonian and lagrangian neural networks via explicit constraints. Advances in Neural Information Processing Systems, 33, 2020.
* [77] Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
* [78] Yaofeng Desmond Zhong and Naomi Leonard. Unsupervised learning of lagrangian dynamics from images for prediction and control. Advances in Neural Information Processing Systems, 33, 2020.
* [79] M Raissi, P Perdikaris, and GE Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
* [80] Guofei Pang, Lu Lu, and George Em Karniadakis. fpinns: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4):A2603–A2626, 2019.
* [81] Liu Yang, Dongkun Zhang, and George Em Karniadakis. Physics-informed generative adversarial networks for stochastic differential equations. SIAM Journal on Scientific Computing, 42(1):A292–A317, 2020.
* [82] Zhiping Mao, Ameya D Jagtap, and George Em Karniadakis. Physics-informed neural networks for high-speed flows. Computer Methods in Applied Mechanics and Engineering, 360:112789, 2020.
* [83] George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021.
* [84] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
* [85] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459–8468. PMLR, 2020.
* [86] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021.
* [87] S Beetham and J Capecelatro. Formulating turbulence closures using sparse regression with embedded form invariance. Physical Review Fluids, 5(8):084611, 2020.
* [88] Sarah Beetham, Rodney O Fox, and Jesse Capecelatro. Sparse identification of multiphase turbulence closures for coupled fluid–particle flows. Journal of Fluid Mechanics, 914, 2021.
* [89] Martin Schmelzer, Richard P Dwight, and Paola Cinnella. Discovery of algebraic reynolds-stress models using sparse symbolic regression. Flow, Turbulence and Combustion, 104(2):579–603, 2020.
* [90] Jiaqing Kou and Weiwei Zhang. A hybrid reduced-order framework for complex aeroelastic simulations. Aerospace science and technology, 84:880–894, 2019.
* [91] Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026–1030, 2020.
* [92] Xuan Zhao, Lin Du, Xuhao Peng, Zichen Deng, and Weiwei Zhang. Research on refined reconstruction method of airfoil pressure based on compressed sensing. Theoretical and Applied Mechanics Letters, page 100223, 2021.
* [93] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
* [94] Hugo Frezat, Guillaume Balarac, Julien Le Sommer, Ronan Fablet, and Redouane Lguensat. Physical invariance in neural networks for subgrid-scale scalar flux modeling. Physical Review Fluids, 6(2):024607, 2021.
* [95] Kookjin Lee and Kevin T Carlberg. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Journal of Computational Physics, 404:108973, 2020.
* [96] B. R. Noack, K. Afanasiev, M. Morzynski, G. Tadmor, and F. Thiele. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. Journal of Fluid Mechanics, 497:335–363, 2003.
* [97] Peter Benner, Serkan Gugercin, and Karen Willcox. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM review, 57(4):483–531, 2015.
* [98] Clarence W Rowley and Scott TM Dawson. Model reduction for flow analysis and control. Annual Review of Fluid Mechanics, 49:387–417, 2017.
* [99] Alan A Kaptanoglu, Jared L Callaham, Christopher J Hansen, Aleksandr Aravkin, and Steven L Brunton. Promoting global stability in data-driven models of quadratic nonlinear dynamics. arXiv preprint arXiv:2105.01843, 2021.
* [100] J.-C. Loiseau and S. L. Brunton. Constrained sparse Galerkin regression. Journal of Fluid Mechanics, 838:42–67, 2018.
* [101] N Benjamin Erichson, Michael Muehlebach, and Michael W Mahoney. Physics-informed autoencoders for lyapunov-stable fluid flow prediction. arXiv preprint arXiv:1905.10866, 2019.
* [102] J.-C. Loiseau, B. R. Noack, and S. L. Brunton. Sparse reduced-order modeling: sensor-based dynamics to full-state estimation. Journal of Fluid Mechanics, 844:459–490, 2018.
* [103] Jean-Christophe Loiseau. Data-driven modeling of the chaotic thermal convection in an annular thermosyphon. Theoretical and Computational Fluid Dynamics, 34(4):339–365, 2020.
* [104] Nan Deng, Bernd R Noack, Marek Morzyński, and Luc R Pastur. Low-order model for successive bifurcations of the fluidic pinball. Journal of fluid mechanics, 884, 2020.
* [105] Nan Deng, Bernd R Noack, Marek Morzyński, and Luc R Pastur. Galerkin force model for transient and post-transient dynamics of the fluidic pinball. Journal of Fluid Mechanics, 918, 2021.
* [106] M. Schlegel and B. R. Noack. On long-term boundedness of galerkin models. Journal of Fluid Mechanics, 765:325–352, 2015.
* [107] Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1457–1466, 2020.
* [108] Michael Grant, Stephen Boyd, and Yinyu Ye. Cvx: Matlab software for disciplined convex programming, 2008.
* [109] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2009.
* [110] S. B. Pope. A more general effective-viscosity hypothesis. Journal of Fluid Mechanics, 72(2):331–340, 1975.
# Maximal pronilfactors and a topological Wiener-Wintner theorem
Yonatan Gutman & Zhengxing Lian
Dedicated to Benjamin Weiss with great respect.
###### Abstract.
For strictly ergodic systems, we introduce the class of CF-Nil($k$) systems:
systems for which the maximal measurable and maximal topological $k$-step
pronilfactors coincide as measure-preserving systems. Weiss’ theorem implies
that such systems are abundant in a precise sense. We show that the CF-
Nil$(k)$ systems are precisely the class of minimal systems for which the
$k$-step nilsequence version of the Wiener-Wintner average converges
everywhere. As part of the proof we establish that pronilsystems are
coalescent both in the measurable and topological categories. In addition, we
characterize a CF-Nil$(k)$ system in terms of its $(k+1)$-th dynamical
cubespace. In particular, for $k=1$, this provides for strictly ergodic
systems a new condition equivalent to the property that every measurable
eigenfunction has a continuous version.
The authors were partially supported by the National Science Centre (Poland) grant 2016/22/E/ST1/00448. Y.G. was partially supported by the National Science Centre (Poland) grant 2020/39/B/ST1/02329. Z.L. was partially supported by the Xiamen Youth Innovation Foundation No. 3502Z20206037, the presidential research fund of Xiamen University No. 20720210034 and NNSF of China No. 1210010472.

Keywords: coalescence; cubespace; nilsequence; maximal pronilfactor; strictly ergodic; topological model; universality; topological Wiener-Wintner theorem.

Mathematics Subject Classification (2020): 37A05, 37B05.
###### Contents
1. 1 Introduction.
2. 2 Preliminaries.
1. 2.1 Dynamical systems.
2. 2.2 Topological models.
3. 2.3 Conditional expectation.
4. 2.4 Pronilsystems and nilsequences.
5. 2.5 Host-Kra structure theory machinery.
6. 2.6 Maximal measurable pronilfactors.
7. 2.7 Maximal topological pronilfactors.
8. 2.8 CF-Nil$(k)$ systems.
9. 2.9 A CF-Nil$(k)$ topological model.
3. 3 Coalescence and universality for maximal pronilfactors.
1. 3.1 Coalescence
2. 3.2 Universality
4. 4 Cubespace characterization of CF-Nil($k$).
5. 5 A topological Wiener-Wintner theorem.
## 1. Introduction.
In recent years there has been an increase in interest in pronilfactors both
for measure-preserving systems (m.p.s.) and topological dynamical systems
(t.d.s.). Pronilfactors of a given system are either measurable or topological
(depending on the category) factors given by an inverse limit of nilsystems. A
t.d.s. (m.p.s.) is called a topological (measurable) $d$-step pronilsystem if
it is a topological (measurable) inverse limit of nilsystems of degree at most
$d$. It is a classical fact that every (measurable) ergodic $d$-step
pronilsystem is isomorphic as m.p.s. to a (topological) minimal $d$-step
pronilsystem. In the theory of measure-preserving systems
$(X,\mathcal{X},\mu,T)$ maximal measurable pronilfactors appear in connection
with the $L^{2}$-convergence of the nonconventional ergodic averages
(1) $\frac{1}{N}\sum_{n=1}^{N}f_{1}(T^{n}x)\cdots f_{k}(T^{kn}x)$
for $f_{1},\ldots,f_{k}\in L^{\infty}(X,\mu)$ ([HK05, Zie07]). In the theory
of topological dynamical systems maximal topological pronilfactors appear in
connection with the higher order regionally proximal relations ([HKM10, SY12,
GGY18]).
When a system possesses both measurable and topological structure, it seems
worthwhile to investigate pronilfactors both from a measurable and topological
point of view. A natural meeting ground is strictly ergodic systems: minimal
topological dynamical systems $(X,T)$ possessing a unique invariant measure
$\mu$. For $k\geq 0$ let us denote by
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ respectively $(W_{k}(X),T)$ the
maximal $k$-step measurable respectively topological pronilfactor of $(X,T)$
(both these objects exist and are unique in a precise sense; see Subsection
3.2). Clearly $(W_{k}(X),T)$ has a unique invariant measure $\omega_{k}$.
We thus pose the question when is $(W_{k}(X),\mathcal{W}_{k}(X),\omega_{k},T)$
isomorphic to $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s.? We call a
t.d.s. which is strictly ergodic and for which
$(W_{k}(X),\mathcal{W}_{k}(X),\omega_{k},T)$ is isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s., a CF-Nil$(k)$
system333This terminology is explained in Subsection 2.8.. Note that
$(W_{k}(X),\mathcal{W}_{k}(X),\omega_{k},T)$ is always a measurable factor of
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$. At first glance it may seem that
CF-Nil$(k)$ systems are rare however a theorem by Benjamin Weiss regarding
topological models for measurable extensions implies that every ergodic m.p.s.
is measurably isomorphic to a CF-Nil$(k)$ system444See Subsection 2.9..
We give two characterizations of CF-Nil$(k)$ systems. The first
characterization is related to the Wiener-Wintner theorem while the second
characterization is related to $k$-cube uniquely ergodic systems - a class of
topological dynamical systems introduced in [GL19].
The Wiener-Wintner theorem ([WW41]) states that for an ergodic system $(X,\mathcal{X},\mu,T)$ and any $f\in L^{\infty}(\mu)$, for $\mu$-a.e. $x\in X$ and every $\lambda\in\mathbb{S}^{1}$, the following limit exists:
(2) $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}\lambda^{n}f(T^{n}x)$
Denote by $M_{T}\subset\mathbb{S}^{1}$ the set of measurable eigenvalues of $(X,\mathcal{X},\mu,T)$ (measurable and topological eigenvalues are defined in Subsection 2.1). Let $P_{\lambda}f$ be the projection of $f$ to the eigenspace corresponding to $\lambda$ (in particular for $\lambda\notin M_{T}$, $P_{\lambda}f\equiv 0$). For fixed $\lambda\in\mathbb{S}^{1}$, one can show that (2) converges a.s. to $P_{\lambda}f$.
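As a simple sanity check, take $\lambda=1$: then (2) is the usual Birkhoff average and, by the pointwise ergodic theorem, for $\mu$-a.e. $x\in X$,
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}f(T^{n}x)=\int_{X}fd\mu=P_{1}f,$
since for an ergodic system the eigenspace corresponding to $\lambda=1$ consists of the constants.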
In [Les96] Lesigne proved that a.s. convergence in (2) still holds when the term $\lambda^{n}$ is replaced by a continuous function of a real-valued polynomial $P(n)$, $P\in\mathbb{R}[t]$. In [Fra06] Frantzikinakis established a uniform version of this theorem (in the context of the Wiener-Wintner theorem, uniform versions are a.s. convergence results involving a supremum over weights belonging to a given class; the first result of this type was obtained by Bourgain in [Bou90]). In [HK09], Host and Kra showed that a.s. convergence in (2) still holds when the term $\lambda^{n}$ is replaced by a nilsequence. In [EZK13] Eisner and Zorin-Kranich established a uniform version of this theorem.
For topological dynamical systems one may investigate the question of everywhere convergence in the Wiener-Wintner theorem. In [Rob94], Robinson proved that for a uniquely ergodic system $(X,\mu,T)$ and any $f\in C(X)$, if every measurable eigenfunction of $(X,\mathcal{X},\mu,T)$ has a continuous version then the limit (2) converges everywhere. He noted however that if $P_{\lambda}f\neq 0$ for some $\lambda\in M_{T}$, then the convergence of (2) is not uniform in $(x,\lambda)$, since the limit function $P_{\lambda}f(x)$ is not continuous on $X\times\mathbb{S}^{1}$ (note $M_{T}$ is countable). Moreover Robinson constructed a strictly ergodic system $(X,T)$ such that (2) does not converge for some continuous function $f\in C(X)$, some $\lambda\in\mathbb{S}^{1}$ and some $x\in X$. Other topological versions of the Wiener-Wintner theorem may be found in [Ass92, Fan18] (one should also note that topological Wiener-Wintner theorems have been investigated in the generality of operator semigroups by Schreiber [Sch14] and by Bartoszek and Śpiewak [BŚ17]).
The first main result of this article is the following theorem:
###### Theorem A.
Let $(X,T)$ be a minimal system. Then for $k\geq 0$ the following are
equivalent:
* (I).
$(X,T)$ is a CF-Nil$(k)$ system.
* (II).
For any $k$-step nilsequence $\\{a(n)\\}_{n\in\mathbb{Z}}$, any continuous
function $f\in C(X)$ and any $x\in X$,
(3) $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}a(n)f(T^{n}x)$
exists.
We remark that the direction (I)$\Rightarrow$(II) of Theorem A follows from [HK09], whereas the case $k=1$ of Theorem A follows from [Rob94, Theorem 1.1]. As part of the proof of Theorem A we establish the following fundamental property of pronilsystems:
###### Theorem B.
Let $(Y,\nu,T)$ be a minimal (uniquely ergodic) $k$-step pronilsystem. Then
* (I).
$(Y,\nu,T)$ is measurably coalescent, i.e. if
$\pi:(Y,\nu,T)\rightarrow(Y,\nu,T)$ is a measurable factor map, then $\pi$ is
a measurable isomorphism.
and
* (II).
$(Y,T)$ is topologically coalescent, i.e. if $\Phi:(Y,T)\rightarrow(Y,T)$ is a
topological factor map, then $\Phi$ is a topological isomorphism.
As part of the theory of higher order regionally proximal relations, Host,
Kra and Maass introduced in [HKM10] the dynamical cubespaces
$\operatorname{C}_{\operatorname{T}}^{n}(X)\subset X^{2^{n}}$,
$n\in\mathbb{N}:=\\{1,2,\ldots\\}$. These compact sets enjoy a natural action
by the Host-Kra cube groups $\mathcal{HK}^{n}(T)$. According to the
terminology introduced in [GL19], a t.d.s. $(X,T)$ is called $k$-cube uniquely
ergodic if $(\operatorname{C}_{\operatorname{T}}^{k}(X),\mathcal{HK}^{k}(T))$
is uniquely ergodic. The third main result of this article is the following
theorem:
###### Theorem C.
Let $(X,T)$ be a minimal t.d.s. Then the following are equivalent for any
$k\geq 0$:
* (I).
$(X,T)$ is a CF-Nil$(k)$ system.
* (II).
$(X,T)$ is $(k+1)$-cube uniquely ergodic.
We remark that the direction (I) $\Rightarrow$ (II) follows from [HSY17].
In the context of various classes of strictly ergodic systems, several authors
have investigated the question of whether every measurable eigenfunction has a
continuous version. Famously in [Hos86] (see also [Que10, Page 170]), Host
established that this is the case for admissible substitution dynamical systems. In
[BDM10, Theorem 27] an affirmative answer was given for strictly ergodic
Toeplitz type systems of finite rank. In [DFM19], the continuous and
measurable eigenvalues of minimal Cantor systems were studied.
It is easy to see that for strictly ergodic systems $(X,T)$ the condition that
every measurable eigenfunction has a continuous version is equivalent to the
fact that $(X,T)$ is CF-Nil($1$). Thus Theorem C provides for strictly ergodic
systems a new condition equivalent to the property that every measurable
eigenfunction has a continuous version. Namely this holds iff
$(\operatorname{C}_{\operatorname{T}}^{2}(X),\mathcal{HK}^{2}(T))$ is uniquely
ergodic. As this last condition seems quite manageable, one wonders whether this new equivalence may turn out to be useful in future applications.
Structure of the paper. In Subsections 2.1–2.3 we review some definitions and classical facts; in Subsections 2.4–2.8 we introduce the topological and measurable maximal pronilfactors and define the CF-Nil$(k)$ systems; in Subsection 2.9 we use Weiss's Theorem to show that CF-Nil$(k)$ systems are abundant; in Section 3 we prove Theorem B and then establish universality for maximal pronilfactors; in Section 4 we prove Theorem C; in Section 5 we prove Theorem A.
Acknowledgements. We are grateful to Bernard Host, Mariusz Lemańczyk and an anonymous referee for helpful comments.
## 2\. Preliminaries.
### 2.1. Dynamical systems.
Throughout this article we assume every topological space to be metrizable. A
$\mathbb{Z}$-topological dynamical system (t.d.s.) is a pair $(X,T)$, where
$X$ is a compact space and $T$ is a homeomorphism on $X$. Denote by $C(X)$ the
set of real-valued continuous functions on $X$. The orbit $\mathcal{O}(x)$ of
$x\in X$ is the set $\mathcal{O}(x)=\\{T^{n}x:n\in\mathbb{Z}\\}$. Its closure
is denoted by $\operatorname{\overline{\mathcal{O}}}(x)$. A t.d.s. is minimal if $\operatorname{\overline{\mathcal{O}}}(x)=X$ for all $x\in X$. A t.d.s. $(X,T)$ is distal if for a compatible metric $d_{X}$ of $X$, for any $x\neq y\in X$, $\inf_{n\in\mathbb{Z}}d_{X}(T^{n}x,T^{n}y)>0$. We say $\pi:(Y,S)\rightarrow(X,T)$ is a topological factor map if $\pi$ is a continuous and surjective map such that for any $y\in Y$, $\pi(Sy)=T\pi(y)$. Given such a map, $(X,T)$ is called a topological factor of $(Y,S)$ and $(Y,S)$ is said to factor continuously on $(X,T)$. If in addition $\pi$ is
injective then it is called a topological isomorphism and $(Y,S)$ and $(X,T)$
are said to be isomorphic as t.d.s. A factor map $\pi:(Y,S)\rightarrow(X,T)$
is called a topological group extension by a compact group $K$ if there exists
a continuous action $\alpha:K\times Y\rightarrow Y$ such that the actions $S$
and $K$ commute and for all $x,y\in Y$, $\pi(x)=\pi(y)$ iff there exists a
unique $k\in K$ such that $kx=y$. A (topological) eigenvalue of a t.d.s.
$(X,T)$ is a complex number $\lambda\in\mathbb{S}^{1}$ such that an equation
of the form $f(Tx)=\lambda f(x)$ holds for some $f\in C(X,\mathbb{C})$ and all
$x\in X$. The function $f$ is referred to as a continuous or topological
eigenfunction.
Let $\\{(X_{m},T_{m})\\}_{m\in\mathbb{N}}$ be a sequence of t.d.s. and for any $n\geq m$ let $\pi_{m,n}:(X_{n},T_{n})\rightarrow(X_{m},T_{m})$ be factor maps such that $\pi_{i,l}=\pi_{i,j}\circ\pi_{j,l}\text{ for all }1\leq i\leq j\leq l.$
The inverse limit of $\\{(X_{m},T_{m})\\}_{m\in\mathbb{N}}$ is defined to be the system $(X,T)$, where
$X=\\{(x_{m})_{m\in\mathbb{N}}\in\prod_{m\in\mathbb{N}}X_{m}:\ \pi_{m,m+1}(x_{m+1})=x_{m}\text{ for }m\geq 1\\}$
equipped with the product topology and
$T(x_{m})_{m\in\mathbb{N}}\triangleq(T_{m}x_{m})_{m\in\mathbb{N}}$. We write
$(X,T)=\underleftarrow{\lim}(X_{m},T_{m})$.
A measure preserving probability system (m.p.s.) is a quadruple
$(X,\mathcal{X},\mu,T)$, where $(X,\mathcal{X},\mu)$ is a standard Borel
probability space (in particular $X$ is a Polish space and $\mathcal{X}$ is
its Borel $\sigma$-algebra) and $T$ is an invertible Borel measure-preserving
map ($\mu(TA)=\mu(A)$ for all $A\in\mathcal{X}$). An m.p.s.
$(X,\mathcal{X},\mu,T)$ is ergodic if for every set $A\in\mathcal{X}$ such
that $T(A)=A$, one has $\mu(A)=0$ or $1$. A measurable factor map is a Borel map $\pi:(X,\mathcal{X},\mu,T)\rightarrow(Y,\mathcal{Y},\nu,S)$ which is induced by a $T$-invariant sub-$\sigma$-algebra of $\mathcal{X}$ ([Gla03, Chapter 2.2]). Given such a map, $(Y,\mathcal{Y},\nu,S)$ is called a
measurable factor of $(X,\mathcal{X},\mu,T)$. If $\pi$ is in addition
invertible on a set of full measure then $\pi$ is called a measurable
isomorphism and $(X,\mathcal{X},\mu,T)$ and $(Y,\mathcal{Y},\nu,S)$ are said
to be isomorphic as m.p.s. Let $(Y,\mathcal{Y},\nu,S)$ be an m.p.s. and $A$ a
compact group with Borel $\sigma$-algebra $\mathcal{A}$ and Haar measure $m$.
A skew-product $(Y\times A,\mathcal{Y}\otimes\mathcal{A},\nu\times m,T)$ is
given by the action $T(y,u)=(Sy,\beta(y)u)$, where $\beta:Y\rightarrow A$ is a
Borel map, the so-called cocycle of the skew-product. The projection $(Y\times
A,\mathcal{Y}\otimes\mathcal{A},\nu\times
m,T)\rightarrow(Y,\mathcal{Y},\nu,S)$ given by $(y,a)\mapsto y$ is called a
measurable group extension (cf. [Gla03, Theorem 3.29]).
A (measurable) eigenvalue of a m.p.s. $(X,\mathcal{X},\mu,T)$ is a complex
number $\lambda\in\mathbb{S}^{1}$ such that an equation of the form
$f(Tx)=\lambda f(x)$ holds for $\mu$-a.e. $x\in X$ for some Borel function
$f:X\rightarrow\mathbb{C}$. The function $f$ is referred to as a measurable
eigenfunction.
Denote by $\operatorname{P_{T}}(X)$ the set of $T$-invariant Borel probability
measures of $X$. A t.d.s. $(X,T)$ is called uniquely ergodic if
$|\operatorname{P_{T}}(X)|=1$. If in addition it is minimal then it is called
strictly ergodic. For a strictly ergodic system $(X,T)$ with a (unique)
invariant measure $\mu$, we will use the notation $(X,\mu,T)$. When considered
as a m.p.s. it is with respect to its Borel $\sigma$-algebra.
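A basic example to keep in mind: an irrational rotation $R_{\alpha}:x\mapsto x+\alpha\pmod{1}$ on $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ is strictly ergodic. Indeed it is minimal, and any $R_{\alpha}$-invariant Borel probability measure is invariant under translation by every element of the dense subgroup $\\{n\alpha\bmod 1:n\in\mathbb{Z}\\}$, hence under all translations, and therefore equals Haar (Lebesgue) measure $m$; following the convention above we write this system as $(\mathbb{T},m,R_{\alpha})$.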
Occasionally in this article we will consider more general group actions than
$\mathbb{Z}$-actions. Thus a $G$-topological dynamical system (t.d.s.) is a
pair $(G,X)$ consisting of a (metrizable) topological group $G$ acting on a
(metrizable) compact space $X$. For $g\in G$ and $x\in X$ we denote the action
both by $gx$ and $g.x$. We will need the following proposition:
###### Proposition 2.1.
Let $G$ be an amenable group. Let $(G,X)$ be uniquely ergodic and let
$(G,X)\rightarrow(G,Y)$ be a topological factor map. Then $(G,Y)$ is uniquely
ergodic.
###### Proof.
See proof of Proposition 8.1 of [AKL14]. ∎
### 2.2. Topological models.
###### Definition 2.2.
Let $(X,\mathcal{X},\mu,T)$ be a m.p.s. We say that a t.d.s. $(\hat{X},\hat{T})$ is a topological model for $(X,\mathcal{X},\mu,T)$ w.r.t. a $\hat{T}$-invariant probability measure $\hat{\mu}$ on $\hat{\mathcal{X}}$, the Borel $\sigma$-algebra of $\hat{X}$, if the system $(X,\mathcal{X},\mu,T)$ is isomorphic to $(\hat{X},\hat{\mathcal{X}},\hat{\mu},\hat{T})$ as m.p.s., that is, there exist a $T$-invariant Borel subset $C\subset X$ and a $\hat{T}$-invariant Borel subset $\hat{C}\subset\hat{X}$, both of full measure, and a (bi)measurable and equivariant measure preserving bijective Borel map $p:C\rightarrow\hat{C}$.
Notice that oftentimes in this article $(\hat{X},\hat{T})$ will be uniquely ergodic, so that $\hat{\mu}$ will be the unique $\hat{T}$-invariant probability measure of $\hat{X}$.
###### Definition 2.3.
Let $(X,\mathcal{X},\mu,T)$, $(Y,\mathcal{Y},\nu,S)$ be m.p.s. Let
$(\hat{X},\hat{T})$, $(\hat{Y},\hat{S})$ be t.d.s. which are topological
models of $(X,\mathcal{X},\mu,T)$ and $(Y,\mathcal{Y},\nu,S)$ w.r.t. measures
$\hat{\mu}$ and $\hat{\nu}$ as witnessed by maps $\phi$ and $\psi$
respectively. We say that
$\hat{\pi}:(\hat{X},\hat{T})\rightarrow(\hat{Y},\hat{S})$ is a topological
model for a factor map
$\pi:(X,\mathcal{X},\mu,T)\rightarrow(Y,\mathcal{Y},\nu,S)$ if $\hat{\pi}$ is
a topological factor and the following diagram
$\begin{CD}X@>{\phi}>{}>\hat{X}\\\ @V{\pi}V{}V@V{}V{\hat{\pi}}V\\\
Y@>{\psi}>{}>\hat{Y}\end{CD}$
is commutative, i.e. $\hat{\pi}\circ\phi=\psi\circ\pi$.
### 2.3. Conditional expectation.
Let $(X,\mathcal{X},\mu)$ be a probability space and let $\mathcal{B}$ be a
sub-$\sigma$-algebra of $\mathcal{X}$. For $f\in L^{1}(\mu)$, the conditional
expectation of $f$ w.r.t. $\mathcal{B}$ is the unique function
$\mathbb{E}(f|\mathcal{B})\in L^{1}(X,\mathcal{B},\mu)$ satisfying
(4) $\int_{B}fd\mu=\int_{B}\mathbb{E}(f|\mathcal{B})d\mu$
for every $B\in\mathcal{B}$. For $f\in L^{1}(\mu)$ and $g\in
L^{\infty}(X,\mathcal{B},\mu)$, it holds (see [HK18, Chapter 2, Section 2.4]):
(5) $\int_{X}fgd\mu=\int_{X}\mathbb{E}(f|\mathcal{B})gd\mu.$
Let $(X,\mathcal{X},\mu)$ and $(Y,\mathcal{Y},\nu)$ be probability spaces and
let $\pi:X\rightarrow Y$ be a measurable map such that $\pi_{*}\mu=\nu$.
Denote by $\mathbb{E}(f|Y)\in L^{1}(Y,\nu)$ the function such that
$\mathbb{E}(f|Y)=\mathbb{E}(f|\pi^{-1}(\mathcal{Y}))\circ\pi^{-1}$. Note this
is well-defined. Thus the difference between $\mathbb{E}(f|Y)$ and
$\mathbb{E}(f|\pi^{-1}(\mathcal{Y}))$ is that the first function is considered
as a function on $Y$ and the second as a function on $X$.
### 2.4. Pronilsystems and nilsequences.
###### Definition 2.4.
A (real) Lie group is a group that is also a finite dimensional real smooth
manifold such that the group operations of multiplication and inversion are
smooth. Let $G$ be a Lie group. Let $G_{1}=G$ and $G_{k}=[G_{k-1},G]$ for $k\geq 2$, where $[G,H]$ denotes the subgroup generated by $\\{[g,h]:g\in G,h\in H\\}$ and $[g,h]=g^{-1}h^{-1}gh$.
If there exists some $d\geq 1$ such that $G_{d+1}=\\{e\\}$, $G$ is called a
$d$-step nilpotent Lie group. We say that a discrete subgroup $\Gamma$ of a
Lie group $G$ is cocompact if $G/\Gamma$, endowed with the quotient topology,
is compact. We say that the quotient $X=G/\Gamma$ is a $d$-step nilmanifold if $G$
is a $d$-step nilpotent Lie group and $\Gamma$ is a discrete, cocompact
subgroup. The nilmanifold $X$ admits a natural action by $G$ through
translations $g.a\Gamma=ga\Gamma$, $g,a\in G$. The Haar measure of $X$ is the
unique Borel probability measure on $X$ which is invariant under this action.
A nilsystem of degree at most $d$, $(X,T)$, is given by a $d$-step
nilmanifold $X=G/\Gamma$ and $T\in G$ with action $T.a\Gamma=Ta\Gamma$. When a
nilsystem is considered as a m.p.s. it is always w.r.t. its Haar measure.
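To make Definition 2.4 concrete, we record the classical Heisenberg example. Let
$G=\left\\{\begin{pmatrix}1&a&c\\\ 0&1&b\\\ 0&0&1\end{pmatrix}:a,b,c\in\mathbb{R}\right\\},\qquad\Gamma=\left\\{\begin{pmatrix}1&a&c\\\ 0&1&b\\\ 0&0&1\end{pmatrix}:a,b,c\in\mathbb{Z}\right\\}.$
Then $[G,G]$ consists of the matrices with $a=b=0$, so $G_{3}=\\{e\\}$ and $G$ is a $2$-step nilpotent Lie group; $\Gamma$ is discrete and cocompact, $X=G/\Gamma$ is a $2$-step nilmanifold, and translation by any fixed $T\in G$ yields a nilsystem of degree at most $2$.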
###### Definition 2.5.
A t.d.s. (m.p.s) is called a topological (measurable) $d$-step pronilsystem if
it is a topological (measurable) inverse limit of nilsystems of degree at most
$d$. By convention a $0$-step pronilsystem is the one-point trivial system.
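A standard example of a pronilsystem which is not itself a nilsystem: each cyclic rotation $(\mathbb{Z}/2^{m}\mathbb{Z},x\mapsto x+1)$ is a nilsystem of degree at most $1$ (a finite abelian group is a $0$-dimensional abelian, hence $1$-step nilpotent, Lie group acting on itself, with trivial $\Gamma$), and the reduction maps $\mathbb{Z}/2^{m+1}\mathbb{Z}\rightarrow\mathbb{Z}/2^{m}\mathbb{Z}$ are factor maps; hence the $2$-adic odometer $\underleftarrow{\lim}(\mathbb{Z}/2^{m}\mathbb{Z},+1)$ is a topological $1$-step pronilsystem.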
###### Remark 2.6.
By [HK18, p. 233] if an ergodic measurable $d$-step pronilsystem is presented
as the inverse limit
$(X,\mathcal{X},\nu,T)=\underleftarrow{\lim}(X_{m},\mathcal{X}_{m},\nu_{m},T_{m})$
given by the measurable factor maps
$\pi_{m}:(X_{m},\mathcal{X}_{m},\nu_{m},T_{m})\rightarrow(X_{m-1},\mathcal{X}_{m-1},\nu_{m-1},T_{m-1})$
between nilsystems of degree at most $d$ then there exist topological factor
maps $\tilde{\pi}_{m}:(X_{m},T_{m})\rightarrow(X_{m-1},T_{m-1})$ such that
$\tilde{\pi}_{m}=\pi_{m}$ $\nu_{m}$-a.e. and so effectively one can consider
$(X,\mathcal{X},\nu,T)$ as a (minimal) topological pronilsystem. Moreover any
two $d$-step pronilsystem topological models of $(X,\mathcal{X},\nu,T)$ are
isomorphic as t.d.s. (Theorem 3.3).
###### Definition 2.7.
([HKM10, Definition 2.2]) A bounded sequence $\\{a(n)\\}_{n\in\mathbb{Z}}$ is
called a $d$-step nilsequence if there exists a $d$-step pronilsystem $(X,T)$,
$x_{0}\in X$ and a continuous function $f\in C(X)$ such that
$a(n)=f(T^{n}x_{0})$ for $n\in\mathbb{Z}$.
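Two standard examples may help fix ideas. For $\lambda\in\mathbb{S}^{1}$ the sequence $a(n)=\lambda^{n}$ is a $1$-step nilsequence: take the rotation $T:z\mapsto\lambda z$ on $\mathbb{S}^{1}$, the point $x_{0}=1$ and $f(z)=z$. For irrational $\alpha$ the sequence $a(n)=e^{2\pi in^{2}\alpha}$ is a $2$-step nilsequence; it arises from a translation on the Heisenberg nilmanifold recorded after Definition 2.4, and is the classical example of a nilsequence which is not a $1$-step nilsequence.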
###### Theorem 2.8.
([HK09, Theorem 3.1]) Let $(X,T)$ be a nilsystem. Then $(X,T)$ is uniquely
ergodic if and only if $(X,T)$ is ergodic w.r.t. the Haar measure if and only
if $(X,T)$ is minimal.
The following proposition is an immediate corollary of the previous theorem.
###### Proposition 2.9.
Let $(X,T)$ be a pronilsystem. Then $(X,T)$ is uniquely ergodic if and only if
$(X,T)$ is minimal.
###### Definition 2.10.
Let $(X,\mu,T)$ be a strictly ergodic t.d.s. We say that a t.d.s. $(Y,T)$ is a
topological $k$-step pronilfactor of $(X,T)$ if it is a topological factor of
$(X,T)$ and if it is isomorphic to a $k$-step pronilsystem as a t.d.s. We say
that a m.p.s. $(Y,\mathcal{Y},\nu,T)$ is a measurable $k$-step pronilfactor of
$(X,T)$ if it is a measurable factor of $(X,\mathcal{X},\mu,T)$ and if it is
isomorphic to a $k$-step pronilsystem as a m.p.s.
### 2.5. Host-Kra structure theory machinery.
By a face of the discrete cube $\\{0,1\\}^{k}$ we mean a subcube obtained by
fixing some subset of the coordinates. For $k\in\mathbb{N}$, let
$[k]=\\{0,1\\}^{k}$. Thus $X^{[k]}=X\times\cdots\times X$, $2^{k}$ times and
similarly $T^{[k]}=T\times\cdots\times T$, $2^{k}$ times. For $x\in X$,
$x^{[k]}=(x,\ldots,x)\in X^{[k]}$. Let
$[k]_{*}=\\{0,1\\}^{k}\setminus\\{\vec{0}\\}$ and define
$X_{*}^{[k]}=X^{[k]_{*}}$.
###### Definition 2.11.
([HK05]) Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s. For $1\leq j\leq k$,
let $\overline{\alpha}_{j}=\\{v\in\\{0,1\\}^{k}:v_{j}=1\\}$ be the $j$-th
upper face of $\\{0,1\\}^{k}$. For any face $F\subset\\{0,1\\}^{k}$, define
$(T^{F})_{v}=\begin{cases}T&v\in F\\\ \operatorname{Id}&v\notin F.\end{cases}$
Define the face group $\mathcal{F}^{k}(T)\subset\operatorname{Homeo}(X^{[k]})$
to be the group generated by the elements $\\{T^{\overline{\alpha}_{j}}:1\leq
j\leq k\\}$. Define the $k$-th Host-Kra cube group $\mathcal{HK}^{k}(T)$
to be the subgroup of $\operatorname{Homeo}(X^{[k]})$ generated by
$\mathcal{F}^{k}(T)$ and $T^{[k]}$.
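As a small worked case, take $k=1$: the only upper face is $\overline{\alpha}_{1}=\\{1\\}$, so $T^{\overline{\alpha}_{1}}=\operatorname{Id}\times T$ on $X^{[1]}=X\times X$, and $\mathcal{HK}^{1}(T)$ is generated by $\operatorname{Id}\times T$ and $T^{[1]}=T\times T$. Since $(T\times T)(\operatorname{Id}\times T)^{-1}=T\times\operatorname{Id}$, one obtains $\mathcal{HK}^{1}(T)=\\{T^{m}\times T^{n}:m,n\in\mathbb{Z}\\}$.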
###### Definition 2.12.
([HK05]) Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s. Let
$\mu^{[1]}=\mu\times\mu$. For $k\in\mathbb{N}$, let $\mathcal{I}_{T^{[k]}}$ be
the $T^{[k]}$-invariant $\sigma$-algebra of
$(X^{[k]},\mathcal{X}^{[k]},\mu^{[k]})$. Define $\mu^{[k+1]}$ to be the
relative independent joining of two copies of $\mu^{[k]}$ over
$\mathcal{I}_{T^{[k]}}$. That is, for $f_{v}\in L^{\infty}(\mu)$,
$v\in\\{0,1\\}^{k+1}$:
$\int_{X^{[k+1]}}\prod_{v\in\\{0,1\\}^{k+1}}f_{v}(x_{v})d\mu^{[k+1]}(x)=\\\
\int_{X^{[k]}}\mathbb{{E}}(\prod_{v\in\\{0,1\\}^{k}}f_{v0}|\mathcal{I}_{T^{[k]}})(x)\mathbb{{E}}(\prod_{v\in\\{0,1\\}^{k}}f_{v1}|\mathcal{I}_{T^{[k]}})(x)d\mu^{[k]}(x).$
In particular, from Equation (5), it follows that for all measurable functions
$H_{1},H_{2}\in L^{\infty}(X^{[k]},\mu^{[k]})$,
(6)
$\int_{X^{[k]}}\mathbb{{E}}(H_{1}|\mathcal{I}_{T^{[k]}})(c)\mathbb{{E}}(H_{2}|\mathcal{I}_{T^{[k]}})(c)d\mu^{[k]}(c)=\\\
\int_{X^{[k]}}\mathbb{{E}}(H_{1}|\mathcal{I}_{T^{[k]}})(c)H_{2}(c)d\mu^{[k]}(c).$
Note $\mu^{[k]}$ is $\mathcal{HK}^{k}(T)$-invariant ([HK18, Chapter 9,
Proposition 2]).
###### Definition 2.13.
[HK18, Chapter 9, Section 1] For $k\in\mathbb{N}$, let $\mathcal{J}_{*}^{k}$ be the $\sigma$-algebra of sets invariant under $\mathcal{F}^{k}(T)$ on $X_{*}^{[k]}$.
###### Definition 2.14.
[HK18, Subsection 9.1] Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s. For
$k\in\mathbb{N}$, define $\mathcal{Z}_{k}(X)$ to be the $\sigma$-algebra
consisting of measurable sets $B$ such that there exists a
$\mathcal{J}_{*}^{k+1}$-measurable set $A\subset X_{*}^{[k+1]}$ so that up to
$\mu^{[k+1]}$-measure zero it holds:
$X\times A=B\times X_{*}^{[k+1]}$
Define the $k$-th Host-Kra factor $Z_{k}(X)$ as the measurable factor of $X$
induced by $\mathcal{Z}_{k}(X)$ and denote by $\pi_{k}:X\rightarrow Z_{k}(X)$
the (measurable) canonical $k$-th projection. Let $\mu_{k}$ be the projection
of $\mu$ w.r.t. $\pi_{k}$.
###### Definition 2.15.
Let $(X,\mathcal{X},\mu,T)$ be an m.p.s. and $k\in\mathbb{N}$. The Host-Kra-
Gowers seminorms on $L^{\infty}(\mu)$ are defined as follows:
$|||f|||_{k}=\Big{(}\int\prod_{v\in\\{0,1\\}^{k}}\mathcal{C}^{|v|}f(x_{v})d\mu^{[k]}(x)\Big{)}^{1/2^{k}},$
where $|(v_{1},\ldots,v_{k})|=\sum_{i=1}^{k}v_{i}$ and $\mathcal{C}^{n}z=z$ if $n$ is even and $\mathcal{C}^{n}z=\overline{z}$ if $n$ is odd. By [HK18, Subsection 8.3], $|||\cdot|||_{k}$ is a seminorm.
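As a quick check of the definition, for $k=1$ one has $\mu^{[1]}=\mu\times\mu$, so
$|||f|||_{1}^{2}=\int_{X\times X}f(x_{0})\overline{f(x_{1})}d\mu(x_{0})d\mu(x_{1})=\Big{|}\int_{X}fd\mu\Big{|}^{2}.$
Thus $|||f|||_{1}=|\int_{X}fd\mu|$, consistent with Lemma 2.16 below: $|||f|||_{1}=0$ iff $\mathbb{E}(f|\mathcal{Z}_{0}(X))=0$, i.e. iff $f$ has zero mean (recall that $Z_{0}(X)$ is trivial).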
###### Lemma 2.16.
[HK18, Chapter 9, Theorem 7] Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s.
and $k\in\mathbb{N}$. Then for $f\in L^{\infty}(\mu)$, $|||f|||_{k+1}=0$ if
and only if $\mathbb{E}(f|\mathcal{Z}_{k}(X))=0$.
### 2.6. Maximal measurable pronilfactors.
###### Definition 2.17.
Let $k\in\mathbb{N}$. A m.p.s. $(X,\mathcal{X},\mu,T)$ is called a
(measurable) system of order $k$ if it is isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$.
###### Theorem 2.18.
([HK05, Theorem 10.1], [HK18, Chapter 16, Theorem 1], for an alternative proof
see [GL19, Theorem 5.3]) An ergodic m.p.s. is a system of order $k$ iff it is
isomorphic to a minimal $k$-step pronilsystem as m.p.s.
###### Remark 2.19.
Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s. In the literature
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is referred to as the maximal
measurable $k$-step pronilfactor or as the maximal factor which is a system of
order $k$ (see [HK18, Chapter 9, Theorem 18]). By this it is meant that any
measurable factor map
$\phi:(X,\mathcal{X},\mu,T)\rightarrow(Y,\mathcal{Y},\nu,S)$ where
$(Y,\mathcal{Y},\nu,S)$ is a minimal $k$-step pronilsystem, factors through
the canonical $k$-th projection
$\pi_{k}:(X,\mathcal{X},\mu,T)\rightarrow(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$,
i.e., there exists a unique (up to measure zero)
$\psi:(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)\rightarrow(Y,\mathcal{Y},\nu,S)$
such that $\phi=\psi\circ\pi_{k}$ a.s. In Section 3 we establish the
complementary property of universality for
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$.
###### Remark 2.20.
In [HKM14, Corollary 2.2] a criterion for an ergodic m.p.s.
$(X,\mathcal{X},\mu,T)$ to have $Z_{k}(X)=Z_{1}(X)$ for all $k\geq 1$ is
given. Indeed this is the case for ergodic systems whose spectrum does not
admit a Lebesgue component with infinite multiplicity. In particular this
holds true for weakly mixing systems, systems with singular maximal spectral
type and systems with finite spectral multiplicity.
### 2.7. Maximal topological pronilfactors.
Recall the definitions of $\mathcal{HK}^{k}(T)$ and $\mathcal{F}^{k}(T)$ (Definition 2.11).
###### Definition 2.21.
Let $(X,T)$ be a minimal t.d.s. Define the induced $(k+1)$-th dynamical cubespace by:
$\operatorname{C}_{\operatorname{T}}^{k+1}(X)=\overline{\\{gx^{[k+1]}|\,g\in\mathcal{HK}^{k+1}(T)\\}},$
where $x\in X$; by minimality the right-hand side does not depend on the choice of $x$.
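For an irrational rotation $(\mathbb{T},R_{\alpha})$ this can be computed directly (a standard illustration). Applying $(T^{[2]})^{m}(T^{\overline{\alpha}_{1}})^{a}(T^{\overline{\alpha}_{2}})^{b}$ to $x^{[2]}$ yields $(x+m\alpha,x+(m+b)\alpha,x+(m+a)\alpha,x+(m+a+b)\alpha)$, and as $\alpha$ is irrational the closure of these points is
$\operatorname{C}_{\operatorname{T}}^{2}(\mathbb{T})=\\{(u,u+s,u+t,u+s+t):u,s,t\in\mathbb{T}\\}=\\{c\in\mathbb{T}^{[2]}:c_{00}-c_{01}-c_{10}+c_{11}=0\\}.$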
###### Definition 2.22.
([HKM10, Definition 3.2]) Let $(X,T)$ be a topological dynamical system and
$k\geq 1$. The points $x,y\in X$ are said to be regionally proximal of order
$k$, denoted $(x,y)\in\operatorname{RP}^{[k]}(X)$, if there are sequences $f_{i}\in\mathcal{F}^{k}(T)$, $x_{i},y_{i}\in X$ and a point $a_{*}\in X_{*}^{[k]}$ such that
$\lim_{i\rightarrow\infty}(f_{i}x_{i}^{[k]},f_{i}y_{i}^{[k]})=((x,a_{*}),(y,a_{*})).$
###### Theorem 2.23.
([SY12, Theorem 3.5]; this theorem was generalized to arbitrary minimal group actions in [GGY18, Theorem 3.8].) Let $(X,T)$ be a minimal t.d.s. and $k\geq
1$. Then $\operatorname{RP}^{[k]}(X)$ is a closed $T$-invariant equivalence
relation.
###### Definition 2.24.
A t.d.s. $(X,T)$ is called a (topological) system of order $k$ if
$\operatorname{RP}^{[k]}(X)=\\{(x,x)\,|\,x\in X\\}$.
###### Theorem 2.25.
([HKM10, Theorem 1.2], for an alternative proof see [GMV20, Theorem 1.30]) A
minimal t.d.s. is a topological system of order $k$ iff it is isomorphic to a
minimal $k$-step pronilsystem as t.d.s.
Theorem 2.23 allows us to give the following definition.
###### Definition 2.26.
Let $(X,T)$ be a minimal t.d.s. Define the maximal $k$-step nilfactor by
$W_{k}(X)=X/\operatorname{RP}^{[k]}(X)$. Denote the associated map
$\operatorname{\pi_{k}^{top}}:X\rightarrow W_{k}(X)$ as the (topological)
canonical $k$-th projection.
###### Remark 2.27.
The terminology of Definition 2.26 is justified by the following property: Any
topological factor map $\phi:(X,T)\rightarrow(Y,T)$ where $(Y,T)$ is a system
of order $k$, factors through the canonical $k$-th projection
$\operatorname{\pi_{k}^{top}}:(X,T)\rightarrow(W_{k}(X),T)$, i.e., there
exists a unique $\psi:(W_{k}(X),T)\rightarrow(Y,T)$ such that
$\phi=\psi\circ\operatorname{\pi_{k}^{top}}$ ([HKM10, Proposition 4.5]). In Section 3 we establish the complementary property of universality for
$(W_{k}(X),T)$.
###### Definition 2.28.
([GL19, Definition 3.1]) A t.d.s. $(X,T)$ is called $k$-cube uniquely ergodic
if $(\operatorname{C}_{\operatorname{T}}^{k}(X),\mathcal{HK}^{k}(T))$ is
uniquely ergodic.
### 2.8. CF-Nil$(k)$ systems.
###### Definition 2.29.
For $k\geq 0$, we say $(X,T)$ is a CF-Nil($k$) system if $(X,T)$ is strictly ergodic and $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is isomorphic to $(W_{k}(X),\omega_{k},T)$ as m.p.s., where $\mu_{k}$ and $\omega_{k}$ are the images of the unique invariant measure of $(X,T)$ under the measurable, respectively topological, canonical $k$-th projections.
###### Remark 2.30.
By convention $Z_{0}(X)=W_{0}(X)=\\{\bullet\\}$. Thus every strictly ergodic
$(X,T)$ is CF-Nil($0$).
The term "$(X,\mu,T)$ is CF-Nil($k$)" is an abbreviation of
"$(X,\mu,T)$ Continuously Factors on a $\mathbf{k}$-step proNilsystem which is
isomorphic to $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s."
Indeed if $(W_{k}(X),\omega_{k},T)$ is isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s. then obviously this
condition holds. The reverse implication is given by the following proposition
which has been (implicitly) used several times in the literature ([HK09,
HKM14, HSY19]). Its proof is given at the end of Subsection 3.2.
###### Proposition 2.31.
Let $(X,T)$ be a strictly ergodic t.d.s. which topologically factors on a
(minimal) $k$-step pronilsystem $(\hat{Z}_{k},T)$ with the unique ergodic
measure $\gamma_{k}$. If $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is
isomorphic to $(\hat{Z}_{k},\gamma_{k},T)$ as m.p.s., then $(\hat{Z}_{k},T)$
and $(W_{k}(X),T)$ are isomorphic as t.d.s. In particular $(X,\mu,T)$ is CF-
Nil($k$).
Theorem C allows us to give a remarkably simple proof of the following theorem.
###### Theorem 2.32.
Let $(X,T)$ be a CF-Nil$(k)$ system. The following holds:
1. (1)
If $\pi:(X,T)\rightarrow(Y,T)$ is a topological factor map, then $(Y,T)$ is a
CF-Nil$(k)$ system.
2. (2)
$(X,T)$ is a CF-Nil($i$) system for $0\leq i\leq k$.
###### Proof.
To prove (1) we note $(Y,T)$ is minimal being a factor of a minimal system and
$(\operatorname{C}_{\operatorname{T}}^{k+1}(Y),\mathcal{HK}^{k+1}(T))$ is
uniquely ergodic being a factor of
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ under
the natural topological factor map induced from $\pi:(X,T)\rightarrow(Y,T)$
(see Proposition 2.1). By Theorem C this implies $(Y,T)$ is a CF-Nil$(k)$
system.
Similarly, to prove (2), we consider the map
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))\rightarrow(\operatorname{C}_{\operatorname{T}}^{i+1}(X),\mathcal{HK}^{i+1}(T))$
given by
$(c_{v_{1},\ldots,v_{k+1}})_{(v_{1},\ldots,v_{k+1})\in\\{0,1\\}^{k+1}}\mapsto(c_{v_{1},\ldots,v_{i+1},0,\ldots,0})_{(v_{1},\ldots,v_{i+1})\in\\{0,1\\}^{i+1}},$
which is a topological factor map. By Proposition 2.1, $(\operatorname{C}_{\operatorname{T}}^{i+1}(X),\mathcal{HK}^{i+1}(T))$ is uniquely ergodic, and by Theorem C, $(X,T)$ is a CF-Nil($i$) system. ∎
### 2.9. A CF-Nil$(k)$ topological model.
Recall the definitions of Subsection 2.2. In [Wei85, Theorem 2] Benjamin Weiss
proved the following theorem:
###### Theorem 2.33.
(Weiss) Let $(Z,\nu,S)$ be a strictly ergodic t.d.s. and $(X,\mathcal{X},\mu,T)$ an ergodic m.p.s. such that there exists a measurable factor map $\pi:(X,\mathcal{X},\mu,T)\rightarrow(Z,\mathcal{Z},\nu,S)$. Then $\pi$
has a topological model $\hat{\pi}:(\hat{X},\hat{T})\rightarrow(Z,S)$ where
$(\hat{X},\hat{T})$ is strictly ergodic.
The following theorem is already implicit in [HSY19].
###### Theorem 2.34.
Let $k\in\mathbb{N}$. Every ergodic system $(X,\mathcal{X},\mu,T)$ has a
topological model $(\hat{X},\hat{T})$ such that $(\hat{X},\hat{T})$ is CF-
Nil($k$).
###### Proof.
By Theorem 2.18, $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is measurably
isomorphic to a strictly ergodic inverse limit of $k$-step nilsystems
$(\hat{Z}_{k},\hat{T})$. By Theorem 2.33, $(X,\mathcal{X},\mu,T)$ admits a
strictly ergodic topological model $(\hat{X},\hat{T})$ such that there exists
a topological factor map $(\hat{X},\hat{T})\rightarrow(\hat{Z}_{k},\hat{T})$
which is a topological model of
$(X,\mathcal{X},\mu,T)\rightarrow(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$. By
Proposition 2.31, $(\hat{X},\hat{T})$ is CF-Nil($k$).∎
###### Remark 2.35.
One can easily construct a strictly ergodic system which is not CF-Nil($k$).
Let $(X,\mathcal{X},\mu,T)$ be an irrational rotation on the circle. By
[Leh87], there exists a topologically mixing and strictly ergodic model
$(\hat{X},\hat{\mu},T)$ of $(X,\mu,T)$. As $X$ is an irrational rotation,
$Z_{1}(\hat{X})=\hat{X}$ and therefore for all $k\geq 1$,
$Z_{k}(\hat{X})=\hat{X}$. As $\hat{X}$ is topologically mixing, it is
topologically weakly mixing and therefore for all $k\geq 1$,
$W_{k}(\hat{X})=\\{\bullet\\}$ ([SY12, Theorem 3.13(1)]). It follows that for all $k\geq 1$, $(W_{k}(\hat{X}),T)$ is not isomorphic to $(Z_{k}(\hat{X}),\hat{\mu}_{k},T)$ as m.p.s.
## 3\. Coalescence and universality for maximal pronilfactors.
### 3.1. Coalescence
In this section we establish Theorem B, i.e., both topological coalescence (introduced in [Aus63]) and measurable coalescence (introduced in [HP68]) for minimal pronilsystems (the definitions of these concepts appear as part of the statements of Theorems 3.1 and 3.3 respectively). There is a vast literature dedicated to coalescence (see [LLT92] and references therein). Coalescence plays an important role in the next subsection.
###### Theorem 3.1.
(Topological coalescence for minimal pronilsystems) Let $(Y,T)$ be a minimal
$k$-step pronilsystem. Then $(Y,T)$ is topologically coalescent, i.e. if
$\Phi:(Y,T)\rightarrow(Y,T)$ is a topological factor map, then $\Phi$ is a
topological isomorphism.
###### Proof.
Recall that the Ellis semigroup is defined as
$E=E(Y,T)=\overline{\\{T^{n}:n\in\mathbb{Z}\\}}$, where the closure is w.r.t.
the product topology on $Y^{Y}$ (see [Ell58] for more details). By a theorem
of Donoso [Don14, Theorem 1.1], $E(Y,T)$ is a $k$-step nilpotent group, i.e.
for $E_{1}=E$, $E_{i+1}=[E_{i},E],i\geq 1$, one has that
$E_{k+1}=\\{\operatorname{Id}\\}$. As $\Phi$ is continuous, one has that $E$
and $\Phi$ commute, i.e. for any $g\in E$, $\Phi\circ g=g\circ\Phi$. For any $z\in Y$, we define the group $\mathcal{G}(Y,z)=\\{\alpha\in E(Y,T):\alpha z=z\\}$ (recall that $E(Y,T)$ is a group, as pronilsystems are distal). Let $x,y\in Y$ be such that $\Phi(x)=y$. If $u\in\mathcal{G}(Y,x)$, one always has $uy=u(\Phi(x))=\Phi(ux)=\Phi(x)=y$, i.e. $u\in\mathcal{G}(Y,y)$. Thus $\mathcal{G}(Y,x)\subset\mathcal{G}(Y,y)$.
Assume that $\Phi$ is not one-to-one. Then there exist $x_{1}\neq x_{2}\in Y$ such that $\Phi(x_{1})=\Phi(x_{2})$. As $(Y,T)$ is minimal, there exist $p_{1},p_{2}\in E(Y,T)$ such that $x_{1}=p_{1}x$, $x_{2}=p_{2}x$. Then
$p_{1}y=\Phi(p_{1}x)=\Phi(x_{1})=\Phi(x_{2})=\Phi(p_{2}x)=p_{2}y$. Thus
$p_{1}^{-1}p_{2}\in\mathcal{G}(Y,y)$. As $p_{2}x=x_{2}\neq x_{1}=p_{1}x$, we
have
$p_{1}^{-1}p_{2}x\neq x,$
which implies that
$p_{1}^{-1}p_{2}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$.
Let $\beta_{0}=p_{1}^{-1}p_{2}$. As $(Y,T)$ is minimal, there exists $u\in
E(Y,T)$ such that $ux=y$. Then $\mathcal{G}(Y,x)=u^{-1}\mathcal{G}(Y,y)u$. Let
$\beta_{1}=(u^{-1}\beta_{0}^{-1}u)\beta_{0}$. As
$\beta_{0}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$, one has that
(7) $\beta_{0}\notin\mathcal{G}(Y,x),\beta_{0}\in\mathcal{G}(Y,y)\text{ and
}(u^{-1}\beta_{0}^{-1}u)\in
u^{-1}\mathcal{G}(Y,y)u=\mathcal{G}(Y,x)\subset\mathcal{G}(Y,y).$
Thus we can show that $\beta_{1}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$.
Indeed, by (7) we know that
$\beta_{1}=(u^{-1}\beta_{0}^{-1}u)\beta_{0}\in\mathcal{G}(Y,y)$ as
$\mathcal{G}(Y,y)$ is a group. If $\beta_{1}\in\mathcal{G}(Y,x)$, then
$\beta_{0}=(u^{-1}\beta_{0}^{-1}u)^{-1}\beta_{1}\in\mathcal{G}(Y,x)$, which
constitutes a contradiction. Therefore
$\beta_{1}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$ and
$(u^{-1}\beta_{1}^{-1}u)\in u^{-1}\mathcal{G}(Y,y)u=\mathcal{G}(Y,x)$.
Similarly, we define $\beta_{i+1}=(u^{-1}\beta_{i}^{-1}u)\beta_{i}$ for $i\geq
1$. By the same argument, one has that
$\beta_{i+1}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$. But notice that $\beta_{i}\in E_{i+1}$ and $E_{k+1}=\\{\operatorname{Id}\\}$, therefore $\operatorname{Id}=\beta_{k}\in\mathcal{G}(Y,y)\setminus\mathcal{G}(Y,x)$, while trivially $\operatorname{Id}\in\mathcal{G}(Y,x)$, a contradiction.
Thus $\Phi$ is a one-to-one topological factor map, which implies it is a
topological isomorphism. ∎
###### Proposition 3.2.
[HK18, Chapter 13, Proposition 15] Let $(Y,\nu,T)$,
$(Y^{\prime},\nu^{\prime},T)$ be minimal (uniquely ergodic) $k$-step
pronilsystems. Let $\pi:(Y,\nu,T)\rightarrow(Y^{\prime},\nu^{\prime},T)$ be a
measurable factor map. Then there exists a topological factor map
$\hat{\pi}:(Y,T)\rightarrow(Y^{\prime},T)$ such that $\pi(y)=\hat{\pi}(y)$ for
$\nu$-a.e. $y$.
Combining Theorem 3.1 and Proposition 3.2 we immediately have the following
theorem.
###### Theorem 3.3.
(Measurable coalescence for minimal pronilsystems) Let $(Y,\nu,T)$ be a
minimal (uniquely ergodic) $k$-step pronilsystem. Then $(Y,\nu,T)$ is
measurably coalescent, i.e. if $\pi:(Y,\nu,T)\rightarrow(Y,\nu,T)$ is a
measurable factor map, then $\pi$ is a measurable isomorphism (which equals
a.s. a topological isomorphism).
###### Proof.
By Proposition 3.2, there exists a topological factor map
$\hat{\pi}:(Y,\nu,T)\rightarrow(Y,\nu,T)$ such that $\pi(y)=\hat{\pi}(y)$ for
$\nu$-a.e. $y\in Y$. By Theorem 3.1, $\hat{\pi}$ is a topological isomorphism.
As $\pi$ equals $\hat{\pi}$ a.s., one may find a $T$-invariant Borel set $Y_{0}\subset Y$ with $\nu(Y_{0})=1$ such that $\pi_{|Y_{0}}=\hat{\pi}_{|Y_{0}}$. As
$\hat{\pi}$ is one-to-one, $\pi_{|Y_{0}}^{-1}(\pi_{|Y_{0}}(Y_{0}))=Y_{0}$ and
therefore $\nu(\pi_{|Y_{0}}(Y_{0}))=1$. Thus
$\pi_{|Y_{0}}:Y_{0}\rightarrow\hat{\pi}(Y_{0})$ is a Borel measurable one-to-
one map between two $T$-invariant sets of full measure, which implies that
$\pi$ is a measurable isomorphism. ∎
###### Corollary 3.4.
Let $(X,\mathcal{X},\mu,T)$ be an ergodic m.p.s. and $k\in\mathbb{N}$. Let
$(Y,\mathcal{Y},\nu,S)$ be a minimal $k$-step pronilsystem isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$. Let
$\pi:(X,\mathcal{X},\mu,T)\rightarrow(Y,\mathcal{Y},\nu,S)$ be a factor map.
The following holds:
1. (1)
There is a (topological) isomorphism $p\leavevmode\nobreak\
:\leavevmode\nobreak\
(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)\rightarrow(Y,\mathcal{Y},\nu,S)$ such
that $\pi=p\circ\pi_{k}$ a.s.
2. (2)
Every measurable factor map $\phi:(X,\mathcal{X},\mu,T)\rightarrow(Y^{\prime},\mathcal{Y}^{\prime},\nu^{\prime},S^{\prime})$, where $(Y^{\prime},\mathcal{Y}^{\prime},\nu^{\prime},S^{\prime})$ is a minimal $k$-step pronilsystem, factors through $\pi$, i.e., there exists a unique (up to measure zero) $\psi:(Y,\mathcal{Y},\nu,S)\rightarrow(Y^{\prime},\mathcal{Y}^{\prime},\nu^{\prime},S^{\prime})$ such that $\phi=\psi\circ\pi$ a.s.
$\begin{CD}X@>{\pi_{k}}>{}>Z_{k}(X)@>{p}>{}>Y@>{\psi}>{}>Y^{\prime}\end{CD}$
(with $\pi=p\circ\pi_{k}$ and $\phi=\psi\circ\pi$ a.s.)
###### Proof.
By the maximality of $\pi_{k}$ (see Subsection 2.6) there is a measurable
factor map
$p:(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)\rightarrow(Y,\mathcal{Y},\nu,S)$
such that $\pi=p\circ\pi_{k}$ a.s. By assumption there is a measurable
isomorphism
$i:(Y,\mathcal{Y},\nu,S)\rightarrow(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$
(which equals a.s. a topological isomorphism). By Theorem 3.3, $i\circ p$ is a
measurable isomorphism and therefore $p$ is a measurable isomorphism. This
establishes (1). Thus $\pi$ inherits the maximality property of $\pi_{k}$.
This establishes (2). ∎
###### Remark 3.5.
Bernard Host has pointed out to us that it is possible to prove Theorem B
using results from [HK18, Chapter 13].
### 3.2. Universality
###### Definition 3.6.
Let $(X,\mu,T)$ be a strictly ergodic t.d.s. Denote by
$\operatorname{C_{k}^{top}}$ the collection of (topological) isomorphism
equivalence classes of topological $k$-step pronilfactors of $(X,T)$. Denote
by $\operatorname{C_{k}^{meas}}$ the collection of (measurable) isomorphism
equivalence classes of measurable $k$-step pronilfactors of $(X,T)$. An
(equivalence class of) t.d.s. $(M,T)\in\operatorname{C_{k}^{top}}$ is called
$\operatorname{C_{k}^{top}}$-universal (this terminology is frequently used in the literature; see [dV93, GL13]) if every $(N,S)\in\operatorname{C_{k}^{top}}$ is a topological factor of $(M,T)$. An (equivalence class of) m.p.s. $(M,\mathcal{M},\mu,T)\in\operatorname{C_{k}^{meas}}$ is called $\operatorname{C_{k}^{meas}}$-universal if every $(N,\mathcal{N},\nu,S)\in\operatorname{C_{k}^{meas}}$ is a measurable factor of $(M,\mathcal{M},\mu,T)$.
The following theorem establishes a complementary property to maximality as
described in Remark 2.19 and Remark 2.27.
###### Theorem 3.7.
Let $(X,\mu,T)$ be a strictly ergodic t.d.s., then $(W_{k}(X),T)$ is the
unique $\operatorname{C_{k}^{top}}$-universal topological $k$-step
pronilfactor of $(X,T)$ and $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is the
unique $\operatorname{C_{k}^{meas}}$-universal measurable $k$-step
pronilfactor of $(X,T)$.
###### Proof.
By Remark 2.19 $(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ is a
$\operatorname{C_{k}^{meas}}$-universal measurable $k$-step pronilfactor of
$(X,T)$. Assume
$(Z^{\prime}_{k}(X),\mathcal{Z}^{\prime}_{k}(X),\mu^{\prime}_{k},T)$ is
another $\operatorname{C_{k}^{meas}}$-universal measurable $k$-step
pronilfactor of $(X,T)$. By universality one has measurable factor maps $Z_{k}(X)\rightarrow Z^{\prime}_{k}(X)$ and $Z^{\prime}_{k}(X)\rightarrow Z_{k}(X)$. Composing these yields a measurable factor map from $Z_{k}(X)$ to itself which factors through $Z^{\prime}_{k}(X)$; by Theorem 3.3 it is a measurable isomorphism, hence $Z_{k}(X)$ and $Z^{\prime}_{k}(X)$ are isomorphic.
By Remark 2.27 $(W_{k}(X),T)$ is a $\operatorname{C_{k}^{top}}$-universal
topological $k$-step pronilfactor of $(X,T)$. By Theorem 3.1 it is unique.
∎
###### Proof of Proposition 2.31.
By Remark 2.27, one can find a topological factor map
$q:(W_{k}(X),T)\rightarrow(\hat{Z}_{k},T)$. Let $\omega_{k}$ be the unique
ergodic measure of $(W_{k}(X),T)$. By Remark 2.19, one can find a measurable
factor map
$\psi:(\hat{Z}_{k},\gamma_{k},T)\rightarrow(W_{k}(X),\omega_{k},T)$.
$\begin{CD}\hat{Z}_{k}@>{\psi}>{}>W_{k}(X)@>{q}>{}>\hat{Z}_{k}\end{CD}$
By Proposition 3.2, there exists a topological factor map
$\hat{\psi}:(\hat{Z}_{k},\gamma_{k},T)\rightarrow(W_{k}(X),\omega_{k},T)$ such
that $\hat{\psi}=\psi$ a.s. In particular, $\hat{\psi}\circ
q:(W_{k}(X),\omega_{k},T)\rightarrow(W_{k}(X),\omega_{k},T)$ is a topological
factor map. By Theorem 3.1, $\hat{\psi}\circ q$ is a topological isomorphism.
Thus $q$ is a topological isomorphism. As $(\hat{Z}_{k},T)$ and $(W_{k}(X),T)$ are uniquely ergodic, $q$ is also a measurable isomorphism. In particular
$(W_{k}(X),\mathcal{W}_{k}(X),\omega_{k},T)$ and
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ are isomorphic as m.p.s. and
$(X,\mu,T)$ is CF-Nil($k$).∎
## 4\. Cubespace characterization of CF-Nil($k$).
In this section, we prove Theorem C. We need some lemmas.
###### Lemma 4.1.
[HKM10, Lemma 5.6] Let $(X,T)$ be a minimal topological dynamical system and
$\mu$ be an invariant ergodic measure on $X$. Then the measure $\mu^{[k]}$ is
supported on $\operatorname{C}_{\operatorname{T}}^{k}(X)$ for any $k\geq 1$.
###### Proof.
The Lemma is proven in [HKM10, Lemma 5.6] with the help of the so-called $L^{2}$-convergence of cubical averages theorem [HK05, Theorem 1.2]. This is a deep theorem with a highly non-trivial proof. We note that we are able to give a direct proof of this Lemma, which we hope to publish elsewhere. ∎
###### Definition 4.2.
Let $G$ be a countable amenable group. A Følner sequence $\\{F_{N}\\}_{N\in\mathbb{N}}$ is a sequence of finite subsets of $G$ such that for any $g\in G$, $\lim_{N\rightarrow\infty}|gF_{N}\cap F_{N}|/|F_{N}|=1$.
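The basic example to keep in mind: in $G=\mathbb{Z}$ the intervals $F_{N}=\\{0,1,\ldots,N-1\\}$ form a Følner sequence, since for any $g\in\mathbb{Z}$ and $N>|g|$,
$\frac{|(g+F_{N})\cap F_{N}|}{|F_{N}|}=\frac{N-|g|}{N}\rightarrow 1\quad(N\rightarrow\infty).$
With this choice the averages in Theorem 4.3 below reduce to the usual Birkhoff averages.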
###### Theorem 4.3.
(Lindenstrauss) Let $G$ be an amenable group acting on a measure space
$(X,\mathcal{X},\mu)$ by measure preserving transformations. Let
$\mathcal{I}_{G}$ be the $G$-invariant $\sigma$-algebra of
$(X,\mathcal{X},\mu)$. There is a Følner sequence
$\\{F_{N}\\}_{N\in\mathbb{N}}$ such that for any $f\in L^{\infty}(\mu)$, for
$\mu$-a.e. $x\in X$,
${\displaystyle\lim_{N\rightarrow\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}}f(gx)=\mathbb{E}(f|\mathcal{I}_{G})(x).$
In particular, if the $G$-action is ergodic, then for $\mu$-a.e. $x\in X$,
${\displaystyle\lim_{N\rightarrow\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}}f(gx)=\int fd\mu.$
###### Proof.
The theorem follows from [Lin01, Theorem 1.2] and [Lin01, Proposition 1.4]. In
[Lin01, Theorem 1.2] the statement reads
(8) $\lim_{N\rightarrow\infty}\frac{1}{|F_{N}|}\sum_{g\in
F_{N}}f(gx)=\overline{f}(x)\text{ a.e.}$
for some $G$-invariant $\overline{f}\in L^{\infty}(\mu)$.
Note that if we replace $f$ by $\mathbb{E}(f|\mathcal{I}_{G})$ in (8), then, as $\mathbb{E}(f|\mathcal{I}_{G})$ is $G$-invariant, we trivially have:
$\mathbb{E}(f|\mathcal{I}_{G})(x)=\lim_{N\rightarrow\infty}\frac{1}{|F_{N}|}\sum_{g\in
F_{N}}\mathbb{E}(f|\mathcal{I}_{G})(gx)$
Using the Lebesgue dominated convergence theorem for conditional expectation (this follows easily from applying the Lebesgue dominated convergence theorem in Equation (4)), one has:
$\mathbb{E}(f|\mathcal{I}_{G})(x)=\lim_{N\rightarrow\infty}\mathbb{E}(\frac{1}{|F_{N}|}\sum_{g\in
F_{N}}f(g\cdot)|\mathcal{I}_{G})(x)=\mathbb{E}(\overline{f}|\mathcal{I}_{G})(x)=\overline{f}(x)\text{
a.e.}$
Thus $\overline{f}(x)=\mathbb{E}(f|\mathcal{I}_{G})(x)$, which gives the
statement above.
∎
###### Proof of Theorem C.
(I) $\Rightarrow$ (II): This follows from the proof in [HSY17, Section 4.4.3],
where it is shown that if one has a commutative diagram of the following form:
$\begin{CD}(X,\mathcal{X},\mu,T)@>{\phi}>{}>(\hat{X},T)\\\
@V{\pi_{k}}V{}V@V{}V{\hat{\pi}_{k}}V\\\
(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)@>{\operatorname{Id}}>{}>(Z_{k}(X),T),\end{CD}$
then $(C^{k+1}_{T}(\hat{X}),\mathcal{HK}^{k+1}(T))$ is uniquely ergodic. Here
$(X,\mathcal{X},\mu,T)$ is an ergodic system, $(\hat{X},T)$ is strictly
ergodic, $\phi$ is a measurable isomorphism w.r.t. the uniquely ergodic
measure of $(\hat{X},T)$ and $\hat{\pi}_{k}$ is a topological factor map.
Indeed, it is easy to obtain such a diagram for a CF-Nil$(k)$ system using
Proposition 2.31.
(II) $\Rightarrow$ (I): We assume that
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ is
uniquely ergodic. By Lemma 4.1, the unique invariant measure is $\mu^{[k+1]}$.
As $(X,T)$ is a topological factor of
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ w.r.t.
the projection to the first coordinate, $(X,T)$ is uniquely ergodic.
Let $p_{k}:(X,T)\rightarrow(W_{k}(X),T)$ be the topological canonical $k$-th
projection. By Proposition 2.1, as $(X,T)$ is uniquely ergodic so is
$(W_{k}(X),T)$. Let us denote by $\omega_{k}$ the unique invariant measure of
$(W_{k}(X),T)$. Obviously $(p_{k})_{*}\mu=\omega_{k}$. Thus
$p_{k}:(X,\mu,T)\rightarrow(W_{k}(X),\omega_{k},T)$ is a measurable factor
map. Let $\mathcal{W}_{k}$ be the $\sigma$-algebra corresponding to the map
$p_{k}$. Let $\mathcal{Z}_{k}$ be the $\sigma$-algebra corresponding to the
measurable canonical $k$-th projection
$\pi_{k}:(X,\mu,T)\rightarrow(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$. We will
show that $\mathcal{W}_{k}=\mathcal{Z}_{k}$, which implies that
$(W_{k}(X),\omega_{k},T)$ is isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s. The map
$p_{k}:(X,T)\rightarrow(W_{k}(X),T)$ induces a factor map
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))\rightarrow(\operatorname{C}_{\operatorname{T}}^{k+1}(W_{k}(X)),\mathcal{HK}^{k+1}(T)).$
By Proposition 2.1, as
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ is
uniquely ergodic so is
$(\operatorname{C}_{\operatorname{T}}^{k+1}(W_{k}(X)),\mathcal{HK}^{k+1}(T))$.
By Lemma 4.1 the unique invariant measure on
$(\operatorname{C}_{\operatorname{T}}^{k+1}(W_{k}(X)),\mathcal{HK}^{k+1}(T))$
is $\omega_{k}^{[k+1]}$. Let $\gamma_{k+1}$ be the conditional product measure
relative to $(W_{k}(X)^{[k+1]},\omega_{k}^{[k+1]})$ on $X^{[k+1]}$ ([Fur77,
Definition 9.1]). This is the unique measure on $X^{[k+1]}$ such that for all
$f_{v}\in L^{\infty}(X,\mu)$, $v\in\\{0,1\\}^{k+1}$ ([Fur77, Lemma 9.1]):
(9)
$\int_{X^{[k+1]}}\prod_{v\in\\{0,1\\}^{k+1}}f_{v}(c_{v})d\gamma_{k+1}(c)=\\\
\int_{W_{k}(X)^{[k+1]}}\prod_{v\in\\{0,1\\}^{k+1}}\mathbb{E}(f_{v}|W_{k}(X))(c_{v})d\omega_{k}^{[k+1]}(c).$
As $\mathbb{E}(\cdot|W_{k}(X))$ commutes with $T$ and $\omega_{k}^{[k+1]}$ is
$\mathcal{HK}^{k+1}(T)$-invariant, one has that $\gamma_{k+1}$ is
$\mathcal{HK}^{k+1}(T)$-invariant. It is natural to introduce the measure $\gamma_{k+1}$ since, by [HK18, Chapter 9, Theorem 14], $\mu^{[k+1]}$ is the conditional product measure relative to $\mu_{k}^{[k+1]}$; thus if $\mu_{k}=\omega_{k}$ then $\gamma_{k+1}=\mu^{[k+1]}$. It turns out one can reverse the direction of implication. Indeed we claim that
$\gamma_{k+1}(\operatorname{C}_{\operatorname{T}}^{k+1}(X))=1$. Assuming this
claim and recalling the assumption that
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ is
uniquely ergodic, one has by Lemma 4.1 that $\gamma_{k+1}=\mu^{[k+1]}$. With
the end goal of showing $\mathcal{Z}_{k}=\mathcal{W}_{k}$ we start by showing
$\mathcal{Z}_{k}\subset\mathcal{W}_{k}$. It is enough to show
$L^{\infty}(\mu)\cap L^{2}(\mathcal{W}_{k})^{\perp}\subset L^{\infty}(\mu)\cap
L^{2}(\mathcal{Z}_{k})^{\perp}$. To this end we will show that for any
function $f\in L^{\infty}(\mu)$ such that $\mathbb{E}(f|\mathcal{W}_{k})=0$,
it holds that $\mathbb{E}(f|\mathcal{Z}_{k})=0$. By Definition 2.15, as
$\gamma_{k+1}=\mu^{[k+1]}$,
$|||f|||_{k+1}^{2^{k+1}}=\int\prod_{v\in\\{0,1\\}^{k+1}}\mathcal{C}^{|v|}f(c_{v})d\gamma_{k+1}(c)=\\\
\int\prod_{v\in\\{0,1\\}^{k+1}}\mathbb{E}(\mathcal{C}^{|v|}f|W_{k}(X))(c_{v})d\omega_{k}^{[k+1]}(c).$
As $\mathbb{E}(f|\mathcal{W}_{k})\equiv 0$, it holds that
$\mathbb{E}(\mathcal{C}^{|v|}f|W_{k}(X))\equiv 0$ for any
$v\in\\{0,1\\}^{k+1}$. Therefore $|||f|||_{k+1}=0$. This implies by Lemma 2.16
that $\mathbb{E}(f|\mathcal{Z}_{k})=0$ as desired. By Remark 2.19, $Z_{k}(X)$
is the maximal measurable $k$-step pronilfactor of $(X,\mu,T)$. As
$(W_{k}(X),\omega_{k},T)$ is a $k$-step pronilfactor of $(X,T)$, one has that
$\mathcal{W}_{k}\subset\mathcal{Z}_{k}$. Thus
$\mathcal{W}_{k}=\mathcal{Z}_{k}$, which implies that
$(W_{k}(X),\omega_{k},T)$ is isomorphic to
$(Z_{k}(X),\mathcal{Z}_{k}(X),\mu_{k},T)$ as m.p.s.
As a final step, we will now show that
$\gamma_{k+1}(\operatorname{C}_{\operatorname{T}}^{k+1}(X))=1$. Let $f_{v}\in
L^{\infty}(X,\mu)$, $v\in\\{0,1\\}^{k+1}$ and set
$H_{0}=\prod_{v\in\\{0\\}\times\\{0,1\\}^{k}}f_{v}$ and
$H_{1}=\prod_{v\in\\{1\\}\times\\{0,1\\}^{k}}f_{v}$ as well as
$\hat{H}_{0}=\prod_{v\in\\{0\\}\times\\{0,1\\}^{k}}\mathbb{E}(f_{v}|W_{k}(X))$,
$\hat{H}_{1}=\prod_{v\in\\{1\\}\times\\{0,1\\}^{k}}\mathbb{E}(f_{v}|W_{k}(X))$.
By Equation (9), we have
(10)
$\int_{X^{[k+1]}}H_{0}(c)H_{1}(c^{\prime})d\gamma_{k+1}(c,c^{\prime})=\int_{W_{k}(X)^{[k+1]}}\hat{H}_{0}(c)\hat{H}_{1}(c^{\prime})d\omega_{k}^{[k+1]}(c,c^{\prime}).$
By Equation (6) in Definition 2.12,
(11)
$\int_{W_{k}(X)^{[k+1]}}\hat{H}_{0}(c)\hat{H}_{1}(c^{\prime})d\omega_{k}^{[k+1]}(c,c^{\prime})=\int_{W_{k}(X)^{[k]}}\mathbb{E}(\hat{H}_{0}|\mathcal{I}_{T^{[k]}})(c)\hat{H}_{1}(c)d\omega_{k}^{[k]}(c).$
By Birkhoff’s ergodic theorem (see also Theorem 4.3), one has that
(12)
$\begin{array}[]{ll}\int_{W_{k}(X)^{[k]}}\mathbb{E}(\hat{H}_{0}|\mathcal{I}_{T^{[k]}})(c)\hat{H}_{1}(c)d\omega_{k}^{[k]}(c)\\\
{\displaystyle=\int\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}\hat{H}_{0}((T^{[k]})^{n}c)\hat{H}_{1}(c)d\omega_{k}^{[k]}(c)}\\\
={\displaystyle\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}\int\hat{H}_{0}((T^{[k]})^{n}c)\hat{H}_{1}(c)d\omega_{k}^{[k]}(c)},\end{array}$
here we used the Lebesgue dominated convergence theorem.
Abusing notation one may consider $\hat{H}_{0}$ and $\hat{H}_{1}$ as defined
on $X^{[k]}$ (see Subsection 2.3). As
$p_{k}:(X,\mu,T)\rightarrow(W_{k}(X),\omega_{k},T)$ is a measurable factor
map, one has
$\int\hat{H}_{0}((T^{[k]})^{n}c)\hat{H}_{1}(c)d\omega_{k}^{[k]}(c)=\int\hat{H}_{0}((T^{[k]})^{n}c)\hat{H}_{1}(c)d\mu^{[k]}(c).$
As $(C_{T}^{k}(X),\mathcal{HK}^{k}(T))$ is a topological factor of
$(\operatorname{C}_{\operatorname{T}}^{k+1}(X),\mathcal{HK}^{k+1}(T))$ w.r.t.
the “lower” $2^{k}$ coordinates, $(C_{T}^{k}(X),\mathcal{HK}^{k}(T))$ is
uniquely ergodic. By Lemma 4.1, the unique ergodic measure is $\mu^{[k]}$. By
Theorem 4.3 applied to $(C_{T}^{k}(X),\mu^{[k]},\mathcal{HK}^{k}(T))$, there
is a Følner sequence $\\{F_{M}\subset\mathcal{HK}^{k}(T)\\}_{M\in\mathbb{N}}$
such that
(13)
$\int\hat{H}_{0}\big{(}(T^{[k]})^{n}c\big{)}\hat{H}_{1}(c)d\mu^{[k]}(c)=\lim_{M\rightarrow\infty}\frac{1}{|F_{M}|}\sum_{h\in
F_{M}}\hat{H}_{0}\big{(}(T^{[k]})^{n}hs\big{)}\hat{H}_{1}(hs)$
for $\mu^{[k]}$-a.e. $s\in C_{T}^{k}(X)$. Thus from Equations (10), (11), (12)
and (13), it holds for arbitrary $f_{v}\in L^{\infty}(X,\mu)$,
$v\in\\{0,1\\}^{k+1}$, $H_{0}=\prod_{v\in\\{0\\}\times\\{0,1\\}^{k}}f_{v}$ and
$H_{1}=\prod_{v\in\\{1\\}\times\\{0,1\\}^{k}}f_{v}$, for $\mu^{[k]}$-a.e.
$s\in C_{T}^{k}(X)$,
(14) $\int_{X^{[k+1]}}H_{0}(c)H_{1}(c^{\prime})d\gamma_{k+1}(c,c^{\prime})=\\\
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}\lim_{M\rightarrow\infty}\frac{1}{|F_{M}|}\sum_{h\in
F_{M}}\hat{H}_{0}\big{(}(T^{[k]})^{n}hs\big{)}\hat{H}_{1}(hs)$
Let $R\in C(X^{[k+1]},\mathbb{R})$ be a continuous function. We claim for
$\mu^{[k]}$-a.e. $s\in\operatorname{C}_{\operatorname{T}}^{k}(X)$,
(15) $\int
R(c)d\gamma_{k+1}(c)=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}\lim_{M\rightarrow\infty}\frac{1}{|F_{M}|}\sum_{h\in
F_{M}}R\big{(}(T^{[k]})^{n}hs,hs\big{)}$
Notice that it follows from Definitions 2.11 and 2.21 that if $s\in
C_{T}^{k}(X)$, then $((T^{[k]})^{n}hs,hs)\in C_{T}^{k+1}(X)$ for arbitrary
$h\in\mathcal{HK}^{k}(T)$ and $n\in\mathbb{Z}^{+}$ (see also [GGY18,
Subsection A.2]). Thus using Equation (15) with functions $R_{\delta}\in C(X^{[k+1]},[0,1])$ such that $R_{\delta}|_{\operatorname{C}_{\operatorname{T}}^{k+1}(X)}\equiv 1$ and $R_{\delta}|_{X^{[k+1]}\setminus B_{\delta}(\operatorname{C}_{\operatorname{T}}^{k+1}(X))}\equiv 0$, and taking $\delta$ to zero, one obtains:
$\gamma_{k+1}(C_{T}^{k+1}(X))=1.$
We now prove (15). For $d\in\mathbb{N}$, let $H_{d}^{(i)}$ be functions of the
form $\prod_{v\in\\{0,1\\}^{k+1}}h^{(i)}_{v}$, $i\in I_{d}$ for some finite
set $I_{d}$, such that $|R(z)-\sum_{i\in I_{d}}H_{d}^{(i)}(z)|<\frac{1}{2d}$
for all $z\in\operatorname{C}_{\operatorname{T}}^{k+1}(X)$. Denote by $C(R)=\int R(c)d\gamma_{k+1}(c)$ the (LHS) of (15). Denote by $D(R)(z)$ the (RHS) of Equation (15). By Equation (14), $C(H_{d}^{(i)})=D(H_{d}^{(i)})(z)$ for $\mu^{[k]}$-a.e. $z\in\operatorname{C}_{\operatorname{T}}^{k}(X)$. Note that $|C(R)-\sum_{i\in I_{d}}C(H_{d}^{(i)})|<\frac{1}{2d}$ and $|D(R)(y)-\sum_{i\in I_{d}}D(H_{d}^{(i)})(y)|<\frac{1}{2d}$ for all $y\in\operatorname{C}_{\operatorname{T}}^{k}(X)$. Thus for any $d$,
$E_{d}:=\\{y\in\operatorname{C}_{\operatorname{T}}^{k}(X):|C(R)-D(R)(y)|<\frac{1}{d}\\}$
has full $\mu^{[k]}$ measure. Let $E=\bigcap_{d\in\mathbb{N}}E_{d}$, then
$\mu^{[k]}(E)=1$ and for any $y\in E$, Equation (15) holds. ∎
The following remark may be of interest:
###### Remark 4.4.
In [GHSY20, Section 6] an example is given showing there exists a strictly
ergodic distal system which is not CF-Nil($1$).
## 5\. A topological Wiener-Wintner theorem.
In this section, we prove Theorem A.
###### Definition 5.1.
Let $(X,T)$ be a t.d.s. and $\mu\in\operatorname{P_{T}}(X)$. A point $x\in X$
is generic (for $\mu$) if for all $f\in C(X)$
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}x)=\int fd\mu.$
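For instance, in the irrational rotation $(\mathbb{T},m,R_{\alpha})$ recorded in Subsection 2.1, every point is generic for $m$: by Weyl's equidistribution theorem, for every $x\in\mathbb{T}$ and every $f\in C(\mathbb{T})$,
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(x+n\alpha)=\int_{\mathbb{T}}fdm.$
This is the prototype of the equivalence between unique ergodicity and everywhere-genericity recorded in Theorem 5.3 below.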
###### Lemma 5.2.
Let $(X,T)$ be a t.d.s. and $x_{0}\in X$. Assume that for all $f\in C(X)$ there exists a constant $c_{f}\in\mathbb{R}$, depending on $f$, such that:
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}x_{0})=c_{f}.$
Then $x_{0}$ is generic for some $\mu\in\operatorname{P_{T}}(X)$.
###### Proof.
Define the functional $\phi:C(X)\rightarrow\mathbb{R}$ by $\phi(f)=c_{f}$. It is easy to see that $\phi$ is a bounded positive linear functional of norm $1$. By the Riesz representation theorem, $c_{f}=\int fd\mu$ for some Borel probability measure $\mu$ on $X$ ([Rud06, Theorem 2.14]). As $c_{f}=c_{f\circ T}$ for all $f\in C(X)$, it follows that $\mu\in\operatorname{P_{T}}(X)$. Thus $x_{0}$ is generic by Definition 5.1. ∎
###### Theorem 5.3.
([Gla03, Theorem 4.10]) Let $(X,T)$ be a minimal t.d.s., then $(X,T)$ is
uniquely ergodic iff every $x\in X$ is generic for some
$\mu\in\operatorname{P_{T}}(X)$ (depending on $x$).
###### Lemma 5.4.
Let $(X,T)$ be a t.d.s. and $\mu\in\operatorname{P_{T}}(X)$. If a point $x\in
X$ is generic for $\mu$, then $\mu$ is supported on
$\operatorname{\overline{\mathcal{O}}}(x)$.
###### Proof.
Let $f\in C(X)$ be a non-negative function supported outside $\operatorname{\overline{\mathcal{O}}}(x)$. Then $\int fd\mu=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}f(T^{n}x)=0$. ∎
###### Proof of Theorem A.
$(I)\Rightarrow(II)$. It follows from [HK09, Theorem 2.19 and Proposition
7.1].
We will show $(II)\Rightarrow(I)$ inductively. For $k=0$ note that Condition
$(II)$ with the constant nilsequence $a(n)\equiv 1$ implies that for a fixed
arbitrary $x\in X$ and every $f\in C(X)$,
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}a(n)f(T^{n}x)=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}f(T^{n}x)$
exists. From Lemma 5.2, $x\in X$ is generic for some $\mu_{x}\in P_{T}(X)$. By
Theorem 5.3, $(X,T)$ is uniquely ergodic. By assumption $(X,T)$ is minimal and thus $(X,T)$ is a CF-Nil$(0)$ system (Remark 2.30).
Assume that $(II)\Rightarrow(I)$ holds for $k-1$. We will now show $\neg(I)\Rightarrow\neg(II)$ for $k$. Thus we assume that $(X,T)$ is not CF-Nil($k$). If $(X,T)$ is not CF-Nil($k-1$) then the result follows from the inductive assumption. Thus we may assume $(X,T)$ is CF-Nil($k-1$) and in particular uniquely ergodic. Denote the unique invariant probability measure of $(X,T)$ by $\mu$. By definition one has that
$(Z_{k-1}(X),\mathcal{Z}_{k-1}(X),\mu_{k-1},T)$ is isomorphic as an m.p.s. to
$(W_{k-1}(X),\omega_{k-1},T)$, where $\omega_{k-1}$ is the unique ergodic
measure of $(W_{k-1}(X),T)$.
An important result of the Host-Kra structure theory is that $\pi:Z_{k}(X)\rightarrow Z_{k-1}(X)$, determined by $\pi_{k-1}=\pi\circ\pi_{k}$ (as defined in Definition 2.14), is a measurable group extension w.r.t. some compact abelian group $A$ (see [HK05, Section 6.2], [HK18, Chapter 9, Section 2.3]). By [GL19, Theorem 1.1, proof of Theorem 5.3], we can
find a topological model
$\hat{\pi}:(\hat{Z}_{k},T)\rightarrow(\hat{Z}_{k-1},T)$ of $\pi$ which is an
abelian topological group extension w.r.t. the abelian group $A$ such that
$(\hat{Z}_{k},T)$ is a minimal $k$-step pronilsystem and $(\hat{Z}_{k-1},T)$
is a minimal $(k-1)$-step pronilsystem. Denote by $\phi$ and $\psi$ the
measurable isomorphisms between $Z_{k}(X)$ and $\hat{Z}_{k}(X)$ and
$Z_{k-1}(X)$ and $\hat{Z}_{k-1}(X)$ respectively.
$\begin{CD}Z_{k}(X)@>{\phi}>{}>\hat{Z}_{k}(X)\\\
@V{\pi}V{}V@V{}V{\hat{\pi}}V\\\
Z_{k-1}(X)@>{\psi}>{}>\hat{Z}_{k-1}(X)\end{CD}$
For clarity denote $\pi_{Z_{k}}:=\pi_{k}$ from the previous paragraph.
Define $\pi_{\hat{Z}_{k}}=\phi\circ\pi_{Z_{k}}$. Let $p_{k-1}:X\rightarrow
W_{k-1}(X)$ be the topological canonical $(k-1)$-th projection. Let
$\pi_{\hat{Z}_{k-1}}=\hat{\pi}\circ\pi_{\hat{Z}_{k}}$. By Corollary 3.4(2),
$\hat{\pi}\circ\pi_{\hat{Z}_{k}}$ inherits the maximality property of
$\pi_{k-1}=\pi\circ\pi_{Z_{k}}$. By Corollary 3.4(1), there exists a
measurable factor map $p:\hat{Z}_{k-1}(X)\rightarrow W_{k-1}(X)$ such that
$p_{k-1}=p\circ\hat{\pi}\circ\pi_{\hat{Z}_{k}(X)}$ a.s. As $\hat{Z}_{k-1}(X)$
is isomorphic to both $Z_{k-1}(X)$ and $W_{k-1}(X)$ as an m.p.s. (here we use
that $(X,T)$ is CF-Nil($k-1$)), by Theorem 3.3, $p$ may be chosen to be a
topological isomorphism. W.l.o.g. we will assume $p=\operatorname{Id}$. Thus
we have:
(16) $p_{k-1}(x)=\hat{\pi}\circ\pi_{\hat{Z}_{k}(X)}(x)$ for $\mu$-a.e. $x\in
X$.
---
$\begin{CD}X@>{\operatorname{Id}}>>X@>{\operatorname{Id}}>>X\\\
@V{\pi_{Z_{k}}}VV@V{\pi_{\hat{Z}_{k}}}VV@VV{p_{k-1}}V\\\
Z_{k}(X)@>{\phi}>>\hat{Z}_{k}(X)@.\\\
@V{\pi}VV@V{\hat{\pi}}VV@.\\\
Z_{k-1}(X)@>{\psi}>>\hat{Z}_{k-1}(X)@>{\operatorname{Id}}>>W_{k-1}(X)\end{CD}$
We claim that there exists a minimal subsystem $(Y,T\times
T)\subset(X\times\hat{Z}_{k},T\times T)$ such that $(Y,T\times T)$ is not
uniquely ergodic. Assuming this, as by Theorem 5.3 a minimal system is
uniquely ergodic if and only if every point is generic, there exists
$(x_{3},u_{3})\in Y$ such that $(x_{3},u_{3})$ is not a generic point for any
measure. By Lemma 5.2, there exist continuous functions $h\in C(\hat{Z}_{k})$,
$f\in C(X)$ such that
(17)
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}h(T^{n}u_{3})f(T^{n}x_{3})$
does not exist. As $(\hat{Z}_{k},T)$ is a $k$-step pronilsystem,
$h(T^{n}u_{3})$ is a $k$-step nilsequence (Definition 2.7). Thus $(II)$ does
not hold as required.
Our strategy in proving the claim is finding a minimal subsystem $(Y,T\times
T)$ of $(X\times\hat{Z}_{k},T\times T)$ which supports an invariant measure
$\nu$, w.r.t which $(Y,T\times T)$ is isomorphic to $(X,\mu,T)$ as an m.p.s.
We then assume for a contradiction that $(Y,T\times T)$ is uniquely ergodic.
Next we notice that the strictly ergodic system $(Y,T\times T)$, being
measurably isomorphic to $(X,\mu,T)$, has $Z_{k}(Y)\simeq Z_{k}(X)$. Moreover
as $(Y,T\times T)$ is a minimal subsystem of a product of the two minimal
systems, $(X,T)$ and $(\hat{Z}_{k},T)$, it maps onto each of them through the
first, respectively second coordinate projections. From the projection on
$(\hat{Z}_{k},T)$, we conclude that $(Y,T)$ has a topological $k$-step
pronilfactor $\hat{Z}_{k}$ which is measurably isomorphic to $Z_{k}(Y)$. By
Proposition 2.31, one has that $(Y,T\times T)$ is CF-Nil($k$). From the projection on
$(X,T)$, we conclude by Proposition 2.32 that $(X,T)$ is CF-Nil($k$). This
constitutes a contradiction, implying that $(Y,T\times T)$ is not uniquely ergodic,
as desired.
A natural copy of $(X,\mu,T)$ inside $(X\times\hat{Z}_{k},T\times T)$ is given
by the graph joining of $\pi_{\hat{Z}_{k}(X)}$, defined by the measure
$\mu^{(k)}=(\operatorname{Id}\times\pi_{\hat{Z}_{k}(X)})_{*}\mu:=\int\delta_{x}\times\delta_{\pi_{\hat{Z}_{k}(X)}(x)}d\mu(x)$
on $(X\times\hat{Z}_{k},T)$ (see [Gla03, Chapter 6, Example 6.3]). Clearly
(18)
$\operatorname{Id}\times\pi_{\hat{Z}_{k}(X)}:(X,\mathcal{X},\mu,T)\rightarrow(X\times\hat{Z}_{k},\mathcal{X}\times\hat{\mathcal{Z}}_{k},\mu^{(k)},T\times
T),\,x\mapsto(x,\pi_{\hat{Z}_{k}(X)}(x)).$
is a measurable isomorphism and in particular $\mu^{(k)}$ is an ergodic
measure of $(X\times\hat{Z}_{k},T\times T)$. However
$(X\times\hat{Z}_{k},\mathcal{X}\times\hat{\mathcal{Z}}_{k},\mu^{(k)},T\times
T)$ is an m.p.s. and not a (minimal) t.d.s. We consider the orbit closure of a
$\mu^{(k)}$-generic point $(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1}))$, to be
determined later. By Lemma 5.4, $\mu^{(k)}$ is supported on
$\operatorname{\overline{\mathcal{O}}}(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1}))$.
However
$(\operatorname{\overline{\mathcal{O}}}(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1})),T\times
T)$ is not necessarily minimal. We thus pass to an (arbitrary) minimal
subsystem $(Y,T\times
T)\subset(\operatorname{\overline{\mathcal{O}}}(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1})),T\times
T)$. However, $\mu^{(k)}$ is not necessarily supported on $Y$. As explained in
the previous paragraph, our final aim will be to find (a possibly different)
invariant measure $\nu\in\operatorname{P_{T\times T}}(Y)$ which is isomorphic
to $\mu$.
As $\hat{\pi}$ is a topological group extension w.r.t. the abelian group $A$,
(19) $\operatorname{Id}\times\hat{\pi}:(X\times\hat{Z}_{k},T\times
T)\rightarrow(X\times W_{k-1}(X),T\times T):(x,z)\mapsto(x,\hat{\pi}(z))$
is also a topological group extension w.r.t. the abelian group $A$. Thus $A$
acts on the fibers of $\operatorname{Id}\times\hat{\pi}$ transitively and
continuously by homeomorphisms. Moreover for all $a\in A$,
$(\operatorname{Id}\times a)_{*}\mu^{(k)}$ is an invariant measure on
$(X\times\hat{Z}_{k},T\times T)$ isomorphic to $\mu^{(k)}$ and thus isomorphic
to $\mu$. We will find $\nu\in\operatorname{P_{T\times T}}(Y)$ of the form
$\nu=(\operatorname{Id}\times a)_{*}\mu^{(k)}$. Indeed for $\mu$-a.e. $x\in
X$, $(x,\pi_{\hat{Z}_{k}(X)}(x))$ is a generic point of $\mu^{(k)}$. Using
(16), one may choose $x_{1}\in X$ such that
* •
$(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1}))$ is a generic point of $\mu^{(k)}$;
* •
$\hat{\pi}(\pi_{\hat{Z}_{k}(X)}(x_{1}))=p_{k-1}(x_{1})$.
From the second point it follows that:
$\operatorname{Id}\times\hat{\pi}:(\operatorname{\overline{\mathcal{O}}}(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1})),T\times
T)\rightarrow(\operatorname{\overline{\mathcal{O}}}(x_{1},p_{k-1}(x_{1})),T\times
T)$
is a topological factor map. As $p_{k-1}$ is a topological factor map,
(20) $\operatorname{Id}\times
p_{k-1}:(X,T)\rightarrow(\operatorname{\overline{\mathcal{O}}}(x_{1},p_{k-1}(x_{1})),T\times
T),\,x\mapsto(x,p_{k-1}(x))$
is a topological isomorphism. Therefore
$(\operatorname{\overline{\mathcal{O}}}(x_{1},p_{k-1}(x_{1})),T\times T)$ is
minimal. Thus
$(\operatorname{Id}\times\hat{\pi})_{|Y}:(Y,T\times T)\rightarrow(\operatorname{\overline{\mathcal{O}}}(x_{1},p_{k-1}(x_{1})),T\times T)$
is onto. In particular there exists $z_{1}\in\hat{Z}_{k}(X)$ such that
$(x_{1},z_{1})\in Y$ and $\hat{\pi}(z_{1})=p_{k-1}(x_{1})$. As by assumption
$\hat{\pi}(\pi_{\hat{Z}_{k}(X)}(x_{1}))=p_{k-1}(x_{1})$, we can find $a\in A$
such that $a.\pi_{\hat{Z}_{k}(X)}(x_{1})=z_{1}$. As
$(x_{1},\pi_{\hat{Z}_{k}(X)}(x_{1}))$ is a generic point of $\mu^{(k)}$, it
follows that $(x_{1},a.\pi_{\hat{Z}_{k}(X)}(x_{1}))=(x_{1},z_{1})$ is a generic point
of $\nu:=(\operatorname{Id}\times a)_{*}\mu^{(k)}$. Therefore by Lemma 5.4,
the invariant measure $\nu\simeq\mu$ is supported on the minimal subsystem
$\operatorname{\overline{\mathcal{O}}}{(x_{1},z_{1})}=Y$. This ends the proof.
∎
## References
* [AKL14] Omer Angel, Alexander S. Kechris, and Russell Lyons. Random orderings and unique ergodicity of automorphism groups. Journal of the European Mathematical Society, 16(10):2059–2095, 2014.
* [Ass92] I. Assani. Uniform Wiener-Wintner theorems for weakly mixing dynamical systems. Unpublished preprint, 1992.
* [Aus63] Joseph Auslander. Endomorphisms of minimal sets. Duke Mathematical Journal, 30(4):605–614, 1963.
* [BDM10] Xavier Bressaud, Fabien Durand, and Alejandro Maass. On the eigenvalues of finite rank Bratteli-Vershik dynamical systems. Ergodic Theory Dynam. Systems, 30:639–664, 2010.
* [Bou90] J. Bourgain. Double recurrence and almost sure convergence. J. Reine Angew. Math., 404:140–161, 1990.
* [BŚ17] Wojciech Bartoszek and Adam Śpiewak. A note on a Wiener-Wintner theorem for mean ergodic Markov amenable semigroups. Proceedings of the American Mathematical Society, 145(7):2997–3003, 2017.
* [DFM19] Fabien Durand, Alexander Frank, and Alejandro Maass. Eigenvalues of minimal cantor systems. J. Eur. Math. Soc, 21:727–775, 2019.
* [Don14] Sebastián Donoso. Enveloping semigroups of systems of order $d$. Discrete Contin. Dyn. Syst., 34(7):2729–2740, 2014.
* [dV93] J. de Vries. Elements of topological dynamics, volume 257 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1993.
* [Ell58] Robert Ellis. Distal transformation groups. Pacific J. Math., 8:401–405, 1958.
* [EZK13] Tanja Eisner and Pavel Zorin-Kranich. Uniformity in the Wiener-Wintner theorem for nilsequences. Discrete Contin. Dyn. Syst., 33(8):497–516, 2013.
* [Fan18] Ai-Hua Fan. Topological Wiener-Wintner ergodic theorem with polynomial weights. Chaos, Solitons and Fractals, 117:105–116, 2018.
* [Fra06] Nikos Frantzikinakis. Uniformity in the polynomial Wiener-Wintner theorem. Ergodic Theory Dynam. Systems, 26(4):1061–1071, 2006.
* [Fur77] Harry Furstenberg. Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions. J. Analyse Math., 31:204–256, 1977.
* [GGY18] Eli Glasner, Yonatan Gutman, and XiangDong Ye. Higher order regionally proximal equivalence relations for general minimal group actions. Advances in Mathematics, 333(6), 2018.
* [GHSY20] Eli Glasner, Wen Huang, Song Shao, and Xiangdong Ye. Regionally proximal relation of order $d$ along arithmetic progressions and nilsystems. Sci. China Math., 63(9):1757–1776, 2020.
* [GL13] Yonatan Gutman and Hanfeng Li. A new short proof for the uniqueness of the universal minimal space. Proceedings of the American Mathematical Society, 141(1):265–267, 2013.
* [GL19] Yonatan Gutman and Zhengxing Lian. Strictly ergodic distal models and a new approach to the Host-Kra factors. Preprint. https://arxiv.org/abs/1909.11349, 2019.
* [Gla03] Eli Glasner. Ergodic theory via joinings, volume 101 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2003.
* [GMV20] Yonatan Gutman, Freddie Manners, and Péter P. Varjú. The structure theory of nilspaces III: Inverse limit representations and topological dynamics. Advances in Mathematics, 365(13), 2020.
* [HK05] Bernard Host and Bryna Kra. Nonconventional ergodic averages and nilmanifolds. Ann. of Math. (2), 161(1):397–488, 2005.
* [HK09] Bernard Host and Bryna Kra. Uniformity seminorms on $\ell^{\infty}$ and applications. J. Anal. Math., 108:219–276, 2009.
* [HK18] Bernard Host and Bryna Kra. Nilpotent Structures in Ergodic Theory. American Mathematical Society, 2018.
* [HKM10] Bernard Host, Bryna Kra, and Alejandro Maass. Nilsequences and a structure theorem for topological dynamical systems. Adv. Math., 224(1):103–129, 2010.
* [HKM14] Bernard Host, Bryna Kra, and Alejandro Maass. Complexity of nilsystems and systems lacking nilfactors. Journal d’Analyse Mathématique, 124(1):261–295, 2014.
* [Hos86] Bernard Host. Valeurs propres des systèmes dynamiques définis par des substitutions de longueur variable. Erg. Theory Dyn. Systems, 6:529–540, 1986.
* [HP68] Frank Hahn and William Parry. Some characteristic properties of dynamical systems with quasi-discrete spectra. Mathematical systems theory, 2(2):179–190, 1968.
* [HSY17] Wen Huang, Song Shao, and Xiangdong Ye. Strictly ergodic models under face and parallelepiped group actions. Commun. Math. Stat, 5:93–122, 2017.
* [HSY19] W. Huang, S. Shao, and X. Ye. Pointwise convergence of multiple ergodic averages and strictly ergodic models. Journal d'Analyse Mathématique, 139(2), 2019.
* [Leh87] Ehud Lehrer. Topological mixing and uniquely ergodic systems. Israel Journal of Mathematics, 57(2):239–255, 1987.
* [Les96] Emmanuel Lesigne. Un théorème de disjonction de systèmes dynamiques et une généralisation du théorème ergodique de Wiener-Wintner. Ergodic Theory Dynam. Systems, 10:513–521, 1996.
* [Lin01] Elon Lindenstrauss. Pointwise theorems for amenable groups. Invent. Math., 146(2):259–295, 2001.
* [LLT92] Mariusz Lemańczyk, Pierre Liardet, and Jean-Paul Thouvenot. Coalescence of circle extensions of measure-preserving transformations. Ergodic Theory and Dynamical Systems, 12(4):769–789, 1992.
* [Que10] Martine Queffélec. Substitution dynamical systems - spectral analysis. Lecture Notes in Mathematics, Springer, 2010.
* [Rob94] E. Arthur Robinson. On uniform convergence in the Wiener-Wintner theorem. Journal of the London Mathematical Society, 49, 1994.
* [Rud06] Walter Rudin. Real and complex analysis. Tata McGraw-hill education, 2006.
* [Sch14] Marco Schreiber. Topological Wiener–Wintner theorems for amenable operator semigroups. Ergodic Theory and Dynamical Systems, 34(5):1674–1698, 2014.
* [SY12] Song Shao and Xiangdong Ye. Regionally proximal relation of order $d$ is an equivalence one for minimal systems and a combinatorial consequence. Adv. Math., 231(3-4):1786–1817, 2012.
* [Wei85] Benjamin Weiss. Strictly ergodic models for dynamical systems. Bull. Amer. Math. Soc. (N.S.), 13(2):143–146, 1985.
* [WW41] Norbert Wiener and Aurel Wintner. Harmonic analysis and ergodic theory. Amer J Math, 63:415–426, 1941.
* [Zie07] Tamar Ziegler. Universal characteristic factors and Furstenberg averages. J. Amer. Math. Soc., 20(1):53–97 (electronic), 2007.
Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8,
00-656 Warszawa, Poland.
Yonatan Gutman<EMAIL_ADDRESS>
School of Mathematical Sciences, Xiamen University, Xiamen, Fujian 361005,
P.R. China;
Zhengxing Lian<EMAIL_ADDRESS><EMAIL_ADDRESS>
|
CytoITMprobe: a network information flow plugin for Cytoscape
Aleksandar Stojmirović, Alexander Bliskovsky and Yi-Kuo Yu (to whom
correspondence should be addressed)
National Center for Biotechnology Information
National Library of Medicine
National Institutes of Health
Bethesda, MD 20894
United States
#### Background:
Cytoscape is a well-developed flexible platform for visualization, integration
and analysis of network data. Apart from the sophisticated graph layout and
visualization routines, it hosts numerous user-developed plugins that
significantly extend its core functionality. Earlier, we developed a network
information flow framework and implemented it as a web application, called ITM
Probe. Given a context consisting of one or more user-selected nodes, ITM
Probe retrieves other network nodes most related to that context. It requires
neither user restriction to subnetwork of interest nor additional and possibly
noisy information. However, plugins for Cytoscape with these features do not
yet exist. To provide the Cytoscape users the possibility of integrating ITM
Probe into their workflows, we developed CytoITMprobe, a new Cytoscape plugin.
#### Findings:
CytoITMprobe maintains all the desirable features of ITM Probe and adds
additional flexibility not achievable through its web service version. It
provides access to ITM Probe either through a web server or locally. The
input, consisting of a Cytoscape network, together with the desired origins
and/or destinations of information and a dissipation coefficient, is specified
through a query form. The results are shown as a subnetwork of significant
nodes and several summary tables. Users can control the composition and
appearance of the subnetwork and interchange their ITM Probe results with
other software tools through tab-delimited files.
#### Conclusions:
The main strength of CytoITMprobe is its flexibility. It allows the user to
specify as input any Cytoscape network, rather than being restricted to the
pre-compiled protein-protein interaction networks available through the ITM
Probe web service. Users may supply their own edge weights and
directionalities. Consequently, as opposed to ITM Probe web service,
CytoITMprobe can be applied to many other domains of network-based research
beyond protein-networks. It also enables seamless integration of ITM Probe
results with other Cytoscape plugins having complementary functionality for
data analysis.
## Background
Cytoscape [1, 2, 3] is a popular and flexible platform for visualization,
integration and analysis of network data. Apart from the sophisticated graph
layout and visualization routines, its main strength is in providing an API
that allows developers other than its core authors to produce extension
plugins. Over the last decade, a large number of plugins have been released,
supporting features such as import and export of data, network analysis,
scripting and functional enrichment analysis. In this paper, we describe
CytoITMprobe, a plugin that brings to Cytoscape new functionality founded on
information flow.
Numerous approaches for analyzing biological networks based on information
flow [4, 5, 6, 7, 8, 9, 10] have emerged in recent years. The main assumption
of all such methods is information transitivity: information can flow through
or can be exchanged via paths of biological interactions. Our contribution to
this area [11, 12] is a context-specific framework based on discrete-time
random walks (or equivalently, diffusion) over weighted directed graphs. In
contrast to most other approaches, our framework explicitly accommodates
directed networks as well as the information loss and leakage that generally
occurs in all networks. Apart from the network itself and a user-specified
context, it requires neither a prior restriction to the sub-network of interest
nor additional, possibly noisy, information. We implemented our framework as an
application called ITM Probe [13] and made it available as a web service [14],
where users can query protein-protein interaction (PPI) networks from several
model organisms and visualize the results.
In addition to implementing network flow algorithms, the ITM Probe web service
possesses a number of useful features. Using the Graphviz [15] suite for
layout and visualization of graphs, it displays in the user’s web browser the
images of subnetworks consisting of nodes identified as significant by the
information flow models and offers a choice of multiple coloring schemes. The
entire query results can be retrieved in the CSV format or forwarded to a
functional enrichment analysis tool to facilitate their interpretation.
However, lacking a mechanism to decouple the algorithmic part from the
interaction graph, the ITM Probe web service restricts users to querying only
the few compiled PPI networks available on the website. Using a canned suite
for graph layout, ITM Probe limits the users’ ability to manipulate network
images. For example, the only way to change the layout of significant
subnetworks is to choose a different seed and re-compute the layout. Most
importantly, not having an adequate interface to a well-designed platform such
as Cytoscape, it is difficult to use the results of the ITM Probe service
within the workflows involving additional data and algorithms from other
sources. We thus developed CytoITMprobe to meet these challenges by (1)
providing an explicit decoupling between the algorithmic part and the
interaction graph, (2) utilizing the core graph manipulation functionality of
Cytoscape for a broader visualization choices, and (3) adding an appropriate
input/output interface for seamless integration with other resources available
in Cytoscape.
Figure 1: ITM Probe is based on discrete-time random walks with boundary nodes
and damping. As an example, consider the weighted directed network shown,
containing 19 nodes and 44 links. Single-directional links are assigned weight
2 and are indicated using arrows while bi-directional edges are assigned
weight 1 and are shown as lines. The first five graphs show the time progress
of a random walk in the presence of damping and two absorbing boundary nodes
(indicated by octagons). At $t=0$, 1000 random walkers start at a single point
in the network. At $t=1$, they have progressed one step from their origin to
the nodes adjacent to it, being distributed randomly in proportion to the
weights of the edges leading from the origin. Only 900 walkers remain in the
network at $t=1$ due to damping: the damping factor $\mu=0.9$ (dissipation
$0.1$) means that $10\%$ of walkers are dissipated at each step. At $t=60$,
most of the walks have terminated, either by dissipation, or by reaching one
of the two boundary nodes. The number of walkers terminating at each boundary
node depends on their starting location. The final graph shows the probability
$F_{ik}$ for a random walk starting at any transient node in the network
(indicated by circular shape) to terminate at the boundary node on the right-
hand side (scaled by 1000). Note that the value indicated in the final graph
for the starting node at $t=0$ (190) is the same as the final number of walks
shown at $t=60$ as terminating at the right boundary node.
## Information Flow Framework
ITM Probe extracts _context-specific_ information from networks. We elaborated
on the information flow framework underlying ITM Probe in our previous
publications [11, 12] and here we provide a non-technical explanation. Given a
context consisting of one or more user-selected network nodes, the aim is to
retrieve a set of other network nodes most related to that context. We model
networks as weighted directed graphs, where nodes are linked by directional
edges and each edge is assigned a positive weight. One can consider a random
walker that wanders among network nodes in discrete steps. The rule of the
walk is that the walker starts at a certain node and in each step moves
randomly to some adjacent node with probability proportional to the weight of
the edge linking these nodes (Fig. 1). If the graph is connected, that is, if
there is a directed path linking any two nodes, such a walk never terminates
and the walker will eventually visit every node in the graph.
Our main idea is to set termination or _boundary_ nodes for the walkers while
using random walks to explore the neighborhoods of the context nodes. Provided
there is a directed path linking any node to a boundary node, every random
walk here will eventually terminate. Furthermore, the nodes visited by a
walker before termination will vary depending on the origin of the walk. Since
a random walk is a stochastic process, and each walk is different, we are
interested in the cumulative behavior of infinitely many walkers following the
same rules. On average, we expect that the nodes more relevant to the context
will be more visited than those that are less relevant. Thus, the main
quantity of interest is the average number of visits to a node given the
selected origins and destinations of the walk.
A problem with the above approach is that random walkers may spend too much
time in the graph if the origins and destinations of the walk are far apart.
This could mean that the entire graph is visited so that the most significant
nodes are just those with the largest degree. To ensure that the significant
nodes are relatively close to the context nodes, our framework contains an
additional ingredient, _damping_ : at each step of a walk, we assign a certain
probability for the walker to dissipate, that is, to leave the network. We
still evaluate the average number of visits to each node, but now only count
the visits prior to the walker leaving the network. Evidently, the nodes that
are close to the walker’s origin will be significantly visited. In addition to
forcing locality, damping is also natural in physical or biological contexts.
If we treat random walkers as information propagating through the network, it
is natural to assume that some information is lost during transmission. For
protein-protein interaction networks, where nodes are proteins and links are
physical bindings between proteins, damping could be associated with protein
degradation by proteases, which would diminish the strength of information
propagation.
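To make the random-walk picture concrete, the following minimal Monte Carlo sketch (our illustration, not part of ITM Probe; the toy graph, weights, damping factor and node names are all assumptions) simulates damped walks on a small weighted directed graph with absorbing boundary nodes, tallying visits to transient nodes and terminations at each sink:

```python
import random

def simulate_walks(out_edges, start, sinks, damping=0.9, n_walkers=1000, seed=0):
    """Tally node visits and sink absorptions for damped random walks."""
    rng = random.Random(seed)
    visits = {v: 0 for v in out_edges}        # visits to transient nodes
    absorbed = {k: 0 for k in sinks}          # walks terminating at each sink
    for _ in range(n_walkers):
        node = start
        while True:
            if node in sinks:                 # walk ends at the boundary
                absorbed[node] += 1
                break
            visits[node] += 1
            if rng.random() > damping:        # walker dissipates at this step
                break
            nbrs, ws = zip(*out_edges[node].items())
            node = rng.choices(nbrs, weights=ws)[0]
    return visits, absorbed

# Toy network: two transient nodes feeding two sinks (weights are made up).
graph = {"a": {"b": 1, "s1": 1}, "b": {"a": 1, "s2": 2}, "s1": {}, "s2": {}}
print(simulate_walks(graph, start="a", sinks={"s1", "s2"}))
```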
The ITM Probe framework contains three models: _emitting_, _absorbing_ and
_channel_. In the absorbing model (Fig. 1), the context nodes are interpreted
as destinations or _sinks_ of random walks, while every non-boundary or
_transient_ node is considered as a potential origin. For each transient node
$i$ and each sink $k$, the model computes $F_{ik}$, the average number of
visits to the terminating node $k$ by random walks originating at the node
$i$. Since a walk can either terminate at one sink or the other, $F_{ik}$ can
also be interpreted as the probability that a random walk from $i$ reaches
$k$. In the absence of damping, the sum of $F_{ik}$ over all sinks will be
exactly $1$ for any transient node $i$. However, in the presence of damping,
the sum of $F_{ik}$ over all sinks may be much less than $1$ (Fig. 1). The
emitting model (Fig. 2), offers a dual point of view. Here, the context nodes
are interpreted as origins or _sources_ of random walks. The walks terminate
by dissipating or by returning to the sources – the sources form an emitting
boundary. Since the origins of the walks are fixed, the quantity of interest
is the visits to the transient nodes. Specifically, for each source $s$ and
each transient node $i$, the emitting model returns $H_{si}$, the average
number of visits to $i$ by walkers originating at $s$.
Figure 2: The emitting model counts visits from sources. Using the example
network from Fig. 1 with the same damping factor, consider the case where 1000
random walkers start at the source node indicated by a hexagon. At each time
step, some random walkers leave the network due to damping or by moving back
to the source. In the first five graphs, the number in each node documents the
total number of visits to that node from all random walkers, dissipated or
not, up to the indicated time. The value of $H_{si}$ returned by the ITM Probe
emitting mode ($s$ here denotes the source node) yields the expected number of
visits to node $i$ per random walker that starts at $s$ over infinitely many
time steps. The final graph shows the values of $H_{si}$ for this context,
scaled by 1000. Note that the magnitude shown for one transient node is
greater than 1000 because a walker may visit the same node multiple times.
The values of $F_{ik}$ and $H_{si}$ can be efficiently computed by solving
(sparse) systems of linear equations. Let $W_{ij}$ denote the weight of the
directed link $i\to j$ and let $0<\mu<1$ denote the damping factor. For all
pairs of nodes $i,j$, construct the random walk evolution operator
$\mathbf{P}$, where $P_{ij}=\frac{\mu W_{ij}}{\sum_{j^{\prime}}W_{ij^{\prime}}}$. The
operator $\mathbf{P}$ includes damping and hence $\sum_{j}P_{ij}<1$. Let
$\mathbf{P}_{TT}$ denote the sub-operator of $\mathbf{P}$ with domain and
range restricted only to transient nodes and let
$\mathbf{G}=(\mathbb{I}-\mathbf{P}_{TT})^{-1}$, where $\mathbb{I}$ stands for
the identity matrix. Then, it can be shown [11], that
$\displaystyle F_{ik}$ $\displaystyle=\sum_{j}G_{ij}P_{jk},\qquad\text{and}$
$\displaystyle H_{si}$ $\displaystyle=\sum_{j}P_{sj}G_{ji}.$
More details, including the cases where $\mu=0$, $\mu=1$ or non-uniform
damping are covered in [11, 12].
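As a concrete illustration of these formulas, here is a minimal sketch (our own, not code from qmbpmn-tools) that builds $\mathbf{P}$ from a weight matrix and evaluates $F$ and $H$ with SciPy's sparse LU solver, applying $\mathbf{G}$ via linear solves rather than forming the inverse explicitly:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def itm_solve(W, transient, boundary, mu):
    """F[i, k]: probability a walk from transient node i terminates at sink k.
    H[s, i]: expected visits to transient node i by walks starting at s.
    W is an (n x n) nonnegative weight matrix; index lists select the nodes."""
    W = sp.csr_matrix(W, dtype=float)
    out = np.asarray(W.sum(axis=1)).ravel()
    P = sp.diags(mu / np.where(out > 0, out, 1.0)) @ W   # damped evolution operator
    P_TT = P[transient, :][:, transient]
    P_TB = P[transient, :][:, boundary].toarray()
    P_BT = P[boundary, :][:, transient].toarray()
    A = (sp.eye(len(transient)) - P_TT).tocsc()
    lu = spla.splu(A)                    # G = A^{-1}, applied via sparse LU
    F = lu.solve(P_TB)                   # F = G @ P_TB
    H = lu.solve(P_BT.T, trans="T").T    # H = P_BT @ G, via the transposed system
    return F, H
```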
Figure 3: The channel model highlights the directed flow from origins to
destinations. Consider once again the example network from Figs. 1 and 2, now
with a single source (hexagon) and two sinks (octagons). In common with the
case from Fig. 2, the walkers start at the source, but in this case can
terminate only by reaching the sinks. The damping factor is implicit: it
determines how far the walkers are allowed to deviate from the shortest path
towards one of the sinks. In the first five graphs, the number in each
transient node documents the total number of visits to that node from all
random walkers up to the indicated time. However, the value in each sink node
represents the likelihood to reach that sink from the source at the indicated
time. The value of $\hat{\varPhi}_{i,K}^{s}$ returned by the ITM Probe
normalized channel mode yields the expected number of visits to node $i$ per
random walker that starts at $s$ over infinitely many time steps. Note that
the sink nodes split the flow from the source depending on their location. In
this example, over infinitely many time steps, the node closer to the source
captures 970 walkers, while the further sink gets only the remaining 30.
The channel model combines the emitting and the absorbing model, with both
sources and sinks on the boundary. It illuminates the most likely paths from
sources to sinks. For each source node $s$, transient node $i$ and sink node
$k$, it computes $\varPhi_{i,k}^{s}=H_{si}F_{ik}$, the average number of
visits to $i$ by a random walker that originates at $s$ and terminates at $k$.
ITM Probe does not report $\varPhi_{i,k}^{s}$ directly, but instead shows a
simpler, _normalized_ quantity $\hat{\varPhi}_{i,K}^{s}$ (Fig. 3), which is
defined for each source $s$ and transient node $i$ by
$\hat{\varPhi}_{i,K}^{s}=\frac{\sum_{k}H_{si}F_{ik}}{\sum_{k^{\prime}}F_{sk^{\prime}}}.$
(1)
Here, the numerator $\sum_{k}H_{si}F_{ik}=\sum_{k}\varPhi_{i,k}^{s}$ gives the
average number of visits, in the presence of damping, to $i$ by a random
walker starting at $s$ and terminating at any sink. The denominator gives the
total probability of a walker starting at $s$ to terminate at any sink. Hence,
with the denominator off-setting the effect of damping, the value of
$\hat{\varPhi}_{i,K}^{s}$ counts the average number of visits to $i$ by
walkers that start at $s$ and terminate at any of the sinks as if no
dissipation is present. Generally, damping in the emitting or the absorbing
model determines how far the flow can reach away from its origins. In
contrast, the damping parameter for the normalized channel model plays a
different role (Fig. 4): it effectively determines the ‘width’ of the channel
from sources to sinks. When damping is very strong, only the nodes on the
shortest path from a source to its nearest sink will be visited.
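In code, once $H$ and $F$ have been computed for the channel boundary (sources together with sinks), Eq. (1) reduces to elementwise operations. The sketch below is a hypothetical helper; `F_src` is assumed to hold the termination probabilities $F_{sk^{\prime}}$ for the sources (obtainable, e.g., through one extra application of $\mathbf{P}$):

```python
import numpy as np

def normalized_channel(H, F, F_src):
    """Eq. (1): Phi_hat[s, i] = sum_k H[s, i] F[i, k] / sum_k' F_src[s, k'].
    H: (n_sources, n_transient); F: (n_transient, n_sinks);
    F_src: (n_sources, n_sinks), termination probabilities from each source."""
    num = H * F.sum(axis=1)[np.newaxis, :]   # sum_k H_si F_ik = H_si * sum_k F_ik
    den = F_src.sum(axis=1)[:, np.newaxis]   # sum_k' F_{sk'}
    return num / den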
Given the close relationship between random walks and diffusion, it is also
possible to interpret ITM Probe models through information diffusion (or
information flow). Within that paradigm, a fixed amount information is
constantly replenished at the source nodes while leaving the network at all
boundary nodes and through dissipation. At equilibrium, when the rate of flow
entering equals the rate of leaving, the amount of information occupying each
transient node is equivalent to the average number of visits to that node
(using the aforementioned non-replenishing random walk interpretation [11]).
We call the set of nodes most influenced by the flow an _Information
Transduction Module_ or ITM.
Figure 4: An example of the results of running different ITM Probe models.
Here we see the results of running the emitting (a,d,g), absorbing (b,e,h) and
channel (c,f,i) model of ITM Probe with the same sources and sinks but
different dissipation coefficients. The underlying undirected graph is derived
from a square lattice by removing random nodes and edges. Sources are shown as
hexagons, sinks as octagons, and transient nodes as squares. The top row
(a,b,c) shows the runs with damping factor $\mu=0.95$ (dissipation $0.05$),
the middle (d,e,f) with $\mu=0.75$ and the bottom with $\mu=0.25$. For the
emitting and channel model, each basic cyan, magenta or yellow color is
associated with a source. The coloring of each node arises by mixing the basic
color in proportion to the strength of information flow from their respective
sources. For the absorbing model, the nodes are shaded according to the total
probability of absorption at any sink on a logarithmic scale.
## Software architecture
CytoITMprobe architecture consists of two parts: the user interface front end
and computational back end. The user interface, written in Java [16] using
Cytoscape API, is accessed as a Cytoscape plugin. It consists of the query
form, the results viewer and the ITM subnetwork (Fig. 5). The back end is the
standalone ITM Probe program, written in Python, which can be installed
locally or accessed through a web service. In either configuration,
CytoITMprobe takes the user input through the graphical user interface,
validates it, and passes a query to the back end. Upon receiving from the back
end the entire query results, CytoITMprobe stores them as the node and network
attributes of the original network. Consequently, the query output can be
edited or manipulated within Cytoscape, as well as saved for later use.
Figure 5: CytoITMprobe interface. At startup from the Plugins menu,
CytoITMprobe embeds its query form into the Control Panel (left). After
performing a query or loading previously obtained search results, it creates
an ITM subnetwork showing significant nodes and a viewer embedded into Results
Panel (right). The viewer allows closer examination of the results and
manipulation of the contents and the look of the ITM subnetwork. The overall
visual styling of CytoITMprobe components closely resembles that of the ITM
Probe web version.
Standalone ITM Probe is a part of the qmbpmn-tools Python package, which also
contains the code supporting the ITM Probe and SaddleSum web services, as well
as the scripts for constructing the underlying datasets. The ITM Probe part
depends on Numpy and Scipy [17] packages for numerical computations. The
performance of ITM Probe critically depends on the routines for computing
direct solutions of large, sparse, nonsymmetric systems of linear equations.
Scipy supports two sparse direct solver libraries (both written in C): SuperLU
[18] as the default and UMFPACK [19] as an optional add-on through the SciKits
collection [20]. In our experience, UMFPACK runs faster than SuperLU, and Scipy
always uses it if available. However, for optimal performance, UMFPACK
requires well-tuned Basic Linear Algebra Subroutines (BLAS) libraries and may
not be easy to install. To support users who prefer not to install
UMFPACK or Scipy, CytoITMprobe performs remote queries by default.
## Input
CytoITMprobe requires as input a weighted directed graph and the ITM Probe
model parameters that include a selection of boundary nodes and a dissipation
probability.
### Step one: defining a query graph
In CytoITMprobe graph connectivity is specified by selecting a Cytoscape
network. In addition, each link must be assigned a weight and a direction
through the query form. Edge weights are set using the _Weight attribute_
dropdown box, which lists all available floating-point edge attributes of the
selected network and the default option (_NONE_). If the default option is
selected, CytoITMprobe assumes a weight $2$ for any self-pointing edge and $1$
for all other edges. If an attribute is selected, the weight of an edge is set
to the value of the selected attribute for that edge. Null attribute values
are treated as zero weights.
Since Cytoscape edges are always internally treated as directed, the user must
also indicate the directedness of each edge type through the query form.
Whenever a new Cytoscape network is selected, CytoITMprobe updates the query
form and places all of the network’s edge types into the _undirected_
category. The user can use arrow buttons to move some edge types to the
_directed_ or _ignored_ category. Undirected edges are treated as
bidirectional, with the same weight in both directions. Directed edges have a
specified weight assigned only in the forward direction, with the backward
direction receiving the zero weight. Ignored edges have zero weight in both
directions. Since Cytoscape allows multiple edges of different types between
the same nodes, CytoITMprobe collapses multiple edges in each direction into a
single edge by appropriately summing their weights (Fig. 6).
Figure 6: Edge weights example. Suppose A and B
are nodes in a Cytoscape network linked by three edges of two types with the shown
edge weights. Assume two type I edges (lighter gray), $A\to B$ and $B\to A$
are directed, while a single type II edge (darker gray) $A\to B$ is
undirected. At query time, CytoITMprobe creates two directed edges, $A\to B$
and $B\to A$, with weights $3$ and $6$, respectively.
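The collapsing rule can be sketched in a few lines. The concrete weights below (type I $A\to B$: 1, type I $B\to A$: 4, type II: 2) are assumptions chosen to reproduce the collapsed values $3$ and $6$ of Fig. 6, since the figure conveys the individual weights only graphically:

```python
from collections import defaultdict

def collapse_edges(edges):
    """edges: (source, target, weight, kind) tuples,
    kind in {"directed", "undirected", "ignored"} (hypothetical encoding)."""
    W = defaultdict(float)                 # (u, v) -> summed directed weight
    for u, v, w, kind in edges:
        if kind == "ignored":
            continue                       # zero weight in both directions
        W[(u, v)] += w                     # forward direction always counts
        if kind == "undirected":
            W[(v, u)] += w                 # undirected = same weight both ways
    return dict(W)

edges = [("A", "B", 1, "directed"), ("B", "A", 4, "directed"),
         ("A", "B", 2, "undirected")]
print(collapse_edges(edges))   # {('A', 'B'): 3.0, ('B', 'A'): 6.0}
```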
### Step two: selecting a model and boundary nodes
In addition to a weighted directed graph, ITM Probe requires an information
flow model (emitting, absorbing or normalized channel), a selection of sources
and/or sinks, and dissipation probability. The choice of the model determines
the types of boundary nodes that need to be specified, as well as the ways in
which the damping factor can be set (see ‘Step three: specifying dissipation
probability’ below). The query form also allows users to specify _excluded
nodes_. Any flow reaching excluded nodes is fully dissipated. This is a way to
remove those nodes that do not participate in information propagation in the
desired context or that introduce undesirable shortcuts.
### Step three: specifying dissipation probability
The values of $H$, $F$, and $\hat{\varPhi}$ all implicitly depend on the
dissipation probability. In ITM Probe the user can either set the dissipation
probability directly or specify a related quantity from which the dissipation
probability is determined using Newton's method. The choice of the alternative
quantity depends on the selected model.
is the average path length before termination, which we denote by $\bar{t}$.
For example, the user can require a random walker to make on average three
steps before terminating. The formula for $\bar{t}$ is
$\bar{t}=1+\frac{1}{n_{S}}\sum_{s}\sum_{j}H_{sj},$ (2)
where $n_{S}$ denotes the number of sources. For the normalized channel model,
the path length before termination is given by
$\bar{t}=1+\frac{1}{n_{S}}\sum_{s}\sum_{j}\hat{\varPhi}_{j,K}^{s}.$ (3)
Since the normalized channel model counts only the random walkers actually
terminating at sinks, $\bar{t}$ is in this case bounded below by the length of
the shortest path from any source to any sink. Hence, ITM Probe accepts the
desired value of $\bar{t}$ in terms of length deviation from the shortest
path. There are two ways to set the average path-length deviation: in absolute
units (steps) or as a proportion of the length of the shortest path. The
absorbing model allows users to obtain the dissipation probability by setting
the average absorption probability, denoted $\bar{r}$. The formula for
$\bar{r}$ is
$\bar{r}=\frac{1}{n_{T}}\sum_{i}\sum_{k}F_{ik},$ (4)
where $k$ ranges over all sinks, $i$ ranges over all transient nodes _that are
connected to at least one sink_ , and $n_{T}$ is the total number of such
nodes. The value of $\bar{r}$ represents the likelihood of a random walk
starting at a randomly selected point in the network to reach a sink. The
dissipation probability obtained in this way is larger if the sinks are well-
connected hubs near the center of the network, in contrast to the case when
the chosen sinks are not as well connected.
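As an illustration, the sketch below recovers the dissipation probability from a target average path length $\bar{t}$ in the emitting model, reusing the hypothetical `itm_solve` helper from the earlier sketch. The ITM Probe back end uses Newton's method; for brevity this sketch brackets the root with `scipy.optimize.brentq` instead:

```python
from scipy.optimize import brentq

def mean_path_length(mu, W, transient, sources):
    F, H = itm_solve(W, transient, sources, mu)    # hypothetical helper above
    return 1.0 + H.sum() / len(sources)            # Eq. (2)

def dissipation_for(t_target, W, transient, sources):
    # t_bar(mu) increases with mu (less dissipation -> longer walks),
    # so a bracketing root finder on (0, 1) suffices.
    g = lambda mu: mean_path_length(mu, W, transient, sources) - t_target
    return 1.0 - brentq(g, 1e-6, 1.0 - 1e-6)       # dissipation = 1 - mu
```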
### Step four: submitting a query
After specifying all necessary input, the user submits a query by pressing the
_QUERY_ button on the query form. The time required for a run depends on
whether the query is local or remote, as well as on the size of the submitted
graph and the number of selected sources and/or sinks.
## Output
For every completed query, CytoITMprobe displays its results in a viewer
embedded in Cytoscape Results Panel and a new Cytoscape network consisting of
significant nodes (ITM subnetwork). The results viewer has five tabs: _Top
Scoring Nodes_ , _Summary_ , _Input Parameters_ , _Excluded Nodes_ , and
_Display Options_. The first four tabs contain information about the query and
the results, while the last one contains a form that allows users to
manipulate the ITM subnetwork. The form controls two aspects of the
subnetwork: composition (what nodes are selected and how many) and node
coloring.
### Displaying significant nodes
Subnetwork nodes are selected through a _ranking attribute_ , which assigns a
numerical value from ITM Probe results to each node. The nodes are listed in
descending order of the ranking attribute and top nodes are displayed as the
ITM subnetwork. The number of top nodes is determined by specifying a
_selection criterion_ , which can be simply a number of nodes to show, a
cutoff value or the ‘participation ratio’. Specifying a cutoff value $x$
selects the nodes with their ranking attribute greater than $x$. Participation
ratio estimates the number of ‘significant’ nodes by considering all values of
the ranking attribute in a scale-independent manner [11]. The available
choices for the ranking attribute depend on the ITM Probe model and the number
of boundary points. For the emitting and normalized channel model, the user
can select visits to a node from each source or the sum of visits from all
sources. It is also possible to use _interference_ [11], which denotes the
minimum number of node visits, taken over all sources. For the absorbing
model, the available attributes are absorbing probabilities to each sink and
the total probability of termination at a sink. The values of all attributes
for the subnetwork nodes are displayed in the _Top Scoring Nodes_ tab.
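A sketch of node selection by participation ratio follows. The formula $(\sum_i w_i)^2/\sum_i w_i^2$ used below is the standard scale-independent estimator of the effective number of large entries; we assume it matches the measure referenced from [11], and the attribute values are made up:

```python
import numpy as np

def participation_ratio(values):
    w = np.asarray(values, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())

# Example: rank nodes by an attribute and keep the top PR of them.
ranking = {"n1": 0.50, "n2": 0.30, "n3": 0.15, "n4": 0.05}   # made-up values
k = int(round(participation_ratio(list(ranking.values()))))  # ~3 here
top = sorted(ranking, key=ranking.get, reverse=True)[:k]
```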
The colors of the subnetwork nodes are determined by selecting _coloring
attributes_ , a _scaling function_ and a _color map_. The list of coloring
attributes is the same as the list of ranking attributes but the user can
select up to three coloring attributes. If a single attribute is selected,
node colors are determined by the selected eight-category ColorBrewer [21]
color map. Otherwise, they are resolved by color mixing: each coloring
attribute is assigned a single basic color (cyan, magenta or yellow), and the
final node color is obtained by mixing the three basic colors in proportion to
the values of their associated attributes at that node. The scaling function
serves to scale and discretize the coloring attributes to the ranges
appropriate for color maps. Figure 4 shows examples of mixed color scheme with
three boundary points (left and right columns) and of a coloring using a
single attribute (center column).
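The mixing rule can be sketched as subtractive CMY blending; the exact scaling CytoITMprobe applies may differ, so this is an assumption-laden illustration rather than the plugin's implementation:

```python
import numpy as np

def mix_cmy(scaled):
    """scaled: (n_nodes, m<=3) array of coloring attributes in [0, 1],
    one column per basic color (cyan, magenta, yellow). Returns RGB:
    cyan subtracts red, magenta subtracts green, yellow subtracts blue."""
    cmy = np.zeros((scaled.shape[0], 3))
    cmy[:, :scaled.shape[1]] = np.clip(scaled, 0.0, 1.0)
    return 1.0 - cmy          # RGB = 1 - CMY
```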
### Manipulating node attributes
Since the ITM Probe query results are saved as Cytoscape attributes of the
original network, they can be arbitrarily modified through Cytoscape. Any
changes made are reflected in the results viewer and the corresponding ITM
subnetwork after pressing the _RESET_ button on the Display Options form.
Using the CytoITMprobe attribute nomenclature, users can create additional
attributes to be used for ranking or coloring. Consider the following usage
example. A user has run an emitting model query with three sources, S1, S2,
and S3, and obtained the results in a viewer labeled ITME243. At the end of
the run, CytoITMprobe created the attributes ITME243[S1], ITME243[S2] and
ITME243[S3] for the nodes of the input network and saved the results as their
values. The user creates a new floating-point node attribute with a label
ITME243[avgS1S2] and fills it with an average of ITME243[S1] and ITME243[S2].
After resetting the Display Options form, an item ‘Custom [avgS1S2]’ is
available for selection as a ranking or coloring attribute. This gives the
user the flexibility to reinterpret S1 and S2 as if they were a single source
of equal weight as S3. Another possibility is to combine the results of
queries with different boundaries and display them together on the same
subnetwork.
### Saving and restoring results
The query network together with its attributes containing ITM Probe results
can be saved as a Cytoscape session and later retrieved. After reloading the
session, the user can regenerate the results viewer and the corresponding
subnetwork for a stored ITM by pressing the _LOAD_ button on the CytoITMprobe
query form and selecting the desired ITM from a list.
Alternatively, the ITM Probe results can be exported to tab-delimited text
files through the Cytoscape _Export_ menu. Each exported tab-delimited file
contains all the information necessary to restore the results except the query
network and can be easily manipulated both by humans and by external programs
or scripts. The results from tab-delimited files can be imported into any
selected Cytoscape network through the _Import_ menu. Since the selected
network may be different from the original query network, only the results for
the nodes in the selected network whose IDs match the IDs from the imported
file will be loaded. After importing the results, CytoITMprobe generates a new
results viewer and a subnetwork, as if the results originated from a direct
ITM Probe query.
## Discussion
The main function of ITM Probe, also applicable to domains other than PPI
networks, is to retrieve information from large and complex networks by
discovering the possible interface between network nodes that are hypothesized
to be related. This paradigm works best with large networks, where such
information cannot be easily accessed by other means. For examples of
applications of the ITM Probe frameworks to protein-protein interaction
networks, consult our earlier papers [11, 13, 12].
With a network as an _encyclopedia_ of domain-specific knowledge, ITM Probe
enables a direct access to its specific portions related to a specified
context. The user can learn about the objects representing individual nodes by
setting them as sources and/or sinks and retrieving information about the most
significant objects in the resulting ITM. This approach not only extracts a
relevant subnetwork but also produces context-specific weights for each node.
With their interpretation as average numbers of node visits, or equivalently,
as average numbers of paths passing through a node, the ITM weights signify
the relative importance of network nodes in the context of the query and thus
can be used to refine its interpretation as a whole. For example, a single
node with a large weight in an ITM resulting from a normalized channel model
query represents a choke point _in the particular context of the query_. The
same node need not have a high global centrality.
Containing both sources and sinks, the normalized channel model offers the
users the ability to formulate and evaluate network-based hypotheses in
silico. Since information flow that reaches one sink cannot subsequently
terminate at any other, sink nodes can be associated with alternative
hypotheses, such as different biological functions when the network is a PPI network. The
information flow from each source will then, depending on the dissipation
coefficient used, mainly trace the path towards the sink most likely to be
reached first from that source (see Fig. 4, right column). The ITM Probe
framework considers all weighted paths from sources to sinks and hence
produces more robust results than approaches involving only the shortest
paths. The path weights are tunable using the dissipation probability.
Compared to the previously described web interface to ITM Probe [13],
CytoITMprobe significantly benefits from being a part of the Cytoscape
platform. Although the _Display Options_ form is very similar to the web
version, the sophisticated network visualization functionality provided by
Cytoscape allows significantly more versatility in displaying ITMs. For
example, Cytoscape GUI allows users to manually alter node placements, rotate
network views, or arbitrarily change the look of a network. In addition,
Cytoscape interface enables users to directly manipulate node attributes
representing ITM Probe results and possibly create new node summary variables
appropriate to their problem. The newly created variables can be immediately
reflected in the graphical representation of an ITM, which is not possible in
the web setting. Most importantly, the results of ITM Probe can be integrated
into workflows involving other Cytoscape plugins that provide complementary
functionality. For instance, output ITMs can be related to terms from
controlled vocabularies such as Gene Ontology [22] using functional enrichment
analysis plugins such as PinGO [23] or our own recently released CytoSaddleSum
[24]. The graph-theoretic structure of ITM subnetworks can be analyzed using a
variety of algorithms such as MCODE [25] or GraphletCounter [26, 27].
The architecture of CytoITMprobe with a Cytoscape front end and an ITM Probe
back end offers flexibility for a variety of usage scenarios. In contrast to
the web version, it allows users to use ITM Probe with arbitrary networks and
edge weights, rather than being limited to compiled PPIs from few model
organisms. Most users will be content with accessing ITM Probe through the web
server. However, the option to download and install the qmbpmn-tools package
provides not only faster running times for queries but also the ability to use
the command line interface for ITM Probe to perform batch queries and to
locally reproduce its web service. The separation of the presentation layers
(web or Cytoscape) from the ‘business’ layer (standalone ITM Probe)
facilitates easy future updates to any components.
## Conclusion
CytoITMprobe is a plugin that brings the previously unavailable network flow
algorithms of ITM Probe to the Cytoscape platform. It enables users to extract
context-specific subnetworks from large networks by specifying the origins
and/or destinations of information flow. CytoITMprobe significantly extends
the features of the previously released web version of ITM Probe.
The main novelty of CytoITMprobe is that it allows the user to specify as
input any Cytoscape network, rather than being restricted to the PPI networks
available through the ITM Probe web service. Using Cytoscape attributes to
hold their desired values, users may easily supply their own edge weights and
denote edge directionalities. Additionally, the ability to manipulate and add
new node attributes through Cytoscape reduces the workload required for
visualizing various combinations of ITM components. In the context of
biological cellular networks, this additional flexibility may lead to
constructions of new node attributes that can better reflect biological
significance, hence facilitating more educated hypothesis forming.
By bringing ITM Probe to Cytoscape, CytoITMprobe enables seamless integration
of ITM Probe results with other Cytoscape plugins having complementary
functionality for data analysis. By decoupling the query network from the
information flow algorithm, the newly developed CytoITMprobe can be applied to
many other domains of network-based research beyond protein-networks.
## Availability and requirements
### CytoITMprobe plugin
Project name: CytoITMprobe
Project home page:
http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/itmprobe.html
Documentation:
http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/mn/itm_probe/doc/cytoitmprobe.html
Video tutorial: http://www.youtube.com/watch?v=4Cdf-mSKtWo
Operating system(s): Platform independent
Programming language: Java
Other requirements: Java SE 6 or higher and Cytoscape 2.7 or higher
License: All components written by the authors at the NCBI are released into
Public Domain. Components included from elsewhere are available under their
own open source licenses and attributed in the source code.
### Standalone ITM Probe (optional for CytoITMprobe)
Project name: qmbpmn-tools
Project home page:
http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/itmprobe.html
Documentation: http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/mn/itm_probe/doc/
Operating system(s): Platform independent
Programming language: Python
Other requirements: Python 2.6 or 2.7, Numpy 1.3 or higher and Scipy 0.7 or
higher. UMFPACK Scikit is recommended for good performance.
License: All components written by the authors at the NCBI are released into
Public Domain. Components included from elsewhere are available under their
own open source licenses and attributed in the source code.
## Acknowledgments
This work was supported by the Intramural Research Program of the National
Library of Medicine at the National Institutes of Health.
## References
* [1] Cline MS, Smoot M, Cerami E, Kuchinsky A, Landys N, Workman C, Christmas R, Avila-Campilo I, Creech M, Gross B, Hanspers K, Isserlin R, Kelley R, Killcoyne S, Lotia S, Maere S, Morris J, Ono K, Pavlovic V, Pico AR, Vailaya A, Wang PL, Adler A, Conklin BR, Hood L, Kuiper M, Sander C, Schmulevich I, Schwikowski B, Warner GJ, Ideker T, Bader GD: Integration of biological networks and gene expression data using Cytoscape. _Nat Protoc_ 2007, 2(10):2366–82.
* [2] Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T: Cytoscape: a software environment for integrated models of biomolecular interaction networks. _Genome Res_ 2003, 13(11):2498–504.
* [3] Smoot ME, Ono K, Ruscheinski J, Wang PL, Ideker T: Cytoscape 2.8: new features for data integration and network visualization. _Bioinformatics_ 2011, 27(3):431–2.
* [4] Nabieva E, Jim K, Agarwal A, Chazelle B, Singh M: Whole-proteome prediction of protein function via graph-theoretic analysis of interaction maps. _Bioinformatics_ 2005, 21 Suppl 1:302–310.
* [5] Tu Z, Wang L, Arbeitman M, Chen T, Sun F: An integrative approach for causal gene identification and gene regulatory pathway inference. _Bioinformatics_ 2006, 22:e489–496.
* [6] Suthram S, Beyer A, Karp R, Eldar Y, Ideker T: eQED: an efficient method for interpreting eQTL associations using protein networks. _Mol. Syst. Biol._ 2008, 4:162.
* [7] Zotenko E, Mestre J, O’Leary DP, Przytycka TM: Why do hubs in the yeast protein interaction network tend to be essential: reexamining the connection between the network topology and essentiality. _PLoS Comput Biol_ 2008, 4(8):e1000140.
* [8] Missiuro P, Liu K, Zou L, Ross B, Zhao G, Liu J, Ge H: Information flow analysis of interactome networks. _PLoS Comput Biol_ 2009, 5(4):e1000350.
* [9] Voevodski K, Teng S, Xia Y: Spectral affinity in protein networks. _BMC Syst Biol_ 2009, 3:112.
* [10] Kim YA, Przytycki JH, Wuchty S, Przytycka TM: Modeling information flow in biological networks. _Phys Biol_ 2011, 8(3):035012.
* [11] Stojmirović A, Yu YK: Information flow in interaction networks. _J Comput Biol_ 2007, 14(8):1115–43.
* [12] Stojmirović A, Yu YK: Information flow in interaction networks II: channels, path lengths and potentials. _J Comput Biol_ 2012, in press.
* [13] Stojmirović A, Yu YK: ITM Probe: analyzing information flow in protein networks. _Bioinformatics_ 2009, 25(18):2447–9.
* [14] ITM Probe Web Service [http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/mn/itm_probe].
* [15] Gansner ER, North SC: An open graph visualization system and its applications to software engineering. _Software — Practice and Experience_ 2000, 30(11):1203–1233.
* [16] Java [http://www.java.com].
* [17] Jones E, Oliphant T, Peterson P, et al.: SciPy: Open source scientific tools for Python 2001–. [http://www.scipy.org/].
* [18] Demmel JW, Eisenstat SC, Gilbert JR, Li XS, Liu JWH: A supernodal approach to sparse partial pivoting. _SIAM J. Matrix Analysis and Applications_ 1999, 20(3):720–755.
* [19] Davis TA: Algorithm 832: UMFPACK V4.3—an unsymmetric-pattern multifrontal method. _ACM Trans. Math. Softw._ 2004, 30(2).
* [20] SciKits [http://scikits.appspot.com/].
* [21] Harrower M, Brewer C: ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps. _Cartogr J_ 2003, 40:27–37.
* [22] Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, Harris MA, Hill DP, Issel-Tarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE, Ringwald M, Rubin GM, Sherlock G: Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. _Nat Genet_ 2000, 25:25–29.
* [23] Smoot M, Ono K, Ideker T, Maere S: PiNGO: a Cytoscape plugin to find candidate genes in biological networks. _Bioinformatics_ 2011, 27(7):1030–1.
* [24] Stojmirović A, Bliskovsky A, Yu YK: CytoSaddleSum: a functional enrichment analysis plugin for Cytoscape based on sum-of-weights scores. _Bioinformatics_ 2012. doi:10.1093/bioinformatics/bts041.
* [25] Bader GD, Hogue CWV: An automated method for finding molecular complexes in large protein interaction networks. _BMC Bioinformatics_ 2003, 4:2.
* [26] Whelan C, Sönmez K: Computing graphlet signatures of network nodes and motifs in Cytoscape with GraphletCounter. _Bioinformatics_ 2012, 28(2):290–1.
* [27] Milenković T, Przulj N: Uncovering biological network function via graphlet degree signatures. _Cancer Inform_ 2008, 6:257–73.
|
# Quantum diffusion via an approximate semigroup property
Felipe Hernández
###### Abstract.
In this paper we introduce a new approach to the diffusive limit of the weakly
random Schrödinger equation, first studied by L. Erdős, M. Salmhofer, and H.T.
Yau. Our approach is based on a wavepacket decomposition of the evolution
operator, which allows us to interpret the Duhamel series as an integral over
piecewise linear paths. We relate the geometry of these paths to combinatorial
features of a diagrammatic expansion which allows us to express the error
terms in the expansion as an integral over paths that are exceptional in some
way. These error terms are bounded using geometric arguments. The main term is
then shown to have a semigroup property, which allows us to iteratively
increase the timescale of validity of an effective diffusion. This is the
first derivation of an effective diffusion equation from the random
Schrödinger equation that is valid in dimensions $d\geq 2$.
###### Contents
1. Introduction
2. More detailed outline of the proof
3. A sketch of the derivation of the path integral
4. The ladder approximation for ${\mathcal{E}}_{\tau}$
5. Iterating the path integral
6. Geometry and combinatorics of extended paths
7. Skeletons and diagrams
8. Colored operators
9. Constructing partitions from colorings
10. The diagrammatic expansion
11. Bounding the diffusive diagram contributions
12. Analysis of the ladder superoperator
13. The path integral
14. A first operator moment estimate
15. Interspersing the free evolution
A. The Boltzmann limit for short times
B. Using graphs, forests, and partitions to compute moments
C. Wavepackets and quantization
D. Elementary estimates for the linear Boltzmann equation
## 1\. Introduction
### 1.1. The kinetic limit for the Schrödinger equation
In this paper, we study the equation
(1.1) $i\partial_{t}\psi=-\frac{1}{2}\Delta\psi+{\varepsilon}V\psi$
with a stationary random potential $V$. An example of a potential we will
consider is a mean-zero Gaussian random field with a smooth and compactly
supported two-point correlation function
(1.2) ${\mathbf{E}\,}V(x)V(y)=R(x-y),$
with $R\in C_{c}^{\infty}({\mathbf{R}}^{d})$. Our approach works for more
general potentials that are stationary, have finite range of dependence, and
have bounded moments in $C^{k}$ (for $k>20d$, say).
The equation (1.1) is a simple model for wave propagation in a random
environment. It also has a more direct physical significance, as it models the
motion of a cold electron in a disordered environment [29]. We are interested
in this paper in the regime where the wavelength of the initial condition
$\psi$ is comparable to the correlation length of the potential, which we
consider to be of unit scale. This regime is out of reach of both traditional
WKB-type semiclassical approximations, which are more appropriate for high-
frequency $\psi$, and of homogenization techniques, which are appropriate for
low-frequency $\psi$.
This regime was first rigorously studied by H. Spohn in [29], who showed that
the spectral density
$\mu_{t}(p)={\mathbf{E}\,}|\psi_{t/{\varepsilon}^{2}}(p)|^{2}$ converges, in
the semiclassical limit ${\varepsilon}\to 0$, to a weak solution of a
spatially homogeneous kinetic equation
(1.3)
$\partial_{t}\mu(p)=\int\delta(|p|^{2}-|p^{\prime}|^{2}){\widehat{R}}(p-p^{\prime})[\mu(p^{\prime})-\mu(p)]\mathop{}\\!\mathrm{d}p^{\prime},$
where $R(x)$ is the two-point correlation function defined in (1.2). The term
$\delta(|p|^{2}-|p^{\prime}|^{2})$ enforces conservation of kinetic energy,
which is appropriate in the limit ${\varepsilon}\to 0$ since the potential
energy becomes negligible. The time scale between scattering events is on the
order ${\varepsilon}^{-2}$, which can be heuristically justified by using the
Born approximation of the solution of (1.1). Spohn’s technique for
demonstrating (1.3) was to write out the Duhamel expansion for the solution
$\psi_{t}(p)$ to (1.1) in momentum space, take an expectation of the quantity
$|\psi_{t}(p)|^{2}$ using the Wick rule for the expectation of a product of
Gaussian random variables, and separate terms into a main term and an error
term. The error terms are controlled by additional cancellations and the main
terms are compared to a series expansion for the solution of (1.3). Spohn’s
analysis of the Dyson series allowed him to control the solution up to times
$c{\varepsilon}^{-2}$ for some small constant $c>0$.
This proof technique has been used by many authors since to improve upon our
understanding of (1.1). Most notably, L. Erdős and H.T. Yau in a series of
works [17, 18] were able to improve the time scale to arbitrary kinetic times
of order $O({\varepsilon}^{-2})$ while also demonstrating the
weak convergence of the Wigner function
${\mathcal{W}_{\psi}}(x,p):=\int e^{iy\cdot
p}\overline{\psi}(x-y/2)\psi(x+y/2)\mathop{}\\!\mathrm{d}y$
to the solution of the linear Boltzmann equation
(1.4)
$\partial_{t}\rho+p\cdot\nabla_{x}\rho={\varepsilon}^{2}\int\delta(|p|^{2}-|p^{\prime}|^{2}){\widehat{R}}(p-p^{\prime})[\rho(x,p^{\prime})-\rho(x,p)]\mathop{}\\!\mathrm{d}p^{\prime}.$
Introducing the rescaled coordinates $T={\varepsilon}^{2}t$,
$X={\varepsilon}^{2}x$ along with the rescaled solution
$\rho^{\varepsilon}_{T}(X,p):=\rho_{{\varepsilon}^{-2}T}({\varepsilon}^{-2}X,p),$
the equation (1.4) can be written
$\partial_{T}\rho^{\varepsilon}+p\cdot\nabla_{X}\rho^{\varepsilon}=\int\delta(|p|^{2}-|p^{\prime}|^{2}){\widehat{R}}(p-p^{\prime})[\rho^{\varepsilon}(X,p^{\prime})-\rho^{\varepsilon}(X,p)]\mathop{}\\!\mathrm{d}p^{\prime}.$
In an impressive sequence of refinements to this work, L. Erdős, M. Salmhofer
and H.T. Yau [14, 16, 15] were able to improve the timescale even further to
diffusive times ${\varepsilon}^{-2-\kappa}$ for some $\kappa>0$ (in
fact, one can take $\kappa=1/370$ when $d=3$). At this timescale a diffusion
equation emerges. The principle is that the momentum variable is no longer
relevant to the evolution because it becomes uniformly distributed over the
sphere within time $O({\varepsilon}^{-2})$, and all that remains of the
momentum information is the kinetic energy variable $e=|p|^{2}/2$. Moreover,
at diffusive times ${\varepsilon}^{-2-\kappa}$ the particle travels a
distance of order ${\varepsilon}^{-2-\kappa/2}$, which is therefore the
diffusive length scale.
For solutions $\rho$ of the linear Boltzmann equation (1.4), the particle
distribution $f$ defined by
(1.5)
$f_{T}(X,e)=\int_{|p|^{2}/2=e}\rho_{{\varepsilon}^{-2-\kappa}T}({\varepsilon}^{-2-\kappa/2}X,p)\mathop{}\\!\mathrm{d}{\mathcal{H}}^{d-1}(p)$
converges in the limit ${\varepsilon}\to 0$ to a solution of the diffusion
equation
(1.6) $\partial_{T}f_{T}=D_{e}\Delta_{X}f_{T},$
where $D_{e}$ is a diffusion coefficient depending on the energy $e$. See [16]
for more details on the limiting diffusion equation.
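To make this picture concrete, the following minimal Monte Carlo sketch (ours, not from the paper) simulates the velocity-jump process behind the spatially homogeneous version of (1.4) in $d=2$, assuming for illustration an isotropic collision kernel, unit speed and unit collision rate: directions are resampled uniformly at collisions while the kinetic energy is conserved, and the empirical mean square displacement grows linearly with a per-component diffusion coefficient close to $v^{2}/(d\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_paths=4000, rate=1.0, speed=1.0, t_max=200.0):
    """Velocity-jump process in d = 2: exponential flight times with the
    given collision rate; at each collision the direction is resampled
    uniformly on the circle while the speed (kinetic energy) is conserved."""
    X = np.zeros((n_paths, 2))
    t = np.zeros(n_paths)
    while (t < t_max).any():
        theta = rng.uniform(0.0, 2 * np.pi, n_paths)
        dt = np.minimum(rng.exponential(1.0 / rate, n_paths), t_max - t)
        X[:, 0] += speed * dt * np.cos(theta)
        X[:, 1] += speed * dt * np.sin(theta)
        t += dt
    return X

X = simulate()
# Per-component diffusion coefficient E|X|^2/(2 d t); the expected value is
# speed^2/(d * rate) = 0.5 for these parameters.
print((X**2).sum(axis=1).mean() / (2 * 2 * 200.0))
```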
To reach the diffusive time scale in which the particle experiences infinitely
many scattering events, one must consider terms with ${\varepsilon}^{-c}$
collisions, which produce more than ${\varepsilon}^{-{\varepsilon}^{-c}}$
diagrams when one applies the Wick expansion. To deal with the explosion in
the number of terms, Erdős, Salmhofer, and Yau developed a resummation
technique to more accurately estimate the sizes of the terms and additionally
had to exploit intricate cancellations coming from the combinatorial features
of the diagrams considered.
### 1.2. Statement of the main result
In this paper, we provide an alternative derivation of the linear Boltzmann
equation which is also valid up to diffusive times but with a fundamentally
different approach. In our proof, we use a wavepacket decomposition of the
evolution operator. The wavepacket decomposition allows us to keep information
about the position and momentum of the particle simultaneously (up to the
limits imposed by the uncertainty principle), and we therefore express the
solution as an integral over piecewise linear paths in phase space.
To make the connection between operators and the linear Boltzmann equation, we
use the Weyl quantization $a\in
C^{\infty}({{\mathbf{R}}^{2d}})\mapsto\operatorname{Op}^{w}(a)\in\mathcal{B}(L^{2}({\mathbf{R}}^{d}))$
defined by
$\operatorname{Op}^{w}(a)f(x)=\int e^{i(x-y)\cdot
p}a((x+y)/2,p)f(y)\mathop{}\\!\mathrm{d}y\mathop{}\\!\mathrm{d}p.$
The relationship between the Weyl quantization and the Wigner transform is
given by the identity
$\langle\operatorname{Op}^{w}(a)\psi,\psi\rangle=\int
a(x,p){\mathcal{W}_{\psi}}(x,p)\mathop{}\\!\mathrm{d}x\mathop{}\\!\mathrm{d}p.$
In particular, applying this identity to the solution $\psi_{t}=e^{-itH}\psi$
to (1.1) where $H$ is the random Hamiltonian
$H=-\frac{1}{2}\Delta+{\varepsilon}V,$
we have
$\int
a(x,p){\mathbf{E}\,}{\mathcal{W}_{\psi_{t}}}(x,p)\mathop{}\\!\mathrm{d}x\mathop{}\\!\mathrm{d}p={\mathbf{E}\,}\langle\operatorname{Op}^{w}(a)e^{-itH}\psi,e^{-itH}\psi\rangle=\langle{\mathbf{E}\,}e^{itH}\operatorname{Op}^{w}(a)e^{-itH}\psi,\psi\rangle.$
Therefore in order to answer questions about the weak convergence of
${\mathcal{W}_{\psi_{t}}}$, it suffices to study the quantum evolution channel
(1.7) ${\mathcal{E}}_{t}[A]:={\mathbf{E}\,}e^{itH}Ae^{-itH}$
applied to operators of the form $A=\operatorname{Op}^{w}(a)$ with
sufficiently regular symbols $a$. In particular, we will show that for
suitable observables $a_{0}$ and for times $t\leq{\varepsilon}^{-2-\kappa}$,
we have
$\|{\mathcal{E}}_{t}[\operatorname{Op}^{w}(a_{0})]-\operatorname{Op}^{w}(a_{t})\|_{op}=o(1),$
where $a_{t}$ solves the dual of the linear Boltzmann equation (1.4),
(1.8)
$\partial_{t}a-p\cdot\nabla_{x}a={\varepsilon}^{2}\int\delta(|p|^{2}-|p^{\prime}|^{2}){\widehat{R}}(p-p^{\prime})[a(x,p^{\prime})-a(x,p)]\mathop{}\\!\mathrm{d}p^{\prime}.$
A natural norm that we use on $a$ which also controls the operator norm of
$\operatorname{Op}^{w}(a)$ is the $C^{k}$ norm with $k=2d+1$ (see Appendix C
for a self-contained proof that the $C^{2d+1}$ norm of $a$ controls the
operator norm of $\operatorname{Op}^{w}(a)$). We will use a $C^{k}$ norm which
is rescaled to the appropriate length scales of the problem. Because the time
scale between scattering events is ${\varepsilon}^{-2}$, a natural spatial
length scale is ${\varepsilon}^{-1}$. For the rest of the paper we write
$r={\varepsilon}^{-1}$ for this length scale. This is the length scale of a
wavepacket that remains coherent between scattering events. Conversely the
natural length scale in momentum is ${\varepsilon}=r^{-1}$. This “microscopic”
scale is the one we use for the wavepacket decomposition of the operator
$e^{itH}$.
On the other hand, a natural “macroscopic” length scale of the problem is
${\varepsilon}^{-2}$, which is the distance that a particle with momentum
$O(1)$ travels between scatterings. A natural “macroscopic” length scale in
momentum is $O(1)$, which is the impulse applied to a particle in a typical
scattering event.
The following norm measures the smoothness of an observable at these length
scales:
$\|a\|_{C^{k}_{r,L}}:=\sum_{|\alpha_{x}|+|\alpha_{p}|\leq
k}\sup_{(x,p)}|(rL\partial_{x})^{\alpha_{x}}(r^{-1}L\partial_{p})^{\alpha_{p}}a(x,p)|.$
When $L=1$, this norm probes the microscopic smoothness of observables,
whereas when $L={\varepsilon}^{-1}$, the norm probes the macroscopic
smoothness.
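As an illustration of how this norm interacts with the macroscopic rescaling used below, here is a finite-difference sketch (ours) of the $k=1$ case; the profile $\bar{a}$, the rescaling $a_{0}(x,p)=\bar{a}({\varepsilon}^{2}x,p)$, and the grid are hypothetical choices made for the example.

```python
import numpy as np

def ck_norm(a, xs, ps, r, L):
    """Finite-difference sketch of ||a||_{C^1_{r,L}}: sup |a| plus the sups
    of the rescaled first derivatives |r L d_x a| and |(L/r) d_p a|."""
    A = a(xs[:, None], ps[None, :])
    ax = np.gradient(A, xs, axis=0)
    ap = np.gradient(A, ps, axis=1)
    return (np.abs(A).max() + np.abs(r * L * ax).max()
            + np.abs((L / r) * ap).max())

eps = 0.01
r, L = 1 / eps, 1 / eps                        # probe macroscopic smoothness
xs = np.linspace(-5 / eps**2, 5 / eps**2, 801)
ps = np.linspace(0.5, 1.5, 201)
a_bar = lambda X, p: np.exp(-X**2) * np.sin(p)  # smooth macroscopic profile
a0 = lambda x, p: a_bar(eps**2 * x, p)          # rescaled observable
print(ck_norm(a0, xs, ps, r, L))                # stays O(1) as eps -> 0
```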
We make one final comment before we state the main result of the paper, which
is that we will not treat the evolution of low-frequency modes. In dimension
$d=2$, the scattering cross section of a low frequency wave with momentum
$|p|\ll 1$ is still of order ${\varepsilon}^{2}$ but the speed of travel
is only $|p|$, so the distance between typical scattering events is only
$|p|{\varepsilon}^{-2}$ rather than ${\varepsilon}^{-2}$. Because scattering
events are more closely spaced, the bounds coming from the geometric arguments
we use deteriorate and we make no attempt to understand what happens in this
regime. In higher dimensions the scattering cross section also shrinks with
momentum so that one could in principle first approximate the evolution of low
frequency modes by a free evolution with no potential and therefore recover
the result for all frequencies. We do not make this argument in this paper.
###### Theorem 1.1.
For each $d\geq 2$, there exists $\theta=\theta(d)>0$ and $\kappa=\kappa(d)>0$
such that the following holds. Let $V$ be an admissible potential as described
in Definition B.2, and let $a_{0}\in C^{2d+1}({{\mathbf{R}}^{2d}})$ be a
classical observable supported away from zero momentum;
$\operatorname{supp}a_{0}\subset\\{(x,p)\in{{\mathbf{R}}^{2d}}\mid|p|\geq{\varepsilon}^{\theta(d)}\\}.$
Suppose moreover that $a_{t}$ solves (1.8) with initial condition $a_{0}$.
Then
(1.9)
$\|{\mathcal{E}}_{t}[\operatorname{Op}^{w}(a_{0})]-\operatorname{Op}^{w}(a_{t})\|_{op}\leq
C_{d}{\varepsilon}^{2+\kappa}t\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}.$
In particular, for arbitrary $\psi_{0}\in L^{2}({\mathbf{R}}^{d})$ and
$\psi_{t}$ solving (1.1) it follows that
(1.10)
$\int{\mathcal{W}_{\psi_{t}}}(x,p)a_{0}(x,p)\mathop{}\\!\mathrm{d}x\mathop{}\\!\mathrm{d}p=\int{\mathcal{W}_{\psi_{0}}}(x,p)a_{t}(x,p)\mathop{}\\!\mathrm{d}x\mathop{}\\!\mathrm{d}p+O({\varepsilon}^{2+\kappa}t\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}\|\psi\|_{L^{2}}^{2}).$
To see how the diffusion equation (1.6) emerges as a scaling limit, we
consider observables of the form
$a_{0}(x,p)=\bar{a}({\varepsilon}^{2+\kappa/2}x,p)$
with $\bar{a}\in C^{2d+1}$. With this rescaling, we have
$\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-1}}}\leq\|\bar{a}\|_{C^{2d+1}}.$
In particular,
$\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}$ is bounded
uniformly in ${\varepsilon}$. Moreover, in this scaling limit the evolution of $a_{t}$ is governed by the diffusion equation (1.6).
One major difference between Theorem 1.1 and the main results of [16, 15],
apart from the very different approaches to the proof, is that our result
holds in dimension $d=2$. At first this may appear to be in contradiction with
the conjectured phenomenon of Anderson localization in $d=2$, but the
contradiction disappears when one compares the timescale
${\varepsilon}^{-2-\kappa}$ considered in this paper to the expected length
scale $e^{{\varepsilon}^{-2}}$ of localization in this dimension. Indeed, it
is expected that the particle exhibits diffusive behavior for an exponentially
long time before getting trapped by localization.
The exponent $\kappa(d)$ can in principle be extracted from the proof. However
in this paper we focus on demonstrating the new technique in its simplest form
and therefore do not attempt to optimize $\kappa(d)$. Perhaps with some
optimization one could obtain $\kappa(3)$ comparable to $1/370$, but the proof
we give yields a bound of the order $\kappa(3)\sim 10^{-6}$.
### 1.3. A heuristic sketch of the argument
#### 1.3.1. The phase space path integral
The main idea behind the proof of Theorem 1.1 is to focus on justifying an
approximate semigroup property
(1.11) ${\mathcal{E}}_{2t}[A]\approx{\mathcal{E}}_{t}[{\mathcal{E}}_{t}[A]],$
for suitable operators $A$ including operators of the form
$A=\operatorname{Op}^{w}(a)$. Observe that the approximation (1.11) has the
following physical interpretation. Let
$\displaystyle H_{1}$ $\displaystyle=-\frac{1}{2}\Delta+{\varepsilon}V_{1}$
$\displaystyle H_{2}$ $\displaystyle=-\frac{1}{2}\Delta+{\varepsilon}V_{2}$
be Hamiltonians with two independently sampled potentials, and observe that
${\mathcal{E}}_{t}\circ{\mathcal{E}}_{t}[A]={\mathbf{E}\,}e^{itH_{2}}e^{itH_{1}}Ae^{-itH_{1}}e^{-itH_{2}}.$
In other words, ${\mathcal{E}}_{t}\circ{\mathcal{E}}_{t}$ represents an
evolution with a potential that abruptly changes into an independently sampled
potential at time $t$. Although such a resampling of the potential drastically
changes the evolution of the wavefunction $\psi_{t}$ itself, we will see that
the effect on observables is minimal.
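This interpretation can be tested directly on a toy model. The sketch below (an illustration of ours, not the paper's setup) takes a Schrödinger operator on a ring of $n$ sites with an i.i.d. on-site potential and compares a Monte Carlo estimate of ${\mathcal{E}}_{2t}[A]$ against ${\mathcal{E}}_{t}[{\mathcal{E}}_{t}[A]]$, where the potential is refreshed at time $t$; all parameters are illustrative, and for weak coupling the two channels agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, t, M = 64, 0.3, 4.0, 400

# Discrete free Hamiltonian H0 = -Lap/2 on a ring of n sites.
lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
       + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1)))
H0 = -0.5 * lap

def conjugate(A, V, t):
    """e^{itH} A e^{-itH} for H = H0 + eps*diag(V), via eigendecomposition."""
    w, Q = np.linalg.eigh(H0 + eps * np.diag(V))
    U = (Q * np.exp(-1j * t * w)) @ Q.conj().T   # U = e^{-itH}
    return U.conj().T @ A @ U

A = np.diag(np.cos(2 * np.pi * np.arange(n) / n))  # a smooth observable

E2t = np.zeros((n, n), dtype=complex)
EtEt = np.zeros((n, n), dtype=complex)
for _ in range(M):
    V1, V2 = rng.standard_normal(n), rng.standard_normal(n)
    E2t += conjugate(A, V1, 2 * t) / M                  # fixed potential
    EtEt += conjugate(conjugate(A, V1, t), V2, t) / M   # refreshed at time t
print(np.linalg.norm(E2t - EtEt, 2))  # small relative to ||A||_op = 1
```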
To prove the approximate semigroup property we approximate the evolution
operator $e^{-itH}$ as an integral over piecewise linear paths in phase space,
representing the possible paths of a particle as it scatters. To decompose
phase space we use a family of wavepackets of the form
$\phi_{x,p}(y):=r^{-d/2}e^{iy\cdot p}\chi_{env}((x-y)/r),$
where $\chi_{env}\in C_{c}^{\infty}({\mathbf{R}}^{d})$ is a fixed envelope
normalized in $L^{2}$ and satisfying some additional conditions described in
Appendix C. The functions $\phi_{x,p}$ are localized in space to scale $r$ and
in momentum to scale $r^{-1}$. We use the notation $\xi=(x,p)$ and write
$\mathinner{|{\xi}\rangle}$ as a shorthand for the function $\phi_{\xi}$.
The use of a phase-space path integral already represents a departure from
previous approaches to the problem. Indeed, since the paper of Spohn [29] it
has been customary to write the terms of the Duhamel series expansion for
$e^{itH}$ in the Fourier basis. We will see that by using the spatial
localization of the particle we can more easily compute the expectation
appearing in the integrand without the need for a full Wick expansion (or
cumulant decomposition in the case of a non-Gaussian potential).
The free evolution of a wavepacket approximates the motion of a free classical
particle in the sense that
$e^{-it\Delta/2}\mathinner{|{(x,p)}\rangle}\approx
e^{it|p|^{2}/2}\mathinner{|{(x+tp,p)}\rangle}$
for $t\ll r^{2}$. For multiplication against the potential we use the identity
(1.12)
$V\mathinner{|{(x,p)}\rangle}=\int{\widehat{V_{x}}}(p^{\prime}-p)\mathinner{|{(x,p^{\prime})}\rangle}\mathop{}\\!\mathrm{d}p^{\prime},$
where $V_{x}$ is the potential $V$ multiplied by a cutoff near $x$ which is
$1$ in a ball large enough to contain the support of $\chi_{env}$,
$V_{x}(y)=b((y-x)/r)V(y).$
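The transport property of wavepackets under the free flow is easy to verify numerically. The sketch below (ours) works in $d=1$ with a Gaussian envelope standing in for the compactly supported $\chi_{env}$ and uses the convention $\psi_{t}=e^{it\Delta/2}\psi_{0}$; the center of the evolved wavepacket sits at $x+tp$ up to corrections that are small for $t\ll r^{2}$.

```python
import numpy as np

# A wavepacket |(x0, p0)> on a periodic grid, with a Gaussian envelope of
# width r standing in for the compactly supported envelope chi_env (d = 1).
N, Lbox = 4096, 400.0
y = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
r, x0, p0, t = 5.0, -50.0, 2.0, 10.0            # note t << r^2 = 25

psi0 = np.exp(1j * p0 * y) * np.exp(-((y - x0) / r) ** 2 / 2)
psi0 /= np.linalg.norm(psi0)

# Free evolution e^{it Delta/2} acts as multiplication by e^{-it k^2/2}
# in Fourier space.
psit = np.fft.ifft(np.exp(-1j * t * k**2 / 2) * np.fft.fft(psi0))

center = np.sum(y * np.abs(psit) ** 2)
print(center, "vs", x0 + t * p0)                # both approximately -30
```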
We can use these two identities to write an expansion of the evolution of a
wavepacket $e^{itH}\mathinner{|{(x,p)}\rangle}$ as an integral over paths in
which the phase space point travels in straight lines with an occasional
impulse from the potential causing discontinuities in the momentum variable.
We represent these piecewise linear paths as a tuple
$\omega=({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})$ with
${\mathbf{s}}=(s_{0},\cdots,s_{k})$ being the sequence of times between the
scattering events (satisfying $\sum s_{j}=t$ and $s_{j}\geq 0$),
${\mathbf{p}}=(p_{0},p_{1}\cdots,p_{k})$ being the sequence of momentum
variables which we require to have the same magnitude
$|p_{j}|=|p_{j^{\prime}}|$ and with initial momentum $p_{0}=p$, and
${\mathbf{y}}=(y_{1},\cdots,y_{k})$ being the sequence of scattering locations
defined by
$\displaystyle y_{1}$ $\displaystyle=x+s_{0}p$ $\displaystyle y_{j+1}$
$\displaystyle=y_{j}+s_{j}p_{j}.$
An example of a path is depicted in Figure 1.
Figure 1. A sample scattering path of a particle with $2$ collisions. The
displacement between consecutive collisions is given by $s_{j}p_{j}$, where
$p_{j}\in{\mathbf{R}}^{d}$ is a momentum vector with constrained kinetic
energy $|p_{j}|^{2}/2$, and $s_{j}\geq 0$ is the time between collisions.
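The path data $\omega=({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})$ can be generated mechanically. The following sketch (with illustrative sampling conventions of ours) draws flight times from the simplex $\\{\sum_{j}s_{j}=t\\}$, resamples momenta on the sphere $|p_{j}|=|p_{0}|$, and computes the scattering locations $y_{j+1}=y_{j}+s_{j}p_{j}$ together with the collision times $t_{b}$ used below.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_path(x, p, t, k):
    """Sample omega = (s, p, y): k collisions in total time t."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    d, speed = len(p), np.linalg.norm(p)
    # Flight times s_0, ..., s_k from the uniform simplex {sum s_j = t}.
    cuts = np.sort(rng.uniform(0.0, t, k))
    s = np.diff(np.concatenate([[0.0], cuts, [t]]))
    # Momenta p_0, ..., p_k with |p_j| = |p_0| (kinetic energy conserved).
    ps = [p]
    for _ in range(k):
        v = rng.standard_normal(d)
        ps.append(speed * v / np.linalg.norm(v))
    # Scattering locations y_{j+1} = y_j + s_j p_j (with y_1 = x + s_0 p_0).
    ys, pos = [], x
    for j in range(k):
        pos = pos + s[j] * ps[j]
        ys.append(pos)
    t_coll = np.cumsum(s[:k])                   # collision times t_1, ..., t_k
    return s, np.array(ps), np.array(ys), t_coll

s, ps, ys, t_coll = sample_path(x=[0.0, 0.0], p=[1.0, 0.0], t=10.0, k=3)
print(ys)
print(t_coll)
```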
Each such path $\omega$ defines an operator $O_{\omega}$ which approximately
acts on wavepackets by
(1.13)
$O_{\omega}\mathinner{|{(x,p)}\rangle}=e^{i\varphi(\omega)}\prod_{j=1}^{k}{\widehat{V_{y_{j}}}}(p_{j}-p_{j-1})\mathinner{|{y_{k}+s_{k}p_{k},p_{k}}\rangle},$
where $\varphi(\omega)$ is a deterministic phase accumulated from the
stretches of free evolution. These phases do not matter for this sketch of the
proof. However, in the actual proof we use stationary phase to ensure that the
geometric constraints $y_{j+1}=y_{j}+s_{j}p_{j}$ are approximately satisfied
and to show that kinetic energy is approximately conserved. Then, at least
formally, we can write out the path integral for the evolution of a wavepacket
$\mathinner{|{\xi}\rangle}$ as
(1.14) $e^{itH}\mathinner{|{\xi}\rangle}=\int
O_{\omega}\mathinner{|{\xi}\rangle}\mathop{}\\!\mathrm{d}\omega.$
We apply this decomposition of the evolution to investigate the approximate
semigroup property for operators of the form
$A=\int_{{{\mathbf{R}}^{2d}}}a(\xi)\mathinner{|{\xi}\rangle}\mathinner{\langle{\xi}|}\mathop{}\\!\mathrm{d}\xi,$
which are local in phase space in the sense that
$A\mathinner{|{\xi}\rangle}\approx a(\xi)\mathinner{|{\xi}\rangle}.$
Using the path integral (1.14) in the definition of ${\mathcal{E}}_{t}[A]$ we
obtain
${\mathcal{E}}_{t}[A]=\int_{{{\mathbf{R}}^{2d}}}\iint{\mathbf{E}\,}O_{\omega^{\prime}}^{*}\mathinner{|{\xi}\rangle}\mathinner{\langle{\xi}|}O_{\omega}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\omega^{\prime}\mathop{}\\!\mathrm{d}\xi.$
What is important for this sketch of the proof is to simply investigate which
pairs of paths $\omega$ and $\omega^{\prime}$ have
${\mathbf{E}\,}O_{\omega^{\prime}}^{*}\mathinner{|{\xi}\rangle}\mathinner{\langle{\xi}|}O_{\omega}\not=0.$
In particular, we are interested in understanding for which sequence of
positions $y_{j},y^{\prime}_{j}$ and impulses $q_{j},q^{\prime}_{j}$ we have
${\mathbf{E}\,}\prod_{j=1}^{k}{\widehat{V_{y_{j}}}}(q_{j})\prod_{j^{\prime}=1}^{k^{\prime}}{\widehat{V_{y^{\prime}_{j^{\prime}}}}}^{*}(q^{\prime}_{j^{\prime}})\not=0.$
Using the fact that $V$ is real and therefore
${\widehat{V_{y}}}^{*}(q)={\widehat{V_{y}}}(-q)$, we can rewrite the
expectation above as
${\mathbf{E}\,}\prod_{b\in[k]\sqcup[k^{\prime}]}{\widehat{V_{y_{b}}}}(q_{b}),$
where $[k]\sqcup[k^{\prime}]$ is shorthand for the doubled index set
$[k]\times\\{0\\}\cup[k^{\prime}]\times\\{1\\}$ and the $q_{b}$ impulses are
reversed for $b=(j,1)$, so in particular
$q_{j,1}=-(p^{\prime}_{j}-p^{\prime}_{j-1})$. Because $V_{y}$ are localized,
the expectation above splits along a partition
$P({\mathbf{y}},{\mathbf{y}}^{\prime})\in{\mathcal{P}}([k]\sqcup[k^{\prime}])$
defined by the clusters of the collision locations $y_{b}$ (so that
$b\sim_{P}b^{\prime}$ when $|y_{b}-y_{b^{\prime}}|{\,\lesssim\,}r$). That is,
we have an identity of the form
${\mathbf{E}\,}\prod_{b\in[k]\sqcup[k^{\prime}]}{\widehat{V_{y_{b}}}}(q_{b})=\prod_{S\in
P({\mathbf{y}},{\mathbf{y}}^{\prime})}{\mathbf{E}\,}\prod_{b\in
S}{\widehat{V_{y_{b}}}}(q_{b}).$
Within each cluster the expectation is zero unless the sum of the impulses is
zero. This “conservation of momentum” condition is a consequence of the
stationarity of the potential and is made rigorous in Lemma B.3. In the case
of Gaussian potentials, this is a consequence of the Wick formula and the
identity
${\mathbf{E}\,}{\widehat{V}}(p){\widehat{V}}(q)={\widehat{R}}(p)\delta(p+q).$
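A discrete analogue of this identity is easy to check. The sketch below (ours) synthesizes a stationary Gaussian field on a periodic one-dimensional grid by coloring white noise with $\sqrt{{\widehat{R}}}$, and verifies the diagonal case $q=-k$, namely ${\mathbf{E}\,}|{\widehat{V}}(k)|^{2}={\widehat{R}}(k)$, up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 256, 4000
k = 2 * np.pi * np.fft.fftfreq(n)

R_hat = np.exp(-(k ** 2))        # a nonnegative spectral density

emp = np.zeros(n)
for _ in range(M):
    g = rng.standard_normal(n)                        # white noise
    V = np.fft.ifft(np.sqrt(R_hat) * np.fft.fft(g)).real
    emp += np.abs(np.fft.fft(V)) ** 2 / n             # empirical spectrum
emp /= M

print(np.max(np.abs(emp - R_hat)))   # -> 0 as M grows (sampling error)
```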
We are led to the following purely geometric constraints on the pair of paths
$\omega,\omega^{\prime}$.
###### Definition 1.2.
Two paths $\omega,\omega^{\prime}$ are said to be _compatible_ if the
partition $P({\mathbf{y}},{\mathbf{y^{\prime}}})$ has no singletons and
$\sum_{b\in S}q_{b}=0$
for every $S\in P({\mathbf{y}},{\mathbf{y^{\prime}}})$.
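Since compatibility is a purely geometric condition, it can be checked mechanically. The sketch below (ours; it requires SciPy, and single-linkage clustering at scale $r$ stands in for the relation $|y_{b}-y_{b^{\prime}}|{\,\lesssim\,}r$) builds the partition $P({\mathbf{y}},{\mathbf{y}}^{\prime})$ and tests the two conditions of Definition 1.2.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def is_compatible(ys, qs, ys_prime, qs_prime, r, tol=1e-9):
    """Check Definition 1.2: cluster all collision locations at scale r,
    then require no singleton clusters and zero net impulse per cluster.
    Impulses of the primed path enter with reversed sign."""
    locations = np.vstack([ys, ys_prime])
    impulses = np.vstack([qs, -np.asarray(qs_prime, float)])
    labels = fcluster(linkage(locations, method="single"),
                      t=r, criterion="distance")
    for c in np.unique(labels):
        members = labels == c
        if members.sum() < 2:
            return False                       # singleton cluster
        if np.linalg.norm(impulses[members].sum(axis=0)) > tol:
            return False                       # momentum not conserved
    return True

# A toy check: two copies of the same 2-collision path are compatible.
ys = np.array([[1.0, 0.0], [2.0, 1.0]])
qs = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(is_compatible(ys, qs, ys, qs, r=0.1))    # True
```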
#### 1.3.2. Geometric characterization of the error term
This geometric notion of compatible paths is perhaps the most significant idea
in the proof. Indeed, the point is that we can usefully manipulate the path
integral before computing an expectation. In other words, we will first
decompose the path integral according to geometry and then allow the geometry
to dictate the combinatorics of the diagrams.
Now, we take a step back to appreciate which paths contribute to the error
term in the semigroup property. As the discussion above indicates, the
semigroup property compares the evolution ${\mathcal{E}}_{t}$ with a fixed
potential $V$ to the evolution ${\mathcal{E}}_{t/2}\circ{\mathcal{E}}_{t/2}$
during which the potential is refreshed from $V_{1}$ to $V_{2}$ at time $t/2$.
To keep track of which potential a scattering event sees we introduce the
collision times $t_{b}$, defined by $t_{0}=0$ and $t_{b+1}=t_{b}+s_{b}$. The
following condition suffices to ensure that the pair of paths
$(\omega,\omega^{\prime})$ has an expected amplitude that is unaffected by the
possibility of a time-dependent potential.
###### Definition 1.3.
Two paths $\omega,\omega^{\prime}$ are _time-consistent_ if
$t_{b}=t_{b^{\prime}}$ for all pairs of indices
$b,b^{\prime}\in[k]\sqcup[k^{\prime}]$ such that $y_{b}=y_{b^{\prime}}$. Pairs
$(\omega,\omega^{\prime})$ that are not time-consistent are said to be _time-
inconsistent_.
Observe that $\omega$ is compatible with itself and is also time-consistent.
We will show that in fact the pairs $(\omega,\omega^{\prime})$ in which
$\omega^{\prime}=\omega$, or in which $\omega^{\prime}$ is a small perturbation of $\omega$,
form the bulk of the contribution to ${\mathcal{E}}_{t}$.
To understand the error term, we characterize pairs of paths
$(\omega,\omega^{\prime})$ which are compatible but time-inconsistent. A
simple way that a pair $(\omega,\omega^{\prime})$ could be time-inconsistent
is if either $\omega$ or $\omega^{\prime}$ have a recollision. A simple
example of a pair of compatible paths which are time-inconsistent due to a
recollision is depicted in Figure 2.
Figure 2. An example of a pair of paths with a recollision. The paths $\omega$
and $\omega^{\prime}$ are depicted at the top left and top right, and the
collisions are colored according to the cluster in the partition
$P(\omega,\omega^{\prime})$. On the bottom, an abstract depiction of the
partition $P(\omega,\omega^{\prime})$. Notice that there is a red cluster with
$4$ collisions, two from $\omega$ and two from $\omega^{\prime}$.
There is another geometric feature which we call a “tube event”, which occurs
when three collisions are collinear (more generally, when they lie on a narrow
tube). Tube events can also lead to time inconsistencies, as depicted in
Figure 3.
Figure 3. An example of a tube event. Note that there are three collisions
lying on a line, but neither $\omega$ nor $\omega^{\prime}$ forms a
recollision. Nonetheless, the collisions are time-inconsistent because the
second collision of $\omega$ coincides with the fourth collision of
$\omega^{\prime}$, as shown in the diagram of $P(\omega,\omega^{\prime})$
below.
We note that the estimation of the contribution of these special geometric
features replaces the need for crossing estimates such as the ones studied in
[23]. Our diagrams are bounded using relatively simple-minded volumetric
considerations (essentially, after taking care of deterministic cancellations
in the integral, we use the triangle inequality and account for the
contribution of each degree of freedom). This simple-minded approach works
particularly well for subkinetic timescales
$\tau{\,\lesssim\,}{\varepsilon}^{-2+\kappa/2}$ in which one only needs
$k_{max}=O(\kappa^{-1})$ collisions in the series expansion to approximate
$e^{i\tau H}$ and therefore all combinatorial factors are bounded by a (very
large) absolute constant.
The general strategy of the proof therefore is as follows:
1. Classify the geometric behaviors that can lead to time-inconsistencies.
2. Partition the path integral into paths with bad behaviors and paths without bad behaviors.
3. Use geometric estimates to bound the operator norm of the contribution of the bad paths.
The main new feature of this proof strategy is that the path integral is
partitioned before the expectation is computed. That is, we do not decompose
the expectation until we already have some information about the partition
$P(\omega,\omega^{\prime})$. This is in contrast to the traditional approach
used in [29, 16, 15, 18] which is summarized below.
1. Expand the expectation using the Wick rule or a cumulant expansion.
2. Partition the diagrams according to complexity by a combinatorial criterion.
3. Use oscillatory integral estimates to bound the contributions of the bad diagrams.
#### 1.3.3. Reaching the diffusive timescale
To reach the diffusive timescale we prove a semigroup property of the form
${\mathcal{E}}_{N\tau}\approx{\mathcal{E}}_{\tau}^{N}$
where $\tau={\varepsilon}^{-2+\kappa/2}$ and $N={\varepsilon}^{-\kappa}$. The
challenge we face in trying to understand the evolution operator $e^{itH}$ for
times $t\sim{\varepsilon}^{-2-\delta}$ is that one needs to resolve at least
${\varepsilon}^{-\delta}$ collisions. This requires a path integral in a space
of dimension ${\varepsilon}^{-\delta}$. If we then try to use crude estimates
to bound the contribution of the terms in the Duhamel expansion we may lose a
factor of $C^{{\varepsilon}^{-\delta}}$. What we need to do is take into
account cancellations that occur between the terms in the Duhamel expansion.
In [16, 15] this is done by renormalizing the propagator. This is equivalent
to viewing $H=-\Delta/2+{\varepsilon}V$ not as a perturbation of the free
evolution $-\Delta/2$ but as a perturbation of
$-\Delta/2+{\varepsilon}^{2}\Theta$ where $\Theta$ is a multiplier operator
that takes into account the effect of immediate recollisions. The multiplier
$\Theta$ has a nonzero imaginary part so that the renormalized free propagator
$e^{it(\Delta/2-{\varepsilon}^{2}\Theta)}$ decays
exponentially in time. This exponential decay exactly matches the exponential
growth in the volume of the path integral. The value of $\Theta$ is also
chosen so that a precise cancellation occurs in diagrams with immediate
recollisions.
We take an alternative approach to resummation. The idea is that we first
write
$e^{itH}=e^{iN\tau H}=e^{i\tau H}\cdots e^{i\tau H},$
where $N\sim{\varepsilon}^{-\kappa}$ and
$\tau\sim{\varepsilon}^{-2+\kappa/2}$. Each of the terms $e^{i\tau H}$ is
expanded as a Duhamel series of $k_{max}$ terms. We then partition the
resulting path integral into pieces depending on geometric features of the
paths and decompose the expectation using this geometric information. When
this is done we resum the terms in the Duhamel series corresponding to
segments that do not have any geometrically special collisions. This can be
interpreted as a way of writing the evolution channel ${\mathcal{E}}_{N\tau}$
as a perturbation of the refreshed evolution channel
${\mathcal{E}}_{\tau}^{N}$. This seems to be a more general strategy for
deriving kinetic limits – the resummation procedure is dictated by the desired
semigroup structure.
Another important point of comparison concerns the diagrammatic expansion we
derive to reach the diffusive time scale. In both this paper and in [15, 16]
one expands the solution as a sum over diagrams which are stratified in some
way by combinatorial complexity. In [15, 16] the more complex diagrams contain
more opportunities to find decay via crossing estimates, which are nontrivial
bounds on oscillatory integrals. In this paper, we first split the path
integral itself according to geometric complexity and then bound the
combinatorial complexity of the diagrams associated to paths of a fixed
geometric complexity. The difference between these approaches is summarized in
Figure 4.
Geometric complexity of paths (rows) vs. combinatorial complexity of diagrams (columns):

| | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | ${\varepsilon}^{c}N^{C}$ | - | - | - |
| 2 | ${\varepsilon}^{2c}N^{C}$ | ${\varepsilon}^{2c}N^{2C}$ | - | - |
| 3 | ${\varepsilon}^{3c}N^{C}$ | ${\varepsilon}^{3c}N^{2C}$ | ${\varepsilon}^{3c}N^{3C}$ | - |
| 4 | ${\varepsilon}^{4c}N^{C}$ | ${\varepsilon}^{4c}N^{2C}$ | ${\varepsilon}^{4c}N^{3C}$ | ${\varepsilon}^{4c}N^{4C}$ |
Figure 4. A cartoon of the contribution of various diagrams. Diagrams have a
combinatorial complexity, and there are at most $N^{Ck}$ diagrams having
complexity exactly $k$. Moreover paths have a geometric complexity, and the
volume of the paths with geometric complexity $k$ is ${\varepsilon}^{ck}$. The
approach taken in [15, 16] is to sort diagrams by combinatorial complexity and
then show that the only contributions to diagrams with high combinatorial
complexity also have high geometric complexity. In this paper, we first sort
the paths by geometric complexity and show that paths with low geometric
complexity only contribute to diagrams with low combinatorial complexity. In
summary, we sum along the rows of this table whereas previous works proceed by
summing over the columns. Note that as long as
$N\ll{\varepsilon}^{-c^{\prime}}$ the sum of the contributions is small.
#### 1.3.4. More explanation of the diagrammatic expansion
To reach subkinetic times we used the following crude idea to verify the
approximate semigroup property: either the pair of paths
$(\omega,\omega^{\prime})$ has a nontrivial geometric event, or it does not.
If there is a nontrivial geometric event, we use the triangle inequality
inside the path integral and the geometric information about the event to pick
up a factor of ${\varepsilon}^{c}$, which is small enough to suppress the
large constant appearing from two inefficiencies in our argument. The first
inefficiency is to fail to take into account precise cancellations in the path
integral, which costs us a factor of $C^{k}$ where $k$ is the number of
collisions. The second inefficiency is the failure to take into account the
combinatorial constraints imposed on the collision partition. The constraints
come from “negative information” about the path – as an oversimplification, if
a collision index $b$ is not part of a tube event or a recollision event, then
it must form part of a ladder or anti-ladder. By failing to take into account
this information, we bound the number of partitions we must sum over by a
large combinatorial factor $(Ck_{max})^{k_{max}}$ rather than a factor that
depends on the precise geometric constraints on the path.
To reach diffusive times we must make our bounds more efficient on both
fronts. To perform our resummation, we introduce in Section 5 the notion of an
“extended path”, which is a path formed from $N$ segments each describing the
evolution of the particle on an interval of length $\tau$. An extended path is
a sequence of path segments with phase space points in between consecutive
segments,
$\Gamma=(\xi_{0},\omega_{1},\xi_{1},\xi_{2},\omega_{2},\xi_{3},\cdots,\xi_{2\ell-2},\omega_{\ell},\xi_{2\ell-1},\cdots,\xi_{2N-2},\omega_{N},\xi_{2N-1}).$
An example of an extended path is drawn in Figure 5.
Figure 5. A depiction of an extended path $\Gamma$. The segment $\omega_{j}$
describes a piecewise linear path between the endpoints $\xi_{2j-2}$ and
$\xi_{2j-1}$. The $\xi_{j}$ variables are phase space pairs $(x,p)$ describing
the position and momentum of the particle at the boundary of the path
segments.
Given an extended path we define an operator $O_{\Gamma}$ by
$O_{\Gamma}=\mathinner{|{\xi_{0}}\rangle}\mathinner{\langle{\xi_{2N-1}}|}\prod_{\ell=1}^{N}\mathinner{\langle{\xi_{2\ell-2}|O_{\omega_{\ell}}|\xi_{2\ell-1}}\rangle},$
so that including the sum over all possible collision numbers $0\leq k_{j}\leq
k_{max}$ of each segment in the integral, we have
$e^{iN\tau H}\approx\int O_{\Gamma}\mathop{}\\!\mathrm{d}\Gamma,$
where there is an error term in the approximation that is described in Section
3.
To write down the evolution channel ${\mathcal{E}}_{N\tau}$ we therefore
arrive at an integral of the form
(1.15)
${\mathcal{E}}_{N\tau}[A]\approx\int{\mathbf{E}\,}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}.$
Before we take the expectation, we will split up the pairs of paths
$(\Gamma^{+},\Gamma^{-})$ according to their geometric properties. The key
result we will need is a description of the structure of the correlation
partition of the paths in terms of the geometric features. This is done in
Section 6, which characterizes the allowed partitions using ladders and
anti-ladders. Here we simply provide a quick sketch. An example of a ladder
partition on the disjoint union $[k]\sqcup[k]=[k]\times\\{+,-\\}$ is
$P_{lad}=\\{(j,+),(j,-)\\}_{j\in[k]}.$
An example of an anti-ladder partition on $[k]\sqcup[k]$ is the partition
$P_{anti}=\\{(j,+),(k+1-j,-)\\}_{j\in[k]}.$
The ladder and anti-ladder partitions are drawn in Figure 6.
Figure 6. An example of a ladder and an anti-ladder partition with five rungs.
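For concreteness, both partitions can be generated for any $k$; the following two-line sketch (ours) does exactly that.

```python
def ladder(k):
    """P_lad: rung j pairs (j, '+') with (j, '-')."""
    return [{(j, '+'), (j, '-')} for j in range(1, k + 1)]

def anti_ladder(k):
    """P_anti: (j, '+') is paired with (k + 1 - j, '-')."""
    return [{(j, '+'), (k + 1 - j, '-')} for j in range(1, k + 1)]

print(ladder(3))       # rungs (1,1), (2,2), (3,3)
print(anti_ladder(3))  # rungs (1,3), (2,2), (3,1)
```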
The main result of Section 6 is Lemma 6.16, which states that collisions that
are not part of a geometric feature (so-called “typical collisions”) form part
of either a ladder or an anti-ladder structure in the collision partition of
$(\Gamma^{+},\Gamma^{-})$. Figure 7 illustrates the main result in a special
case.
Figure 7. An example illustrating Lemma 6.16. On the top, a path that has a
single recollision event. At the bottom, an example of a collision partition
compatible with this single recollision event. Note that every collision that
is not part of the recollision is either part of a ladder or an anti-ladder.
The next step is to partition the path integral according to the geometric
information of the paths, which we encapsulate in a structure that we call a
“skeleton” ${\mathcal{F}}(\Gamma^{+},\Gamma^{-})$. Given a skeleton
${\mathcal{F}}$, Lemma 6.16 allows us to construct a set of partitions
${\mathcal{Q}}_{\mathcal{F}}$ such that for any pair of paths
$(\Gamma^{+},\Gamma^{-})$ with
${\mathcal{F}}(\Gamma^{+},\Gamma^{-})={\mathcal{F}}$, the collision partition
$P(\Gamma^{+},\Gamma^{-})\in{\mathcal{Q}}_{\mathcal{F}}$. In fact, we have the
stronger statement that for such pairs of paths,
(1.16)
${\mathbf{E}\,}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}}=\sum_{P\in{\mathcal{Q}}({\mathcal{F}})}{\mathbf{E}\,}_{P}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}},$
where the sum is over partitions of the collision indices of
$(\Gamma^{+},\Gamma^{-})$, and ${\mathbf{E}\,}_{P}$ is shorthand for a
splitting of the expectation along the partition $P$.
Writing ${\mathbf{1}}_{{\mathcal{F}}}(\Gamma^{+},\Gamma^{-})$ for the
indicator function that ${\mathcal{F}}(\Gamma^{+},\Gamma^{-})={\mathcal{F}}$,
we then attain the following decomposition for the path integral:
$\displaystyle{\mathcal{E}}_{N\tau}[A]$
$\displaystyle=\int{\mathbf{E}\,}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}$
$\displaystyle=\sum_{{\mathcal{F}}}\int{\mathbf{1}}_{{\mathcal{F}}}(\Gamma^{+},\Gamma^{-}){\mathbf{E}\,}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}$
$\displaystyle=\sum_{{\mathcal{F}}}\sum_{P\in{\mathcal{Q}}_{\mathcal{F}}}\int{\mathbf{1}}_{{\mathcal{F}}}(\Gamma^{+},\Gamma^{-}){\mathbf{E}\,}_{P}(O_{\Gamma^{+}})^{*}AO_{\Gamma^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}.$
The benefit of decomposing the path integral in this way is that the
expectation ${\mathbf{E}\,}_{P}$ splits in a known way. On the other hand
there is now the challenge of dealing with the indicator function
${\mathbf{1}}_{\mathcal{F}}$. The reason this indicator function causes a
problem is not the discontinuity (this could be solved by using a smoother
partition of unity) but rather the global nature of the constraints. In
particular, ${\mathbf{1}}_{{\mathcal{F}}}(\Gamma^{+},\Gamma^{-})$ includes a
product of indicator functions for each negative constraint, that is each pair
of collisions that does not form a recollision or a tube event. The negative
constraints are needed to be able to apply Lemma 6.16, but they make it
difficult to exploit the cancellations needed. To get around this we use a
special form of the inclusion-exclusion principle that is tailored to this
purpose. In particular, in Section 7 we decompose the indicator function
${\mathbf{1}}_{\mathcal{F}}$ in the form
(1.17)
${\mathbf{1}}_{{\mathcal{F}}}=\sum_{{\mathcal{F}}^{\prime}\geq{\mathcal{F}}}G_{{\mathcal{F}},{\mathcal{F}}^{\prime}},$
where we impose a partial ordering $\leq$ on skeletons, and where
$G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}$ is supported on the set of pairs
$(\Gamma^{+},\Gamma^{-})$ such that
${\mathcal{F}}(\Gamma^{+},\Gamma^{-})\geq{\mathcal{F}}^{\prime}$. In the
decomposition (1.17), the terms $G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}$
depend only on the variables involving collisions that are in the support of
the skeleton ${\mathcal{F}}^{\prime}$ (that is, collisions involved in a
recollision, a cone event, or a tube event).
A challenge is to find a better way to handle the sum over partitions in
${\mathcal{Q}}_{\mathcal{F}}$. For this we introduce the concept of colored
operators. Given a “coloring” function $\chi$ which assigns a unique color to
each collision in $\Gamma$, we define the colored operator $O_{\Gamma}^{\chi}$
to be an analogue of $O_{\Gamma}$ which replaces each instance of the
potential $V$ with an appropriately chosen independent copy of $V$. Then given
a skeleton ${\mathcal{F}}$, we construct two sets of colors
$\Psi^{+}({\mathcal{F}})$ and $\Psi^{-}({\mathcal{F}})$ so that
$\sum_{P\in{\mathcal{Q}}({\mathcal{F}})}{\mathbf{E}\,}_{P}O_{\Gamma^{+}}^{*}AO_{\Gamma^{-}}=\sum_{\chi^{+}\in\Psi^{+}({\mathcal{F}})}\sum_{\chi^{-}\in\Psi^{-}({\mathcal{F}})}{\mathbf{E}\,}(O_{\Gamma^{+}}^{\chi^{+}})^{*}AO_{\Gamma^{-}}^{\chi^{-}}=:{\mathbf{E}\,}(O_{\Gamma^{+}}^{\Psi^{+}})^{*}AO_{\Gamma^{-}}^{\Psi^{-}}.$
The precise definition of colored operators is given in Section 8, and the
construction of colorings that reproduce the partition collection
${\mathcal{Q}}_{\mathcal{F}}$ is done in Section 9. The benefit of writing the
expectation in this way is that we can use the “operator Cauchy-Schwarz”
inequality
$\|{\mathbf{E}\,}X^{*}AY\|_{op}\leq\|A\|_{op}\|{\mathbf{E}\,}X^{*}X\|_{op}^{1/2}\|{\mathbf{E}\,}Y^{*}Y\|_{op}^{1/2}$
where $X$ and $Y$ are random operators, to simplify the estimation of the
contribution from paths with skeleton ${\mathcal{F}}$. The result of this
Cauchy-Schwarz procedure is depicted in Figure 8.
Figure 8. On the left, a partition with a collision coloring chosen so that
matched collisions have the same color. When the operator Cauchy-Schwarz
inequality is applied, a copy of the top and bottom rows are produced and
matched to each other, converting the anti-ladder portion of the partition
into a ladder (right).
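The operator Cauchy-Schwarz inequality itself is elementary, and a finite-dimensional Monte Carlo sanity check (ours, with illustrative dimensions) is immediate; here $X^{*}=X^{T}$ since the samples are real.

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 20, 500

G = rng.standard_normal((n, n))
A = (G + G.T) / 2                                  # a fixed observable

samples = [(rng.standard_normal((n, n)), rng.standard_normal((n, n)))
           for _ in range(M)]

EXAY = sum(X.T @ A @ Y for X, Y in samples) / M    # E X* A Y
EXX = sum(X.T @ X for X, _ in samples) / M         # E X* X
EYY = sum(Y.T @ Y for _, Y in samples) / M         # E Y* Y

lhs = np.linalg.norm(EXAY, 2)
rhs = (np.linalg.norm(A, 2)
       * np.linalg.norm(EXX, 2) ** 0.5
       * np.linalg.norm(EYY, 2) ** 0.5)
print(lhs <= rhs, lhs, rhs)                        # the inequality holds
```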
More precisely, we will apply the operator Cauchy-Schwarz inequality to the
operator
$\displaystyle{\mathcal{E}}_{{\mathcal{F}},{\mathcal{F}}^{\prime}}[A]$
$\displaystyle:=\int
G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}(\Gamma^{+},\Gamma^{-})\sum_{P\in{\mathcal{Q}}_{\mathcal{F}}}{\mathbf{E}\,}_{P}(O_{\Gamma^{+}})^{*}AO_{\Gamma^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}$
$\displaystyle=\int
G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}(\Gamma^{+},\Gamma^{-}){\mathbf{E}\,}(O_{\Gamma^{+}}^{\Psi^{+}})^{*}AO_{\Gamma^{-}}^{\Psi^{-}}\mathop{}\\!\mathrm{d}\Gamma^{+}\mathop{}\\!\mathrm{d}\Gamma^{-}.$
To do this we first split $G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}$ as a
mixture of product functions of the form
(1.18) $G_{{\mathcal{F}},{\mathcal{F}}^{\prime}}(\Gamma^{+},\Gamma^{-})=\int
H(\theta)\chi_{{\mathcal{F}},{\mathcal{F}}^{\prime},\theta}^{+}(\Gamma^{+})\chi_{{\mathcal{F}},{\mathcal{F}}^{\prime},\theta}^{-}(\Gamma^{-})\mathop{}\\!\mathrm{d}\theta.$
We also decompose the coloring sets
$\Psi^{\pm}({\mathcal{F}},{\mathcal{F}}^{\prime})$ into carefully chosen
components which are specified by a data structure called a scaffold, so we
decompose
$\Psi^{\pm}({\mathcal{F}},{\mathcal{F}}^{\prime})=\bigcup_{\operatorname{Scaff}\in{\mathcal{D}}^{\pm}({\mathcal{F}},{\mathcal{F}}^{\prime})}\Psi^{\pm}(\operatorname{Scaff}).$
Then by applying the operator Cauchy-Schwarz inequality we arrive at the
estimate
(1.19)
$\begin{split}\|{\mathcal{E}}_{{\mathcal{F}},{\mathcal{F}}^{\prime}}[A]\|_{op}&\leq\|A\|_{op}(\\#{\mathcal{D}}^{+}({\mathcal{F}},{\mathcal{F}}^{\prime}))\sum_{\operatorname{Scaff}\in{\mathcal{D}}^{+}({\mathcal{F}},{\mathcal{F}}^{\prime})}\big{\|}\int|H(\theta)|{\mathbf{E}\,}(X^{+}_{\operatorname{Scaff},\theta})^{*}(X^{+}_{\operatorname{Scaff},\theta})\mathop{}\\!\mathrm{d}\theta\big{\|}_{op}^{1/2}\\\
&\qquad(\\#{\mathcal{D}}^{-}({\mathcal{F}},{\mathcal{F}}^{\prime}))\sum_{\operatorname{Scaff}\in{\mathcal{D}}^{-}({\mathcal{F}},{\mathcal{F}}^{\prime})}\big{\|}\int|H(\theta)|{\mathbf{E}\,}(X^{-}_{\operatorname{Scaff},\theta})^{*}(X^{-}_{\operatorname{Scaff},\theta})\mathop{}\\!\mathrm{d}\theta\big{\|}_{op}^{1/2},\end{split}$
where
$X^{\pm}_{\operatorname{Scaff},\theta}:=\int\chi_{{\mathcal{F}},{\mathcal{F}}^{\prime},\theta}^{\pm}(\Gamma)\sum_{\psi\in\Psi^{\pm}(\operatorname{Scaff})}O_{\Gamma}^{\psi}\mathop{}\\!\mathrm{d}\Gamma.$
This calculation involving the Cauchy-Schwarz inequality is done more
carefully in Section 10.
The point of defining the scaffolds is that we can arrange that the operators
${\mathbf{E}\,}(X^{\pm}_{\operatorname{Scaff},\theta})^{*}(X^{\pm}_{\operatorname{Scaff},\theta})$
involve sums over partitions that are formed only from ladders, not
anti-ladders. The point is that ladder partitions have a semigroup structure in the
sense that the concatenation of two ladders is a ladder. We use this structure
to more easily use exact cancellations. More precisely, we use the fact that
the ladder partitions form a good approximation to the evolution
${\mathcal{E}}_{\tau}$ at subkinetic times and the fact that
${\mathcal{E}}_{\tau}$ is a contraction in operator norm in order to obtain
the bounds we need. These bounds are proven in Section 12.
We also point out that the use of Cauchy-Schwarz in this way is lossy, but it
only loses a factor of $N^{C\|{\mathcal{F}}\|}$. We can afford to lose this
factor because the operator norm appearing in the right hand side (1.19) will
have order ${\varepsilon}^{c\|{\mathcal{F}}\|}$. Roughly this is because we
obtain a factor of ${\varepsilon}$ for each special geometric event described
in ${\mathcal{F}}$. A careful argument is needed to ensure that one can obtain
additional factors of ${\varepsilon}$ for each recollision (say) in an
integral over paths containing multiple recollisions, and this is done in
Section 11.
### 1.4. An abbreviated review of related works
Here we point out some related works, making no attempt at providing a
complete review of the rich field of dynamics in random fields.
The rigorous study of the random Schrödinger equation began with the
previously mentioned work of H. Spohn [29]. Spohn’s
analysis was extended to the kinetic time in [18] and then to diffusive times
in [15, 16]. Each of these papers considers the convergence in mean of the
Wigner function to the solution of a kinetic equation. A natural question is
to understand the size of the fluctuations of the Wigner function. An analysis
was carried out by T. Chen in [8] which showed that in fact the $r$-th
moments of the Wigner function are bounded for any $r<\infty$. Chen’s analysis
was later improved by M. Butz in [7]. We also point out the work of J.
Lukkarinen and H. Spohn in [24], which shows that the diagrammatic methods
applied to the Schrödinger equation can also be used to derive kinetic
equations for a random wave equation. See the review [3] for a more complete
discussion of the kinetic regime for waves in a random medium.
Another regime of interest is the homogenization regime, in which the
wavelength of the initial condition is substantially longer than the
decorrelation length scale of the potential. This was studied by G. Bal and N.
Zhang in [31], where a homogenized equation with a constant effective
potential is shown to describe the evolution of the average wave function.
This limit was further studied by T. Chen, T. Komorowski, and L. Ryzhik in
[9]. An entirely different approach to the study of the average wave function
in the kinetic regime was introduced by M. Duerinckx and C. Shirley in [12].
There the authors use ideas from spectral theory to understand the evolution
operator, and are able to show with this method that the average wave function
decays exponentially on the kinetic time scale.
The high frequency regime, in which the wavelength of the initial condition is
much shorter than the decorrelation length of the potential, was considered by
G. Bal, T. Komorowski, and L. Ryzhik in [2]. There the authors derive a
Fokker-Planck equation for the evolution of the Wigner function.
The study of the random Schrödinger equation falls into a larger body of work
on understanding the emergence of apparently irreversible phenomena from
reversible dynamics [28]. From this point of view, the random Schrödinger
equation is simply the one-particle quantum mechanical manifestation of a
larger phenomenon.
The classical version is the stochastic acceleration problem given by the
ordinary differential equation
$\ddot{x}=-{\varepsilon}\nabla V(x)$
where again $V$ is a stationary random potential. Diffusive behavior for the
stochastic acceleration problem was first demonstrated by H. Kesten and G.
Papanicolaou in [20] for dimensions $d\geq 3$. For a special class of
potentials their argument was then applied to the two dimensional case by D.
Dürr, S. Goldstein, and J. Lebowitz in [13], and then T. Komorowski and L.
Ryzhik lifted the restriction on the potentials in [21]. The argument used by
Kesten and Papanicolaou inspired the semigroup approach taken in this paper.
The connection is that Kesten and Papanicolaou define a modified version of the
stochastic acceleration problem in which unwanted correlations are ruled out
by fiat. They then show that this modified evolution is unlikely to have
recollisions after all, and therefore is close to the original evolution. In a
similar way we define an evolution (the refreshed evolution
${\mathcal{E}}_{s}^{m}$) which removes unwanted correlations and use
properties of this evolution to study the true evolution ${\mathcal{E}}_{ms}$.
Although this is where the similarities end, it does seem that a further
unification of the proof techniques may be possible one day.
There are a number of other classical models of particles moving in a random
environment. A popular model is the Lorentz gas, in which a billiard travels
through ${\mathbf{R}}^{d}$ with some obstacles placed according to a Poisson
process. A pioneering paper in the study of the Lorentz gas is [4] where a
linear Boltzmann equation is derived in the Boltzmann-Grad limit of the model.
A review of this model is provided in [11]. We refer the reader also to some
exciting recent developments in this field [1, 22, 26]. It seems that the
classical models of particles moving in random environment contain many of the
same difficulties of understanding the quantum evolution. A deeper
understanding of the phase-space path integral may lead us to a better
understanding of the relationship between the classical and quantum problems.
The random Schrödinger equation is also closely related to wave-kinetic theory,
in which one studies the evolution of random waves with a nonlinear
interaction (see [27] for a physically-motivated introduction to this theory).
A pioneering work in this field is the paper of Lukkarinen and Spohn [25], in
which a wave kinetic equation is derived for the nonlinear Schrödinger
equation for initial conditions that are perturbations of an equilibrium
state. In a series of works [19, 5, 10, 6] a wave kinetic equation was derived
for the nonlinear Schrödinger equation on a torus with more general initial
conditions. Independently, in [30] a wave kinetic equation was derived for the
Zakharov-Kuznetsov equation on ${\mathbf{R}}^{d}$ for $d\geq 2$. Each of these
works follows the traditional strategy of writing out a diagrammatic expansion
for the solution and finding sources of cancellation in the error terms and
comparing the main terms to a perturbative expansion of the kinetic equation.
It seems possible that the wavepacket decomposition used in this paper and the
approximate-semigroup argument could be used to make further progress in
wave-kinetic theory.
### 1.5. Acknowledgements
The author is very grateful to Lenya Ryzhik for years of support, advice, and
many clarifying discussions. The author also warmly thanks Minh-Binh Tran for
many helpful conversations about the paper. The author is supported by the
Fannie and John Hertz Foundation.
## 2\. More detailed outline of the proof
In this section we lay out the main lemmas used to prove Theorem 1.1.
The proof involves analysis of three time scales. The first time scale is the
time ${\varepsilon}^{-1.5}$ during which the particle is unlikely to scatter
at all and in particular is unlikely to experience more than one scattering
event. The main result we need from this time scale shows that the linear
Boltzmann equation agrees with the evolution ${\mathcal{E}}_{s}$ with an error
that is very small in operator norm. This calculation is standard and is
reproduced in Appendix A for the sake of completeness. The calculation only
involves two terms from the Duhamel expansion of $e^{-itH}$ so there are no
combinatorial difficulties.
###### Proposition 2 (short times).
There exists $\theta(d)>0$
such that the following holds: Let $a_{0}\in C^{2d+1}$ be an observable
supported on the set $\\{(x,p)\mid|p|>{\varepsilon}^{\theta}\\}$, and suppose
that $a_{s}$ solves the linear Boltzmann equation (1.8). Then for
$\sigma\leq{\varepsilon}^{-1.5}$,
(2.1)
$\|\operatorname{Op}(a_{\sigma})-{\mathcal{E}}_{\sigma}[\operatorname{Op}(a_{0})]\|_{op}{\,\lesssim\,}{\varepsilon}^{2.1}\sigma\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.25}}}.$
To use Proposition 2 along with the semigroup approximation strategy, we need
the following regularity result for the short time evolution of the linear
Boltzmann equation.
###### Lemma 2.1.
There exists $\theta=\theta(d)>0$ such that the following holds: Let $a_{s}$
solve the linear Boltzmann equation (1.8) and
$\operatorname{supp}a_{0}\subset\\{(x,p)\mid|p|\geq{\varepsilon}^{\theta}\\}$.
Then for $s\leq{\varepsilon}^{-2.05}$,
$\|a_{s}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.25}}}\leq
C\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}.$
Lemma 2.1 is proven with a simple and suboptimal argument in Appendix D, where
we prove a slightly stronger version in Lemma D.1.
Using Proposition 2 and Lemma 2.1 we can prove that the “$\sigma$-refreshed”
evolution ${\mathcal{E}}_{\sigma}^{m}$ approximates the linear Boltzmann
equation up to a diffusive timescale.
###### Corollary 2.2.
For $\sigma={\varepsilon}^{-1.5}$ and $m\in{\mathbb{N}}$ such that
$m\sigma\leq{\varepsilon}^{-2.05}$, and $a_{s}$ solving (1.8),
(2.2)
$\|{\mathcal{E}}_{\sigma}^{m}[\operatorname{Op}(a_{0})]-\operatorname{Op}(a_{m\sigma})\|_{op}\leq
C{\varepsilon}^{2.1}m\sigma\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}.$
###### Proof.
We define the quantity
$F_{j}:=\sup_{\|a_{0}\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.5}}}=1}\|{\mathcal{E}}_{\sigma}^{j}[\operatorname{Op}(a_{0})]-\operatorname{Op}(a_{j\sigma})\|_{op}.$
By Proposition 2, $F_{1}\leq C{\varepsilon}^{2.1}\sigma$. To obtain a bound
for $F_{j+1}$ from $F_{j}$ we write
$\|{\mathcal{E}}_{\sigma}^{j+1}[\operatorname{Op}(a_{0})]-\operatorname{Op}(a_{(j+1)\sigma})\|_{op}\leq\|{\mathcal{E}}_{\sigma}[\operatorname{Op}(a_{j\sigma})]-\operatorname{Op}(a_{(j+1)\sigma})\|_{op}+\|{\mathcal{E}}_{\sigma}^{j+1}[\operatorname{Op}(a_{0})]-{\mathcal{E}}_{\sigma}[\operatorname{Op}(a_{j\sigma})]\|_{op}.$
The first quantity is bounded using Proposition 2 and Lemma 2.1. The second
term is bounded by $F_{j}$ using the fact that ${\mathcal{E}}_{\sigma}$ is
linear and is a contraction in the operator norm:
$\|{\mathcal{E}}_{\sigma}[A]\|_{op}=\|{\mathbf{E}\,}e^{i\sigma H}Ae^{-i\sigma
H}\|_{op}\leq{\mathbf{E}\,}\|e^{i\sigma H}Ae^{-i\sigma H}\|_{op}=\|A\|_{op}.$
Therefore we obtain the bound
$F_{j+1}\leq F_{j}+C{\varepsilon}^{2.1}\sigma.$
In particular,
$F_{m}\leq C{\varepsilon}^{2.1}m\sigma,$
so (2.2) follows. ∎
The more substantial component of the proof of Theorem 1.1 is the approximate
semigroup property relating the “refreshed” evolution
${\mathcal{E}}_{\sigma}^{M}$ to the correct evolution channel
${\mathcal{E}}_{M\sigma}$. For the purposes of proving an approximate
semigroup property it is more convenient to work with the “wavepacket
quantization” defined by
$\operatorname{Op}(a):=\int_{{{\mathbf{R}}^{2d}}}\mathinner{|{\xi}\rangle}\mathinner{\langle{\xi}|}a(\xi)\mathop{}\\!\mathrm{d}\xi.$
In Appendix C we will show that the wavepacket quantization is close to the
Weyl quantization, in the sense that
$\|\operatorname{Op}(a)-\operatorname{Op}^{w}(a)\|_{op}\leq{\varepsilon}^{0.05}\|a\|_{C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-0.1}}}.$
In general, we will be interested in operators of the form
$A=\int\mathinner{|{\xi}\rangle}\mathinner{\langle{\eta}|}a(\xi,\eta)\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta$
with kernel $a(\xi,\eta)$ satisfying $|a(\xi,\eta)|\leq C\|A\|_{op}$ and
supported near the diagonal. To quantify this we introduce the distance
$d_{r}$ on ${{\mathbf{R}}^{2d}}$ so that, writing $\xi=(\xi_{x},\xi_{p})$ and
$\eta=(\eta_{x},\eta_{p})$,
$d_{r}(\xi,\eta):=r^{-1}|\xi_{x}-\eta_{x}|+r|\xi_{p}-\eta_{p}|.$
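For concreteness, this anisotropic phase-space distance can be transcribed directly; the following Python sketch (the function name and the test values are ours, purely for illustration) fixes the conventions:

```python
import numpy as np

def d_r(xi, eta, r):
    """d_r(xi, eta) = r^{-1} |xi_x - eta_x| + r |xi_p - eta_p|,
    where xi = (xi_x, xi_p) and eta = (eta_x, eta_p) are phase-space points."""
    (xi_x, xi_p), (eta_x, eta_p) = xi, eta
    return np.linalg.norm(xi_x - eta_x) / r + r * np.linalg.norm(xi_p - eta_p)

# Positions differing by 5 and momenta by 0.05, at wavepacket scale r = 10:
xi = (np.array([0.0, 0.0]), np.array([1.0, 0.0]))
eta = (np.array([5.0, 0.0]), np.array([1.0, 0.05]))
print(d_r(xi, eta, r=10.0))  # 5/10 + 10 * 0.05 = 1.0
```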
More precisely, we are interested in families of _good operators_ , defined
below.
###### Definition 2.3 (Good operators).
An operator $A\in\mathcal{B}(L^{2}({\mathbf{R}}^{d}))$ is said to be
$(C_{1},C_{2},\delta)$-good if there exists a function
$a:{{\mathbf{R}}^{2d}}\times{{\mathbf{R}}^{2d}}\to{\mathbf{C}}$ supported in
the set
$\operatorname{supp}a\subset\\{(\xi,\eta)\in{{\mathbf{R}}^{2d}}\times{{\mathbf{R}}^{2d}}\mid
d_{r}(\xi,\eta)<C_{1},|\xi_{p}|\geq{\varepsilon}^{\theta}-C_{2}{\varepsilon}\\}$
such that
$\|A-\int
a(\xi,\eta)\mathinner{|{\xi}\rangle}\mathinner{\langle{\eta}|}\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta\|_{op}\leq\delta.$
Note that the rank one projection onto a wavepacket
$\mathinner{|{\xi}\rangle}\mathinner{\langle{\xi}|}$ is (formally) a
$(0,0,0)$-good operator if $|\xi_{p}|\geq{\varepsilon}^{\theta}$, but its
Wigner transform has smoothness only at the microscopic scale $(r,r^{-1})$.
Similarly if $a\in C^{2d+1}$ is an observable supported on
$\\{(x,p)||p|\geq{\varepsilon}^{\theta}\\}$, then the wavepacket quantization
$\operatorname{Op}(a)$ is a $(0,0,0)$-good operator. Moreover, by (C.6) we
have that $\operatorname{Op}^{w}(a)$ is a $(0,0,{\varepsilon}^{1/2})$-good
operator if $a\in C^{2d+1}_{{\varepsilon}^{-1},{\varepsilon}^{-1/2}}$.
The first step in the proof of the approximate semigroup property is to verify
a semigroup property up to times ${\varepsilon}^{-2+\kappa/2}$.
###### Proposition 2.4.
If $A$ is a $(C_{1},C_{2},\delta)$-good operator with
$C_{1}\leq{\varepsilon}^{-0.1}$ and
$C_{2}\leq\frac{1}{2}{\varepsilon}^{\theta}$ and
${\varepsilon}^{-1.5}<s<{\varepsilon}^{-2+\kappa/2}$ then
$\|{\mathcal{E}}_{2s}[A]-{\mathcal{E}}_{s}^{2}[A]\|_{op}\leq{\varepsilon}^{2.1}s\|A\|_{op}+2\delta,$
and moreover ${\mathcal{E}}_{s}[A]$ is a
$(C_{1}+|\log{\varepsilon}|^{10},C_{2}-10^{3}{\varepsilon},\delta+{\varepsilon}^{100})$-good
operator.
Proposition 2.4 is proved by first comparing ${\mathcal{E}}_{s}$ to the
expectation over ladders, and then observing that the semigroup property holds
for ladders. The first step is done in Section 4 where we prove Proposition
4.10. The derivation of Proposition 2.4 from Proposition 4.10 is explained by
Lemma 12.2.
Using Proposition 2.4 we can prove a comparison result between the linear
Boltzmann equation and the quantum evolution for times up to
${\varepsilon}^{-2+\kappa/2}$.
###### Corollary 2.5.
If $A$ is a $(C_{1},C_{2},\delta)$-good operator with
$C_{1}\leq\frac{1}{2}{\varepsilon}^{-0.1}$ and
$C_{2}\leq{\varepsilon}^{\theta}$ then with $\sigma={\varepsilon}^{-1.5}$,
$\|{\mathcal{E}}_{\sigma}^{m}[A]-{\mathcal{E}}_{m\sigma}[A]\|_{op}\leq
C_{\kappa}{\varepsilon}^{2.1}\sigma\|A\|_{op}+2\delta+{\varepsilon}^{20}$
for $m$ such that $m\sigma\leq{\varepsilon}^{-2+\kappa/2}$.
###### Proof.
We perform an iteration, defining the error
$E_{m}:=\sup_{\begin{subarray}{c}\|A\|_{op}=1\\\ A\text{ is
}(C_{1}-K_{1}m,C_{2}+K_{2}m,\beta-\delta
m)\text{-good}\end{subarray}}\|{\mathcal{E}}_{2^{m}\sigma}[A]-{\mathcal{E}}_{\sigma}^{2^{m}}[A]\|_{op},$
where we choose $C_{1}=\frac{1}{2}{\varepsilon}^{-0.1}$,
$K_{1}=|\log{\varepsilon}|^{10}$, $C_{2}=\frac{1}{4}{\varepsilon}^{0.1}$,
$K_{2}=10^{3}{\varepsilon}$, $\beta={\varepsilon}^{100}$, and
$\delta={\varepsilon}^{100}$. Let ${\mathcal{A}}_{m}$ be the class of
admissible operators in the supremum defining $E_{m}$. The significant point
about ${\mathcal{A}}_{m}$ is that ${\mathcal{E}}_{s}[A]\in{\mathcal{A}}_{m}$
when $A\in{\mathcal{A}}_{m+1}$. To find a recursion for $E_{m}$, we write
(2.3)
$\begin{split}\|{\mathcal{E}}_{2^{m+1}\sigma}[A]-{\mathcal{E}}_{\sigma}^{2^{m+1}}[A]\|_{op}&\leq\|{\mathcal{E}}_{2^{m+1}\sigma}[A]-{\mathcal{E}}_{2^{m}\sigma}^{2}[A]\|_{op}+\|{\mathcal{E}}_{2^{m}\sigma}^{2}[A]-{\mathcal{E}}_{\sigma}^{2^{m+1}}[A]\|_{op}\\\
&\leq\|{\mathcal{E}}_{2^{m+1}\sigma}[A]-{\mathcal{E}}_{2^{m}\sigma}^{2}[A]\|_{op}+\|{\mathcal{E}}_{2^{m}\sigma}[{\mathcal{E}}_{2^{m}\sigma}[A]-{\mathcal{E}}_{\sigma}^{2^{m}}[A]]\|_{op}\\\
&\qquad\qquad+\|({\mathcal{E}}_{2^{m}\sigma}-{\mathcal{E}}_{\sigma}^{2^{m}}){\mathcal{E}}_{\sigma}^{2^{m}}[A]\|_{op}.\end{split}$
Since each ${\mathcal{E}}_{s}$ is a contraction in the operator norm, and since
${\mathcal{E}}_{s}$ maps ${\mathcal{A}}_{m+1}$ into ${\mathcal{A}}_{m}$, we
have by Proposition 2.4 that
$\|{\mathcal{E}}_{2^{m+1}\sigma}[A]-{\mathcal{E}}_{\sigma}^{2^{m+1}}[A]\|_{op}\leq(({\varepsilon}^{2.1}2^{m}\sigma)\|A\|_{op}+2\delta)+2E_{m}.$
Taking a supremum over $A$, and absorbing the negligible term
$2\delta=2{\varepsilon}^{100}$, we obtain the relation
$E_{m+1}\leq{\varepsilon}^{2.1}2^{m}\sigma+2E_{m}.$
Since $E_{0}=0$, we obtain
$E_{m}\leq m{\varepsilon}^{2.1}2^{m}\sigma.$
∎
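As a quick sanity check on this recursion (illustrative only, not part of the proof), one can iterate the worst case of $E_{m+1}\leq 2E_{m}+{\varepsilon}^{2.1}2^{m}\sigma$ numerically and confirm that it stays below the closed-form bound $m{\varepsilon}^{2.1}2^{m}\sigma$; the value of ${\varepsilon}$ below is an arbitrary choice of ours:

```python
# Worst case of the recursion E_{m+1} <= 2 E_m + c 2^m with c = eps^2.1 * sigma.
eps = 1e-3
sigma = eps ** (-1.5)
c = eps ** 2.1 * sigma

E = 0.0  # E_0 = 0
for m in range(1, 30):
    E = 2 * E + c * 2 ** (m - 1)       # saturate the recursion at step m
    assert E <= m * c * 2 ** m + 1e-9  # closed-form bound E_m <= m c 2^m
print("E_m stays below m * eps^2.1 * 2^m * sigma for all m checked")
```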
The remaining ingredient needed to prove Theorem 1.1 is a semigroup property
that holds up to diffusive times. This is substantially more difficult than
establishing the semigroup property for subkinetic times because of the need
for resummation in the Duhamel series. The main result is the following.
###### Proposition 2.6.
There exists $\kappa=\kappa(d)$ such that the following holds: If $A$ is a
$({\varepsilon}^{-0.1},\frac{1}{2}{\varepsilon}^{0.1},\delta)$-good operator,
then with $\tau={\varepsilon}^{-2+\kappa/2}$ and
$N=\lfloor{\varepsilon}^{-\kappa}\rfloor$,
$\|{\mathcal{E}}_{N\tau}[A]-{\mathcal{E}}_{\tau}^{N}[A]\|_{op}\leq{\varepsilon}^{c\kappa}\|A\|_{op}+2\delta.$
Having sketched the argument proving the main result, we now outline the
remaining sections in the paper. In Section 3 we explain the phase space path
integral approximation we use throughout the paper. Then in Section 4 we
introduce the ladder superoperator ${\mathcal{L}}_{s}$, which is a central
object in the derivation of the approximate semigroup property. Section 4
contains the bulk of the proof of Proposition 2.4 and most of the
main ideas of the paper.
The remaining sections in the paper are dedicated to the proof of Proposition
2.6. In Section 5 we write down the path integral used to represent the
solution operator up to this time. Then in Section 6 we clarify the
relationship between the geometry of paths and the combinatorial features of
their collision partitions. Then in Section 7 we split up the path integral
according to the geometry of the paths. To exploit the combinatorial structure
of the correlation partition, we introduce the formalism of colored operators
in Section 8 and Section 9. Then in Section 10 we finally write out our
version of a “diagrammatic expansion” (which differs from previous
expansions in that the first term of the expansion for
${\mathcal{E}}_{N\tau}[A]$ is the refreshed evolution
${\mathcal{E}}_{\tau}^{N}[A]$). The diagrams are bounded in Section 11.
The remaining sections contain proofs of more technical results needed
throughout the argument, and are referenced as needed.
## 3\. A sketch of the derivation of the path integral
In this section we state the precise version of the phase-space path integral
alluded to in Section 1.3. The proofs of the assertions made in this section
are given in Sections 13, 14, and 15.
The first step is to write out an expansion for $e^{-isH}$ that is valid for
times $s\leq\tau:={\varepsilon}^{-2+\kappa/2}$ in terms of paths. More
precisely, a path $\omega=({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})$ having $k$
collisions is a tuple containing a list of collision intervals
${\mathbf{s}}\in{\mathbf{R}}_{+}^{k+1}$ satisfying $\sum_{j=0}^{k}s_{j}=s$,
momenta ${\mathbf{p}}\in({\mathbf{R}}^{d})^{k+1}$, and collision locations
${\mathbf{y}}\in({\mathbf{R}}^{d})^{k}$. Each path $\omega$ is associated to
the operator $O_{\omega}$ defined by
${\widehat{O_{\omega}\psi}}=\delta_{p_{k}}{\widehat{\psi}}(p_{0})e^{i\varphi(\omega)}\prod_{j=1}^{k}{\widehat{V_{y_{j}}}}(p_{j}-p_{j-1}),$
where $\varphi(\omega)$ is the phase function
$\varphi(\omega)=\sum_{j=0}^{k}s_{j}|p_{j}|^{2}/2+\sum_{j=1}^{k}y_{j}\cdot(p_{j}-p_{j-1}).$
In Dirac notation, we express $O_{\omega}$ as
(3.1)
$O_{\omega}=\mathinner{|{p_{k}}\rangle}\mathinner{\langle{p_{0}}|}e^{i\varphi(\omega)}\prod_{j=1}^{k}{\widehat{V_{y_{j}}}}(p_{j}-p_{j-1}).$
Here $V_{y}(x)=V(x-y)\chi^{V}(x)$ is a localized and shifted version of the
potential, with localization $\chi^{V}$ having width $r$ and satisfying
$\int\chi^{V}=1$.
Let $\Omega_{k}(s)$ denote the space of paths with $k$ collisions and duration
$s$,
$\Omega_{k}(s)=\triangle_{k}(s)\times({\mathbf{R}}^{d})^{k+1}\times({\mathbf{R}}^{d})^{k},$
where $\triangle_{k}(s)\subset{\mathbf{R}}_{+}^{k+1}$ is the set of tuples of
time intervals summing to $s$,
$\triangle_{k}(s)=\\{{\mathbf{s}}=(s_{0},\cdots,s_{k})\in{\mathbf{R}}_{+}^{k+1}\mid\sum_{j=0}^{k}s_{j}=s\\}.$
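To fix conventions, the following illustrative Python sketch (the sampling distributions are arbitrary choices of ours, not anything prescribed by the text) draws a path $\omega\in\Omega_{k}(s)$ and evaluates the phase $\varphi(\omega)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(k, s, d):
    """Draw omega = (s_vec, p_vec, y_vec) in Omega_k(s): k+1 collision
    intervals summing to s, k+1 momenta, and k collision locations."""
    s_vec = s * rng.dirichlet(np.ones(k + 1))  # a point of triangle_k(s)
    p_vec = rng.normal(size=(k + 1, d))        # p_0, ..., p_k
    y_vec = rng.normal(size=(k, d))            # y_1, ..., y_k
    return s_vec, p_vec, y_vec

def phase(s_vec, p_vec, y_vec):
    """phi(omega) = sum_j s_j |p_j|^2 / 2 + sum_j y_j . (p_j - p_{j-1})."""
    kinetic = np.sum(s_vec * np.sum(p_vec ** 2, axis=1)) / 2
    return kinetic + np.sum(y_vec * (p_vec[1:] - p_vec[:-1]))

print(phase(*sample_path(k=4, s=13.0, d=2)))
```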
We will see in Section 13 that the Duhamel expansion can be formally written
$e^{isH}=\sum_{k=0}^{\infty}T_{k}:=\sum_{k=0}^{\infty}\int_{\Omega_{k}(s)}O_{\omega}\mathop{}\\!\mathrm{d}\omega.$
In this integral there is no need for the collision locations ${\mathbf{y}}$
to have any relationship with the variables ${\mathbf{s}}$ and ${\mathbf{p}}$.
There is however significant cancellation due to the presence of the phase
$e^{i\varphi(\omega)}$. For example, by integrating by parts in the $p_{j}$
variables and using the identity,
$\displaystyle\partial_{p_{j}}\varphi(\omega)=y_{j}+s_{j}p_{j}-y_{j+1},$
we can reduce the path integral to paths which satisfy
$|y_{j+1}-(y_{j}+s_{j}p_{j})|\lessapprox r.$
Integration by parts in the $s_{j}$ is somewhat more delicate because of the
hard constraint $\sum s_{j}=\tau$. By decomposing this hard constraint as a
sum of softer constraints, we can impose a cutoff on the weaker conservation
of kinetic energy condition
$||p_{j}|^{2}/2-|p_{j^{\prime}}|^{2}/2|\lessapprox\max\\{|p_{j}|r^{-1},s_{j}^{-1},s_{j^{\prime}}^{-1}\\}.$
The integration by parts argument will allow us to construct a function
$\chi^{path}$ supported on the set of such “good” paths and for which
$T_{k}\approx\int_{\Omega_{k}(s)}\chi^{path}(\omega)O_{\omega}\mathop{}\\!\mathrm{d}\omega,$
with an error that is negligible in operator norm. To be more precise, given a
tolerance $\alpha$ (which we set to be $|\log{\varepsilon}|^{10}$), we define
(3.2)
$\begin{split}\Omega_{\alpha,k}(s)=\\{({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})\in\Omega_{k}(s)\mid&|y_{j+1}-(y_{j}+s_{j}p_{j})|\leq\alpha
r\text{ for all }j\in[1,k-1]\text{ and }\\\
&||p_{j}|^{2}/2-|p_{j^{\prime}}|^{2}/2|\leq\alpha\max\\{s_{j^{\prime}}^{-1},s_{j}^{-1},|p_{j}|r^{-1},\alpha
r^{-2}\\}\text{ for all }j,j^{\prime}\in[0,k]\\}.\end{split}$
Within $\Omega_{\alpha,k}(s)$ we also define the subset
(3.3)
$\begin{split}\Omega_{\alpha,k}(t;\xi,\eta):=\\{({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})\in\Omega_{\alpha,k}(t)\mid&|y_{1}-(\xi_{x}+s_{0}p_{0})|\leq\alpha
r,|\eta_{x}-(y_{k}+s_{k}p_{k})|\leq\alpha r,\\\ &|p_{0}-\xi_{p}|\leq\alpha
r^{-1},\text{ and }|p_{k}-\eta_{p}|\leq\alpha r^{-1}\\},\end{split}$
where $\xi,\eta\in{{\mathbf{R}}^{2d}}$ with $\xi=(\xi_{x},\xi_{p})$ and
$\eta=(\eta_{x},\eta_{p})$.
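A literal transcription of the constraints defining $\Omega_{\alpha,k}(s)$ may help in parsing (3.2); the Python predicate below is purely illustrative (names are ours) and stores $y_{j}$ at index $j-1$ and $p_{j},s_{j}$ at index $j$:

```python
import numpy as np

def in_Omega_alpha(s_vec, p_vec, y_vec, alpha, r):
    """Check the transport and kinetic-energy constraints of (3.2)."""
    k = len(y_vec)
    # transport: |y_{j+1} - (y_j + s_j p_j)| <= alpha * r for j in [1, k-1]
    for j in range(1, k):
        drift = y_vec[j - 1] + s_vec[j] * p_vec[j]
        if np.linalg.norm(y_vec[j] - drift) > alpha * r:
            return False
    # approximate conservation of kinetic energy between all pairs j, j'
    E = np.sum(p_vec ** 2, axis=1) / 2
    for j in range(k + 1):
        for jp in range(k + 1):
            tol = alpha * max(1 / s_vec[jp], 1 / s_vec[j],
                              np.linalg.norm(p_vec[j]) / r, alpha / r ** 2)
            if abs(E[j] - E[jp]) > tol:
                return False
    return True
```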
The following lemma is simply a careful application of integration by parts,
and is done in Section 13.
###### Lemma 3.1.
There exists a cutoff function $\chi\in
C^{\infty}(\Omega_{k}(\tau)\times{{\mathbf{R}}^{2d}}\times{{\mathbf{R}}^{2d}})$
supported in the set
$\operatorname{supp}\chi\subset\\{(\omega,\xi,\eta)\mid\omega\in\Omega_{\alpha,k}(\tau;\xi,\eta)\\}$
such that, with
(3.4)
$T_{k}^{\chi}(\tau):=(i{\varepsilon})^{k}\int_{\Omega_{k}(\tau)\times{{\mathbf{R}}^{2d}}\times{{\mathbf{R}}^{2d}}}\mathinner{|{\eta}\rangle}\mathinner{\langle{\xi}|}\chi(\omega,\xi,\eta)\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta,$
we have the approximation
(3.5)
$\|T_{k}^{\chi}-T_{k}\|_{op}\leq{\varepsilon}^{-C_{d}k}\|V\|_{C^{10d}}^{Ck}\exp(-c\alpha^{0.99}).$
The point of Lemma 3.1 is that it allows us to neglect the contribution of
“physically unreasonable paths” – those that either badly violate conservation
of kinetic energy or the transport constraints $y_{k+1}\approx
y_{k}+s_{k}p_{k}$.
We remark that Lemma 3.1 is deterministic in the sense that the
conclusion holds for all potentials, and when we apply it we will simply need
moment bounds for the $C^{10d}$ norm of the potential (after being cutoff to a
ball of large radius). With the choice $\alpha=|\log{\varepsilon}|^{10}$, and
assuming that $\|V\|_{C^{10d}}\leq{\varepsilon}^{-100}$ (say), the right hand
side is still $O({\varepsilon}^{K})$ for any $K>0$.
Having given a description of the collision operators $T_{k}$, it remains to
estimate moments of the form ${\mathbf{E}\,}\|T_{k}^{\chi}\|_{op}^{M}$, for
which we use the moment method:
${\mathbf{E}\,}\|T_{k}^{\chi}\|_{op}^{2M}\leq{\mathbf{E}\,}\operatorname{tr}((T_{k}^{\chi})^{*}T_{k}^{\chi})^{M}.$
Note that this step is where the cutoff on the potential is crucial – without
the cutoff $\chi_{R}$ the trace above would be infinite. In Section 14 we
prove Lemma 14.1, which states that
$\Big{(}{\mathbf{E}\,}\|T_{k}^{\chi}(s)\|_{op}^{2M}\Big{)}^{1/M}\leq
R^{C/M}{\varepsilon}^{2}s\,\,(C(kM)^{C}|\log{\varepsilon}|^{10}).$
The presence of the factor $(kM)^{C}$ makes this bound unsuitable for reaching
diffusive time scales. However this bound is good enough to approximate
$e^{isH}$ by $e^{-is\Delta/2}$ for times $s\leq{\varepsilon}^{-1.1}$ (say). We
use this result in Section 15 to define a modified operator
$T_{k,\rho}^{\chi}$ which involves a first short period of free evolution.
More precisely, given a time $\sigma>0$ we construct a function
$\rho_{\sigma}\geq 0$ that is supported on the interval $[\sigma,2\sigma]$, is
Gevrey regular, and satisfies $\int\rho_{\sigma}=1$. Then we define
${\widetilde{U}}_{\tau,\sigma}:=\int_{\sigma}^{2\sigma}\int_{\sigma}^{2\sigma}e^{is\Delta/2}e^{-i(\tau-
s-s^{\prime})H}e^{is^{\prime}\Delta/2}\rho_{\sigma}(s)\rho_{\sigma}(s^{\prime})\mathop{}\\!\mathrm{d}s\mathop{}\\!\mathrm{d}s^{\prime}.$
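One concrete way to produce such a time-averaging profile numerically is the standard smooth bump rescaled to $[\sigma,2\sigma]$ and normalized to unit mass; the sketch below is illustrative only (in particular, it does not verify Gevrey regularity):

```python
import numpy as np

def _bump(t):
    """Smooth bump exp(-1/(1 - t^2)) on (-1, 1), identically zero outside."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def rho_sigma(s, sigma, n_grid=20001):
    """Smooth profile supported on [sigma, 2*sigma] with unit integral.
    (Normalization computed numerically; Gevrey bounds are not checked.)"""
    rescale = lambda u: (u - 1.5 * sigma) / (0.5 * sigma)  # -> [-1, 1]
    grid = np.linspace(sigma, 2.0 * sigma, n_grid)
    mass = np.trapz(_bump(rescale(grid)), grid)
    return _bump(rescale(s)) / mass

sigma = 10.0
grid = np.linspace(0.0, 3.0 * sigma, 30001)
print(np.trapz(rho_sigma(grid, sigma), grid))  # ~ 1.0, support in [10, 20]
```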
We will fix for the remainder of the paper $\sigma={\varepsilon}^{-1.5}$. In
Section 15 we use Lemma 3.1 to prove Lemma 15.1, which justifies the
approximation
$\|{\mathcal{E}}_{N\tau}[A]-{\mathbf{E}\,}({\widetilde{U}}_{\tau,\sigma}^{*})^{N}A{\widetilde{U}}_{\tau,\sigma}^{N}\|_{op}\leq{\varepsilon}^{0.2}.$
This will allow us to restrict the path integral to a space of paths which do
not have a collision too close to either endpoint,
$\Omega_{\alpha,k}(\tau,\sigma;\xi,\eta):=\\{\omega\in\Omega_{\alpha,k}(\tau;\xi,\eta)\mid
s_{0}\geq\sigma\text{ and }s_{k}\geq\sigma\\}.$
The operator ${\widetilde{U}}_{\tau,\sigma}$ also has a path integral
expansion in terms of collision operators $T_{k}^{\chi,\sigma}$, which in
addition to having the smooth cutoff $\chi$ have a cutoff enforcing that
$\omega\in\Omega_{\alpha,k}(\tau,\sigma;\xi,\eta)$.
Combining the above arguments we obtain the following approximation result for
the evolution operator ${\widetilde{U}}_{\tau,\sigma}$.
###### Proposition 3.2.
There exists a bounded smooth function $\chi_{\sigma}\in
C^{\infty}(\Omega_{k}(\tau)\times{\mathbf{R}}^{2d}\times{\mathbf{R}}^{2d})$
supported in the set
$\operatorname{supp}(\chi_{\sigma})\subset\Omega_{\alpha,k}(\tau,S;\xi,\eta):=\\{(\omega,\xi,\eta)\mid\omega\in\Omega_{\alpha,k}(\tau;\xi,\eta)\text{
and }s_{0}\geq{\varepsilon}^{-1.6}\text{ and
}s_{k}\geq{\varepsilon}^{-1.6}\\},$
and such that, with
(3.6)
$T_{k}^{\chi,\sigma}:=(i{\varepsilon})^{k}\int_{\Omega_{k}(\tau)\times{\mathbf{R}}^{2d}\times{\mathbf{R}}^{2d}}\mathinner{|{\eta}\rangle}\mathinner{\langle{\xi}|}\chi_{\sigma}(\omega,\xi,\eta)\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta,$
we have the Duhamel expansion
${\widetilde{U}}_{\tau,\sigma}:=\sum_{k=0}^{k_{max}}T_{k}^{\chi,\sigma}+R_{k_{max}}^{\chi,\sigma},$
where the remainder $R_{k_{max}}^{\chi,\sigma}$ has the expression
(3.7)
$R_{k_{max}}^{\chi,\sigma}:=(i{\varepsilon})^{k_{max}}\int_{0}^{\tau}\mathop{}\\!\mathrm{d}s\int_{\Omega_{k_{max}}(s)\times{\mathbf{R}}^{2d}\times{\mathbf{R}}^{2d}}e^{i(\tau-s)H}\mathinner{|{\eta}\rangle}\mathinner{\langle{\xi}|}\chi_{\sigma}(\omega,\xi,\eta)\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta,$
and
(3.8) $\left({\mathbf{E}\,}\|e^{i\tau
H}-{\widetilde{U}}_{\tau,\sigma}\|_{op}^{{\varepsilon}^{-0.1}}\right)^{{\varepsilon}^{0.1}}\leq{\varepsilon}^{0.3}.$
We use the operator
${\widetilde{U}}_{\tau,\sigma}=:U_{\tau,\sigma}+R_{k_{max}}^{\chi,\sigma}$ to
decompose $e^{iN\tau H}$ as follows:
$e^{iN\tau H}\approx U_{\tau,\sigma}^{N}+\sum_{j=1}^{N}e^{i(N-j)\tau
H}R_{k_{max}}^{\chi,\sigma}U_{\tau,\sigma}^{j-1}.$
Let ${\widetilde{{\mathcal{E}}}}_{N\tau}$ be the superoperator formed from the
main term,
${\widetilde{{\mathcal{E}}}}_{N\tau}[A]:={\mathbf{E}\,}(U_{\tau,\sigma}^{N})^{*}AU_{\tau,\sigma}^{N}.$
Let $R_{j}$ be the operator $R_{k_{max}}^{\chi,\sigma}U_{\tau,\sigma}^{j-1}$,
which also can be written
$R_{j}=\int_{0}^{\tau}e^{i(\tau-s)H}R_{j,s}\mathop{}\\!\mathrm{d}s$ with
(3.9)
$R_{j,s}:=\int_{\Omega_{k_{max}}(s)\times{\mathbf{R}}^{2d}\times{\mathbf{R}}^{2d}}\mathinner{|{\eta}\rangle}\mathinner{\langle{\xi}|}U_{\tau,\sigma}^{j-1}\chi_{\sigma}(\omega,\xi,\eta)\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta.$
Then by an application of the operator Cauchy-Schwarz inequality and the
triangle inequality we have the estimate
(3.10)
$\|{\mathcal{E}}_{N\tau}[A]-{\widetilde{{\mathcal{E}}}}_{N\tau}[A]\|_{op}\leq\|A\|_{op}({\varepsilon}^{0.3}+N\tau\max_{j\in[N]}\sup_{s\in[0,\tau]}\|{\mathbf{E}\,}R_{j,s}^{*}R_{j,s}\|_{op}).$
In the course of understanding the evolution
${\widetilde{{\mathcal{E}}}}_{N\tau}$ we will derive estimates that as a
byproduct prove
(3.11) $\max_{j\in[N]}\sup_{0\leq
s\leq\tau}\|{\mathbf{E}\,}R_{j,s}^{*}R_{j,s}\|_{op}\leq{\varepsilon}^{50}.$
In Section 10.1 we explain how this bound is obtained as a modification of the
argument used to control the diffusive diagrams.
## 4\. The ladder approximation for ${\mathcal{E}}_{\tau}$
In this section we sketch the proof that the evolution channel
${\mathcal{E}}_{s}$ is well approximated by a sum over ladder diagrams when
$s\leq\tau={\varepsilon}^{-2}N^{-\kappa/10}$. This is closely related to the
semigroup property which we will explore in a later section. The statement of
the main result of the section, Proposition 4.10, is given in Section 4.3
after some preliminary calculations which motivate the definition of the
ladder superoperator.
### 4.1. An introduction to the channel ${\mathcal{E}}_{s}$
For times $s\leq\tau={\varepsilon}^{-2}N^{-\kappa/10}$, we may use Lemma 3.1
to write
(4.1)
$e^{-isH}=\sum_{k=0}^{k_{max}}\int_{\Omega_{k}(s)}\chi_{S}(\omega,\xi,\eta)\mathinner{|{\eta}\rangle}\mathinner{\langle{\xi}|}\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta+E$
where $\chi_{S}(\omega,\xi,\eta)$ is a smooth function supported on the set
$\Omega_{\alpha,k}(s,\sigma;\xi,\eta)$ and the approximation is up to an error
$E$ that satisfies ${\mathbf{E}\,}\|E\|_{op}^{2}\leq{\varepsilon}^{0.2}$.
The operator $e^{isH}$ can similarly be expressed as an integral over paths,
(4.2)
$e^{isH}=(e^{-isH})^{*}=\sum_{k=0}^{k_{max}}\int_{\Omega_{k}(s)}\chi_{S}(\omega,\xi,\eta)\mathinner{|{\xi}\rangle}\mathinner{\langle{\eta}|}\mathinner{\langle{\eta|O_{\omega}|\xi}\rangle}^{*}\mathop{}\\!\mathrm{d}\omega\mathop{}\\!\mathrm{d}\xi\mathop{}\\!\mathrm{d}\eta+E^{*}.$
We now use (4.1) and (4.2) to write an expansion for ${\mathcal{E}}_{s}[A]$.
We will drop the summation over $k$ and handle the sum implicitly in the
integral over $\Omega(s)=\bigcup_{k=0}^{k_{max}}\Omega_{k}(s)$.
${\mathcal{E}}_{s}[A]\approx\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\chi_{S}(\omega^{+},\xi_{0}^{+},\xi_{1}^{+})\chi_{S}(\omega^{-},\xi_{0}^{-},\xi_{1}^{-}){\mathbf{E}\,}\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\mathop{}\\!\mathrm{d}\bm{\omega}\mathop{}\\!\mathrm{d}\bm{\xi},$
up to a remainder that is bounded by $O({\varepsilon}^{0.2}\|A\|_{op})$ in
operator norm.
To express the operator more compactly we introduce some notation. We write
$\bm{\Gamma}=(\xi_{0}^{+},\omega^{+},\xi_{1}^{+};\xi_{0}^{-},\omega^{-},\xi_{1}^{-})=(\Gamma^{+};\Gamma^{-})$
for the full path, and then define the path cutoff function
$\Xi(\bm{\Gamma}):=\chi_{S}(\omega^{+},\xi_{0}^{+},\xi_{1}^{+})\chi_{S}(\omega^{-},\xi_{0}^{-},\xi_{1}^{-}).$
Moreover, we will stack like terms to keep the integrand more organized. That
is, we will write $\begin{smallmatrix}A\\\ B\end{smallmatrix}$ to mean the
product $AB$. With this notation,
${\mathcal{E}}_{s}[A]\approx\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\bm{\Gamma}){\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\mathop{}\\!\mathrm{d}\bm{\Gamma}.$
Given a path $\omega=({\mathbf{s}},{\mathbf{p}},{\mathbf{y}})\in\Omega_{k}$,
the random amplitude $\mathinner{\langle{\xi|O_{\omega}|\eta}\rangle}$ is
given by
$\mathinner{\langle{\xi|O_{\omega}|\eta}\rangle}=\mathinner{\langle{\xi|p_{k}}\rangle}\mathinner{\langle{p_{0}|\eta}\rangle}e^{i\varphi(\omega)}\prod_{j=1}^{k}{\widehat{V_{y_{j}}}}(p_{j}-p_{j-1}).$
Since $V$ is real and therefore ${\widehat{V}}(q)^{*}={\widehat{V}}(-q)$, the
term in the expectation can be written
(4.3)
${\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}=\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|p_{+,k_{+}}}\rangle}\mathinner{\langle{p_{+,0}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|p_{-,k_{-}}}\rangle}\mathinner{\langle{p_{-,0}|\xi_{0}^{-}}\rangle}\end{smallmatrix}e^{i(\varphi(\omega^{+})-\varphi(\omega^{-}))}{\mathbf{E}\,}\prod_{a\in
K}{\widehat{V_{y_{a}}}}(q_{a}),$
where
$K_{k_{+},k_{-}}=\\{(\ell,j)\mid\ell\in\\{+,-\\}\text{ and
}j\in[1,k_{\ell}]\\}$
and
$q_{(\ell,j)}=\begin{cases}p_{\ell,j}-p_{\ell,j-1},&\ell=+\\\
p_{\ell,j-1}-p_{\ell,j},&\ell=-.\end{cases}$
We write $X=X(\Gamma)$ for the collision set $X=\\{(y_{a},q_{a})\\}_{a\in
K_{k_{+},k_{-}}}$.
To split up the expectation, let $P(X)=P({\mathbf{y}})\in{\mathcal{P}}(K)$ be
the finest partition such that $|y_{a}-y_{a^{\prime}}|\leq 2\alpha r$ implies
that $a$ and $a^{\prime}$ belong to the same set. Then
${\mathbf{E}\,}\prod_{a\in K}{\widehat{V_{y_{a}}}}(q_{a})=\prod_{S\in
P(X)}{\mathbf{E}\,}\prod_{a\in S}{\widehat{V_{y_{a}}}}(q_{a}).$
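Computationally, $P({\mathbf{y}})$ is just the partition into connected components of the proximity graph on the collision labels; an illustrative union-find sketch (all names ours) follows:

```python
import numpy as np

def proximity_partition(ys, threshold):
    """Finest partition of {0,...,n-1} such that |y_a - y_a'| <= threshold
    forces a ~ a': the connected components of the proximity graph."""
    n = len(ys)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for a in range(n):
        for b in range(a + 1, n):
            if np.linalg.norm(ys[a] - ys[b]) <= threshold:
                parent[find(a)] = find(b)

    cells = {}
    for a in range(n):
        cells.setdefault(find(a), []).append(a)
    return list(cells.values())

# In the text the threshold is 2 * alpha * r; the values here are arbitrary.
ys = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [0.2, 0.0]])
print(proximity_partition(ys, threshold=0.15))  # [[0, 1, 3], [2]]
```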
For admissible potentials, Lemma B.3 implies that
(4.4) $\Big{|}{\mathbf{E}\,}\prod_{a\in
K}{\widehat{V_{y_{a}}}}(q_{a})\Big{|}\leq r^{-|K|d}\prod_{a\in
K}(1+|q_{a}|)^{-10d}\sum_{P^{\prime}\leq P({\mathbf{y}})}\prod_{S\in
P^{\prime}}(C_{V}|S|)^{2|S|}r^{d}b_{|S|r^{-1}}(\sum_{j\in S}q_{j}),$
with $b_{t}(x):=\exp(-c|t^{-1}x|^{0.99})$.
This quantity is only nonnegligible when $|\sum_{j\in S}q_{j}|\lessapprox
r^{-1}$ for each $S\in P({\mathbf{y}})$. This leads us to define the notion of
a $\beta$-complete collision set.
###### Definition 4.1 (Complete collision sets).
A collision set $X=\\{(y_{j},q_{j})\\}_{j=1}^{k}$ is $\beta$-complete if
(4.5) $\Big{|}\sum_{j\in S}q_{j}\Big{|}\leq\beta|S|r^{-1}$
holds for every $S\in P({\mathbf{y}})$.
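Given such a partition, checking (4.5) is immediate; in the illustrative sketch below (names ours), `qs` is the array of impulses $q_{a}$ and `partition` is the output of a routine like the one above:

```python
import numpy as np

def is_beta_complete(qs, partition, beta, r):
    """Check (4.5): |sum_{a in S} q_a| <= beta * |S| / r for every cell S."""
    return all(
        np.linalg.norm(qs[list(S)].sum(axis=0)) <= beta * len(S) / r
        for S in partition
    )
```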
A first reduction is to restrict the integration over paths
$\omega^{+}$ and $\omega^{-}$ to those whose collision set is $\beta$-complete for
$\beta=|\log{\varepsilon}|^{20}$.
###### Lemma 4.2.
Let $s\leq\tau$ and let $A$ be a band-limited operator with bandwidth at most
${\varepsilon}^{-0.5}$. Then for $\beta>|\log{\varepsilon}|^{20}$,
(4.6)
$\begin{split}\big{\|}\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma){\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}(1-{\mathbf{1}}_{X(\Gamma)\text{
is
}\beta\text{-complete}})\mathop{}\\!\mathrm{d}\Gamma\big{\|}_{op}\leq{\varepsilon}^{100}\|A\|_{op}.\end{split}$
###### Proof.
The difference is an integral over paths which form $\beta$-incomplete
collision sets, and the norm of the integrand is at most
${\varepsilon}^{|\log{\varepsilon}|^{5}}$ for such paths. The volume of
integration for fixed $\xi_{0}^{+}$ or for fixed $\xi_{0}^{-}$ is only
${\varepsilon}^{-C}$, so the result follows upon applying the Schur test. ∎
### 4.2. The structure of partitions from generic paths
The idea is that the main contribution to the channel ${\mathcal{E}}_{s}$
should come from paths $\bm{\Gamma}$ that are generic. We define generic paths
as those that do not have incidences.
In general, an incidence is any geometric feature of a path that can change its
correlation structure. The simplest type of incidence is a recollision.
###### Definition 4.3 (Recollisions).
A _recollision_ in a path $\omega\in\Omega_{k}$ is a pair
$a,a^{\prime}\in[1,k]$ such that $|y_{a}-y_{a^{\prime}}|\leq 2r$.
A special kind of recollision is an _immediate recollision_ , which satisfies
$a^{\prime}=a+1$ and $|p_{a+1}-p_{a-1}|\leq 10\alpha r^{-1}$. Let
$I^{imm}(\omega)\subset[1,k]$ be the set of all indices belonging to an
immediate recollision, and let $I^{rec}(\omega)$ be the set of indices
belonging to a recollision that is not an immediate recollision.
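The sets $I^{imm}(\omega)$ and $I^{rec}(\omega)$ are elementary to compute from a path; the following sketch is illustrative only and uses the thresholds of Definition 4.3 (recall that $y_{a}$ is stored at index $a-1$ and $p_{a}$ at index $a$):

```python
import numpy as np

def recollision_sets(p_vec, y_vec, alpha, r):
    """Classify collision indices following Definition 4.3.
    Returns (I_imm, I_rec): indices in an immediate recollision, and indices
    in a recollision that is not immediate."""
    k = len(y_vec)
    I_imm, I_rec = set(), set()
    for a in range(1, k + 1):
        for b in range(a + 1, k + 1):
            if np.linalg.norm(y_vec[a - 1] - y_vec[b - 1]) > 2 * r:
                continue  # not a recollision pair
            immediate = (b == a + 1 and
                         np.linalg.norm(p_vec[a + 1] - p_vec[a - 1]) <= 10 * alpha / r)
            (I_imm if immediate else I_rec).update((a, b))
    return I_imm, I_rec
```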
At this point we stop to observe that immediate recollisions do not
substantially alter the trajectory of a path.
Figure: An example of an immediate recollision in a path. Such a recollision
can only cause a small displacement in position on the order $O(r)$ and a
small perturbation in momentum on the order $O(r^{-1})$.
The first fact we prove is that the time between the two collisions of a
recollision cannot be too large.
###### Lemma 4.4.
Let $\omega\in\Omega_{k,\alpha}(\tau,\sigma;\xi,\eta)$ be a path with
$|\xi_{p}|\geq r^{-1}\sigma^{-1}$. If $|y_{j}-y_{j+1}|\leq 2r$, then
$s_{j}\leq 10|p_{0}|^{-1}\alpha r$.
###### Proof.
First, the condition $|\xi_{p}|\geq r^{-1}\sigma^{-1}$ and the constraint
$|p_{0}-\xi_{p}|\leq\alpha r^{-1}$ give a lower bound on $|p_{0}|$. Then, since
$s_{0}\geq\sigma$,
$||p_{j}|^{2}/2-|p_{0}|^{2}/2|\leq\alpha\max\\{s_{j}^{-1},|p_{0}|r^{-1}\\}.$
Assuming that $s_{j}\geq 10|p_{0}|^{-1}\alpha r$, it follows that
$||p_{j}|-|p_{0}||\leq 2\alpha r^{-1}.$
But the condition $|y_{j}-y_{j+1}|\leq 2r$ implies
$|s_{j}p_{j}|\leq 4\alpha r,$
so that $s_{j}\leq 4\alpha|p_{j}|^{-1}r\leq 10|p_{0}|^{-1}\alpha r$. ∎
To state the second fact, we introduce the notion of the collision time
$t_{a}$, simply defined by
$t_{\pm,j}:=\sum_{0\leq j^{\prime}<j}s_{\pm,j^{\prime}}.$
###### Lemma 4.5.
Let $\bm{\Gamma}$ be a $|\log{\varepsilon}|^{20}$-complete path, and suppose
that $\\{a,a+1\\}\in P(\bm{\Gamma})$ for every immediate recollision $a$. If
$a<a^{\prime}\in K(\bm{\Gamma})\setminus I^{imm}(\bm{\Gamma})$ are two
consecutive collisions when ignoring immediate recollisions, then
(4.7) $|y_{a^{\prime}}-(y_{a}+(t_{a^{\prime}}-t_{a})p_{a})|\leq 4m^{2}\alpha^{20}r,$
where $m$ is the number of immediate recollisions between $a$ and $a^{\prime}$.
###### Proof.
To prove this, first observe that $a^{\prime}=a+2m+1$ for some number $m$ of
immediate recollisions between $a$ and $a^{\prime}$, and
$y_{a+2m+1}-y_{a}=\sum_{j=0}^{m}(y_{a+2j+1}-y_{a+2j})+\sum_{j=1}^{m}(y_{a+2j}-y_{a+2j-1}).$
The latter terms are each bounded by $2r$ because $(a+2j-1,a+2j)$ are all
immediate recollisions. The former terms are well approximated by
$s_{a+2j}p_{a+2j}$, so we have
$|y_{a+2m+1}-y_{a}-\sum_{j=0}^{m}s_{a+2j}p_{a+2j}|\leq 2m\alpha r.$
Next we observe that, since $(a+2j-1,a+2j)$ forms a pair in $P(\bm{\Gamma})$
and $\bm{\Gamma}$ is $|\log{\varepsilon}|^{20}$-complete,
$|q_{a+2j-1}+q_{a+2j}|\leq 2|\log{\varepsilon}|^{20}r^{-1}$. Expanding the
definition of $q_{a+2j-1}$ and $q_{a+2j}$ it follows that
$|q_{a+2j-1}+q_{a+2j}|=|p_{a+2j-1}-p_{a+2j-2}+p_{a+2j}-p_{a+2j-1}|=|p_{a+2j}-p_{a+2(j-1)}|\leq
2|\log{\varepsilon}|^{20}r^{-1}$
for every $1\leq j\leq m$. In particular, $|p_{a+2j}-p_{a}|\leq
2|\log{\varepsilon}|^{20}mr^{-1}$ for each $j$, and therefore
$|y_{a+2m+1}-y_{a}-(\sum_{j=0}^{m}s_{a+2j})p_{a}|\leq 2m^{2}\alpha^{20}r.$
Finally, we observe that
$t_{a+2m+1}-t_{a}=\sum_{j=0}^{m}s_{a+2j}+\sum_{j=1}^{m}s_{a+2j-1}.$
The latter sum is bounded by $10m|p_{0}|^{-1}\alpha r$ by Lemma 4.4. ∎
Recollisions form just one type of incidence. It is possible that paths
$\omega^{+}$ and $\omega^{-}$ have a nontrivial collision structure even if
neither path has a recollision. Consider for example the paths
$\omega^{+}=({\mathbf{s}}^{+},{\mathbf{p}}^{+},{\mathbf{y}}^{+})=((4,2,1,2,4),(v,-v,v,-v,v),(4v,2v,3v,v))\in\Omega_{4}(13)$
and
$\omega^{-}:=({\mathbf{s}}^{-},{\mathbf{p}}^{-},{\mathbf{y}}^{-})=((4,3,2,1,3),(v,-v,v,-v,v),(4v,v,3v,2v))\in\Omega_{4}(13)$
where $v\in{\mathbf{S}}^{d-1}$ is any unit vector. This example is depicted in
Figure 9. Then the collision partition associated to $\omega^{+}$ and
$\omega^{-}$ is given by
$\displaystyle
P=\\{\\{(+,1),(-,1)\\},\\{(+,2),(-,4)\\},\\{(+,3),(-,3)\\},\\{(+,4),(-,2)\\}\\}.$
Figure 9. Another example of a pair of paths with a nontrivial collision
partition and such that neither $\omega^{+}$ nor $\omega^{-}$ has a
recollision event. The paths are depicted with a slight downward drift to
clarify the order of the collisions. Note that this behavior is typical in one
dimension.
This partition has a nontrivial structure because the second collision of
$\omega^{-}$ correlates with the fourth collision of $\omega^{+}$ and
vice versa. This behavior is not uncommon for paths that are constrained to one
dimension. The problem with the above example is that there are
non-consecutive collisions in $\bm{\Gamma}$ which can be visited by a single path with
two collisions.
We need to introduce another type of incidence to rule out this behavior, which
we call a _tube incidence_.
###### Definition 4.6 (Tube incidences).
A _tube incidence_ for a path $\omega\in\Omega_{\alpha,k}(s,S)$ is a pair
$(a,a^{\prime})\in[0,k]^{2}$ of collisions such that there exists a collision
$b\in I^{imm}(\omega)$ with $a<b<a^{\prime}$ and there exists a time
$s\in{\mathbf{R}}$ such that
(4.8) $|y_{a}+sp_{a}-y_{a^{\prime}}|\leq 100k^{4}\alpha^{20}r.$
Above we use the convention $y_{0}=y_{1}-s_{0}p_{0}$. We set
$I^{tube}(\omega)$ to be the set of pairs $(a,a^{\prime})$ that form a tube
incidence.
A tube incidence occurs when a particle scattering out of site $a$ can choose
to “skip” its next collision (possibly at $b$) and instead scatter at site
$a^{\prime}$.
The key idea is that the partition $P({\mathbf{y}})$ of a doubled path
$\bm{\Gamma}$ is severely constrained when neither $\omega^{+}$ nor
$\omega^{-}$ have an incidence. In particular, the partition must be a
generalized ladder. To define a generalized ladder we first define a ladder
partition.
###### Definition 4.7 (Ladder partitions).
Let $A$ and $B$ be two finite ordered sets with $|A|=|B|$. The _ladder
partition_ $P_{lad}\in{\mathcal{P}}(A\sqcup B)$ is the matching
$P_{lad}=\\{\\{a,\varphi(a)\\}\\}_{a\in A}$ where $\varphi:A\to B$ is the
unique order-preserving bijection between $A$ and $B$.
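Since the unique order-preserving bijection between two equinumerous ordered sets is just positional pairing, the ladder partition is a one-liner in code (illustrative):

```python
def ladder_partition(A, B):
    """Pair the i-th element of A with the i-th element of B (|A| == |B|):
    the matching induced by the unique order-preserving bijection."""
    assert len(A) == len(B)
    return [{a, b} for a, b in zip(A, B)]

print(ladder_partition([("+", 1), ("+", 2)], [("-", 1), ("-", 2)]))
```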
We can now state the main result.
###### Lemma 4.8.
Let
$\bm{\Gamma}=(\xi_{0}^{+},\omega^{+},\xi_{1}^{+};\xi_{0}^{-},\omega^{-},\xi_{1}^{-})$
be a doubled path such that $X(\bm{\Gamma})$ is
$|\log{\varepsilon}|^{20}$-complete and
$d_{r}(\xi_{0}^{+},\xi_{0}^{-})\leq|\log{\varepsilon}|^{20}$. Then at least
one of the following holds:
* •
One of $\omega^{+}$ or $\omega^{-}$ has an incidence. That is,
$I^{rec}(\omega^{+})\cup I^{tube}(\omega^{+})\cup I^{rec}(\omega^{-})\cup
I^{tube}(\omega^{-})\not=\varnothing.$
* •
The partition $P(\bm{\Gamma})$ has a cell with more than two elements.
* •
The partition $P(\bm{\Gamma})$ is a generalized ladder in the sense that (1)
$P$ saturates the set $I^{imm}(\omega^{+})\cup I^{imm}(\omega^{-})$ and (2)
the restriction $P|_{K(\bm{\Gamma})\setminus I^{imm}(\omega^{+})\setminus
I^{imm}(\omega^{-})}$ is a ladder partition on the set $(K(\Gamma^{+})\setminus
I^{imm}(\omega^{+}))\sqcup(K(\Gamma^{-})\setminus I^{imm}(\omega^{-}))$.
###### Proof.
We will assume that we are not in either of the first two cases, so that
$P(\bm{\Gamma})$ is a perfect matching of $K(\bm{\Gamma})$ and neither
$\omega^{+}$ nor $\omega^{-}$ has an incidence.
First we observe that every immediate recollision $(a,a+1)$ must also be a
cell in $P(\bm{\Gamma})$. Since $P(\bm{\Gamma})$ is assumed to be a perfect
matching, it follows that $P(\bm{\Gamma})$ saturates the set of immediate
recollisions.
It remains to show that $P|_{K(\bm{\Gamma})\setminus I^{imm}(\bm{\Gamma})}$
forms a ladder partition. Order the collisions in $K(\Gamma^{+})\setminus
I^{imm}(\Gamma^{+})$ as $(a_{1},a_{2},\cdots,a_{m})$, and the collisions of
$K(\Gamma^{-})\setminus I^{imm}$ as $(b_{1},b_{2},\cdots,b_{m^{\prime}})$
(with $b_{j+1}>b_{j}$ and $a_{j+1}>a_{j}$). We first observe that there are no
pairs $\\{a_{j},a_{j^{\prime}}\\}\in P(\bm{\Gamma})$ because then
$(a_{j},a_{j^{\prime}})$ would form a recollision. Likewise
$\\{b_{j},b_{j^{\prime}}\\}\not\in P(\bm{\Gamma})$ for any $j,j^{\prime}$.
This shows that $m=m^{\prime}$. To show that $P(\bm{\Gamma})$ is a generalized
ladder, it suffices to check that $\\{a_{j},b_{j}\\}\in P(\bm{\Gamma})$ for
every $j\in[m]$. We prove this by induction on $j$.
The base case is $j=1$. Choose $j_{1}$ such that
$\\{a_{1},b_{j_{1}}\\}\in P(\bm{\Gamma})$. Then $|y_{a_{1}}-y_{b_{j_{1}}}|\leq
2r$. Using (4.7), we have
$\displaystyle|y_{a_{1}}-(y_{+,0}+t_{a_{1}}p_{+,0})|$ $\displaystyle\leq
4k^{2}\alpha^{20}r$ $\displaystyle|y_{b_{1}}-(y_{-,0}+t_{b_{1}}p_{-,0})|$
$\displaystyle\leq 4k^{2}\alpha^{20}r.$
It follows that for $s=t_{a_{1}}-t_{b_{1}}$,
$|y_{b_{j_{1}}}-(y_{+,0}+sp_{-,0})|\leq 10k^{2}\alpha^{20}r,$
so that either $j_{1}=1$ or else $j_{1}>1$ and therefore $(0,b_{j_{1}})$ is a
tube incidence for $\Gamma^{-}$.
The proof of the inductive step follows the same argument, with an additional
calculation to show that $|p_{a_{j}}-p_{b_{j}}|\leq
m|\log{\varepsilon}|^{20}r^{-1}$ if $\\{a_{j^{\prime}},b_{j^{\prime}}\\}\in
P(\bm{\Gamma})$ for all $j^{\prime}\leq j$. ∎
### 4.3. Ladders and the semigroup property
Now we state the main result of the section, which is that the expectation
appearing in ${\mathcal{E}}_{s}[A]$ is well approximated by simply summing
over the ladder matchings.
###### Definition 4.9 (Generalized ladders).
A generalized ladder partition on the set $[k_{+}]\sqcup[k_{-}]=\\{(\pm,j)\mid
j\leq k_{\pm}\\}$ is a partition $P$ of $[k_{+}]\sqcup[k_{-}]$ such that there
exists a set $I_{+}^{imm}\subset[k_{+}]$ and $I_{-}^{imm}\subset[k_{-}]$ such
that $\\{(+,j),(+,j+1)\\}\in P$ for every $j\in I_{+}^{imm}$
and $\\{(-,j),(-,j+1)\\}\in P$ for every $j\in I_{-}^{imm}$,
and such that $P|_{[k_{+}]\sqcup[k_{-}]\setminus\\{+\\}\times
I_{+}^{imm}\setminus\\{-\\}\times I_{-}^{imm}}$
is a ladder partition on the set $([k_{+}]\setminus
I_{+}^{imm})\sqcup([k_{-}]\setminus I_{-}^{imm})$.
We set ${\mathcal{Q}}_{gl}\subset{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$ to be
the set of generalized ladder partitions.
To relate partitions to the superoperator ${\mathcal{E}}_{s}$ we first define
the notion of a $P$-expectation. Given a partition
$P\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$, $\omega^{+}\in\Omega_{k_{+}}$ and
$\omega^{-}\in\Omega_{k_{-}}$, we define
(4.9)
${\mathbf{E}\,}_{P}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}=\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|p_{+,k_{+}}}\rangle}\mathinner{\langle{p_{+,0}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|p_{-,k_{-}}}\rangle}\mathinner{\langle{p_{-,0}|\xi_{0}^{-}}\rangle}\end{smallmatrix}e^{i(\varphi(\omega^{+})-\varphi(\omega^{-}))}{\varepsilon}^{k_{+}+k_{-}}\prod_{S\in
P}{\mathbf{E}\,}\prod_{a\in S}{\widehat{V_{y_{a}}}}(q_{a}).$
We then define the ladder expectation to be the sum over all generalized ladder
partitions,
${\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}:=\sum_{P\in{\mathcal{Q}}_{gl}([k_{+}]\sqcup[k_{-}])}{\mathbf{E}\,}_{P}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}.$
We are now ready to define the ladder superoperator ${\mathcal{L}}_{s}$:
(4.10)
${\mathcal{L}}_{s}[A]:=\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma){\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\mathop{}\\!\mathrm{d}\Gamma.$
The main result of this section is that the ladder superoperator
${\mathcal{L}}_{s}$ is a good approximation to the evolution
${\mathcal{E}}_{s}$.
###### Proposition 4.10.
Let $A$ be an operator with good support and let
$s\leq{\varepsilon}^{-2+\kappa/2}$. Then
$\|{\mathcal{E}}_{s}[A]-{\mathcal{L}}_{s}[A]\|_{op}\leq
C{\varepsilon}^{2.1}s\|A\|_{op}.$
Before we are ready to prove Proposition 4.10 we must first establish that the
sum over _all_ generalized ladders agrees with the contribution of the _correct_
generalized ladder in the case that $\bm{\Gamma}$ is a path with no
incidences.
###### Lemma 4.11.
Let $\bm{\Gamma}$ be a $|\log{\varepsilon}|^{20}$-complete path and suppose
that $d_{r}(\xi_{0}^{+},\xi_{0}^{-})\leq|\log{\varepsilon}|^{20}$, and
$|(\xi_{0}^{\pm})_{p}|\geq r^{-1}\sigma^{-1}$. Suppose moreover that $\bm{\Gamma}$ has no
incidences and that $P(\bm{\Gamma})$ is a matching. Then
${\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}={\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}.$
###### Proof.
By Lemma 4.8, it follows that for _some_ generalized ladder
$P\in{\mathcal{Q}}_{gl}([k_{+}]\sqcup[k_{-}])$,
${\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}={\mathbf{E}\,}_{P}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}.$
We will show that if $P^{\prime}\in{\mathcal{Q}}_{gl}([k_{+}]\sqcup[k_{-}])$
is another generalized ladder with $P^{\prime}\not=P$, then
${\mathbf{E}\,}_{P^{\prime}}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}=\prod_{S\in
P^{\prime}}{\mathbf{E}\,}\prod_{a\in S}{\widehat{V_{y_{a}}}}(q_{a})=0.$
Indeed, since $P^{\prime}\not=P$ there exist some
$a,b,b^{\prime}\in[k_{+}]\sqcup[k_{-}]$ such that $\\{a,b\\}\in
P=P(\bm{\Gamma})$ and $\\{a,b^{\prime}\\}\in P^{\prime}$ with $b^{\prime}\not=b$. But
since $\\{a,b^{\prime}\\}\not\in P(\bm{\Gamma})$, $|y_{a}-y_{b^{\prime}}|>2r$
so the expectation
${\mathbf{E}\,}{\widehat{V_{y_{a}}}}(q_{a}){\widehat{V_{y_{b^{\prime}}}}}(q_{b^{\prime}})=0$
vanishes. ∎
### 4.4. The proof of Proposition 4.10
The error ${\mathcal{E}}_{s}[A]-{\mathcal{L}}_{s}[A]$ can be written as a path
integral
${\mathcal{E}}_{s}[A]-{\mathcal{L}}_{s}[A]=\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)\big{(}{\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}-{\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{)}\mathop{}\\!\mathrm{d}\bm{\Gamma}.$
The argument of Lemma 4.2 still works to show that we can restrict the
integral to paths that are $|\log{\varepsilon}|^{20}$-complete. Moreover,
using the support condition on $A$ we can also restrict to the case that
$d_{r}(\xi_{1}^{+},\xi_{1}^{-})\leq|\log{\varepsilon}|^{20}$ and that
$|(\xi_{1}^{+})_{p}|\geq(r\sigma)^{-1}$.
Under these constraints, the only paths that contribute to the path integral
above are those for which either $P(\bm{\Gamma})$ has a cell of more than two
elements or which have some kind of incidence. Let
${\mathbf{1}}^{bad}(\bm{\Gamma})$ be the indicator function for such paths. We
decompose this function according to the exact partition $P(\bm{\Gamma})$ and
the exact incidence set $I^{rec}(\bm{\Gamma})$, $I^{tube}(\bm{\Gamma})$,
$\displaystyle{\mathbf{1}}^{bad}(\bm{\Gamma})$
$\displaystyle=\sum_{P\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])}\sum_{I^{rec},I^{tube}}{\mathbf{1}}(P(\bm{\Gamma})=P){\mathbf{1}}(I^{rec}(\bm{\Gamma})=I^{rec}){\mathbf{1}}(I^{tube}(\bm{\Gamma})=I^{tube})$
$\displaystyle=:\sum_{P\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])}\sum_{I^{rec},I^{tube}}{\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma}).$
where the sum includes the constraint that either $I^{rec}\cup
I^{tube}\not=\varnothing$ or else $P$ has a cell with at least three elements.
Then we have the estimate
$\displaystyle\|{\mathcal{E}}_{s}[A]-{\mathcal{L}}_{s}[A]\|_{op}\leq\sum_{P,I^{rec},I^{tube}}\Big{\|}\int\mathinner{|{\xi_{0}^{-}}\rangle}\mathinner{\langle{\xi_{0}^{+}}|}$
$\displaystyle\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma){\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma})$
$\displaystyle\big{(}{\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}-{\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{)}\mathop{}\\!\mathrm{d}\bm{\Gamma}\Big{\|}_{op}.$
Using the triangle inequality we bound
$\big{|}{\mathbf{E}\,}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}-{\mathbf{E}\,}_{lad}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{|}\leq\sum_{Q\leq
P(\bm{\Gamma})}\big{|}{\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{|},$
and now applying the Schur test we estimate
(4.11)
$\begin{split}\|{\mathcal{E}}_{s}[A]-{\mathcal{L}}_{s}[A]\|_{op}\leq\sum_{k_{+},k_{-}}&\sum_{P,I^{rec},I^{tube}}\\\
&\sup_{\xi_{0}^{-}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma){\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma})\sum_{Q\leq
P(\bm{\Gamma})}|{\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}|\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{+}\Big{|}^{1/2}\\\
&\times\sup_{\xi_{0}^{+}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma){\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma})\sum_{Q\leq
P(\bm{\Gamma})}|{\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}|\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}^{1/2}.\end{split}$
The first step is to bound the term
${\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}$
appearing in the integrand. Expanding out the formula (4.9) we have
$\big{|}{\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{|}\leq\begin{smallmatrix}|\mathinner{\langle{\xi_{1}^{+}|p_{+,k_{+}}}\rangle}\mathinner{\langle{p_{+,0}|\xi_{0}^{+}}\rangle}|\\\
|\mathinner{\langle{\xi_{1}^{-}|p_{-,k_{-}}}\rangle}\mathinner{\langle{p_{-,0}|\xi_{0}^{-}}\rangle}|\end{smallmatrix}{\varepsilon}^{k_{+}+k_{-}}\prod_{S\in
P(\bm{\Gamma})\vee Q}\big{|}{\mathbf{E}\,}\prod_{a\in
S}{\widehat{V_{y_{a}}}}(q_{a})\big{|}.$
Now we use
$|\langle{p|\xi}\rangle|\leq Cr^{d/2}\exp(-c(r|\xi_{p}-p|)^{0.5})$
as well as Lemma B.3 to estimate
$\displaystyle\big{|}{\mathbf{E}\,}_{Q}\begin{smallmatrix}\mathinner{\langle{\xi_{1}^{+}|O_{\omega^{+}}|\xi_{0}^{+}}\rangle}\\\
\mathinner{\langle{\xi_{1}^{-}|O_{\omega^{-}}|\xi_{0}^{-}}\rangle}^{*}\end{smallmatrix}\big{|}\leq
Cr^{2d}{\varepsilon}^{k_{+}+k_{-}}$
$\displaystyle\begin{smallmatrix}\exp(-c(r|(\xi_{1}^{+})_{p}-p_{+,k_{+}}|)^{0.5})\exp(-c(r|(\xi_{0}^{+})_{p}-p_{+,0}|)^{0.5})\\\
\exp(-c(r|(\xi_{1}^{-})_{p}-p_{-,k_{-}}|)^{0.5})\exp(-c(r|(\xi_{0}^{-})_{p}-p_{-,0}|)^{0.5})\end{smallmatrix}$
$\displaystyle
r^{-d(k_{+}+k_{-})}(C(k_{+}+k_{-}))^{k_{+}+k_{-}}\sum_{Q^{\prime}\leq
P(\bm{\Gamma})\vee Q}\prod_{S^{\prime}\in
Q^{\prime}}r^{d}\exp\Big{(}-c\big{|}r|S^{\prime}|^{-1}\sum_{a\in
S^{\prime}}q_{a}\big{|}^{0.99}\Big{)}$
$\displaystyle\qquad\qquad\qquad\qquad\times\prod_{a\in[k_{+}]\sqcup[k_{-}]}(1+|q_{a}|)^{-20d}.$
We collect the important terms above in the function $E_{Q}(\bm{\Gamma})$,
(4.12)
$E_{Q}(\bm{\Gamma}):=r^{2d}r^{-d(k_{+}+k_{-})}{\varepsilon}^{k_{+}+k_{-}}\prod_{S\in
Q}r^{d}\exp(-c|\frac{r}{|S|}\sum_{a\in
S}q_{a}|^{0.99})\times\prod_{a}(1+|q_{a}|)^{-20d}.$
Because $k_{+},k_{-}\leq k_{max}$, the combinatorial factors contribute at
most an absolute constant. Therefore Proposition 4.10 reduces to the following
integral bound.
###### Lemma 4.12.
Let $A$ be an admissible operator, let $k_{+},k_{-}\in[k_{max}]$, let
$Q\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$, and let $(P,I^{rec},I^{tube})$ be a
triple such that if $I^{rec}\cup I^{tube}=\varnothing$, then $P$ has a cell of
more than two elements. Then
(4.13)
$\sup_{\xi_{0}^{-}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma})\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{+}\Big{|}\leq
C{\varepsilon}^{2.25}\tau.$
To estimate (4.13) we use the following simple lemma, which is just an
iterated application of Fubini’s theorem and the triangle inequality.
###### Lemma 4.13.
Let ${\mathcal{X}}=X_{1}\times\cdots\times X_{N}$ be the product of $N$
measure spaces $X_{j}$, and let ${\mathcal{X}}_{j}=X_{1}\times\cdots\times
X_{j}$ be the product of the first $j$ factors. Then for any positive
functions $F_{j}:{\mathcal{X}}_{j}\to{\mathbf{R}}^{+}$,
(4.14)
$\int_{{\mathcal{X}}}\prod_{j=1}^{N}F_{j}(x_{1},\cdots,x_{j})\mathop{}\\!\mathrm{d}X\leq\prod_{j=1}^{N}\sup_{x^{\prime}_{1},\dots,x^{\prime}_{j-1}}\int_{X_{j}}F_{j}(x^{\prime}_{1},\dots,x^{\prime}_{j-1},x_{j})\mathop{}\\!\mathrm{d}x_{j}.$
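A quick numerical check of (4.14) on random nonnegative data (counting measure on finite sets; illustrative only, with all names ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three finite "measure spaces" of sizes n1, n2, n3 with counting measure.
n1, n2, n3 = 4, 5, 6
F1 = rng.random(n1)            # F_1(x_1)
F2 = rng.random((n1, n2))      # F_2(x_1, x_2)
F3 = rng.random((n1, n2, n3))  # F_3(x_1, x_2, x_3)

lhs = np.einsum("i,ij,ijk->", F1, F2, F3)          # integral of the product
rhs = F1.sum() * F2.sum(axis=1).max() * F3.sum(axis=2).max()
assert lhs <= rhs + 1e-12
print(lhs, "<=", rhs)
```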
To apply Lemma 4.13 we need to order the variables in $\bm{\Gamma}$ and bound
the integrand as a product of functions constraining each variable in
$\bm{\Gamma}$ as a function only of variables that come earlier in the
ordering. The reader may find it useful to refer to Figure 10 for a quick
overview of the constraints on the variables.
Figure 10. Two typical collision pairs that can appear. On the left, a
recollision. The contribution from recollisions is heuristically counted as
follows: The $s_{0}$ variable has only the constraint $0\leq s_{0}\leq\tau$.
The $s_{1}$ variable is then constrained to $s_{1}{\,\lesssim\,}r|p_{1}|^{-1}$
so that $|y_{1}-y_{2}|\leq r$. Then $p_{1}$ is chosen from an annulus of width
$r^{-1}$ and radius $|p_{0}|$ and $p_{2}$ is effectively constrained by a
delta function in the momentum variables. The total contribution is at most
${\varepsilon}^{2}\tau{\,\lesssim\,}1$. On the right, a typical “rung”
collision forming a ladder. The time variable $s_{0}$ satisfies $0\leq
s_{0}\leq\tau$, but then $s_{0}^{\prime}$ is constrained by
$|s_{0}-s^{\prime}_{0}|{\,\lesssim\,}r|p_{0}|^{-1}$ so that
$|y_{1}-y_{1}^{\prime}|\leq r$. The momentum variable $p_{1}$ is again chosen
from an annulus of thickness $r^{-1}$ and radius $|p_{0}|$, and
$p_{1}^{\prime}$ is constrained by a delta function to match $p_{1}$. Again
the contribution is bounded by ${\varepsilon}^{2}\tau$.
We use the following ordering of the variables:
(4.15)
$\bm{\Gamma}=(\xi_{0}^{+},p_{+,0},s_{+,0},y_{+,1},\cdots,y_{+,k_{+}},p_{+,k_{+}},s_{+,k_{+}},\xi_{1}^{+},\xi_{1}^{-},p_{-,k_{-}},s_{-,k_{-}},y_{-,k_{-}},\cdots,y_{-,1},p_{-,0},s_{-,0},\xi_{0}^{-}).$
Given a variable label $\lambda\in\\{{\mlq\mlq y\mrq\mrq}_{a},{\mlq\mlq
p\mrq\mrq}_{a},{\mlq\mlq s\mrq\mrq}_{a}\\}_{a\in
K(\bm{\Gamma})}\cup\\{{\mlq\mlq\xi\mrq\mrq}_{\ell}^{\pm}\\}_{\ell\in\\{0,1\\}}$,
we define the partial paths $\bm{\Gamma}_{<\lambda}$ to be the sequence of
variables preceding $\lambda$. Thus for example $\bm{\Gamma}_{<{\mlq\mlq
y\mrq\mrq}_{+,1}}=(\xi_{0}^{+},p_{+,0},s_{+,0})$. We also define a total
ordering $\leq^{\prime}$ on $[k_{+}]\sqcup[k_{-}]$ implied by the ordering of
the variables (4.15), in which $(+,j)\leq^{\prime}(-,j^{\prime})$ for any
$j,j^{\prime}$ and $(\pm,j)\leq^{\prime}(\pm,j^{\prime})$ when $\pm j\leq\pm
j^{\prime}$ (that is, the ordering is reversed for the negative indices, as
indicated in (4.15)).
The next step is to bound the integrand as a product of constraints assigned
to each variable as a function of the prior variables. We will first write out
the “standard term” in the integrand, which does not use the indicator
function ${\mathbf{1}}_{P,I^{rec},I^{tube}}(\bm{\Gamma})$, as a product of
constraints on the $p$, $y$, and $\xi$ variables:
$|\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}||\Xi(\bm{\Gamma})|E_{Q\vee
P}(\bm{\Gamma})\leq{\varepsilon}^{k_{+}+k_{-}}F_{p}(\bm{\Gamma})F_{y}(\bm{\Gamma})F_{\xi}(\bm{\Gamma}).$
The constraints on momentum come from several sources. First there is a term
$\prod(1+|q_{a}|)^{-20d}$ ensuring that no impulse is too large. Second, there
is a term enforcing conservation of kinetic energy. Third, there are terms
ensuring that $p_{+,0}$ and $p_{-,k_{-}}$ match with $(\xi_{0}^{+})_{p}$ and
$(\xi_{1}^{-})_{p}$, respectively. Finally, there are the constraints
(approximate delta functions) coming from the expectation. We also take the
factor of $r^{2d}$ from (4.12) and distribute one factor of $r^{d}$ to each
$p_{+,0}$ and $p_{-,k_{-}}$:
$\displaystyle F_{p}(\bm{\Gamma}):=$
$\displaystyle\prod(1+|p_{a}-p_{a-1}|)^{-20d}\prod{\mathbf{1}}(||p_{\pm,j}|-|p_{\pm,0}||\leq\alpha\max\\{s_{\pm,j}^{-1},r^{-1}\\})$
$\displaystyle\times(r^{d}{\mathbf{1}}(|p_{+,0}-(\xi_{0}^{+})_{p}|\leq\alpha
r^{-1}))(r^{d}{\mathbf{1}}(|p_{-,{k_{-}}}-(\xi_{1}^{-})_{p}|\leq\alpha
r^{-1}))\prod_{S\in{Q\vee P}}r^{d}\exp(-ck_{max}^{-1}|r\sum_{a\in
S}q_{a}|^{0.99})$
The constraints in the position variables are determined by the compatibility
conditions $|y_{a+1}-(y_{a}+s_{a}p_{a})|\leq\alpha r$ and the compatibility of
the first and last collisions of $\omega^{+}$ and $\omega^{-}$ against the
boundaries $\xi_{0}^{+}$ and $\xi_{1}^{-}$. We take the factor of
$r^{-d(k_{+}+k_{-})}$ from (4.12) and distribute one $r^{-d}$ to each $y_{a}$
variable:
$\displaystyle F_{y}(\bm{\Gamma}):=$
$\displaystyle\prod(r^{-d}{\mathbf{1}}(|y_{a+1}-(y_{a}+s_{a}p_{a})|\leq\alpha
r))$
$\displaystyle\qquad\times(r^{-d}{\mathbf{1}}(|y_{+,1}-((\xi^{+}_{0})_{x}+s_{0}p_{0})|\leq\alpha
r))(r^{-d}{\mathbf{1}}(|y_{-,k_{-}}-((\xi^{-}_{1})_{x}-s_{-,k_{-}}p_{-,k_{-}})|\leq\alpha r)).$
The constraint on the $\xi$ variables comes from the compatibility with the
path, along with the support condition on $A$:
$\displaystyle
F_{\xi}(\bm{\Gamma}):={\mathbf{1}}(d_{r}(\xi_{1}^{+},(y_{+,k_{+}}+s_{+,k_{+}}p_{+,k_{+}},p_{+,k_{+}}))\leq\alpha){\mathbf{1}}(d_{r}(\xi_{0}^{-},(y_{-,1}-s_{-,0}p_{-,0},p_{-,0}))\leq\alpha)(1+d_{r}(\xi_{1}^{+},\xi_{1}^{-}))^{-20d}.$
There are also indirect constraints on the $s$ variables, coming from the
combination of the compatibility conditions
$|y_{a+1}-(y_{a}+s_{a}p_{a})|\leq\alpha r$ and constraints of the form
$|y_{a}-y_{b}|\leq 2kr$ for collisions $a\sim_{P}b$ of the same cell of the
partition $P(\bm{\Gamma})$. We also note that by Lemma 4.5, we have
$|y_{a^{\prime}}-(y_{a}+(t_{a^{\prime}}-t_{a})p_{a})|\leq C\alpha^{20}r$ when
$a$ and $a^{\prime}$ are collisions separated only by immediate recollisions.
We have no worry of “double-dipping” on the basic compatibility constraints
such as ${\mathbf{1}}(|y_{a+1}-(y_{a}+s_{a}p_{a})|\leq\alpha r)$ because for
example
$\operatorname{supp}(F_{p}(\bm{\Gamma})F_{y}(\bm{\Gamma})F_{\xi}(\bm{\Gamma}))\subset\\{|y_{a+1}-(y_{a}+s_{a}p_{a})|\leq\alpha
r\\},$
so we can freely apply extra such indicator functions where they are useful.
#### 4.4.1. The “standard constraint” bounds
In this section we use the partitions $P$ and $Q$ to “assign constraints” to
each variable.
In particular, we define functions
$f_{\lambda}(\bm{\Gamma}_{\leq\lambda})$ such that
$\displaystyle F_{p}(\bm{\Gamma})$ $\displaystyle\leq\prod f_{{\mlq\mlq
p\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq p\mrq\mrq},a})$ $\displaystyle
F_{y}(\bm{\Gamma})$ $\displaystyle\leq\prod f_{{\mlq\mlq
y\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq y\mrq\mrq},a})$ $\displaystyle
F_{\xi}(\bm{\Gamma})$ $\displaystyle\leq
f_{{\mlq\mlq\xi\mrq\mrq},1,+}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},1,+})f_{{\mlq\mlq\xi\mrq\mrq},1,-}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},1,-})f_{{\mlq\mlq\xi\mrq\mrq},0,-}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},0,-}).$
These “standard constraint” functions can simply be read off of the
definitions of $F_{p}$, $F_{y}$, and $F_{\xi}$.
(4.16) $f_{{\mlq\mlq p\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq
p\mrq\mrq},a}):=\begin{cases}r^{d}\exp(-ck_{max}^{-1}|r\sum_{a\in
S}q_{a}|^{0.99}),&a=\max_{\leq^{\prime}}S\text{ for some }S\in P\vee Q\\\
r^{d}{\mathbf{1}}(|p_{-,k_{-}}-(\xi_{1}^{-})_{p}|\leq\alpha
r^{-1}),&a=(-,k_{-})\\\
r^{d}{\mathbf{1}}(|p_{+,0}-(\xi_{0}^{+})_{p}|\leq\alpha r^{-1}),&a=(+,0)\\\
(1+|p_{a}-p_{a-1}|)^{-20d}{\mathbf{1}}(||p_{a}|-|p_{+,0}||\leq\alpha\max\\{|p_{0}|^{-1}s_{a}^{-1},r^{-1}\\}),&\text{
else}.\end{cases}$
The standard $y$ constraints are given by
(4.17) $f_{{\mlq\mlq y\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq
y\mrq\mrq},a}):=\begin{cases}r^{-d}{\mathbf{1}}(|y_{+,1}-((\xi^{+}_{0})_{x}+s_{0}p_{0})|\leq\alpha
r),&a=(+,1)\\\
r^{-d}{\mathbf{1}}(|y_{-,k_{-}}-((\xi^{-}_{1})_{x}-s_{-,k_{-}}p_{-,k_{-}})|\leq\alpha r),&a=(-,k_{-})\\\
r^{-d}{\mathbf{1}}(|y_{+,j}-(y_{+,j-1}+s_{+,j-1}p_{+,j-1})|\leq\alpha
r),&a=(+,j)\text{ for }j>1\\\
r^{-d}{\mathbf{1}}(|y_{-,j+1}-(y_{-,j}+s_{-,j}p_{-,j})|\leq\alpha
r),&a=(-,j)\text{ for }j<k_{-}.\end{cases}$
Finally, the standard $\xi$ constraints are
$\displaystyle
f_{{\mlq\mlq\xi\mrq\mrq},1,+}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},1,+})$
$\displaystyle:={\mathbf{1}}(d_{r}(\xi_{1}^{+},(y_{+,k_{+}}+s_{+,k_{+}}p_{+,k_{+}},p_{+,k_{+}}))\leq\alpha)$
$\displaystyle
f_{{\mlq\mlq\xi\mrq\mrq},1,-}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},1,-})$
$\displaystyle:=(1+d_{r}(\xi_{1}^{+},\xi_{1}^{-}))^{-20d}$ $\displaystyle
f_{{\mlq\mlq\xi\mrq\mrq},0,-}(\bm{\Gamma}_{\leq{\mlq\mlq\xi\mrq\mrq},0,-})$
$\displaystyle:={\mathbf{1}}(d_{r}(\xi_{0}^{-},(y_{-,1}-s_{-,0}p_{-,0},p_{-,0}))\leq\alpha).$
The contributions from the ${\mlq\mlq\xi\mrq\mrq}$ and ${\mlq\mlq y\mrq\mrq}$
variables are easy to account for:
###### Lemma 4.14 (Standard position and phase space bounds).
For any $\lambda\in\\{({\mlq\mlq
y\mrq\mrq},a),({\mlq\mlq\xi\mrq\mrq},\ell,\pm)\\}$
$\sup_{\bm{\Gamma}_{<\lambda}}\int
f_{\lambda}(\bm{\Gamma}_{\leq\lambda})\mathop{}\\!\mathrm{d}\Gamma_{\lambda}\leq
C.$
The momentum constraints are slightly more complicated.
###### Lemma 4.15 (Standard momentum bounds).
If $a=\max_{\leq^{\prime}}S$ for some $S\in P\vee Q$ or
$a\in\\{(-,k_{-}),(+,0)\\}$, then
$\sup_{\bm{\Gamma}_{<{\mlq\mlq p\mrq\mrq},a}}\int f_{{\mlq\mlq
p\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq
p\mrq\mrq},a})\mathop{}\\!\mathrm{d}p_{a}\leq C.$
Otherwise, if $a\not\sim_{P}a+1$, so that $a$ is not the first collision of an
immediate recollision, then
(4.18) $\sup_{\bm{\Gamma}_{<{\mlq\mlq p\mrq\mrq},a}}\int f_{{\mlq\mlq
p\mrq\mrq},a}(\bm{\Gamma}_{\leq{\mlq\mlq
p\mrq\mrq},a})\mathop{}\\!\mathrm{d}p_{a}\leq
Cr^{-1}\min\\{|p_{0}|^{d-1},1\\}.$
###### Proof.
Only the second bound needs proof. Since $a+1\not\sim_{P}a$, it follows from
Lemma 4.4 that $s_{a}\gtrsim\alpha|p_{0}|^{-1}r$. Therefore
$|p_{0}|^{-1}s_{a}^{-1}{\,\lesssim\,}r^{-1}$, so $p_{a}$ is constrained to an
annulus of thickness $r^{-1}$ and radius $|p_{0}|$. This annulus has volume
$r^{-1}|p_{0}|^{d-1}$. If $|p_{0}|\gtrsim 1$ the additional factor
$(1+|p_{a}-p_{a-1}|)^{-20d}$ ensures that $p_{a}$ is essentially also confined
to a ball of unit radius. ∎
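As a quick numerical illustration of the annulus-volume estimate (our own sketch, not part of the argument), one can check in $d=3$ that the volume of the set $\\{p:||p|-\rho|\leq\delta\\}$ scales like $\delta\rho^{d-1}$, with $\rho$ and $\delta$ standing in for $|p_{0}|$ and $r^{-1}$:

```python
# Monte Carlo sanity check (ours): in d = 3 the annulus { p : ||p| - rho| <= delta }
# has volume close to (surface area) x (thickness) = 4*pi*rho^2 * 2*delta,
# i.e. it scales like delta * rho^(d-1), matching r^{-1} |p_0|^{d-1} above.
import numpy as np

rng = np.random.default_rng(0)
rho, delta = 2.0, 0.01          # stand-ins for |p_0| and r^{-1}
box = rho + 1.0                 # half-width of a box containing the annulus
n = 2_000_000
p = rng.uniform(-box, box, size=(n, 3))
hits = np.abs(np.linalg.norm(p, axis=1) - rho) <= delta
mc_volume = hits.mean() * (2 * box) ** 3
exact = 4 * np.pi * rho**2 * 2 * delta
print(mc_volume, exact)         # agree to within a few percent
```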
The immediate recollisions require more detailed attention. If $a\sim_{P}a+1$
is an immediate recollision, then we group the variables $(s_{a},p_{a})$ and
use the following estimate.
###### Lemma 4.16.
For any $|p_{0}|>r^{-1}\sigma^{-1}\geq{\varepsilon}^{0.5}$ and
$q\in{\mathbf{R}}^{d}$,
(4.19)
$\begin{split}\int_{0}^{10\alpha|p_{0}|^{-1}r}\int_{{\mathbf{R}}^{d}}&{\mathbf{1}}(||p|^{2}/2-|p_{0}|^{2}/2|\leq\alpha
s^{-1})(1+|p-q|)^{-20d}\mathop{}\\!\mathrm{d}p\mathop{}\\!\mathrm{d}s\\\ &\leq
C\min\\{|p_{0}|^{-1},|p_{0}|^{d-2}\\}(1+\log({\varepsilon}^{-1})).\end{split}$
###### Proof.
We split the integral over $s$ into dyadic intervals $[2^{k},2^{k+1}]$ for
$k\in{\mathbb{Z}}$. On each such interval, the variable $p$ is restricted to an
annulus of radius $|p_{0}|$ and width $|p_{0}|^{-1}2^{-k}$.
First consider the case $|p_{0}|\gtrsim 1$. In this case the factor
$(1+|p-q|)^{-20d}$ additionally restricts the integration over $p$ to a unit
ball, and so the integration over $p$ produces a factor on the order
$\min\\{1,|p_{0}|^{-1}2^{-k}\\}$. Integrating over $s$ produces a factor
$2^{k}$, and summing over $k$ such that $2^{k}\leq 20\alpha|p_{0}|^{-1}r$, we
obtain the bound
$\displaystyle\int_{0}^{10\alpha|p_{0}|^{-1}r}\int_{{\mathbf{R}}^{d}}$
$\displaystyle{\mathbf{1}}(||p|^{2}/2-|p_{0}|^{2}/2|\leq\alpha
s^{-1})(1+|p-q|)^{-20d}\mathop{}\\!\mathrm{d}p\mathop{}\\!\mathrm{d}s$
$\displaystyle\leq\sum_{\begin{subarray}{c}k\in{\mathbb{Z}}\\\
2^{k}<|p_{0}|^{-1}\end{subarray}}2^{k}+\sum_{\begin{subarray}{c}|p_{0}|^{-1}<2^{k}<20\alpha|p_{0}|^{-1}r\end{subarray}}|p_{0}|^{-1}$
$\displaystyle\leq|p_{0}|^{-1}(1+\log(20\alpha r)).$
The second case is that $|p_{0}|\lesssim 1$. In this case, the annulus of
radius $|p_{0}|$ and width $|p_{0}|^{-1}2^{-k}$ has volume on the order
$|p_{0}|^{d-2}2^{-k}$. The bound then follows by integrating in $s$ and
summing over $k$, as above. ∎
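The dyadic bookkeeping in this proof can be spot-checked numerically. The following sketch (ours; the parameter values are arbitrary, and the sum is truncated below at $k=-60$, which changes it only negligibly) compares $\sum_{k}\min(2^{k},|p_{0}|^{-1})$ with the stated logarithmic bound:

```python
# Numeric spot check (ours) of the dyadic summation: the sum of
# min(2^k, 1/p0) over 2^k < 20*alpha*r/p0 is at most a constant times
# (1/p0) * (1 + log(20*alpha*r)).
import math

def dyadic_sum(p0, alpha, r):
    total, k = 0.0, -60                      # truncation at k = -60 is harmless
    while 2.0**k < 20 * alpha * r / p0:
        total += min(2.0**k, 1.0 / p0)
        k += 1
    return total

for p0, alpha, r in [(0.5, 2.0, 1e4), (3.0, 1.5, 1e6), (10.0, 1.1, 1e8)]:
    lhs = dyadic_sum(p0, alpha, r)
    rhs = (1.0 / p0) * (1 + math.log(20 * alpha * r))
    print(lhs, rhs, lhs <= 4 * rhs)          # True in each case
```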
Conspicuously missing from the discussion above is the integration over the
time variables. For many of the time variables we simply use the constraint
$s_{a}\leq\tau$ to pick up a factor of $\tau$. Additional constraints come
from the partition $P$. Suppose that $a\leq^{\prime}b$ and $a\sim_{P}b$. Then
there is a constraint $|y_{b}-y_{a}|\leq 2r$, which, coupled with the
constraint $|y_{b}-(y_{b-1}+s_{b-1}p_{b-1})|\leq\alpha r$, imposes a constraint
on $s_{b-1}$ in terms of the variables $y_{a}$, $y_{b-1}$, and $p_{b-1}$, which
are all in $\Gamma_{<{\mlq\mlq s\mrq\mrq},b}$. This constraint picks up a
factor of $|p_{0}|^{-1}r$ instead of $\tau$:
(4.20) $\sup_{\Gamma_{<{\mlq\mlq
s\mrq\mrq},b-1}}\int{\mathbf{1}}(|y_{a}-(y_{b-1}+s_{b-1}p_{b-1})|\leq\alpha
r)\mathop{}\\!\mathrm{d}s_{b-1}\leq|p_{0}|^{-1}r.$
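The estimate (4.20) reduces to the fact that the set of times $s$ for which $y_{b-1}+sp_{b-1}$ stays within distance $\alpha r$ of $y_{a}$ is an interval of length at most $2\alpha r/|p_{b-1}|$. A small numerical illustration (ours; $R$ plays the role of $\alpha r$ and $c$ of $y_{b-1}-y_{a}$):

```python
# Geometry behind (4.20), checked numerically (ours): { s : |c + s*p| <= R }
# is an interval of length at most 2R/|p|; with R ~ alpha*r and |p| ~ |p_0|
# this gives the factor |p_0|^{-1} r instead of tau.
import numpy as np

rng = np.random.default_rng(1)
R = 0.3
for _ in range(5):
    c = rng.normal(size=3)                   # stands for y_{b-1} - y_a
    p = rng.normal(size=3)
    # |c + s p|^2 <= R^2 is the quadratic a s^2 + b s + q <= 0
    a, b, q = p @ p, 2 * c @ p, c @ c - R**2
    disc = b * b - 4 * a * q
    length = np.sqrt(disc) / a if disc > 0 else 0.0
    print(length, 2 * R / np.sqrt(a), length <= 2 * R / np.sqrt(a) + 1e-12)
```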
#### 4.4.2. The case that $P$ has a cluster
With just the bounds we have already proven, it is possible to obtain a good
estimate in the case that $P$ has a cell of more than two elements. This is the
simplest case, as suggested by Figure 11.
Figure 11. The case that the collision partition has a cluster. There is only
one full time degree of freedom contributing a factor of $\tau$, but three
factors of ${\varepsilon}$ from the potential. Such clusters therefore
contribute ${\varepsilon}^{3}\tau$ rather than ${\varepsilon}^{2}\tau$.
###### Lemma 4.17 (The cluster bound).
Let $A$ be an admissible operator, let $k_{+},k_{-}\in[k_{max}]$, let
$Q\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$, and let
$P\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$ be a partition having a cell of more
than two elements. Then
(4.21)
$\sup_{\xi_{0}^{-}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma})\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\leq
C^{k_{+}+k_{-}}({\varepsilon}^{2}\tau)^{(k_{+}+k_{-})/2}\tau^{-1}.$
###### Proof.
We bound the standard part of the integrand by the product $F_{p}F_{y}F_{\xi}$
as described in the previous section and apply Lemma 4.13. The ${\mlq\mlq
y\mrq\mrq}$ and ${\mlq\mlq\xi\mrq\mrq}$ variables contribute a factor of
$C^{k_{+}+k_{-}}$. The product of the contributions from the $(s_{a},p_{a})$
pairs coming from immediate recollisions produces a factor of
$(C\log{\varepsilon}^{-1})^{n_{r}}$, where $n_{r}$ is the number of immediate
recollision clusters of $P$. To account for the ${\mlq\mlq s\mrq\mrq}$ and
${\mlq\mlq p\mrq\mrq}$ variables, we argue as follows. The ${\mlq\mlq
p\mrq\mrq}$ variables contribute a total of
$(Cr^{-1}\min\\{|p_{0}|^{d-1},1\\})^{|P|-n_{r}}$ by taking the product of the
integral over all $p_{a}$ with $a=\max_{\leq^{\prime}}S$ for each $S\in P$.
Then for each $s_{a}$ variable that is the first in its cluster (of which
there are $|P|$), we get a trivial factor of $\tau$. Each of the rest of the
$s_{a}$ variables contributes $|p_{0}|^{-1}r$ by (4.20). The product of all of
these factors gives
(4.22)
$\begin{split}\sup_{\xi_{0}^{-}}\Big{|}&\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma})\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\\\
&\qquad\leq
C^{k_{+}+k_{-}}{\varepsilon}^{k_{+}+k_{-}}\tau^{|P|}(C\min\\{|p_{0}|^{d-2},|p_{0}|^{-1}\\})^{|P|-n_{r}}\end{split}$
The first and last factors are bounded by $C^{k_{+}+k_{-}}$. Since $P$ is not a
perfect matching, $|P|<(k_{+}+k_{-})/2$, and since $|P|$ is an integer it
follows that $|P|\leq(k_{+}+k_{-})/2-1$. This proves (4.21). ∎
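The final step is pure exponent bookkeeping, which can be spot-checked mechanically (our own check; it assumes $\tau\geq 1$):

```python
# Power-counting check (ours): if tau >= 1 and |P| <= (k_+ + k_-)/2 - 1, then
# eps^(k_+ + k_-) * tau^|P| <= (eps^2 * tau)^((k_+ + k_-)/2) / tau, which is
# the right-hand side of (4.21).
for eps, tau in [(1e-3, 1e2), (1e-2, 1e4)]:
    for k in range(2, 12, 2):                # k stands for k_+ + k_-
        for P in range(1, k // 2):           # |P| <= k/2 - 1
            lhs = eps**k * tau**P
            rhs = (eps**2 * tau) ** (k // 2) / tau
            assert lhs <= rhs * (1 + 1e-9)   # tolerance for float rounding
print("power counting consistent")
```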
#### 4.4.3. The recollision case
To complete the proof of Lemma 4.12 we need to find a way to use the
additional constraints coming from a recollision or tube incidence. A
simplified version of the argument is presented in Figure 12.
Figure 12. A path with a recollision at indices $(2,5)$. On the left, the case
that $y_{4}$ is far from $y_{5}$. In this case the momentum variable $p_{4}$
is constrained to be approximately parallel to $y_{2}-y_{4}$. On the right,
the case that $y_{4}$ is close to $y_{5}$. In this case there is an additional
constraint on the time variable $s_{3}$.
Suppose that $(a,b)$ is a recollision occurring in $\omega^{+}$, i.e.
$\operatorname{sgn}(a)=\operatorname{sgn}(b)=+$. The idea is that a
recollision at $(a,b)$ typically enforces a strong constraint on the momentum
_before_ the collision at $b$. Indeed, if $|y_{a}-y_{b}|\leq 2r$ then
$|s_{b-1}p_{b-1}-(y_{a}-y_{b-1})|\leq 2r$, so in particular
$|p_{b-1}-z|\leq 2rs_{b-1}^{-1},$
with $z=(y_{a}-y_{b-1})s_{b-1}^{-1}$. If $|y_{a}-y_{b-1}|>10\alpha r$, then
$s_{b-1}\geq|y_{a}-y_{b-1}||p_{0}|^{-1}$. This is where the constraint on
$p_{b-1}$ comes from. On the other hand, if $|y_{a}-y_{b-1}|$ is small, then
there is a constraint on $s_{b-2}$ of exactly the same kind as (4.20). The
only additional subtlety to deal with is the possibility that $b-1$ is itself
a recollision or immediate recollision, in which case $p_{b-1}$ is localized
by the momentum constraint on a collision cluster and further localization is
not helpful. This is only a minor difficulty, and so we now prove
###### Lemma 4.18 (The recollision bound).
Let $A$ be an admissible operator, let $k_{+},k_{-}\in[k_{max}]$, let
$P,Q\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$, and let $I^{rec}\not=\varnothing$
be a nonempty set of recollisions in $[k_{+}]$. Then
(4.23)
$\sup_{\xi_{0}^{-}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma}){\mathbf{1}}_{I^{rec}}(\bm{\Gamma})\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\leq
C^{k_{+}+k_{-}}({\varepsilon}^{2}\tau)^{(k_{+}+k_{-})/2}({\varepsilon}^{-1.5}\tau^{-1}).$
###### Proof.
Let $(a,a^{\prime})\in I^{rec}$ be the recollision with minimal $a^{\prime}$.
Then let $b<a^{\prime}$ be the first collision before $a^{\prime}$ that is not
an immediate recollision. Because $(a,a^{\prime})$ is not an immediate
recollision, it is clear that $b\not=a^{\prime}$. Moreover, because
$(a,a^{\prime})$ is minimal, the index $b$ is not of the form
$\max_{\leq^{\prime}}S$ for any $S\in Q$.
We bound the indicator function for a recollision as a sum of indicator
functions depending on the distance $|y_{b-1}-y_{b}|$:
${\mathbf{1}}_{I^{rec}}(\bm{\Gamma})\leq{\mathbf{1}}(|y_{a}-y_{b}|\leq
2r){\mathbf{1}}(|y_{b-1}-y_{b}|\geq Kr)+{\mathbf{1}}(|y_{b-1}-y_{b}|\leq Kr),$
and use this to split the integral in (4.23) into a sum of two integrals, each
corresponding to a different term.
For the first term we follow the proof of Lemma 4.17, with the modification
that we set
$f^{\prime}_{{\mlq\mlq p\mrq\mrq},b-1}(\bm{\Gamma}_{\leq{\mlq\mlq
p\mrq\mrq}_{b-1}})={\mathbf{1}}(|p_{b-1}-|p_{0}|\frac{y_{b-1}-y_{a}}{|y_{b-1}-y_{a}|}|\leq|p_{0}|K^{-1}).$
In this case $p_{b-1}$ is sampled from the intersection of an annulus of
thickness $r^{-1}$ and radius $|p_{0}|$ with a ball of radius $|p_{0}|K^{-1}$,
so that
$\sup_{\bm{\Gamma}_{<{\mlq\mlq p\mrq\mrq}_{b-1}}}\int f^{\prime}_{{\mlq\mlq
p\mrq\mrq},b-1}(\bm{\Gamma}_{\leq{\mlq\mlq
p\mrq\mrq}_{b-1}})\mathop{}\\!\mathrm{d}p_{b-1}\leq
Cr^{-1}\min\\{(K^{-1}|p_{0}|)^{d-1},1\\}.$
Applying this estimate in the place of the bound (4.18) along with the rest of
the argument that leads to (4.22) yields
(4.24)
$\begin{split}\sup_{\xi_{0}^{-}}\Big{|}&\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma}){\mathbf{1}}(|y_{a}-y_{b}|\leq
2r){\mathbf{1}}(|y_{b-1}-y_{b}|\geq
Kr)\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\\\
&\qquad\leq
C^{k_{+}+k_{-}}{\varepsilon}^{k_{+}+k_{-}}\tau^{|P|}(C\min\\{|p_{0}|^{d-2},|p_{0}|^{-1}\\})^{|P|-n_{r}-1}\big(C\min\\{K^{1-d}|p_{0}|^{d-2},|p_{0}|^{-1}\\}\big).\end{split}$
The last factor is maximized when $K^{1-d}|p_{0}|^{d-2}=|p_{0}|^{-1}$, which
occurs when $|p_{0}|=K$. In this case we obtain a savings of $K^{-1}$ over the
bound (4.22), and therefore conclude
(4.25)
$\begin{split}\sup_{\xi_{0}^{-}}\Big{|}&\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma}){\mathbf{1}}(|y_{a}-y_{b}|\leq
2r){\mathbf{1}}(|y_{b-1}-y_{b}|\geq
Kr)\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\\\
&\qquad\leq
C^{k_{+}+k_{-}}{\varepsilon}^{k_{+}+k_{-}}\tau^{|P|}K^{-1}.\end{split}$
The second term to deal with is the integral involving
${\mathbf{1}}(|y_{b-1}-y_{b}|\leq Kr)$. In this case, we apply the bound
(4.20) to get a factor of $Kr$ instead of $\tau$ for the integration over the
variable $s_{b-2}$. Thus
(4.26)
$\begin{split}\sup_{\xi_{0}^{-}}\Big{|}&\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q\vee
P}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma}){\mathbf{1}}(|y_{b-1}-y_{b}|\leq
Kr)\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\\\
&\qquad\leq
C^{k_{+}+k_{-}}{\varepsilon}^{k_{+}+k_{-}}\tau^{|P|}(Kr\tau^{-1}).\end{split}$
Choosing $K={\varepsilon}^{-0.5}$ yields the desired result. ∎
#### 4.4.4. The tube incidence case
The final remaining case is that $\bm{\Gamma}$ does not have a recollision but
does have a tube incidence. Suppose that the first tube incidence occurs at
$(a,b)$. Then combining the tube incidence constraint (4.8) with the
compatibility condition $|y_{b}-y_{b-1}-s_{b-1}p_{b-1}|\leq 2r$, we conclude
that there exists $s\in{\mathbf{R}}$ such that
$|s_{b-1}p_{b-1}+(y_{b-1}-y_{a})+sp_{a}|\leq C\alpha^{20}r.$
In other words, the vector $s_{b-1}p_{b-1}$ lies on the tube with thickness
$\alpha^{20}r$, axis $p_{a}$, and passing through $y_{b-1}-y_{a}$. If $p_{a}$
is transverse to $p_{b-1}$ then this imposes a strong constraint on the time
variable $s_{b-1}$. On the other hand, if $p_{a}$ is parallel to $p_{b-1}$,
then this imposes a constraint on the momentum variable $p_{b-1}$. Both cases
are depicted in Figure 13.
Figure 13. Paths with a tube incidence at $(1,5)$, so that
$|y_{5}-(y_{1}+sp_{1})|\lessapprox r$ for some $s\in{\mathbf{R}}$. On the
left, an example in which $p_{4}$ is transverse to $p_{1}$. In this case the
time variable $s_{4}$ is constrained so that $y_{5}$ can lie on the gray tube.
On the right, an example in which $p_{4}$ is approximately parallel to
$p_{1}$. In this case $s_{4}$ is much less constrained, but $p_{4}$ is much
more constrained. In either case there is a gain of a factor of (at least)
${\varepsilon}^{1/2}$.
###### Lemma 4.19 (The tube incidence bound).
Let $A$ be an admissible operator, let $k_{+},k_{-}\in[k_{max}]$, let
$P,Q\in{\mathcal{P}}([k_{+}]\sqcup[k_{-}])$, and let
$I^{tube}\not=\varnothing$ be a nonempty set of tube incidences in $[k_{+}]$,
and suppose moreover that $I^{rec}=\varnothing$ so that there are no
recollisions. Then
(4.27)
$\sup_{\xi_{0}^{-}}\Big{|}\int\mathinner{\langle{\xi_{1}^{-}|A|\xi_{1}^{+}}\rangle}\Xi(\Gamma)E_{Q}(\bm{\Gamma}){\mathbf{1}}_{P}(\bm{\Gamma}){\mathbf{1}}_{I^{tube}}(\bm{\Gamma})\mathop{}\\!\mathrm{d}\omega^{\pm}\mathop{}\\!\mathrm{d}\xi_{1}^{\pm}\mathop{}\\!\mathrm{d}\xi_{0}^{-}\Big{|}\leq
C^{k_{+}+k_{-}}({\varepsilon}^{2}\tau)^{(k_{+}+k_{-})/2}({\varepsilon}^{-1.5}\tau^{-1}).$
UUITP-01/21
# Inozemtsev System as Seiberg-Witten Integrable System
Philip C. Argyres Physics Department, University of Cincinnati, PO Box
210011, Cincinnati OH 45221, US Oleg Chalykh School of Mathematics,
University of Leeds, Leeds, LS2 9JT, UK Yongchao Lü Department of Physics
and Astronomy, Uppsala university, Box 516, SE-75120 Uppsala, Sweden
###### Abstract
In this work we establish that the Inozemtsev system is the Seiberg-Witten
integrable system encoding the Coulomb branch physics of 4d $\mathcal{N}=2$
$\mathrm{USp}(2N)$ gauge theory with four fundamental and (for $N\geq 2$) one
antisymmetric tensor hypermultiplets. We describe the transformation from the
spectral curves and canonical one-forms of the Inozemtsev system in the $N=1$
and $N=2$ cases to the Seiberg-Witten curves and differentials explicitly,
along with the explicit matching of the modulus of the elliptic curve of
spectral parameters to the gauge coupling of the field theory, and of the
couplings of the Inozemtsev system to the field theory mass parameters. This
result is a particular instance of a more general correspondence between
crystallographic elliptic Calogero-Moser systems with Seiberg-Witten
integrable systems, which will be explored in future work.
###### Contents
1. 1 Introduction and summary
2. 2 Inozemtsev system
1. 2.1 Hamiltonian description
2. 2.2 Lax matrix
3. 2.3 Spectral curve
4. 2.4 Spectral curves for $N=1$ and $N=2$
5. 2.5 Behaviour near marked points
6. 2.6 Modular property
3. 3 $\mathrm{USp}(2N)$ $N_{f}=4$ superconformal field theory
1. 3.1 Field theory properties
2. 3.2 Seiberg-Witten curve
4. 4 Matching spectral curve to M5 brane curve
1. 4.1 The $N=1$ case
2. 4.2 The $N=2$ case
5. A Appendix
1. A.1 Elliptic functions and identities
2. A.2 Calculating the $N=2$ spectral curve
## 1 Introduction and summary
Since the dawn of the Seiberg-Witten era [1, 2], it has been recognized [3]
that there is a close connection between 4d $\mathcal{N}=2$ theories and completely
integrable Hamiltonian systems. In particular, Donagi and Witten [4] explained
that for each 4d $\mathcal{N}=2$ supersymmetric field theory there exists a
complex integrable system encoding its Coulomb branch physics. Following [5]
we will call such a complex integrable system a Seiberg-Witten integrable
system.
There are no known systematic ways to identify the Seiberg-Witten integrable
system for a given 4d $\mathcal{N}=2$ theory. Nevertheless, there have been
two main effective approaches in this regard. In the first approach, one tries
to match known many-body or spin chain integrable systems with particular 4d
$\mathcal{N}=2$ theories. There are several notable examples along this line.
For instance, 4d $\mathcal{N}=2$ pure YM theory with simple gauge algebra
$\mathrm{G}$ corresponds [6] to the twisted affine Toda chain of type
$(\widehat{\mathrm{G}}^{(1)})^{\vee}$, where
$(\widehat{\mathrm{G}}^{(1)})^{\vee}$ is the Langlands dual of the untwisted
affine Kac-Moody algebra $\widehat{\mathrm{G}}^{(1)}$. Another example [7, 8]
is the elliptic Calogero-Moser system of $A_{N-1}$ type, which describes the
Seiberg-Witten solution of 4d $\mathcal{N}=2^{\ast}$ theories with gauge group
$\mathrm{SU}(N)$ or $\mathrm{U}(N)$; this type of matching has been
generalized to arbitrary simple gauge groups (with $G_{2}$ as a potential
exception) [9]. It is also proposed [10, 11] that the inhomogeneous
$\mathfrak{sl}_{2}$ XXX spin chain provides solutions to 4d $\mathcal{N}=2$
$\mathrm{SU}(N_{c})$ gauge theories with $N_{f}\leq 2N_{c}$ fundamental
hypermultiplets. See the survey [12] for these and further connections.
A second approach identifies Seiberg-Witten integrable systems for a large
class of 4d $\mathcal{N}=2$ supersymmetric field theories as Hitchin systems
on Riemann surfaces with tame/wild ramified punctures. This class of 4d
$\mathcal{N}=2$ supersymmetric field theories are known as class-S theories
[13]. A precursor to this approach is the M-theory solution to certain 4d
$\mathcal{N}=2$ quiver gauge theories engineered with D4-NS5-D6 brane systems
[14].
These two approaches — matching to known integrable systems or to Hitchin
systems — have some overlap. For instance, it is known that the elliptic
Calogero-Moser system of type $A_{N-1}$ can be interpreted as the
$\mathrm{SU}(N)$ Hitchin system on a torus with a puncture [15]. However, for
a majority of Hitchin systems there are no explicitly known many-body or spin
chain integrable systems.
In this and upcoming work [16], we will follow the line of the first approach
to identify the Seiberg-Witten systems for several series of 4d
$\mathcal{N}=2$ superconformal field theories which all admit F-theory
constructions. A common feature shared by those theories is that their Coulomb
branch chiral rings are given by the rings of symmetric polynomials with
respect to certain complex reflection groups [17].111We refer the reader to
the appendix in [18] for a nice account of complex reflection groups aimed at
physicists. On general grounds the relevant complex reflection groups must
also satisfy various physical constraints, including Dirac quantization and
electric-magnetic duality, which imply that they are crystallographic, i.e.
that they preserve an invariant full-rank lattice. All such crystallographic
groups have been classified [19, 20].
Generalizations of elliptic Calogero-Moser systems — known as crystallographic
elliptic Calogero-Moser systems — have been constructed for all
crystallographic complex reflection groups [21]. Our proposal is that these
are candidates for Seiberg-Witten geometries. A nice feature of these
integrable systems is that their full set of parameters matches the mass
deformations of classes of $4d$ $\mathcal{N}=2$ quantum field theories. For
instance, we identify the elliptic Calogero-Moser systems attached to the
crystallographic complex reflection groups of type $G(m,1,N)$ with $m=2,3,4,6$
as Seiberg-Witten integrable systems for $4d$ $\mathcal{N}=2$ rank $N$ $D_{4}$
and $E_{6}$, $E_{7}$, $E_{8}$ theories [22, 23, 24]. Those theories belong to
the category of class-S theories; therefore their Seiberg-Witten integrable
systems admit a Hitchin system construction [25, 26, 27].
In this paper we will focus on the $G(2,1,N)$ case, whose crystallographic
elliptic Calogero-Moser system is also known as the Inozemtsev system [28] and
corresponds to $4d$ $\mathcal{N}=2$ $\mathrm{USp}(2N)$ gauge theory with one
antisymmetric and four fundamental hypermultiplets. Since $G(2,1,N)$ is the
complexification of the Weyl group $W(B_{N})\equiv W(C_{N})$ and the
integrable system depends on an elliptic modulus, it is natural to guess that
it describes the Coulomb branch of a superconformal gauge theory with
$\mathrm{USp}(2N)$ or $\mathrm{Spin}(2N{+}1)$ gauge group. What is
surprising is that, on the one hand, the Inozemtsev system has no direct Lie-
algebraic interpretation, and on the other hand the Inozemtsev system has the
right pattern of couplings to match exactly with a single class of 4d
$\mathcal{N}=2$ gauge theories, namely, the $\mathrm{USp}(2N)$ superconformal
theories with one antisymmetric and $N_{f}=4$ fundamental hypermultiplets.
Since the $\mathrm{USp}(2N)$ $N_{f}=4$ theory admits a class-S description, the
Inozemtsev system should be equivalent to an $\mathrm{SU}(2N)$ Hitchin system
on the orbicurve $T^{2}/\mathbb{Z}_{2}$, and we offer such an interpretation.
Furthermore, the Seiberg-Witten solutions for the particular
$\mathrm{USp}(2N)$ gauge theories are given in explicit form via an M5 brane
construction in [29]. The equivalence of the Seiberg-Witten solutions with the
Inozemtsev system is not at all obvious. In this work we check their
equivalence for the rank $N=1,2$ cases. We find that we need to modify some
choices made in [29] in the M5 brane construction of the Seiberg-Witten curve
in order to achieve an algebraically transparent matching to the integrable
system.
Our recognition of the Inozemtsev system as a Seiberg-Witten integrable system
has some independent interest. Specifically, one may be able to utilize the
gauge theory description to extract exactly solvable observables by various
powerful techniques including semi-classical methods, supersymmetric
localization, the gauge-Bethe correspondence, and the AGT correspondence, and
relate them to the Inozemtsev system.
This paper is organized as follows. In section 2 we discuss various aspects of
the Inozemtsev system, and introduce the Lax representation following [30, 31].
Among other things, we give an interpretation of the Inozemtsev system as a
Hitchin system on the four-punctured sphere. In section 3, after recalling
some general properties of the series of $\mathrm{USp}(2N)$ $N_{f}=4$
theories, we describe the realization of their Coulomb branch physics in terms
of M5 brane curves. In section 4 we describe the transformation from the
spectral curves and canonical one-form of the Inozemtsev system in the $N=1$
and $N=2$ cases to the Seiberg-Witten curves and differentials explicitly,
along with the variable and parameter matching. We include an appendix which
summarizes some relevant elliptic functions and identities and outlines the
derivation of the $N=2$ spectral curve of the Inozemtsev system.
## 2 Inozemtsev system
### 2.1 Hamiltonian description
The Inozemtsev system, also known as the Calogero–Moser–Sutherland system of
$BC_{N}$-type, is described by the Hamiltonian [28]:
$h_{2}=\sum_{j=1}^{N}(p_{j}^{2}-u(q_{j}))-2g^{2}\sum_{j<k}^{N}\left(\wp(q_{j}-q_{k})+\wp(q_{j}+q_{k})\right)\,,$
(2.1)
where $\wp(q)$ is the Weierstrass $\wp$-function with periods $1,{\tau}$ and
$u(q)=\sum_{r=0}^{3}g_{r}^{2}\wp(q+{\omega}_{r})\,,\qquad({\omega}_{0},{\omega}_{1},{\omega}_{2},{\omega}_{3})=\left(0,\frac{1}{2},\frac{1+{\tau}}{2},\frac{{\tau}}{2}\right)\,.$
(2.2)
Here $(p_{i},q_{i})$, $i=1,\dots,N$ represent the momenta and positions of $N$
interacting particles on the line, subject to an external field with potential
$-u(q)$. Note that we have four coupling constants $g_{0,1,2,3}$ in the $N=1$
case and one additional coupling constant $g$ in the $N\geq 2$ cases. It is
customary to assume, in the repulsive regime, that the couplings $g^{2}$ and
$g_{r}^{2}$ are real negative. For our purposes, however, this is not
important, as we consider this system on the complexified phase space
$\mathbb{C}^{2N}$ with the standard (holomorphic) symplectic structure. As
such, it has the underlying symmetry associated with the complex
crystallographic group generated by the translations $q_{j}\mapsto q_{j}+1$,
$q_{j}\mapsto q_{j}+{\tau}$ together with the arbitrary permutations and sign
changes of $q_{j}$. This corresponds to the group $[G(2,1,N)]^{\tau}_{1}$ in
the classification [19].
The Inozemtsev system is known to be completely integrable in Liouville’s
sense, which means that it admits $N$ independent Poisson-commuting
Hamiltonians $h_{2},h_{4},\dots,h_{2N}$. The higher Hamiltonians are of the
form $h_{4}=\sum_{i<j}p_{i}^{2}p_{j}^{2}+\ldots$,
$h_{6}=\sum_{i<j<k}p_{i}^{2}p_{j}^{2}p_{k}^{2}+\ldots$, etc., up to lower
degree terms. Explicit expressions for $h_{2k}$ are available for the quantum
case [32] from which the classical Hamiltonians are easily obtained. For
instance, in the $N=2$ case the quartic Hamiltonian can be taken as
$\displaystyle h_{4}$
$\displaystyle=\Big{(}p_{1}p_{2}+g^{2}\wp(q_{1}-q_{2})-g^{2}\wp(q_{1}+q_{2})\Big{)}^{2}$
$\displaystyle\quad\mbox{}-u(q_{1})p_{2}^{2}-u(q_{2})p_{1}^{2}+u(q_{1})u(q_{2})$
$\displaystyle\quad\mbox{}+\left(u(q_{1})+u(q_{2})\right)\left(g^{2}\wp(q_{1}-q_{2})+g^{2}\wp(q_{1}+q_{2})\right)$
$\displaystyle\quad\mbox{}-2g^{2}\sum^{3}_{i=0}g^{2}_{i}\wp(q_{1}+\omega_{i})\wp(q_{2}+\omega_{i})\,.$
(2.3)
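These Hamiltonians are straightforward to evaluate numerically. The following is a minimal sketch (ours, not taken from the references): it computes $\wp$ with periods $(1,\tau)$ by a truncated lattice sum and evaluates $h_{2}$ of (2.1) for $N=2$; all numerical values are arbitrary sample inputs.

```python
# Minimal numerical sketch (ours): Weierstrass wp with periods (1, tau) via a
# truncated lattice sum (truncation error is O(1/M^2)), then h_2 of (2.1) for
# N = 2 at an arbitrary sample point of the complexified phase space.
import numpy as np

def wp(z, tau, M=60):
    m, n = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1))
    w = (m + n * tau).ravel()
    w = w[np.abs(w) > 1e-12]                 # omit the origin
    return 1 / z**2 + np.sum(1 / (z - w) ** 2 - 1 / w**2)

tau = 0.3 + 1.1j
z = 0.21 + 0.13j
print(abs(wp(z + 1, tau) - wp(z, tau)))      # small (~1e-4): periodic up to truncation
print(abs(wp(-z, tau) - wp(z, tau)))         # ~1e-16: wp is even

def h2(p, q, g, gs, tau):
    """h_2 of (2.1) for N = 2; gs = (g_0, g_1, g_2, g_3), half-periods as in (2.2)."""
    om = [0, 0.5, (1 + tau) / 2, tau / 2]
    u = lambda x: sum(gr**2 * wp(x + o, tau) for gr, o in zip(gs, om))
    return (sum(pj**2 - u(qj) for pj, qj in zip(p, q))
            - 2 * g**2 * (wp(q[0] - q[1], tau) + wp(q[0] + q[1], tau)))

print(h2([0.4, -0.2], [0.31 + 0.05j, 0.12 - 0.07j], 0.5, (0.1, 0.2, 0.3, 0.4), tau))
```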
### 2.2 Lax matrix
As another manifestation of the integrability of the model (2.1), it admits a
Lax representation, i.e., a pair of matrix-valued functions
$L,A\,:\,\mathbb{C}^{2N}\to\mathrm{Mat}(2N,\mathbb{C})$ such that the
Hamiltonian dynamics takes the form $\frac{d}{dt}L=[L,A]$. An immediate
corollary is that the quantities $\mathrm{tr}(L^{k})$, as well as the
eigenvalues of $L$, are constants of motion, which means that $L$ remains
isospectral for all $t$. Originally, Inozemtsev constructed in [28] a Lax pair
of size $3N\times 3N$ (see also [33]); other Lax pairs of smaller size have
since been found [9, 30]. We will use the Lax matrix of size $2N\times 2N$
from [30]. To write it down, we need the functions ${\sigma}_{\alpha}(x)$ and
$v_{\alpha}(x):=\sum_{r=0}^{3}g_{r}{\sigma}_{2\alpha}^{r}(x)$ whose definition
and basic properties are given in the Appendix. We have:
$\displaystyle L$
$\displaystyle=\sum_{i=1}^{N}\left(p_{i}E_{i,i}-p_{i}E_{i+N,i+N}+v_{\alpha}(q_{i})E_{i,i+N}+v_{\alpha}(-q_{i})E_{i+N,i}\right)$
(2.4) $\displaystyle+g\sum_{i\neq
j}^{N}\left(\sigma_{\alpha}(q_{ij})E_{i,j}+\sigma_{\alpha}(q_{ij}^{+})E_{i,j+N}+\sigma_{\alpha}(-q_{ij}^{+})E_{i+N,j}+\sigma_{\alpha}(-q_{ij})E_{i+N,j+N}\right)\,,$
where $E_{i,j}$ are the elementary matrices, and $q_{ij}$, $q_{ij}^{+}$ are
the shorthand notation for $q_{i}-q_{j}$ and $q_{i}+q_{j}$, respectively. This
Lax matrix $L$ contains an auxiliary parameter $\alpha$, usually referred to
as the spectral parameter, so we may write $L(\alpha)$ to emphasize this
dependence. We remark that the above expression for $L$ follows closely [31,
(5.15)]. It corresponds, in a different notation, to (3.37) and (3.39) in
[30].
As a function of $\alpha$, the Lax matrix $L$ has the following important
properties.
1. Periodicity:
$L(\alpha+1)=L(\alpha)\,,\quad L(\alpha+\tau)=CL(\alpha)C^{-1}\,,$ (2.5)
where $C=\begin{bmatrix}D&0\\\ 0&D^{-1}\end{bmatrix}$ with
$D=\mathrm{diag}(e^{2\pi iq_{1}},\dots,e^{2\pi iq_{N}})$.
2. Symmetry:
$L(-\alpha)=-ML(\alpha)M^{-1}\,,\quad\text{where}\
M=\begin{bmatrix}0&\mathrm{I}_{N}\\\ \mathrm{I}_{N}&0\end{bmatrix}.$ (2.6)
3. $L$ has simple poles at the half-periods: $L\sim L_{i}({\alpha}-{\omega}_{i})^{-1}+\mathrm{O}(1)$ near ${\alpha}={\omega}_{i}$. The residues $L_{i}$ are
$\displaystyle L_{i}$
$\displaystyle=-g_{i}^{\vee}\begin{bmatrix}0&\mathrm{I}_{N}\\\
\mathrm{I}_{N}&0\end{bmatrix}\quad(i=1,2,3)\,,$ (2.7) $\displaystyle L_{0}$
$\displaystyle=(g-g_{0}^{\vee})\begin{bmatrix}0&\mathrm{I}_{N}\\\
\mathrm{I}_{N}&0\end{bmatrix}-gT\,,$ (2.8)
where $T$ is the $2N\times 2N$ matrix with $0$’s along the main diagonal and
$1$’s elsewhere, and $g_{i}^{\vee}$ are the dual parameters,
$\begin{pmatrix}g^{\vee}_{0}\\\ g^{\vee}_{1}\\\ g^{\vee}_{2}\\\
g^{\vee}_{3}\\\
\end{pmatrix}=\frac{1}{2}\left({\begin{array}[]{rrrr}1&1&1&1\\\ 1&1&-1&-1\\\
1&-1&1&-1\\\ 1&-1&-1&1\\\ \end{array}}\right)\begin{pmatrix}g_{0}\\\ g_{1}\\\
g_{2}\\\ g_{3}\\\ \end{pmatrix}.$ (2.9)
Note that the residues $L_{i}$ are semi-simple (diagonalizable), with
$\displaystyle L_{i}$
$\displaystyle\sim\mathrm{diag}\Big{(}\underbrace{-g_{i}^{\vee},\dots,-g_{i}^{\vee}}_{\text{$N$
times}},\ \underbrace{g_{i}^{\vee},\dots,g_{i}^{\vee}}_{\text{$N$
times}}\Big{)}\qquad\text{for}\ i=1,2,3\,,$ (2.10) $\displaystyle L_{0}$
$\displaystyle\sim\mathrm{diag}\Big{(}-g_{0}^{\vee}-2(N-1)g,\
\underbrace{-g_{0}^{\vee}+2g,\dots,-g_{0}^{\vee}+2g}_{\text{$N{-}1$ times}},\
\underbrace{g_{0}^{\vee},\dots,g_{0}^{\vee}}_{\text{$N$ times}}\Big{)}\,.$
(2.11)
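Both the duality map (2.9) and the residue pattern (2.10)–(2.11) are easy to test numerically. The sketch below (ours, with arbitrary parameter values) checks that the matrix in (2.9) squares to the identity and that $L_{0}=(g-g_{0}^{\vee})M-gT$ has exactly the eigenvalues listed in (2.11):

```python
# Numerical checks (ours): the duality map (2.9) is an involution, and the
# residue L_0 = (g - g0v) M - g T of (2.8) has the eigenvalue pattern (2.11).
import numpy as np

H = 0.5 * np.array([[1, 1, 1, 1], [1, 1, -1, -1],
                    [1, -1, 1, -1], [1, -1, -1, 1]], dtype=float)
print(np.allclose(H @ H, np.eye(4)))           # True: applying (2.9) twice is the identity

N, g, g0v = 3, 0.7, 0.3                        # g0v stands for g_0^vee
I, Z = np.eye(N), np.zeros((N, N))
M = np.block([[Z, I], [I, Z]])
T = np.ones((2 * N, 2 * N)) - np.eye(2 * N)    # 0 on the diagonal, 1 elsewhere
L0 = (g - g0v) * M - g * T
expected = sorted([-g0v - 2 * (N - 1) * g] + (N - 1) * [2 * g - g0v] + N * [g0v])
print(np.allclose(np.sort(np.linalg.eigvalsh(L0)), expected))   # True
```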
In [30], the Lax pair $L,A$ was constructed by an ad hoc method, and only for
the Hamiltonian flow corresponding to the quadratic Hamiltonian $h_{2}$. A
more general conceptual method for calculating $L,A$ was suggested in [31]. It
uses elliptic Dunkl operators [34, 21] and, apart from reproducing the above
$L$, it allows to construct a Lax partner $A$ for each of the commuting
Hamiltonian flows. This means that $L$ remains isospectral under each of the
flows governed by $h_{2},h_{4},\dots,h_{2N}$, cf. [31, Prop. 5.6]. As a
result, the quantities $f_{i}=\mathrm{tr}(L^{i})$ Poisson-commute with each of
$h_{2k}$, hence $f_{i}$ is a function of $h_{2},\dots,h_{2N}$. Taking into
account (2.5), we conclude that each of the functions
$f_{i}=\mathrm{tr}(L^{i})$ is a polynomial in $h_{2},\dots,h_{2N}$ whose
coefficients are elliptic functions of $\alpha$. Hence, the characteristic
polynomial of $L$ can be written as
$\det(L-k\mathrm{I})=k^{2N}+a_{1}k^{2N-1}+\dots+a_{2N}\,,$ (2.12)
where $a_{i}$ are polynomials in $h_{2},\dots,h_{2N}$, elliptic in the
spectral parameter.
### 2.3 Spectral curve
This puts us in the familiar setting of complex completely integrable systems.
Namely, the level sets of $N$ Poisson-commuting Hamiltonians
$h_{2},\dots,h_{2N}$ define a Lagrangian fibration
$\pi\,:\,\mathbb{C}^{2N}\to\mathbb{C}^{N}$. In addition to that, we have a
family of spectral curves
$f(k,\alpha):=\det(L(\alpha)-k\mathrm{I})=0\,.$ (2.13)
parametrized by the coordinates $h_{2},\dots,h_{2N}$ on the base of the
fibration $\pi$. Each spectral curve (2.13) is a $2N$-sheeted branched
covering of the base elliptic curve
$\Gamma=\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$, with $(k,\alpha)$ viewed as
coordinates on the cotangent bundle $T^{*}\Gamma$. The curve (2.13) comes with
a meromorphic differential, obtained by restriction from the canonical
$1$-form $kd\alpha$ on $T^{*}\Gamma$, and a line bundle $\mathcal{L}$ (eigen-
bundle of $L$).
So far this looks parallel to the case of the usual Calogero–Moser system
[35]. Motivated by [36, 15, 37], one should think of the matrix-valued
$1$-form $\Phi:=L(\alpha)d\alpha$ as a Higgs field of some kind, so let us
sketch such an interpretation. First, instead of considering $\Phi$ over the
elliptic curve $\Gamma$, it is more natural to take into account the symmetry
(2.6). It implies that
$f(-k,-\alpha)=f(k,\alpha)$ (2.14)
and so the spectral curve can be viewed as a branched covering of the Riemann
sphere ${\Gamma}/\mathbb{Z}_{2}$, with the $\mathbb{Z}_{2}$ acting by
$\alpha\mapsto-\alpha$. Indeed, if we multiply $f(k,\alpha)$ by
$(\wp^{\prime}(\alpha))^{2N}$, we get
$\widetilde{f}:=(\wp^{\prime}(\alpha))^{2N}f(k,\alpha)=\det(\wp^{\prime}(\alpha)L(\alpha)-k\wp^{\prime}(\alpha)\mathrm{I})=\det(\widetilde{L}-y\mathrm{I})\,,$
(2.15)
where $\widetilde{L}=\wp^{\prime}(\alpha)L$ and $y=k\wp^{\prime}(\alpha)$. A
quick check confirms that $\widetilde{L}$ is regular at $\alpha=\omega_{r}$,
$r=1,2,3$, and that $\widetilde{L}(-\alpha)=M\widetilde{L}(\alpha)M^{-1}$.
Therefore, the expression (2.15) is a polynomial in $y$, whose coefficients
are even elliptic functions with the only singularity at $\alpha=0$. As a
result, the spectral curve (2.13) acquires polynomial form
$\widetilde{f}(x,y)=0\,,\quad\text{where}\ x=\wp(\alpha)\,,\
y=k\wp^{\prime}(\alpha)\,.$ (2.16)
Using $x=\wp(\alpha)$ as the coordinate on $\Gamma/\mathbb{Z}_{2}$, we also
obtain $\Phi=Ld\alpha=(\wp^{\prime}(\alpha))^{-1}Ldx$. The properties of $L$
tell us that such $\Phi$ should be viewed as a Higgs field on the Riemann
sphere with four marked points, more precisely, on an orbicurve
$\mathbb{CP}^{1}$ of type $(2,2,2,2)$. Recall [38] that Hitchin systems on
orbicurves can also be viewed as parabolic Hitchin systems, with (conjugacy
classes of) the residues of $\Phi$ at the marked points being associated with
the values of the moment map, cf. [37, 5]. Therefore, the formula (2.4) should
be interpreted as a parametrization, by $p_{i},q_{i}$, of the corresponding
$2N$-dimensional symplectic leaf of a parabolic $\mathrm{SL}(2N,\mathbb{C})$
Hitchin system on the Riemann sphere with four marked points
$e_{i}=\wp(\omega_{i})$, $i=0,1,2,3$. This provides an interpretation of the
Inozemtsev system as a Hitchin system. Note that this is different from the
approach of [39]. Note also that the pattern (2.10)–(2.11) of the residues of
$\Phi$ at the marked points is in good agreement with the SCFT picture (see
Sec. 3.2 below). Also, as is explained below in Sec. 2.5, the genus of the
spectral curve equals $N$, which is as expected from both the Hitchin-system
and the M5-brane perspectives.
Let us also recall that starting from a moduli space $\mathcal{M}$ of Higgs
bundles, the nonabelian Hodge correspondence and Riemann–Hilbert map associate
to $\mathcal{M}$ two other moduli spaces, of local systems and of monodromy
data (known as de Rham and Betti models, see [40] for a nice overview). For
our case, these two other incarnations can be found in [41, 42], see also [43,
44, 33, 45, 46] for further links between the Inozemtsev system and
isomonodromic deformations.
### 2.4 Spectral curves for $N=1$ and $N=2$
Here we present explicit equations for the spectral curves (2.13) in the cases
of $N=1$ and $N=2$. We write equations in terms of the variables $k,\alpha$.
They will be matched to M5 brane curves in Section 4.
#### 2.4.1 $N=1$ curve
For $N=1$, the Lax matrix is (cf. [33])
$L=\begin{bmatrix}p&v_{\alpha}(q)\\\ v_{\alpha}(-q)&-p\end{bmatrix}\,.$ (2.17)
Using (A.8), we find:
$\det L=-p^{2}-v_{\alpha}(q)v_{\alpha}(-q)=-p^{2}+u(q)-u^{\vee}(\alpha)\,,$
(2.18)
where $u^{\vee}(\alpha)$ is the dual version of (2.2), defined in (2.28)
below. Hence, the spectral curve (2.13) takes the form
$f(k,\alpha)=k^{2}-h_{2}-u^{\vee}(\alpha)=0\,,$ (2.19)
with $h_{2}=p^{2}-u(q)$ viewed as a complex parameter. Multiplying this by
$(\wp^{\prime}(\alpha))^{2}$ and using $x=\wp(\alpha)$,
$y=k\wp^{\prime}(\alpha)$ we obtain $y^{2}=\wp^{\prime
2}(\alpha)\,\left(h_{2}+u^{\vee}(\alpha)\right)$. Using (4.2) it is easy to
see that the right-hand side is a quartic polynomial in $x=\wp(\alpha)$ (it
reduces to a cubic if $g_{0}^{\vee}=0$). For generic $h_{2}$, the curves are
smooth of genus $1$.
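This can also be confirmed symbolically. The sketch below (ours) does not use (4.2) directly; instead it uses the standard identities $\wp^{\prime}(\alpha)^{2}=4(x-e_{1})(x-e_{2})(x-e_{3})$ and $\wp(\alpha+\omega_{i})=e_{i}+(e_{i}-e_{j})(e_{i}-e_{k})/(x-e_{i})$ for $i=1,2,3$, with symbol names of our own choosing:

```python
# Symbolic check (ours) that wp'(alpha)^2 * (h2 + u_dual(alpha)) is a quartic
# polynomial in x = wp(alpha), reducing to a cubic when gv0 = 0.
import sympy as sp

x, h2 = sp.symbols('x h2')
e = sp.symbols('e1:4')                           # branch points e_1, e_2, e_3
gv = sp.symbols('gv0:4')                         # dual couplings g_i^vee
wp2 = 4 * (x - e[0]) * (x - e[1]) * (x - e[2])   # wp'(alpha)^2 in terms of x

def wp_half_shift(i):
    # wp(alpha + omega_i) = e_i + (e_i - e_j)(e_i - e_k)/(x - e_i), i = 1, 2, 3
    j, k = [m for m in range(3) if m != i]
    return e[i] + (e[i] - e[j]) * (e[i] - e[k]) / (x - e[i])

u_dual = gv[0]**2 * x + sum(gv[i + 1]**2 * wp_half_shift(i) for i in range(3))
rhs = sp.cancel(wp2 * (h2 + u_dual))             # all denominators cancel
print(sp.Poly(rhs, x).degree())                  # 4
print(sp.Poly(rhs.subs(gv[0], 0), x).degree())   # 3
```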
The Lagrangian fibration $\pi\,:\,\mathbb{C}^{2}\to\mathbb{C}$ is by the level
sets $p^{2}-u(q)=h_{2}$. Singular fibers correspond to the stationary values
of the Hamiltonian, i.e. to the equilibria $(p,q)=(0,q_{0})$ with
$u^{\prime}(q_{0})=0$. One can check that if $l\geq 1$ of the couplings
$g_{i}$ are non-zero and generic, the number of stationary values of $h_{2}$
is $l+2$, in agreement with the Seiberg-Witten geometry [2]. Indeed, the function
$u^{\prime}(q)=\sum_{i=0}^{3}g_{i}^{2}\wp^{\prime}(q+\omega_{i})$ is odd
elliptic of order $3l$, therefore it has $3l$ zeros; the genericity assumption
ensures that the multiplicity of each zero is always one. Then $4-l$ zeros are
given by the half-periods, for which the values of $h_{2}$ are distinct.
Furthermore, the other $4l-4$ zeros come in pairs $(q,-q)$ and so give the same
stationary value of $h_{2}$. Thus, the number of singular fibers (or
stationary values of $h_{2}$) is $(4-l)+(4l-4)/2=l+2$, as claimed.
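The counting can be verified mechanically (our own check):

```python
# Arithmetic check (ours) of the singular-fiber count: (4 - l) half-period
# zeros plus (4l - 4)/2 paired zeros give l + 2 stationary values of h_2.
for l in range(1, 5):
    assert (4 - l) + (4 * l - 4) // 2 == l + 2
print("l + 2 count verified for l = 1, 2, 3, 4")
```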
#### 2.4.2 $N=2$ curve
For $N=2$, the Lax matrix is
$\displaystyle L$ $\displaystyle=$
$\displaystyle\begin{bmatrix}p_{1}&g\sigma_{\alpha}(q_{12})&v_{\alpha}(q_{1})&g\sigma_{\alpha}(q^{+}_{12})\\\
g\sigma_{\alpha}(-q_{12})&p_{2}&g\sigma_{\alpha}(q^{+}_{12})&v_{\alpha}(q_{2})\\\
v_{\alpha}(-q_{1})&g\sigma_{\alpha}(-q^{+}_{12})&-p_{1}&g\sigma_{\alpha}(-q_{12})\\\
g\sigma_{\alpha}(-q^{+}_{12})&v_{\alpha}(-q_{2})&g\sigma_{\alpha}(q_{12})&-p_{2}\end{bmatrix}$
$\displaystyle=$ $\displaystyle
P\begin{bmatrix}p_{1}&v_{\alpha}(q_{1})&g\sigma_{\alpha}(q_{12})&g\sigma_{\alpha}(q^{+}_{12})\\\
v_{\alpha}(-q_{1})&-p_{1}&g\sigma_{\alpha}(-q^{+}_{12})&g\sigma_{\alpha}(-q_{12})\\\
g\sigma_{\alpha}(-q_{12})&g\sigma_{\alpha}(q^{+}_{12})&p_{2}&v_{\alpha}(q_{2})\\\
g\sigma_{\alpha}(-q^{+}_{12})&g\sigma_{\alpha}(q_{12})&v_{\alpha}(-q_{2})&-p_{2}\end{bmatrix}P^{-1}\,,$ (2.20)
where
$\displaystyle P=\begin{bmatrix}1&0&0&0\\\ 0&0&1&0\\\ 0&1&0&0\\\
0&0&0&1\end{bmatrix}.$ (2.21)
The $N=2$ case is the first case with non-zero “antisymmetric mass” (related
to the coupling $g$). If we let $g=0$, we find that the Lax matrix reduces to
two $2\times 2$ blocks, each having the form of an $N=1$ Lax matrix.
Similarly, the general $2N\times 2N$ Lax matrix in the $g\to 0$ limit reduces
to $N$ diagonal $2\times 2$ blocks. Consequently, in this limit the spectral
curve is reducible, as it becomes a product of $N$ copies of the $N=1$ curve.
The $N=2$ spectral curve is given by
$\displaystyle 0=$
$\displaystyle(k^{2}-u^{\lor})^{2}-h_{2}(k^{2}-u^{\lor})+h_{4}$ (2.22)
$\displaystyle\quad-4g^{2}\Bigl{(}\wp(\alpha)(k^{2}-u^{\lor})+g^{\lor}_{0}\wp^{\prime}(\alpha)k+2(g^{\lor}_{0})^{2}\wp(\alpha)^{2}+\wp(\alpha)\sum_{r=1}^{3}(g^{\lor}_{r})^{2}\wp(\omega_{r})\Bigr{)}\,,$
where $u^{\vee}:=u^{\vee}(\alpha)$ and $h_{2},h_{4}$ represent the values of
two commuting Hamiltonians.
The derivation of (2.22) is outlined in appendix A.2.
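As a consistency check between (2.22) and the reducibility at $g=0$ noted above, one can verify symbolically that setting $g=0$ in (2.22) factors the curve into two curves of the $N=1$ form (2.19), with the two roots of $\lambda^{2}-h_{2}\lambda+h_{4}$ playing the role of the $N=1$ value of $h_{2}$. A short sketch (ours):

```python
# Symbolic check (ours): at g = 0 the N = 2 curve (2.22) factors into two
# N = 1 curves k^2 - u - lambda = 0, with lambda the roots of
# lam^2 - h2*lam + h4 = 0 (Vieta: l1 + l2 = h2, l1*l2 = h4).
import sympy as sp

k, u, h2, h4, lam = sp.symbols('k u h2 h4 lam')   # u stands for u_dual(alpha)
curve_g0 = (k**2 - u)**2 - h2 * (k**2 - u) + h4
l1, l2 = sp.solve(lam**2 - h2 * lam + h4, lam)
product = sp.expand((k**2 - u - l1) * (k**2 - u - l2))
print(sp.simplify(sp.expand(curve_g0) - product) == 0)   # True
```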
### 2.5 Behaviour near marked points
In order to make a connection with the analysis of the Seiberg–Witten curve in
Sec. 3.2, it will be useful to look more closely at the singularities of the
Lax matrix (2.4). This will also allow us to confirm that the genus of the
spectral curves equals $N$, as expected.
Expanding $L$ at half-periods gives
$L=\sum_{j\geq-1}L_{i}^{(j)}(\alpha-\omega_{i})^{j}\,,\quad i=0,1,2,3\,,$
(2.23)
for some $L_{i}^{(j)}\in\mathrm{Mat}(2N,\mathbb{C})$ independent of $\alpha$,
with $L_{i}^{(-1)}$ being the residue matrices (2.7)–(2.8). The property (2.6)
implies that
$ML_{i}^{(j)}+(-1)^{j}L_{i}^{(j)}M=0\,,\qquad
M=\begin{bmatrix}0&\mathrm{I}_{N}\\\ \mathrm{I}_{N}&0\end{bmatrix}\,.$ (2.24)
Now consider the $2N$ sheets of the spectral curve $\det(L-k\mathrm{I})=0$
near one of the half-periods $\alpha=\omega_{1,2,3}$. From (2.10), we know that
locally we can label these sheets so the roots $k_{1},\dots,k_{2N}$ near
$\alpha=\omega_{i}$ behave as follows:
$\displaystyle(k_{1},\dots,k_{2N})$
$\displaystyle\sim\frac{1}{{\alpha}-{\omega}_{i}}\Big{(}\underbrace{-g_{i}^{\vee},\dots,-g_{i}^{\vee}}_{\text{$N$
times}},\ \underbrace{g_{i}^{\vee},\dots,g_{i}^{\vee}}_{\text{$N$
times}}\Big{)}+\text{regular terms}\,.$ (2.25)
Series expansions for each $k_{r}(\alpha)$ can be worked out recursively, as a
perturbation series, together with the eigenvectors $v_{r}(\alpha)$ such that
$L(\alpha)v_{r}(\alpha)=k_{r}(\alpha)v_{r}(\alpha)\,,\qquad
v_{r}(\alpha)=\sum_{j\geq 0}v_{r}^{(j)}(\alpha-\omega_{i})^{j}\,,$ (2.26)
for a chosen “initial” eigenbasis $v_{r}^{(0)}$ of the residue matrix
$L_{i}^{(-1)}$. Since the residue matrix $L_{i}^{(-1)}$ commutes with $M$ for
all $i=0,1,2,3$ (for $i\neq 0$ it is simply proportional to $M$), the chosen
eigenvectors are also eigenvectors of $M$, and so half of them satisfy
$Mv_{r}^{(0)}=v_{r}^{(0)}$, with $Mv_{r}^{(0)}=-v_{r}^{(0)}$ for the other
half. The additional symmetry (2.24) of the Lax matrix imposes extra
constraints, which result in the following:
1. Near $\alpha=\omega_{i}$, each eigenvalue $k_{r}(\alpha)$ is odd, i.e. it changes sign under $\alpha\mapsto 2\omega_{i}-\alpha$.
2. The terms of the series for the eigenvector $v_{r}(\alpha)$ satisfy $Mv_{r}^{(j)}=\pm(-1)^{j}v_{r}^{(j)}$, with the sign $\pm$ determined by the initial eigenvector $v_{r}^{(0)}$.
An important corollary of the first property is that the regular terms in
(2.25) are in fact of order $O(\alpha-\omega_{i})$. Then by squaring the
spectral variable $k$ and shifting it appropriately, all the poles can be
cancelled. In particular,
$\displaystyle z$
$\displaystyle\sim\frac{1}{({\alpha}-{\omega}_{i})^{2}}\Big{(}\underbrace{0,\dots,0}_{2N\
\text{times}}\Big{)}+\text{regular terms}\quad(i=1,2,3)\,,$ (2.27)
where we have defined
$\displaystyle z$
$\displaystyle:=\frac{1}{4}\left(k^{2}-u^{\lor}+\text{constant}\right),$
$\displaystyle u^{\vee}$
$\displaystyle=\sum_{i=0}^{3}(g_{i}^{\vee})^{2}\wp({\alpha}+{\omega}_{i})\,.$
(2.28)
The factor of $1/4$ and the constant in (2.28) are for later convenience.
The same analysis for $\alpha\sim 0$ gives that
$\displaystyle(k_{1},\dots,k_{2N})$
$\displaystyle\sim\frac{1}{{\alpha}}\Big{(}-g_{0}^{\vee}-2(N-1)g,\
\underbrace{2g-g_{0}^{\vee},\dots,2g-g_{0}^{\vee}}_{\text{$N{-}1$ times}},\
\underbrace{g_{0}^{\vee},\dots,g_{0}^{\vee}}_{\text{$N$
times}}\Big{)}+{O}(\alpha)\,,$ (2.29)
and so by squaring and shifting it appropriately all but one of the $2N$ poles
there can be cancelled. In particular,
$\displaystyle\widetilde{z}:=z+\frac{g}{x}\left(y+\frac{1}{2}g^{\lor}_{0}x^{2}\right)$
$\displaystyle\sim\frac{1}{{\alpha}^{2}}\Big{(}Ng(g^{\lor}_{0}+(N-1)g),\
\underbrace{0,\dots,0}_{2N{-}1\ \text{times}}\Big{)}+\text{regular terms}\,,$
(2.30)
where we have defined
$\displaystyle x:=\wp({\alpha})\sim\frac{1}{{\alpha}^{2}}\,,\qquad
y:=\frac{1}{4}k\wp^{\prime}({\alpha}).$ (2.31)
(2.30) indicates that the coefficients of the spectral curve written in the
$(x,y,\widetilde{z})$ variables (as an $N$-fold cover of the sphere
parametrized by $x$) can only have simple poles at $x=\infty$, while (2.27)
indicates that if they are written in the $(x,y,z)$ variables they will be
regular away from $x=\infty$. In fact this observation will play an important
role in finding the change of variables needed to match the spectral curve to
the Seiberg-Witten curve, discussed in section 3.2.
We can now calculate the genus of the spectral curve (2.16). We follow the
same method as in [35]. First, consider the curve $\Gamma_{N}$ (2.13) and
denote its genus by $g$. Then $2g-2=\nu$ by the Riemann–Hurwitz formula, where $\nu$ is the number of
branch points of $\Gamma_{N}$ viewed as a covering of the elliptic curve
$\Gamma$. This is the number of zeros of $\partial f/{\partial k}$ on
$\Gamma_{N}$; it also equals the number of poles of $\partial f/{\partial k}$.
The poles occur precisely at $2N$ points of $\Gamma_{N}$ above each of the
half-periods $\alpha=\omega_{i}$. Locally, we can factorize $f(k,\alpha)$ into
a product of factors $k-k_{r}(\alpha)$. For example, near
$\alpha=\omega_{1,2,3}$ we have
$f(k,\alpha)=\prod_{r=1}^{N}\left(k+\frac{g_{i}^{\vee}}{\alpha-\omega_{i}}+b_{r}({\alpha})\right)\prod_{r=N+1}^{2N}\left(k-\frac{g_{i}^{\vee}}{\alpha-\omega_{i}}+b_{r}({\alpha})\right)\,,$
(2.32)
where the $b_{r}({\alpha})$ are of order ${O}(\alpha-\omega_{i})$. By
differentiating this equation with respect to $k$, we find that $\partial
f/\partial k$ has a simple pole on each of the $2N$ sheets above $\omega_{i}$.
A similar analysis near $\alpha=0$ shows that $\partial f/\partial k$ has
there a pole of order $2N-1$ on one sheet, poles of order $3$ on $N-1$ sheets,
and simple poles on the remaining $N$ sheets. This gives $2g-2=3\times
2N+(2N-1)+3\times(N-1)+N=12N-4$, so $g=6N-1$.
The curve $\Gamma_{N}^{\prime}$ (2.16) is obtained from $\Gamma_{N}$ by taking
a quotient by the involution $(k,\alpha)\mapsto(-k,-\alpha)$. Thus,
$\Gamma_{N}$ can be viewed as a $2$-sheeted covering of $\Gamma_{N}^{\prime}$,
branched at the fixed points of the involution. These are precisely the points
above the half-periods, so there are $8N$ of them. Denoting by $g^{\prime}$
the genus of $\Gamma_{N}^{\prime}$, we get $12N-4=2g-2=2(2g^{\prime}-2)+8N$,
from which $g^{\prime}=N$, as claimed.
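The arithmetic in these two genus computations can be verified mechanically (our own check):

```python
# Arithmetic check (ours): the pole count gives 2g - 2 = 12N - 4, so
# g = 6N - 1, and Riemann-Hurwitz for the Z_2 quotient gives
# 12N - 4 = 2(2g' - 2) + 8N, so g' = N.
for N in range(1, 8):
    two_g_minus_2 = 3 * 2 * N + (2 * N - 1) + 3 * (N - 1) + N
    assert two_g_minus_2 == 12 * N - 4
    assert (two_g_minus_2 + 2) // 2 == 6 * N - 1          # g
    assert ((12 * N - 4) - 8 * N + 4) // 4 == N           # g'
print("genus formulas verified for N = 1..7")
```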
### 2.6 Modular property
The Lax matrix and the spectral curve exhibit a modular behaviour under
$\mathrm{SL}(2,\mathbb{Z})$-action. To state the result, recall that the Lax
matrix $L$ depends on the modular parameter $\tau$, the spectral parameter
$\alpha$, $2N$ variables $p_{i},q_{i}$, and the coupling constants $g$ and
$g_{0,1,2,3}$. Take ${\gamma}=\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)\in\mathrm{SL}(2,\mathbb{Z})$ and define
$L^{\prime}$ to be the Lax matrix with the variables changed to
$\tau^{\prime}$, $\alpha^{\prime}$, etc., in the following way:
$\displaystyle{\tau}^{\prime}$
$\displaystyle=\frac{a{\tau}+b}{c{\tau}+d}\,,\quad\alpha^{\prime}=(c{\tau}+d)^{-1}\alpha\,,$
(2.33) $\displaystyle p_{i}^{\prime}$ $\displaystyle=(c{\tau}+d)p_{i}\,,\quad
q_{i}^{\prime}=(c{\tau}+d)^{-1}q_{i}\,,$ (2.34) $\displaystyle g^{\prime}$
$\displaystyle=g\,,\quad g_{0}^{\prime}=g_{0}\,,\quad
g^{\prime}_{r}=g_{\pi_{{\gamma}}(r)}\quad\text{for}\ r=1,2,3\,.$ (2.35)
Here in the last formula we denote by $\pi_{\gamma}$ the permutation of
$\\{1,2,3\\}$ determined by the group homomorphism (A.10). With this notation,
we have:
$\displaystyle L^{\prime}=(c{\tau}+d)QLQ^{-1}\,,$ (2.36)
where $Q=\begin{bmatrix}R&0\\\ 0&R^{-1}\end{bmatrix}$ and
$R=\mathrm{diag}\left(\exp(-\frac{2\pi ic}{c{\tau}+d}\alpha
q_{1}),\dots,\exp(-\frac{2\pi ic}{c{\tau}+d}\alpha q_{N})\right)$.
The formula (2.36) is obtained in a straightforward way from the modular
properties of the functions $\sigma_{\alpha}(x)$ and $v_{\alpha}(x)$ given in
the Appendix. If we introduce $k^{\prime}=(c{\tau}+d)k$, then we also have
$\det(L^{\prime}-k^{\prime}\mathrm{I})=(c\tau+d)^{2N}\det(L-k\mathrm{I})\,.$
(2.37)
The physical interpretation of these properties on the QFT side is the
$\mathrm{SL}(2,\mathbb{Z})$ S-duality mixed with the $\mathrm{Spin}(8)$ triality (see
Sec. 3.1 below).
## 3 $\mathrm{USp}(2N)$ $N_{f}=4$ superconformal field theory
We consider the family of 4d $\mathcal{N}=2$ superconformal field theories
consisting of $\mathrm{USp}(2N)$ gauge theories with $N_{f}=4$ hypermultiplets
in the fundamental representation and (for $N\geq 2$) $N_{a}=1$
hypermultiplets in the traceless antisymmetric two-index tensor
representation.
### 3.1 Field theory properties
We list some long-established properties of these theories.
* •
They are a family of interacting 4d $\mathcal{N}=2$ SCFTs labelled by a
positive integer $N$, which we call the rank of the $N_{f}=4$ theory. As
SCFTs, they are invariant under the 4d $\mathcal{N}=2$ superconformal group
$\mathrm{SU}(2,2|2)$.
* •
The $N_{f}=4$ SCFTs have an exact $\mathrm{SL}(2,\mathbb{Z})$ S-duality. This
means that each theory has a one-complex-dimensional conformal manifold given
by the upper half complex plane modulo $\mathrm{SL}(2,\mathbb{Z})$ Möbius
transformations. Though the center of $\mathrm{SL}(2,\mathbb{Z})$ acts
trivially on the conformal manifold, it acts non-trivially as charge
conjugation in the field theory. Around a special point on the conformal
manifold the theory admits a weakly-coupled Lagrangian description in terms of
$\mathrm{USp}(2N)$ gauge theory with 4 fundamental and 1 antisymmetric
hypermultiplets. The weak coupling limit of the complex gauge coupling
constant ${\tau}$ parameterizing the conformal manifold is
$\mathrm{Im}({\tau})\to\infty$.
* •
The internal global “flavor” symmetry is $\mathrm{Spin}(8)$ for $N=1$ and
$\mathrm{Spin}(8)\times\mathrm{SU}(2)$ for $N\geq 2$, under which the four
fundamental hypermultiplets (the same as eight fundamental half-
hypermultiplets) transform in the $(8_{v},1)$ representation, and the
antisymmetric hypermultiplet in the $(1,2)$ representation. Correspondingly,
there is a space of $\mathcal{N}=2$-preserving mass deformations given by the
complexified weight space of $\mathrm{Spin}(8)\times\mathrm{SU}(2)$. Introduce
mass (or deformation) parameters $(m_{1},\ldots,m_{4})$ for $\mathrm{Spin}(8)$
and $M$ for $\mathrm{SU}(2)$ as linear coordinates on this parameter space
such that $m_{i}$ is the complex mass of the $i$-th fundamental
hypermultiplet, and $M$ the mass of the antisymmetric hypermultiplet.222We use
an unconventional normalization for the mass such that our masses $m$ are
related to the canonically normalized masses $\widetilde{m}$ by
$\widetilde{m}=\sqrt{2}\,m$. The principal congruence subgroup
${\Gamma}(2)\subset\mathrm{SL}(2,\mathbb{Z})$ of the S-duality group acts
trivially on the $\mathrm{Spin}(8)$ masses, while the quotient
$\mathrm{SL}(2,\mathbb{Z})/{\Gamma}(2)\simeq S_{3}$ transforms the mass
parameters by the $\mathrm{Spin}(8)$ “triality” outer automorphism [2, 47].
The antisymmetric mass is invariant under S-duality transformations.
* •
The operator content of an $N_{f}=4$ theory can be organized in terms of the
unitary representations of its global symmetry
$\mathrm{SU}(2,2|2)\times\mathrm{Spin}(8)\times\mathrm{SU}(2)$. In particular,
with respect to $\mathrm{SU}(2,2|2)$ there are various sectors of
supersymmetry-protected BPS operators, for instance, Coulomb branch operators
and Higgs branch operators. The condensates of the scalar components in the
$\mathcal{N}=2$ multiplets of BPS operators parameterize moduli spaces of
$\mathcal{N}=2$ invariant vacuum states.
* •
The moduli space of vacua consists of various branches each of which is
locally a metric product of a Coulomb factor and a Higgs factor, with complex
dimension $n_{C}$ and quaternionic dimension $n_{H}$, respectively.
Conventionally, the branch with maximal $n_{C}$ is called the Coulomb branch
and the branch with maximal $n_{H}$ the Higgs branch. The rank $N$ $N_{f}=4$
theory has a Coulomb branch with $(n_{C},n_{H})=(N,N-1)$ and a Higgs branch
with $(n_{C},n_{H})=(0,6N-1)$. The $N-1$ quaternionic dimensional Higgs factor
of the Coulomb branch comes from the components of the antisymmetric
hypermultiplet carrying zero weight with respect to the $\mathrm{USp}(2N)$
gauge algebra.
* •
The vector multiplet of the Lagrangian theory contains a scalar field $\Phi$
in the adjoint representation. The Coulomb branch coordinate ring is freely
generated by $u_{i}:=\mathrm{tr}(\wedge^{2i}\Phi)$ with $i=1,2,\ldots,N$,
corresponding to the primitive Casimir elements of $\mathrm{USp}(2N)$. The
Coulomb branch coordinate ring is graded by the scaling dimension, so the
weight of $u_{i}$ is $2i$. Since the Coulomb branch chiral operators are BPS
operators, this description of the Coulomb branch chiral ring is true at all
points of the conformal manifold, not just at the weak coupling point.
We are interested in the geometry of the Coulomb branch. The low energy
effective $\mathrm{U}(1)^{N}$ gauge theory on the Coulomb branch is encoded in
the special Kähler geometry [48] of the Coulomb branch. The $N-1$ massless
neutral hypermultiplets on the Coulomb branch decouple in the low energy
limit, so will be ignored.
On general grounds [4] a Coulomb branch special Kähler geometry is equivalent
to a classical complex completely integrable Hamiltonian system. In
particular, the Coulomb branch is the $N$-complex-dimensional manifold of the
action variables of the integrable system. The matrix of low energy
$\mathrm{U}(1)^{N}$ complex gauge couplings gives the period matrix of a
complex torus of dimension $N$, so the Coulomb branch parameterizes a family
of complex tori, giving the angle variables of the integrable system. The
complex tori are also endowed with principle polarization coming from the
Dirac pairing on the $\mathrm{U}(1)^{N}$ electric-magnetic charge lattice, and
hence are abelian varieties. The total space of this family of abelian
varieties is a complex symplectic variety, the complex phase space of the
integrable system, with holomorphic symplectic form ${\omega}$.
The next subsection describes the total space geometry by way of a holomorphic
family $\Sigma$ of genus-$N$ Riemann surfaces over the Coulomb branch, along
with a meromorphic one-form ${\lambda}$ on the fibers whose poles have
constant residues. $(\Sigma,{\lambda})$ are called the Seiberg-Witten curve
and one-form in the physics literature. The abelian variety fibers of the
integrable system are the Jacobian tori of the Riemann surfaces, and the
symplectic form is ${\omega}=d{\lambda}$. Thus we will match the field theory
Coulomb branch geometry to the Inozemtsev system by matching the Seiberg-
Witten curve and one-form to the spectral curve and canonical one-form of the
integrable system.
### 3.2 Seiberg-Witten curve
The $\mathrm{USp}(2N)$ $N_{f}=4$ SCFTs can be constructed as the low energy
effective theory of type IIA superstrings in the presence of D4, NS5, D6, and
$\mathrm{O6}^{-}$ branes, generalizing the construction of [14]. The M-theory lift of the D6 and
$\mathrm{O6}^{-}$ IIA brane configuration [49] is a specific choice of complex structure of
a $(T^{2}\times\mathbb{C})/\mathbb{Z}_{2}$ hyperkähler orbifold background.
The M-theory lift of the D4 and NS5 branes is a single M5 brane intersecting
the background except over points of $T^{2}$ corresponding to NS5 branes. This
intersection is the Seiberg-Witten curve, and the restriction of the
holomorphic hyperkähler form to the curve is the Seiberg-Witten one-form. This
is the spectral curve of a Hitchin system on the orbifolded torus with
punctures [49].
The deformations of this orbifold background and M5 brane curve corresponding
to turning on the $\mathrm{Spin}(8)$ fundamental masses and the
$\mathrm{SU}(2)$ antisymmetric mass were worked out in [29]. The connection to
a Hitchin system is no longer apparent in this description. We will describe
this solution for the $\mathrm{USp}(2N)$ $N_{f}=4$ Coulomb branch in more
detail shortly in preparation for showing its equivalence to the spectral
curve of the Inozemtsev system. But first, we make a few comments on two other
string constructions of the $\mathrm{USp}(2N)$ $N_{f}=4$ theories.
These theories naturally arise as the world volume theories on a stack of $N$
parallel D3 branes probing an F-theory singularity of $(I^{\ast}_{0},D_{4})$
type — i.e., an $\mathrm{O7}^{-}$ plane coinciding with four $\mathrm{D7}$
branes [50, 51, 52, 53]. But it is not known how to turn on the antisymmetric
mass $M$ deformation in the F-theory construction.
These theories also admit a class-S construction via a 6d $(2,0)$ $A_{2N-1}$
SCFT compactified on a sphere $C$ with four punctures all of type $[N,N]$.
This construction only makes manifest an $\mathrm{SU}(2)^{4}$ subgroup of the
$\mathrm{Spin}(8)$ flavor group, and does not make the antisymmetric
$\mathrm{SU}(2)$ flavor factor or its associated mass deformation apparent
[25]. $C$ is identified with $T^{2}/\mathbb{Z}_{2}$ with the four punctures
corresponding to the four $\mathbb{Z}_{2}$ orbifold fixed points. The
antisymmetric hypermultiplet appears upon taking an appropriate zero-area
limit of $C$ [54], and [27] showed that by modifying the type of one puncture
to be $[N,N-1,1]$, the theory manifests the antisymmetric $\mathrm{SU}(2)$
flavor symmetry. The class-S construction realizes the integrable system
underlying the Coulomb branch geometry as a Hitchin system [55].
The matching to the M5 brane curve, presented below, gives strong evidence
that the Hitchin system associated with the above class-S construction can be
identified with the Inozemtsev system.
In the rest of this section we review the M5 brane construction [29] of the SW
curve for the $\mathrm{USp}(2N)$ $N_{f}=4$ theory. The main ingredients in
this construction are:
* •
The $\mathrm{USp}(2N)$ theory with the $\mathrm{Spin}(8)$ mass deformation is
realized by embedding one complex dimension of the M5 brane world volume in a
complex surface, $Q_{0}$. $Q_{0}$ carries a hyperkähler structure — from which
the SW 1-form is derived — and is a deformation of a
$(T^{2}\times\mathbb{C})/\mathbb{Z}_{2}$ orbifold. This surface can be thought
of (we will be more precise below) as fibered over $T^{2}/\mathbb{Z}_{2}$.
* •
The intersection with the M5 brane then gives a curve which projects to an
$N$-fold cover of $T^{2}/\mathbb{Z}_{2}$ minus one of the orbifold points. At
the missing orbifold point the M5 brane is not transverse to $Q_{0}$; we will
call this point the “pole” of the M5 brane.
* •
The $\mathrm{SU}(2)$ mass deformation, $M$, is realized by further deforming
the background surface to $Q_{M}$. Following the discussion of the analogous
deformation of the elliptic model in [14], we describe $Q_{M}$ by two charts
modeled on $Q_{0}$, one including the fibers above a neighborhood of a chosen point $p\in
T^{2}/\mathbb{Z}_{2}$, and the other encompassing the rest of the surface. The
two coordinate patches are isomorphic to the corresponding patches of $Q_{0}$,
and the $M$ deformation is realized by requiring that the transition map is a
shift of the fiber coordinate which has a pole with residue proportional to
$M$ at $p$. We call this transition map the “$M$ shift”. Changing $p$ and the
form of the transition map but keeping $M$ fixed does not change the complex
structure of $Q_{M}$.
* •
The M5 brane curve for the mass-deformed $\mathrm{USp}(2N)$ $N_{f}=4$ SCFT is
then locally a degree-$N$ polynomial in the fiber coordinate on $Q_{M}$ whose
coefficients have at most a simple pole over a chosen orbifold point of
$T^{2}/\mathbb{Z}_{2}$.
The form of the SW curve for the $\mathrm{USp}(2N)$ $N_{f}=4$ (and many other
closely related) SCFTs found in [29] followed this procedure with the $M$
shift at a point $p$ not equal to one of the orbifold points of
$T^{2}/\mathbb{Z}_{2}$. Both the form of the spectral curve of the Inozemtsev
system and the above-mentioned class-S construction (where one of the
four punctures is modified to capture the $M$ deformation) suggest that they
will most easily match the form of the SW curve if the point $p$ of the $M$
shift is taken to coincide with one of the orbifold points. This
involves a slight modification of the construction of [29] which we now
explain.
#### 3.2.1 Background surface
We start with the orbifold $(T^{2}\times\mathbb{C})/\mathbb{Z}_{2}$. Think of
$T^{2}\times\mathbb{C}$ as an affine bundle over $T^{2}$ and let
$v\in\mathbb{C}$ be the fiber coordinate. Write the complex torus $T^{2}$ as a
curve $\eta^{2}=\prod_{i=1}^{4}(x-e_{i}w)$ in weighted projective space,
$[w:x:\eta]\in\mathbb{P}^{2}_{(1,1,2)}$. Note that $\mathrm{SL}(2,\mathbb{C})$
transformations of $(w,x)$ do not change the complex structure of $T^{2}$, but
change the $e_{i}$ by Möbius transformations. The $\mathbb{Z}_{2}$
identification on $\mathbb{C}\times T^{2}$ is
$(v,w,x,\eta)\simeq(-v,w,x,-\eta)$. Using the invariant coordinates on the
orbifold, $y=v\eta$, $z=v^{2}$ ($w$ and $x$ unchanged), the orbifolded
background space is given by the surface $y^{2}=z\prod_{i=1}^{4}(x-e_{i}w)$.
The $(T^{2}\times\mathbb{C})/\mathbb{Z}_{2}$ orbifold has a four-parameter
deformation into a complex surface $Q_{0}$ with the same asymptotic structure.
The mass-deformed orbifold surface $Q_{0}$ and SW 1-form are [29]
$\lambda=\frac{y(w\,\mathrm{d}x-x\,\mathrm{d}w)}{P}\,,\qquad P:=\prod_{i}(x-e_{i}w)\,,\qquad y^{2}=zP+Q\,,\qquad Q:=\sum_{j}\mu_{j}^{2}\,w\prod_{k\neq j}\bigl[(x-e_{k}w)(e_{j}-e_{k})\bigr]\,,$ (3.1)
where $i,j,k\in\{0,1,2,3\}$. Note that we still have
$[w:x:y]\in\mathbb{P}^{2}_{(1,1,2)}$. The deformation parameters, $\mu_{i}$,
turn out to be related to the fundamental masses by [29]
${\mu}_{0}=\tfrac{1}{2}(m_{1}+m_{2}),\quad{\mu}_{1}=\tfrac{1}{2}(m_{1}-m_{2}),\quad{\mu}_{2}=\tfrac{1}{2}(m_{3}+m_{4}),\quad{\mu}_{3}=\tfrac{1}{2}(m_{3}-m_{4}).$
(3.2)
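Inverting (3.2) directly returns the fundamental masses, which we record here for convenience:
$m_{1}=\mu_{0}+\mu_{1}\,,\quad m_{2}=\mu_{0}-\mu_{1}\,,\quad m_{3}=\mu_{2}+\mu_{3}\,,\quad m_{4}=\mu_{2}-\mu_{3}\,.$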
The topology of $Q_{0}$ can be pictured by noting that the $z=$ constant
“sections” are tori, and the $x={\xi}w$ (${\xi}=$ constant) “fibers” are
generically 2-sheeted covers of the $z$-plane branched over the point
$z=-Q/P$. But when $x=e_{j}w$ the fiber becomes two disconnected copies of the
$z$-plane, $S^{\pm}_{j}:=\bigl\{\,x=e_{j}w,\ y=\pm\mu_{j}w^{2}\prod_{k\neq j}(e_{j}-e_{k}),\ \forall z\,\bigr\}$. The existence of these “double fibers” over the Weierstrass
points in the deformed orbifold will play a central role in what follows. From
the point of view of the IIA string theory D4/NS5/O6- brane construction, the
generic $x={\xi}w$ fibers correspond to possible loci of (the M theory lift
of) an NS5 brane, while the $S^{\pm}_{j}$ curves correspond to the possible loci
of “half” NS5 branes “stuck” at an O6- orientifold plane.
To get closer to the form of the integrable system spectral curve, we will
specialize (3.1) to Weierstrass form where the Weierstrass points are placed
at $e_{0}=\infty$ and $\sum_{j=1}^{3}e_{j}=0$. Then the $Q_{0}$ surface and
1-form become
$\lambda=\frac{y(w\,\mathrm{d}x-x\,\mathrm{d}w)}{w\widetilde{P}}\,,\qquad \widetilde{P}:=\prod_{i}(x-e_{i}w)=x^{3}+s_{2}w^{2}x-s_{3}w^{3}\,,\qquad y^{2}=(zw+\mu_{0}^{2}x)\widetilde{P}+w^{2}\widetilde{Q}\,,\qquad \widetilde{Q}:=\sum_{j}\mu_{j}^{2}\epsilon_{j}\prod_{k\neq j}(x-e_{k}w)\,,$ (3.3)
where now the indices take only the three values $i,j,k\in\{1,2,3\}$, and we
have defined the useful combinations
$s_{2}:=\sum_{j<k}e_{j}e_{k}\,,\qquad s_{3}:=\prod_{j}e_{j}\,,\qquad \epsilon_{j}:=\prod_{k\neq j}(e_{j}-e_{k})\,.$ (3.4)
Note that the equations for the disjoint fibers over the Weierstrass points
become
$S^{\pm}_{\infty}:=\{\,w=0,\ y=\pm\mu_{0}x^{2},\ \forall z\,\}\,,\qquad S^{\pm}_{j}:=\{\,x=e_{j}w,\ y=\pm\mu_{j}\epsilon_{j}w^{2},\ \forall z\,\}\,.$ (3.5)
Now we discuss the $M$ deformation with the shift put at a branch point. To
motivate the construction, we first review, following [14], the corresponding
deformation of the unorbifolded $T^{2}\times\mathbb{C}$ background,
$\eta^{2}=P$. Put the $M$ shift at the Weierstrass point $w=0$ (which is
$x=\infty$ in the $w=1$ patch) by defining the transition map,
$\displaystyle\widetilde{v}=v+M\frac{\eta}{wx},$ (3.6)
where $\widetilde{v}$ is the fiber coordinate of a chart over a neighborhood
of the $w=0$ point of the $T^{2}$. This transition map has a pole with residue
$M$ over $w=0$, so it describes a one-parameter complex deformation of
$T^{2}\times\mathbb{C}$ with parameter $M$. This is because the deformations
of the affine bundle $T^{2}\times\mathbb{C}$ are classified by
$H^{1}(T^{2},\mathcal{O}_{T^{2}})$ which is 1-dimensional, so there is just a
single deformation parameter, and furthermore this cohomology group vanishes
if a point is deleted from $T^{2}$.
In our case $Q_{0}$ is not an affine bundle, but is a deformation of a
$\mathbb{Z}_{2}$ orbifold of this affine bundle. This leads to the
expectation (for which we do not have a rigorous justification) that there is
still only a single complex deformation preserving the asymptotic structure.
We can find a description of this deformation simply by orbifolding the $M$
shift given in (3.6), or more generally, by defining the transition map to be
any shift of the “fiber” ($z$) coordinate with a pole over the Weierstrass
point $w=0$ with residue proportional to $M$.
The $\mathbb{Z}_{2}$ orbifold action identifies
$\widetilde{v}\leftrightarrow-\widetilde{v}$, so we define invariant
coordinates ${\widetilde{z}}={\widetilde{v}}^{2}$,
${\widetilde{y}}=\widetilde{v}\eta$. Then (3.6) gives the transition map
$\widetilde{y}=y+M\frac{\widetilde{P}}{x}\,,\qquad \widetilde{z}=z+2M\frac{y}{wx}+M^{2}\frac{\widetilde{P}}{wx^{2}}\,,$ (3.7)
in a neighborhood of the $w=0$ fiber of $(\mathbb{C}\times
T^{2})/\mathbb{Z}_{2}$. Thus $y$ is shifted by a term regular at $w=0$ (in the
$x=1$ patch), while $z$ is shifted by a double pole at $w=0$ plus single pole
and regular terms. (Recall that in local coordinates around $w=0$ — i.e.,
$\sqrt{w}$ in the $x=1$ patch — $y$ has a simple zero and $w^{-1}$ a double
pole.)
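For completeness, (3.7) follows directly from (3.6): using $\eta^{2}=w\widetilde{P}$, the Weierstrass form of the torus implicit in (3.3), one finds
$\widetilde{y}=\widetilde{v}\eta=v\eta+M\frac{\eta^{2}}{wx}=y+M\frac{\widetilde{P}}{x}\,,\qquad \widetilde{z}=\widetilde{v}^{2}=v^{2}+2M\frac{v\eta}{wx}+M^{2}\frac{\eta^{2}}{w^{2}x^{2}}=z+2M\frac{y}{wx}+M^{2}\frac{\widetilde{P}}{wx^{2}}\,.$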
So far this has all been in the undeformed orbifold. To go to the $Q_{0}$
surface where the orbifold is deformed by turning on the ${\mu}_{i}$ masses,
it was argued in [29] that (3.7) does not change, since one simply shifts
$z\to z+\frac{Q}{P}$ and the same for ${\widetilde{z}}$. In Weierstrass form
this applies without change; just rewrite
$\frac{Q}{P}={\mu}_{0}^{2}\frac{x}{w}+\frac{\widetilde{Q}}{w\widetilde{P}}$.
But (3.7) has a qualitatively different pole structure at $w=0$ in $Q_{0}$
than in the undeformed orbifold. In the undeformed orbifold $y\sim\sqrt{w}$
was the local coordinate vanishing at $w=0$, but in the deformed orbifold
$w=0$ is no longer a branch point for $y$; instead $y$ has two solutions,
giving two disjoint curves over $w=0$, denoted by $S^{\pm}_{\infty}$ in (3.5).
In the neighborhood of $S^{\pm}_{\infty}$ the transition map (3.7) has a pair
of distinct simple poles along $S^{\pm}_{\infty}$ rather than a single double
pole.
Although the form of the $M$ shift given in (3.7) is perfectly valid, the
resulting M5 brane curves do not match those of the Inozemtsev
system in an algebraically simple way. Confident that there is only a single
complex deformation $Q_{0}\to Q_{M}$, we can modify (3.7) to any other
convenient transition map which has a simple pole in ${\widetilde{z}}$ at
$w=0$.
The property (2.30) of the spectral curve indicates that ${\widetilde{z}}$
should be chosen to have only a single pole at $w=0$ ($x=\infty$). A simple
transition map which does this is
$\widetilde{y}=y\,,\qquad \widetilde{z}=z+2M\frac{y+\mu_{0}x^{2}}{wx}\,,$ (3.8)
since (3.8) behaves near $w=0$ as
$\widetilde{z}=\begin{cases}\bigl(1+\tfrac{M}{\mu_{0}}\bigr)z+2\mu_{0}M\frac{x}{w}&\text{at $S^{+}_{\infty}$,}\\ \bigl(1-\tfrac{M}{\mu_{0}}\bigr)z&\text{at $S^{-}_{\infty}$,}\end{cases}$ (3.9)
so has a simple pole only along the $S^{+}_{\infty}$ fiber over $w=0$, and is
regular along the $S^{-}_{\infty}$ fiber.
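As a quick check of the regularity along $S^{-}_{\infty}$: expanding the surface equation (3.3) near $w=0$ gives $y^{2}=\mu_{0}^{2}x^{4}+zwx^{3}+O(w^{2})$, so on the lower branch $y=-\mu_{0}x^{2}-\tfrac{zwx}{2\mu_{0}}+O(w^{2})$, and hence $\widetilde{z}=z+2M\,\frac{y+\mu_{0}x^{2}}{wx}=\bigl(1-\tfrac{M}{\mu_{0}}\bigr)z+O(w)$, which is indeed finite.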
We will see below that this transition map gives an M5 brane curve which is
easily matched to the Inozemtsev system spectral curve. Indeed, comparing
(2.28), (2.30) and (2.31) to (3.8) already indicates how most of the variables
and parameters of the integrable system will have to be matched to those of
the SW curve.
#### 3.2.2 M5 brane curve
We now have a choice of placing a stuck NS5 brane at $w=0$ on either the
$S^{+}_{\infty}$ or the $S^{-}_{\infty}$ fiber. This choice gives two
different forms of the curve upon turning on the $M$ deformation since it
gives different regularity conditions in the shifted ${\widetilde{z}}$
coordinates depending on whether the stuck brane coincides with the shift pole
or not. However, once again the property (2.30) of the spectral curve
indicating that there should be only a single pole dictates that the stuck NS5
brane should be placed at the $S^{+}_{\infty}$ fiber to coincide with the
position of the $M$ shift pole.
Before turning on the $M$ deformation, the M5 brane curve of [29] in the
$Q_{0}$ background specialized to the case of the $\mathrm{USp}(2N)$ $N_{f}=4$
theory has the form $0=z^{N}+\mathcal{A}(w,x,y,z)$ where $\mathcal{A}$ is a
polynomial in $z$ of order $N-1$, homogeneous of weight 0 in $(w,x,y)$, and
can have a simple pole along either the $S^{+}_{\infty}$ or the
$S^{-}_{\infty}$ fiber over $w=0$. This comes from the IIA brane construction
where $N$ is the number of D4 branes (after orbifolding) corresponding to the
rank of the gauge group and the pole at $w=0$ is a single stuck NS5 brane. A
linear basis of functions of $(w,x,y)$ homogeneous of weight 0 with at most a
simple pole at $w=0$ is $\{1,x/w\}$. Thus $\mathcal{A}$ can be written more
explicitly as
$0=z^{N}+A_{0}(z)+\frac{x}{w}A_{1}(z)\,,$ (3.10)
where $A_{0,1}$ are arbitrary polynomials of order $N-1$ in $z$. Since the
curve is allowed to have a pole only along either $S^{+}_{\infty}$ or
$S^{-}_{\infty}$, but not along both, and since $\frac{x}{w}$ has a pole along
both, we must, in fact, have that $A_{1}(z)\equiv 0$. Thus, when $M=0$ the
$\mathrm{USp}(2N)$ $N_{f}=4$ curve is generically $N$ disjoint sections of
$Q_{0}$ corresponding to the $N$ roots of the polynomial $z^{N}+A_{0}(z)$.
This reflects the well-known fact — reviewed at the beginning of the next
section — that when $M=0$ the Coulomb branch of the theory is the $N$-fold
symmetric product of the rank-1 Coulomb branch.
We now turn on the antisymmetric mass deformation parameter $M$ by using the
transition map (3.8). Concretely, the curve for the shifted model is like the
curve for the non-shifted model (3.10) except that we should now allow
singularities only along $S^{+}_{\infty}$ in a coordinate patch covering $w=0$ with
coordinates $(w,x,y,{\widetilde{z}})$ related to $(w,x,y,z)$ by (3.8).
Since we are only adding poles at $w=0$, and the only functions of weight zero
in $(w,x,y)$ with poles only there are $(x/w)^{\alpha}$ and
$(y/w^{2})(x/w)^{\alpha}$ for non-negative ${\alpha}$, the general form of the
curve in the $z$ patch will be
$0=F:=z^{N}+\sum_{a=0}^{\infty}\frac{w^{2}A_{a}+yE_{a}}{w^{2}}\Bigl(\frac{x}{w}\Bigr)^{a}\,,$ (3.11)
where the $A_{a}$ and $E_{a}$ are arbitrary polynomials of order $N-1$ in $z$.
Though (3.11) is a correct general form for the curve, the infinite sum of
pole terms is intimidating. It is not too hard to bound the number of pole
terms that can contribute by using the condition that there is only at most a
first-order pole at $w=0$ in the shifted ${\widetilde{z}}$ variable. Under the
transition map (3.8), ${\widetilde{z}}=z+yP_{1}+P_{1}$, where $P_{a}$ refers
to a generic rational function of $w$ with poles of up to order $a$ at $w=0$
(work in the $x=1$ patch). Using the fact that $y^{2}\sim zP_{0}+P_{0}$, one
can recursively eliminate all higher powers of $y$ in
${\widetilde{z}}^{\ell}\sim z^{\ell}+\cdots$ to find that
$\widetilde{z}^{\,\ell}\ \lesssim\ z^{\ell}+\sum_{a=1}^{\ell-1}z^{\ell-a}\bigl(P_{2a}+yP_{2a-1}\bigr)\,.$ (3.12)
The $\lesssim$ sign means that we have pole orders bounded by the terms on the
right. In the ${\widetilde{z}}$ coordinate the curve is to have at most a
simple pole at $w=0$, so will have the form
${\widetilde{z}}^{N}+\sum_{\ell=0}^{N-1}{\widetilde{z}}^{\ell}P_{1}$.
Substituting (3.12) into this then shows that in the $z$ coordinate the
highest-order poles are of the form
$F\ \lesssim\ z^{N}+\sum_{\ell=0}^{N-1}\sum_{a=0}^{\ell}z^{\ell-a}\bigl(P_{2a+1}+yP^{\prime}_{2a}\bigr)\ \sim\ z^{N}+\sum_{\ell=1}^{N}z^{N-\ell}\bigl(P_{2\ell-1}+yP^{\prime}_{2\ell-2}\bigr)\,,$ (3.13)
where by $P^{\prime}_{a}$ we mean the usual $a$th-order pole for $a\neq 0$,
but $P^{\prime}_{0}\equiv 0$. Comparing to (3.11) then implies that the curve
is
$0=z^{N}+\sum_{\ell=1}^{N}z^{N-\ell}\Bigl(\sum_{a=0}^{2\ell-1}A_{a\ell}\frac{x^{a}}{w^{a}}+\sum_{a=0}^{2\ell-2}E_{a\ell}\frac{yx^{a}}{w^{a+2}}\Bigr)\,.$ (3.14)
Note that (3.12) and thus (3.14) do not give the optimal bound on the order
of the poles appearing in the curve, but instead just give a reasonable upper
bound. This is not a big deal since any “extra” terms will be set to zero upon
demanding that only a simple pole appears in the ${\widetilde{z}}$ patch.
The coefficients in (3.14) are determined by demanding the correct pole
behavior after shifting to the ${\widetilde{z}}$ variable. Concretely, make
the inverse change of coordinates (3.8) in the curve by substituting
$z\to{\widetilde{z}}-2M(y+\mu_{0}x^{2})/(wx)$ in (3.14). The M5 brane curve
(3.14) in the $x=1$ patch then becomes, in terms of the (3.8) shifted variables,
$0=\Bigl(\widetilde{z}-2M\frac{y+\mu_{0}}{w}\Bigr)^{N}+\sum_{\ell=1}^{N}\Bigl(\widetilde{z}-2M\frac{y+\mu_{0}}{w}\Bigr)^{N-\ell}\Bigl(\sum_{a=0}^{2\ell-1}\frac{A_{a\ell}}{w^{a}}+\sum_{a=0}^{2\ell-2}\frac{yE_{a\ell}}{w^{a+2}}\Bigr)\,.$ (3.15)
Expand this around $w=0$ keeping only pole terms
${\widetilde{z}}^{\ell}w^{-a}$ and ${\widetilde{z}}^{\ell}yw^{-a}$ for $a>0$.
We do this by using iteratively that
$y^{2}=({\widetilde{z}}w-2M(y+\mu_{0})+{\mu}_{0}^{2})\widetilde{P}+w^{2}\widetilde{Q}$,
with $\widetilde{P}=1+s_{2}w^{2}-s_{3}w^{3}$, and
$\widetilde{Q}=\sum_{j}{\mu}_{j}^{2}{\epsilon}_{j}\prod_{k\neq j}(1-e_{k}w)$
to reduce all terms to either ${\widetilde{z}}w^{-a}$ or
${\widetilde{z}}yw^{-a}$.
Motivated by the form of the spectral curve of the integrable system, as
discussed above, we choose to put the stuck NS5 brane at $S^{+}_{\infty}$.
This means that the $A_{a\ell}$ and $E_{a\ell}$ coefficients are determined by
requiring that all second- and higher-order poles along $S^{\pm}_{\infty}$ and
the simple poles along $S^{-}_{\infty}$ cancel in the ${\widetilde{z}}$
variables. Only a simple pole along $S^{+}_{\infty}$ is allowed, corresponding
to the stuck brane.
#### 3.2.3 The rank-1 SW curve
Specializing to rank $N=1$, there is no $M$ deformation, and the M5 brane
curve (3.10) becomes simply
$0=z+A_{01}\,.$ (3.16)
We can use this to eliminate $z$ in the background surface equation (3.3) to give an elliptic curve
in Weierstrass form for the SW curve. We recall here for later convenience the
expressions for the $Q_{0}$ surface and 1-form written in the $w=1$ patch
coordinates,
$y^{2}=(z+\mu_{0}^{2}x)\widetilde{P}+\widetilde{Q}\,,\qquad \lambda=\frac{y\,\mathrm{d}x}{\widetilde{P}}\,,$ (3.17)
where
$\widetilde{P}:=\prod_{i=1}^{3}(x-e_{i})\,,\qquad \widetilde{Q}:=\sum_{j=1}^{3}\mu_{j}^{2}\epsilon_{j}\prod_{k\neq j}(x-e_{k})\,.$ (3.18)
#### 3.2.4 The rank-2 SW curve
At rank $N=2$ the coefficients in the general M5 brane curve (3.14) are
determined by the procedure described below equation (3.15). For $N=2$ the
highest power of $y$ appearing in (3.15) is 2, and only a single iteration of
using the $Q_{0}$ surface equation to reduce the power of $y$ is needed. As a
result the constraints on the coefficients are not overly complicated, though
it is still useful to use a computer algebra system to solve the constraints.
The result is that the M5 brane curve is (written in the $w=1$ patch
coordinates)
$0=z^{2}+A_{01}z+A_{02}-4M^{2}zx-8M^{2}\mu_{0}(y+\mu_{0}x^{2})\,.$ (3.19)
The intersection of (3.19) with the $Q_{0}$ surface (3.17) and the restriction
of the one-form to this intersection then give a genus-2 SW curve and
associated meromorphic 1-form.
## 4 Matching spectral curve to M5 brane curve
The Coulomb branch of the $\mathrm{USp}(2N)$ $N_{f}=4$ theory is isomorphic as
a complex space (though not as a metric space) to $\mathbb{C}^{N}$ with
coordinates given by the gauge invariant vacuum expectation values
$u_{i}:=\mathrm{tr}(\wedge^{2i}\Phi)$, $i=1,2,\ldots,N$ which have scaling
dimensions $2,4,\ldots,2N$ at the conformal point. The Coulomb branch of the
massless theory has the same complex structure as the classical moduli space.
At a generic point on the Coulomb branch of the massless theory, the adjoint
vev can be diagonalized,
$\Phi=\mathrm{diag}(\pm\phi_{1},\pm\phi_{2},\cdots,\pm\phi_{N})$, in which
case $u_{i}=e_{i}(\phi_{1}^{2},\phi_{2}^{2},\cdots,\phi_{N}^{2})$,
$i=1,2,\ldots,N$, where $e_{i}$ here denotes the $i$-th elementary symmetric polynomial (not to be confused with the Weierstrass values $e_{i}=\wp(\omega_{i})$ used elsewhere).
As long as the antisymmetric mass vanishes, the matrix of $\mathrm{U}(1)^{N}$
complex gauge couplings is diagonal,
${\tau}_{ij}={\delta}_{ij}{\tau}(\phi_{i}^{2})$.
In the case when all the masses vanish, ${\tau}(\phi_{i}^{2})={\tau}$, i.e., it
has the same constant value. We thus have the same abelian variety with period
matrix $\tau_{ij}=\delta_{ij}\tau$ at all points on the Coulomb branch except
the origin. The singular fiber above the origin is given by the orbifold
$T^{2N}/G(2,1,N)\simeq\mathbb{C}^{N}/[(\mathbb{Z}+\tau\mathbb{Z})^{N}\rtimes
G(2,1,N)]$. Then the total space of the Coulomb branch is identical to the phase
space of the Inozemtsev system with zero couplings.
Thus for vanishing masses the field theory Coulomb branch geometry is
correctly described by the Inozemtsev system. In the remainder of this section
we present parameter and variable identifications for the rank $N=1$ and $N=2$
cases, showing that the M5 brane SW curve and 1-form and the spectral curve
and 1-form of the Inozemtsev system coincide for non-vanishing masses
(deformation parameters). We stop at $N=2$ because the matching of parameters
becomes increasingly complicated for larger values of $N$.
### 4.1 The $N=1$ case
Recall that the $N=1$ spectral curve is given by (2.19), and the one-form by
${\lambda}=kd\alpha$. Introduce coordinates $(x,y)$ related to $(k,\alpha)$ by
$x=\wp(\alpha)\,,\qquad y=\tfrac{1}{4}\wp^{\prime}(\alpha)\,k\,,$ (4.1)
where the prime means derivative with respect to $\alpha$. These definitions
were motivated in (2.31) by the pole structure of the spectral curve. We then
find, using the Weierstrass $\wp$-function identities
$\bigl(\wp^{\prime}(\alpha)\bigr)^{2}=4\prod_{i=1}^{3}\bigl(\wp(\alpha)-e_{i}\bigr)\,,\qquad \wp(\alpha+\omega_{i})=e_{i}+\frac{\prod_{j\neq i}^{3}(e_{i}-e_{j})}{\wp(\alpha)-e_{i}}\,,$ (4.2)
where
$e_{i}:=\wp(\omega_{i})\,,\quad i=1,2,3\,,$ (4.3)
that the spectral curve and one-form become
$y^{2}=\tfrac{1}{4}(h_{2}+\gamma)\prod^{3}_{i=1}(x-e_{i})+\tfrac{1}{4}(g_{0}^{\vee})^{2}x\prod^{3}_{i=1}(x-e_{i})+\tfrac{1}{4}\sum^{3}_{i=1}(g_{i}^{\vee})^{2}\prod^{3}_{j\neq i}(x-e_{j})(e_{i}-e_{j})\,,\qquad k\,\mathrm{d}\alpha=\frac{y\,\mathrm{d}x}{\prod^{3}_{i=1}(x-e_{i})}\,,$ (4.4)
where ${\gamma}:=\sum^{3}_{i=1}(g_{i}^{\vee})^{2}e_{i}$. These are easily seen
to coincide with the SW curve and 1-form given in (3.16) and (3.17) with the
parameter identifications
$\mu_{i}^{2}=\tfrac{1}{4}(g_{i}^{\vee})^{2}\,,\qquad A_{01}=-\tfrac{1}{4}(h_{2}+\gamma)\,.$ (4.5)
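This identification is elementary enough to verify symbolically; the following sympy sketch (our own check, with `g0`, `g1`, `g2`, `g3` standing for the dual couplings $g_{i}^{\vee}$) confirms that (4.4) and the surface (3.17)–(3.18) agree term by term under (4.5):

```python
import sympy as sp

x, h2, gamma = sp.symbols('x h2 gamma')
g0, g1, g2, g3 = sp.symbols('g0 g1 g2 g3')      # stand-ins for g_i^vee
e1, e2, e3 = sp.symbols('e1 e2 e3')
es, gs = [e1, e2, e3], [g1, g2, g3]

P = sp.Mul(*[x - e for e in es])                # \widetilde{P} of (3.18)
# Right-hand side of the spectral curve (4.4):
curve = (sp.Rational(1, 4)*(h2 + gamma)*P + sp.Rational(1, 4)*g0**2*x*P
         + sp.Rational(1, 4)*sum(gs[i]**2*sp.Mul(*[(x - es[j])*(es[i] - es[j])
                                                   for j in range(3) if j != i])
                                 for i in range(3)))
# Surface (3.17) with z = -A_{01} = (h2+gamma)/4 and mu_i^2 = g_i^2/4 from (4.5):
z = (h2 + gamma)/4
Q = sum(sp.Rational(1, 4)*gs[i]**2*sp.Mul(*[(es[i] - es[j])*(x - es[j])
                                            for j in range(3) if j != i])
        for i in range(3))
surface = (z + sp.Rational(1, 4)*g0**2*x)*P + Q
print(sp.expand(curve - surface))               # prints 0
```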
### 4.2 The $N=2$ case
Recall that the $BC_{2}$ spectral curve is given by (2.22). With the same
change of variables (4.1) as in the $BC_{1}$ case, which matched the 1-forms,
the $BC_{2}$ curve becomes
$(k^{2}-u^{\vee})^{2}-h_{2}(k^{2}-u^{\vee})+h_{4}-4g^{2}\Bigl(x(k^{2}-u^{\vee})+4g^{\vee}_{0}y+2(g^{\vee}_{0})^{2}x^{2}+x\gamma\Bigr)=0\,.$ (4.6)
Recall that
$u^{\vee}:=\sum_{r=0}^{3}(g_{r}^{\vee})^{2}\wp(\alpha+\omega_{r})$ and
$\gamma:=\sum_{r=1}^{3}(g^{\vee}_{r})^{2}e_{r}$. Then with the parameter
identifications
$\mu_{0}=\tfrac{1}{2}g_{0}^{\vee}\,,\qquad \mu_{i}^{2}=\tfrac{1}{4}(g_{i}^{\vee})^{2}\ \text{ for }\ i\in\{1,2,3\}\,,\qquad M^{2}=\tfrac{1}{4}g^{2}\,,\qquad A_{01}=-\tfrac{1}{4}(h_{2}+2\gamma)\,,\qquad A_{02}=\tfrac{1}{16}(h_{4}+\gamma h_{2}+\gamma^{2})\,,\qquad z=\tfrac{1}{4}(k^{2}-u^{\vee}+\gamma)\,,$ (4.7)
and using the Weierstrass identities (4.2), we find the spectral curve becomes
the pair of equations
$y^{2}=(z+\mu_{0}^{2}x)\widetilde{P}+\widetilde{Q}\,,\qquad 0=z^{2}+z\bigl(A_{01}-4M^{2}x\bigr)+\bigl(A_{02}-8M^{2}\mu_{0}^{2}x^{2}\bigr)-8M^{2}\mu_{0}y\,,$ (4.8)
which coincides with the M5 brane curve (3.19) and background surface (3.17). Note
that the definition of $z$ (up to a constant shift) was already motivated in
(2.28) by the pole structure of the spectral curve.
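The algebra behind this matching is short enough to check by computer; a minimal sympy sketch (our own verification, treating $k^{2}-u^{\vee}$ as a single symbol `k2u`) confirms that the substitutions (4.7) map (4.6) to $16$ times the curve in (4.8):

```python
import sympy as sp

k2u, x, y, z = sp.symbols('k2u x y z')          # k2u stands for k^2 - u^vee
h2, h4, g, g0v, gamma = sp.symbols('h2 h4 g g0v gamma')

# BC_2 spectral curve (4.6):
curve = ((k2u)**2 - h2*k2u + h4
         - 4*g**2*(x*k2u + 4*g0v*y + 2*g0v**2*x**2 + x*gamma))

# Identifications (4.7):
M2 = g**2/4                                     # M^2
mu0 = g0v/2                                     # mu_0
A01 = -(h2 + 2*gamma)/4
A02 = (h4 + gamma*h2 + gamma**2)/16

# M5 brane curve (4.8):
m5 = z**2 + z*(A01 - 4*M2*x) + (A02 - 8*M2*mu0**2*x**2) - 8*M2*mu0*y

# z = (k2u + gamma)/4  <=>  k2u = 4z - gamma:
print(sp.expand(curve.subs(k2u, 4*z - gamma) - 16*m5))   # prints 0
```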
## Acknowledgement
We would like to thank Mario Martone, Joseph Minahan, Yiwen Pan, and Oliver
Schlotterer for helpful discussions. YL is grateful to Yuji Tachikawa for
pointing out the paper [27] and for encouragement. The collaboration between
OC and YL started during the workshop “Elliptic integrable systems, special
functions, and quantum field theory” which took place in June, 2019 at
Nordita, Stockholm. OC is grateful to the organizers for the invitation, and
OC and YL would like to thank Nordita for the hospitality and stimulating
environment. The work of PCA is supported in part by DOE grant SC0011784. The
work of YL is supported by the Knut and Alice Wallenberg Foundation under
grant Dnr KAW 2015.0083.
## Appendix A Appendix
### A.1 Elliptic functions and identities
We use the following functions
$\sigma_{\alpha}^{r}(x)=\frac{\vartheta_{r+1}(x-\alpha)\vartheta_{1}^{\prime}(0)}{\vartheta_{r+1}(x)\vartheta_{1}(-\alpha)}\,,\quad
r=0,1,2,3\,,$ (A.1)
where $\vartheta_{1,2,3,4}(x|\tau)$ are the Jacobi theta functions. A summary
of their main properties can be found in [56]; in particular, we have
$\sigma_{\alpha}^{r}(x+\omega)=e^{2\pi
i\alpha\partial_{\tau}\omega}\sigma_{\alpha}^{r}(x)\quad\text{for}\
\omega\in\mathbb{Z}+\tau\mathbb{Z}\,.$ (A.2)
(Here we use the shorthand notation $\partial_{\tau}(a+b\tau)=b$.) We also
denote $\sigma_{\alpha}^{0}(x)$ simply by $\sigma_{\alpha}(x)$, that is,
$\sigma_{\alpha}(x)=\frac{\vartheta_{1}(x-\alpha)\vartheta_{1}^{\prime}(0)}{\vartheta_{1}(x)\vartheta_{1}(-\alpha)}\,.$
(A.3)
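As a numeric illustration (not part of the text), the quasi-periodicity (A.2) for $\sigma_{\alpha}=\sigma^{0}_{\alpha}$ can be checked with mpmath's Jacobi theta functions; here we assume the convention $\vartheta_{1}(x|\tau)=\vartheta_{1}^{\mathrm{mpmath}}(\pi x,q)$ with $q=e^{i\pi\tau}$:

```python
import mpmath as mp

tau = mp.mpc(0.3, 1.1)
q = mp.exp(1j*mp.pi*tau)
th1 = lambda x: mp.jtheta(1, mp.pi*x, q)       # theta_1 with period 1 in x
th1p0 = mp.pi*mp.jtheta(1, 0, q, 1)            # d/dx theta_1 at x = 0

def sigma(a, x):                                # sigma_alpha(x) of (A.3)
    return th1(x - a)*th1p0/(th1(x)*th1(-a))

a, x = mp.mpc(0.17, 0.05), mp.mpc(0.4, 0.2)
print(abs(sigma(a, x + 1) - sigma(a, x)))                          # ~ 0
print(abs(sigma(a, x + tau) - mp.exp(2j*mp.pi*a)*sigma(a, x)))     # ~ 0
```

The two printed residuals correspond to $\omega=1$ (where $\partial_{\tau}\omega=0$) and $\omega=\tau$ (where $\partial_{\tau}\omega=1$) in (A.2).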
The functions (A.1) are related to each other by translations by the
half-periods
$(\omega_{0},\omega_{1},\omega_{2},\omega_{3})=(0,\frac{1}{2},\frac{1+\tau}{2},\frac{\tau}{2})$:
$\sigma_{\alpha}^{r}(x)=e^{2\pi
i\alpha\partial_{\tau}\omega_{r}}\sigma_{\alpha}(x-\omega_{r}).$ (A.4)
For given coupling parameters $g_{0,1,2,3}$, we further define
$v_{\alpha}(x)=v_{\alpha}(x;g_{0},g_{1},g_{2},g_{3})=\sum_{r=0}^{3}g_{r}\sigma_{2\alpha}^{r}(x).$
(A.5)
Note the properties
$\sigma_{-\alpha}(-x)=-\sigma_{\alpha}(x)\,,\qquad v_{-\alpha}(-x)=-v_{\alpha}(x)\,,$ (A.6)
and the following identities:
$\sigma_{\alpha}(-x)\sigma_{\alpha}(x)=\wp(\alpha)-\wp(x)\,,$ (A.7)
$v_{\alpha}(-x)v_{\alpha}(x)=\sum_{r=0}^{3}\Bigl((g_{r}^{\vee})^{2}\wp(\alpha+\omega_{r})-(g_{r})^{2}\wp(x+\omega_{r})\Bigr)\,,$ (A.8)
where $g^{\vee}_{i}$ are the dual parameters (2.9). Using the notation (2.2),
(2.28), the last relation can be written as
$v_{\alpha}(-x)v_{\alpha}(x)=u^{\vee}(\alpha)-u(x)$.
Another useful property of $v_{\alpha}(x)$ is the following duality:
$v_{\alpha}(x;g_{0},g_{1},g_{2},g_{3})=v_{-x}(-\alpha;g^{\vee}_{0},g^{\vee}_{1},g^{\vee}_{2},g^{\vee}_{3})=-v_{x}(\alpha;g^{\vee}_{0},g^{\vee}_{1},g^{\vee}_{2},g^{\vee}_{3})\,.$
(A.9)
This can be checked by comparing translation properties and residues in the
$x$-variable.
Finally, let us state how $\sigma_{\alpha}(x)$ and $v_{\alpha}(x)$ behave
under the action of $\gamma\in\mathrm{SL}(2,\mathbb{Z})$. We will use the group
homomorphism $\pi$ from $\mathrm{SL}(2,\mathbb{Z})$ to the permutation group
$S_{3}$ defined on the generators as follows:
$\pi\,:\,\mathrm{SL}(2,\mathbb{Z})\to S_{3}\,,\quad\gamma\mapsto\pi_{\gamma}\,,\quad\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)\mapsto s_{23}\,,\quad\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)\mapsto s_{13}\,.$ (A.10)
Note that the kernel of $\pi$ is the principal congruence subgroup
${\Gamma}(2)\subset\mathrm{SL}(2,\mathbb{Z})$.
Take ${\gamma}=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\mathrm{SL}(2,\mathbb{Z})$ and define
$\tau^{\prime}$, $\alpha^{\prime}$, $x^{\prime}$, $g^{\prime}_{i}$ in the
following way:
${\tau}^{\prime}=\frac{a{\tau}+b}{c{\tau}+d}\,,\quad\alpha^{\prime}=(c{\tau}+d)^{-1}\alpha\,,\quad x^{\prime}=(c{\tau}+d)^{-1}x\,,$ (A.11)
$g_{0}^{\prime}=g_{0}\,,\quad g^{\prime}_{r}=g_{\pi_{{\gamma}}(r)}\quad\text{for}\ r=1,2,3\,.$ (A.12)
With this notation, we have:
${\sigma}_{\alpha^{\prime}}(x^{\prime}|{\tau}^{\prime})=(c{\tau}+d)\exp\left(-\frac{2\pi ic}{c{\tau}+d}\,\alpha x\right){\sigma}_{\alpha}(x|{\tau})\,,$ (A.13)
${\sigma}^{\pi_{\gamma}(r)}_{\alpha^{\prime}}(x^{\prime}|{\tau}^{\prime})=(c{\tau}+d)\exp\left(-\frac{2\pi ic}{c{\tau}+d}\,\alpha x\right){\sigma}^{r}_{\alpha}(x|{\tau})\,,\quad r=1,2,3\,.$ (A.14)
These transformations can be deduced easily using the modular transformations
of Jacobi theta functions. As a corollary,
$v_{\alpha^{\prime}}(x^{\prime};g_{0}^{\prime},g_{1}^{\prime},g_{2}^{\prime},g_{3}^{\prime}|{\tau}^{\prime})=(c{\tau}+d)\exp\left(-\frac{4\pi ic}{c{\tau}+d}\,\alpha x\right)v_{\alpha}(x;g_{0},g_{1},g_{2},g_{3}|{\tau})\,.$ (A.15)
### A.2 Calculating the $N=2$ spectral curve
The $N=2$ spectral curve is defined by the characteristic polynomial
$\det(L-k\mathrm{Id})=k^{4}+a_{1}k^{3}+a_{2}k^{2}+a_{3}k+a_{4}$ (A.16)
of the Lax matrix (2.4.2). By direct calculation,
$a_{1}=0\,,$
$a_{2}=-\Bigl(p_{1}^{2}+p_{2}^{2}+2g^{2}\bigl(\sigma_{\alpha}(-q_{12})\sigma_{\alpha}(q_{12})+\sigma_{\alpha}(-q^{+}_{12})\sigma_{\alpha}(q^{+}_{12})\bigr)+v_{\alpha}(-q_{1})v_{\alpha}(q_{1})+v_{\alpha}(-q_{2})v_{\alpha}(q_{2})\Bigr)\,,$
$a_{3}=-2g^{2}\Bigl(v_{\alpha}(q_{2})\sigma_{\alpha}(q_{12})\sigma_{\alpha}(-q^{+}_{12})+v_{\alpha}(q_{1})\sigma_{\alpha}(-q_{12})\sigma_{\alpha}(-q^{+}_{12})+v_{\alpha}(-q_{1})\sigma_{\alpha}(q_{12})\sigma_{\alpha}(q^{+}_{12})+v_{\alpha}(-q_{2})\sigma_{\alpha}(-q_{12})\sigma_{\alpha}(q^{+}_{12})\Bigr)\,,$
$a_{4}=p_{1}^{2}p_{2}^{2}+v_{\alpha}(-q_{2})v_{\alpha}(q_{2})p_{1}^{2}+v_{\alpha}(-q_{1})v_{\alpha}(q_{1})p_{2}^{2}+2g^{2}\Bigl(\sigma_{\alpha}(-q^{+}_{12})\sigma_{\alpha}(q^{+}_{12})-\sigma_{\alpha}(-q_{12})\sigma_{\alpha}(q_{12})\Bigr)p_{1}p_{2}+v_{\alpha}(-q_{1})v_{\alpha}(q_{1})v_{\alpha}(-q_{2})v_{\alpha}(q_{2})-g^{2}\Bigl(v_{\alpha}(q_{1})v_{\alpha}(q_{2})\sigma_{\alpha}(-q^{+}_{12})^{2}+v_{\alpha}(-q_{1})v_{\alpha}(q_{2})\sigma_{\alpha}(q_{12})^{2}+v_{\alpha}(q_{1})v_{\alpha}(-q_{2})\sigma_{\alpha}(-q_{12})^{2}+v_{\alpha}(-q_{1})v_{\alpha}(-q_{2})\sigma_{\alpha}(q^{+}_{12})^{2}\Bigr)+g^{4}\Bigl(\sigma_{\alpha}(-q_{12})\sigma_{\alpha}(q_{12})-\sigma_{\alpha}(-q^{+}_{12})\sigma_{\alpha}(q^{+}_{12})\Bigr)^{2}\,,$ (A.17)
where we have used the abbreviations $q_{ij}=q_{i}-q_{j}$ and
$q_{ij}^{+}=q_{i}+q_{j}$.
Using (A.7) and (A.8), we easily find that
$a_{2}=-\Bigl(h_{2}+4g^{2}\wp(\alpha)+2\sum_{r=0}^{3}(g_{r}^{\vee})^{2}\wp(\alpha+\omega_{r})\Bigr)=-\bigl(h_{2}+4g^{2}\wp(\alpha)+2u^{\vee}(\alpha)\bigr)\,,$ (A.18)
where
$h_{2}=p_{1}^{2}+p_{2}^{2}-u(q_{1})-u(q_{2})-2g^{2}\left(\wp(q_{12})+\wp(q_{12}^{+})\right)\,.$
(A.19)
To calculate $a_{3}$, we first note that it is elliptic in $q_{1,2}$ with
possible first order poles along the mirrors $q_{i}=0$ for $i=1,2$ and
$q_{1}\pm q_{2}=0$. However, it is symmetric under interchanging $q_{1},q_{2}$
and changing their signs arbitrarily. Hence, $a_{3}$ cannot have a first order
pole along any mirror, thus it is regular elliptic, i.e. constant, independent
of $q_{1}$, $q_{2}$. After that we can evaluate $a_{3}$ at convenient values
of $q_{1},q_{2}$. The result is
$a_{3}=-2g^{2}\Bigl(\sum_{i=0}^{3}g_{i}\Bigr)\wp^{\prime}(\alpha)=-4g^{2}g_{0}^{\vee}\wp^{\prime}(\alpha)\,.$ (A.20)
It remains to deal with $a_{4}$. By using (A.7) and (A.8) repeatedly, we
rearrange it into
$a_{4}=\Bigl(\sum_{r=0}^{3}(g_{r}^{\vee})^{2}\wp(\alpha+\omega_{r})\Bigr)h_{2}+\Bigl(\sum_{r=0}^{3}(g_{r}^{\vee})^{2}\wp(\alpha+\omega_{r})\Bigr)^{2}+\bigl(p_{1}p_{2}+g^{2}\wp(q_{12})-g^{2}\wp(q_{12}^{+})\bigr)^{2}-u(q_{1})p_{2}^{2}-u(q_{2})p_{1}^{2}+u(q_{1})u(q_{2})-g^{2}b\,,$ (A.21)
where we have introduced
$b=v_{\alpha}(q_{1})v_{\alpha}(q_{2})\sigma_{\alpha}(-q^{+}_{12})^{2}+v_{\alpha}(-q_{1})v_{\alpha}(q_{2})\sigma_{\alpha}(q_{12})^{2}+v_{\alpha}(q_{1})v_{\alpha}(-q_{2})\sigma_{\alpha}(-q_{12})^{2}+v_{\alpha}(-q_{1})v_{\alpha}(-q_{2})\sigma_{\alpha}(q^{+}_{12})^{2}\,.$ (A.22)
Calculating $b$ is more involved, so we just give a sketch. As the first step,
we analyse the $2$nd order poles in $q_{1},q_{2}$ and find that the following
expression agrees with $b$ up to an extra term $c$ having first order poles
only:
$b=\bigl(2u^{\vee}(\alpha)-u(q_{1})-u(q_{2})\bigr)\bigl(\wp(q_{12})+\wp(q_{12}^{+})\bigr)+\sum_{r=1}^{3}2g_{r}^{2}\wp(q_{1}+\omega_{r})\wp(q_{2}+\omega_{r})+c\,.$ (A.23)
Using the symmetry arguments once more, we conclude that $c$ must be regular,
i.e. it is just a function of $\alpha$. In addition, we know that
$c=c(\alpha)$ is even elliptic. It is also easy to check that $c(\alpha)$ has
a 4th order pole at $\alpha=0$ and 2nd order poles at $\alpha=\omega_{1,2,3}$.
To determine $c(\alpha)$ from that, we analyse the Laurent expansion of $b$ in
$\alpha$ near $\alpha=0$ and $\alpha=\omega_{1,2,3}$. We skip the details and
just give the answer:
$c=2\left[4(g_{0}^{\vee})^{2}\wp(\alpha)^{2}-2\wp(\alpha)\left(u^{\vee}(\alpha)-\sum_{r=1}^{3}(g^{\vee}_{r})^{2}e_{r}\right)\right]+d\,,$ (A.24)
up to a possible constant $d$ which may depend on $g_{i}$ and
$e_{i}=\wp(\omega_{i})$, but not on $\alpha$.
Backward substitution of (A.24) and (A.23) into (A.21) gives the answer for
$a_{4}$, after which all that remains is to rearrange the terms based on the
form of the quartic Hamiltonian $h_{4}$ (2.1). The constant $d$ in (A.24) can
always be absorbed into $h_{4}$, so can be ignored.
## References
* [1] N. Seiberg and E. Witten, Electric-magnetic duality, monopole condensation, and confinement in $\mathcal{N}=2$ supersymmetric Yang-Mills theory, Nucl. Phys. B 426 (1994) 19–52, [hep-th/9407087]. [Erratum: Nucl.Phys.B 430, 485–486 (1994)].
* [2] N. Seiberg and E. Witten, Monopoles, duality and chiral symmetry breaking in $\mathcal{N}=2$ supersymmetric QCD, Nucl. Phys. B 431 (1994) 484–550, [hep-th/9408099].
* [3] A. Gorsky, I. Krichever, A. Marshakov, A. Mironov, and A. Morozov, Integrability and Seiberg-Witten exact solution, Phys. Lett. B 355 (1995) 466–474, [hep-th/9505035].
* [4] R. Donagi and E. Witten, Supersymmetric Yang-Mills theory and integrable systems, Nucl. Phys. B 460 (1996) 299–334, [hep-th/9510101].
* [5] R. Y. Donagi, Seiberg-Witten integrable systems, Surveys in Differential Geometry 4 (1998), no. 1 83–129, [alg-geom/9705010].
* [6] E. J. Martinec and N. P. Warner, Integrable systems and supersymmetric gauge theory, Nucl. Phys. B 459 (1996) 97–112, [hep-th/9509161].
* [7] E. J. Martinec, Integrable structures in supersymmetric gauge and string theory, Phys. Lett. B 367 (1996) 91–96, [hep-th/9510204].
* [8] E. D’Hoker and D. Phong, Calogero-Moser systems in SU(N) Seiberg-Witten theory, Nucl. Phys. B 513 (1998) 405–444, [hep-th/9709053].
* [9] E. D’Hoker and D. Phong, Calogero–Moser Lax pairs with spectral parameter for general Lie algebras, Nucl. Phys. B 530 (1998) 537–610, [hep-th/9804124].
* [10] A. Gorsky, A. Marshakov, A. Mironov, and A. Morozov, N=2 supersymmetric QCD and integrable spin chains: Rational case $N_{f}<2N_{c}$, Phys. Lett. B 380 (1996) 75–80, [hep-th/9603140].
* [11] A. Gorsky, S. Gukov, and A. Mironov, Multiscale N=2 SUSY field theories, integrable systems and their stringy/brane origin. 1., Nucl. Phys. B 517 (1998) 409–461, [hep-th/9707120].
* [12] A. Gorsky and A. Mironov, Integrable many-body systems and gauge theories, in Integrable hierarchies and modern physical theories (Chicago, IL, 2000), vol. 18 of NATO Sci. Ser. II Math. Phys. Chem., pp. 33–176. Kluwer Acad. Publ., Dordrecht, 2001. hep-th/0011197.
* [13] D. Gaiotto, $\mathcal{N}=2$ dualities, JHEP 08 (2012) 034, [arXiv:0904.2715].
* [14] E. Witten, Solutions of four-dimensional field theories via M theory, Nucl. Phys. B 500 (1997) 3–42, [hep-th/9703166].
* [15] A. Gorsky and N. Nekrasov, Elliptic Calogero-Moser system from two-dimensional current algebra, hep-th/9401021.
* [16] P. Argyres, O. Chalykh, and Y. Lü, In preparation.
* [17] M. Caorsi and S. Cecotti, Geometric classification of 4d $\mathcal{N}=2$ SCFTs, JHEP 07 (2018) 138, [arXiv:1801.04542].
* [18] Y. Tachikawa and G. Zafrir, Reflection groups and 3d $\mathcal{N}\geq$ 6 SCFTs, JHEP 12 (2019) 176, [arXiv:1908.03346].
* [19] V. L. Popov, Discrete complex reflection groups, vol. 15 of Communications of the Mathematical Institute, Rijksuniversiteit Utrecht. Rijksuniversiteit Utrecht, Mathematical Institute, Utrecht, 1982.
* [20] V. Goryunov and S. H. Man, The complex crystallographic groups and symmetries of $J_{10}$, in Singularity theory and its applications, vol. 43 of Adv. Stud. Pure Math., pp. 55–72. Math. Soc. Japan, Tokyo, 2006.
* [21] P. Etingof, G. Felder, X. Ma, and A. Veselov, On elliptic Calogero-Moser systems for complex crystallographic reflection groups, Journal of Algebra 329 (2011), no. 1 107–129, [arXiv:1003.4689].
* [22] J. A. Minahan and D. Nemeschansky, An N=2 superconformal fixed point with $E_{6}$ global symmetry, Nucl. Phys. B482 (1996) 142–152, [hep-th/9608047].
* [23] J. A. Minahan and D. Nemeschansky, Superconformal fixed points with $E_{n}$ global symmetry, Nucl. Phys. B489 (1997) 24–46, [hep-th/9610076].
* [24] O. Aharony and Y. Tachikawa, A Holographic computation of the central charges of d=4, N=2 SCFTs, JHEP 01 (2008) 037, [arXiv:0711.4532].
* [25] D. Nanopoulos and D. Xie, $\mathcal{N}=2$ SU Quiver with USp ends or SU ends with antisymmetric Matter, JHEP 08 (2009) 108, [arXiv:0907.1651].
* [26] F. Benini, S. Benvenuti, and Y. Tachikawa, Webs of five-branes and $\mathcal{N}=2$ superconformal field theories, JHEP 09 (2009) 052, [arXiv:0906.0359].
* [27] D. Gaiotto and S. S. Razamat, Exceptional indices, JHEP 05 (2012) 145, [arXiv:1203.5517].
* [28] V. I. Inozemtsev, Lax representation with spectral parameter on a torus for integrable particle systems, Letters in Mathematical Physics 17 (1989), no. 1 11–17.
* [29] P. C. Argyres, R. Maimon, and S. Pelland, The M theory lift of two O6- planes and four D6-branes, JHEP 05 (2002) 008, [hep-th/0204127].
* [30] K. Takasaki, Elliptic Calogero-Moser systems and isomonodromic deformations, J. Math. Phys. 40 (1999) 5787–5821, [math/9905101].
* [31] O. Chalykh, Quantum Lax pairs via Dunkl and Cherednik Operators, Communications in Mathematical Physics 369 (2019), no. 1 261–316, [arXiv:1804.01766].
* [32] H. Ochiai, T. Oshima, and H. Sekiguchi, Commuting families of symmetric differential operators, Proc. Japan Acad. Ser. A Math. Sci. 70 (1994), no. 2 62–66.
* [33] A. Zotov, Elliptic Linear Problem for the Calogero-Inozemtsev Model and Painlevé VI Equation, Letters in Mathematical Physics 67 (2004), no. 2 153–165, [hep-th/0310260].
* [34] V. M. Buchstaber, G. Felder, and A. P. Veselov, Elliptic Dunkl operators, root systems, and functional equations, Duke Math. J. 76 (1994), no. 3 885–911, [hep-th/9403178].
* [35] I. M. Krichever, Elliptic solutions of the Kadomtsev-Petviashvili equation and integrable systems of particles, Functional analysis and its applications 14 (1980), no. 4 282–290.
* [36] N. Hitchin, Stable bundles and integrable systems, Duke Math. J. 54 (1987), no. 1 91–114.
* [37] N. Nekrasov, Holomorphic bundles and many body systems, Commun. Math. Phys. 180 (1996) 587–604, [hep-th/9503157].
* [38] B. Nasatyr and B. Steer, Orbifold Riemann surfaces and the Yang-Mills-Higgs equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 22 (1995), no. 4 595–643, [alg-geom/9504015].
* [39] J. Hurtubise and E. Markman, Calogero-Moser systems and Hitchin systems, Commun. Math. Phys. 223 (2001) 533–552, [math/9912161].
* [40] P. Boalch, Wild character varieties, meromorphic Hitchin systems and Dynkin diagrams, in Geometry and physics. Vol. II, pp. 433–454. Oxford Univ. Press, Oxford, 2018. arXiv:1703.10376.
* [41] P. Etingof, W. L. Gan, and A. Oblomkov, Generalized double affine Hecke algebras of higher rank, J. Reine Angew. Math. 600 (2006) 177–201, [math/0504089].
* [42] P. Boalch, Simply-laced isomonodromy systems, Publ. Math. Inst. Hautes Études Sci. 116 (2012) 1–68, [arXiv:1107.0874].
* [43] A. M. Levin and M. A. Olshanetsky, Painlevé-Calogero correspondence, in Calogero-Moser-Sutherland models (Montréal, QC, 1997), CRM Ser. Math. Phys., pp. 313–332. Springer, New York, 2000.
* [44] K. Takasaki, Painlevé-Calogero correspondence revisited, J. Math. Phys. 42 (2001), no. 3 1443–1473, [math/0004118].
* [45] H. Kawakami, Matrix Painlevé systems, J. Math. Phys. 56 (2015), no. 3 033503, 27.
* [46] M. Bertola, M. Cafasso, and V. Rubtsov, Noncommutative Painlevé equations and systems of Calogero type, Comm. Math. Phys. 363 (2018), no. 2 503–530, [arXiv:1710.00736].
* [47] K.-M. Lee and P. Yi, A Family of $\mathcal{N}=2$ gauge theories with exact S duality, Nucl. Phys. B 520 (1998) 157–178, [hep-th/9706023].
* [48] D. S. Freed, Special Kähler manifolds, Commun. Math. Phys. 203 (1999) 31–52, [hep-th/9712042].
* [49] S. Gukov and A. Kapustin, New $\mathcal{N}=2$ superconformal field theories from M/F theory orbifolds, Nucl. Phys. B 545 (1999) 283–308, [hep-th/9808175].
* [50] A. Sen, F theory and orientifolds, Nucl. Phys. B 475 (1996) 562–578, [hep-th/9605150].
* [51] T. Banks, M. R. Douglas, and N. Seiberg, Probing F theory with branes, Phys. Lett. B 387 (1996) 278–281, [hep-th/9605199].
* [52] O. Aharony, J. Sonnenschein, S. Yankielowicz, and S. Theisen, Field theory questions for string theory answers, Nucl. Phys. B 493 (1997) 177–197, [hep-th/9611222].
* [53] M. R. Douglas, D. A. Lowe, and J. H. Schwarz, Probing F theory with multiple branes, Phys. Lett. B 394 (1997) 297–301, [hep-th/9612062].
* [54] D. Gaiotto, G. W. Moore, and Y. Tachikawa, On 6d $\mathcal{N}=(2,0)$ theory compactified on a Riemann surface with finite area, PTEP 2013 (2013) 013B03, [arXiv:1110.2657].
* [55] D. Gaiotto, G. W. Moore, and A. Neitzke, Wall-crossing, Hitchin Systems, and the WKB Approximation, arXiv:0907.3987.
* [56] Y. Komori and K. Hikami, Quantum integrability of the generalized elliptic Ruijsenaars models, J. Phys. A 30 (1997), no. 12 4341–4364.
# A new finite element paradigm to solve contact problems with roughness
Jacopo Bonari, Marco Paggi (IMT School for Advanced Studies Lucca,
Piazza San Francesco 19, 55100 Lucca, Italy) and Daniele Dini (Department of Mechanical
Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ)
###### Abstract
This article’s main scope is the presentation of a computational method for
the simulation of contact problems within the finite element method involving
complex and rough surfaces. The approach relies on the MPJR (eMbedded Profile
for Joint Roughness) interface finite element proposed in [Paggi, Reinoso
(2020) Mech Adv Mat Struct, 27:20 (2020)], which is nominally flat but can
embed at the nodal level any arbitrary height to reconstruct the displacement
field due to contact in the presence of roughness. Here, the formulation is
generalized to handle 3D surface height fields and any arbitrary nonlinear
interface constitutive relation, including friction and adhesion. The
methodology is herein validated with BEM solutions for linear elastic contact
problems. Then, a selection of nonlinear contact problems prohibitive to be
simulated by BEM and by standard contact algorithms in FEM are detailed, to
highlight the promising aspects of the proposed method for tribology.
###### keywords:
Contact mechanics, Roughness, Friction, Adhesion, Finite Element Method.
Journal: International Journal of Solids and Structures
_Dedicated to Jim Barber’s 80th birthday_
## 1 Introduction
During his long career, Professor James Barber has led many leading-edge
advancements in the fields of continuum mechanics and contact mechanics. Since
his dissertation [1], he has comprehensively exploited analytical methods to shed
light on contact problems involving friction [2, 3, 4], the stability of
thermoelastic contact [5, 6, 7], and surface roughness [8, 9, 10, 11]. His research
achievements have been recognized by highly cited publications and books [12,
13, 14].
Since the 1990s, the scientific problem of contact between rough surfaces,
which was initially posed and investigated by mechanicians for tribological
applications, has progressively attracted significant attention from
researchers in other disciplines, especially physics and biology. Indeed,
understanding how the multiscale features of surface roughness influence the
overall emergent features of contact has fundamental implications for a wide
range of technological and physical applications, see e.g. [15, 16, 17, 18,
19, 20]. At the same time, the technological trend to engineer materials by
tailoring their properties at the micro- and even at the nanoscales opens the
issue of accurately representing all the relevant length scales for roughness
and, at the same time, allows the simulation of nonlinear phenomena at the
interface -e.g. friction or adhesion- and in the surrounding bulk -e.g.
fracture, viscoelasticity, and plasticity.
Research on this matter has seen significant progress since the 1950s, with
the development of analytical and semi-analytical micromechanical contact
theories departing from statistics of rough surfaces treated according to
random process theory [21, 22, 23, 24, 25]. In the 1990s, the issue of
resolution-dependency of contact predictions was raised with the advent of
fractal models to synthetically represent roughness over multiple scales [26,
27, 28].
This advancement paved the way for computational methods to simulate contact
problems with roughness by directly including any given surface height field
and avoiding assumptions on their statistical distributions. In this regard,
the Boundary Element Method (BEM) (see [29, 30, 31, 32, 33]) emerged as a
powerful tool to analyze detailed 3D height fields, especially for
frictionless and adhesionless contact problems and linear elastic materials.
This methodology has proven to be computationally efficient since only
the height field needs to be discretized and Green's functions are used to
simulate the response of the semi-infinite continuum. Attempts to generalize
BEM to handle interface or material constitutive nonlinearities have been made
within the last decades to include frictional effects [34, 35, 36, 37], finite
thickness of the domain [38, 39, 40], bulk viscoelasticity [41, 42, 43],
interface adhesion [44, 45, 46, 47, 48], wear [49, 50, 51], plasticity [52,
53, 54, 55], lubrication [56]. However, such methodologies are difficult to
generalize to include all the above effects, and some underlying assumptions
cannot be lifted easily.
The Finite Element Method (FEM) would naturally allow gaining a deeper
understanding of many key features of the subject which were once precluded
with BEM, prime examples being the analysis of contact problems in finite
elasticity, different nonlinear constitutive behaviors, and finite size
geometries. However, the method comes at the cost of a remarkable increase
in the computational resources needed, together with the greater care necessary for
a faithful discretization of the rough surface, avoiding artificial smoothing
of the fine-scale geometrical characteristics of roughness. For these reasons, the
use of FEM for the analysis of rough contacts has been limited to a few studies
on frictionless problems validated against analytical solutions [57], plastic
deformation [58], finite strain indentation problems with Bezier-smoothed
interface for the prediction of constitutive interface laws [59], or studies
devoted to the identification of the smallest representative model size for
micromechanical applications [60, 61].
In [62], the _MPJR_ approach was introduced, which is capable of
circumventing some of the criticalities stemming from the discretization of
complex-shaped profiles with FEM. The key idea consists of embedding
the exact interface height field into a nominally smooth interface finite
element, whose kinematics is borrowed from the Cohesive Zone Model [63, 64].
Under the hypothesis of a rigid indenting profile, the exact deviation from
planarity of the real geometry can then be restored by performing a suitable
correction of the normal gap. This permits modeling complex contact
geometries with simple low-order meshes, with a significant simplification of the
overall macroscopic geometry definition and of the contact solution algorithms. This
regards two primary aspects: (i) the reduction of the high number of finite
elements required for the explicit discretization of the rough boundaries;
(ii) the avoidance of corner cases caused by rapidly varying surface normal
vectors that can induce a lack of convergence of contact search algorithms
[65].
The original MPJR formulation has been extended in [66] to account also for
friction in the partial slip regime. Moreover, it has also been employed to
simulate ironing problems up to full slip and with finite sliding
displacements for viscoelastic layers [67].
In the present article, the MPJR formulation is generalized in two different
directions: $(i)$ 2D contact problems with rough profiles in the presence of
friction and adhesive forces, as an example of a highly interface nonlinear
problem; $(ii)$ 3D contact of rough surfaces with friction. The paper is
structured as follows. In Sec. 2, the variational formulation of the interface
finite element is detailed. In Sec. 3, a set of numerical examples is
presented to show the new capabilities of the approach. In Sec. 4 a summary of
the results and an outlook of the future perspectives for tribological
applications is provided.
## 2 Variational formulation of contact problems with embedded roughness
The framework detailed in the sequel regards the derivation of an interface
finite element capable of simulating contact between a rigid surface
$\mathcal{S}_{\mathrm{r}}$ and a deformable bulk $\mathcal{B}$, with a generic
constitutive behavior, separated by a rough interface.
### 2.1 Contact with a conformal rigid surface
The orientation of the boundary $\partial\mathcal{B}$ is determined by its
outward pointing normal $\mathbf{n}$. The kinematic quantities governing
the contact problem, the normal gap $g_{\mathrm{n}}$ and the slip velocity
$\dot{\mathbf{g}}_{\tau}$, are defined as:
$g_{\mathrm{n}}=\mathbf{n}\cdot(\mathbf{u}_{\mathrm{r}}-\mathbf{u})\,,\qquad \dot{\mathbf{g}}_{\tau}=(\mathbf{I}-\mathbf{n}\otimes\mathbf{n})\cdot(\dot{\mathbf{u}}_{\mathrm{r}}-\dot{\mathbf{u}})\,,$ (1)
where $\mathbf{u}_{\mathrm{r}}$ and $\mathbf{u}$ represent, respectively, the
displacements of the rigid surface $\mathcal{S}_{\mathrm{r}}$ and of
$\partial\mathcal{B}_{\mathrm{C}}$, which is the subset of
$\partial\mathcal{B}$ where contact takes place.
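As a minimal illustration of Eq. (1) (a sketch of ours, not the element implementation), the pointwise kinematic quantities can be evaluated as:

```python
import numpy as np

def contact_kinematics(n, u_r, u, v_r, v):
    """Pointwise kinematics of Eq. (1): normal gap and slip velocity."""
    n = n/np.linalg.norm(n)                   # unit outward normal
    g_n = n @ (u_r - u)                       # normal gap
    P_tau = np.eye(n.size) - np.outer(n, n)   # tangential projector I - n (x) n
    g_dot_tau = P_tau @ (v_r - v)             # slip velocity
    return g_n, g_dot_tau

# Example: rigid surface displaced 0.1 along z and sliding along x.
g_n, g_dot = contact_kinematics(np.array([0.0, 0.0, 1.0]),
                                np.array([0.0, 0.0, 0.1]), np.zeros(3),
                                np.array([0.2, 0.0, 0.0]), np.zeros(3))
```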
The contact traction vector $\mathbf{t}$, related to the forces exerted by the
contact of $\mathcal{S}_{\mathrm{r}}$ over $\partial\mathcal{B}_{\mathrm{C}}$
can be expressed by means of the Cauchy theorem as
$\mathbf{t}=\mathbf{T}\cdot\mathbf{n}$, where $\mathbf{T}$ is the Cauchy
stress tensor. The split of $\mathbf{t}$ in its normal and tangential
components, $p_{\mathrm{n}}$ and $\mathbf{q}_{\tau}$ relative to
$\partial\mathcal{B}_{\mathrm{C}}$, makes it possible to define the normal
unilateral and tangential contact conditions.
If adhesive forces are neglected, the normal traction is always acting inward
with respect to the boundary, and therefore it is negative. This allows us to
summarize the conditions for normal contact in the set of relations known as
Hertz-Signorini-Moreau (HSM) inequalities:
$g_{\mathrm{n}}\geq 0\,,\qquad p_{\mathrm{n}}\leq 0\,,\qquad g_{\mathrm{n}}p_{\mathrm{n}}=0\qquad\text{on }\partial\mathcal{B}_{\mathrm{C}}\,.$ (2)
Starting from this definition, a displacement-based normal contact
constitutive relation can be defined by introducing a penalty parameter
$\varepsilon_{\mathrm{n}}$ which leads to the normal contact traction as:
$p_{\mathrm{n}}=\varepsilon_{\mathrm{n}}g_{\mathrm{n}}.$ (3)
The introduction of a displacement-based traction law also permits to easily
extend the analysis to adhesive problems via the definition of traction-
penetration relations that regularize the HSM contact conditions. In this
sense, the following constitutive relation is derived from a Lennard-Jones
potential-like relationship in the normal direction [68, 69, 70] and reads:
$p_{\mathrm{n}}=\frac{A_{H}}{6\pi
g_{0}^{3}}\biggl{[}\biggl{(}\frac{g_{0}}{g_{\mathrm{n}}}\biggl{)}^{9}-\biggl{(}\frac{g_{0}}{g_{\mathrm{n}}}\biggl{)}^{3}\biggl{]},$
(4)
where $A_{H}$ is the Hamaker constant characterizing the strength of adhesion
and $g_{0}$ represents the equilibrium distance between two approaching half-
spaces.
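For illustration, both normal constitutive options can be coded in a few lines; the following Python sketch (with purely illustrative parameter values, and with the penalty traction assumed active only on penetration, as is customary) implements Eqs. (3) and (4):

```python
import numpy as np

eps_n = 1.0e12              # penalty stiffness (illustrative value)
A_H, g0 = 1.0e-19, 1.0e-9   # Hamaker constant and equilibrium gap (illustrative)

def p_n_penalty(g_n):
    """Penalty regularization of the HSM conditions, Eq. (3)."""
    return eps_n*g_n if g_n < 0.0 else 0.0

def p_n_adhesive(g_n):
    """Lennard-Jones type traction-gap relation, Eq. (4)."""
    r = g0/g_n
    return A_H/(6.0*np.pi*g0**3)*(r**9 - r**3)
```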
If the effect of friction is taken into account, then the contact response has
to be differentiated depending on the status of the interface relative
displacements in tangential direction. The contact domain is therefore given
as:
$\partial\mathcal{B}_{\mathrm{C}}=\partial\mathcal{B}_{\mathrm{C,st}}\cup\partial\mathcal{B}_{\mathrm{C,sl}}\,,\qquad \partial\mathcal{B}_{\mathrm{C,st}}\cap\partial\mathcal{B}_{\mathrm{C,sl}}=\varnothing\,.$
In the equation above, the two subscripts _st_ and _sl_ denote the _stick_ and
the _slip_ regions, respectively. The former is characterized by the absence
of tangential relative motion between the bodies in contact, while the latter
by a nonvanishing relative sliding which gives rise to tangential tractions
opposing the relative movement. The discontinuity in the contact
subdomain boundary is a direct consequence of the non-linearity of the Coulomb
law employed for modeling friction.
Figure 1: Normal and tangential constitutive relations for the traction field
at the interface.
This can be expressed by the following set of equalities and inequalities:
$\mathbf{g}_{\tau}=0\,,\qquad\lVert\mathbf{q}_{\tau}\rVert\leq\mu\lvert p_{\mathrm{n}}\rvert\qquad\text{on }\partial\mathcal{B}_{\mathrm{C,st}}\,,$ (5a)
$\dot{\mathbf{g}}_{\tau}\neq 0\,,\qquad\mathbf{q}_{\tau}=\mu\lvert p_{\mathrm{n}}\rvert\frac{\dot{\mathbf{g}}_{\tau}}{\lVert\dot{\mathbf{g}}_{\tau}\rVert}\qquad\text{on }\partial\mathcal{B}_{\mathrm{C,sl}}\,,$ (5b)
where $\dot{\mathbf{g}}_{\tau}$ is the sliding velocity, and $\mu$ is the
friction coefficient. According to Eq. (5a), the tangential reaction can
prevent relative sliding up to a limit value coincident with $\mu\lvert
p_{\mathrm{n}}\rvert$, above which relative sliding begins with a constant
tangential reaction equivalent to the same threshold value. The interface
behavior is depicted in Fig. 1, together with the following
regularized constitutive law employed for resolving the multi-valuedness at
the origin [71]:
$\mathbf{q}_{\tau}=\mu\lvert p_{\mathrm{n}}\rvert\frac{\dot{\mathbf{g}}_{\tau}}{\lVert\dot{\mathbf{g}}_{\tau}\rVert}\tanh{\frac{\lVert\dot{\mathbf{g}}_{\tau}\rVert}{\dot{\varepsilon}_{\tau}}}\,.$ (6)
The use of this specific regularization scheme is only one possibility among
several, see for example [72]. In that reference, the tangential
response is modeled according to a Karush-Kuhn-Tucker (KKT) scheme for Coulomb
friction, defined by the set of equations:
$\Phi=\lVert\mathbf{q}_{\tau}\rVert-\mu\lvert p_{\mathrm{n}}\rvert\leq 0\,,\qquad\dot{\mathbf{g}}_{\tau}-\xi\frac{\partial\Phi}{\partial\mathbf{q}_{\tau}}=0\,,$ (7a)
$\xi\geq 0\,,\qquad\xi\Phi=0\,.$ (7b)
A regularization can likewise be defined on the slip rule, which, after the
introduction of a penalty parameter $\varepsilon_{\tau}$, reads:
$\dot{\mathbf{g}}_{\tau}-\xi\frac{\partial\Phi}{\partial\mathbf{q}_{\tau}}=\frac{1}{\varepsilon_{\tau}}\dot{\mathbf{q}}_{\tau}\,.$ (8)
The two schemes introduce different errors into the Coulomb
friction law. For Eq. (6), the error stems from the lack of a clear
distinction between zones of stick and zones of slip, thus resulting in the
introduction of a transition zone whose amplitude is strongly dependent on the
chosen value of $\dot{\varepsilon}_{\tau}$. A clear and sharp distinction is
only retrieved in the limit $\dot{\varepsilon}_{\tau}\to 0$. On the other
hand, with the penalty regularization, the error is introduced as a difference
between relative velocity and slip rate.
Each of the two possible choices comes with its own advantages and
disadvantages, but both provide robust constraint enforcement
procedures. The use of Eq. (6) over the penalized KKT approach offers the
advantage of directly linking tractions and displacements, with no need of
defining trial stick and slip nodes, thus avoiding the necessity of setting up
an additional loop for the identification of the correct stick and slip
domains and the definition of a return map for the identification of the slip
rate. This choice comes with the cost of having a less versatile
implementation. The KKT formulation delivers exact results and the penalty
regularization is just a possible way of proceeding. Stemming from the same
KKT conditions, the problem can also be treated by exploiting lagrangian or
augmented lagrangian schemes, with or without penalization. The same does not
apply to Eq. (6), being only a phenomenological interpretation of Coulomb’s
friction law. For the sake of completeness, the use of the hyperbolic tangent
as regularizing function is only a possibility among different possible
choices. Other functions that approximates tangential tractions arising from
friction have been used and can be found in [71, 73, 74, 75, 65, Ch. 5, pp.
79–80]
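To fix ideas, a minimal Python sketch of the tanh-regularized law of Eq. (6) follows; the function name and numerical values are ours, chosen for illustration only, and the sliding velocity is taken as a scalar for simplicity.

```python
import numpy as np

def q_tau_tanh(g_dot, p_n, mu=0.2, eps_tau=1e-3):
    """Regularized Coulomb traction, Eq. (6), for a scalar sliding velocity.

    The tanh factor replaces the multi-valued stick branch at g_dot = 0 with
    a smooth transition whose width scales with eps_tau; the Coulomb limit
    mu*|p_n| is approached for |g_dot| >> eps_tau."""
    return mu * abs(p_n) * np.sign(g_dot) * np.tanh(np.abs(g_dot) / eps_tau)

# Shrinking eps_tau sharpens the stick/slip transition; the exact Coulomb
# law is recovered only in the limit eps_tau -> 0.
for eps_tau in (1e-1, 1e-2, 1e-3):
    print(eps_tau, q_tau_tanh(0.05, p_n=-1.0, eps_tau=eps_tau))
```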
When adhesion is also introduced, the tangential reaction expressed by Eq. (6)
is modified as [70]:
$\mathbf{q}_{\tau}=\mu\bigl(\lvert p_{\mathrm{n}}\rvert-p_{\mathrm{c}}\bigr)\,H\bigl(g_{\mathrm{c}}-g_{\mathrm{n}}\bigr)\,\frac{\dot{\mathbf{g}}_{\tau}}{\lVert\dot{\mathbf{g}}_{\tau}\rVert}\tanh{\frac{\lVert\dot{\mathbf{g}}_{\tau}\rVert}{\dot{\varepsilon}_{\tau}}},$ (9)
where $p_{\mathrm{c}}$ is the value of the normal traction corresponding to a specified cut-off normal gap $g_{\mathrm{c}}$, and $H(x)$ is the Heaviside step function. In this way, the effect of the adhesive tractions on the frictional forces can be modulated. Introducing $H(x)$ in Eq. (9) makes the tangential traction field only $\mathcal{C}^{0}$ continuous, unless the condition $g_{\mathrm{c}}=g_{\mathrm{p}}$ is met, where $g_{\mathrm{p}}$ is the normal gap related to the pull-out normal traction. Since the global (and unique) maximum of the normal tractions is located at this point, it is the only value for which $\mathbf{q}_{\tau}$ can reach zero smoothly, Fig. 2. On the other hand, imposing $g_{\mathrm{c}}=g_{0}$, the classic Coulomb law is retrieved, in the sense that no tangential forces are present for positive normal tractions. In this latter case, the system's full-slip state can be assessed more easily, since a perfect correspondence between the tangential tractions and the normal tractions scaled by $\mu$ is guaranteed.
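A corresponding sketch of the adhesive-frictional law of Eq. (9) is given below; the convention $H(0)=1$, the argument names, and the numerical values are our choices for illustration.

```python
import numpy as np

def q_tau_adhesive(g_dot, p_n, g_n, p_c, g_c, mu=0.2, eps_tau=1e-3):
    """Tangential traction with adhesion, Eq. (9), scalar version.

    Friction is driven by the shifted normal traction (|p_n| - p_c) and is
    switched off entirely, via H(g_c - g_n), once the normal gap g_n
    exceeds the cut-off g_c."""
    heaviside = 1.0 if g_c - g_n >= 0.0 else 0.0
    return (mu * (abs(p_n) - p_c) * heaviside *
            np.sign(g_dot) * np.tanh(np.abs(g_dot) / eps_tau))

# Inside the cut-off gap the traction follows the regularized Coulomb law;
# beyond it, no frictional force is transmitted.
print(q_tau_adhesive(g_dot=0.1, p_n=-1.0, g_n=0.0, p_c=0.3, g_c=0.01))
print(q_tau_adhesive(g_dot=0.1, p_n=-1.0, g_n=0.05, p_c=0.3, g_c=0.01))
```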
Figure 2: Influence of cut-off normal gap $g_{\mathrm{c}}$ over tangential
tractions $q_{\tau}$.
The contribution of the interface to the weak form of the boundary value
problem can be written by means of the virtual work principle as:
$\delta\boldsymbol{\Pi}=\int_{\partial\mathcal{B}_{\mathrm{C}}}(p_{\mathrm{n}}\,\delta g_{\mathrm{n}}+\mathbf{q}_{\tau}\cdot\delta\mathbf{g}_{\tau})\,\mathrm{d}s.$ (10)
The solution of the contact problem in a finite element framework requires the
geometrical approximation of $\mathcal{B}$ and of the contacting interface
$\partial\mathcal{B}_{\mathrm{C}}$, an operation that paves the way for their
discretization into finite elements. The process can be formalized as:
$\mathcal{B}\approx\mathcal{B}^{\mathrm{h}}=\bigcup\limits_{\mathrm{e}=1}^{n_{\Omega}}\Omega^{(\mathrm{e})},\qquad\partial\mathcal{B}_{\mathrm{C}}\approx\partial\mathcal{B}_{\mathrm{C}}^{\mathrm{h}}=\bigcup\limits_{\mathrm{e}=1}^{n_{\Gamma}}\Gamma^{(\mathrm{e})},$ (11)
where $\Omega^{(\mathrm{e})}$ represents a single finite element of the geometric approximation $\mathcal{B}^{\mathrm{h}}$ of the bulk $\mathcal{B}$, while $\Gamma^{(\mathrm{e})}$ denotes an element of the discretization $\partial\mathcal{B}_{\mathrm{C}}^{\mathrm{h}}$, which in turn approximates $\partial\mathcal{B}_{\mathrm{C}}$, Fig. 3.
Figure 3: FEM approximation of the bulk and the interface.
Given the hypothesis of a conformal contact interface, matching nodes on the overlying surface can be identified in correspondence with those on the bulk's boundary, and $\Gamma^{(\mathrm{e})}$ can be defined as an interface finite element in analogy to CZM for fracture [63]. Here, the elements are characterized by two facets, one belonging to $\partial\mathcal{B}_{\mathrm{C}}^{\mathrm{h}}$ and one to the contacting rigid surface; the relative displacement of a pair of matching nodes is responsible for the exchange of reaction forces across the interface through the constitutive relations defined above. Figure 4 shows their layout for a 2D case, where the element coincides with a collapsed four-node quadrilateral (_quad_), and in 3D, where the element is analogous to a collapsed eight-node hexahedron (_hex_).
(a) (b)
Figure 4: 2D and 3D interface finite elements.
### 2.2 Gap field correction to account for roughness
The basic characteristics of the interface finite element derived above are suitable for the solution of conformal contact problems under small-displacement assumptions, with characteristics analogous to a segment-to-segment approach with fixed pairings. It has to be remarked that up to this point the formulation is also valid for the solution of deformable-to-deformable contact, since the only requirement is the presence of a conformal interface.
In [62, 76], an extension has been proposed to analyze rigid-to-deformable non-conformal contact problems, from standard curved indenters up to quasi-fractal wavy or fractal rough surfaces. While the interested reader is referred to the articles above for a detailed derivation of the method, only the underlying idea is presented in the following.
(a) (b)
Figure 5: Interface discretization with embedded roughness.
According to Fig. 5, starting from the conformal configuration, a rigid contacting surface of arbitrary geometry can be taken into account through a suitable correction of the gap field defined in Eq. (1). If a local reference system is set on $\partial\mathcal{B}_{\mathrm{C}}^{\mathrm{h}}$, an elevation field marking the deviation from planarity between the smoothed and the real geometry can be introduced. In the simplest case of an interface geometry analytically defined by a function $z(\mathbf{x})$, the corrected gap reads $g_{\mathrm{n}}^{\ast}=g_{\mathrm{n}}+z(\mathbf{x})$. The use of the modified gap in the derivation of the system's stiffness matrix accounts for the complex geometry without the need to represent it explicitly during the FE discretization process.
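The core of the gap correction can be condensed into a few lines. The sketch below is a simplified illustration of the idea, not the authors' element routine; the sinusoidal elevation field and all names are hypothetical.

```python
import numpy as np

def corrected_gap(g_n, x, z):
    """MPJR gap correction: g_n* = g_n + z(x).

    g_n is the nominal gap computed from the displacements of the flat
    interface element; z(x) embeds the rough geometry without meshing it."""
    return g_n + z(x)

# Hypothetical sinusoidal profile of amplitude z0 and wavelength lam.
z0, lam = 1.0e-2, 2.0
z = lambda x: z0 * np.cos(2.0 * np.pi * x / lam)

x_gp = np.linspace(-1.0, 1.0, 9)     # integration-point abscissae (illustrative)
g_nominal = np.zeros_like(x_gp)      # conformal configuration: zero nominal gap
print(corrected_gap(g_nominal, x_gp, z))
```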
Once the correction of the gap function is introduced, the method applied to
two deformable bodies is still able to account for the effect of the elastic
contact interactions in the bulk. However, second order effects, which would
modify the local elevations of the embedded rigid profile, are not accounted
for at the moment. A possible strategy to overcome this aspect could be the
introduction of an update of the embedded elevation function $z(\mathbf{x})$
based on the deformation of the underlying bulk.
Therefore, considering a rigid indenter, the contact problem can be simulated with a standard FE discretization of the bulk material, accompanied by a single layer of interface finite elements over the active contact set, which stores the contact geometry information in the form of a corrected gap.
At this stage, boundary conditions (BCs) can be applied to the model by constraining the nodal pairs of the interface finite elements opposite to the bulk and applying the load in the form of Dirichlet or Neumann BCs to the bulk nodes, i.e. considering the surface to be fixed with motion only possible for the deformable body, Fig. 6(a). Alternatively, the load can be applied directly to the indenter, as a rigid act of motion or as concentrated or distributed forces, by deploying an additional layer of standard finite elements on the free side of the interface and applying the desired BCs to them, Fig. 6(b); here $\mathbf{t}_{0}$ and $\mathbf{u}_{0}$ represent applied nodal forces and displacements, respectively. To preserve the hypothesis of rigidity, however, these elements must be assigned a high stiffness compared to the bulk material.
A third approach can be conceptualized as well, where a rigid act of motion is directly applied to the rigid surface in the form of a suitable time dependence of the elevation field, which in this case would read $z=z[\mathbf{x}+\boldsymbol{\Omega}(t)]$, where $\boldsymbol{\Omega}(t)$ is a three-dimensional curve, parametrized in time, that describes the act of motion of the rigid surface, Fig. 6(c). The study of this methodology of constraint enforcement goes beyond the scope of the present publication and is left for further studies. Some preliminary results, though, have been presented in [67], where the concept has proven applicable to the analysis of tangential motion over long sliding distances, although still within a small-strain theory. It has to be remarked that the inability to consider long sliding distances is actually a limitation of the first two ways of enforcing BCs presented in this article: in a contact scheme that requires matching nodes at the interface, the variation of the elevation field $z(\mathbf{x})$ caused by lateral sliding is not taken into account, which limits the analysis to infinitesimal sliding distances.
To conclude this section, two different approaches are available for assigning the correct elevation field to each element's Gauß points. The rough surface employed in the contact simulation can either be hard-coded in the element routine (in case it can be defined analytically) or stored in an external file as a three-column matrix of $[x,y,z]$ values and read as a look-up table (this latter solution being necessary when the surface comes directly from topographic measurements, such as those obtained from photogrammetry or confocal profilometry).
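A minimal sketch of the look-up-table route is shown below, assuming a plain-text file with one $[x,y,z]$ triplet per row; the nearest-neighbour query reflects the matching between interface points and surface samples, and the file name and function names are hypothetical.

```python
import numpy as np

def load_elevation_table(path):
    """Read a three-column [x, y, z] file and return a nearest-neighbour
    look-up function for the elevation field z(x, y)."""
    data = np.loadtxt(path)            # shape (n_points, 3)
    xy, z = data[:, :2], data[:, 2]

    def z_of(x, y):
        # nearest sampled point; adequate when interface nodes coincide
        # with (or lie very close to) the measured samples
        i = np.argmin((xy[:, 0] - x) ** 2 + (xy[:, 1] - y) ** 2)
        return z[i]

    return z_of

# z_surface = load_elevation_table("surface_xyz.dat")  # hypothetical file
# print(z_surface(0.0, 0.0))
```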
(a) (b) (c)
Figure 6: Different procedures for enforcing Dirichlet ($\mathbf{u}_{0}$) or
Neumann ($\mathbf{t}_{0}$) BCs over the rigid indenter and the deformable
bulk.
## 3 Numerical examples
In this section, new results on the 2D analysis of rough profiles, verified against BEM simulations, are provided. Then, adhesive contact problems, also including friction, are solved for wavy profiles. Moreover, one benchmark test and two larger-scale applications are shown to prove the capability of the method to handle full-scale 3D complex morphologies.
### 3.1 MPJR validation in 2D with BEM
The normal frictionless indentation of an elastic layer of finite depth by a rough profile is addressed here, and the results are compared with the BEM solution of the same problem. The profile is obtained using a Random Midpoint Displacement (RMD) algorithm, often employed for the generation of rough surfaces characterized by a given fractal dimension $D$ [77]; see also [11, 78, 14, Ch. 16, pp. 357–358] for more details, and [79] for a possible numerical implementation of this fractal surface generation algorithm, capable of creating elevation fields with a given Hurst exponent $H$ and height probability distribution.
The 2D profile used here has been obtained as a section cut of a 3D rough surface generated with the numerical procedure described in [11]. The section cut is performed in correspondence with the highest summit, i.e. the first point expected to come into contact during the indentation process. In the benchmark test we set a surface fractal dimension $D=2.2$, a random seed uniformly distributed in $[-1,+1]$, and a random function with Gaussian distribution and initial standard deviation $\sigma_{0}=2.357$, generating a height field spanning one decade of length scales and characterized by $N=2049$ elevation points equally spaced in the horizontal direction.
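For readers unfamiliar with the RMD construction, the sketch below generates a 1D self-affine profile by recursive midpoint displacement. It is a textbook variant of the algorithm (cf. [77, 79]) and not the exact generator used for the benchmark; the 2049-point count matches the text, while the per-level variance reduction $2^{-H}$ is the standard choice.

```python
import numpy as np

def rmd_profile(levels=11, H=0.8, sigma0=1.0, seed=0):
    """1D random midpoint displacement.

    At each refinement level, segment midpoints are offset by Gaussian noise
    whose standard deviation is reduced by 2**(-H) per level, yielding a
    self-affine profile with Hurst exponent H and 2**levels + 1 points."""
    rng = np.random.default_rng(seed)
    n = 2 ** levels
    z = np.zeros(n + 1)
    sigma, step = sigma0, n
    while step > 1:
        half = step // 2
        mid = np.arange(half, n, step)          # midpoint indices at this level
        z[mid] = 0.5 * (z[mid - half] + z[mid + half]) \
                 + rng.normal(0.0, sigma, mid.size)
        sigma *= 2.0 ** (-H)
        step = half
    return z

profile = rmd_profile(levels=11)                # 2049 points, as in the benchmark
```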
Figure 7: Sketch of the problem under examination.
The profile is considered as the boundary of a rigid indenter that makes contact with a linear elastic layer of finite unitary depth $b$ that spans indefinitely in the horizontal direction and rests on a frictionless rigid foundation. The rough profile spans horizontally over a length of $2b$ and has an overall height of $g_{0}=1.0\times 10^{-2}b$, measured from the lowest valley to the highest peak. The elastic layer is characterized by a Young's modulus $E=1\,\mathrm{MPa}$ and a Poisson's ratio $\nu=0.3$. The load is applied under displacement-controlled conditions: a downward imposed vertical displacement, linearly increasing from zero up to the value $\Delta_{0}=3g_{0}$, is applied in fifteen pseudo-time steps. The problem is sketched in Fig. 7; plane strain assumptions hold.
For its solution with the proposed method, the following FEM implementation has been set up. First, the elastic layer has been modeled using standard _quad_ bilinear finite elements. Since the solution is focused on the contact interface, mesh grading has been applied, resulting in a finer resolution in the zone of interest, where a one-to-one correspondence holds between the interface nodes and the profile sampling points, Fig. 8(a). Finally, the bulk is truncated in the horizontal direction at a distance of $b/2$ on the left and right sides of the contact zone, since mesh convergence studies showed that a larger extent does not affect the quality of the results.
(a) (b)
Figure 8: FEM implementation required for the problem’s solution.
In the contact zone, a single layer of interface finite elements $\Gamma^{(\mathrm{e})}$ is deployed over the bulk elements, their lower nodes matching the boundary nodes of the bulk, for a total of $n_{\Gamma}=N-1$. This is where the geometric information of the rough profile is stored element-wise, and the actual normal gap is evaluated as a correction of the original one, Fig. 8(b). The arrangement is completed by a single structured layer of standard _quad_ elements, tied to the interface finite elements and much stiffer than the bulk elements, devoted to receiving the enforcement of the boundary conditions and transmitting them to the upper nodal pairs of the interface finite elements, cf. Fig. 8(b) and 6(b). Specifically, these elements have been assigned a Young's modulus $E_{\mathrm{r}}=1.0\times 10^{3}E$. Finally, a normal penalty parameter $\varepsilon_{\mathrm{n}}=1.0\times 10^{3}E/b$ has been used.
To provide a benchmark solution, the same problem has been solved with a BEM framework developed for 2D plane strain contact problems. Specifically, the Green function employed reproduces the displacement field at the free boundary of a linear elastic layer of finite depth, resting frictionless on a rigid foundation, when a uniform pressure is applied over a limited strip; its expression can be found in [39]. Apart from the different Green function, the BEM implementation and related details are the same as in [32].
The shape of the indenting profile is shown in Fig. 9 (solid blue line), together with the solutions delivered by the FEM (solid red line) and BEM (black dashed lines) procedures, in terms of surface displacements $u_{z}(x)$. The plot is a snapshot taken at $t=t_{\mathrm{f}}/3$, so that the imposed displacement corresponds to $g_{0}$. Qualitatively, perfect agreement is observed between the two solutions.
Figure 9: Deformation of the elastic boundary under imposed normal far-field displacement, with detail of the indenting profile geometry.
Figure 10: Comparison of BEM vs. FEM: results in terms of the normal traction field at the interface.
A quantitative comparison is now made in terms of the interface normal tractions $p(x)$. To obtain statistically representative results, several simulations have been performed: given the same loading conditions, mesh sizes, and mechanical parameters, different profiles are generated. More specifically, ten different values of the Hurst exponent have been set, linearly varying from $H=0.75$ to $H=0.85$; for each of these values, ten different random seeds have been used in the generation process, for a total of $100$ different profiles. Figure 10 shows a specific solution, related to the normal traction field along the contacting interface, for both the FEM (blue round markers) and BEM (red triangular markers) solutions, at a given time step. In the top-right magnified panel, some small discrepancies between the two results can be noticed, but very good agreement is still observed. In the authors' opinion, such differences are to be ascribed to the kind of profile employed here, i.e. a scattered elevation field, which can be considered a worst-case scenario for a contact mechanics problem solved using FEM. This hypothesis is supported by the perfect agreement on contact tractions that is obtained, on the other hand, when the benchmark is performed for a smooth indenting profile, see for example [76]. Finally, Fig. 11 quantitatively reports the mean absolute relative error in terms of displacements at the interface and total reaction force between FEM and BEM, evaluated over all the profiles employed and every point of the contacting interface, plotted for every time step of the analysis. The transparency bands denote the standard deviation of the error distribution at every time step. The expression of the error reads:
$e_{\mathrm{r}}=\frac{1}{n_{\mathrm{s}}n_{\mathrm{p}}}\sum_{i=1}^{n_{\mathrm{s}}}\sum_{j=1}^{n_{\mathrm{p}}}\left\lvert\frac{u^{(\mathrm{f})}_{ij}-u^{(\mathrm{b})}_{ij}}{u^{(\mathrm{f})}_{ij}}\right\rvert,$ (12)
where superscripts $(\mathrm{f})$ and $(\mathrm{b})$ stand for FEM and BEM simulations, respectively; an analogous expression holds for the error over the normal reaction forces $N$. Both error estimates deliver very good results, with values rapidly approaching zero as the load increases.
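Evaluating Eq. (12) is a one-liner in numpy; the array shapes below mirror the $100$ profiles and $2049$ interface points of this benchmark, while the test fields themselves are fabricated purely for illustration.

```python
import numpy as np

def mean_abs_rel_error(u_f, u_b):
    """Eq. (12): mean absolute relative error between FEM (u_f) and BEM (u_b)
    fields, averaged over n_s profiles and n_p interface points; both inputs
    are arrays of shape (n_s, n_p)."""
    return np.mean(np.abs((u_f - u_b) / u_f))

u_f = np.ones((100, 2049))          # illustrative, nearly identical fields
u_b = u_f * (1.0 + 1e-3)
print(mean_abs_rel_error(u_f, u_b))  # ~1e-3
```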
Figure 11: Error estimate.
### 3.2 Frictional response with adhesion for a wavy profile
The second example is characterized by more complex constitutive relations, including friction and adhesion. The profile adopted here is simpler, but it still exhibits the standard difficulties characterizing the solution of such problems with other state-of-the-art numerical methods, namely the non-compactness of the contact domain, the use of non-convex constitutive relations, and the presence of finite bulk dimensions.
The numerical simulation consists of the indentation of a finite-depth elastic layer by a rigid wavy profile made of the superposition of two harmonics, deriving from the truncation of a Weierstrass profile defined by the following expression:
$z(x)=z_{0}\sum_{i=0}^{\infty}\gamma^{(D-2)i}\cos\left(2\pi\dfrac{\gamma^{i}x}{\lambda_{0}}\right).$ (13)
Its geometry is obtained by setting $H=0.75$, $\gamma=5$, $z_{0}=1.0\times 10^{-1}l_{0}$ and $\lambda_{0}=2l_{0}$, where $H$ and $\gamma$ are the Hurst exponent and the base of the wavelength's geometric progression across the scales, while $\lambda_{0}$ and $z_{0}$ are the fundamental wavelength and amplitude. The bulk has been modeled as a rectangular elastic block, considered perfectly bonded at the lower base, with periodic boundary conditions applied on both vertical sides at $x=\pm l_{0}=\pm 10\,\mathrm{\mu m}$. A Young's modulus $E=20.0\,\mathrm{MPa}$ and a Poisson's ratio $\nu=0.3$ have been assigned to the linear elastic bulk. The model employed for reproducing the tangential behavior is in accordance with Eq. (9), with a coefficient of friction $\mu=0.2$ and a cut-off on friction forces $g_{\mathrm{c}}=g_{0}$. The two parameters chosen for the adhesion law are the maximum adhesive pressure $p_{\mathrm{m}}=0.330\,\mathrm{MPa}$ and a work of adhesion $W=0.027\,\mathrm{J/m^{2}}$. The chosen values result in $g_{\mathrm{p}}\approx 1.0\times 10^{-2}l_{0}$, thus keeping the transition from negative to positive normal contact tractions appreciable with a reasonably fine discretization of the interface. Specifically, $2048$ interface finite elements have been employed for sampling a region corresponding to the fundamental wavelength $\lambda_{0}$.
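A short sketch of the truncated profile of Eq. (13), with the parameter values listed above, follows; the relation $D=2-H$ for a profile is our assumption, consistent with the quoted $H$.

```python
import numpy as np

# Truncated Weierstrass profile of Eq. (13), kept to two harmonics as in
# this example; D = 2 - H for a profile is our assumption.
l0 = 10.0e-6                          # half-width of the domain, x in [-l0, +l0]
H, gamma = 0.75, 5.0
D = 2.0 - H
z0, lam0 = 0.1 * l0, 2.0 * l0

def z_profile(x, n_harmonics=2):
    i = np.arange(n_harmonics)[:, None]
    return z0 * np.sum(gamma ** ((D - 2.0) * i)
                       * np.cos(2.0 * np.pi * gamma ** i * x / lam0), axis=0)

x = np.linspace(-l0, l0, 2049)
z = z_profile(x)                       # two-harmonic wavy profile
```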
The finite element arrangement is analogous to the one presented in the previous case study: standard finite elements have been used for modeling the bulk, a single layer of interface finite elements is deployed over the active contact zone, and, on top of that, a layer of standard finite elements, much stiffer than the bulk elements, is devoted to the application of the BCs.
The simulation is set up under displacement control and solved in two stages of equal length, each consisting of $25$ pseudo-time steps, spanning from $t=0$ to a unitary $t=t_{\mathrm{f}}$. In the first phase, the profile is brought into contact by increasing a vertical far-field imposed displacement. The related solution is depicted in Fig. 12(a) in terms of normal contact tractions $p_{\mathrm{n}}$, cyan ($t=0$) to blue ($t=t_{\mathrm{f}}/2$) curves, and tangential tractions $q_{\tau}$, yellow ($t=0$) to magenta ($t=t_{\mathrm{f}}/2$) curves. Both sets are normalized with respect to the highest value of the adhesive pressure, considered negative as opposed to positive contact tractions, as customary.
(a) (b)
Figure 12: Example with friction, adhesion, and wavy profile.
Each curve corresponds to a single pseudo-time step of the simulation. As the indentation process advances, the three central asperities merge, forming a single cluster, while adjacent contact zones not yet connected are separated by depressed regions with a normal gap greater than $g_{\mathrm{p}}$ but still displaying an appreciable effect of the adhesive forces. In this phase, no tangential loading is applied, so the related distribution of tangential forces is anti-symmetric and self-equilibrated.
In Fig. 12(b), results are shown for the horizontal motion applied to the indenter after fixing the vertical imposed displacement. Excluding slight variations due to normal-tangential coupling, the normal tractions are now constant (solid blue line in the same figure), and the transition from a stick/slip regime to a full-slip condition takes place, as can be seen from the final perfect overlap between the normal and tangential tractions scaled by $\mu$. The evolution of the tangential tractions can be traced through the transition from the magenta curves ($t=t_{\mathrm{f}}/2$) to the yellow curves ($t=t_{\mathrm{f}}$).
### 3.3 3D simulations
The approach formulated in Sec. 2 is now applied to 3D contact. The framework is first validated against the classic Hertz problem. Then, two larger-scale applications involving complex surfaces under generic loading conditions are presented: $(i)$ frictionless normal contact of an RMD rough surface; $(ii)$ contact with a rigid indenter characterized by a _Weierstrass-Mandelbrot_ self-affine surface, considering the presence of friction at the interface and loading in the form of an oblique far-field displacement.
#### 3.3.1 Hertzian contact problem
The Hertzian contact problem is used as a benchmark for the proposed 3D implementation. In the classic formulation of the problem, a paraboloid is employed as a first-order approximation of a rigid spherical surface with radius $R=100\,\mathrm{mm}$, which comes into contact with a deformable, linear elastic half-space. The problem is radially symmetric, and the solution is given in terms of the contact radius $a$ and the ellipsoidal normal contact traction distribution $p(\mathbf{x})$, with $P$ being its resultant. Given a vertical imposed displacement $\Delta_{\mathrm{n}}$, the aforementioned quantities read:
$a=\sqrt{R\Delta_{\mathrm{n}}},\qquad P=\frac{4}{3}\frac{E}{1-\nu^{2}}\sqrt{R\Delta_{\mathrm{n}}^{3}},$ (14)
with $E$ and $\nu$ the Young's modulus and Poisson's ratio of the half-space, respectively. The comparison is carried out under a monotonically increasing vertical displacement, from zero up to $\Delta_{0}=5\times 10^{-5}R$, with constant time steps. The values chosen for the bulk are $E=1.0\,\mathrm{MPa}$ and $\nu=0.0$. Numerical simulations are performed assuming both frictionless and frictional interfaces, to highlight the differences that arise due to coupling and affect the normal response even in the absence of a direct tangential load.
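The closed-form benchmark quantities of Eq. (14) are easily scripted; the sketch below uses the parameters of this subsection, and the peak-pressure formula $p_{0}=3P/(2\pi a^{2})$ is the classical Hertz result, added here for completeness.

```python
import numpy as np

# Hertz closed-form solution, Eq. (14), with the benchmark parameters
# (consistent units assumed: E in MPa, R in mm, forces in N).
R, E, nu = 100.0, 1.0, 0.0
delta_n = np.linspace(0.0, 5.0e-5 * R, 16)[1:]   # imposed displacement history

a = np.sqrt(R * delta_n)                          # contact radius
P = (4.0 / 3.0) * E / (1.0 - nu ** 2) * np.sqrt(R * delta_n ** 3)
p0 = 3.0 * P / (2.0 * np.pi * a ** 2)             # classical Hertz peak pressure
```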
Given the problem's symmetry, only a quadrant of the half-space has actually been modeled, discretized as a quarter-cylinder with rigid constraints at the lower base and the round lateral surface, and tangential constraints on the two flat lateral surfaces. A layer of interface finite elements containing the shape of the indenting parabolic surface is located at the center of the top surface. The cylinder's radius and height have been increased until their influence on the simulation results vanished, thus guaranteeing the equivalence of the FEM bulk response with the one expected for the half-space contact problem. A mesh convergence study has been performed for the discretization of the contact zone: three different mesh resolutions have been employed, using square regular grids of $8\times 8$, $16\times 16$, and $32\times 32$ interface elements, respectively, with a lateral size $L=1.0\times 10^{-2}R$. The problem setup is shown in Fig. 13 for the $8\times 8$ resolution, while the results provided in the following paragraphs all refer to the finest resolution. Finally, a penalty parameter $\varepsilon_{\mathrm{n}}=1\times 10^{8}E/R$ has been employed.
(a)
(b)
Figure 13: Problem setup, characterized by the deformable bulk and a square contact patch of interface finite elements. In the current case, a paraboloid surface is embedded in the contact elements, whose shape can be appreciated in transparency. BCs in the form of an imposed downward displacement are applied on top of the interface element layer, and the resulting traction field is transmitted to the bulk. Contour plots show the resulting vertical displacements $u_{z}$ (a) and the resulting Cauchy stress $\sigma_{z}$ (b).
The solution in terms of the contact reaction force, against the analytic reference solution, can be observed in Fig. 14(a), together with the surface plot of the corresponding normal tractions, Fig. 14(b). Both the frictionless ($\mu=0.0$) and frictional ($\mu=0.4$) numerical solutions show a stiffer behavior compared with the exact one. As expected, the frictional case is the stiffest, since the application of the vertical load causes in-plane horizontal displacements, which are counteracted by the presence of friction. The highest coupling effect occurs for $\nu=0.0$, while as Poisson's ratio tends to $0.5$ uncoupling conditions are met and the effect is expected to vanish. The percentage differences between the cases $\mu=0.4$ and $\mu=0.0$ are in line with theory; the interested reader is referred to [14, Ch. 7, pp. 129–130] for a comparison between the presented application and the corresponding coupled axisymmetric problem without slip, which represents the scenario opposite to the absence of friction. A small but appreciable difference remains between the frictionless case and the reference solution. Even if the results of the validation test can be considered fully satisfactory, since they have been obtained with a rather coarse mesh, a different and more accurate contact strategy appears appealing for the future systematic use of the method, for which the penalty-based strategy is not a strict prerequisite.
Figure 14(b) shows how the numerical simulation reproduces the characteristic ellipsoidal distribution. The stiffening effect due to geometrical coupling can be quantified by the ratio between the maximum value predicted by the analytical model and the one obtained by the simulation, $p_{0}/p_{\mathrm{max}}=0.8105$. The solution in terms of the contact radius is also checked: for the chosen interface discretization, a relative error of $1.612\%$ is found at the last loading step. This result is shown in Fig. 14(c), where the exact value of the contact radius, thick solid red line, is superposed on the contour plot of the normal contact tractions.
(a) (b) (c)
Figure 14: Comparison between numerical and analytical solutions for the Hertz
problem.
In Fig. 15, the tangential contact tractions are shown. Figure 15(a) presents the projection of the tangential traction on the first coordinate direction. Since only normal loading is involved and the profile is symmetric, the tractions form a self-equilibrated distribution, symmetric with respect to $y=0$. The magnitude of the tangential tractions, $\lVert\mathbf{q}_{\tau}\rVert=\sqrt{q_{\tau,1}^{2}+q_{\tau,2}^{2}}$, is represented in Fig. 15(b). Again, because of the loading conditions, the distribution is characterized by polar symmetry, with a null value only at the origin, the only point that does not experience in-plane tractions. The remaining domain is split in the radial direction into two annular regions: an inner one for which $\lVert\mathbf{q}_{\tau}\rVert<\mu p_{\mathrm{n}}$, which is therefore in a state of stick, and an outer one which radially slips under the action of the punch load. The stick/slip partition can be determined by evaluating the ratio between the stick radius and the contact radius, with the result $r_{a}/r_{b}=0.9130$. Since the stick area scales with the square of this ratio, roughly $15\%$ of the contact area is in a partial-slip state, even for this rather high coefficient of friction and no applied tangential load. This fact might be relevant in cases where micro-slip related phenomena are considered, for example in the study of fretting wear and fretting fatigue [80, 81].
(a) (b)
Figure 15: Surface plot of tangential tractions for the Hertz problem with
friction.
#### 3.3.2 Contact of rough surfaces
In this section, two different kinds of quasi-fractal rough surfaces are tested: first a rough surface generated using an RMD algorithm, then a wavy Weierstrass-Mandelbrot (WM) quasi-fractal surface. Two different methodologies have been used for assigning the elevation field to each element's Gauß points. The WM surface has been hard-coded inside the finite element routine, since it can be defined analytically. The RMD surface, which lacks an analytical description, has been stored in an external file as a three-column matrix of $[x,y,z]$ values and read as a look-up table; this solution is necessary whenever the surface comes directly from topographic measurements, such as those obtained from a confocal profilometer.
In analogy with Sec. 3.1, the first case is considered particularly interesting, since a contact problem involving this type of surface can be particularly challenging for standard contact search algorithms, given the scatter in the height distribution and the total lack of smoothness.
Both simulations are performed on the same mesh, which is structured in three layers stacked on top of each other: the bottom layer models the bulk; the middle layer is composed of interface finite elements, where the indenter's geometry is sampled; and the top layer receives the Dirichlet BCs, according to the scheme depicted in Fig. 6(b). Standard trilinear _hex_ elements have been employed for modeling the first and last layers.
The indenters are sampled by an array of $128\times 128$ square interface finite elements. The bulk elements have a height-to-width ratio of $5$ which, given the square nominal contact area of side $2L$ with $L=1\,\mathrm{mm}$ and the number of elements employed, gives an overall depth of $h_{\mathrm{b}}=0.1563L$. Fig. 16 shows an overview of the mesh employed. The problem setup is completed by its mechanical characterization: the bulk is considered linear elastic, with a Young's modulus $E=1.0\,\mathrm{MPa}$ and a Poisson's ratio $\nu=0.0$, while the material used for the application of the BCs is considered three orders of magnitude stiffer than the bulk. The normal penalty parameter has been taken as $\varepsilon_{\mathrm{n}}=1\times 10^{3}E/h_{\mathrm{b}}$. Finally, no restraints are imposed on the free lateral surfaces of the elastic bulk.
Figure 16: FEM mesh, interface discretized with $128\times 128$ interface
finite elements.
##### Results for RMD surface
A self-affine rough surface, obtained with the same procedure as in Sec. 3.1, is now used for testing the 3D implementation. The surface is generated with the RMD algorithm using a fixed random seed $r=0.547$, a Hurst exponent $H=0.75$, and a random function with Gaussian distribution and starting standard deviation $\sigma_{0}=2.357$. The resulting elevation field is shown qualitatively in Fig. 17(a).
(a) (b)
Figure 17: Surface employed in 3D simulations and resultant traction field.
In this case, a frictionless normal indentation problem is solved. The load is applied as an imposed far-field displacement $\Delta_{\mathrm{n}}$ on the top layer of rigid elements, linearly varying from zero up to a maximum of $\Delta_{0}=g_{0}=1.0\times 10^{-2}L$, discretized in $20$ pseudo-time steps. Again, $g_{0}$ represents the amplitude of the surface measured from the lowest valley to the highest summit. The load history is plotted in Fig. 18(a) in terms of the imposed normal far-field displacement $\Delta_{\mathrm{n}}$, together with the resultant normal reaction force $P$, scaled by their maximum values $\Delta_{0}$ and $P_{0}=0.637E/L^{2}$, with $t_{\mathrm{f}}$ the final instant of the simulation. The maximum imposed displacement has been chosen high enough to map the evolution of the actual contact area $A_{\mathrm{c}}$, from a single contacting asperity at $t=0$ to full contact at $t=t_{\mathrm{f}}$, as can be seen in Fig. 18(b), where this quantity is plotted scaled by the nominal contact area $A_{0}=4L^{2}$. Finally, the contour plot of the full normal traction field is reported in Fig. 17(b), with a peak value $p_{0}=0.308E$.
(a) (b)
Figure 18: Solution of the indentation problem for an RMD fractal surface.
##### WM with friction
The second full-scale simulation is performed considering the presence of friction at the interface, with a coefficient of friction $\mu=0.2$. The indenter's surface is a quasi-fractal Weierstrass-Mandelbrot surface [78, 77, 14, Ch. 16, pp. 356] defined by the function:
$z(x,y)=A\sum_{n=1}^{N}\sum_{m=1}^{M}\gamma^{(D-3)(n-1)}\left[\cos\phi_{m,n}-\cos\left(\frac{2\pi\gamma^{n-1}}{\lambda_{0}}\Bigl(x\cos\frac{\pi m}{M}+y\sin\frac{\pi m}{M}\Bigr)+\phi_{m,n}\right)\right],$ (15)
characterized by the parameters in Tab. 1 and shown in Fig. 19(a). The matrix $\Phi$ collects the random phase angles employed in the surface generation process.
(a) (b)
Figure 19: Surface employed in 3D simulations and resultant traction field.
Table 1: Weierstrass-Mandelbrot surface coefficients.
| $z_{0}$ $[\mathrm{m}]$ | $\lambda_{0}$ $[\mathrm{m}]$ | $G$ $[-]$ | $D$ $[-]$ | $\gamma$ $[-]$ | $N$ $[-]$ | $M$ $[-]$ |
|---|---|---|---|---|---|---|
| $1.00\times 10^{-3}$ | $1.00\times 10^{0}$ | $3.00\times 10^{0}$ | $2.25\times 10^{0}$ | $1.30\times 10^{0}$ | $8$ | $10$ |
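The surface of Eq. (15) can be sampled directly on the $128\times 128$ interface grid; the sketch below uses the parameters of Tab. 1, draws its own random phase matrix $\Phi$, and rescales the raw field to the amplitude $z_{0}$, since the normalization constant $A$ is not reproduced here (our simplification).

```python
import numpy as np

# Weierstrass-Mandelbrot surface of Eq. (15) with the parameters of Tab. 1.
lam0, D, gamma, N, M = 1.0, 2.25, 1.3, 8, 10
z0, L = 1.0e-3, 1.0
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(M, N))   # random phase matrix Phi

def z_wm(x, y):
    z = np.zeros_like(x, dtype=float)
    for n in range(1, N + 1):
        k_n = 2.0 * np.pi * gamma ** (n - 1) / lam0
        for m in range(1, M + 1):
            arg = (k_n * (x * np.cos(np.pi * m / M) + y * np.sin(np.pi * m / M))
                   + phi[m - 1, n - 1])
            z += gamma ** ((D - 3.0) * (n - 1)) * (np.cos(phi[m - 1, n - 1])
                                                   - np.cos(arg))
    return z

xs = np.linspace(-L, L, 129)                       # 128 x 128 element grid nodes
X, Y = np.meshgrid(xs, xs)
Z = z_wm(X, Y)
Z *= z0 / (Z.max() - Z.min())                      # rescale to amplitude z0 (our choice)
```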
The load is again applied as a far-field displacement on the top layer of the mesh, this time also including an imposed horizontal motion. The loading is considered quasi-static and discretized in $60$ pseudo-time steps, ranging from zero to $3t_{0}$ and divided into three stages. In the first, from zero to $t_{0}$, a purely vertical displacement is applied, from zero up to $\Delta_{0}=3.0\times 10^{-1}g_{0}$. The normal displacement is then held constant, while the indenter is shifted along the $x$ direction with constant positive velocity, reaching a maximum value $\Delta_{\tau,0}=\mu\Delta_{0}$ at $2t_{0}$. Finally, in the third phase, the indenter is linearly shifted back to its original position, reached at $3t_{0}$. Figure 20 shows the applied far-field displacement history, together with the resultant overall interface reactions, evaluated as the integral of the interface normal and tangential tractions.
The ratio between the normal indentation and the elastic layer thickness is $\Delta_{0}/h_{\mathrm{b}}=1.92\,\%$, in line with the assumption of elastic deformation of the bulk. Still, the surface characteristics have been tailored to obtain a high final actual contact area, so as to investigate the contact response from high to low mean-plane separations.
Figure 20: Far field displacement and resultant load vs. time.
For the WM simulation, the resulting force response is also shown in Fig. 20. The vertical reaction force $P$ follows a characteristic power-law behavior as long as the load is incremented, then remains constant. During the first stage, parasitic reaction forces $Q_{1}$ and $Q_{2}$ arise, due to the displacement-controlled nature of the simulation and the lack of symmetry of the indenting profile. During the second stage, $Q_{1}$ increases and a condition of full slip is almost reached, with the maximum value at $2t_{0}$ approximately equal to $0.85\mu P$. Beyond this point, the displacement is reversed and the indenter is taken back to its original position. We observe a residual negative horizontal force, a manifestation of the system's hysteresis that can be directly linked to the frictional energy dissipation.
The contour plot of the normal tractions $p(\mathbf{x})$ at $t_{0}$ is shown in Fig. 19(b). It can be seen that, for the selected level of indentation, a contact area ratio $A_{\mathrm{c}}/A_{0}\simeq 45\%$ is reached. A clear distinction holds between the contact islands and the domain that does not experience contact, characterized by a homogeneous cyan color.
#### 3.3.3 Computational performance
(a) (b)
Figure 21: Comparison of the solver performance between the two full-scale examples addressed, together with an analogous problem characterized by a lower number of degrees of freedom.
Results for both the RMD and the WM surfaces are compared at the end of this section in terms of the computational time required and the convergence properties. The performance of the proposed method is compared for both surfaces along the first load branch, i.e. from zero to the $20^{\mathrm{th}}$ time step. Each simulation ran sequentially on a single node with an Intel Xeon E5-4620 processor and $256\,\mathrm{GB}$ of RAM. In the solution process, a full Newton-Raphson scheme has been employed, together with a direct solver based on Gaussian elimination for the inversion of the global tangent stiffness matrix. For the simulations involving friction, an implicit backward Euler time-stepping scheme has been employed, while dynamical forces have not been taken into account.
Figure 21(a) shows the time employed by a complete run of all the simulations performed. For comparison purposes, results obtained with a lower number of degrees of freedom (surfaces modeled on $64\times 64$ element grids) are plotted as well in the same figure. As expected, the most critical factor is the number of degrees of freedom characterizing the different examples. Simulations with an equivalent number of degrees of freedom have almost identical CPU times, regardless of the surfaces' smoothness; this is notable since, in contrast to the WM surface, the RMD surface consists of a scattered elevation field, which would be a very challenging scenario for standard contact search algorithms. To investigate how the presence of friction affects the performance of the code, the same problem with the WM surface has also been solved setting $\mu=0.0$. Comparing the results, only a slight difference is encountered, noticeable for the finest resolution only, with an increase of about $12\%$ in the overall computational time; the convergence properties are not significantly affected either. To conclude the section, Fig. 21(b) reports, for each time step of each simulation, the total number of iterations of the Newton-Raphson algorithm employed to solve the global nonlinear system of equations governing the problem. Again, no significant discrepancy is encountered despite the remarkable differences in smoothness characteristics. Furthermore, in the case of the WM surface, even friction does not significantly alter the convergence properties, requiring at most two additional iterations to reach convergence.
## 4 Conclusion and future perspectives
In this paper, an extension of the MPJR interface finite element is presented for the analysis of rough 2D and 3D contact problems. Good agreement has been found comparing the proposed implementation with solutions obtained from standard numerical frameworks for the frictionless normal contact problem of a rough RMD indenting profile. The setup also proved valid for the analysis of contact problems with wavy interfaces in the presence of friction and adhesion. The proposed formulation provides a way to overcome some of the major difficulties related to the solution of contact problems with roughness in state-of-the-art BEM and FEM formulations. With respect to classical BEM solvers, the proposed method allows considering any nonlinear constitutive relation for the bulk and for the interface. Moreover, it allows simulating finite-size geometries, and it naturally lends itself to extension to multi-field simulations (phase field for fracture mechanics in the bulk, thermo-elasticity, chemical reactions coupled with mechanics, etc.).
As far as standard FEM procedures are concerned, the methodology simplifies the discretization of the interface, which does not need to be explicitly included in the model geometry. This allows including any point-wise height field, or any analytical shape of 2D profiles or 3D surfaces, as a straightforward correction to the normal gap. In the case of simulations based on experimentally acquired profile/surface data (with AFM, confocal profilometry, or any other technique), the height field can be efficiently stored in a history variable, which is then compiled with the code and read by the FE software only once, at the initialization stage of the problem. This avoids repeated read-write operations on external files.
With the proposed formulation, the contacting interface is treated as nominally flat and the roughness is embedded at the interface level node-wise. The method therefore requires an interface discretization consistent with the number and spacing of the data points needed to accurately sample the indenter's boundary. This allows for an exact reproduction of the contacting geometry using low-order interpolation schemes, without the convergence issues that contact search algorithms can face with ill-defined normal vectors.
Future perspectives include the consistent application of the method to full-scale 3D contact problems under finite strain assumptions, for the study of phenomena where surface roughness plays a key role: wear problems, fracture induced by indentation or by the repeated application of contact loads, tire-asphalt interaction, nanoscale tribological tests based on AFM data, and multi-field tribological problems.
Finally, the authors are grateful to Jim Barber for the frequent scientific discussions they have had with him over their entire careers. These have inspired (and, we hope, will continue to inspire) many research ideas on the contact mechanics of rough surfaces that it would not have been possible to pursue without his input.
## Acknowledgements
JB and MP would like to thank the Italian Ministry of Education, University
and Research (MIUR) for the support to the Research Project of Relevant
National Interest (PRIN 2017) XFAST-SIMS: Extra-fast and accurate simulation
of complex structural systems (Grant agreement n. 20173C478N). DD would like
to acknowledge the support received from the Engineering and Physical Science
Research Council (EPSRC) via his Established Career Fellowship (EP/N025954/1).
## Declaration of interests
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## References
* [1] J. Barber, Thermal effects of friction and wear, Ph.D. thesis, University of Cambridge (1968).
* [2] Y. Ahn, J. Barber, Response of frictional receding contact problems to cyclic loading, International Journal of Mechanical Sciences 50 (10-11) (2008) 1519–1525. doi:10.1016/j.ijmecsci.2008.08.003.
URL https://linkinghub.elsevier.com/retrieve/pii/S0020740308001203
* [3] Y. Ahn, E. Bertocchi, J. Barber, Shakedown of coupled two-dimensional discrete frictional systems, Journal of the Mechanics and Physics of Solids 56 (12) (2008) 3433–3440. doi:10.1016/j.jmps.2008.09.003.
URL https://linkinghub.elsevier.com/retrieve/pii/S0022509608001488
* [4] J. Barber, M. Davies, D. Hills, Frictional elastic contact with periodic loading, International Journal of Solids and Structures 48 (13) (2011) 2041–2047. doi:10.1016/j.ijsolstr.2011.03.008.
URL https://linkinghub.elsevier.com/retrieve/pii/S0020768311001041
* [5] J. Barber, W. Hawthorne, M. Lighthill, Thermoelastic instabilities in the sliding of conforming solids, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 312 (1510) (1969) 381–394. arXiv:https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1969.0165, doi:10.1098/rspa.1969.0165.
URL https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1969.0165
* [6] J. Barber, The effect of thermal distortion on constriction resistance, International Journal of Heat and Mass Transfer 14 (6) (1971) 751–766. doi:10.1016/0017-9310(71)90105-0.
URL https://linkinghub.elsevier.com/retrieve/pii/0017931071901050
* [7] J. Barber, Some thermoelastic contact problems involving frictional heating, The Quarterly Journal of Mechanics and Applied Mathematics 29 (1) (1976) 1–13. doi:10.1093/qjmam/29.1.1.
URL https://academic.oup.com/qjmam/article-lookup/doi/10.1093/qjmam/29.1.1
* [8] J. Barber, Bounds on the electrical resistance between contacting elastic rough bodies, Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 459 (2029) (2003) 53–66. doi:10.1098/rspa.2002.1038.
URL https://royalsocietypublishing.org/doi/10.1098/rspa.2002.1038
* [9] J. Barber, Multiscale Surfaces and Amontons’ Law of Friction, Tribology Letters 49 (3) (2013) 539–543. doi:10.1007/s11249-012-0094-6.
URL http://link.springer.com/10.1007/s11249-012-0094-6
* [10] J. Barber, Incremental stiffness and electrical contact conductance in the contact of rough finite bodies, Physical Review E 87 (1) (2013) 013203. doi:10.1103/PhysRevE.87.013203.
URL https://link.aps.org/doi/10.1103/PhysRevE.87.013203
* [11] M. Paggi, J. R. Barber, Contact conductance of rough surfaces composed of modified rmd patches, International Journal of Heat and Mass Transfer 54 (2011) 4664–4672. doi:10.1016/j.ijheatmasstransfer.2011.06.011.
* [12] J. Barber, Elasticity, Online access with purchase: Springer, Springer Netherlands, 2002.
URL https://books.google.de/books?id=qUht0jvZcIoC
* [13] J. Barber, Intermediate Mechanics of Materials, Springer Netherlands, 2011.
URL https://books.google.de/books?id=qUht0jvZcIoC
* [14] J. Barber, Contact Mechanics, Vol. 250 of Solid Mechanics and Its Applications, Springer International Publishing, Cham, 2018. doi:10.1007/978-3-319-70939-0.
URL http://link.springer.com/10.1007/978-3-319-70939-0
* [15] M. Müser, W. Dapp, R. Bugnicourt, P. Sainsot, N. Lesaffre, T. Lubrecht, B. Persson, K. Harris, A. Bennett, K. Schulze, S. Rohde, P. Ifju, W. Sawyer, T. Angelini, H. A. Esfahani, M. Kadkhodaei, S. Akbarzadeh, J.-J. Wu, G. Vorlaufer, A. Vernes, S. Solhjoo, A. I. Vakis, R. L. Jackson, Y. Xu, J. Streator, A. Rostami, D. Dini, S. Medina, G. Carbone, F. Bottiglione, L. Afferrante, J. Monti, L. Pastewka, M. O. Robbins, J. A. Greenwood, Meeting the contact-mechanics challenge, Tribology Letters 65 (4) (2017) 118. doi:10.1007/s11249-017-0900-2.
* [16] A. Vakis, V. Yastrebov, J. Scheibert, L. Nicola, D. Dini, C. Minfray, A. Almqvist, M. Paggi, S. Lee, G. Limbert, J. Molinari, G. Anciaux, S. Echeverri Restrepo, A. Papangelo, A. Cammarata, P. Nicolini, R. Aghababaei, C. Putignano, S. Stupkiewicz, J. Lengiewicz, G. Costagliola, F. Bosia, R. Guarino, N. Pugno, G. Carbone, M. Mueser, M. Ciavarella, Modeling and simulation in tribology across scales: an overview, Tribology International (2018). doi:10.1016/j.triboint.2018.02.005.
* [17] T. D. B. Jacobs, A. Martini, Measuring and Understanding Contact Area at the Nanoscale: A Review, Applied Mechanics Reviews 69 (6) (Nov. 2017). doi:10.1115/1.4038130.
URL https://doi.org/10.1115/1.4038130
* [18] M. Paggi, A. Bemporad, J. Reinoso, Computational Methods for Contact Problems with Roughness, in: M. Paggi, D. Hills (Eds.), Modeling and Simulation of Tribological Problems in Technology, Springer International Publishing, Cham, 2020, pp. 131–178. doi:10.1007/978-3-030-20377-1_4.
URL https://doi.org/10.1007/978-3-030-20377-1_4
* [19] I. G. Goryacheva, M. Paggi, V. L. Popov, Editorial: Contact mechanics perspective of tribology, Frontiers in Mechanical Engineering 7 (2021). doi:10.3389/fmech.2021.649792.
URL https://www.frontiersin.org/article/10.3389/fmech.2021.649792
* [20] M. Paggi, J. Bonari, J. Reinoso, From the Pioneering Contributions by Wriggers to Recent Advances in Computational Tribology, in: F. Aldakheel, B. Hudobivnik, M. Soleimani, H. Wessels, C. Weißenfels, M. Marino (Eds.), Current Trends and Open Problems in Computational Mechanics, Springer International Publishing, Cham, 2022, pp. 385–393. doi:10.1007/978-3-030-87312-7_37.
URL https://doi.org/10.1007/978-3-030-87312-7_37
* [21] F. P. Bowden, D. Tabor, The friction and lubrication of solids, Oxford: Clarendon Press, 1950.
* [22] J. Archard, Elastic deformation and the laws of friction, Proceedings of the Royal Society A 243 (1233) (1957) 190–205. doi:10.1098/rspa.1957.0214.
* [23] J. A. Greenwood, J. B. P. Williamson, Contact of nominally flat surfaces, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 295 (1966) 300–319. doi:10.1098/rspa.1966.0242.
* [24] J. Greenwood, J. Tripp, The elastic contact of rough spheres, Journal of Applied Mechanics, Transactions ASME 34 (1) (1967) 153–159. doi:10.1115/1.3607616.
* [25] A. Bush, R. Gibson, T. Thomas, The elastic contact of a rough surface, Wear 35 (1) (1975) 87–111. doi:10.1016/0043-1648(75)90145-3.
* [26] A. Majumdar, B. Bhushan, Role of fractal geometry in roughness characterization and contact mechanics of surfaces, ASME Journal of Tribology 112 (1990) 205–216.
* [27] M. Borri-Brunetto, A. Carpinteri, B. Chiaia, Scaling phenomena due to fractal contact in concrete and rock fractures, International Journal of Fracture 95 (1999) 221–238. doi:10.1023/A:1018656403170.
* [28] B. Persson, Theory of rubber friction and contact mechanics, The Journal of Chemical Physics 115 (2001) 3840. doi:10.1063/1.1388626.
* [29] T. Andersson, The boundary element method applied to two-dimensional contact problems with friction, in: Boundary Element Methods, Vol. 3, Springer, Berlin, Heidelberg, 1981, pp. 239–258.
* [30] K. L. Johnson, Contact Mechanics, Cambridge University Press, Cambridge, UK, 1985.
* [31] I. Polonsky, L. Keer, A numerical method for solving rough contact problems based on the multi-level multi-summation and conjugate gradient techniques, Wear 231 (1999) 206–219.
* [32] A. Bemporad, M. Paggi, Optimization algorithms for the solution of the frictionless normal contact between rough surfaces, International Journal of Solids and Structures 69–70 (2015) 94–105.
* [33] Y. Xu, R. L. Jackson, Boundary element method (BEM) applied to the rough surface contact vs. BEM in computational mechanics, Friction 7 (4) (2019) 359–371. doi:10.1007/s40544-018-0229-3.
URL http://link.springer.com/10.1007/s40544-018-0229-3
* [34] M. Paggi, R. Pohrt, V. L. Popov, Partial-slip frictional response of rough surfaces, Scientific Reports 4 (2014) 1–6. doi:10.1038/srep05178.
* [35] R. Pohrt, Q. Li, Complete boundary element formulation for normal and tangential contact problems, Physical Mesomechanics 17 (2014) 334–340. doi:10.1134/s1029959914040109.
* [36] E. A. Vollebregt, A new solver for the elastic normal contact problem using conjugate gradients, deflation, and an FFT-based preconditioner, Journal of Computational Physics 257 (PA) (2014) 333–351. doi:10.1016/j.jcp.2013.10.005.
URL http://dx.doi.org/10.1016/j.jcp.2013.10.005
* [37] G. Anciaux, J. F. Molinari, Sliding of rough surfaces and energy dissipation with a 3D multiscale approach, International Journal for Numerical Methods in Engineering 83 (8-9) (2010) 1255–1271. doi:10.1002/nme.2845.
URL https://onlinelibrary.wiley.com/doi/10.1002/nme.2845
* [38] H. D. Conway, S. M. Vogel, K. A. Farnham, S. So, Normal and shearing contact stresses in indented strips and slabs, International Journal of Engineering Science 4 (4) (1966) 343–359. doi:10.1016/0020-7225(66)90036-x.
* [39] R. H. Bentall, K. L. Johnson, An elastic strip in plane rolling contact, International Journal of Mechanical Sciences 10 (8) (1968) 637–663. doi:10.1016/0020-7403(68)90070-2.
* [40] J. Greenwood, J. Barber, Indentation of an elastic layer by a rigid cylinder, International Journal of Solids and Structures 49 (21) (2012) 2962–2977.
* [41] G. Carbone, C. Putignano, A novel methodology to predict sliding and rolling friction of viscoelastic materials: Theory and experiments, Journal of the Mechanics and Physics of Solids 61 (8) (2013) 1822–1834. doi:10.1016/j.jmps.2013.03.005.
* [42] C. Putignano, G. Carbone, A review of boundary elements methodologies for elastic and viscoelastic rough contact mechanics, Physical Mesomechanics 17 (4) (2014) 321–333. doi:10.1134/S1029959914040092.
* [43] C. Putignano, G. Carbone, D. Dini, Theory of Reciprocating Contact for Viscoelastic Solids, Physical Review E 93 (4) (2016) 043003, arXiv: 1603.07598. doi:10.1103/PhysRevE.93.043003.
URL http://arxiv.org/abs/1603.07598
* [44] G. Carbone, L. Mangialardi, Analysis of the adhesive contact of confined layers by using a Green’s function approach, Journal of the Mechanics and Physics of Solids 56 (2) (2008) 684–706. doi:10.1016/j.jmps.2007.05.009.
URL https://linkinghub.elsevier.com/retrieve/pii/S0022509607001147
* [45] S. Medina, D. Dini, A numerical model for the deterministic analysis of adhesive rough contacts down to the nano-scale, International Journal of Solids and Structures 51 (14) (2014) 2620–2632. doi:10.1016/j.ijsolstr.2014.03.033.
URL https://linkinghub.elsevier.com/retrieve/pii/S0020768314001395
* [46] L. Pastewka, M. O. Robbins, Contact between rough surfaces and a criterion for macroscopic adhesion, Proceedings of the National Academy of Sciences 111 (9) (2014) 3298–3303. doi:10.1073/pnas.1320846111.
URL https://pnas.org/doi/full/10.1073/pnas.1320846111
* [47] V. L. Popov, R. Pohrt, Q. Li, Strength of adhesive contacts: Influence of contact geometry and material gradients, Friction 5 (3) (2017) 308–325. doi:10.1007/s40544-017-0177-3.
URL http://link.springer.com/10.1007/s40544-017-0177-3
* [48] V. Rey, G. Anciaux, J.-F. Molinari, Normal adhesive contact on rough surfaces: efficient algorithm for FFT-based BEM resolution, Computational Mechanics 60 (1) (2017) 69–81. doi:10.1007/s00466-017-1392-5.
URL http://link.springer.com/10.1007/s00466-017-1392-5
* [49] J. Andersson, A. Almqvist, R. Larsson, Numerical simulation of a wear experiment, Wear 271 (11-12) (2011) 2947–2952. doi:10.1016/j.wear.2011.06.018.
URL https://linkinghub.elsevier.com/retrieve/pii/S0043164811004583
* [50] T. Brink, L. Frérot, J.-F. Molinari, A parameter-free mechanistic model of the adhesive wear process of rough surfaces in sliding contact, Journal of the Mechanics and Physics of Solids 147 (2021) 104238, arXiv: 2004.00559. doi:10.1016/j.jmps.2020.104238.
URL http://arxiv.org/abs/2004.00559
* [51] L. Frérot, R. Aghababaei, J.-F. Molinari, A mechanistic understanding of the wear coefficient: From single to multiple asperities contact, Journal of the Mechanics and Physics of Solids 114 (2018) 172–184. doi:10.1016/j.jmps.2018.02.015.
URL https://linkinghub.elsevier.com/retrieve/pii/S0022509617309742
* [52] C. Mayeur, P. Sainsot, L. Flamand, A Numerical Elastoplastic Model for Rough Contact, Journal of Tribology 117 (3) (1995) 422–429. arXiv:https://asmedigitalcollection.asme.org/tribology/article-pdf/117/3/422/5938121/422\\_1.pdf, doi:10.1115/1.2831270.
URL https://doi.org/10.1115/1.2831270
* [53] A. Almqvist, F. Sahlin, R. Larsson, S. Glavatskih, On the dry elasto-plastic contact of nominally flat surfaces, Tribology International 40 (4) (2007) 574–579. doi:10.1016/j.triboint.2005.11.008.
URL https://linkinghub.elsevier.com/retrieve/pii/S0301679X05003142
* [54] L. Frérot, M. Bonnet, J.-F. Molinari, G. Anciaux, A Fourier-accelerated volume integral method for elastoplastic contact, Computer Methods in Applied Mechanics and Engineering 351 (2019) 951–976. doi:10.1016/j.cma.2019.04.006.
URL https://linkinghub.elsevier.com/retrieve/pii/S0045782519302038
* [55] L. Frérot, G. Anciaux, V. Rey, S. Pham-Ba, J.-F. Molinari, Tamaas: a library for elastic-plastic contact of periodic rough surfaces, Journal of Open Source Software 5 (51) (2020) 2121. doi:10.21105/joss.02121.
URL https://joss.theoj.org/papers/10.21105/joss.02121
* [56] F. Sahlin, R. Larsson, A. Almqvist, P. M. Lugt, P. Marklund, A mixed lubrication model incorporating measured surface topography. Part 1: Theory of flow factors, Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology 224 (4) (2010) 335–351. doi:10.1243/13506501JET658.
URL http://journals.sagepub.com/doi/10.1243/13506501JET658
* [57] S. Hyun, L. Pei, J.-F. Molinari, M. Robbins, Finite-element analysis of contact between elastic self-affine surfaces, Phys. Rev. E 70 (2004) 026117. doi:10.1103/PhysRevE.70.026117.
* [58] L. Pei, S. Hyun, J. Molinari, M. O. Robbins, Finite element modeling of elasto-plastic contact between rough surfaces, Journal of the Mechanics and Physics of Solids 53 (11) (2005) 2385–2409. doi:10.1016/j.jmps.2005.06.008.
* [59] A. A. Bandeira, P. Wriggers, P. de Mattos Pimenta, Numerical derivation of contact mechanics interface laws using a finite approach for large 3D deformation, International Journal for Numerical Methods in Engineering 59 (2) (2004) 173–195. doi:10.1002/nme.867.
* [60] V. A. Yastrebov, J. Durand, H. Proudhon, G. Cailletaud, Rough surface contact analysis by means of the Finite Element Method and of a new reduced model, Comptes Rendus - Mecanique 339 (7-8) (2011) 473–490. doi:10.1016/j.crme.2011.05.006.
URL http://dx.doi.org/10.1016/j.crme.2011.05.006
* [61] A. M. Couto Carneiro, R. Pinto Carvalho, F. M. Andrade Pires, Representative contact element size determination for micromechanical contact analysis of self-affine topographies, International Journal of Solids and Structures 206 (2020) 262–281. doi:10.1016/j.ijsolstr.2020.09.006.
URL https://doi.org/10.1016/j.ijsolstr.2020.09.006
* [62] M. Paggi, J. Reinoso, A variational approach with embedded roughness for adhesive contact problems, Mechanics of Advanced Materials and Structures 27 (20) (2020) 1731–1747. doi:10.1080/15376494.2018.1525454.
* [63] M. Ortiz, A. Pandolfi, Finite deformation irreversible cohesive elements for three-dimensional crack-propagation analysis, International Journal for Numerical Methods in Engineering 44 (1999) 1267–1282. doi:10.1002/(SICI)1097-0207(19990330)44:93.3.CO;2-Z.
* [64] M. Paggi, P. Wriggers, Node-to-segment and node-to-surface interface finite elements for fracture mechanics, Computer Methods in Applied Mechanics and Engineering 300 (2016) 540–560, arXiv: 1604.05236. doi:10.1016/j.cma.2015.11.023.
URL http://arxiv.org/abs/1604.05236
* [65] P. Wriggers, Computational Contact Mechanics, Springer-Verlag Berlin Heidelberg, 2006. doi:10.1007/978-3-540-32609-0.
* [66] J. Bonari, M. Paggi, J. Reinoso, A framework for the analysis of fully coupled normal and tangential contact problems with complex interfaces, Finite Elements in Analysis and Design 196 (2021) 103605. doi:10.1016/j.finel.2021.103605.
URL https://linkinghub.elsevier.com/retrieve/pii/S0168874X21000895
* [67] J. Bonari, M. Paggi, Viscoelastic effects during tangential contact analyzed by a novel finite element approach with embedded interface profiles, Lubricants 8 (12) (2020). doi:10.3390/lubricants8120107.
* [68] N. Yu, A. A. Polycarpou, Adhesive contact based on the Lennard–Jones potential: a correction to the value of the equilibrium distance as used in the potential, Journal of Colloid and Interface Science 278 (2) (2004) 428–435. doi:10.1016/j.jcis.2004.06.029.
URL https://linkinghub.elsevier.com/retrieve/pii/S0021979704005454
* [69] R. A. Sauer, P. Wriggers, Formulation and analysis of a three-dimensional finite element implementation for adhesive contact at the nanoscale, Computer Methods in Applied Mechanics and Engineering 198 (49-52) (2009) 3871–3883. doi:10.1016/j.cma.2009.08.019.
URL https://linkinghub.elsevier.com/retrieve/pii/S0045782509002631
* [70] J. C. Mergel, J. Scheibert, R. A. Sauer, Contact with coupled adhesion and friction: Computational framework, applications, and new insights, Journal of the Mechanics and Physics of Solids 146 (2021) 104194\. doi:10.1016/j.jmps.2020.104194.
URL https://linkinghub.elsevier.com/retrieve/pii/S002250962030418X
* [71] B. Feeny, F. C. Moon, Chaos in a Forced Dry-Friction Oscillator: Experiments and Numerical Modelling, Journal of Sound and Vibration 170 (3) (1994) 303–323. doi:https://doi.org/10.1006/jsvi.1994.1065.
URL https://www.sciencedirect.com/science/article/pii/S0022460X84710650
* [72] J. Simo, T. Laursen, An augmented lagrangian treatment of contact problems involving friction, Computers & Structures 42 (1) (1992) 97–116. doi:10.1016/0045-7949(92)90540-G.
URL https://linkinghub.elsevier.com/retrieve/pii/004579499290540G
* [73] N. Mostaghel, A non-standard analysis approach to systems involving friction, Journal of Sound and Vibration 284 (3-5) (2005) 583–595. doi:10.1016/j.jsv.2004.06.041.
URL https://linkinghub.elsevier.com/retrieve/pii/S0022460X04006030
* [74] E. Pennestrì, V. Rossi, P. Salvini, P. P. Valentini, Review and comparison of dry friction force models, Nonlinear Dynamics 83 (4) (2016) 1785–1801. doi:10.1007/s11071-015-2485-3.
URL http://link.springer.com/10.1007/s11071-015-2485-3
* [75] P. Vigué, C. Vergez, S. Karkar, B. Cochelin, Regularized friction and continuation: Comparison with Coulomb’s law, Journal of Sound and Vibration 389 (2017) 350–363. doi:10.1016/j.jsv.2016.11.002.
URL https://linkinghub.elsevier.com/retrieve/pii/S0022460X16306277
* [76] J. Bonari, M. Paggi, J. Reinoso, A framework for the analysis of fully coupled normal and tangential contact problems with complex interfaces, Finite Elements in Analysis and Design 196 (2021) 103605. doi:https://doi.org/10.1016/j.finel.2021.103605.
URL https://www.sciencedirect.com/science/article/pii/S0168874X21000895
* [77] B. B. Mandelbrot, Fractal Geometry of Nature, San Francisco: W.H. Freeman, 1977\.
* [78] M. F. Barnsley, R. L. Devaney, B. B. Mandelbrot, H.-O. Peitgen, The Science of Fractal Images, Springer-Verlag New York, 1988.
* [79] F. Pérez-Ràfols, A. Almqvist, Generating randomly rough surfaces with given height probability distribution and power spectrum, Tribology International 131 (2019) 591–604. doi:10.1016/j.triboint.2018.11.020.
URL https://linkinghub.elsevier.com/retrieve/pii/S0301679X18305607
* [80] D. Hills, Mechanics of fretting fatigue, Wear 175 (1) (1994) 107–113. doi:https://doi.org/10.1016/0043-1648(94)90173-2.
URL https://www.sciencedirect.com/science/article/pii/0043164894901732
* [81] D. Nowell, D. Dini, D. A. Hills, Recent developments in the understanding of fretting fatigue, Engineering Fracture Mechanics 73 (2) (2006) 207–222. doi:https://doi.org/10.1016/j.engfracmech.2005.01.013.
|
# Establishing a leader in a pairwise comparisons method
Jacek Szybowski <EMAIL_ADDRESS>, Konrad Kułakowski <EMAIL_ADDRESS>, Jiri Mazurek <EMAIL_ADDRESS>, Sebastian Ernst <EMAIL_ADDRESS>
AGH University of Krakow, The Faculty of Applied Mathematics, al. A. Mickiewicza 30, 30-059 Krakow, Poland
AGH University of Krakow, The Department of Applied Computer Science, al. A. Mickiewicza 30, 30-059 Krakow, Poland
Silesian University in Opava, School of Business Administration in Karvina, Univerzitní nám. 1934, 733 40 Karviná, Czech Republic
###### Abstract
Like electoral systems, decision-making methods are also vulnerable to
manipulation by decision-makers. The ability to effectively defend against
such threats can only come from thoroughly understanding the manipulation
mechanisms. In the presented article, we show two algorithms that can be used
to launch a manipulation attack. They allow for equating the weights of two
selected alternatives in the pairwise comparison method and, consequently,
choosing a leader. The theoretical considerations are accompanied by a Monte
Carlo simulation showing the relationship between the size of the PC matrix,
the degree of inconsistency, and the ease of manipulation. This work is a
continuation of our previous research published in the paper [30].
###### keywords:
pairwise comparisons , data manipulation , rank reversal , orthogonal
projections
## 1 Introduction
The pairwise comparisons method (PC) constitutes a convenient and broadly
applied tool for a complexity reduction in the multiple criteria decision-
making (MCDM) frameworks such as the Analytic Hierarchy Process (AHP) [27],
Best-Worst Method (BWM) [26], MACBETH [1], or PROMETHEE [6].
In recent decades, many researchers have studied PC methods intensively with respect to their consistency, the optimal derivation of a priority vector, the priority vector’s desirable properties, and other aspects, see e.g. [10, 19, 20, 25]. Since the
objective of a PC method is to rank compared objects (usually alternatives or
criteria) from the best to the worst, it may happen that an expert
deliberately distorts one or more pairwise comparisons to promote a selected
object, see e.g. [21, 32, 33]. In particular, the problem of preference
manipulation has gained attention in the context of group decision-making
(see, e.g. [11, 12, 23, 24, 28]), or electoral systems analysis ([5, 13, 15]).
The studies above focused on manipulation detection, various anti-manipulation
strategies (mainly through some penalization brought upon a manipulator), or
an estimation of manipulation robustness. Prevention of manipulation was
discussed, e.g., in [21, 29, 31].
In particular, a recent study by Kułakowski et al. [21] introduced two
heuristics enabling the detection of manipulators and minimizing their effect
on the group consensus by diminishing their weights. The first heuristic is
based on the assumption that manipulators will provide judgments that can be
considered outliers concerning those of the other experts in the group. The
second heuristic assumes dishonest judgments are less consistent than the
average consistency of the group.
The presented study is a follow-up of the work by Szybowski et al. [30], where
an algorithm balancing the weights of two given alternatives of a pairwise
comparisons matrix (_EQ algorithm_) has been introduced. This study aims to
introduce a modification of the EQ algorithm that is more efficient in the
case of its multiple uses and to propose two other algorithms based on the EQ
algorithm (_greedy and bubble sort_) capable of altering the best alternative
by a minimal change in elements of an original additive PC matrix. Further, we
define the so-called _Average Ranking Stability Index_ (ARSI) as a measurement
of ranking manipulation’s difficulty. Last but not least, we perform Monte
Carlo simulations to analyze relationships between the size of a PC matrix,
its inconsistency, and the degree of manipulation difficulty given by the
ARSI. In the proposed method, we use PC matrix orthogonalization. We can also
use this technique in procedures to increase the consistency of PC matrices
[4, 17].
The paper is organized as follows: Section 2 provides preliminaries, Section 3
presents new algorithms, and Section 4 includes numerical (Monte Carlo)
simulations. Conclusions close the article.
## 2 Preliminaries
### 2.1 Multiplicative and additive pairwise comparisons systems
Let $E=\\{e_{1},\ldots,e_{n}\\}$ be a finite set of alternatives, $n\geq 2$,
and the goal is to rank all alternatives from the best to the worst by
pairwise comparisons.
* 1.
In the multiplicative pairwise comparisons (MPCs) framework, an expert
expresses his/her judgment of a relative preference (importance) of $e_{i}$
and $e_{j}$ by the value $m_{ij}\in\mathbb{R^{+}}$, where $m_{ij}>1$ means
$e_{i}$ is preferred over $e_{j}$, and $m_{ij}=1$ denotes equal preference of
both alternatives.
MPCs are reciprocal, if:
$m_{ij}=1/m_{ji};\forall i,j\in\\{1,...,n\\}$.
MPCs are consistent, if:
$m_{ij}\cdot m_{jk}=m_{ik};\forall i,j,k\in\\{1,...,n\\}$.
All MPCs are conveniently arranged into an $n\times n$ multiplicative pairwise
comparisons matrix $M=[m_{ij}]$, and a priority vector (vector of
alternatives’ weights) is then calculated by the eigenvector [27] or the (row)
geometric mean method [9].
Inconsistency of an MPC matrix $M$ can be estimated by the consistency index
($CI$) [27]:
$CI(M)=\dfrac{\lambda_{max}-n}{n-1}$, where $\lambda_{max}$ denotes the
maximal eigenvalue of $M$. Of course, there are a number of other methods for
determining the degree of inconsistency of a PC matrix, such as Koczkodaj’s
index [18], Kazibudzki’s Square Logarithm Deviations index [16] or Barzilai’s
error [2]. A comprehensive review of methods for measuring inconsistency in PC
matrices can be found in [7]. In addition to the inconsistency of the PC
matrix, the incompleteness index can also be determined [22].
* 2.
In the additive pairwise comparisons (APCs) framework, an expert expresses
his/her judgment of a relative preference (importance) of $e_{i}$ and $e_{j}$
by the value $a_{ij}\in\mathbb{R}$, where $a_{ij}>0$ means $e_{i}$ is
preferred over $e_{j}$, and $a_{ij}=0$ denotes equal preference of both
alternatives.
APCs are reciprocal, if:
$a_{ij}=-a_{ji};\forall i,j\in\\{1,...,n\\}$.
APCs are consistent, if:
$a_{ij}+a_{jk}=a_{ik};\forall i,j,k\in\\{1,...,n\\}$.
All APCs are conveniently arranged into an $n\times n$ additive pairwise
comparisons matrix $A=[a_{ij}]$, and a priority vector (vector of
alternatives’ weights) is then calculated by the row arithmetic mean method
[3].
Multiplicative and additive pairwise comparisons share the same group
structure (are isomorphic) [8] and can be easily converted into each other by
exponential and logarithmic transformations respectively:
$a_{ij}=\log(m_{ij}),\qquad m_{ij}=\exp(a_{ij}).$
Both MPC and APC systems have their advantages. While MPCs are based on ratios, which are natural to human thinking, APCs enable the use of the rich mathematical apparatus of linear algebra, which is especially convenient for theoretical considerations [14].
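To make the correspondence concrete, the following numpy sketch (the function names are ours, chosen for illustration) converts between the two frameworks, checks reciprocity, and evaluates the consistency index $CI$ recalled above:

```python
import numpy as np

def mpc_to_apc(M):
    """Additive PCM from a multiplicative one: a_ij = log(m_ij)."""
    return np.log(M)

def apc_to_mpc(A):
    """Multiplicative PCM from an additive one: m_ij = exp(a_ij)."""
    return np.exp(A)

def consistency_index(M):
    """Saaty's CI(M) = (lambda_max - n) / (n - 1)."""
    n = M.shape[0]
    lam_max = np.max(np.linalg.eigvals(M).real)
    return (lam_max - n) / (n - 1)

# A reciprocal (and here fully consistent) 3x3 example.
M = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
assert np.allclose(M * M.T, 1.0)   # multiplicative reciprocity
A = mpc_to_apc(M)
assert np.allclose(A + A.T, 0.0)   # additive reciprocity
print(consistency_index(M))        # ~0, since M is consistent
```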
The space
$\mathcal{A}:=\\{[a_{ij}]:\ \forall i,j\in\\{1,\ldots,n\\}\
a_{ij}\in\mathbb{R}\textnormal{ and }a_{ij}+a_{ji}=0\\},$
is a linear space of additive pairwise comparisons matrices (PCMs). Recall
that any linear space is endowed with an (orthogonal) basis and that for two
given $n\times n$ matrices $A$ and $B$ their standard Frobenius product is
defined as follows:
$\langle A,B\rangle=\sum_{k=1}^{n}\sum_{l=1}^{n}a_{kl}b_{kl},$
which induces the Frobenius norm
$||A||=\sqrt{\langle A,A\rangle}$
and the Frobenius distance
$d(A,B)=||A-B||.$
### 2.2 Ranking stability index
In the additive pairwise comparisons method it is usually assumed that the
elements of a PCM fall within a certain range $[-M,M]$, for a fixed $M>0$. In
this case, according to [30], the Ranking Stability Index of alternatives $e_{i}$ and $e_{j}$ is defined as
$RSI_{ij}^{M}=\frac{|\sum_{k=1}^{n}(a_{ik}-a_{jk})|}{2M}.$
This index expresses a rescaled distance of the weights of the $i$-th and
$j$-th alternatives.
The Ranking Stability Index for a PCM $A$ is given by the formula
$RSI^{M}(A)=\min_{1\leq i<j\leq n}RSI_{ij}^{M},$
and it measures the ease of the easiest manipulation.
However, sometimes the decision process is more complicated and some attempts
of manipulations may not be that obvious. Therefore, it could be useful to
define the Average Ranking Stability Index for $A$ as follows:
$ARSI^{M}(A)=\frac{2}{n(n-1)^{2}}\sum_{1\leq i<j\leq n}RSI_{ij}^{M}.$
Since for all $i,j$
$0\leq RSI_{ij}^{M}\leq n-1,$
we immediately get
$0\leq ARSI^{M}\leq 1.$
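Since these indices are used throughout the simulations below, a direct numpy transcription may be helpful (a sketch; the names are ours and alternatives are indexed from $0$):

```python
import numpy as np

def rsi(A, i, j, M):
    """RSI^M_ij: rescaled distance between the weights of alternatives
    i and j of the additive PCM A with entries in [-M, M]."""
    return abs(np.sum(A[i] - A[j])) / (2 * M)

def arsi(A, M):
    """Average Ranking Stability Index, normalised so 0 <= ARSI^M <= 1."""
    n = A.shape[0]
    total = sum(rsi(A, i, j, M) for i in range(n) for j in range(i + 1, n))
    return 2 * total / (n * (n - 1) ** 2)
```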
## 3 Establishing a leading alternative
### 3.1 Equating two alternatives
Let us recall the algorithm of finding the best approximation of a given PCM
$A$, which equates the weights of two given alternatives $e_{i}$ and $e_{j}$
(for $i<j$). This algorithm has been introduced in [30] and we will denote it
by EQ($A,i,j$).
In the beginning, we consider the case $i=1$ and $j=2$.
For this purpose we define:
* 1.
the tie space ${\cal A}_{12}$, i.e. the $\frac{n^{2}-n-2}{2}$-dimensional
subspace of all additive PCMs which induce a ranking in which alternatives $1$ and $2$ are tied:
${\cal A}_{12}=\left\\{A\in{\cal
A}:\sum_{k=1}^{n}a_{1k}=\sum_{k=1}^{n}a_{2k}\right\\},$
* 2.
the set
$Z_{12}:=\\{(q,r):\ 3\leq q<r\leq n\\}.$
We define a basis for the tie space ${\cal A}_{12}$ which consists of additive
PCMs $C^{qr}$ ($(q,r)\in Z_{12}$), $E^{1}$, $F^{p}$ ($p\in\\{3,\ldots,n\\}$)
and $G^{p}$ ($p\in\\{3,\ldots,n-1\\}$), whose elements are given by
$c_{kl}^{qr}=\left\\{\begin{array}[]{rl}1,&k=q,\ l=r\\\ -1,&k=r,\ l=q\\\
0,&\textnormal{otherwise}\end{array}\right..$
$e_{kl}^{1}=\left\\{\begin{array}[]{rl}1,&(k=1,\ l=2)\\\ -1,&(k=2,\ l=1)\\\
2,&k=2,\ l=n\\\ -2,&k=n,\ l=2\\\ 0,&\textnormal{otherwise}\end{array}\right.,$
$f_{kl}^{p}=\left\\{\begin{array}[]{rl}1,&(k=1,\ l=p)\textnormal{ or }(k=2,\
l=n)\\\ -1,&(k=p,\ l=1)\textnormal{ or }(k=n,\ l=2)\\\
0,&\textnormal{otherwise}\end{array}\right.,$
and
$g_{kl}^{p}=\left\\{\begin{array}[]{rl}1,&(k=2,\ l=p)\textnormal{ or }(k=n,\
l=2)\\\ -1,&(k=p,\ l=2)\textnormal{ or }(k=2,\ l=n)\\\
0,&\textnormal{otherwise}\end{array}\right..$
###### Theorem 1 (Theorem 5, [30]).
A family of matrices
${\cal B}=\\{B^{p}\\}_{p=1}^{\frac{n^{2}-n}{2}-1}:=\\{C^{qr}\\}_{(q,r)\in
Z_{12}}\cup\\{E^{1}\\}\cup\\{F^{p}\\}_{p=3}^{n}\cup\\{G^{p}\\}_{p=3}^{n-1}$
(1)
is a basis of ${\cal A}_{12}$.
Next, we apply a standard Gram-Schmidt process to the basis
$B^{1},\ldots,B^{\frac{n^{2}-n}{2}-1}$
of the vector space ${\cal A}_{12}$ equipped with a standard Frobenius inner
product $\langle\cdot,\cdot\rangle$ and we obtain a pairwise orthogonal basis
$H^{1},\ldots,H^{\frac{n^{2}-n}{2}-1}$ (2)
as follows:
$\displaystyle H^{1}$ $\displaystyle=$ $\displaystyle B^{1},$ $\displaystyle
H^{2}$ $\displaystyle=$ $\displaystyle B^{2}-\frac{\langle
H^{1},B^{2}\rangle}{\langle H^{1},H^{1}\rangle}H^{1},$ $\displaystyle H^{3}$
$\displaystyle=$ $\displaystyle B^{3}-\frac{\langle
H^{1},B^{3}\rangle}{\langle H^{1},H^{1}\rangle}H^{1}-\frac{\langle
H^{2},B^{3}\rangle}{\langle H^{2},H^{2}\rangle}H^{2},$ $\displaystyle\cdots$
$\displaystyle=$ $\displaystyle\cdots$ $\displaystyle H^{\frac{n^{2}-n}{2}-1}$
$\displaystyle=$ $\displaystyle
B^{\frac{n^{2}-n}{2}-1}-\sum_{p=1}^{\frac{n^{2}-n}{2}-2}\frac{\langle
H^{p},B^{\frac{n^{2}-n}{2}-1}\rangle}{\langle H^{p},H^{p}\rangle}H^{p}.$
###### Example 2.
Consider $n=4$. Then the dimension of ${\cal A}_{12}$ is
$\frac{n^{2}-n-2}{2}=5$.
Since $Z_{12}=\\{(3,4)\\}$, we get the following basis of ${\cal A}_{12}$:
$B^{1}=C^{34}=\left(\begin{array}[]{cccc}0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&1\\\
0&0&-1&0\end{array}\right),$
$B^{2}=E^{1}=\left(\begin{array}[]{cccc}0&1&0&0\\\ -1&0&0&2\\\ 0&0&0&0\\\
0&-2&0&0\end{array}\right),$
$B^{3}=F^{3}=\left(\begin{array}[]{cccc}0&0&1&0\\\ 0&0&0&1\\\ -1&0&0&0\\\
0&-1&0&0\end{array}\right),$
$B^{4}=F^{4}=\left(\begin{array}[]{cccc}0&0&0&1\\\ 0&0&0&1\\\ 0&0&0&0\\\
-1&-1&0&0\end{array}\right),$
$B^{5}=G^{3}=\left(\begin{array}[]{cccc}0&0&0&0\\\ 0&0&1&-1\\\ 0&-1&0&0\\\
0&1&0&0\end{array}\right).$
Application of the Gram-Schmidt process to this basis results in an orthogonal
basis
$\displaystyle H^{1}=B^{1}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}0&0&0&0\\\ 0&0&0&0\\\ 0&0&0&1\\\
0&0&-1&0\end{array}\right),$ $\displaystyle\langle
H^{1},B^{2}\rangle=0\Rightarrow H^{2}=B^{2}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}0&1&0&0\\\ -1&0&0&2\\\ 0&0&0&0\\\
0&-2&0&0\end{array}\right),$ $\displaystyle\langle H^{1},B^{3}\rangle=0,\
\langle H^{2},B^{3}\rangle=4,\ \langle H^{2},H^{2}\rangle=10$
$\displaystyle\Rightarrow$ $\displaystyle\Rightarrow H^{3}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}0&-\frac{2}{5}&1&0\\\
\frac{2}{5}&0&0&\frac{1}{5}\\\ -1&0&0&0\\\
0&-\frac{1}{5}&0&0\end{array}\right),$ $\displaystyle\langle
H^{1},B^{4}\rangle=0,\ \langle H^{2},B^{4}\rangle=4,\ \langle
H^{3},B^{4}\rangle=\frac{2}{5},\ \langle H^{3},H^{3}\rangle=\frac{12}{5}$
$\displaystyle\Rightarrow$ $\displaystyle\Rightarrow H^{4}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}0&-\frac{1}{3}&-\frac{1}{6}&1\\\
\frac{1}{3}&0&0&\frac{1}{6}\\\ \frac{1}{6}&0&0&0\\\
-1&-\frac{1}{6}&0&0\end{array}\right),$ $\displaystyle\langle
H^{1},B^{5}\rangle=0,\ \langle H^{2},B^{5}\rangle=-4,\ \langle
H^{3},B^{5}\rangle=-\frac{2}{5}\ \langle H^{4},B^{5}\rangle=-\frac{1}{3},\ $
$\displaystyle\langle H^{4},H^{4}\rangle=\frac{7}{3}\Rightarrow H^{5}$
$\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}0&\frac{2}{7}&\frac{1}{7}&\frac{1}{7}\\\
-\frac{2}{7}&0&1&-\frac{1}{7}\\\ -\frac{1}{7}&-1&0&0\\\
-\frac{1}{7}&\frac{1}{7}&0&0\end{array}\right).$
Now, for an additive PCM $A$ we find its projection $A^{\prime}$ onto the
subspace ${\cal A}_{12}$ as a linear combination of the orthogonal basis
vectors
$H^{1},\ldots,H^{\frac{n^{2}-n}{2}-1}:$
i.e.
$A^{\prime}=\varepsilon_{1}H^{1}+\ldots+\varepsilon_{\frac{n^{2}-n}{2}-1}H^{\frac{n^{2}-n}{2}-1},$
where the factors
$\varepsilon_{1},\ldots,\varepsilon_{\frac{n^{2}-n}{2}-1}$
are expressed by formulas:
$\varepsilon_{k}=\frac{\langle A,H^{k}\rangle}{\langle H^{k},H^{k}\rangle},\
k=1,\ldots,\frac{n^{2}-n}{2}-1.$
Thus, the algorithm EQ($A,1,2$) can be written in a very simple way:
1. 1.
${\displaystyle A^{\prime}:=\sum_{k=1}^{\frac{n^{2}-n}{2}-1}\frac{\langle
A,H^{k}\rangle}{\langle H^{k},H^{k}\rangle}H^{k};}$
2. 2.
Return($A^{\prime}$);
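The following numpy sketch assembles the basis (1) from the definitions above, orthogonalises it with respect to the Frobenius inner product, and performs the projection step of EQ($A,1,2$); the function names are ours. For $n=4$, the constructed matrices coincide with $B^{1},\ldots,B^{5}$ of Example 2.

```python
import numpy as np

def frob(A, B):
    """Frobenius inner product <A, B>."""
    return float(np.sum(A * B))

def tie_space_basis(n):
    """The basis (1) of the tie space A_12: matrices C^{qr}, E^1, F^p, G^p
    (1-based indices as in the text, shifted to 0-based numpy indexing)."""
    basis = []
    for q in range(3, n + 1):                 # C^{qr}, 3 <= q < r <= n
        for r in range(q + 1, n + 1):
            C = np.zeros((n, n))
            C[q - 1, r - 1], C[r - 1, q - 1] = 1, -1
            basis.append(C)
    E = np.zeros((n, n))                      # E^1
    E[0, 1], E[1, 0] = 1, -1
    E[1, n - 1], E[n - 1, 1] = 2, -2
    basis.append(E)
    for p in range(3, n + 1):                 # F^p, 3 <= p <= n
        F = np.zeros((n, n))
        F[0, p - 1], F[p - 1, 0] = 1, -1
        F[1, n - 1] += 1
        F[n - 1, 1] -= 1
        basis.append(F)
    for p in range(3, n):                     # G^p, 3 <= p <= n - 1
        G = np.zeros((n, n))
        G[1, p - 1], G[p - 1, 1] = 1, -1
        G[n - 1, 1] += 1
        G[1, n - 1] -= 1
        basis.append(G)
    assert len(basis) == (n * n - n) // 2 - 1
    return basis

def gram_schmidt(basis):
    """Orthogonalise with respect to the Frobenius inner product,
    mirroring the formulas for H^1, ..., H^{(n^2-n)/2 - 1} above."""
    H = []
    for B in basis:
        V = B.copy()
        for Hk in H:
            V -= frob(Hk, B) / frob(Hk, Hk) * Hk
        H.append(V)
    return H

def eq12(A):
    """EQ(A,1,2): orthogonal projection of A onto the tie space A_12."""
    H = gram_schmidt(tie_space_basis(A.shape[0]))
    return sum(frob(A, Hk) / frob(Hk, Hk) * Hk for Hk in H)
```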
Now, let us consider the general case, i.e. $1\leq i<j\leq n$.
###### Remark 3.
If $P$ is a matrix of permutation of the $p$-th and $q$-th coordinates, then
$||PA-PB||=||A-B||,$
and
$||AP-BP||=||A-B||,$
for each PCMs $A$ and $B$.
###### Proof.
The thesis follows from the fact that we get $P(A-B)$ (and $(A-B)P$) from
$A-B$ by the permutation of the $p$-th and the $q$-th rows (columns). ∎
Thus, in order to find the closest matrix to $A$ equating the $i$-th and
$j$-th alternatives, we first permute alternatives $\\{i,j\\}$ with
$\\{1,2\\}$, then perform EQ($A,1,2$), and finally permute $\\{1,2\\}$ with
$\\{i,j\\}$.
Let us define the permutation matrix $P_{ij}=[p_{kl}]_{k,l=1}^{n}$.
If $i=1$ and $j\neq 2$, then we put:
$p_{kl}=\left\\{\begin{array}[]{cl}1,&(k,l)\in\\{(2,j),(j,2)\\}\textnormal{ or
}k=l\not\in\\{2,j\\}\\\ 0,&\textnormal{otherwise,}\end{array}\right.$
If $i\neq 1$ and $j=2$, then we put:
$p_{kl}=\left\\{\begin{array}[]{cl}1,&(k,l)\in\\{(1,i),(i,1)\\}\textnormal{ or
}k=l\not\in\\{1,i\\}\\\ 0,&\textnormal{otherwise,}\end{array}\right.$
If $i\neq 1$ and $j\neq 2$, then we put:
$p_{kl}=\left\\{\begin{array}[]{cl}1,&(k,l)\in\\{(1,i),(i,1),(2,j),(j,2)\\}\textnormal{
or }k=l\not\in\\{1,2,i,j\\}\\\ 0,&\textnormal{otherwise.}\end{array}\right.$
Since for each $(i,j)\neq(1,2)$ the matrix $P_{ij}$ is orthogonal, we have
###### Remark 4.
$P_{ij}^{-1}=P_{ij}^{T}=P_{ij}$.
We are ready to introduce the general algorithm EQ($A,i,j$):
1. 1.
If $(i,j)\neq(1,2)$, then $A:=P_{ij}AP_{ij}$;
2. 2.
$A$:=EQ($A,1,2$);
3. 3.
If $(i,j)\neq(1,2)$, then $A:=P_{ij}AP_{ij}$;
4. 4.
Return($A$).
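In code, conjugation by $P_{ij}$ amounts to permuting rows and columns. The following sketch builds on `eq12` above and realises $P_{ij}$ as an index permutation; composing the two transpositions also handles the overlapping corner case $i=2$:

```python
import numpy as np

def eq(A, i, j):
    """EQ(A,i,j) for 1 <= i < j <= n: permute {i,j} to {1,2}, project
    onto the tie space via eq12, and permute back."""
    n = A.shape[0]
    p = np.arange(n)
    p[[0, i - 1]] = p[[i - 1, 0]]   # move alternative i to position 1
    p[[1, j - 1]] = p[[j - 1, 1]]   # then alternative j to position 2
    B = eq12(A[np.ix_(p, p)])
    q = np.argsort(p)               # inverse permutation
    return B[np.ix_(q, q)]
```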
Notice that the above procedure improves the algorithm introduced in [30], because:
1\. we allow $j=n$;
2\. we always use the same orthogonal basis $H^{1},\ldots,H^{\frac{n^{2}-n}{2}-1}$ (which is important if we have to run EQ($A,i,j$) several times for different $i$ and $j$).
###### Theorem 5 (Theorem 9, [30]).
Let $A=[a_{kl}]\in{\cal{A}}$, $i,j\in\\{1,\ldots,n\\}$, and
$A^{\prime}=[a^{\prime}_{kl}]$ be the orthogonal projection of $A$ onto ${\cal
A}_{ij}$. Then
(1) For each $k\not\in\\{i,j\\}$
$\sum_{l=1}^{n}a^{\prime}_{kl}=\sum_{l=1}^{n}a_{kl},$ (8)
(2)
$\sum_{l=1}^{n}a^{\prime}_{il}=\sum_{l=1}^{n}a^{\prime}_{jl}=\frac{\sum_{l=1}^{n}a_{il}+\sum_{l=1}^{n}a_{jl}}{2}.$
(9)
### 3.2 The algorithm for establishing a leading alternative in a PC method
Let us present the main algorithm of the paper.
Suppose we have a PCM $A$ and we want to promote the $p$-th alternative for
the first place in the ranking.
#### 3.2.1 The greedy algorithm
INPUT: $A,\ p$.
1. 1.
$q:=$ the number of the best alternative
2. 2.
If $p=q$ then return($A$);
3. 3.
Construct the basis (1);
4. 4.
Apply the Gram-Schmidt process to obtain the basis (2);
5. 5.
repeat
* (a)
$A:=$EQ($A,p,q$);
* (b)
$q:=$ the number of the best alternative;
until ranking($p$) $=$ ranking($q$);
6. 6.
return($A$);
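A direct transcription of the greedy loop (a sketch reusing `eq` from above; the ranking is the vector of row means, in line with the arithmetic-mean prioritisation of APCs):

```python
import numpy as np

def ranking_position(A, p):
    """Rank of alternative p (1-based) under the row-mean ranking;
    tied alternatives share a position."""
    w = A.mean(axis=1)
    return 1 + int(np.sum(w > w[p - 1]))

def greedy_promote(A, p):
    """Greedy algorithm: repeatedly equate alternative p with the
    current leader until p shares the top position."""
    A = A.astype(float).copy()
    while True:
        w = A.mean(axis=1)
        q = int(np.argmax(w)) + 1   # the current best alternative
        if ranking_position(A, p) == ranking_position(A, q):
            return A
        A = eq(A, min(p, q), max(p, q))
```

For the matrix $A$ of Example 6 below with $p=4$, the loop performs three EQ calls, matching the trace given there.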
###### Example 6.
Let us consider a $4\times 4$ ($n=4$) PCM
$A=\left(\begin{array}[]{cccc}0&1&2&9\\\ -1&0&1&8\\\ -2&-1&0&7\\\
-9&-8&-7&0\end{array}\right).$
The weights in a ranking vector obtained as the arithmetic means of elements
of rows of $A$ are
$w=(3,2,1,-6)^{T},$
so the initial value of $q$ is 1.
Our goal is to promote the fourth alternative ($p=4$) to the first position in
a ranking.
In the example the alternative number 4 is definitely the worst one, so the
algorithm EQ must run $n-1=3$ times, which is the maximal possible number of
iterations.
We construct the basis $B^{1},\ldots,B^{5}$. Next, we apply the Gram-Schmidt procedure to obtain the basis $H^{1},\ldots,H^{5}$. Both bases are described in Ex. 2.
THE 1ST ITERATION OF THE LOOP:
We run EQ($A,1,4$), i.e. we calculate:
$P_{14}=\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&0&1\\\ 0&0&1&0\\\
0&1&0&0\end{array}\right).$
$A^{(2)}=P_{14}AP_{14}=\left(\begin{array}[]{cccc}0&9&2&1\\\ -9&0&-7&-8\\\
-2&7&0&-1\\\ -1&8&1&0\end{array}\right).$
$A^{(3)}=\textnormal{EQ}(A^{(2)},1,2)=\left(\begin{array}[]{cccc}0&0&-2.5&-3.5\\\
0&0&-2.5&-3.5\\\ 2.5&2.5&0&-1\\\ 3.5&3.5&1&0\end{array}\right).$
$A^{(4)}=P_{14}A^{(3)}P_{14}=\left(\begin{array}[]{cccc}0&-3.5&-2.5&0\\\
3.5&0&1&3.5\\\ 2.5&-1&0&2.5\\\ 0&-3.5&-2.5&0\end{array}\right).$
The ranking vector for $A^{(4)}$ is
$w=(-1.5,2,1,-1.5)^{T},$
so the next value of $q$ is 2.
THE 2ND ITERATION OF THE LOOP:
We run EQ($A^{(4)},2,4$), i.e. we calculate:
$P_{24}=\left(\begin{array}[]{cccc}0&0&0&1\\\ 0&1&0&0\\\ 0&0&1&0\\\
1&0&0&0\end{array}\right).$
$A^{(5)}=P_{24}A^{(4)}P_{24}=\left(\begin{array}[]{cccc}0&-3.5&-2.5&0\\\
3.5&0&1&3.5\\\ 2.5&-1&0&2.5\\\ 0&-3.5&-2.5&0\end{array}\right).$
$A^{(6)}=\textnormal{EQ}(A^{(5)},1,2)=\left(\begin{array}[]{cccc}0&0&-0.75&1.75\\\
0&0&-0.75&1.75\\\ 0.75&0.75&0&2.5\\\ -1.75&-1.75&-2.5&0\end{array}\right).$
$A^{(7)}=P_{24}A^{(6)}P_{24}=\left(\begin{array}[]{cccc}0&-1.75&-2.5&-1.75\\\
1.75&0&-0.75&0\\\ 2.5&0.75&0&0.75\\\ 1.75&0&-0.75&0\end{array}\right).$
The ranking vector for $A^{(7)}$ is
$w=(-1.5,0.25,1,0.25)^{T},$
so the next value of $q$ is 3.
THE 3RD ITERATION OF THE LOOP:
We run EQ($A^{(7)},3,4$), i.e. we calculate:
$P_{34}=\left(\begin{array}[]{cccc}0&0&1&0\\\ 0&0&0&1\\\ 1&0&0&0\\\
0&1&0&0\end{array}\right).$
$A^{(8)}=P_{34}A^{(7)}P_{34}=\left(\begin{array}[]{cccc}0&0.75&2.5&0.75\\\
-0.75&0&1.75&0\\\ -2.5&-1.75&0&-1.75\\\ -0.75&0&1.75&0\end{array}\right).$
$A^{(9)}=\textnormal{EQ}(A^{(8)},1,2)=\left(\begin{array}[]{cccc}0&0&2.125&0.375\\\
0&0&2.125&0.375\\\ -2.125&-2.125&0&-1.75\\\
-0.375&-0.375&1.75&0\end{array}\right).$
$A^{(10)}=P_{34}A^{(9)}P_{34}=\left(\begin{array}[]{cccc}0&-1.75&-2.125&-2.125\\\
1.75&0&-0.375&-0.375\\\ 2.125&0.375&0&0\\\ 2.125&0.375&0&0\end{array}\right).$
The ranking vector for $A^{(10)}$ is
$w=(-1.5,0.25,0.625,0.625)^{T},$
so the final value of $q$ is 3. The weights of alternatives $p$ and $q$ are
now equal and the highest, so the algorithm stops. The output matrix is $A^{(10)}$.
Notice that the chosen alternative is not a sole leader in the ranking.
However, even the slightest correction of the element $a_{pq}$ in favor of the
alternative $p$ may change that. For example, if we put $a_{34}=-0.1$ (and,
respectively $a_{43}=0.1$), then we get "the winning ranking":
$w=(-1.5,0.25,0.6,0.65)^{T}.$
#### 3.2.2 The bubble algorithm
As Example 6 shows, the greedy algorithm has some disadvantages. It is fast on average; however, if the preferred alternative is at the bottom of the ranking, we may need to run the loop $n-1$ times. Secondly, the whole procedure may completely reverse the ranking, which is undesirable.
Therefore, we suggest an alternative algorithm, which promotes a chosen
alternative stepwise.
INPUT: $A,\ p$.
1. 1.
$q:=$ the number of the best alternative
2. 2.
If $p=q$ then return($A$);
3. 3.
Construct the basis (1);
4. 4.
Apply the Gram-Schmidt process to obtain the basis (2);
5. 5.
repeat
* (a)
$A:=$EQ($A,p,q$);
* (b)
$q:=$ the number of the alternative directly ahead of the alternative $p$ in
the ranking;
until ranking($p$) $=$ ranking($q$);
6. 6.
return($A$);
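The bubble variant changes only the selection of $q$: instead of the current leader, we take the alternative directly ahead of $p$ (a sketch reusing the helpers above):

```python
import numpy as np

def bubble_promote(A, p):
    """Bubble algorithm: repeatedly equate p with the alternative
    directly ahead of it in the row-mean ranking."""
    A = A.astype(float).copy()
    while True:
        w = A.mean(axis=1)
        ahead = [k for k in range(len(w)) if w[k] > w[p - 1]]
        if not ahead:                            # p already (co-)leads
            return A
        q = 1 + min(ahead, key=lambda k: w[k])   # directly ahead of p
        A = eq(A, min(p, q), max(p, q))
```

Applied to the matrix $A$ of Example 6 with $p=4$ (as in Example 7 below), this yields the ranking vector $(1.375,-0.25,-2.5,1.375)^{T}$.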
###### Example 7.
Consider once more the matrix $A$ from Example 6. The output matrix after
running the bubble algorithm is of the form:
$A^{(10)}=\left(\begin{array}[]{cccc}0&1.625&3.875&0\\\
-1.625&0&2.25&-1.625\\\ -3.875&-2.25&0&-3.875\\\
0&1.625&3.875&0\end{array}\right),$
and the final ranking vector is
$w=(1.375,-0.25,-2.5,1.375)^{T},$
so the fourth alternative moved up to the first position (ex aequo with the
first one), but the relative positions of the other alternatives did not
change.
## 4 Monte Carlo Simulation
For Monte Carlo testing, we generated $2500$ preference profiles within which the relative priority of a pair of alternatives ranges over $[1/9,9]$. The number of alternatives ranges from $5$ to $9$, i.e. we prepared $500$ random profiles for five alternatives, $500$ profiles for six alternatives, and so on.
Based on the drawn preference profiles, we created random pairwise comparison
matrices (PCM) in such a way that for a preference profile
$w=\left(w(a_{1}),\ldots,w(a_{n})\right)^{T}$
the matrix $C_{\alpha}$ is an $n\times n$ PCM of the form
$C_{\alpha}=\left(\begin{array}[]{ccccc}1&c_{1,2}r_{1,2}&c_{1,3}r_{1,3}&\cdots&c_{1,n}r_{1,n}\\\
c_{2,1}r_{2,1}&1&c_{2,3}r_{2,3}&\cdots&c_{2,n}r_{2,n}\\\
\vdots&\cdots&\ddots&\cdots&\vdots\\\ \vdots&\cdots&\cdots&\ddots&\vdots\\\
c_{n,1}r_{n,1}&c_{n,2}r_{n,2}&\cdots&c_{n,n-1}r_{n,n-1}&1\end{array}\right),$
where
$c_{ij}=\frac{w(a_{i})}{w(a_{j})},$
and $r_{ij}$ is a real number randomly selected from $[1/\alpha,\alpha]$ for
$i,j=1,\ldots,n$. Thus, by increasing the value of $\alpha$, we effectively
increase the inconsistency of $C_{\alpha}$. We created matrices in the form of
$C_{\alpha}$ for all $2,500$ random preference profiles and for all $\alpha$
values from the set $\\{1,1.1,1.2,\ldots,4.9,5\\}$. In the end, we generated
$102,500=2,500\times 41$ random PCM matrices with varying degrees of
inconsistency and dimensions ranging from $5\times 5$ to $9\times 9$. All
generated matrices were used as input to both greedy (Sec. 3.2.1) and bubble
algorithms (Sec. 3.2.2).
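A sketch of the matrix generator (names are ours; the text leaves the sampling distribution of $r_{ij}$ open, so we draw it log-uniformly from $[1/\alpha,\alpha]$ as one natural reading, and impose $r_{ji}=1/r_{ij}$ so that $C_{\alpha}$ stays reciprocal):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pcm(w, alpha):
    """Random multiplicative PCM C_alpha around the profile w:
    entries c_ij * r_ij with c_ij = w_i / w_j and a perturbation
    r_ij in [1/alpha, alpha]; alpha = 1 gives a consistent matrix."""
    n = len(w)
    C = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.exp(rng.uniform(-np.log(alpha), np.log(alpha)))
            C[i, j] = (w[i] / w[j]) * r
            C[j, i] = 1.0 / C[i, j]
    return C
```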
Figure 1: Number of iterations vs. number of alternatives. (a) Greedy algorithm, LBN strategy; (b) Greedy algorithm, LBR strategy; (c) Bubble algorithm, LBN strategy; (d) Bubble algorithm, LBR strategy.
For both algorithms, we examined two different strategies for selecting the promoted alternative. In the first case, we took as the subject of promotion the alternative with the $n$-th index (the last in the sense of numbering), regardless of its actual ranking position. In the second strategy, we first calculated the ranking using the geometric mean method (GMM) and then promoted the last alternative in the ranking. The first strategy was called LBN ("last by numbering") and the second LBR ("last by ranking"). Hence, it took us $102,500$ $\times$ ($2$ algorithms) $\times$ ($2$ strategies) $=$ $410,000$ runs of the greedy and bubble algorithms to conduct the assumed experiments.
Figure 2: Number of iterations and number of tested matrices vs. inconsistency, using the bubble algorithm and LBR strategy as an example.
In all four cases, the average number of iterations depends on the number of
alternatives (Fig. 1). In most cases, it increases as the number of
alternatives increases. The only exception was seen in the case of the bubble
algorithm and the LBR strategy where a greater number of alternatives does not
necessarily translate into an increased number of iterations (Fig. 1d).
While the relationship between the number of alternatives and the number of
iterations of the algorithms seems significant, there is no evident
relationship between the inconsistency of the tested matrices and the number
of iterations. In order to observe this possible relationship, we divided the
set of tested matrices into subsets where the first one contained C matrices
with CI(C) between 0 and $0.01$, the second one between $0.01$ and $0.02$, and
so on. For each interval, we counted the average inconsistency, the average
number of iterations and the set count. As long as the set size did not fall
below a few tens of elements, the average number of iterations remained
similar regardless of the average inconsistency of the matrix in a given
subset (Fig. 2). Since the result was similar in each of the four variants
considered in the figure, we used in (Fig. 2) the result for the bubble
algorithm and the LBR strategy. It is worth noting that the modifications made
by the algorithm to the matrix do not change its level of inconsistency. Thus,
attempts to detect such manipulation using only inconsistency measurements may
be ineffective.
The Frobenius distance between the input matrix and the matrices that are the
output of successive algorithms’ iterations increases. This is because each
iteration changes subsequent elements of the matrix, moving it away from the
original matrix (Fig. 3). This behavior can be observed regardless of the type
of algorithm and strategy adopted.
Figure 3: Frobenius distance between input matrix and its subsequent
improvements. Average values for greedy algorithm and LBN strategy.
Similarly, a consistently observable pattern is the decline in the Average Ranking Stability Index (ARSI) values. The ARSI values depend on the size of the matrix, i.e., the larger the dimension of the matrix, the higher the ARSI (Fig. 4).
Figure 4: The average RSI (ARSI) value for the matrices studied depending on
the size of the matrix.
Therefore, in the study, we calculated the corresponding values in groups of matrices of the same dimensions (Fig. 5). The decline corresponds to the intuitive observation that making the first intervention is the most difficult; each subsequent one comes more easily. More formally, ARSI is
reduced in subsequent iterations of the algorithm, since they make two
alternatives’ weights equal and closer to the rest and leave other
alternatives’ weights unchanged. This implies that each manipulation increases
the possibility of other manipulations.
Figure 5: Decreasing (average) value of ARSI for modified matrices in successive iterations, using the greedy algorithm and LBN strategy as an example; iteration $0$ shows the average ARSI value for the unmodified input matrices. (a) PC matrices $5\times 5$; (b) PC matrices $9\times 9$.
## 5 Conclusions
In the presented work we have introduced two algorithms of promoting a given
alternative to the position of a ranking leader. They are both based on the EQ
algorithm equating two given alternatives in a ranking. The first one, called
the greedy algorithm, in each step equates the rankings of a promoted
alternative and the current leader. The second one (the bubble algorithm) in
each step equates an alternative with the one directly preceding it in the
ranking. We have also defined the Average Ranking Stability Index (ARSI) for a
PC matrix to measure how easily the data manipulation may happen.
The Monte Carlo study has shown that, in general, it is harder to create a new leader when there are more alternatives. On the other hand, the input inconsistency of the data has no influence on the ease of manipulation. The third conclusion is that each manipulation facilitates the subsequent ones. The final remark is that the EQ algorithm does not change the scale, i.e. if the input PC matrix elements have been taken from the range $[-M,M]$, the output matrix elements have the same property.
## 6 Acknowledgments
The research has been supported by The National Science Centre, Poland,
project no. 2021/41/B/HS4/03475 and by the Polish Ministry of Science and
Higher Education (task no. 11.11.420.004).
## References
* [1] C.A. Bana e Costa, De Corte, J.M., and J.C. Vansnick. On the mathematical foundation of MACBETH. In J. Figueira, S. Greco, and M. Ehrgott, editors, Multiple Criteria Decision Analysis: State of the Art Surveys, pages 421–463. Springer Verlag, Boston, Dordrecht, London, 2016.
* [2] J. Barzilai. Consistency measures for pairwise comparison matrices. Journal of Multi-Criteria Decision Analysis, 7(3):123–132, 1998.
* [3] J. Barzilai and B. Golany. Deriving weights from pairwise comparison matrices: The additive case. Operations Research Letters, 9(6):407–410, November 1990.
* [4] J. Benítez, W. W. Koczkodaj, and A. Kowalczyk. Computationally efficient orthogonalization for pairwise comparisons method. Applied Mathematics and Computation, 473:128651, July 2024.
* [5] F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, March 2016.
* [6] J.P. Brans and B. Mareschal. PROMETHEE methods. In J. Figueira, S. Greco, and M. Ehrgott, editors, Multiple Criteria Decision Analysis: State of the Art Surveys, pages 187–219. Springer Verlag, Boston, Dordrecht, London, 2016.
* [7] M. Brunelli. A survey of inconsistency indices for pairwise comparisons. International Journal of General Systems, 47(8):751–771, September 2018.
* [8] B. Cavallo, J. Mazurek, and J. Ramík. A comparative study on precision of pairwise comparison matrices. Fuzzy Optimization and Decision Making, November 2023.
* [9] G. B. Crawford. The geometric mean procedure for estimating the scale of a judgement matrix. Mathematical Modelling, 9(3–5):327 – 334, 1987.
* [10] L. Csató and D. G. Petróczy. On the monotonicity of the eigenvector method. European Journal of Operational Research, 2020.
* [11] Y. Dong, Y. Liu, H. Liang, F. Chiclana, and E. Herrera-Viedma. Strategic weight manipulation in multiple attribute decision making. Omega, 75:154–164, 2018.
* [12] Y. Dong, Q. Zha, H. Zhang, and F. Herrera. Consensus Reaching and Strategic Manipulation in Group Decision Making With Trust Relationships. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(10):6304–6318, October 2021.
* [13] P. Faliszewski, E. Hemaspaandra, and L. A. Hemaspaandra. Using complexity to protect elections. Communications of the ACM, 53(11):74–82, 2010.
* [14] M. Fedrizzi, M. Brunelli, and A. Caprila. The linear algebra of pairwise comparisons. International Journal of Approximate Reasoning, 118:190–207, March 2020.
* [15] A. Gibbard. Manipulation of voting schemes: A general result. Econometrica, 41(4):587–601, 1973.
* [16] P. T. Kazibudzki. On estimation of priority vectors derived from inconsistent pairwise comparison matrices. Journal of Applied Mathematics and Computational Mechanics, 21(4):52–59, 2022.
* [17] W. W. Koczkodaj, R. Smarzewski, and J. Szybowski. On Orthogonal Projections on the Space of Consistent Pairwise Comparisons Matrices. Fundamenta Informaticae, 172(4):379–397, 2020.
* [18] W. W. Koczkodaj and R. Urban. Axiomatization of inconsistency indicators for pairwise comparisons. International Journal of Approximate Reasoning, 94:18–29, March 2018.
* [19] K. Kułakowski. On the properties of the priority deriving procedure in the pairwise comparisons method. Fundamenta Informaticae, 139(4):403 – 419, July 2015.
* [20] K. Kułakowski, J. Mazurek, and M. Strada. On the similarity between ranking vectors in the pairwise comparison method. Journal of the Operational Research Society, 0(0):1–10, 2021.
* [21] K. Kułakowski, J. Szybowski, J. Mazurek, and S. Ernst. Resilient heuristic aggregation of judgments in the pairwise comparisons method. Information Sciences, 657:119979, 2024.
* [22] K. Kułakowski, J. Szybowski, and A. Prusak. Towards quantification of incompleteness in the pairwise comparisons methods. International Journal of Approximate Reasoning, 115:221–234, October 2019.
* [23] O. Lev and Y. Lewenberg. “Reverse Gerrymandering”: Manipulation in Multi-Group Decision Making. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):2069–2076, July 2019.
* [24] X. Liang, J. Guo, and P. Liu. A consensus model considers managing manipulative and overconfident behaviours in large-scale group decision-making. Information Sciences, 654:119848, January 2024.
* [25] J. Mazurek. Advances in Pairwise Comparisons: Detection, Evaluation and Reduction of Inconsistency. Multiple Criteria Decision Making. Springer Nature Switzerland, 2023.
* [26] J. Rezaei. Best-worst multi-criteria decision-making method. Omega, 53(C):49–57, June 2015.
* [27] T. L. Saaty. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15(3):234 – 281, 1977.
* [28] Y. Sasaki. Strategic manipulation in group decisions with pairwise comparisons: A game theoretical perspective. European Journal of Operational Research, 304(3):1133–1139, February 2023.
* [29] Q. Sun, J. Wu, F. Chiclana, S. Wang, E. Herrera-Viedma, and R. R. Yager. An approach to prevent weight manipulation by minimum adjustment and maximum entropy method in social network group decision making. Artificial Intelligence Review, December 2022.
* [30] J. Szybowski, K. Kułakowski, and S. Ernst. Almost optimal manipulation of a pair of alternatives, 2023.
* [31] J. Wu, M. Cao, F. Chiclana, Y. Dong, and E. Herrera-Viedma. An Optimal Feedback Model to Prevent Manipulation Behavior in Consensus Under Social Network Group Decision Making. IEEE Transactions on Fuzzy Systems, 29(7):1750–1763, July 2021.
* [32] R. R. Yager. Penalizing strategic preference manipulation in multi-agent decision making. IEEE Transactions on Fuzzy Systems, 9(3):393–403, 2001.
* [33] R. R. Yager. Defending against strategic manipulation in uninorm-based multi-agent decision making. European Journal of Operational Research, 141(1):217–232, 2002.
# Amorphic complexity of group actions with applications to quasicrystals
Gabriel Fuhrmann, Department of Mathematical Sciences, Durham University, UK. Email: <EMAIL_ADDRESS>
Maik Gröger, Faculty of Mathematics, University of Vienna, Austria & Faculty of Mathematics and Computer Science, Jagiellonian University in Kraków, Poland. Email: <EMAIL_ADDRESS>
Tobias Jäger, Department of Mathematics, University of Jena, Germany. Email: <EMAIL_ADDRESS>
Dominik Kwietniak, Faculty of Mathematics and Computer Science, Jagiellonian University in Kraków, Poland. Email: <EMAIL_ADDRESS>
###### Abstract
In this article, we define amorphic complexity for actions of locally compact
$\sigma$-compact amenable groups on compact metric spaces. Amorphic
complexity, originally introduced for $\mathbb{Z}$-actions, is a topological
invariant which measures the complexity of dynamical systems in the regime of
zero entropy. We show that it is tailor-made to study strictly ergodic group
actions with discrete spectrum and continuous eigenfunctions. This class of
actions includes, in particular, Delone dynamical systems related to regular
model sets obtained via Meyer’s cut and project method. We provide sharp upper
bounds on amorphic complexity of such systems. In doing so, we observe an
intimate relationship between amorphic complexity and fractal geometry.
## 1 Introduction
The study of low-complexity notions for group actions is both a timely and a
classical topic. Its roots go back to Halmos, McKay, and von Neumann who
classified actions with discrete spectrum, as well as Auslander, Ellis,
Furstenberg, and Veech who set the foundations of the theory of equicontinuous
actions and their extensions. Recent years have seen plenty of progress in
illuminating the richness of possible dynamical behaviour of minimal actions
of general groups in the low complexity regime, see for example [Kri07, CP08,
CM16, ST17, Gla18, ŁS18, FK20]. As a matter of fact, the investigation of this
regime not only contributes to the understanding of group actions as such but
is of fundamental importance in the understanding of aperiodic order—with
further applications to geometry, number theory and harmonic analysis [Mey72,
BG13]—and the diffraction spectra of so-called Delone sets, that is,
mathematical models of physical quasicrystals. The latter results from the
observation that diffraction spectra of Delone sets can be studied by means of
certain associated Delone dynamical systems [LM06, BLM07, Len09], see also
[BG13] for further information and references. Analysing these Delone
dynamical systems, it is most natural to ask when two such systems are
conjugate [KS14]. The standard operating procedure to answer this question
clearly is to utilize dynamical invariants and one might be tempted to study
topological entropy of Delone dynamics. However, the physically most
interesting case of pure point diffraction turns out to necessarily come with
zero entropy [BLR07]. There is hence a need for finer topological invariants
which can distinguish zero entropy systems.
In this article, we propose amorphic complexity—a notion recently introduced
for $\mathbb{Z}$-actions [FGJ16]—as a promising candidate for this purpose. To
that end, we extend amorphic complexity to actions of locally compact,
$\sigma$-compact and amenable groups. We will see that amorphic complexity is
tailor-made to study strictly ergodic systems with discrete spectrum and
continuous eigenfunctions, that is, minimal mean equicontinuous systems
[FGL21, Corollary 1.6]. Most importantly, however, we show that amorphic
complexity is not only theoretically well-behaved but also well-computable in
specific examples. This is particularly true due to a neat connection to
fractal geometry. We elaborate on this in the last section of this article
where we apply our findings to model sets—particular Delone sets constructed
by means of Meyer’s cut and project method [Mey72].
Before we introduce amorphic complexity and discuss our main results in more
detail, let us briefly clarify some basic terminology. A triple $(X,G,\alpha)$
is called a _(topological) dynamical system_ if $X$ is a compact metric space
(endowed with a metric $d$), $G$ is a topological group and $\alpha$ is a
continuous action of $G$ on $X$ by homeomorphisms (continuity of $\alpha$ is
understood as continuity of the map $G\times X\ni(g,x)\mapsto\alpha(g)(x)\in
X$). In the following, we use the shorthand $gx$ instead of $\alpha(g)(x)$ for
the action of $g\in G$ on $x\in X$ via $\alpha$. Likewise, we may occasionally
keep the action $\alpha$ implicit and simply refer to $(X,G)$ as a dynamical
system.
As mentioned before, we throughout assume that $G$ is locally compact,
$\sigma$-compact and amenable. Recall that there is hence a _(left) Følner
sequence_ , that is, a sequence $(F_{n})_{n\in\mathbb{N}}$ of compact subsets
of $G$ having positive Haar measure such that
$\displaystyle\lim_{n\to\infty}\frac{m(KF_{n}\triangle
F_{n})}{m(F_{n})}=0\quad\textnormal{for every compact }K\subseteq G,$ (1)
where $\triangle$ denotes the symmetric difference and $m$ is a _(left) Haar
measure_ of $G$ (we may synonymously write $\left|F\right|$ for the Haar
measure $m(F)$ of a measurable set $F\subseteq G$) [EG67, Theorem 3.2.1]. We
will also make use of the existence of _right Følner sequences_ which fulfil a
condition analogous to (1) with the left Haar measure and the multiplication
from the left replaced by the right Haar measure and multiplication from the
right, respectively. However, we would like to stress that in the following,
each Følner sequence is assumed to be a left Følner sequence if not stated
otherwise. Given a (left or right) Følner sequence $\mathcal{F}=(F_{n})$, the
_(upper) asymptotic density_ of a measurable subset $E\subseteq G$ with
respect to $\mathcal{F}$ is defined as
$\mathrm{ad}_{\mathcal{F}}(E)=\varlimsup\limits_{n\to\infty}\frac{\left|E\cap
F_{n}\right|}{\left|F_{n}\right|}.$ (2)
Let us next turn to the definition of amorphic complexity of a dynamical
system $(X,G)$ with respect to a Følner sequence
$\mathcal{F}=(F_{n})_{n\in\mathbb{N}}$ in $G$. Given $x,y\in X$, $\delta>0$,
we set
$\Delta(X,G,\delta,x,y)=\left\\{t\in G\;|\;d(tx,ty)\geq\delta\right\\}.$
For $\nu\in(0,1]$, we say that $x$ and $y$ are _$(\delta,\nu)$ -separated_
with respect to $\mathcal{F}$ if
$\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))=\varlimsup_{n\to\infty}\frac{\left|\Delta(X,G,\delta,x,y)\cap
F_{n}\right|}{\left|F_{n}\right|}\geq\nu.$
Accordingly, a subset $S\subseteq X$ is said to be _$(\delta,\nu)$ -separated_
with respect to $\mathcal{F}$ if all distinct points $x,y\in S$ are
$(\delta,\nu)$-separated. This already yields the first key notion in this
work: the (asymptotic) separation number of $(X,G)$ with respect to $\delta>0$
and $\nu\in(0,1]$, denoted by $\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)$, is
the supremum over the cardinalities of all $(\delta,\nu)$-separated sets in
$X$.
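To illustrate these notions in the simplest setting, consider $G=\mathbb{Z}$ acting on the circle $\mathbb{R}/\mathbb{Z}$ by an irrational rotation, with Følner sets $F_{n}=\\{0,\ldots,n-1\\}$. The following numerical sketch (entirely ours and purely illustrative) approximates $\mathrm{ad}_{\mathcal{F}}$ by a long finite average and lower-bounds the separation numbers by greedy packing:

```python
import numpy as np

ALPHA = (np.sqrt(5) - 1) / 2   # rotation number; any irrational works
N = 20000                      # truncation: we average over F_N = {0,...,N-1}

def sep_density(x, y, delta, n=N):
    """Finite-time stand-in for ad_F(Delta(X, G, delta, x, y))."""
    t = np.arange(n)
    diff = np.abs((x + t * ALPHA) % 1.0 - (y + t * ALPHA) % 1.0)
    d = np.minimum(diff, 1.0 - diff)   # distance on the circle R/Z
    return np.mean(d >= delta)

def greedy_separated(delta, nu, candidates):
    """Greedy packing: a lower bound for Sep_F(X, G, delta, nu)."""
    S = []
    for x in candidates:
        if all(sep_density(x, y, delta) >= nu for y in S):
            S.append(x)
    return S

for nu in (0.5, 0.1, 0.02):
    S = greedy_separated(0.1, nu, np.linspace(0, 1, 200, endpoint=False))
    print(nu, len(S))
```

Since a rotation acts by isometries, $d(tx,ty)=d(x,y)$ for all $t$; hence two points are $(\delta,\nu)$-separated precisely when $d(x,y)\geq\delta$, and the greedy count stabilises at roughly $1/\delta$ points independently of $\nu$, in line with the finite separation numbers one expects for equicontinuous systems.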
In general, the asymptotic separation numbers do not have to be finite (even though $X$ is compact), which immediately gives the following dichotomy: if $\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)$ is finite for all $\delta,\nu>0$, we say $(X,G)$ has finite separation numbers with respect to $\mathcal{F}$; otherwise, we say it has infinite separation numbers. Our first main
result—consisting of the following two theorems whose proofs are given in
Section 3—identifies canonical classes of group actions with infinite and
finite separation numbers, respectively. First, we give two criteria for
infinite separation numbers.
###### Theorem 1.1.
If $(X,G)$ is weakly mixing with respect to a non-trivial $G$-invariant
probability measure, then $(X,G)$ has infinite separation numbers with respect
to every Følner sequence. Likewise, if $G$ allows for a uniform lattice and
$(X,G)$ has positive topological entropy, then $(X,G)$ has infinite separation
numbers with respect to every Følner sequence.
In the opposite direction, it turns out that in the minimal case finite
separation numbers can be used to characterize mean equicontinuity.
###### Theorem 1.2.
Let $G$ be a unimodular group, meaning it has a sequence which is a left and a
right Følner sequence (this holds, in particular, if $G$ is abelian). Further,
suppose $(X,G)$ is a minimal dynamical system. Then $(X,G)$ has finite
separation numbers with respect to every Følner sequence if and only if
$(X,G)$ is mean equicontinuous.
It is worth mentioning that the class of mean equicontinuous systems comprises
all Delone dynamical systems associated to regular model sets, see also
Section 5. For further examples of mean equicontinuous actions of groups
different from $\mathbb{Z}$, we refer the reader to the literature [Rob96,
Rob99, Cor06, Vor12, GR17, Gla18, ŁS18, FK20, GL20, FGL21].
If $(X,G)$ has finite separation numbers, we are in a position to obtain finer
information by studying the scaling behaviour of the separation numbers as the
separation frequency $\nu$ tends to zero. Here, we may in principle consider
arbitrary growth rates. So far, however, previous results indicate that
polynomial growth is the most relevant, see [FGJ16, GJ16, FG20] for
$G=\mathbb{Z}$. With this in mind, we define the _lower_ and _upper amorphic
complexity_ of $(X,G)$ with respect to $\mathcal{F}$ as
$\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)=\adjustlimits{\sup}_{\delta>0}{\varliminf}_{\nu\to
0}\frac{\log\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)}{-\log\nu}\quad\textnormal{and}\quad\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)=\adjustlimits{\sup}_{\delta>0}{\varlimsup}_{\nu\to
0}\frac{\log\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)}{-\log\nu}.$
In case that both values coincide, we call
$\mathrm{{ac}}_{\mathcal{F}}(X,G)=\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)=\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)$
the amorphic complexity of $(X,G)$ with respect to $\mathcal{F}$. It is
convenient to set $\mathrm{{ac}}_{\mathcal{F}}(X,G)=\infty$ if $(X,G)$ has
infinite separation numbers with respect to $\mathcal{F}$. We discuss the most
basic properties of amorphic complexity—including its invariance under
conjugacy—in Section 2.
Our second main result deals with the problem as to which extent the
asymptotic separation numbers and amorphic complexity depend on the particular
Følner sequence $\mathcal{F}$. In general, we cannot rule out different
amorphic complexities with respect to different Følner sequences. In fact,
this problem already occurs when $G=\mathbb{Z}$, see Section 4. With the next
theorem, however, we provide a sufficient criterion for the independence from
$\mathcal{F}$. Here, we say a dynamical system $(X,G)$ is _pointwise uniquely
ergodic_ if every orbit closure is uniquely ergodic. A strengthening of the
following statement and its proof can be found in Section 4.
###### Theorem 1.3.
Let $(X,G)$ be a dynamical system whose product $(X^{2},G)$ is pointwise
uniquely ergodic. Then $(X,G)$ has infinite separation numbers with respect to
some Følner sequence if and only if it has infinite separation numbers with
respect to all Følner sequences. Moreover,
$\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)$ and
$\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)$ are independent of the particular
Følner sequence $\mathcal{F}$.
It is worth mentioning that mean equicontinuous systems verify the assumptions
of the above theorem [FGL21, Theorem 1.2].
With our third main result, we apply amorphic complexity to the dynamics of
regular model sets. Before we come to the precise formulation, we need to
introduce some terminology. In doing so, we restrict to a rather brief
description of the most essential notions and refer the reader to Section 5
for the details. A _cut and project scheme_ is a triple $(G,H,\mathcal{L})$,
where $G$ and $H$ are locally compact abelian groups and $\mathcal{L}$ is an
irrational lattice in $G\times H$. Together with a compact subset
$W=\overline{\operatorname{int}(W)}\subseteq H$—referred to as a _window_
—$(G,H,\mathcal{L})$ defines a particular instance of a Delone set, a so-
called _model_ set
$\mbox{\Large$\curlywedge$}(W)=\pi_{G}((G\times W)\cap\mathcal{L}),$
where $\pi_{G}:G\times H\to G$ denotes the canonical projection. We call $W$
as well as $\mbox{\Large$\curlywedge$}(W)$ _regular_ if $\partial W$ is of
zero Haar measure and say $W$ is _irredundant_ if $\\{h\in
H\;|\;h+W=W\\}=\\{0\\}$. Now, as $\mbox{\Large$\curlywedge$}(W)$ is a subset
of $G$, $G$ naturally acts on $\mbox{\Large$\curlywedge$}(W)$ by translations.
It turns out that the closure of all translated copies of
$\mbox{\Large$\curlywedge$}(W)$ is compact (in a suitable topology on subsets
of $G$). Denoting this closure by $\Omega(\mbox{\Large$\curlywedge$}(W))$, we
arrive at the Delone dynamical system
$(\Omega(\mbox{\Large$\curlywedge$}(W)),G)$ associated to the model set
$\mbox{\Large$\curlywedge$}(W)$. We obtain
###### Theorem 1.4.
Let $(G,H,\mathcal{L})$ be a cut and project scheme with $W\subseteq H$ a
regular irredundant window and suppose $G$ and $H$ are second countable. Then
for every Følner sequence $\mathcal{F}$ in $G$, we get
$\overline{\mathrm{ac}}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G)\leq\frac{\overline{\mathrm{dim}}_{\mathrm{B}}(H)}{\overline{\mathrm{dim}}_{\mathrm{B}}(H)-\overline{\mathrm{dim}}_{\mathrm{B}}(\partial
W)},$
assuming that $\overline{\mathrm{dim}}_{\mathrm{B}}(H)$ is finite.
Here, $\overline{\mathrm{dim}}_{\mathrm{B}}(\cdot)$ denotes the upper box
dimension, see Section 5 for the details. Let us remark that we further show
that the above estimates are sharp in that they are realised by particular
model sets. In conclusion, we obtain that every value in $[1,\infty)$ can be
attained by amorphic complexity of minimal systems.
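To make the cut and project construction concrete, here is a small numerical sketch (entirely ours) of a one-dimensional model set: the Fibonacci chain, with $G=H=\mathbb{R}$, the lattice $\mathcal{L}=\\{(m+n\tau,m+n\tau^{\star})\;|\;m,n\in\mathbb{Z}\\}$ for the golden mean $\tau$ and its Galois conjugate $\tau^{\star}$, and the window $W=[-1,\tau-1)$ (one common choice; half-open in the sketch merely to avoid double counting at the boundary):

```python
import numpy as np

TAU = (1 + np.sqrt(5)) / 2         # golden mean tau
TAU_STAR = (1 - np.sqrt(5)) / 2    # its Galois conjugate

def fibonacci_model_set(n_max, lo=-1.0, hi=TAU - 1.0):
    """Project lattice points (m + n*TAU, m + n*TAU_STAR) to G = R
    whenever the internal-space image m + n*TAU_STAR lies in W = [lo, hi)."""
    pts = []
    for m in range(-n_max, n_max + 1):
        for n in range(-n_max, n_max + 1):
            if lo <= m + n * TAU_STAR < hi:
                pts.append(m + n * TAU)
    pts = np.sort(np.array(pts))
    # keep an interior region to avoid truncation effects at the boundary
    return pts[np.abs(pts) < n_max]

pts = fibonacci_model_set(30)
print(sorted(set(np.round(np.diff(pts), 6))))   # two gap lengths: 1 and TAU
```

Here $\partial W$ consists of two points and thus has zero Haar measure, so the window is regular; with $\overline{\mathrm{dim}}_{\mathrm{B}}(H)=1$ and $\overline{\mathrm{dim}}_{\mathrm{B}}(\partial W)=0$, the bound of Theorem 1.4 reads $\overline{\mathrm{ac}}_{\mathcal{F}}\leq 1$ for this example.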
Motivated by the above results, we finish with the following question: given a locally compact, $\sigma$-compact and amenable group acting minimally on a compact metric space, which values can amorphic complexity attain?
In particular, for minimal $\mathbb{Z}$- or $\mathbb{R}$-actions, we conjecture that amorphic complexity cannot take values in $(0,1)$. Indeed,
this complexity gap was recently established for subshifts associated to
primitive constant length substitutions [FG20] and is a classical phenomenon
which is well known to occur for polynomial entropy of minimal symbolic
subshifts. For non-minimal $\mathbb{Z}$-actions, however, it was recently
shown that all values in $(0,1)$ can be obtained by amorphic complexity, see
[Kul20, Kul].
#### Acknowledgments
This project has received funding from the European Union’s Horizon 2020
research and innovation program under the Marie Skłodowska-Curie grant
agreement No 750865. Furthermore, it received support by the DFG Emmy-Noether
grant Ja 1721/2-1, DFG Heisenberg grant Oe 538/6-1 and DFG Research Fellowship
grant GR 4899/1-1. DK was supported by the National Science Centre, Poland,
grant no. 2018/29/B/ST1/01340. GF, MG and DK would like to thank the
Mathematisches Forschungsinstitut Oberwolfach for its enormous hospitality
during a Research in Pairs stay (R1721) at the MFO in October 2017 where many
ideas of this work were developed. This work was finished during a visit of GF
and MG to the Jagiellonian University in Kraków in September 2020, which was
also supported by the National Science Centre, Poland, grant no.
2018/29/B/ST1/01340.
## 2 Basic properties of amorphic complexity
In this section, we collect the most basic properties of amorphic complexity.
In particular, given a group $G$ which allows for a lattice $\mathcal{L}$, we
discuss how amorphic complexity of a $G$-action relates to amorphic complexity
of the associated $\mathcal{L}$-action.
The proof of the following statement is verbatim as the proofs of [FGJ16,
Proposition 3.4 & Proposition 3.9].
###### Proposition 2.1.
Let $(X,G)$ and $(Y,G)$ be dynamical systems. We have:
1. (a)
If $(Y,G)$ is a factor of $(X,G)$, then
$\underline{\mathrm{ac}}_{\mathcal{F}}(Y,G)\leq\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)\quad\textnormal{and}\quad\overline{\mathrm{ac}}_{\mathcal{F}}(Y,G)\leq\overline{\mathrm{ac}}_{\mathcal{F}}(X,G).$
In particular, (upper and lower) amorphic complexity is a topological
invariant.
2. (b)
We have that
$\displaystyle\underline{\mathrm{ac}}_{\mathcal{F}}(X\times Y,G)\geq\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)+\underline{\mathrm{ac}}_{\mathcal{F}}(Y,G),\quad\overline{\mathrm{ac}}_{\mathcal{F}}(X\times Y,G)\leq\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)+\overline{\mathrm{ac}}_{\mathcal{F}}(Y,G).$
In particular, if $\mathrm{{ac}}_{\mathcal{F}}(X,G)$ and
$\mathrm{{ac}}_{\mathcal{F}}(Y,G)$ exist, then
$\mathrm{{ac}}_{\mathcal{F}}(X\times Y,G)$ exists as well.
Before we proceed with further properties of amorphic complexity, we take a
closer look at certain particularly well-behaved Følner sequences. Recall that
a _van Hove sequence_ $(A_{n})_{n\in\mathbb{N}}$ in $G$ is a sequence of
compacta $A_{n}\subseteq G$ of positive Haar measure such that
$\lim_{n\to\infty}\frac{m(\partial_{K}A_{n})}{m(A_{n})}=0,$
for every compact set $K\subseteq G$ with $e\in K$, where
$\partial_{K}A_{n}\coloneqq
KA_{n}\setminus\operatorname{int}\big{(}\bigcap_{g\in K}gA_{n}\big{)}$ (see
[Tem92, Appendix 3] and [Str05] for further references). It is not hard to see that every van Hove sequence is a Følner sequence. In fact, the following holds.
###### Proposition 2.2 ([Tem92, Appendix 3.K]).
Let $G$ be a locally compact $\sigma$-compact amenable topological group. A
sequence $(A_{n})$ of compact subsets of $G$ is a van Hove sequence if and
only if it is a Følner sequence and
$\displaystyle\lim_{n\to\infty}\frac{m(\partial_{U}A_{n})}{m(A_{n})}=0,$ (3)
for some open neighbourhood $U$ of the neutral element $e$ in $G$.
###### Remark 2.3.
In particular, if $G$ is discrete, then every Følner sequence in $G$ is, in
fact, a van Hove sequence.
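To make the boundary condition tangible, the following Python sketch (an illustration of ours, not part of the formal development) evaluates $m(\partial_{K}A_{n})/m(A_{n})$ for $A_{n}=\\{0,\ldots,n-1\\}$ in $G=\mathbb{Z}$ with $K=\\{-1,0,1\\}$ and counting measure $m$; the ratio equals $4/n$ and hence tends to zero.

```python
# Sketch: the van Hove boundary condition for A_n = {0,...,n-1} in G = Z,
# where int(S) = S and m is the counting measure.

def boundary(A, K):
    """Return partial_K A = KA minus the intersection of the translates gA, g in K."""
    KA = {g + a for g in K for a in A}
    core = set.intersection(*({g + a for a in A} for g in K))
    return KA - core

K = [-1, 0, 1]
for n in (10, 100, 1000, 10000):
    A = set(range(n))
    print(n, len(boundary(A, K)) / len(A))  # prints the ratio 4/n -> 0
```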
It is well known that every locally compact $\sigma$-compact amenable group
allows for a van Hove sequence. For the convenience of the reader, we prove
the following (possibly well-known) refinement of this statement which we need
in the sequel.
###### Proposition 2.4.
Let $G$ be a locally compact $\sigma$-compact amenable topological group.
Suppose $(F_{n})$ is a Følner sequence in $G$ and $B$ is a compact
neighbourhood of $e$. Then $A_{n}\coloneqq BF_{n}$ defines a van Hove sequence
in $G$ with $\mathrm{ad}_{(A_{n})}(E)=\mathrm{ad}_{(F_{n})}(E)$ for every
measurable $E\subseteq G$.
###### Proof.
The last part follows from $E\cap A_{n}\subseteq(E\cap
F_{n})\cup(F_{n}\triangle A_{n})$ and
$\displaystyle 0\leq\lim_{n\to\infty}m(A_{n}\triangle
F_{n})/m(A_{n})\leq\lim_{n\to\infty}m(BF_{n}\triangle F_{n})/m(F_{n})=0,$ (4)
which is a consequence of the fact that $(F_{n})$ is a Følner sequence and
$F_{n}\subseteq BF_{n}=A_{n}$.
For the first part, we make use of Proposition 2.2. To that end, observe that
for every (compact) $K\subseteq G$ we have $KA_{n}\triangle
A_{n}\subseteq(KA_{n}\triangle F_{n})\cup(F_{n}\triangle
A_{n})=(KBF_{n}\triangle F_{n})\cup(F_{n}\triangle A_{n})$. Due to (4) and the
fact that $(F_{n})$ is a Følner sequence, this gives that $(A_{n})$ is a
Følner sequence, too. To see (3), we need the following
###### Claim 2.5.
There is a relatively compact open neighbourhood $U$ of $e$ such that
$F_{n}\subseteq\operatorname{int}\big{(}\bigcap_{g\in U}gA_{n}\big{)}$ for
each $n\in\mathbb{N}$.
###### Proof of the claim.
First, observe that $\operatorname{int}\big{(}\bigcap_{g\in
U}gBF_{n}\big{)}\supseteq\operatorname{int}\big{(}\bigcap_{g\in
U}gB\big{)}F_{n}$. To prove the claim, it hence suffices to show that there is
$U$ with $e\in\operatorname{int}\big{(}\bigcap_{g\in U}gB\big{)}$.
For a contradiction, suppose $e\in\overline{\bigcup_{g\in U}gB^{c}}$ for every
$U$ in the open neighbourhood filter $\mathcal{U}$ of $e$. In other words,
suppose there is a net $(g_{U})_{U\in\mathcal{U}}$ with $g_{U}\in U$ (so that
$g_{U}\to e$) and a net $(h_{U})_{U\in\mathcal{U}}$ in $B^{c}$ such that
$g_{U}h_{U}\to e$. This, however, implies $h_{U}\to e$ which contradicts
$e\in\operatorname{int}(B)$. Therefore, there is $U\in\mathcal{U}$ with
$e\in\operatorname{int}\big{(}\bigcap_{g\in U}gB\big{)}$. Clearly, $U$ can be
chosen open and relatively compact. $\circ$
Now, pick some $U$ as in the above claim. As $(F_{n})$ is a Følner sequence,
we have
$\displaystyle m(\partial_{U}A_{n})/m(A_{n})\leq m(UA_{n}\setminus
F_{n})/m(F_{n})\leq m(\overline{U}BF_{n}\setminus
F_{n})/m(F_{n})\stackrel{{\scriptstyle n\to\infty}}{{\longrightarrow}}0.$
Finally, it follows from Proposition 2.2 that $(A_{n})$ is a van Hove
sequence. ∎
For the next statement, recall that a _uniform lattice_ $\mathcal{L}$ in $G$
is a discrete subgroup of $G$ such that there exists a measurable precompact
subset $C\subseteq G$, referred to as _fundamental domain_ , with
$G=\bigsqcup_{\lambda\in\mathcal{L}}C\lambda$ and $m(C)>0$. With the lattice
$\mathcal{L}$ being a subgroup of $G$, we have a naturally defined dynamical
system $(X,\mathcal{L})$ and it turns out that amorphic complexity is well
behaved when going from $(X,G)$ over to $(X,\mathcal{L})$.
###### Lemma 2.6.
Assume $(X,G)$ is a dynamical system and $G$ allows for a uniform lattice
$\mathcal{L}$. Then for every Følner sequence $\mathcal{F}$ in $G$ there is a
Følner sequence $\mathcal{F}^{\prime}$ in $\mathcal{L}$ such that
$\displaystyle\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)=\underline{\mathrm{ac}}_{\mathcal{F}^{\prime}}(X,\mathcal{L})\qquad\textnormal{and}\qquad\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)=\overline{\mathrm{ac}}_{\mathcal{F}^{\prime}}(X,\mathcal{L}).$
Furthermore, $(X,G)$ has infinite separation numbers with respect to
$\mathcal{F}$ if and only if $(X,\mathcal{L})$ has infinite separation numbers
with respect to $\mathcal{F}^{\prime}$.
###### Proof.
We denote the Haar measure on $G$ by $m$ and that on $\mathcal{L}$ by
$|\cdot|$. Let $C\subseteq G$ be a fundamental domain as in the above
definition of a uniform lattice. First, observe that for all $\delta>0$ there
are $\delta^{-}_{\delta},\delta^{+}_{\delta}>0$ such that for all $x,y\in X$
and $c\in C$ we have $d(c^{-1}x,c^{-1}y)\geq\delta^{-}_{\delta}$ whenever
$d(x,y)\geq\delta$ and $d(cx,cy)\geq\delta^{+}_{\delta}$ whenever
$d(x,y)\geq\delta^{-}_{\delta}$. This straightforwardly follows from the
precompactness of $C$.
Further, due to Proposition 2.4, we may assume without loss of generality that
$\mathcal{F}$ is a van Hove sequence. Under this assumption, there are van
Hove sequences $\mathcal{F}^{\prime}=(F_{n}^{\prime})$ and
$\mathcal{F}^{\prime\prime}=(F_{n}^{\prime\prime})$ in $\mathcal{L}$ with
$\lim_{n\to\infty}|F_{n}^{\prime}|/|F_{n}^{\prime\prime}|=1$ such that
$CF_{n}^{\prime}$ and $CF_{n}^{\prime\prime}$ are van Hove sequences in $G$
and $CF_{n}^{\prime}\subseteq F_{n}\subseteq CF_{n}^{\prime\prime}$, see for
example [Hau20, Lemma 3.2]. We will show that for all $x,y\in X$ and
$\delta>0$ we have
$\displaystyle\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))\leq\mathrm{ad}_{\mathcal{F}^{\prime}}(\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y))\leq\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta^{+}_{\delta},x,y)).$
(5)
Clearly, this implies that for all $\nu\in(0,1)$ and all $\delta>0$
$\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)\leq\mathrm{Sep}_{\mathcal{F}^{\prime}}(X,\mathcal{L},\delta^{-}_{\delta},\nu)\leq\mathrm{Sep}_{\mathcal{F}}(X,G,\delta^{+}_{\delta},\nu)$
and thus proves the statement.
By definition of $\delta^{-}_{\delta}$ and $\delta^{+}_{\delta}$ and since $C$
is a fundamental domain, we have
$\displaystyle\Delta(X,G,\delta,x,y)\subseteq
C\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y)\subseteq\Delta(X,G,\delta^{+}_{\delta},x,y).$
Hence, utilizing the fact that for any subset $F\subseteq\mathcal{L}$ we have
$m(CF)=|F|\cdot m(C)$, we obtain (5) from the following computation
$\displaystyle\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))$
$\displaystyle=\varlimsup_{n\to\infty}m(\Delta(X,G,\delta,x,y)\cap
F_{n})/m(F_{n})$
$\displaystyle\leq\varlimsup_{n\to\infty}m(C\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y)\cap
CF_{n}^{\prime\prime})/m(CF_{n}^{\prime})$
$\displaystyle=\varlimsup_{n\to\infty}m(C\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y)\cap
CF_{n}^{\prime\prime})/m(CF_{n}^{\prime\prime})\cdot|F_{n}^{\prime\prime}|/|F_{n}^{\prime}|$
$\displaystyle=\mathrm{ad}_{\mathcal{F}^{\prime\prime}}(\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y))=\mathrm{ad}_{\mathcal{F}^{\prime}}(\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y))$
$\displaystyle=\varlimsup_{n\to\infty}m(C\Delta(X,\mathcal{L},\delta^{-}_{\delta},x,y)\cap
CF_{n}^{\prime})/m(CF_{n}^{\prime\prime})$
$\displaystyle\leq\varlimsup_{n\to\infty}m(\Delta(X,G,\delta^{+}_{\delta},x,y)\cap
F_{n})/m(F_{n})$
$\displaystyle=\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta^{+}_{\delta},x,y)).\qed$
###### Remark 2.7.
1. (a)
If $(F_{n})$ is a van Hove sequence, then the sets $F_{n}^{\prime}$ and
$F_{n}^{\prime\prime}$ in the above proof are explicitly given by
$F_{n}^{\prime}=\\{h\in\mathcal{L}\;|\;Ch\subseteq F_{n}\\}$ and
$F_{n}^{\prime\prime}=\\{h\in\mathcal{L}\;|\;Ch\cap F_{n}\neq\emptyset\\}$,
see the proof of [Hau20, Lemma 3.2].
2. (b)
Let us briefly comment on the necessity of the passage through Proposition 2.4
in the above proof. As mentioned in Remark 2.3, a Følner sequence in a
discrete group is necessarily a van Hove sequence. Consequently, given a
Følner sequence $(F_{n}^{\prime})$ in the lattice $\mathcal{L}$ of $G$,
$(F_{n}^{\prime})$ is actually a van Hove sequence and therefore, one can show
that $(CF_{n}^{\prime})$ defines a van Hove sequence in $G$. Accordingly,
whenever we seek to bound a Følner sequence $(F_{n})$ in $G$ from below and
above by sequences $(CF_{n}^{\prime})$ and $(CF_{n}^{\prime\prime})$ similarly
as in the previous proof, we actually bound $(F_{n})$ by van Hove sequences.
It turns out that this implies that $(F_{n})$ itself must be van Hove. These
observations are straightforward (though slightly tedious) to check.
## 3 On finiteness of separation numbers
This section deals with the scope of amorphic complexity. In particular, we
identify mean equicontinuous systems as those systems where separation numbers
are finite with respect to every Følner sequence and amorphic complexity may
hence be finite itself. Moreover, we show that positive entropy as well as
weak mixing imply infinite separation numbers.
### 3.1 Mean equicontinuity and finite separation numbers
We next discuss a natural class of dynamical systems with finite separation
numbers: the class of mean equicontinuous systems, see [Aus59, Rob96, HJ97,
Rob99, Cor06, Vor12, DG16, Gla18, ŁS18, FG20, FK20, GL20, FGL21] for numerous
examples. In our discussion of mean equicontinuity, we follow the terminology
of [FGL21]. Given a left or right Følner sequence $\mathcal{F}$, a system
$(X,G)$ is _(Besicovitch) $\mathcal{F}$-mean equicontinuous_ if for all
$\varepsilon>0$ there is $\delta>0$ such that for all $x,y\in X$ with
$d(x,y)<\delta$ we have
$D_{\mathcal{F}}(x,y)\coloneqq\varlimsup\limits_{n\to\infty}1/m(F_{n})\int\limits_{F_{n}}d(tx,ty)\,dm(t)<\varepsilon.$
In this case, $D_{\mathcal{F}}$ clearly defines a continuous pseudometric on
$X$. Thus, by identifying points $x,y\in X$ with $D_{\mathcal{F}}(x,y)=0$, we
obtain a compact metric space which we denote by $X/D_{\mathcal{F}}$.
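To get a feeling for the quantity $D_{\mathcal{F}}$, the following Python sketch (our own illustration; the maps and parameters are chosen by us for demonstration) estimates the Birkhoff averages defining $D_{\mathcal{F}}$ along $F_{n}=\\{0,\ldots,n-1\\}$ for two toy examples on the circle: an irrational rotation, where the average equals $d(x,y)$ since the rotation is an isometry (so the system is mean equicontinuous), and the doubling map (strictly speaking only an $\mathbb{N}$-action, so a mere heuristic in our setting), where nearby orbits decorrelate and the average stays close to $1/4$, the mean distance of two independent uniform points.

```python
from fractions import Fraction
import math

def d(a, b):
    """Metric on the circle R/Z (works for floats and Fractions alike)."""
    t = abs(a - b) % 1
    return min(t, 1 - t)

def D_estimate(T, x, y, N):
    """Birkhoff average (1/N) * sum_{k<N} d(T^k x, T^k y) along F_N = {0,...,N-1}."""
    total = 0
    for _ in range(N):
        total += d(x, y)
        x, y = T(x), T(y)
    return total / N

# Irrational rotation: an isometry, so the average equals d(x, y).
alpha = math.sqrt(2) - 1
rotation = lambda x: (x + alpha) % 1.0
print(float(D_estimate(rotation, 0.1, 0.1 + 1e-4, 20000)))  # ~ 1e-4

# Doubling map, in exact rational arithmetic (float orbits collapse to 0 here):
# nearby orbits decorrelate and the average approaches ~ 1/4.
doubling = lambda x: (2 * x) % 1
q = 10007  # prime denominator, so the orbit of 1/q is long
print(float(D_estimate(doubling, Fraction(1, q), Fraction(2, q), 20000)))
```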
Before we proceed, let us briefly recall the concept of the (upper) box
dimension of a compact metric space $(M,d)$. Given $\varepsilon>0$, we call a
subset $S$ of $M$ _$\varepsilon$-separated_ if for all $s\neq s^{\prime}\in
S$ we have $d(s,s^{\prime})\geq\varepsilon$ and denote by $M_{\varepsilon}$
the maximal cardinality of an $\varepsilon$-separated subset of $M$. It is
well known and easy to see that $M_{\varepsilon}<\infty$ due to compactness.
With this notation, the _upper box dimension_ of $M$ is defined as
$\displaystyle\overline{\mathrm{dim}}_{\mathrm{B}}(M)=\varlimsup\limits_{\varepsilon\to 0}\frac{\log M_{\varepsilon}}{-\log\varepsilon}.$
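As a concrete illustration (a numerical sketch of ours, not needed for the sequel), one can estimate the upper box dimension of the middle-thirds Cantor set from maximal $\varepsilon$-separated subsets of a finite-depth approximation; comparing $M_{\varepsilon}$ at successive scales $\varepsilon=3^{-k}$ recovers $\log 2/\log 3\approx 0.6309$.

```python
import math

def cantor_endpoints(depth):
    """Endpoints of the level-`depth` intervals of the middle-thirds Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        intervals = [piece for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return sorted({e for interval in intervals for e in interval})

def max_separated(points, eps):
    """Maximal cardinality of an eps-separated subset (greedy sweep; optimal on R)."""
    count, last = 0, -math.inf
    for p in points:
        if p - last >= eps:
            count, last = count + 1, p
    return count

points = cantor_endpoints(12)
counts = [max_separated(points, 3.0 ** -k) for k in range(2, 11)]
slopes = [math.log(n2 / n1) / math.log(3) for n1, n2 in zip(counts, counts[1:])]
print(slopes)  # each slope equals log 2 / log 3 ~ 0.6309
```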
Now, for $\mathcal{F}$-mean equicontinuous $(X,G)$, we have
$D_{\mathcal{F}}(x,y)\geq\varlimsup\limits_{n\to\infty}1/m(F_{n})\int\limits_{F_{n}}\mathbf{1}_{[\delta,\infty)}(d(tx,ty))\cdot
d(tx,ty)\,dm(t)\geq\delta\cdot\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))$
for all $\delta>0$ and $x,y\in X$ and hence,
$(X/D_{\mathcal{F}})_{\delta\nu}\geq\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)$.
It follows
###### Proposition 3.1.
If $(X,G)$ is $\mathcal{F}$-mean equicontinuous for some left or right Følner
sequence $\mathcal{F}$, then it has finite separation numbers with respect to
$\mathcal{F}$ and
$\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)\leq\overline{\mathrm{dim}}_{\mathrm{B}}(X/D_{\mathcal{F}}).$
It is important to note that if $\mathcal{F}$ is a left Følner sequence, then
$D_{\mathcal{F}}$ is not necessarily invariant. In particular, the equivalence
relation defined by $D_{\mathcal{F}}$ may not define a factor of $(X,G)$ even
if $D_{\mathcal{F}}$ is continuous. However, it is easy to see that
$D_{\mathcal{F}}$ is invariant if $\mathcal{F}$ is a right Følner sequence. We
utilize this observation below.
In any case, it is certainly desirable to have an invariant pseudometric which
does not depend on a particular (right) Følner sequence. To that end, we may
consider
$D(x,y)\coloneqq\sup\\{D_{\mathcal{F}}(x,y)\;|\;\mathcal{F}\textnormal{ is a
left Følner sequence}\\}$
which is, in fact, invariant (see [FGL21, Proposition 3.12]). We say $(X,G)$
is _(Weyl) mean equicontinuous_ if $D$ is continuous.
###### Proposition 3.2 ([FGL21, Proposition 5.8]).
Suppose $(X,G)$ is $\mathcal{F}$-mean equicontinuous for some right Følner
sequence $\mathcal{F}$. Then $(X,G)$ is mean equicontinuous.
Given a left or right Følner sequence $\mathcal{F}$, a system $(X,G)$ is
called _$\mathcal{F}$-mean sensitive_ if there exists $\eta>0$ such that for
every open set $U\subseteq X$ we can find $x,y\in U$ with
$D_{\mathcal{F}}(x,y)\geq\eta$. Moreover, we say $(X,G)$ is _(Weyl) mean
sensitive_ if there exists $\eta>0$ such that for every open set $U\subseteq
X$ we can find $x,y\in U$ with $D(x,y)\geq\eta$. We have the following direct
generalisation of the equivalence of (1) and (3) in [LTY15, Proposition 5.1]
whose proof extends almost literally to the current setting.
###### Proposition 3.3.
The system $(X,G)$ is $\mathcal{F}$-mean sensitive (with respect to a left or
right Følner sequence $\mathcal{F}$) if and only if there is $\eta>0$ such
that for every $x\in X$ we have that $\\{y\in
X\;|\;D_{\mathcal{F}}(x,y)\geq\eta\\}$ is residual in $X$.
Clearly, if $\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\eta/2,x,y))<\eta/2$, then
$D_{\mathcal{F}}(x,y)\leq\eta/2+(1-\eta/2)\cdot\eta/2<\eta$ (assuming, without
loss of generality, that the maximal distance of points in $X$ is $1$).
###### Corollary 3.4.
If a dynamical system $(X,G)$ is $\mathcal{F}$-mean sensitive (for a left or
right Følner sequence $\mathcal{F}$), then it has infinite separation numbers
with respect to $\mathcal{F}$.
In the following, we take a closer look at the relation between mean
equicontinuity and mean sensitivity in the minimal case. The proof of the next
statement is similar to the one for $\mathbb{Z}$-actions [LTY15, Proposition
4.3 & Theorem 5.4–5.5], see also [ZHL19, Corollary 5.6] for a similar
statement for countable amenable groups. For the convenience of the reader, we
provide a direct proof in the current setting.
###### Lemma 3.5.
Let $(X,G)$ be minimal. Then $(X,G)$ is either mean equicontinuous or mean
sensitive. Furthermore, if $(X,G)$ is mean sensitive, then it is
$\mathcal{F}$-mean sensitive for every right Følner sequence $\mathcal{F}$.
###### Proof.
Suppose $(X,G)$ is not mean equicontinuous. That is, there is $x\in X$ and
$\eta>0$ such that for all $\delta>0$ there is $y_{\delta}\in B_{\delta}(x)$
with $D(x,y_{\delta})>\eta$. Now, given any open set $U\subseteq X$, there is
$g\in G$ and $\delta_{0}>0$ such that $gB_{\delta_{0}}(x)\subseteq U$. Since
$D$ is invariant, we have $D(gx,gy_{\delta_{0}})=D(x,y_{\delta_{0}})>\eta$
which proves the first part.
For the second part, observe that Proposition 3.2 gives that for every right
Følner sequence $\mathcal{F}$ there exist $x\in X$ and $\eta>0$ such that for
all $\delta>0$ there is $y\in B_{\delta}(x)$ with $D_{\mathcal{F}}(x,y)>\eta$.
Since $\mathcal{F}$ is assumed to be a right Følner sequence,
$D_{\mathcal{F}}$ is invariant and we can argue similarly as for $D$ to obtain
$\mathcal{F}$-mean sensitivity. ∎
###### Remark 3.6.
Recall that $G$ acts _effectively_ on $X$ if for all $g\in G$ there is $x\in
X$ such that $gx\neq x$. According to [FGL21, Corollary 7.3], $G$ is
necessarily maximally almost periodic (see [FGL21] and references therein) if
$G$ allows for a minimal, mean equicontinuous and effective action on a
compact metric space $X$. Hence, Lemma 3.5 gives that every minimal effective
action by a group which is not maximally almost periodic (such as the
_continuous_ Heisenberg group $H_{3}(\mathbb{R})$) is mean sensitive.
Recall that a locally compact $\sigma$-compact amenable group $G$ is
_unimodular_ if and only if $G$ allows for a _two-sided Følner sequence_,
that is, a sequence $\mathcal{F}$ which is both a left and a right Følner
sequence. In conclusion to the above statements, we obtain
###### Corollary 3.7.
Suppose $G$ is unimodular and $(X,G)$ is minimal. Then $(X,G)$ is mean
equicontinuous if and only if the separation numbers of $(X,G)$ are finite
with respect to every left Følner sequence.
###### Proof.
By definition, mean equicontinuity implies $\mathcal{F}$-mean equicontinuity
with respect to every left Følner sequence. Hence, the “only if”-part follows
from Proposition 3.1.
For the other direction, let $\mathcal{F}$ be a two-sided Følner sequence.
Since we assume the separation numbers with respect to $\mathcal{F}$ to be
finite, we have that $(X,G)$ is not $\mathcal{F}$-mean sensitive. Since
$D_{\mathcal{F}}$ is invariant, we can argue similarly as in Lemma 3.5 to
obtain that $(X,G)$ is $\mathcal{F}$-mean equicontinuous. Utilizing
Proposition 3.2, we obtain the desired statement. ∎
### 3.2 Entropy, mixing and infinite separation numbers
In this section, we discuss how chaotic behaviour—more specifically: weak
mixing or positive entropy—implies infinite separation numbers. Here, we
occasionally have to assume that a Følner sequence we consider is _tempered_,
that is, there is $C>0$ such that for all $n$ we have
$m(\bigcup_{k<n}F_{k}^{-1}F_{n})<C\cdot m(F_{n})$. It is well known that every
Følner sequence allows for a tempered subsequence, see [Lin01, Proposition
1.4].
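For instance, the intervals $F_{n}=\\{0,\ldots,n-1\\}$ in $\mathbb{Z}$ form a tempered Følner sequence with $C=2$, since $\bigcup_{k<n}F_{k}^{-1}F_{n}=\\{-(n-2),\ldots,n-1\\}$ has cardinality $2n-2<2n$. The following small sanity check (ours) confirms this.

```python
# Sketch: temperedness of F_n = {0,...,n-1} in Z, with C = 2.
def tempered_ratio(n):
    # F_k^{-1} F_n = {b - a | a in F_k, b in F_n} in the additive group Z
    union = set()
    for k in range(1, n):
        union |= {b - a for a in range(k) for b in range(n)}
    return len(union) / n

print([tempered_ratio(n) for n in (5, 20, 80)])  # 2 - 2/n < 2 in each case
```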
In line with [GW16], we call an invariant measure $\mu$ of $(X,G)$ _weakly
mixing_ if for every system $(Y,G)$ and all of its ergodic measures $\nu$ we
have that $\mu\times\nu$ is ergodic for $(X\times Y,G)$. Hence, if $\mu$ is
weakly mixing, $\mu^{m}=\bigtimes_{k=1}^{m}\mu$ is ergodic for $(X^{m},G)$ and
all $m\in\mathbb{N}$.
###### Theorem 3.8.
Let $(X,G)$ be a dynamical system with a weakly mixing measure $\mu$ and
suppose the support of $\mu$ is not a singleton. Then $(X,G)$ has infinite
separation numbers with respect to every Følner sequence.
###### Proof.
For a tempered Følner sequence, the proof is similar to that of the respective
statement for $\mathbb{Z}$-actions ([FGJ16, Theorem 2.2]) if we replace
Birkhoff’s Ergodic Theorem by Lindenstrauss’ Pointwise Ergodic Theorem [Lin01,
Theorem 1.2]. Here, we have to make use of the ergodicity of $\mu^{m}$ just as
in [FGJ16].
Now, given an arbitrary Følner sequence, we can always go over to a tempered
subsequence (see [Lin01, Proposition 1.4]). This gives infinite separation
numbers for a subsequence and hence, due to the $\limsup$ in (2), infinite
separation numbers for the original sequence. ∎
We next turn to systems with positive topological entropy. Our goal is to show
###### Theorem 3.9.
Suppose $G$ allows for a uniform lattice and the dynamical system $(X,G)$ has
positive topological entropy. Then $(X,G)$ has infinite separation numbers
with respect to every Følner sequence in $G$.
###### Remark 3.10.
Observe that the proof of a similar statement for $\mathbb{Z}$-actions (see
[FGJ16, Theorem 2.3]) utilised results that are only available for
$G=\mathbb{Z}$. The present approach provides an alternative to the somewhat
implicit argument in [FGJ16].
###### Remark 3.11.
We do not make explicit use of the actual definition of entropy in the
following and rather utilize results from the theory of topological
independence. Therefore, we refrain from discussing the basics of entropy
theory in the present work. Interested readers are referred to e.g. [OW87,
KL16, Bow20, Hau20] for a background and further references.
In order to prove Theorem 3.9, we first restrict to actions of countable
discrete (and, as throughout assumed, amenable) groups.
###### Definition 3.12 (cf. [KL16, Definition 8.7]).
Let $(X,G)$ be a dynamical system and suppose $G$ is countable and discrete.
Given a pair $\mathbf{A}=(A_{0},A_{1})$ of subsets of $X$, we say that a set
$J\subseteq G$ is an _independence set_ for $\mathbf{A}$ if for every non-
empty finite subset $I\subseteq J$ and every $(s_{g})_{g\in
I}\in\\{0,1\\}^{I}$ there exists $x\in X$ with $gx\in A_{s_{g}}$ for every
$g\in I$.
###### Theorem 3.13 ([KL16, Theorem 12.19 & Proposition 12.7]).
Suppose $G$ is discrete and countable and $(X,G)$ is a dynamical system. If
$(X,G)$ has positive topological entropy, then there is a pair
$\mathbf{A}=(A_{0},A_{1})$ of disjoint compact subsets of $X$ and $d>0$ such
that for every tempered Følner sequence $\mathcal{F}=(F_{n})$ in $G$ there is
an independence set $J$ of $\mathbf{A}$ with
$\mathrm{ad}_{\mathcal{F}}(J)=\lim_{n\to\infty}|F_{n}\cap J|/|F_{n}|\geq d>0$.
Let $\mathbf{A}$, $\mathcal{F}$ and $J\subseteq G$ be as in the above
statement. Observe that due to the compactness of $A_{0}$ and $A_{1}$ we
actually have that for every $s=(s_{j})_{j\in J}\in\\{0,1\\}^{J}$ there exists
$x\in X$ which _follows_ $s$, that is, $jx\in A_{s_{j}}$ for every $j\in J$.
###### Lemma 3.14.
Let $G$ be a countable discrete group and suppose $(X,G)$ has positive
topological entropy. Then $(X,G)$ has infinite separation numbers with respect
to every Følner sequence in $G$. In fact, there are $\delta>0$ and
$\nu\in(0,1]$ such that for every Følner sequence there is an uncountable
$(\delta,\nu)$-separated set.
###### Proof.
Let $\mathbf{A}=(A_{0},A_{1})$ and $d>0$ be as in Theorem 3.13. Given a Følner
sequence $\mathcal{F}$, we may assume without loss of generality that
$\mathcal{F}$ is tempered. By Theorem 3.13, we have an associated independence
set $J\subseteq G$ for $\mathbf{A}$ with $\mathrm{ad}_{\mathcal{F}}(J)\geq d$.
Set $\delta=\mathop{\textrm{dist}}(A_{0},A_{1})$ and
$\nu=d/2\leq\mathrm{ad}_{\mathcal{F}}(J)/2$. Our goal is to show that there is
an uncountable subset $S\subseteq\\{0,1\\}^{J}$ such that whenever $x,y\in X$
follow distinct elements in $S$, then
$\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))\geq\nu$.
To that end, we first define a sequence $(G_{n})_{n\in\mathbb{N}}$ of pairwise
disjoint non-empty finite subsets of $G$ such that for every infinite set
$M\subseteq\mathbb{N}$ we have
$\displaystyle\mathrm{ad}_{\mathcal{F}}(\bigcup_{n\in M}G_{n})\geq 1-\nu.$ (6)
We may do so by starting with $G_{1}=F_{1}$. Assuming we have already chosen
$G_{1},\ldots,G_{n}$ for some $n\in\mathbb{N}$, let $N=N(n)\in\mathbb{N}$ be
large enough to guarantee that
$|F_{N}\setminus(G_{1}\cup\ldots\cup G_{n})|\geq(1-\nu)|F_{N}|$
and set $G_{n+1}=F_{N}\setminus(G_{1}\cup\ldots\cup G_{n})$. Note that this gives that $(G_{n})$ satisfies (6) because for every infinite $M\subseteq\mathbb{N}$ we have
$\mathrm{ad}_{\mathcal{F}}(\bigcup_{n\in M}G_{n})\geq\varlimsup_{\begin{subarray}{c}n\to\infty\\\ n\in M\end{subarray}}\frac{|F_{N(n-1)}\cap G_{n}|}{|F_{N(n-1)}|}\geq 1-\nu.$
Now, let $E$ be an uncountable family of subsets of $\mathbb{N}$ such that $M\triangle M^{\prime}$ is infinite for distinct $M,M^{\prime}\in E$; for instance, the family of all $M_{\alpha}=\\{\lfloor n\alpha\rfloor\;|\;n\in\mathbb{N}\\}$ with $\alpha\in(1,2)$ irrational will do, since distinct $M_{\alpha}$ have distinct densities and hence infinite symmetric difference. Given
$M\in E$, we define $s^{M}\in\\{0,1\\}^{J}$ by
$s^{M}_{j}=\begin{cases}1&\text{ if }j\in G_{n}\text{ and }n\in M,\\\ 0&\text{
otherwise.}\end{cases}$
Set $S=\\{s^{M}\in\\{0,1\\}^{J}\;|\;M\in E\\}$. Given $s\in S$, choose some
$x(s)\in X$ which follows $s$ (recall the discussion before the statement). It
is straightforward to see that for distinct $M,M^{\prime}\in E$, we have for
$x=x(s^{M})$ and $x^{\prime}=x(s^{M^{\prime}})$ that
$\displaystyle\Delta(X,G,\delta,x,x^{\prime})$ $\displaystyle=\\{g\in
G\;|\;d(gx,gx^{\prime})\geq\delta\\}\supseteq\\{g\in J\;|\;\;s^{M}_{g}\neq
s^{M^{\prime}}_{g}\\}$ $\displaystyle=J\cap\big{(}\bigcup_{n\in M\triangle
M^{\prime}}G_{n}\big{)}.$
Using (6), we obtain
$\mathrm{ad}_{\mathcal{F}}\big{(}J\cap\bigcup_{n\in M\triangle
M^{\prime}}G_{n}\big{)}\geq\mathrm{ad}_{\mathcal{F}}(J)/2\geq\nu.$
Hence, $\\{x(s)\in X\;|\;s\in S\\}$ is the uncountable
$(\delta,\nu)$-separated set we sought. ∎
###### Proof of Theorem 3.9.
Let us denote by $\mathcal{L}$ a lattice (as provided by the assumptions) in
$G$. Note that since $G$ is $\sigma$-compact, we have that $\mathcal{L}$ is
countable.
Due to [Hau20, Theorem 5.2], positive topological entropy of $(X,G)$ implies
positive topological entropy of $(X,\mathcal{L})$. Hence, Lemma 3.14 gives
that $(X,\mathcal{L})$ has infinite separation numbers with respect to every
Følner sequence. Due to Lemma 2.6, this implies infinite separation numbers of
$(X,G)$ with respect to every Følner sequence. ∎
## 4 Independence of Følner sequences
In general, amorphic complexity might depend on the particular Følner sequence
with respect to which we compute the separation numbers. For $G=\mathbb{Z}$,
this can be seen by considering the example in [FGJ16, page 541]. There,
$\mathrm{{ac}}_{\mathcal{F}}(X,\mathbb{Z})=\infty$ for
$\mathcal{F}=([0,n))_{n\in\mathbb{N}}$ while
$\mathrm{{ac}}_{\mathcal{F}^{\prime}}(X,\mathbb{Z})=0$ for
$\mathcal{F}^{\prime}=((-n,0])_{n\in\mathbb{N}}$.
The goal of this section is to show
###### Theorem 4.1.
Let $(X,G)$ be a dynamical system whose product $(X^{2},G)$ is pointwise
uniquely ergodic. Then $\overline{\mathrm{ac}}_{\mathcal{F}}(X,G)$ and
$\underline{\mathrm{ac}}_{\mathcal{F}}(X,G)$ are independent of the particular
(left) Følner sequence $\mathcal{F}$.
###### Remark 4.2.
Notice that due to [FGL21, Theorem 1.2], the above gives that amorphic
complexity of mean equicontinuous systems is independent of the particular
Følner sequence.
In fact, we have the following stronger statement which immediately yields
Theorem 4.1.
###### Theorem 4.3.
Let $(X,G)$ be a dynamical system whose product $(X^{2},G)$ is pointwise
uniquely ergodic. The following holds.
1. (i)
Suppose there is a Følner sequence $\mathcal{F}$ such that
$\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)=\infty$ for some
$\delta,\nu\in(0,1)$. Then there exists $\delta_{0}>0$ such that
$\mathrm{Sep}_{\mathcal{F}^{\prime}}(X,G,\delta^{\prime},\nu)=\infty$ for
every Følner sequence $\mathcal{F}^{\prime}$ and every
$\delta^{\prime}\in(0,\delta_{0}]$.
2. (ii)
Let $\mathcal{F}^{0}$ and $\mathcal{F}^{1}$ be Følner sequences and suppose
$\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta,\nu)<\infty$ for all
$\nu,\delta\in(0,1)$. Then there is a cocountable set $A\subseteq(0,1)$ such that
for all $\delta\in A$ we have
$\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta,\nu)=\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta,\nu)$
for all but countably many $\nu$.
###### Proof.
Without loss of generality, we may assume that $\mathrm{diam}(X)=1$. We start
by providing some general observations. Given $\delta\in(0,1)$, let $(h_{\ell})$ and $(H_{\ell})$ be sequences of non-decreasing continuous self-maps on $[0,1]$ such that, for large enough $\ell\in\mathbb{N}$, we have $h_{\ell}(z)=0$ for $z\in[0,\delta]$ and $h_{\ell}(z)=1$ for $z\in[\delta+1/\ell,1]$ as well as $H_{\ell}=0$ on $[0,\delta-1/\ell]$ and $H_{\ell}=1$ on $[\delta,1]$; for instance, we may take the piecewise linear maps $h_{\ell}(z)=\min(1,\max(0,\ell(z-\delta)))$ and $H_{\ell}(z)=\min(1,\max(0,\ell(z-\delta)+1))$. Clearly,
$h_{\ell}(z)\leq\mathbf{1}_{[\delta,1]}(z)\leq H_{\ell}(z)$ for all
$z\in[0,1]$ and large enough $\ell\in\mathbb{N}$. Hence, for all $x,y\in X$,
every Følner sequence $\mathcal{F}=(F_{n})$, and sufficiently large $\ell$, we
have
$\displaystyle\begin{split}&\int\limits_{X^{2}}h_{\ell}(d(v,w))d\mu_{(x,y)}(v,w)=\lim_{n\to\infty}1/m(F_{n})\cdot\int\limits_{F_{n}}h_{\ell}(d(sx,sy))dm(s)\\\
&\leq\lim_{n\to\infty}1/m(F_{n})\cdot\int\limits_{F_{n}}\mathbf{1}_{[\delta,1]}(d(sx,sy))dm(s)=\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))\\\
&\leq\lim_{n\to\infty}1/m(F_{n})\cdot\int\limits_{F_{n}}H_{\ell}(d(sx,sy))dm(s)=\int\limits_{X^{2}}H_{\ell}(d(v,w))d\mu_{(x,y)}(v,w),\end{split}$
(7)
where we used the pointwise unique ergodicity of $(X^{2},G)$ and where
$\mu_{(x,y)}$ denotes the unique invariant measure on the orbit closure of
$(x,y)\in X^{2}$. Sending $\ell\to\infty$, we obtain equality in (7) unless
$\mu_{(x,y)}(\\{(v,w)\in X^{2}\;|\;d(v,w)=\delta\\})>0.$ (8)
In other words, if (8) does not hold, then
$\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))$ is actually independent of
the Følner sequence $\mathcal{F}$. Notice that given $(x,y)$, there can be at
most countably many $\delta$ which verify (8).
Let us prove statement (i). Suppose $\mathcal{F}$ is a Følner sequence and
$\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)=\infty$ for some
$\delta,\nu\in(0,1)$. Let $\mathcal{S}$ be a countable family of finite
$(X,G,\delta,\nu)$-separated sets (with respect to $\mathcal{F}$) such that
$\sup_{S\in\mathcal{S}}\\#S=\infty$. Further, let $C\subseteq(0,1)$ be the
collection of all $\delta\in(0,1)$ such that for some $S\in\mathcal{S}$ there
are $(x,y)\in S^{2}$ such that (8) holds. As $C$ is at most countable, there exists $\delta_{0}\in(0,\delta]\setminus C$ such that for any $S\in\mathcal{S}$ we have
$\mathrm{ad}_{\mathcal{F}^{\prime}}(\Delta(X,G,\delta_{0},x,y))=\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta_{0},x,y))\geq\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,y))\geq\nu$
for all $x\neq y\in S$ and any Følner sequence $\mathcal{F}^{\prime}$, where we used that $\Delta(X,G,\delta,x,y)$ is non-increasing in $\delta$ with respect to inclusion. It
straightforwardly follows that each $S$ is
$(X,G,\delta^{\prime},\nu)$-separated with respect to any Følner sequence and every $\delta^{\prime}\in(0,\delta_{0}]$. As $S$ can be
chosen arbitrarily large, this proves the first assertion.
Let us consider (ii). First, observe that due to (i), we have
$\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta,\nu)<\infty$ for all
$\delta,\nu\in(0,1)$. Given $\delta\in(0,1)$, we call $\nu\in(0,1)$ _$\delta$-singular_ if
$\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\nu)<\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta-\varepsilon,\nu)$
for all $\varepsilon>0$ and some $i\in\\{0,1\\}$. Otherwise, we say $\nu$ is _$\delta$-regular_. The collection of all $\delta$-singular elements of
$(0,1)$ is denoted by $B_{\delta}$. We say $\delta$ is _singular_ if
$B_{\delta}$ is uncountable. Otherwise, we call $\delta\in(0,1)$ _regular_.
The collection of all singular $\delta$ in $(0,1)$ is denoted by $B$. We set
$A=(0,1)\setminus B$.
Next, we show that for all $\delta\in(0,1)$ and each $\nu\in B_{\delta}^{c}$
we have
$\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta,\nu)=\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta,\nu)$.
To prove (ii), it then remains to show that $B$ is countable.
Given $\delta\in(0,1)$, let $\nu\in(0,1)$ be $\delta$-regular. By definition,
there is $\varepsilon>0$ such that
$\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\nu)=\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta^{\prime},\nu)$
for all $\delta^{\prime}\in(\delta-\varepsilon,\delta)$ and $i=0,1$. Let
$S\subseteq X$ be $(\delta,\nu)$-separated with respect to $\mathcal{F}^{0}$ and
suppose $S$ is of maximal cardinality. As $S$ is finite, the collection of all
$\delta\in(0,1)$ which verify (8) for some pair $(x,y)\in S^{2}$ is countable.
There is hence $\delta^{\prime}\in(\delta-\varepsilon,\delta)$ which does not
verify (8) for any $(x,y)\in S^{2}$. Clearly, $S$ is $(\delta^{\prime},\nu)$-separated for $\mathcal{F}^{0}$. By the above, $S$ is also $(\delta^{\prime},\nu)$-separated for $\mathcal{F}^{1}$. Hence,
$\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta,\nu)=\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta^{\prime},\nu)\geq\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta^{\prime},\nu)=\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta,\nu).$
By changing the roles of $\mathcal{F}^{0}$ and $\mathcal{F}^{1}$, we obtain
the converse inequality and accordingly
$\mathrm{Sep}_{\mathcal{F}^{0}}(X,G,\delta,\nu)=\mathrm{Sep}_{\mathcal{F}^{1}}(X,G,\delta,\nu)$
for all $\delta$-regular $\nu$.
It remains to show that $B$ is countable. To that end, we need the following
###### Claim 4.4.
If $\delta\in(0,1)$ is singular, then $B_{\delta}$ has non-empty interior.
###### Proof of the claim.
Let $\nu\in(0,1)$ be $\delta$-singular and $\nu^{\prime}\in(0,\nu)$ be
$\delta$-regular. Observe that due to the monotonicity in both arguments of
$\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\cdot,\cdot)$, there has to be a _jump
point_ $\nu_{0}$ between $\nu$ and $\nu^{\prime}$ (possibly coinciding with
$\nu$ or $\nu^{\prime}$), i.e., a point $\nu_{0}$ such that for $i=0$ or $i=1$
we have
$\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\nu_{0}-\varepsilon)>\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\nu_{0})$
for all $\varepsilon>0$. As $\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\cdot)$
is non-increasing and integer-valued, each compact subinterval of $(0,1)$ can
contain at most finitely many such jump points. Therefore, the set of
$\delta$-singular points is a union of isolated points and intervals. Since a
subset of $(0,1)$ with only isolated points is at most countable, this proves
the claim. $\circ$
Now, for a contradiction, assume that $B$ is uncountable. By the above claim,
$B_{\delta}$ contains an interval $I_{\delta}$ whenever $\delta\in B$. Since each $I_{\delta}$ contains an interval with rational endpoints and there are only countably many of the latter, there must be an uncountable set $B^{\prime}\subseteq B$ with $\bigcap_{\delta\in B^{\prime}}I_{\delta}\neq\emptyset$. Accordingly, there is
$\nu\in(0,1)$ such that $\nu$ is $\delta$-singular for all $\delta\in
B^{\prime}$. As $\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\cdot,\nu)$ is non-
increasing, there can be at most countably many $\delta$ with
$\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta-\varepsilon,\nu)>\mathrm{Sep}_{\mathcal{F}^{i}}(X,G,\delta,\nu)$
for all $\varepsilon>0$. This contradicts the uncountability of $B^{\prime}$.
Hence, $B$ is at most countable. This finishes the proof. ∎
## 5 Application to regular model sets
In this section, we study amorphic complexity of (the dynamical hull of) model
sets. Given a model set, our third main result provides an upper bound for its
amorphic complexity which may be understood as a measure of its amorphicity.
We start by collecting a number of preliminary facts concerning Delone sets,
cut and project schemes and their associated dynamics.
### 5.1 Delone dynamical systems and model sets
From now on, $G$ is a locally compact second countable abelian group with Haar measure $m_{G}$. Further, we switch to additive notation for the group operation in $G$, accounting for its commutativity. By the Birkhoff–Kakutani Theorem, $G$ is metrizable and the
metric $d_{G}$ can be chosen to be invariant under $G$. In fact, open balls
with respect to $d_{G}$ are relatively compact [Str74] so that $G$ is
automatically $\sigma$-compact.
Given $r>0$, a set $\Gamma\subseteq G$ is called _$r$-uniformly discrete_ if $d_{G}(g,g^{\prime})>r$ for all $g\neq g^{\prime}\in\Gamma$. Moreover, given $R>0$, $\Gamma$ is called _$R$-relatively dense_ (or _$R$-syndetic_) if $\Gamma\cap B_{G}(g,R)\neq\emptyset$ for all $g\in G$, where $B_{G}(g,R)$ denotes the open $d_{G}$-ball of radius $R$ centred at $g$. We call $\Gamma$ a _Delone set_ if it is $r$-uniformly discrete and $R$-relatively dense for some $r,R>0$. The collection of all Delone sets in $G$ will be denoted by $\mathcal{D}(G)$.
Given $\rho>0$ and $g\in\Gamma$, the tuple
$(B_{G}(0,\rho)\cap(\Gamma-g),\rho)$ is called a ($\rho$-)patch of $\Gamma$.
The set of all patches of $\Gamma$ is denoted by $\mathcal{P}(\Gamma)$. A
Delone set $\Gamma$ is said to have finite local complexity (FLC) if for all
$\rho>0$ the number of its $\rho$-patches is finite. For
$\Gamma,\Gamma^{\prime}\in\mathcal{D}(G)$, set
$\mathop{\textrm{dist}}(\Gamma,\Gamma^{\prime})=\inf\left\\{\varepsilon>0\;|\;\exists
g\in B_{G}(0,\varepsilon):(\Gamma-g)\cap
B_{G}(0,1/\varepsilon)=\Gamma^{\prime}\cap B_{G}(0,1/\varepsilon)\right\\}.$
Then
$d(\Gamma,\Gamma^{\prime})=\min\\{1/\sqrt{2},\mathop{\textrm{dist}}(\Gamma,\Gamma^{\prime})\\}$
defines a metric on $\mathcal{D}(G)$ (see [LMS02, Section 2]). Moreover, for
any Delone set $\Gamma\subseteq G$ with FLC the dynamical hull of $\Gamma$,
defined as
$\Omega(\Gamma)=\overline{\left\\{\Gamma-g\;|\;g\in G\right\\}},$
where the closure is taken with respect to $d$, is compact [Sch99, Proposition
2.3]. The dynamical system $(\Omega(\Gamma),G)$, given by the translation
action of $G$ on the hull $\Omega(\Gamma)$, is called a Delone dynamical
system.
The method of choice to construct Delone sets is to utilize a _cut and project
scheme_ (CPS). A CPS consists of a triple $(G,H,\mathcal{L})$ of two locally
compact abelian groups $G$ (external group) and $H$ (internal group) and a
uniform lattice $\mathcal{L}\subseteq G\times H$ which is irrational, that is,
the natural projections $\pi_{G}:G\times H\to G$ and $\pi_{H}:G\times H\to H$
satisfy
* (i)
the restriction $\pi_{G}|_{\mathcal{L}}$ is injective;
* (ii)
the image $\pi_{H}(\mathcal{L})$ is dense.
If not stated otherwise, we throughout assume that $G$ and $H$ are second
countable. As a consequence of (i), if we let $L=\pi_{G}(\mathcal{L})$ and
$L^{*}=\pi_{H}(\mathcal{L})$, the star map
$*:L\rightarrow L^{*}:l\mapsto
l^{*}=\pi_{H}\circ\left.\pi_{G}\right|_{\mathcal{L}}^{-1}(l)$
is well defined and surjective. Given a precompact set $W\subseteq H$
(referred to as window), we define the point set
$\mbox{\Large$\curlywedge$}(W)=\pi_{G}\left(\mathcal{L}\cap(G\times
W)\right)=\\{l\in L\;|\;l^{*}\in W\\}.$
If $W$ is compact and proper (that is, $\overline{\mathrm{int}(W)}=W$), then
$\mbox{\Large$\curlywedge$}(W)$ is a Delone set and has FLC [Rob07]. In this
case, we call $\mbox{\Large$\curlywedge$}(W)$ a model set. If further
$m_{H}(\partial W)=0$, then we call the window, as well as the resulting model
set, regular. Otherwise, we refer to $W$ and $\mbox{\Large$\curlywedge$}(W)$
as irregular. Delone dynamical systems associated to regular model sets are
mean equicontinuous, see [FGL21, Remark 6.2 & Corollary 6.3].
We say that a subset $A\subseteq H$ is irredundant if $\\{h\in
H\;|\;h+A=A\\}=\\{0\\}$. Clearly, if $\partial W$ is irredundant, then so is
$W$. A CPS is called Euclidean if $G=\mathbb{R}^{N}$ and $H=\mathbb{R}^{M}$
for some $M,N\in\mathbb{N}$, and planar if $N=M=1$. Note that in the Euclidean
case, any compact window is irredundant. Further, observe that if $W$ is not
irredundant, it is possible to construct a CPS
$(G,H^{\prime},\mathcal{L}^{\prime})$ with irredundant window
$W^{\prime}\subseteq H^{\prime}$ such that for each
$\Lambda\in\Omega(\mbox{\Large$\curlywedge$}(W))$ with
$\mbox{\Large$\curlywedge$}(\mathrm{int}(W))\subseteq\Lambda\subseteq\mbox{\Large$\curlywedge$}(W)$
we have
$\mbox{\Large$\curlywedge$}(\mathrm{int}(W^{\prime}))\subseteq\Lambda\subseteq\mbox{\Large$\curlywedge$}(W^{\prime})$
(compare [LM06, Section 5] and [BLM07, Lemma 7]).
As $\mathcal{L}$ is a uniform lattice in $G\times H$, the quotient
$\mathbb{T}\coloneqq(G\times H)/\mathcal{L}$ is a compact abelian group. A
natural action of $G$ on $\mathbb{T}$ is given by
$(u,[s,t]_{\mathcal{L}})\mapsto[s+u,t]_{\mathcal{L}}$. Here,
$[s,t]_{\mathcal{L}}$ denotes the equivalence class of $(s,t)\in G\times H$ in
$\mathbb{T}$. Observe that due to the assumptions on $(G,H,\mathcal{L})$, this
action is equicontinuous, minimal and has hence a unique invariant measure
$\mu_{\mathbb{T}}$. Furthermore, if $W\subseteq H$ is irredundant,
$(\mathbb{T},G)$ is the maximal equicontinuous factor of
$(\Omega(\mbox{\Large$\curlywedge$}(W)),G)$ [BLM07]. The respective factor map
$\beta$ is also referred to as torus parametrization.
Given an irredundant window $W$, the fibres of the torus parametrization are
characterized as follows: for
$\Gamma\in\Omega(\mbox{\Large$\curlywedge$}(W))$, we have
$\Gamma\in\beta^{-1}([s,t]_{\mathcal{L}})\quad\Leftrightarrow\quad\mbox{\Large$\curlywedge$}(\mathrm{int}(W)+t)-s\subseteq\Gamma\subseteq\mbox{\Large$\curlywedge$}(W+t)-s$
(9)
as well as
$\Gamma\in\beta^{-1}([0,t]_{\mathcal{L}})\quad\Leftrightarrow\quad\exists\,(t_{j})\in{L^{*}}^{\mathbb{N}}\text{
with }\lim_{j\rightarrow\infty}t_{j}=t\text{ and
}\lim_{j\rightarrow\infty}\mbox{\Large$\curlywedge$}(W+t_{j})=\Gamma.$
In the following, we denote by $\mathrm{Vol}(\mathcal{L})$ the volume of a
fundamental domain of $\mathcal{L}$. Note that $\mathrm{Vol}(\mathcal{L})$ is
well defined.
###### Proposition 5.1 ([HR15, Proposition 3.4]).
Let $(G,H,\mathcal{L})$ be a CPS and $W\subseteq H$ be precompact. Then for
every van Hove sequence $\mathcal{F}=(F_{n})$ in $G$ we have
$\frac{m_{H}(\mathrm{int}(W))}{\mathrm{Vol}(\mathcal{L})}\leq\varliminf_{n\to\infty}\frac{\sharp(\mbox{\Large$\curlywedge$}(W)\cap F_{n})}{m_{G}(F_{n})}\leq\varlimsup_{n\to\infty}\frac{\sharp(\mbox{\Large$\curlywedge$}(W)\cap F_{n})}{m_{G}(F_{n})}\leq\frac{m_{H}(W)}{\mathrm{Vol}(\mathcal{L})}.$
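As a concrete illustration of Proposition 5.1, consider the following Python sketch (a numerical illustration of ours; the particular planar CPS and window below are one standard choice and not taken from the text). We take $G=H=\mathbb{R}$, the lattice $\mathcal{L}=\\{(n+m\varphi,\,n-m/\varphi)\;|\;n,m\in\mathbb{Z}\\}$ with $\varphi$ the golden mean, and the window $W=[0,1)$, taken half-open merely to avoid counting the single boundary point twice. Here $\mathrm{Vol}(\mathcal{L})=\sqrt{5}$, so the density of $\mbox{\Large$\curlywedge$}(W)$ in long intervals $[0,R]$ should be $m_{H}(W)/\mathrm{Vol}(\mathcal{L})=1/\sqrt{5}$; the sketch also exhibits the two gap lengths $\varphi$ and $\varphi^{2}$ of the resulting Delone set.

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden mean
R = 100_000.0                   # cut-off: we sample the model set inside [0, R]
W = (0.0, 1.0)                  # half-open window [0, 1) in H = R

# Model set points n + m*phi whose star image n - m/phi lies in W.
points = []
for m in range(int(R / math.sqrt(5)) + 2):
    n0 = math.floor(m / phi)
    for n in (n0, n0 + 1):      # the only integers whose star image can lie in W
        x, star = n + m * phi, n - m / phi
        if W[0] <= star < W[1] and 0.0 <= x <= R:
            points.append(x)
points.sort()

print(len(points) / R, 1 / math.sqrt(5))  # both ~ 0.44721 (Proposition 5.1)

gaps = sorted({round(b - a, 6) for a, b in zip(points, points[1:])})
print(gaps)  # two gap lengths: [phi, phi**2] ~ [1.618034, 2.618034]
```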
Let us collect three more statements which follow easily from the definition
of the metric $d$ on $\mathcal{D}(G)$. Similarly to the notion of
$(\delta,\nu)$-separation of elements of a dynamical system (see Section 1),
given a van Hove sequence $\mathcal{F}$ in $G$, we set
$\nu_{\mathcal{F}}(\delta,\Gamma,\Gamma^{\prime})=\mathrm{ad}_{\mathcal{F}}(\\{g\in
G\;|\;d(g\Gamma,g\Gamma^{\prime})\geq\delta\\}),$
where $\delta>0$ and $\Gamma,\Gamma^{\prime}\in\mathcal{D}(G)$.
###### Proposition 5.2.
For every van Hove sequence $\mathcal{F}=(F_{n})$ in $G$ we have
$\nu_{\mathcal{F}}(\delta,\Gamma,\Gamma^{\prime})\leq
m_{G}(B_{G}(0,1/\delta))\varlimsup_{n\to\infty}\frac{\sharp((\Gamma\Delta\Gamma^{\prime})\cap
F_{n})}{m_{G}(F_{n})},$
with $\delta>0$ and $\Gamma,\Gamma^{\prime}\in\mathcal{D}(G)$.
Accordingly, together with Proposition 5.1, we get
###### Corollary 5.3.
If $m_{H}(\partial W)=0$ and
$\mbox{\Large$\curlywedge$}(\mathrm{int}(W))\subseteq\Gamma\subseteq\mbox{\Large$\curlywedge$}(W)$,
then
$\nu_{\mathcal{F}}(\delta,\Gamma,\Gamma^{\prime})=\nu_{\mathcal{F}}(\delta,\mbox{\Large$\curlywedge$}(W),\Gamma^{\prime})$
for all van Hove sequences $\mathcal{F}$, $\delta>0$ and
$\Gamma^{\prime}\in\mathcal{D}(G)$.
Finally, observe that
###### Proposition 5.4.
Suppose $\delta>0$, $\Gamma,\Gamma^{\prime}\in\mathcal{D}(G)$ and $g\in
B_{G}(0,{\delta/2})$. If $d(\Gamma,\Gamma^{\prime})\geq\delta$, then
$d(\Gamma,\Gamma^{\prime}+g)\geq\delta/2$.
### 5.2 Upper bound on the amorphic complexity of regular model sets
We next come to our third main result. First, recall that for a locally
compact $\sigma$-compact group $H$, the upper box dimension is given by
$\overline{\mathrm{dim}}_{\mathrm{B}}(H)=\varlimsup\limits_{\varepsilon\to
0}\frac{\log
m_{H}\big{(}\overline{B_{H}(h,\varepsilon)}\big{)}}{\log\varepsilon},$
where $h\in H$ is arbitrary. Observe that
$\overline{\mathrm{dim}}_{\mathrm{B}}(H)$ is well defined because of the
invariance of the metric $d_{H}$ and the Haar measure $m_{H}$. Note further
that the above definition, as well as the definition of the (upper) box
dimension of compact sets in Section 3.1, are special cases of a more general
concept of box dimension. We refrain from reproducing the slightly technical
(and standard) general definition here and refer the interested reader to
[Edg98, Section 1.4] instead.
We will also make use of _Minkowski’s characterisation_ of the box dimension
of a given compact set $M\subseteq H$ by
$\overline{\mathrm{dim}}_{\mathrm{B}}(M)=\overline{\mathrm{dim}}_{\mathrm{B}}(H)-\varliminf\limits_{\varepsilon\to
0}\frac{\log
m_{H}\big{(}\overline{B_{H}(M,\varepsilon)}\big{)}}{\log\varepsilon}.$
The proof of this fact in our setting is similar to the one in the Euclidean
space, see for instance [Fal03, Proposition 3.2].
Finally, in order to derive upper bounds on amorphic complexity, it is
convenient to make use of an alternative characterisation which utilises
spanning sets instead of separating sets—similar as in the derivation of upper
bounds for topological entropy (or box dimension). Given $\delta>0$ and
$\nu\in(0,1]$, we say a subset $S\subseteq X$ is _$(\delta,\nu)$-spanning_
with respect to a Følner sequence $\mathcal{F}$ if for all $x\in X$ there
exists $s\in S$ such that
$\mathrm{ad}_{\mathcal{F}}(\Delta(X,G,\delta,x,s))<\nu$. We denote by
$\mathrm{Span}_{\mathcal{F}}(X,G,\delta,\nu)$ the smallest cardinality among
the $(\delta,\nu)$-spanning sets with respect to $\mathcal{F}$. It is not
difficult to see that $\mathrm{Span}_{\mathcal{F}}(X,G,\delta,\nu)$ instead of
$\mathrm{Sep}_{\mathcal{F}}(X,G,\delta,\nu)$ can equivalently be used in
defining amorphic complexity, see also [FGJ16, Lemma 3.1 & Corollary 3.2].
###### Theorem 5.5.
Suppose $(G,H,\mathcal{L})$ is a cut and project scheme, where $G$ and $H$ are
locally compact second countable abelian groups. Furthermore, let $W\subseteq
H$ be compact, proper, regular and irredundant and assume that
$\overline{\mathrm{dim}}_{\mathrm{B}}(H)$ is finite. Then
$\overline{\mathrm{ac}}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G)\
\leq\
\frac{\overline{\mathrm{dim}}_{\mathrm{B}}(H)}{\overline{\mathrm{dim}}_{\mathrm{B}}(H)-\overline{\mathrm{dim}}_{\mathrm{B}}(\partial
W)},$ (10)
for any Følner sequence $\mathcal{F}$.
###### Proof.
As $W$ is regular and hence $(\Omega(\mbox{\Large$\curlywedge$}(W)),G)$ mean
equicontinuous, we may assume without loss of generality that $\mathcal{F}$ is
van Hove, see Remark 4.2 and Theorem 4.3. We first choose compact sets
$A\subseteq G$ and $B\subseteq H$ such that $W\subseteq B$ and $\pi(A\times
B)=\mathbb{T}$, where $\pi:G\times H\to\mathbb{T}=(G\times H)/\mathcal{L}$ is
the canonical projection.
Given $(g,h)\in A\times B$, let
$\hat{\Gamma}_{g,h}=\mbox{\Large$\curlywedge$}(W+h)-g$. Observe that
$\hat{\Gamma}_{g,h}$ may not be an element of
$\Omega(\mbox{\Large$\curlywedge$}(W))$. While this poses no problem for our asymptotic estimates (due to Corollary 5.3), its explicit definition makes $\hat{\Gamma}_{g,h}$ more convenient to deal with in computations.
###### Claim 5.6.
Let $\delta>0$. If $d_{G}(g,g^{\prime})\leq\delta/2$ and
$d(\hat{\Gamma}_{g,h},\hat{\Gamma}_{g^{\prime},h^{\prime}})\geq\delta$, then
$[-g,-h]_{\mathcal{L}}\in\pi\left(B_{G}(0,2/\delta)\times(W\Delta(W+h^{\prime}-h))\right)=:D(\delta,h^{\prime}-h).$
###### Proof of the claim.
By Proposition 5.4, we know that
$d(\hat{\Gamma}_{g,h},\hat{\Gamma}_{g,h^{\prime}})\geq\delta/2$. Hence, there
exists $(\ell,\ell^{*})\in\mathcal{L}$ with $\ell\in B_{G}(g,2/\delta)$ and
$\ell\in\hat{\Gamma}_{g,h}\Delta\hat{\Gamma}_{g,h^{\prime}}$. The latter
implies that $\ell^{*}\in(W+h)\Delta(W+h^{\prime})$.
Equivalently, this means that $\ell-g\in B_{G}(0,2/\delta)$ and $\ell^{*}-h\in
W\Delta(W+h^{\prime}-h)$, so that
$[-g,-h]_{\mathcal{L}}=[\ell-g,\ell^{*}-h]_{\mathcal{L}}\in\pi\left(B_{G}(0,2/\delta)\times
W\Delta(W+h^{\prime}-h)\right).$
This proves the claim. $\circ$
We can now apply the claim to estimate the separation frequency of a pair
$\hat{\Gamma}_{g,h}$ and $\hat{\Gamma}_{g^{\prime},h^{\prime}}$.
$\displaystyle\nu_{\mathcal{F}}(\delta,\hat{\Gamma}_{g,h},\hat{\Gamma}_{g^{\prime},h^{\prime}})$
$\displaystyle=$
$\displaystyle\varlimsup_{n\to\infty}\frac{1}{m_{G}(F_{n})}\int_{F_{n}}\mathbf{1}_{[\delta,\infty)}(d(\hat{\Gamma}_{g,h}-t,\hat{\Gamma}_{g^{\prime},h^{\prime}}-t))dt$
$\displaystyle=$
$\displaystyle\varlimsup_{n\to\infty}\frac{1}{m_{G}(F_{n})}\int_{F_{n}}\mathbf{1}_{[\delta,\infty)}(d(\hat{\Gamma}_{g+t,h},\hat{\Gamma}_{g^{\prime}+t,h^{\prime}}))dt$
$\displaystyle\leq$
$\displaystyle\varlimsup_{n\to\infty}\frac{1}{m_{G}(F_{n})}\int_{F_{n}}\mathbf{1}_{D(\delta,h^{\prime}-h)}([-g-t,-h]_{\mathcal{L}})dt$
$\displaystyle\stackrel{{\scriptstyle(*)}}{{=}}$
$\displaystyle\mu_{\mathbb{T}}(D(\delta,h^{\prime}-h))$ $\displaystyle\leq$
$\displaystyle m_{G}(B_{G}(0,2/\delta))\cdot m_{H}(W\Delta(W+h^{\prime}-h))$
$\displaystyle\leq$ $\displaystyle m_{G}(B_{G}(0,2/\delta))\cdot
m_{H}(\overline{B_{H}(\partial W,d(0,h^{\prime}-h))}),$
where the equality $(*)$ follows from the unique ergodicity of
$(\mathbb{T},G)$ and the fact that $\mu_{\mathbb{T}}(\partial
D(\delta,h^{\prime}-h))=0$.
Now, suppose that $\delta>0$ and $\nu>0$ are given. Let
$\varepsilon=\inf\left\\{\eta>0\;|\;m_{H}\left(B_{H}\left(\partial
W,\eta\right)\right)\geq\nu/m_{G}\left(B_{G}(0,2/\delta)\right)\right\\}.$
Then we have $m_{H}(B_{H}(\partial
W,\varepsilon))\leq\nu/m_{G}(B_{G}(0,2/\delta))$ but at the same time
$m_{H}\big{(}\overline{B_{H}(\partial
W,\varepsilon)}\big{)}\geq\nu/m_{G}(B_{G}(0,2/\delta))$ due to the regularity
of Haar measure. Consequently, if $d_{G}(g,g^{\prime})<\delta/2$ and
$d_{H}(h,h^{\prime})<\varepsilon$, then the first inequality combined with the
above estimate yields that $\hat{\Gamma}_{g,h}$ and
$\hat{\Gamma}_{g^{\prime},h^{\prime}}$ cannot be $(\delta,\nu)$-separated.
For $g\in G$ and $h\in H$, let $\Gamma_{g,h}$ denote some element of
$\Omega(\mbox{\Large$\curlywedge$}(W))$ with
$\mbox{\Large$\curlywedge$}(\mathrm{int}(W)+h)-g\subseteq\Gamma_{g,h}\subseteq\hat{\Gamma}_{g,h}$,
see (9). We cover $A$ by $N=N_{\delta/2}(A)$ balls of radius $\delta/2$ and
$B$ by $M=N_{\varepsilon}(B)$ balls of radius $\varepsilon$ and denote by
$(g_{n})_{n=1}^{N}$ and $(h_{m})_{m=1}^{M}$ the midpoints of these balls. Then
the set $\\{\Gamma_{g_{n},h_{m}}\;|\;n=1,\ldots,N,\ m=1,\ldots,M\\}$ is
$(\delta,\nu)$-spanning due to the above and Corollary 5.3. We obtain the
estimate
$\displaystyle\overline{\mathrm{ac}}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G)$
$\displaystyle=$
$\displaystyle\adjustlimits{\sup}_{\delta>0}{\varlimsup}_{\nu\to
0}\frac{\log\mathrm{Span}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G,\delta,\nu)}{-\log\nu}$
$\displaystyle\leq$
$\displaystyle\adjustlimits{\sup}_{\delta>0}{\varlimsup}_{\varepsilon\to
0}\frac{\log(N_{\delta/2}(A)\cdot N_{\varepsilon}(B))}{-\log
m_{H}\big{(}\overline{B_{H}(\partial W,\varepsilon)}\big{)}}$ $\displaystyle=$
$\displaystyle\varlimsup_{\varepsilon\to 0}\frac{\log N_{\varepsilon}(B)/(-\log\varepsilon)}{\log m_{H}\big{(}\overline{B_{H}(\partial W,\varepsilon)}\big{)}/\log\varepsilon}$ $\displaystyle\leq$
$\displaystyle\frac{\overline{\mathrm{dim}}_{\mathrm{B}}(H)}{\overline{\mathrm{dim}}_{\mathrm{B}}(H)-\overline{\mathrm{dim}}_{\mathrm{B}}(\partial
W)},$
where we used Minkowski’s characterisation in the last step. This completes
the proof. ∎
###### Remark 5.7.
It is not too difficult to see that the above result is optimal in the sense that equality is attained in (10) for some examples while, at the same time, it fails for others and hence cannot hold in general.
* (a)
In order to see that amorphic complexity can be smaller than the bound
provided by (10), let $H=\mathbb{R}$ and suppose $C\subseteq\mathbb{R}$ is an
arbitrary Cantor set of dimension $d\in[0,1)$. Let $W$ be a window given by
the union of $C$ with a countable number of gaps (that is, bounded connected
components of $\mathbb{R}\setminus C$) such that $\partial W=C$. Clearly, this
can be done such that for each $n$, the window $W$ contains fewer than $n$ intervals of length $2^{-n}$ or bigger. If $\varepsilon\in(2^{-n},2^{-n+1}]$,
then each of these intervals contributes at most $2\varepsilon$ to
$m_{H}(W\Delta(W+\varepsilon))$, whereas the union of the other intervals
contributes at most $\varepsilon$ in total (and $\partial W$ does not
contribute since it is of zero measure). Hence, we obtain
$m_{H}(W\Delta(W+\varepsilon))\leq 2\varepsilon n\leq
2\varepsilon(-\log\varepsilon/\log 2+1)$. Accordingly, the computation in the
proof of Theorem 5.5 yields
$\overline{\mathrm{ac}}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G)\leq
1<\frac{1}{1-d}$.
* (b)
The most straightforward examples in which equality is attained in (10) are
given by CPS with $H=\mathbb{R}$. We refrain from discussing the
technicalities (which are in spirit similar to those in the proof of the above
theorem) and simply sketch the main ingredients of the construction. For
$\gamma>2$, consider a middle segment Cantor set $C_{\gamma}$ which is
constructed by always removing the middle $(1-2/\gamma)$-th part of intervals
in the canonical construction of Cantor sets. Observe that $C_{\gamma}$ is of
dimension $\overline{\mathrm{dim}}_{\mathrm{B}}(C_{\gamma})=\log 2/\log\gamma$
with gaps of size $(1-2/\gamma)\cdot\gamma^{-n}$. If $W$ is the window that is
obtained by including all gaps of size $(1-2/\gamma)\cdot\gamma^{-n}$ with $n$
odd, it can be readily checked that
$\lim_{\varepsilon\to 0}\frac{\log m_{H}(W\Delta(W+\varepsilon))}{\log\varepsilon}=1-\log 2/\log\gamma,$
see also the numerical sketch after this remark.
We may assume without loss of generality that we are given an element $(u,v)$ of some generating set of $\mathcal{L}$ with $C_{\gamma}\subseteq[0,v]$. Let
$h_{1},\ldots,h_{\lfloor 1/\varepsilon\rfloor}\in H$ be equidistributed in
$[0,v]\subseteq H$. Similarly to the estimates in the proof of Theorem 5.5, it
can be checked that for small enough $\delta$, we have that
$\\{\Gamma_{0,h_{1}},\ldots,\Gamma_{0,h_{\lfloor 1/\varepsilon\rfloor}}\\}$ is
$(\delta,\nu)$-separated with
$\nu=m_{G}(B_{G}(0,1/\delta))m_{H}(W\Delta(W+\varepsilon))$ as $\varepsilon$
(and hence $\nu$) tends to zero. Then one obtains
$\overline{\mathrm{ac}}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G)=\sup_{\delta>0}\varlimsup_{\nu\to
0}\frac{\log\mathrm{Sep}_{\mathcal{F}}(\Omega(\mbox{\Large$\curlywedge$}(W)),G,\delta,\nu)}{-\log\nu}\geq\frac{1}{1-\overline{\mathrm{dim}}_{\mathrm{B}}(C_{\gamma})}$.
* (c)
Note that the construction sketched in (b) yields uncountably many regular
model sets that lie in different conjugacy classes. In fact, it shows that any
value in $[1,\infty)$ can be realised as the amorphic complexity of a regular
model set.
* (d)
The above considerations indicate that while the structure of the boundary of
the window imposes some upper bound on the complexity of the dynamics of the
resulting model set, it greatly depends on the interior of the window whether
this bound is actually attained or not. This coincides with similar
observations concerning the topological entropy of irregular model sets
[JLO19].
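The following Python sketch (our own numerical illustration; the construction is truncated at a finite depth, and we fix $\gamma=3$ with the convention that the first, largest gap is among those kept in $W$) complements item (b): it assembles the window $W$ as $[0,1]$ minus the even-level gaps of the middle-thirds construction and estimates the scaling exponent of $m_{H}(W\Delta(W+\varepsilon))$. Since kept and removed gaps alternate between levels, the natural comparison is between scales a factor $3^{2}$ apart; the resulting slopes approach $1-\log 2/\log 3\approx 0.369$, in line with the displayed limit in (b).

```python
import math

gamma, depth = 3.0, 16   # middle-segment parameter and truncation depth
removed = []             # even-level gaps (odd-level gaps remain inside W)

def collect(a, b, level):
    if level > depth:
        return
    piece = (b - a) / gamma
    if level % 2 == 0:
        removed.append((a + piece, b - piece))
    collect(a, a + piece, level + 1)
    collect(b - piece, b, level + 1)

collect(0.0, 1.0, 1)
removed.sort()

# W = [0,1] minus the even-level gaps, as a sorted union of disjoint intervals.
W, left = [], 0.0
for a, b in removed:
    W.append((left, a))
    left = b
W.append((left, 1.0))

def inter_measure(U, V):
    """Lebesgue measure of the intersection of two sorted disjoint interval unions."""
    total, i, j = 0.0, 0, 0
    while i < len(U) and j < len(V):
        lo, hi = max(U[i][0], V[j][0]), min(U[i][1], V[j][1])
        total += max(0.0, hi - lo)
        if U[i][1] < V[j][1]:
            i += 1
        else:
            j += 1
    return total

m_W = sum(b - a for a, b in W)
sym = []
for k in range(3, 9):
    eps = 3.0 ** -k
    shifted = [(a + eps, b + eps) for a, b in W]
    sym.append(2.0 * (m_W - inter_measure(W, shifted)))  # = m(W symdiff (W + eps))

slopes = [math.log(sym[i + 2] / sym[i]) / math.log(3.0 ** -2)
          for i in range(len(sym) - 2)]
print(slopes)  # each ~ 1 - log2/log3 ~ 0.369
```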
## References
* [Aus59] J. Auslander. Mean-L-stable systems. Illinois Journal of Mathematics, 3(4):566–579, 1959.
* [BG13] M. Baake and U. Grimm. Aperiodic order. Number 149 in Encyclopedia of mathematics and its applications. Cambridge Univ. Press, Cambridge, 2013.
* [BLM07] M. Baake, D. Lenz, and R.V. Moody. Characterization of model sets by dynamical systems. Ergodic Theory and Dynamical Systems, 27(2):341–382, 2007.
* [BLR07] M. Baake, D. Lenz, and C. Richard. Pure point diffraction implies zero entropy for Delone sets with uniform cluster frequencies. Letters in Mathematical Physics, 82:61–77, 2007.
* [Bow20] L. Bowen. Examples in the entropy theory of countable group actions. Ergodic Theory and Dynamical Systems, 40(10):2593–2680, 2020.
* [CM16] M.I. Cortez and K. Medynets. Orbit equivalence rigidity of equicontinuous systems. Journal of the London Mathematical Society, 94(2):545–556, 2016.
* [Cor06] M.I. Cortez. $\mathbb{Z}^{d}$ Toeplitz arrays. Discrete & Continuous Dynamical Systems - A, 15(3):859–881, 2006.
* [CP08] M.I. Cortez and S. Petite. G-odometers and their almost one-to-one extensions. Journal of the London Mathematical Society, 78(1):1–20, 2008.
* [DG16] T. Downarowicz and E. Glasner. Isomorphic Extensions and Applications. Topological Methods in Nonlinear Analysis, 48(1):321–338, 2016.
* [Edg98] G.A. Edgar. Integral, probability, and fractal measures. Springer, 1998.
* [EG67] W.R. Emerson and F.P. Greenleaf. Covering properties and Følner conditions for locally compact groups. Mathematische Zeitschrift, 102:370–384, 1967.
* [Fal03] K. Falconer. Fractal Geometry. 2nd. John Wiley, 2003.
* [FG20] G. Fuhrmann and M. Gröger. Constant length substitutions, iterated function systems and amorphic complexity. Mathematische Zeitschrift, 295:1385–1404, 2020.
* [FGJ16] G. Fuhrmann, M. Gröger, and T. Jäger. Amorphic complexity. Nonlinearity, 29(2):528–565, 2016.
* [FGL21] G. Fuhrmann, M. Gröger, and D. Lenz. The structure of mean equicontinuous group actions. Israel Journal of Mathematics, 2021. To appear.
* [FK20] G. Fuhrmann and D. Kwietniak. On tameness of almost automorphic dynamical systems for general groups. Bulletin of the London Mathematical Society, 52(1):24–42, 2020.
* [GJ16] M. Gröger and T. Jäger. Some remarks on modified power entropy. In Dynamics and numbers, volume 669 of Contemporary Mathematics, pages 105–122. American Mathematical Society, 2016.
* [GL20] M. Gröger and O. Lukina. Measures and stabilizers of group Cantor actions. Discrete & Continuous Dynamical Systems - A, 2020. Published online.
* [Gla18] E. Glasner. The structure of tame minimal dynamical systems for general groups. Inventiones mathematicae, 211(1):213–244, 2018.
* [GR17] F. García-Ramos. Weak forms of topological and measure-theoretical equicontinuity: relationships with discrete spectrum and sequence entropy. Ergodic Theory and Dynamical Systems, 37(4):1211–1237, 2017.
* [GW16] E. Glasner and B. Weiss. Weak mixing properties for non-singular actions. Ergodic Theory and Dynamical Systems, 36(7):2203–2217, 2016.
* [Hau20] T. Hauser. Relative Topological Entropy for Actions of Non-discrete Groups on Compact Spaces in the Context of Cut and Project Schemes. Journal of Dynamics and Differential Equations, 2020. Published online.
* [HJ97] K.N. Haddad and A.S.A. Johnson. Auslander systems. Proceedings of the American Mathematical Society, 125(7):2161–2170, 1997.
* [HR15] C. Huck and C. Richard. On Pattern Entropy of Weak Model Sets. Discrete & Computational Geometry, 54(3):741–757, 2015.
* [JLO19] T. Jäger, D. Lenz, and C. Oertel. Model sets with positive entropy in Euclidean cut and project schemes. Ann. Sci. École Norm. Sup., 52:1073–1106, 2019.
* [KL16] D. Kerr and H. Li. Ergodic theory. Springer Monographs in Mathematics. Springer, Cham, 2016.
* [Kri07] F. Krieger. Sous-décalages de Toeplitz sur les groupes moyennables résiduellement finis. Journal of the London Mathematical Society, 75(2):447–462, 2007.
* [KS14] J. Kellendonk and L. Sadun. Meyer sets, topological eigenvalues, and Cantor fiber bundles. Journal of the London Mathematical Society, 89(1):114–130, 2014.
* [Kul] M. Kulczycki. Amorphic complexity can take any nonnegative value. In preparation.
* [Kul20] M. Kulczycki. Amorphic complexity can take any nonnegative value in general metric spaces. Dynamical Systems, 2020. Published online.
* [Len09] D. Lenz. Continuity of Eigenfunctions of Uniquely Ergodic Dynamical Systems and Intensity of Bragg Peaks. Communications in Mathematical Physics, 287:225–258, 2009.
* [Lin01] E. Lindenstrauss. Pointwise theorems for amenable groups. Inventiones Mathematicae, 146(2):259–295, 2001.
* [LM06] J.-Y. Lee and R.V. Moody. A characterization of model multi-colour sets. Annales Henri Poincaré, 7:125–143, 2006.
* [LMS02] J.-Y. Lee, R.V. Moody, and B. Solomyak. Pure point dynamical and diffraction spectra. Annales Henri Poincaré, 3(5):1003–1018, 2002.
* [ŁS18] M. Łącka and M. Straszak. Quasi-uniform convergence in dynamical systems generated by an amenable group action. Journal of the London Mathematical Society, 98(3):687–707, 2018.
* [LTY15] J. Li, S. Tu, and X. Ye. Mean equicontinuity and mean sensitivity. Ergodic Theory and Dynamical Systems, 35(8):2587–2612, 2015.
* [Mey72] Y. Meyer. Algebraic numbers and harmonic analysis. North-Holland Publishing Co., Amsterdam-London; American Elsevier Publishing Co., Inc., New York, 1972.
* [OW87] D.S. Ornstein and B. Weiss. Entropy and isomorphism theorems for actions of amenable groups. Journal d’Analyse Mathématique, 48(1):1–141, 1987.
* [Rob96] E.A. Robinson, Jr. The Dynamical Properties of Penrose Tilings. Transactions of the American Mathematical Society, 348(11):4447–4464, 1996.
* [Rob99] E.A. Robinson, Jr. On the table and the chair. Indagationes Mathematicae. New Series, 10(4):581–599, 1999.
* [Rob07] E.A. Robinson, Jr. A Halmos-von Neumann theorem for model sets, and almost automorphic dynamical systems. In B. Hasselblatt, editor, Dynamics, Ergodic Theory and Geometry, volume 54 of Mathematical Sciences Research Institute Publications, pages 243–272. Cambridge University Press, 2007.
* [Sch99] M. Schlottmann. Generalized Model Sets and Dynamical Systems. In M. Baake and R.V. Moody, editors, Directions in Mathematical Quasicrystals, CRM Monograph Series, pages 143–159. American Mathematical Society, Centre de Recherches Mathematiques, 1999.
* [ST17] M. Sabok and T. Tsankov. On the complexity of topological conjugacy of Toeplitz subshifts. Israel Journal of Mathematics, 220(2):583–603, 2017.
* [Str74] R.A. Struble. Metrics in locally compact groups. Compositio Mathematica, 28(3):217–222, 1974.
* [Str05] N. Strungaru. Almost periodic measures and long-range order in Meyer sets. Discrete & Computational Geometry, 33(3):483–505, 2005.
* [Tem92] A. Tempelman. Ergodic theorems for group actions, volume 78 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1992. Informational and thermodynamical aspects, Translated and revised from the 1986 Russian original.
* [Vor12] Y. Vorobets. Notes on the Schreier graphs of the Grigorchuk group. In L. Bowen, R. Grigorchuk, and Y. Vorobets, editors, Dynamical systems and group actions, volume 567 of Contemporary Mathematics, pages 221–248. American Mathematical Society, 2012.
* [ZHL19] B. Zhu, X. Huang, and Y. Lian. The systems with almost Banach mean equicontinuity for abelian group actions. arXiv:1909.00920, 2019.
# Efficient Parameter Mining and Freezing for Continual Object Detection
Angelo G. Menezes1 , Augusto J. Peterlevitz2 , Mateus A. Chinelatto2 , André
C. P. L. F. de Carvalho1
1Institute of Mathematics and Computer Sciences, University of São Paulo, São
Carlos, Brazil
2Computer Vision Department, Eldorado Research Institute, Campinas, Brazil
<EMAIL_ADDRESS>
https://orcid.org/0000-0002-7995-096X https://orcid.org/0000-0003-0575-9633
https://orcid.org/0000-0002-6933-213X https://orcid.org/0000-0002-4765-6459
###### Abstract
Continual Object Detection is essential for enabling intelligent agents to
interact proactively with humans in real-world settings. While parameter-
isolation strategies have been extensively explored in the context of
continual learning for classification, they have yet to be fully harnessed for
incremental object detection scenarios. Drawing inspiration from prior
research that focused on mining individual neuron responses and integrating
insights from recent developments in neural pruning, we propose efficient
ways to identify which layers are the most important for a network to maintain
the performance of a detector across sequential updates. The presented
findings highlight the substantial advantages of layer-level parameter
isolation in facilitating incremental learning within object detection models,
offering promising avenues for future research and application in real-world
scenarios.
## 1 INTRODUCTION
In the era of pervasive computing, computer vision has emerged as a central
field of study with an array of applications across various domains, including
healthcare, autonomous vehicles, robotics, and security systems (Wu et al.,,
2020). For real-world computer vision applications, continual learning, or the
ability to learn from a continuous stream of data and adapt to new tasks
without forgetting previous ones, plays a vital role. It enables models to
adapt to ever-changing environments and learn from a non-stationary
distribution of data, mirroring human-like learning (Shaheen et al., 2021).
This form of learning becomes increasingly significant as the demand grows for
models that can evolve and improve over time without the need to store all the
data and be trained from scratch.
Within computer vision, object detection is a fundamental task aiming at
identifying and locating objects of interest within an image. Historically,
two-stage detectors, comprising a region proposal network followed by a
classification stage, were the norm, but they often suffer from increased
complexity and slower run-time (Zou et al., 2019). The emergence of one-stage
detectors, which combine these stages into a unified framework, has allowed
for more efficient and often more accurate detection (Tian et al., 2020; Lin et al., 2017). In this context, incremental learning strategies for object
detection can further complement one-stage detectors by facilitating the
continuous adaptation of the model to new tasks or classes, making it highly
suitable for real-world applications where the object landscape may change
over time (Li et al., 2019; ul Haq et al., 2021).
Recent works have concluded that catastrophic forgetting is amplified when the magnitude of the gradients calculated to accommodate the new knowledge becomes higher (Mirzadeh et al., 2021; Hadsell et al., 2020). Since the new parameter values may deviate from the optimum that yielded the previous performance, the overall $mAP$ metrics can decline. Traditionally
in continual learning (CL) for classification, researchers have proposed to
tackle this problem directly by applying regularization schemes, often
preventing important neurons from updating or artificially aligning the
gradients for each task. Such techniques have shown fair results at the cost
of being computationally expensive since network parameters are mostly
adjusted individually (Kirkpatrick et al., 2017; Chaudhry et al., 2018).
To account for the changes and keep the detector aligned with its previous performance, most works in continual object detection (COD) mitigate
forgetting with regularization schemes based on complex knowledge distillation
strategies and their combination with replay or the use of external data
(Menezes et al., 2023). However, we argue that the results presented in the standalone work of Li et al. (2018) indicate that there is room to investigate
further parameter-isolation schemes for COD. For these strategies, the most
important neurons for a task are identified, and their changes are softened
across learning updates to protect the knowledge from previous tasks.
In this paper, we propose a thorough investigation of efficient ways to
identify and penalize the change in weights for sequential updates of an
object detector using insights from the neural pruning literature. We show
that by intelligently freezing full significant layers of neurons, one might
be able to alleviate catastrophic forgetting and foster a more efficient and
robust detector.
## 2 RELATED WORK
The concept of using priors to identify the importance of the weights and protect them from updating is not new in CL. Kirkpatrick et al. (2017) proposed a regularization term on the loss function that penalizes the update of important parameters. These parameters are estimated by calculating the Fisher information matrix for each weight, which considers the distance between the current weight values and the optimal weights obtained when optimizing for the previous task. Zenke et al. (2017) similarly regularized the new learning experiences but kept an online estimate of the importance of each parameter. Both strategies compute the change needed for each individual parameter, which can be computationally challenging for large-scale detectors.
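To make this per-parameter bookkeeping concrete, the sketch below shows what an EWC-style penalty looks like in PyTorch. It is a minimal illustration of the idea (a diagonal Fisher estimate anchoring parameters to the previous optimum), under our own naming, not the exact implementation of Kirkpatrick et al. (2017):

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """EWC-style quadratic penalty anchoring weights to the previous task's optimum.

    old_params / fisher: dicts from parameter name to tensors saved after the
    previous task; fisher holds a diagonal Fisher information estimate.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During the incremental step:
#   loss = detection_loss + ewc_penalty(model, old_params, fisher)
# Note that one extra tensor per parameter must be stored and multiplied at every
# step -- exactly the per-weight overhead that layer-level criteria try to avoid.
```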
Also, on the verge of regularization, Li and Hoiem (2017) saved a copy of the model after training for each task and, when learning a new task, applied knowledge distillation on the outputs to make sure the current model could keep responses close to the ones produced in previous tasks. Such a strategy was adapted for COD in the work of Shmelkov et al. (2017), which proposed to distill knowledge from the final logits and bounding box coordinates. Li et al. (2019) went further and introduced an additional distillation on intermediate features of the network. Both strategies have been used in
several subsequent works in COD as strong baselines for performance
comparison.
In CL for classification, Mallya and Lazebnik (2018) conceptualized PackNet, which used concepts from the neural pruning literature to apply an iterative parameter-isolation strategy. It first trained a model for a task and pruned the lowest-magnitude parameters, as they were seen as the least contributors to the model’s performance. Then, the remaining parameters were fine-tuned on the initial task data and kept frozen across new learning updates. Such a strategy is usually able to mitigate forgetting, at the cost of lower plasticity when learning new tasks. Similarly, Li et al. (2018) proposed a strategy, here denoted as MMN, to “mine” important neurons for the incremental learning of object detectors. Their method involved ranking the weights of each layer in the original model and retaining (i.e., fixing the value of) the Top-K neurons to preserve the discriminative information of the original classes, leaving the other parameters free to be updated but not zeroed as initially proposed by PackNet. The importance of each neuron is estimated by sorting them based on the absolute value of their weight. The authors evaluated this strategy with variations of the percentage of neurons to be frozen and found that a value of 75% was ideal for a stability-plasticity balance within the model. Although simple, the final reported performance was on par with the state of the art of the time (Shmelkov et al., 2017).
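For reference, magnitude-based mining as used by the MMN baseline can be expressed in a few lines of PyTorch. This is our own sketch, not the code of Li et al. (2018):

```python
import torch

def mine_top_k_mask(weight, keep_ratio=0.75):
    """Boolean mask marking the top-|w| fraction of a layer's weights as mined."""
    flat = weight.detach().abs().flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return weight.detach().abs() >= threshold

# Mined weights are kept fixed across updates, e.g. by zeroing their gradients
# after loss.backward() and before optimizer.step():
#   weight.grad[mask] = 0.0
```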
The above parameter-isolation strategies for CL consider that the most
important individual neurons will present the highest absolute weight values
and must be kept unchanged when learning new tasks. This is a traditional
network pruning concept and is commonly treated as a strong baseline (LeCun et al., 1989; Li et al., 2016). However, Neural Network Pruning strategies have
evolved to also consider the filter and layer-wise dynamics. For that, the
importance of a filter or the whole layer can be obtained by analyzing the
feature maps after the forward pass of a subset of the whole dataset. Then,
they can be ranked and pruned based on criteria such as proximity to zero,
variation across samples, or information entropy (Liu and Wu, 2019; Luo and Wu, 2017; Wang et al., 2021). Even so, the available network capacity will
be dependent on the number of involved tasks since important parameters are
not allowed to change.
## 3 METHODOLOGY
Based on the recent neural pruning literature, we explore four different ways
to identify important parameters to be kept intact across sequential updates.
The following criteria are used to determine the importance of each network
$layer$ after forwarding a subset of images from the task data and analyzing
the generated feature maps:
Figure 1: Mining important parameters for efficient incremental updates.
* •
Highest mean of activation values: Rank and select the layers with filters
that produced the highest mean of activations.
$I(layer_{i})=\frac{1}{N}\sum_{k=1}^{N}F(x_{k})$ (1)
* •
Highest median of activation values: An alternative that considers the highest
median of activations instead of the mean.
$I(layer_{i})=Med(F(x_{k}))$ (2)
* •
Highest variance: For this criterion, we consider that filters with higher
standard deviation in the generated feature maps across diverse samples are
more important and their layer should be kept unchanged.
$I(layer_{i})=\sqrt{\frac{1}{N}\sum_{k=1}^{N}(F(x_{k})-\mu)^{2}}$ (3)
* •
Highest information entropy: Rank and select the layers based on the highest
information entropy on their feature maps.
$I(layer_{i})=-\sum_{k=1}^{N}P(F(x_{k}))\log_{2}P(F(x_{k}))$ (4)
where $N$ is the number of images in the subset; $F(x_{k})$ is the flattened feature map; $Med$ is the median of the feature map activations; $\mu$ is the mean of the feature map activations; and $P$ is the probability distribution of a feature map.
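To illustrate how such statistics can be gathered in practice, the sketch below ranks convolutional layers with forward hooks. It is a minimal PyTorch illustration under our own naming and assumptions (e.g., a 256-bin histogram for the entropy criterion, and a model that can be called directly on image batches); it is not the paper's exact code:

```python
import torch

@torch.no_grad()
def rank_layers(model, loader, criterion="entropy", n_batches=10):
    """Rank Conv2d layers by a statistic of their feature maps on a data subset."""
    stats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            f = output.detach().flatten().float()
            if criterion == "mean":
                value = f.mean()
            elif criterion == "median":
                value = f.median()
            elif criterion == "std":
                value = f.std()
            else:  # information entropy of a normalized activation histogram
                p = torch.histc(f, bins=256)
                p = p / p.sum()
                p = p[p > 0]
                value = -(p * p.log2()).sum()
            stats.setdefault(name, []).append(value.item())
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules()
               if isinstance(m, torch.nn.Conv2d)]
    for i, (images, _targets) in enumerate(loader):
        if i >= n_batches:
            break
        model(images)
    for h in handles:
        h.remove()
    # Higher statistic -> more important -> candidate for freezing.
    return sorted(stats, key=lambda n: sum(stats[n]) / len(stats[n]), reverse=True)
```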
Additionally, in a separate investigation, we explore whether relaxing the
fixed weight constraint proposed by MMN can allow the model to be more plastic
while keeping decent performance on previous tasks. For that, we propose to simply dampen the changes to the mined task-specific parameters during the training step by multiplying the gradients calculated in the incremental step by a penalty factor. By allowing the model to adjust the important weights in a minimal way (i.e., with a penalty of 1% or 10%) across tasks, we hypothesize that it will be able to circumvent capacity constraints and be more plastic.
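Concretely, this relaxation can be implemented with PyTorch tensor hooks that rescale the gradients at the mined positions. The sketch below is our own minimal version, with the penalty factor corresponding to the 1% / 10% settings above:

```python
def attach_gradient_penalty(param, mined_mask, penalty=0.01):
    """Scale the gradients of mined (important) weights by `penalty`.

    Returns the hook handle so it can be removed before the next task
    (cf. "dump previous gradient hooks" in Algorithm 1 below).
    """
    def hook(grad):
        scaled = grad.clone()
        scaled[mined_mask] = scaled[mined_mask] * penalty
        return scaled
    return param.register_hook(hook)

# Example: soften updates to the mined weights of a convolution by 100x.
#   handle = attach_gradient_penalty(conv.weight, mask, penalty=0.01)
#   ... train on the new task ...
#   handle.remove()
```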
For the proposed layer-mining criteria, we also check which percentage (i.e.,
25, 50, 75, 90) of frozen layers would give the best results. Figure 1
describes the proposed experimental pipeline.
### 3.1 Evaluation Benchmarks
Two different incremental learning scenarios were used to check the
performance of the proposed methods.
Incremental Pascal VOC
We opted to use the incremental version of the well-known Pascal VOC dataset
following the 2-step learning protocol used by the majority of works in the
area (Menezes et al., 2023). We investigated the scenarios in which the model
needs to learn either the last class or the last 10 classes at once, as
described in Figure 2.
Figure 2: Incremental PASCAL VOC Benchmark Evaluated Scenarios.
TAESA Transmission Towers Dataset
The detection of transmission towers and their components using aerial footage
is an essential step for performing inspections on their structures. These
inspections are often performed by onsite specialists to categorize the health
aspect of each component. The advantage of automating such tasks by the use of
drones has been largely approached in this industry setting and is known to
have a positive impact on standardization of the acquisition process and
reducing the number of accidents in locu. However, there is a lack of
successful reports of general applications in this field since it inherently
involves several challenges related to acquiring training data, having to deal
with large domain discrepancies (since energy transmission towers can be
located anywhere in a country), and the necessity to update the model every
time a new accessory or tower needs to be mapped.
To aid in the proposal of solutions for some of the listed issues, we
introduce the TAESA Transmission Towers Dataset. It consists of aerial images
from several drone inspections performed on energy transmission sites
maintained by the TAESA company in Brazil. The full dataset has records from
different transmission sites from four cities with different soil and
vegetation conditions. In this way, the incremental benchmark was organized
into four different learning tasks, each representing data from a specific
transmission site, as illustrated by Figure 3.
Table 1: TAESA Dataset Summary.
Scenario | Set | N° of Images | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total Boxes
---|---|---|---|---|---|---|---|---|---|---|---|---
Task 1 | Training | 526 | 690 | 2228 | 482 | 119 | 381 | 528 | - | - | - | 4428
Task 1 | Validation | 67 | 78 | 245 | 55 | 16 | 29 | 49 | - | - | - | 472
Task 1 | Testing | 69 | 91 | 252 | 49 | 10 | 42 | 60 | - | - | - | 504
Task 2 | Training | 431 | 86 | 950 | 260 | 4 | - | - | 20 | 429 | 8 | 1757
Task 2 | Validation | 55 | 14 | 120 | 32 | - | - | - | 2 | 55 | - | 223
Task 2 | Testing | 55 | 2 | 120 | 29 | 1 | - | - | 3 | 55 | - | 210
Task 3 | Training | 308 | 5 | 726 | 269 | 39 | - | - | 303 | - | 4 | 1346
Task 3 | Validation | 39 | 3 | 92 | 31 | 5 | - | - | 36 | - | - | 167
Task 3 | Testing | 39 | 1 | 89 | 33 | 6 | - | - | 38 | - | - | 167
Task 4 | Training | 227 | 5 | 1242 | 357 | - | 770 | 83 | - | - | 234 | 2691
Task 4 | Validation | 28 | 2 | 165 | 50 | - | 98 | 12 | - | - | 29 | 356
Task 4 | Testing | 29 | - | 177 | 52 | - | 112 | 11 | - | - | 29 | 381

Columns 0–8 give the number of boxes per class label (see Table 2 for the label descriptions).
Figure 3: Sample of images of each task for the TAESA Transmission Towers
Dataset.
Each task can have new classes that were not introduced before and new visuals
for a previously introduced object, making it a challenging “data-incremental”
benchmark. In addition, different from most artificial benchmarks, images were
annotated by several people using a reference sheet of the possible classes
that could be present. As a result, the possibility of missing annotations and label conflicts in posterior tasks was reduced. A summary of the dataset with respect to the number of images and objects, with their descriptions, for each task can be seen in Tables 1 and 2.
Table 2: ID for each class in the TAESA dataset.

Class Label | Description
---|---
0 | Classic Tower
1 | Insulator
2 | Yoke Plate
3 | Clamper
4 | Ball Link
5 | Anchoring Clamp
6 | Guyed Tower
7 | Support Tower
8 | Anchor Tower
### 3.2 Implementation Details
We opted to explore the RetinaNet one-stage detector with a frozen ResNet50 backbone and an unfrozen FPN. The selected freezing criterion is therefore applied only to the neck (i.e., FPN) and head of the model. The training settings are similar to the ones proposed by Shmelkov et al. (2017). For both benchmarks, the model was trained with SGD for 40k steps with an LR of 0.01 for learning the first task. For the incremental tasks in the Pascal VOC benchmark, the model was trained with an LR of 0.001 for another 40k steps when presented with data from several classes, and for 5k steps when only data from the last class was used. For the incremental tasks with the TAESA benchmark, the model was trained with an LR of 0.001 for 5k steps for each new task. The code for training the network was written in Python and used the MMDetection toolbox for orchestrating the detection benchmark and evaluation procedure (Chen et al., 2019). The main steps are depicted in Algorithm 1.
Algorithm 1 Incremental training with parameter mining and freezing for COD
1:M: Model to be trained
2:$Tasks$: List of learning experiences
3:$S$: Type of mining strategy
4:$L$: Percentage $L$ of frozen layers or parameters
5:$P$: Percentage of gradient penalty
6:$C$: Criteria for freezing the layers
7:$N$: Percentage of samples from $Task_{i}$ to be used for calculating
freezing metrics
8:$i\leftarrow 0$
9:for $i$ in range(length($Tasks$)) do:
10: Train model $M$ with data from $Task_{i}$
11: if $S=gradient\_mining$ then
12: Dump previous gradient hooks
13: Attach a hook with the gradient penalty $P$ to the selected percentage $L$
of parameters
14: end if
15: if $S=layer\_freezing$ then
16: Reset $requires\_grad$ of the parameters in each layer
17: Freeze a percentage $L$ of the layers given the chosen criteria $C$ using
statistics from the feature maps obtained after forwarding the $N$ selected
samples
18: end if
19: Fine-tune in $Task_{i}$ for $1k$ steps to regularize parameters for the
next learning experience
20: $i\leftarrow i+1$
21:end for
22:return $M$
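Put together, the loop of Algorithm 1 might look roughly as follows in PyTorch-style code. This is a condensed sketch reusing the helpers sketched in earlier sections; `train`, `fine_tune` and `subset_loader` are placeholders for the MMDetection training routines, not actual MMDetection API:

```python
def incremental_training(model, tasks, strategy, frac, penalty=0.01,
                         criterion="entropy", subset_ratio=0.1):
    """Condensed version of Algorithm 1 (helper functions are placeholders)."""
    handles = []
    for task in tasks:
        train(model, task)                            # standard detector training
        if strategy == "gradient_mining":
            for h in handles:                         # dump previous gradient hooks
                h.remove()
            handles = [attach_gradient_penalty(p, mine_top_k_mask(p, frac), penalty)
                       for p in model.parameters() if p.dim() > 1]
        elif strategy == "layer_freezing":
            for p in model.parameters():              # reset requires_grad
                p.requires_grad = True
            ranked = rank_layers(model, subset_loader(task, subset_ratio), criterion)
            for name in ranked[: int(frac * len(ranked))]:
                for p in model.get_submodule(name).parameters():
                    p.requires_grad = False           # freeze the most important layers
        fine_tune(model, task, steps=1000)            # regularize for the next task
    return model
```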
As for the baselines, for the Incremental Pascal VOC benchmark, we considered the results reported in the work of Li et al. (2019) for the ILOD and RILOD strategies, which also made use of RetinaNet with a ResNet50 backbone in a similar training setting. For the TAESA benchmark, we propose the comparison against Experience Replay using a task-balanced random reservoir buffer. We also compare the results in both benchmarks against our implementation of the MMN strategy from Li et al. (2018), as well as the upper bound when all data is available for training the model. To account for the randomness associated with neural networks, we report the performance of each strategy after averaging three runs with different seeds.
### 3.3 Evaluation Metrics
For checking the performance in the Incremental Pascal VOC benchmark, we use the average $mAP[.5]$ and $\Omega$ for comparisons against the upper bound (i.e., joint training), as usually reported by other works. To better evaluate the potential of each strategy regarding the model's ability to retain and acquire new knowledge, we also apply the metrics proposed by Menezes et al. (2023), known as the rates of stability ($RSD$) and plasticity ($RPD$) deficits, described in Equations 5 and 6.
$\text{RSD}=\frac{1}{N_{old\_classes}}\sum_{i=1}^{N_{old\_classes}}\frac{mAP_{joint,i}-mAP_{inc,i}}{mAP_{joint,i}}\times 100$ (5)

$\text{RPD}=\frac{1}{N_{new\_classes}}\sum_{i=N_{old\_classes}+1}^{N_{old\_classes}+N_{new\_classes}}\frac{mAP_{joint,i}-mAP_{inc,i}}{mAP_{joint,i}}\times 100$ (6)
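Given per-class APs from the jointly trained upper bound and from the incremental model, both deficits reduce to a few lines of code; a small sketch under our own naming:

```python
def rsd_rpd(ap_joint, ap_inc, n_old):
    """Stability (RSD) and plasticity (RPD) deficits in percent.

    ap_joint / ap_inc: per-class AP lists, with the old classes listed first.
    """
    def deficit(indices):
        return 100.0 * sum((ap_joint[i] - ap_inc[i]) / ap_joint[i]
                           for i in indices) / len(indices)
    return deficit(range(n_old)), deficit(range(n_old, len(ap_joint)))

# Example for the 19+1 split of Table 3:
#   rsd, rpd = rsd_rpd(upper_bound_aps, incremental_aps, n_old=19)
```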
Especially for the TAESA benchmark, the performance is measured by the final
$mAP$, with different thresholds, and $mAP[.50]$ after learning all tasks, as
well as with their upper-bound ratios $\Omega_{mAP}$ and $\Omega_{mAP[.50]}$.
Additionally, since the benchmark involves the introduction of a sequence of
tasks, we have modified the existing $RSD$ and $RPD$ metrics to consider
individual tasks instead of classes. In this evaluation scenario, $RSD$
measures the performance deficit against the upper bound $mAP$ in all tasks up
to the last one, while $RPD$ evaluates the performance deficit against the
last learned task.
## 4 RESULTS
### 4.1 Pascal VOC 1-19 + 20
Table 3: Results when learning the last class (TV monitor)
19 + 1 | aero | cycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | bike | person | plant | sheep | sofa | train | tv | mAP | $\Omega_{all}\uparrow$ | RSD ($\%$)$\downarrow$ | RPD ($\%$)$ \downarrow$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Upper-bound | 73.5 | 80.6 | 77.4 | 61.2 | 62.2 | 79.9 | 83.4 | 86.7 | 47.6 | 78 | 68.1 | 85.1 | 83.7 | 82.8 | 79.1 | 42.5 | 75.7 | 64.9 | 79 | 76.2 | 73.4 | - | - | -
First 19 | 77 | 83.5 | 77.7 | 65.1 | 63 | 78.1 | 83.6 | 88.5 | 55.2 | 79.7 | 71.3 | 85.8 | 85.2 | 83 | 80.2 | 44.1 | 75.2 | 69.7 | 81.4 | 0 | 71.4 | - | - | -
New 1 | 48 | 61.2 | 27.6 | 18.1 | 8.1 | 58.7 | 53.4 | 17.1 | 0 | 45.9 | 18.2 | 31.9 | 59.9 | 62.2 | 9.1 | 3.4 | 42.9 | 0 | 50.3 | 63.8 | 34.0 | - | - | -
ILOD | 61.9 | 78.5 | 62.5 | 39.2 | 60.9 | 53.2 | 79.3 | 84.5 | 52.3 | 52.6 | 62.8 | 71.5 | 51.8 | 61.5 | 76.8 | 43.8 | 43.8 | 69.7 | 52.9 | 44.6 | 60.2 | 0.81 | 18.01 | 45.66
RILOD | 69.7 | 78.3 | 70.2 | 46.4 | 59.5 | 69.3 | 79.7 | 79.9 | 52.7 | 69.8 | 57.4 | 75.8 | 69.1 | 69.8 | 76.4 | 43.2 | 68.5 | 70.9 | 53.7 | 40.4 | 65.0 | 0.87 | 10.90 | 51.28
MMN | 25 | 71.8 | 78.8 | 66.5 | 48.5 | 48.6 | 73.4 | 78.8 | 77.1 | 9.1 | 76.5 | 52.3 | 74.7 | 82.4 | 76.3 | 62.3 | 21.5 | 65.9 | 20.9 | 68.2 | 45.6 | 60.0 | 0.82 | 17.06 | 41.70
50 | 73.4 | 79 | 71.5 | 51 | 53.4 | 73.4 | 81.6 | 78.5 | 13.9 | 73.5 | 54.5 | 76.7 | 83.2 | 79.1 | 64 | 27.7 | 66.8 | 36.3 | 69.4 | 43 | 62.5 | 0.85 | 13.23 | 45.24
75 | 74.8 | 79.3 | 72.9 | 54.9 | 54 | 73.9 | 82 | 85 | 25.4 | 77.2 | 60 | 81.8 | 83.5 | 80.2 | 70.1 | 35.9 | 68 | 49.7 | 67.8 | 39.3 | 65.8 | 0.90 | 8.25 | 50.29
90 | 76.5 | 82.4 | 74.4 | 58.4 | 57.9 | 74.2 | 82.3 | 86.7 | 35.7 | 77.6 | 65.1 | 83.7 | 83.8 | 82.2 | 72.5 | 37 | 73.2 | 58.5 | 71.5 | 33.7 | 68.4 | 0.93 | 4.15 | 57.92
Gradient penalty of 1% | 25 | 71.9 | 78.8 | 66.5 | 48.6 | 48.5 | 73.4 | 78.8 | 77.1 | 9.1 | 76.5 | 52.3 | 74.6 | 82.4 | 76.3 | 62.3 | 21.5 | 65.9 | 20.7 | 68 | 45.5 | 59.9 | 0.82 | 17.08 | 41.84
50 | 73.3 | 79 | 71.4 | 51 | 53.3 | 73.4 | 81.6 | 78.4 | 13.8 | 73.5 | 54.4 | 76.7 | 83.2 | 79 | 64 | 27.4 | 66.8 | 34.7 | 69.3 | 43 | 62.4 | 0.85 | 13.43 | 45.24
75 | 75 | 79.3 | 72.9 | 54.9 | 54 | 73.8 | 82 | 84.9 | 25.3 | 77.2 | 59.8 | 81.8 | 83.5 | 80.1 | 70.1 | 35.8 | 67.9 | 49.3 | 67.8 | 39.4 | 65.7 | 0.90 | 8.32 | 50.15
90 | 76 | 82.1 | 74.4 | 57.3 | 57.3 | 74.1 | 82.1 | 85.9 | 34 | 77.4 | 63.4 | 82.9 | 83.4 | 82 | 72.1 | 37.1 | 72.4 | 57.1 | 70.5 | 34.3 | 67.8 | 0.92 | 5.01 | 57.10
Gradient penalty of 10% | 25 | 71.8 | 78.6 | 66.5 | 48 | 48.5 | 73.4 | 78.8 | 77.1 | 9.1 | 76.5 | 52.2 | 74.1 | 82.4 | 76.2 | 62.2 | 21 | 65.6 | 19.9 | 68.2 | 45.4 | 59.8 | 0.81 | 17.31 | 41.97
50 | 73.1 | 78.8 | 71.3 | 49.6 | 53.3 | 74.5 | 81.5 | 78.3 | 11.4 | 73.4 | 54 | 76.4 | 82.8 | 76.8 | 63.8 | 27 | 66.4 | 33.4 | 68.6 | 43.8 | 61.9 | 0.84 | 14.13 | 44.15
75 | 73.9 | 79.2 | 72.9 | 53.5 | 54.2 | 73.4 | 81.8 | 79.6 | 22 | 76.9 | 58.4 | 81.6 | 83.3 | 79.8 | 69.3 | 33.6 | 67.4 | 47.2 | 67.4 | 39.4 | 64.7 | 0.88 | 9.75 | 50.15
90 | 76.2 | 81.8 | 73.6 | 55.9 | 57 | 73.2 | 81.2 | 84.6 | 30.3 | 76.9 | 60.7 | 82.4 | 83.6 | 81.1 | 71.1 | 36.3 | 68.3 | 56 | 67 | 37.2 | 66.7 | 0.91 | 6.76 | 53.15
Freezing based on mean | 25 | 75.1 | 78.8 | 71.6 | 57.3 | 54.3 | 75.3 | 81.1 | 78.6 | 27.5 | 77 | 60.4 | 80.8 | 82.5 | 79.6 | 70.5 | 32.5 | 72.3 | 57.3 | 74.1 | 31.3 | 65.9 | 0.90 | 7.52 | 61.19
50 | 75.3 | 78.6 | 72 | 57.7 | 53.8 | 74.7 | 81 | 79 | 27 | 74.7 | 62.5 | 77.8 | 82.7 | 77.5 | 70.5 | 33.1 | 72 | 56.5 | 73.1 | 32.4 | 65.6 | 0.89 | 8.03 | 59.69
75 | 76 | 79.5 | 73.2 | 58 | 57 | 75.8 | 81.6 | 84.4 | 27.3 | 77.3 | 64.8 | 82.1 | 82.7 | 80.4 | 71.5 | 36 | 72.7 | 57.4 | 74.8 | 25.2 | 66.9 | 0.91 | 5.66 | 69.50
90 | 76.2 | 81.3 | 71.9 | 60.8 | 49.9 | 75.7 | 82.8 | 86.2 | 24.8 | 76.5 | 69.4 | 82 | 82.9 | 80.9 | 68.5 | 26.2 | 71.9 | 60.3 | 79.4 | 41.7 | 67.5 | 0.92 | 6.01 | 47.02
Freezing based on median | 25 | 75.1 | 78.7 | 71.7 | 57.3 | 54.4 | 74.8 | 81.2 | 78.7 | 27.4 | 76.9 | 60.1 | 80.8 | 82.5 | 79.3 | 70.6 | 32.3 | 72.5 | 57.3 | 73.6 | 31.3 | 65.8 | 0.90 | 7.62 | 61.19
50 | 75.3 | 78.8 | 72.3 | 57.7 | 56.7 | 74 | 81.6 | 79.4 | 26.5 | 76.9 | 63.1 | 81.8 | 82.6 | 78.9 | 70.8 | 34.7 | 72.8 | 56.2 | 72.9 | 24.4 | 65.9 | 0.90 | 7.06 | 70.59
75 | 78 | 79.6 | 73.2 | 57.1 | 55.7 | 76.1 | 82.6 | 86.1 | 38.3 | 77.2 | 65.8 | 83.1 | 82.4 | 80.5 | 73.7 | 38.5 | 71.6 | 60.5 | 75.4 | 31.2 | 68.3 | 0.93 | 4.02 | 61.32
90 | 77.4 | 82.1 | 72.7 | 61.3 | 50.3 | 77.2 | 82.9 | 85.8 | 28.8 | 76.4 | 69.5 | 82 | 82.8 | 81.2 | 68.5 | 27.5 | 71.7 | 60.4 | 79.1 | 39.6 | 67.9 | 0.92 | 5.29 | 49.88
Freezing based on std | 25 | 75.1 | 78.9 | 71.6 | 57.3 | 54.3 | 75.3 | 81.1 | 78.6 | 27.5 | 77 | 60.4 | 80.8 | 82.5 | 77.4 | 70.5 | 32.4 | 72.3 | 57.3 | 74 | 31.5 | 65.8 | 0.90 | 7.68 | 60.92
50 | 75.1 | 78.9 | 71.6 | 57.2 | 54.3 | 75.3 | 81.1 | 78.7 | 27.5 | 77 | 60.4 | 80.7 | 82.5 | 77.4 | 70.5 | 32.3 | 72.3 | 57.3 | 74 | 31.4 | 65.8 | 0.90 | 7.70 | 61.05
75 | 75.7 | 79.1 | 72.9 | 57.1 | 56.4 | 75.2 | 81.4 | 79.3 | 25.2 | 77.4 | 61.5 | 81.6 | 82 | 79.5 | 70.6 | 33.7 | 72.9 | 56.1 | 74.5 | 27.9 | 66.0 | 0.90 | 7.12 | 65.82
90 | 77.6 | 79.9 | 73.5 | 57.3 | 56.6 | 77.7 | 82.8 | 86.2 | 38.2 | 77.1 | 65.9 | 82.8 | 82.5 | 80.2 | 73.7 | 39 | 72.4 | 61.5 | 76 | 31.5 | 68.6 | 0.94 | 3.62 | 60.92
Freezing based on entropy | 25 | 75.5 | 79.4 | 72.7 | 56.2 | 57.2 | 74.8 | 81.9 | 84.7 | 28.9 | 77.9 | 62 | 81.4 | 83.1 | 81.1 | 71.6 | 35.3 | 68.4 | 54.7 | 69 | 40.7 | 66.8 | 0.91 | 6.86 | 48.38
50 | 76.8 | 81.6 | 72.5 | 57 | 52.2 | 74.7 | 83.2 | 78.3 | 22.2 | 73.8 | 63.7 | 78.1 | 81.3 | 80 | 70.7 | 25.3 | 71 | 45.4 | 74.4 | 57 | 66.0 | 0.90 | 9.27 | 26.17
75 | 76.9 | 81.8 | 71.9 | 61.4 | 50.4 | 76 | 82.7 | 86 | 29.5 | 76 | 69.6 | 82.3 | 82.9 | 80.7 | 68.6 | 26.7 | 72.1 | 60.9 | 79.6 | 40.5 | 67.8 | 0.92 | 5.41 | 48.65
90 | 77.4 | 81.9 | 72.3 | 61.4 | 50.2 | 76.3 | 82.9 | 85.7 | 30 | 76 | 69.6 | 82.2 | 82.5 | 81.2 | 68.5 | 27.4 | 72 | 60.7 | 79.4 | 38.2 | 67.8 | 0.92 | 5.29 | 51.79
Table 3 describes the performance of each strategy for the $19+1$ scenario.
For this scenario, we noticed that the final $mAP$ and $\Omega_{all}$ would
heavily benefit models that were more stable than plastic since there was a
clear imbalance in the number of represented classes (i.e., $19\rightarrow 1$)
for the incremental step. With that in mind, we analyzed the results that
better balanced the decrease in $RSD$ and $RPD$ since, by splitting the deficits in performance, it becomes easier to understand each model's tendency to forget and its ability to adapt. By comparing the results of applying a gradient penalty against freezing the neurons with the highest magnitude (i.e., MMN in Table 3), we see that allowing the extra plasticity did not produce noticeable effects on performance. However, when 90% of the weights were mined, the
extra adjustments introduced by using 1% of the calculated gradients allowed
the model to beat MMN. Regarding the results of layer-mining, freezing based
on information entropy presented a better balance in $RSD$ and $RPD$, even
against more established techniques such as ILOD and RILOD. For most of the
results, increasing the percentage of frozen layers gave a lower deficit in
stability with the caveat of increasing the difference in $mAP$ against the
upper bound for the newly learned class.
Overall, for the methods operating on individual neurons, freezing a lower percentage of parameters across updates made the networks more adaptable. For the layer-freezing methods, in contrast, this hyperparameter did not greatly affect the learning of the new class but had a significant impact on the detection of classes that had been learned previously.
### 4.2 Pascal VOC 1-10 + 11-20
Table 4: Results when learning the last 10 classes
10 + 10 | aero | cycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | bike | person | plant | sheep | sofa | train | tv | mAP | $\Omega_{all}\uparrow$ | RSD ($\%$) $\downarrow$ | RPD ($\%$) $\downarrow$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Upper-bound | 73.5 | 80.6 | 77.4 | 61.2 | 62.2 | 79.9 | 83.4 | 86.7 | 47.6 | 78 | 68.1 | 85.1 | 83.7 | 82.8 | 79.1 | 42.5 | 75.7 | 64.9 | 79 | 76.2 | 73.4 | - | - | -
First 10 | 79.2 | 85.6 | 76.5 | 66.7 | 65.9 | 78.9 | 85.2 | 86.6 | 60.2 | 84.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 38.5 | - | - | -
New 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 74.6 | 85.7 | 86.1 | 79.9 | 79.8 | 43.9 | 76.3 | 68.5 | 80.5 | 76.3 | 37.6 | - | - | -
ILOD | 67.1 | 64.1 | 45.7 | 40.9 | 52.2 | 66.5 | 83.4 | 75.3 | 46.4 | 59.4 | 64.1 | 74.8 | 77.1 | 67.1 | 63.3 | 32.7 | 61.3 | 56.8 | 73.7 | 67.3 | 62.0 | 0.84 | 17.65 | 13.48
RILOD | 71.7 | 81.7 | 66.9 | 49.6 | 58 | 65.9 | 84.7 | 76.8 | 50.1 | 69.4 | 67 | 72.8 | 77.3 | 73.8 | 74.9 | 39.9 | 68.5 | 61.5 | 75.5 | 72.4 | 67.9 | 0.93 | 7.59 | 7.29
MMN | 25 | 59.2 | 37.4 | 38.7 | 33.3 | 17.2 | 46.3 | 52.9 | 57.5 | 5.9 | 45.7 | 62.9 | 73.6 | 76 | 68.8 | 77.1 | 37.6 | 62.9 | 60.9 | 72.5 | 73.5 | 53.0 | 0.72 | 45.84 | 9.72
50 | 65.0 | 42.7 | 43.4 | 37.6 | 19.8 | 53.1 | 58.5 | 58.5 | 6.0 | 46.0 | 59.4 | 72.6 | 73.1 | 69.5 | 75.5 | 35.7 | 60.0 | 59.2 | 69.2 | 71.7 | 53.8 | 0.73 | 40.89 | 12.44
75 | 61.5 | 40.3 | 49.0 | 35.8 | 19.5 | 48.0 | 54.8 | 52.3 | 10.5 | 44.0 | 62.5 | 71.0 | 74.1 | 68.4 | 75.6 | 36.2 | 59.6 | 61.3 | 69.6 | 70.7 | 53.2 | 0.73 | 42.91 | 12.00
90 | 67.2 | 24.9 | 56 | 39.9 | 31.2 | 59.1 | 62.2 | 64.6 | 6.5 | 53.4 | 34.1 | 53.5 | 35.2 | 63.1 | 72.1 | 27.5 | 30 | 45.3 | 61.9 | 62.9 | 47.5 | 0.65 | 36.18 | 34.27
Gradient penalty of 1% | 25 | 59.2 | 37.4 | 38.5 | 33.3 | 17.1 | 46.1 | 52.8 | 57.6 | 5.9 | 45.8 | 62.9 | 73.5 | 76.1 | 68.6 | 77.1 | 37.4 | 62.9 | 61 | 72.6 | 73.5 | 53.0 | 0.72 | 45.90 | 9.74
50 | 64.9 | 43.9 | 43.3 | 37.2 | 19.3 | 53.1 | 58.4 | 58.4 | 5.6 | 46.0 | 59.3 | 72.7 | 73.1 | 69.6 | 75.6 | 35.8 | 60.2 | 59.2 | 69.4 | 71.8 | 53.8 | 0.73 | 40.91 | 12.34
75 | 63.6 | 41.0 | 49.9 | 36.7 | 19.6 | 48.4 | 57.0 | 53.0 | 10.5 | 43.9 | 61.9 | 71.5 | 74.3 | 67.9 | 75.4 | 35.8 | 59.5 | 61.1 | 69.4 | 70.4 | 53.5 | 0.73 | 41.84 | 12.23
90 | 67.2 | 25.1 | 55.2 | 41 | 30.1 | 58.9 | 62.2 | 63.9 | 5 | 52.9 | 38.2 | 55 | 44.5 | 64.9 | 72.5 | 28.6 | 35 | 47.7 | 62.6 | 64.4 | 48.7 | 0.66 | 36.66 | 30.49
Gradient penalty of 10% | 25 | 59 | 36.8 | 36.5 | 33 | 16.5 | 46 | 52.7 | 56.8 | 5.8 | 45.8 | 63.1 | 73.7 | 76.5 | 68.6 | 77.1 | 37.9 | 63.2 | 61.1 | 73 | 73.3 | 52.8 | 0.72 | 46.55 | 9.48
50 | 67.2 | 44 | 43.5 | 38 | 20.4 | 51.8 | 60.8 | 60.5 | 4.7 | 46.5 | 59.1 | 72.7 | 73.2 | 68.9 | 75.6 | 34.7 | 59.6 | 59 | 69.8 | 71 | 54.1 | 0.74 | 39.94 | 12.74
75 | 66.5 | 44.1 | 50.8 | 37.0 | 19.5 | 52.1 | 57.2 | 56.1 | 8.3 | 46.2 | 60.4 | 70.2 | 73.0 | 68.7 | 75.4 | 35.4 | 59.3 | 58.7 | 69.3 | 70.9 | 53.9 | 0.73 | 39.93 | 13.08
90 | 67.6 | 25.8 | 50.6 | 39.5 | 24.9 | 57.2 | 61.5 | 58.5 | 4.7 | 47.6 | 57.2 | 68.1 | 69.8 | 70.7 | 75.3 | 34.0 | 55.1 | 57.7 | 68.3 | 69.3 | 53.2 | 0.72 | 39.88 | 15.24
Freezing based on mean | 25 | 63 | 48.4 | 57.3 | 36.1 | 19.9 | 57.1 | 49.8 | 66 | 7.7 | 45 | 54 | 64 | 64 | 70.4 | 72.1 | 33.9 | 49.7 | 58.6 | 62.1 | 66.6 | 52.3 | 0.71 | 38.18 | 19.31
50 | 63.4 | 48.6 | 58 | 39.1 | 19 | 57.4 | 50 | 66.2 | 8.4 | 44.3 | 53.8 | 63.3 | 63.8 | 70.3 | 72.2 | 33.2 | 49.8 | 58.5 | 61.6 | 67.1 | 52.4 | 0.71 | 37.63 | 19.56
75 | 58.8 | 49.1 | 55.6 | 41.1 | 17.5 | 58.1 | 43.5 | 67.5 | 11 | 43.3 | 47 | 66 | 54.3 | 70 | 70.2 | 32.4 | 47.4 | 58.8 | 51 | 67.5 | 50.5 | 0.69 | 38.84 | 23.51
90 | 54.2 | 49.7 | 51.2 | 39.8 | 23.9 | 60.1 | 44.1 | 70.7 | 14.2 | 46.6 | 24.1 | 57.9 | 46.7 | 63.5 | 59.3 | 28.8 | 42 | 58.4 | 43.8 | 59.4 | 46.9 | 0.64 | 37.61 | 34.51
Freezing based on median | 25 | 60.9 | 48.3 | 57.8 | 34.3 | 23 | 57.3 | 43.8 | 65.7 | 10.4 | 46.2 | 55.1 | 65.2 | 67.7 | 71.3 | 72.8 | 33.9 | 52.8 | 59.3 | 65 | 68.3 | 53.0 | 0.72 | 38.54 | 17.13
50 | 58.5 | 48.8 | 55.4 | 41.5 | 18.7 | 58.4 | 43.8 | 70.5 | 11 | 41.9 | 53.7 | 66.8 | 54.2 | 71.2 | 71.8 | 35.1 | 49.4 | 59.6 | 52.6 | 68.7 | 51.6 | 0.70 | 38.43 | 20.99
75 | 54.6 | 48.9 | 52.7 | 38.4 | 24.6 | 59.3 | 44.1 | 70.9 | 14.1 | 47.2 | 29.4 | 58.7 | 49.5 | 63.6 | 60.4 | 29 | 42.8 | 58.6 | 45.8 | 59.9 | 47.6 | 0.65 | 37.57 | 32.62
90 | 53.6 | 42.4 | 51.9 | 38 | 23.8 | 60.1 | 44.1 | 71.3 | 14.4 | 47.5 | 28 | 58.7 | 49 | 64.7 | 60.1 | 25.4 | 42.3 | 58.4 | 46.8 | 59.7 | 47.0 | 0.64 | 38.62 | 33.25
Freezing based on std | 25 | 62.7 | 48.5 | 57.4 | 36.2 | 19.6 | 57.1 | 49.8 | 66.1 | 7.6 | 45.2 | 54.1 | 64.1 | 64 | 70.2 | 72.2 | 33.9 | 49.8 | 58.4 | 62.1 | 66.4 | 52.3 | 0.71 | 38.20 | 19.34
50 | 62.6 | 48.4 | 56.8 | 38.5 | 19.2 | 57.8 | 50 | 65.9 | 7 | 45.1 | 52.9 | 63.8 | 63.7 | 70.2 | 71.8 | 32.8 | 49.9 | 57.7 | 60.7 | 66.4 | 52.1 | 0.71 | 38.05 | 20.06
75 | 62.1 | 47.3 | 57.8 | 38.8 | 19.5 | 58.2 | 50.1 | 65.3 | 8.5 | 44.6 | 53.4 | 62.7 | 64 | 69.9 | 71.5 | 31.7 | 51.1 | 57.1 | 60.8 | 65.1 | 52.0 | 0.71 | 37.93 | 20.41
90 | 57.2 | 40.8 | 55 | 29.8 | 11.5 | 57.3 | 44.2 | 65.5 | 10.8 | 41.7 | 39.6 | 58.9 | 55.3 | 62.2 | 68.9 | 33.3 | 55.2 | 60 | 54.4 | 64.1 | 48.3 | 0.66 | 43.16 | 25.24
Freezing based on entropy | 25 | 68.3 | 42.3 | 49.8 | 42.1 | 15.3 | 53.3 | 60.8 | 60.9 | 4.8 | 51.4 | 49.9 | 71.4 | 72.4 | 71 | 75.5 | 36.2 | 53.5 | 57.5 | 70.4 | 70.2 | 53.9 | 0.73 | 38.36 | 14.87
50 | 60.8 | 34.1 | 48.2 | 30.1 | 32 | 51.8 | 42.2 | 56.9 | 14.9 | 45.3 | 55.7 | 63 | 67.5 | 66.5 | 73 | 32.5 | 46.9 | 58.8 | 62.3 | 67.4 | 50.5 | 0.69 | 42.82 | 19.56
75 | 61.2 | 31.9 | 49.4 | 32.8 | 29.2 | 55.7 | 46.5 | 57.4 | 10.6 | 47.7 | 55.8 | 66.6 | 65.4 | 64.5 | 71.8 | 30.8 | 45.7 | 57.7 | 63.8 | 66.4 | 50.5 | 0.69 | 41.99 | 20.25
90 | 54.6 | 53.6 | 63.8 | 46.0 | 24.4 | 55.9 | 53.4 | 69.4 | 20.0 | 51.6 | 31.4 | 53.7 | 49.1 | 59.2 | 40.0 | 7.5 | 31.0 | 55.0 | 41.1 | 34.8 | 44.8 | 0.61 | 32.43 | 45.58
Table 4 reports the results for the $10+10$ alternative. For this scenario,
the final $mAP$ and $\Omega_{all}$ became more relevant as there was an equal
representation of classes for their calculations. Results for applying a
penalty to the gradient of selected neurons showed a slightly superior
performance compared to completely freezing them. This was especially true in
all scenarios where a 10% penalty was applied. For this benchmark, freezing
25% of the layers based on information entropy yielded the best results,
followed by using the median of the activations at the same percentage of frozen layers. However, the final $mAP$ and $\Omega_{all}$ indicate that these simple strategies might struggle to compete against traditional methods on more complex benchmarks. Nonetheless, given their ease of implementation, they can still serve as quick and strong baselines compared to fine-tuning and MMN.
Overall, for the $10+10$ scenario, all evaluated strategies produced comparable final results in terms of $mAP$ and $\Omega_{all}$. Nevertheless, the best outcomes were observed when freezing or penalizing 50% or less of the parameters. Since most detectors based on deep neural networks are overparameterized and not optimized directly for sparse connections, freezing more than 50% of the available parameters or layers might severely limit the network's capacity to learn new objects. We believe this to be true mainly for learning new tasks with
imbalanced category sets and objects that do not present visual similarities
with the ones previously learned. The Incremental Pascal VOC benchmark
presents not only an imbalanced occurrence of each category but also a
considerable semantic difference for the labels of the two tasks, with the
first having more instances from outdoor environments and the second focusing
on instances from indoor scenes. This can be further investigated by exploring
task-relatedness as a way to define the parameters that determine how layer-
freezing should take place between updates.
Interestingly, as also shown in the final evaluation remarks of the PackNet
strategy for classification, the final performance of the incremental model
can be weakened since it only uses a fraction of the entire parameter set to
learn new tasks (Delange et al., 2021). However, this tradeoff is necessary
to ensure stable performance in the tasks that were initially learned.
Considering the necessity for quick adaptation in constrained environments, a hyperparameter that adjusts the plasticity of the model can be used to preserve the performance in previous scenarios while slightly adjusting the network to the new circumstances. This feature can be especially
beneficial when new updates with mixed data (i.e., old and new samples) are
expected in the future.
### 4.3 TAESA Benchmark
Table 5: Results for incremental training on the TAESA Benchmark
Strategy | % | Feature / Penalty | T1 mAP | T1 mAP[.50] | T2 mAP | T2 mAP[.50] | T3 mAP | T3 mAP[.50] | T4 mAP | T4 mAP[.50] | Avg. mAP | Avg. mAP[.50] | $\Omega_{mAP}\uparrow$ | $\Omega_{mAP[.50]}\uparrow$ | $RSD_{mAP}\downarrow$ | $RPD_{mAP}\downarrow$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Freeze | 25 | mean | 43.7 | 67.9 | 5.6 | 13.5 | 13.3 | 24.1 | 35.1 | 60.8 | 24.4 | 41.6 | 0.55 | 0.60 | 51.18 | 28.22
median | 43.8 | 65.4 | 9.7 | 21 | 15.2 | 36.9 | 37.9 | 64.5 | 26.6 | 47.0 | 0.60 | 0.67 | 46.48 | 22.49
std | 41.7 | 62.5 | 10.5 | 21.6 | 19.3 | 32.9 | 38.6 | 64.9 | 27.5 | 45.5 | 0.62 | 0.65 | 44.28 | 21.06
entropy | 41.2 | 61.4 | 15.6 | 30.3 | 21 | 34.7 | 39.8 | 67.1 | 29.4 | 48.4 | 0.66 | 0.69 | 39.33 | 18.61
50 | mean | 44.0 | 69.6 | 5.8 | 13.9 | 11.8 | 23.2 | 35 | 61 | 24.2 | 41.9 | 0.55 | 0.60 | 51.96 | 28.43
median | 43.3 | 64.7 | 10.5 | 22.5 | 14.8 | 26.3 | 37.2 | 62.6 | 26.5 | 44.0 | 0.60 | 0.63 | 46.52 | 23.93
std | 41.4 | 64.4 | 10.9 | 22.8 | 19.8 | 34.3 | 38.4 | 64.9 | 27.6 | 46.6 | 0.62 | 0.67 | 43.77 | 21.47
entropy | 41.0 | 61.8 | 16.6 | 31.5 | 22.2 | 37.8 | 39 | 65.9 | 29.7 | 49.2 | 0.67 | 0.71 | 37.77 | 20.25
75 | mean | 47.9 | 71.4 | 3.5 | 9.8 | 12.4 | 24.1 | 31 | 55.3 | 31.4 | 49.0 | 0.71 | 0.70 | 50.28 | 36.61
median | 45.9 | 65.3 | 6.8 | 17.5 | 17.4 | 30.6 | 32.9 | 60 | 30.9 | 48.7 | 0.70 | 0.70 | 45.37 | 32.72
std | 44.1 | 63.2 | 10.8 | 24 | 19.3 | 32.5 | 34.4 | 62.1 | 30.5 | 48.7 | 0.69 | 0.70 | 42.14 | 29.65
entropy | 43.7 | 63.1 | 11.6 | 21.9 | 22.5 | 38.5 | 36.6 | 62.3 | 30.4 | 48.7 | 0.69 | 0.70 | 39.33 | 25.15
90 | mean | 46.2 | 69.9 | 6.8 | 13.9 | 9.9 | 20.7 | 23.3 | 44.9 | 21.6 | 37.4 | 0.49 | 0.54 | 50.95 | 52.35
median | 45.4 | 68.8 | 8.6 | 22.8 | 15.8 | 29.9 | 25 | 48.5 | 23.7 | 42.5 | 0.53 | 0.61 | 45.62 | 48.88
std | 44.8 | 68.6 | 13.1 | 27.6 | 18.4 | 33.4 | 25.7 | 49.7 | 25.5 | 44.8 | 0.58 | 0.64 | 40.54 | 47.44
entropy | 45.6 | 67.0 | 13.9 | 28.5 | 19.5 | 33.8 | 28.4 | 53 | 26.8 | 45.6 | 0.61 | 0.65 | 38.43 | 41.92
Grad | 25 | 0.1 | 44.2 | 67.8 | 7.5 | 16.6 | 20 | 34.5 | 37.2 | 64.4 | 27.2 | 45.8 | 0.61 | 0.66 | 44.14 | 23.93
0.01 | 29.2 | 65.7 | 8.8 | 18 | 19.9 | 34.1 | 37.9 | 64.7 | 24.0 | 45.6 | 0.54 | 0.65 | 54.84 | 22.49
50 | 0.1 | 45.7 | 69.7 | 9.7 | 21.4 | 18.8 | 32.6 | 35.2 | 61.7 | 27.4 | 46.4 | 0.62 | 0.67 | 42.16 | 28.02
0.01 | 45.4 | 67.9 | 11.2 | 23.1 | 20 | 34.9 | 37.1 | 64.3 | 28.4 | 47.5 | 0.64 | 0.68 | 40.28 | 24.13
75 | 0.1 | 47.5 | 70.6 | 9.7 | 23 | 18.5 | 31.6 | 31.5 | 57.7 | 26.8 | 45.7 | 0.61 | 0.66 | 40.97 | 35.58
0.01 | 47.0 | 71.6 | 21.1 | 36.5 | 19.2 | 32.6 | 32.3 | 59.4 | 29.9 | 50.0 | 0.67 | 0.72 | 31.96 | 33.95
90 | 0.1 | 48.7 | 72.9 | 15.6 | 31.1 | 17.7 | 32 | 28 | 53.1 | 27.5 | 47.3 | 0.62 | 0.68 | 36.09 | 42.74
0.01 | 49.2 | 73.5 | 20.4 | 39.4 | 18 | 32.3 | 27.9 | 53.7 | 28.9 | 49.7 | 0.65 | 0.71 | 31.69 | 42.94
MMN | 25 | - | 44.6 | 68.0 | 5.1 | 12.2 | 17.8 | 31.3 | 33.5 | 60 | 25.3 | 42.9 | 0.57 | 0.62 | 47.36 | 31.49
50 | - | 47.3 | 69.7 | 4.2 | 10.1 | 17.4 | 31.7 | 31.5 | 58 | 25.1 | 42.4 | 0.57 | 0.61 | 46.33 | 35.58
75 | - | 49.4 | 72.7 | 6.7 | 15.9 | 15.5 | 28.8 | 28.1 | 52.1 | 24.9 | 42.4 | 0.56 | 0.61 | 44.16 | 42.54
90 | - | 48.6 | 72.0 | 10.4 | 18.6 | 14.2 | 26.8 | 13.8 | 32.5 | 21.7 | 37.5 | 0.49 | 0.54 | 42.97 | 71.78
Fine tuning | - | - | 44.2 | 66.6 | 5.4 | 12.8 | 12 | 23.5 | 34.9 | 61.5 | 24.1 | 41.1 | 0.54 | 0.59 | 52.02 | 28.63
Experience Replay | - | - | 46.7 | 71.3 | 21.5 | 37.8 | 24.9 | 40.6 | 42.5 | 71.9 | 33.9 | 55.4 | 0.77 | 0.80 | 27.40 | 13.09
Ground Truth | - | - | 56.8 | 83.2 | 35.7 | 58.1 | 35.8 | 62.1 | 48.9 | 75.3 | 44.3 | 69.7 | - | - | - | -
Table 5 summarizes the results on the proposed benchmark. As the benchmark involves class-incremental and domain-incremental aspects, we
noticed that when there is little drift in the appearance of previously known
objects that show up in the new task images, these instances reinforce the
“old knowledge” and can be considered a mild form of replay. This is evidenced by the fact that the forgetting in the fine-tuning approach is “soft”
when compared to other artificial benchmarks, such as Incremental Pascal VOC,
in which classes that do not appear in further training sets are completely
forgotten. Furthermore, the benchmark was organized in a way that minimized
label conflicts, leading to less interference in the weights assigned to each
class.
Applying a penalty to the gradients of important parameters improved on the results of leaving them frozen (i.e., MMN) in all scenarios. The best results were seen when applying the 1% penalty to 50% or more of the important
weights. Due to a slight imbalance between the amount of available data and classes in each task, and the fact that the first task had more learning steps, keeping most of the old weights unchanged, or only slightly adjusting them to new tasks, proved to be effective for average performance.
However, when checking the performance in the intermediate tasks (i.e., Tasks
2 and 3) and comparing them to the fine-tuning and upper-bound results, we see
that forgetting still occurs, but to a lesser extent than in the other
evaluated methods.
Selecting the most important layers based on information entropy was the least sensitive to the percentage of layers chosen and generally yielded superior outcomes compared to the other statistical measures. Yet, freezing 75% of the layers based on the mean of feature map activations seemed to produce the best results, achieving a good balance in the final $\Omega_{mAP}$ and $\Omega_{mAP[.50]}$, although it significantly impacted knowledge retention in intermediate tasks. The other layer-freezing methods attained similar results, but with less forgetting in the intermediate tasks. This highlights the necessity of looking at the big picture and not only at specific metrics based on averages.
Although the full benchmark is challenging, involving both new classes and new domains, the initial task's diverse and abundant data helped prepare the model to learn with small adjustments in the new task scenarios. All evaluated strategies performed better than the fine-tuning and MMN baselines but fell behind the results achieved through experience replay. For scenarios where saving samples is not feasible, a hybrid strategy involving parameter isolation and fake labeling may help reduce the performance gap against replay methods. Nevertheless, when replay is possible, combining it with parameter-isolation strategies can be seen as a promising direction for investigation.
## 5 CONCLUSIONS
In this paper, we discussed different ways to mitigate forgetting when
learning new object detection tasks by using simple criteria to freeze layers
and heuristics for how important parameters should be updated. We found that
mining and freezing layers based on feature map statistics, particularly on
their information entropy, yielded better results than freezing individual
neurons when updating the network with data from a single class. However, when
introducing data from several classes, the simple arrangements brought by the layer-freezing strategy were not as successful. The layer-freezing strategies mostly outperformed the mining of individual neurons but underperformed when directly compared to more traditional and complex knowledge-distillation methods such as ILOD and RILOD, or to experience replay. Additionally, the results showed that applying individual penalties to the gradients of important neurons did not differ significantly from freezing them outright.
As a future line of work, it may be beneficial to explore fine-grained
freezing solutions that involve mining and freezing individual convolutional
filters based on their internal statistics. Hybrid techniques that balance
learning with the use of experience replay could also be proposed to prevent
forgetting and adapt more quickly to new scenarios. Furthermore, it would be
useful to investigate measures of task-relatedness as a means of defining the
freezing coefficients among sequential updates.
## ACKNOWLEDGEMENTS
This study was funded in part by the Coordenação de Aperfeiçoamento de Pessoal
de Nível Superior - Brasil (CAPES) - Finance Code 001 and by ANEEL (Agência
Nacional de Energia Elétrica) and TAESA (Transmissora Aliança de Energia
Elétrica S.A.), project PD-07130-0059/2020. The authors also would like to
thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
and the Eldorado Research Institute for supporting this research.
## REFERENCES
* Chaudhry et al., (2018) Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny, M. (2018). Efficient lifelong learning with a-gem. arXiv preprint arXiv:1812.00420.
* Chen et al., (2019) Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., et al. (2019). Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155.
* Delange et al., (2021) Delange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., and Tuytelaars, T. (2021). A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
* Hadsell et al., (2020) Hadsell, R., Rao, D., Rusu, A. A., and Pascanu, R. (2020). Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences.
* Kirkpatrick et al., (2017) Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526.
* LeCun et al., (1989) LeCun, Y., Denker, J., and Solla, S. (1989). Optimal brain damage. Advances in neural information processing systems, 2.
* Li et al., (2019) Li, D., Tasci, S., Ghosh, S., Zhu, J., Zhang, J., and Heck, L. (2019). Rilod: Near real-time incremental learning for object detection at the edge. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, pages 113–126.
* Li et al., (2016) Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. (2016). Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710.
* Li et al., (2018) Li, W., Wu, Q., Xu, L., and Shang, C. (2018). Incremental learning of single-stage detectors with mining memory neurons. In 2018 IEEE 4th International Conference on Computer and Communications (ICCC), pages 1981–1985. IEEE.
* Li and Hoiem, (2017) Li, Z. and Hoiem, D. (2017). Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947.
* Lin et al., (2017) Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988.
* Liu and Wu, (2019) Liu, C. and Wu, H. (2019). Channel pruning based on mean gradient for accelerating convolutional neural networks. Signal Processing, 156:84–91.
* Luo and Wu, (2017) Luo, J.-H. and Wu, J. (2017). An entropy-based pruning method for cnn compression. arXiv preprint arXiv:1706.05791.
* Mallya and Lazebnik, (2018) Mallya, A. and Lazebnik, S. (2018). Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7765–7773.
* Menezes et al., (2023) Menezes, A. G., de Moura, G., Alves, C., and de Carvalho, A. C. (2023). Continual object detection: A review of definitions, strategies, and challenges. Neural Networks.
* Mirzadeh et al., (2021) Mirzadeh, S. I., Chaudhry, A., Hu, H., Pascanu, R., Gorur, D., and Farajtabar, M. (2021). Wide neural networks forget less catastrophically. arXiv preprint arXiv:2110.11526.
* Shaheen et al., (2021) Shaheen, K., Hanif, M. A., Hasan, O., and Shafique, M. (2021). Continual learning for real-world autonomous systems: Algorithms, challenges and frameworks. arXiv preprint arXiv:2105.12374.
* Shmelkov et al., (2017) Shmelkov, K., Schmid, C., and Alahari, K. (2017). Incremental learning of object detectors without catastrophic forgetting. In Proceedings of the IEEE international conference on computer vision, pages 3400–3409.
* Tian et al., (2020) Tian, Z., Shen, C., Chen, H., and He, T. (2020). Fcos: A simple and strong anchor-free object detector. IEEE Transactions on Pattern Analysis and Machine Intelligence.
* ul Haq et al., (2021) ul Haq, Q. M., Ruan, S.-J., Haq, M. A., Karam, S., Shieh, J. L., Chondro, P., and Gao, D.-Q. (2021). An incremental learning of yolov3 without catastrophic forgetting for smart city applications. IEEE Consumer Electronics Magazine.
* Wang et al., (2021) Wang, J., Jiang, T., Cui, Z., and Cao, Z. (2021). Filter pruning with a feature map entropy importance criterion for convolution neural networks compressing. Neurocomputing, 461:41–54.
* Wu et al., (2020) Wu, X., Sahoo, D., and Hoi, S. C. (2020). Recent advances in deep learning for object detection. Neurocomputing, 396:39–64.
* Zenke et al., (2017) Zenke, F., Poole, B., and Ganguli, S. (2017). Continual learning through synaptic intelligence. In International conference on machine learning, pages 3987–3995. PMLR.
* Zou et al., (2019) Zou, Z., Shi, Z., Guo, Y., and Ye, J. (2019). Object detection in 20 years: A survey. arXiv preprint arXiv:1905.05055.
# Lego-Features: Exporting modular encoder features
for streaming and deliberation ASR
###### Abstract
In end-to-end (E2E) speech recognition models, a representational tight-
coupling inevitably emerges between the encoder and the decoder. We build upon
recent work that has begun to explore building encoders with modular encoded
representations, such that encoders and decoders from different models can be
stitched together in a zero-shot manner without further fine-tuning. While
previous research only addresses full-context speech models, we explore the
problem in a streaming setting as well. Our framework builds on top of
existing encoded representations, converting them to modular features, dubbed
_Lego-Features_, without modifying the pre-trained model. The features
remain interchangeable when the model is retrained with distinct
initializations. Though sparse, we show that the Lego-Features are powerful
when tested with RNN-T or LAS decoders, maintaining high-quality downstream
performance. They are also rich enough to represent the first-pass prediction
during two-pass deliberation. In this scenario, they outperform the N-best
hypotheses, since they do not need to be supplemented with acoustic features
to deliver the best results. Moreover, generating the Lego-Features does not
require beam search or auto-regressive computation. Overall, they present a
modular, powerful and cheap alternative to the standard encoder output, as
well as the N-best hypotheses.
Index Terms— modular, representations, zero-shot stitching
## 1 Introduction
E2E speech recognition models, which combine acoustic, pronunciation and
language models from conventional systems [1] into one neural network, have
become widely used, especially for on-device applications [2, 3, 4, 5, 6, 7].
Since they are much smaller than conventional models, and their inference
speed is often much faster [2, 3, 8, 9], they work well for various streaming
applications. They typically use an encoder-decoder architecture [10]. Like
most deep neural networks, the whole architecture is usually trained end to
end. The encoder implicitly learns to serve the subsequent decoder layers, and
thus conversely, the decoder is thoroughly oriented towards inputs coming from
the specific encoder that it has been trained with. Therefore, encoders and
decoders from different models or training runs are generally not
interchangeable without further E2E training.
This tight coupling between both components stands in the way of a flexible,
modular architecture. Speech encoders that have been trained on high-resource
ASR data can serve as foundation models for other tasks like sentiment
analysis [11] or low-resource translation [12], to name a few. However, this
presents a challenge if a shared encoder representation is used for multiple
downstream tasks: When the ASR encoder is retrained, all downstream models
must be retrained as well. Hence, it would be more practical if each component
could be developed and updated independently. To that end, we present a method
for building modular speech encoder features, where different versions of the
encoder can be plugged into the decoder in a zero-shot stitching manner
without fine-tuning.
Our method works by building on top of an existing base encoder, which is kept
frozen. We adapt the Beam-Convolution scheme described in [13] to train
streaming modular encoded representations, which we call Lego-Features. To
produce them, the original (fixed) continuous encoded features pass through a
few extra trainable “Exporter” layers, then through a CTC decoder, which is
trained with an auxiliary CTC loss. Lego-Features are defined as the sorted
top $K$ CTC logit indices at every frame, see Figure 1. The logits operate
over a discrete space (here: wordpiece vocabulary) and are grounded in the
transcript text, which is why they tend to be modular. Overall, the
traditional encoder features are forced through a tight discretizing
bottleneck, which protects downstream models from coupling themselves to fine
details in the encoded representation. Downstream consumers of Lego-Features
need to first re-embed them, since they come in as sparse indices.
[13, 14] have shown how this tight bottleneck still produces a powerful
representation which is sufficiently informative for downstream ASR decoders.
They also perform a “modularity test”: The downstream decoder is kept
constant, but gets input with a new version of the encoded representation,
which is obtained by retraining the encoder from scratch using a different
initialization. The switch is done in a zero-shot manner without any extra
fine-tuning. Traditional continuous encoded features categorically fail the
modularity test, bringing the downstream performance to nearly 100% WER, which
is what motivates this new type of encoded representation. We build on the
original works with a few novel contributions:
1. 1)
We find that training the modular encoder from scratch under the CTC loss is
insufficient for producing the best performance. Instead, our recipe pre-
trains some base encoder layers with RNN-T loss and keeps them frozen. Next,
we just train the extra Exporter layers with the auxiliary CTC loss. This
solution is also practical since it enables researchers to cheaply export
modular features without having to modify their original system. Thus, the
quality, latency and efficiency of the base model are all maintained.
2. 2)
We adapt the design to a streaming setting for the first time. Unlike the
original work [13, 14], our encoder layers’ attention has limited left and
right context windows, and the produced Lego-Features are successfully paired
with a streaming-friendly RNN-T decoder. The streaming architecture still
exhibits strong downstream ASR quality and passes the modularity test. By
plugging the same fixed set of Lego-Features into causal as well as non-causal
decoders, our work adds further evidence to their modularity and
interoperability.
3. 3)
Rather than merely looking at the Lego-Features as an encoded representation,
we also study them as an alternative to the N-best hypotheses within two-pass
systems. We provide new comparisons against the N-best in terms of speed,
accuracy and modularity. To this end, the Lego-Features are used as a first-
pass output within the deliberation framework [15]. This achieves good post-
deliberation WER performance, which is shown to be on-par with a baseline that
performs deliberation on 1st-pass RNN-T N-best hypotheses + audio features.
The Lego-Features demonstrate success in the modularity test here as well. On
the other hand, we find that the N-best hypothesis text does not pass the
modularity test, i.e. a new N-best from a second model would confuse the
deliberation decoder from the first, which is a novel observation. Moreover,
the Lego-Features are cheaper to produce than the N-best, since they require
no beam-search or auto-regressive decoding, but are generated via a simple
projection at every frame.
Other works have attempted to present generic methods for zero-shot stitching
between layers. In [16], this is achieved by learning representations relative
to data-dependent anchors. In contrast, the method presented here does not
need to choose anchor samples and leverages the existence of ground-truth
speech transcripts instead. Another general approach, presented in [17], uses
self-supervised objectives designed to encourage compatibility of different
layer outputs. It is an open question whether the cited methods can deal with
long sequences, whereas the CTC loss used here is a natural choice that works
well with ASR and gives interpretable outputs.
Further, some research has already experimented with deliberation on top of
CTC outputs to save the cost of first-pass decoding [18, 19, 20]. This
includes the Align-refine approach, which iteratively improves on the first-
pass output. Those works tend to focus on optimizing the size and speed of the
first-pass model, whereas our focus is mainly on modularity. Nevertheless,
since we build on base encoder layers that have been pre-trained with the
RNN-T loss, we find our CTC outputs to have high quality, which removes the
need for audio attention that is used in other deliberation models. Hence,
this work also introduces some speed gains to deliberation, without using the
iterative Align-refine approach.
On the whole, with one simple representation, we get a compelling, cheap,
streaming-friendly, and modular alternative to both the continuous encoding
vector and the N-best hypotheses, without any loss in quality.
## 2 Modeling
Our framework is trained in three separate stages described below.
### 2.1 Base Model
Fig. 1: Modular Encoder. Lego-Features are exported from frozen base encoder
by training extra layers with an auxiliary CTC loss.
We start off from a pre-trained end-to-end system that follows the cascade
architecture in [21]: The base encoder comprises 3 convolution layers, then 14
Conformer [22] blocks: 4 causal ones, followed by 5 blocks that process 180
milliseconds of right-context each, then 5 more causal ones. This base encoder
is pre-trained using the RNN-T loss on the same training set. For the
modularization steps below, the pre-trained RNN-T decoder layers will be
discarded, and the base encoder is kept frozen. This recipe allows us to keep
the existing pre-trained model unchanged while exporting modular features.
### 2.2 Exporting Lego-Features
Figure 1 shows how the modular encoder is trained on top of a frozen base
model. The Exporter layers comprise further Conformer blocks with 180ms look-
ahead context. The CTC decoder [23] amounts to a single projection layer to
compute the frame-level posterior over the output vocabulary. Our work uses
wordpiece output tokens, but further research can explore using phonemes or
graphemes instead. The depicted CTC loss is applied to those logits and is
what trains the Exporter layers. Finally, the Lego-Features are computed by
extracting the sorted top-$K$ indices of the CTC logits, giving $K$ integers
at every frame. Note that this is performed on the logit vector directly,
without requiring any actual decoding algorithm like beam-search.
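To make the extraction step concrete, the following is a minimal sketch (in NumPy; the function and variable names are ours, not from the paper's code) of turning frame-level CTC logits into Lego-Features:

```python
import numpy as np

def extract_lego_features(ctc_logits: np.ndarray, k: int = 12) -> np.ndarray:
    """Extract Lego-Features: the top-k CTC logit indices at every frame,
    sorted by descending logit value. ctc_logits has shape [T, V] (frames x
    vocabulary); no beam search or decoding is involved."""
    # argpartition finds the top-k set in O(V) per frame; then sort those k by value.
    top_k = np.argpartition(ctc_logits, -k, axis=-1)[:, -k:]
    values = np.take_along_axis(ctc_logits, top_k, axis=-1)
    order = np.argsort(-values, axis=-1)
    return np.take_along_axis(top_k, order, axis=-1)  # [T, k] integer indices

# Example: 343 frames over a 4,096-wordpiece vocabulary -> a [343, 12] array.
feats = extract_lego_features(np.random.randn(343, 4096), k=12)
```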
### 2.3 Downstream Models
Fig. 2: Downstream models embed and process the fixed Lego-features before
passing them to a downstream decoder.
Figure 2 illustrates how downstream models generally consume the Lego-
Features, which come in as sparse indices. The downstream consumer does not
receive extra information about how the indices map to wordpiece tokens, and
hence starts by embedding them. An Importer module, once again consisting of
180ms look-ahead Conformer blocks, prepares the embeddings for the downstream
decoder. [13, 14] use 1D convolution + multi-headed attention in place of the
Importer, but our early experiments show that Conformer blocks improve over
this original stack. Note that the Lego-Features themselves are kept constant
during downstream training. We experiment with two types of ASR decoders as
examples for downstream tasks, which are used with the same fixed set of Lego-
Features.
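As an illustration of the re-embedding step, here is a minimal PyTorch-style sketch (module and parameter names are our own assumptions) of how a downstream consumer might embed the $K$ sparse indices per frame and concatenate them along the depth dimension:

```python
import torch
import torch.nn as nn

class LegoFeatureEmbedder(nn.Module):
    """Re-embed sparse Lego-Feature indices for a downstream consumer.

    Each of the K indices per frame is mapped to an E2-dim vector, and the K
    embeddings are concatenated along the depth axis (no sequence-length
    expansion), giving a [T, K*E2] tensor per utterance."""
    def __init__(self, vocab_size: int = 4096, k: int = 12, e2: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, e2)
        self.k, self.e2 = k, e2

    def forward(self, lego: torch.Tensor) -> torch.Tensor:
        # lego: [B, T, K] integer indices -> [B, T, K, E2] -> [B, T, K*E2]
        x = self.embed(lego)
        return x.reshape(*lego.shape[:-1], self.k * self.e2)

# [B=2, T=343, K=12] indices -> [2, 343, 384] dense features for the Importer.
out = LegoFeatureEmbedder()(torch.randint(0, 4096, (2, 343, 12)))
```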
#### 2.3.1 Downstream RNN-T Decoder
The first downstream model uses an RNN-T decoder, which tends to serve real-
time applications well, since it processes the input frames in a streaming
fashion as they become available and starts outputting text tokens after a
short delay [3, 24]. We adopt the same RNN-T decoder layer architecture from
the base model (Section 2.1) but use it as a simulated downstream task, as the
decoder in Figure 2, to see if the bottlenecked Lego-Features are as
informative as the continuous base encoded tensor.
#### 2.3.2 Downstream LAS decoder / Deliberation
Fig. 3: Baseline deliberation on N-best RNN-T hyps. The LAS decoder attends to
embedded text and optionally to the pre-RNN-T audio features. Modularity test
boundary shown as the dotted line in the middle.
As a second downstream ASR decoder in Figure 2, we experiment with a full-
context Listen-Attend-and-Spell (LAS) decoder [25], which can achieve higher
quality by attending to all input frames.
A fitting baseline to this experiment is second-pass deliberation ASR [15].
Typically, a deliberation system generates first-pass hypotheses using a fast
decoder, like RNN-T, then embeds its N-best hyps and attends to them with a
second-pass full-context LAS decoder. We have therefore constructed a
comparable deliberation baseline model shown in Figure 3. This model is
analogous to our full pipeline, i.e. Figures 1 & 2 put together, and is
designed to have a similar total model size and encoder latency. It starts
with the same frozen base encoder, then trains a first-pass RNN-T decoder to
obtain the N-best hyps, which stands to be compared to the Lego-Features in
terms of informativeness and modularity. Figure 3 also ends with an LAS
decoder, except this one can optionally attend to the continuous encoder
features as well, as is done in previous deliberation work [15]. Gradients do
not flow back through the embedded N-best.
## 3 Experimental settings
### 3.1 CTC Logit Evaluation
An interesting aspect of the Lego-Features encoder is that one can evaluate
its quality directly before providing the features to any downstream tasks.
This is done via a preliminary experiment where we directly decode from the
full set of the CTC-trained logits (before the top-$K$ operation in Figure 1)
using beam search or greedy decoding. The decoding algorithm used for this
evaluation is tangential to how the Lego-Features are produced, since those
are only extracted as the top-$K$ logit ranks without decoding actual
transcripts. Yet this direct evaluation can inform us about the general
quality of the CTC-trained logits, from which the Lego-Features are produced.
### 3.2 WER and Modularity Test
The downstream ASR decoders trained on the Lego-Features (Section 2.3) are
then evaluated and a modularity test is performed. The aim of the test is to
check if two different versions of the encoded features are interchangeable.
We test that by keeping the downstream model fixed, but feeding it with a new
version of the encoded features, which we get from another training run. The
second training is done from scratch with a new initialization. We compare the
WER performance of the decode before and after the switch, denoted as “Normal
$\to$ Mod. Test WER” in our tables. For the Lego-Features, we retrain the
encoder in Figure 1, where the base frozen encoder is also replaced with a
second version from a retrained base. As a baseline, we also test the
modularity of the base model itself, where we simply train the base encoder +
decoder a second time end-to-end and get the retrained encoder from there.
### 3.3 Architectural Details
Our base architecture follows [21]: All Conformer layers [22] are 512-dim, use
8-headed self-attention and a convolution kernel size of 15. We train on a
128-dim log-mel feature frontend with a 16-dim one-hot domain-id vector appended to
it, see [26].
Our models work with 4,096 word pieces [27]. The RNN-T decoder comprises a
prediction network and a joint network with a single 640-dim FF layer. The
embedding prediction network [28], uses an embedding dimension of 320, and has
9M parameters. For the deliberation decoder, we use a 2-layer LSTM similar to
[15], where each layer has 1536 hidden units followed by 384-dim projection.
We do not use external LMs.
### 3.4 Datasets
As discussed in [29], all E2E models are trained on multidomain audio-text
pairs [26]. All datasets obtain their labels in a semi-supervised fashion,
using larger teacher models trained on in-domain data to provide pseudo labels
[30, 31]. Data was handled in accordance with Google AI principles [32]. To
further increase data diversity, multi-condition training (MTR) [33], random
data down-sampling to 8kHz [34] and SpecAug [35] are also used. Noisy data is
generated at signal-noise-ratio (SNR) from 0 to 30 dB, with an average SNR of
12 dB, and with T60 times ranging from 0 to 900ms, averaging 500ms. Noise
segments are sampled from YouTube and daily life noisy environmental
recordings. Both 8 kHz and 16 kHz versions of the data are generated, each
with equal probability, to make the model robust to varying sample rates.
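As an illustration of this kind of noise mixing (a generic sketch, not the actual MTR pipeline), the noise can be scaled so that the speech-to-noise power ratio matches a target SNR:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`,
    then mix. After scaling, 10*log10(P_speech / P_noise) == snr_db."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# One second of 16 kHz audio mixed at the stated average SNR of 12 dB.
noisy = mix_at_snr(np.random.randn(16000), np.random.randn(16000), snr_db=12.0)
```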
The _Voice-Search_ test set has 10K Voice Search utterances with an average
length of 5.5 seconds. They are anonymized, hand-transcribed, and are
representative of Google’s Voice Search traffic.
## 4 Experimental Results
### 4.1 Preliminary CTC Decoder Evaluation
# Blocks | Size | Right Context | Greedy WER | Beam-search WER (Oracle)
---|---|---|---|---
1 | 10M | +180 ms | 5.9% | 5.8% (2.8%)
3 | 30M | +540 ms | 5.5% | 5.3% (2.7%)
Table 1: CTC Voice-Search WER for different Exporter setups
As explained in Section 3.1, the CTC decoder in Figure 1 can be evaluated
directly. Table 1 shows two settings for the Exporter layers and their
corresponding CTC WER performance. The right-context length indicates the
extra duration of future context attended to by the Exporter, noting that the
base encoder already sees a future context of 900ms. In both cases, greedy
decoding performs close to beam search, which tracks 16 hypotheses in its
beam. For all the downstream experiments below, we use the better setup with 3
blocks for the Exporter, and apply the same design to the Importer.
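For reference, the greedy decoding used in this evaluation follows the standard CTC collapse rule (per-frame argmax, merge repeats, drop blanks); a minimal sketch, where the blank index is our assumption:

```python
import numpy as np

def ctc_greedy_decode(logits: np.ndarray, blank_id: int = 0) -> list:
    """Standard greedy CTC decoding: take the per-frame argmax, collapse
    consecutive repeats, then remove blanks. `blank_id` is an assumption;
    the paper does not specify the blank index."""
    best = logits.argmax(axis=-1)  # [T] best token per frame
    collapsed = [t for i, t in enumerate(best) if i == 0 or t != best[i - 1]]
    return [int(t) for t in collapsed if t != blank_id]

# 343 frames over 4,096 wordpieces plus one blank symbol.
tokens = ctc_greedy_decode(np.random.randn(343, 4097))
```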
### 4.2 Base RNN-T vs. Downstream RNN-T
Our first downstream setting works with an RNN-T decoder (Section 2.3.1).
Table 2 demonstrates how the Lego-Features bottleneck still produces a rich
encoding that the downstream Importer and RNN-T use well. We export $K=12$
Lego-Features per frame and the downstream re-embeds each into $32$
dimensions. Preliminary experiments, omitted here for brevity, indicate that
varying these values does not affect downstream WER performance significantly.
The Base case in the table is simply the frozen base model on the left of
Figure 1, in which case the modularity test connects a new base encoder (from
another training run) to the same frozen base RNN-T. The modularity test fails
for the base case, yet passes for the Lego-Features. Both models involve
different sizes and latencies, so a direct WER contest between them is not the
main concern. Rather, the goal is to show that the Lego-Features bottleneck
does not degrade performance while enabling modularity.
To test robustness across changing domains, we also supply the same Lego-
Features used above to a downstream RNN-T model that is trained on Librispeech
data instead. The modularity test results are shown in Table 3; the switch
causes less than a 4% relative WER decline.
Encoder Type | Size | Right-Context | RNN-T WER (Normal $\to$ Mod. Test)
---|---|---|---
Base | 146M | 900 ms | 6.4% $\to$ 99%
Modularized | 207M | 1440 ms | 5.6% $\to$ 5.6%
Table 2: Downstream RNN-T Test WER with Modularity Test. The base encoder is
from the original pre-trained model.
### 4.3 Deliberation on N-Best vs. Lego-Features
Table 4 compares the LAS deliberation scenarios described in Section 2.3.2,
where the Lego-Features are compared to an N-best as a first-pass output.
Dropping the audio connection significantly degrades performance in the N-best
case, which is consistent with previous findings [15]. The Lego-Features seem
to preserve more information in the encoding, and thus do not need the audio
connection. They are significantly better than N-best text, and are only off
by 0.1 in absolute WER from N-best + audio.
The modularity test causes no performance decline for the Lego-Features, but
does not work well in the N-best case; even the text-only case degrades by 17%
relative WER. This somewhat unexpected result might be a symptom of label
bias, which RNN-T suffers from because of local normalization [36, 37], but
the CTC decoder avoids with its conditional independence assumption. Hence,
two separately-trained RNN-T first-pass models might exhibit different biases
in their N-bests, leading to this result.
Dev-Clean WER (Normal $\to$ Mod. Test) | Test-Other WER (Normal $\to$ Mod. Test)
---|---
4.9 $\to$ 5.1 | 10.0 $\to$ 10.3
Table 3: Modularity tests if downstream is trained on Librispeech
#### 4.3.1 Speed Comparison
Table 4 notes a difference in the input shapes to the Importers across the
different types of first-pass models, after re-embedding in Figure 2 & 3.
Here, $E_{1}$ and $E_{2}$ are the respective embedding dimensions, $n$ is the
RNN-T’s beam width and $U$ is the number of text tokens produced by it. $K$ is
the number of logit indices in the Lego-Features and $T$ is their sequence
length (=number of encoded frames). Note how the N-best’s embedding expands
the output sequence length, since it stacks the $N$ hypotheses sequentially
while keeping the sentence structures intact, in order to attend to this order
information during second-pass decoding. Since the Lego-Features describe per-
frame logit ranks without serializing them into sentences, we forgo this
expansion and concatenate the embeddings within the depth dimension at each
frame instead. This saves on computational cost, since the #GFLOPs used by LAS
is proportional to the sequence length it is attending to. While $U$ can
change from one utterance to the other, the embedded matrices have to be padded
to maximum length when working with hardware accelerators. Our system uses
$n=8$, $U=120$, $E_{1}=384$, $T=343$, $K=12$, and $E_{2}=32$. This makes the
depth dimension equal, but the Lego-Features’ sequence length is $64\%$ smaller
than the N-best’s.
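Plugging the stated dimensions into the two embedded shapes reproduces the quoted saving; a quick sanity check using the values above:

```python
n, U, E1 = 8, 120, 384   # N-best: beam width, max tokens, embedding dim
T, K, E2 = 343, 12, 32   # Lego-Features: frames, top-K, per-index embedding

nbest_shape = (n * U, E1)   # (960, 384): N hypotheses stacked sequentially
lego_shape = (T, K * E2)    # (343, 384): depth-wise concat, no length expansion

saving = 1 - lego_shape[0] / nbest_shape[0]
print(nbest_shape, lego_shape, f"{saving:.0%} shorter sequence")  # ~64% shorter
```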
Another important computational benefit of deliberating on Lego-Features is
that we can obtain them without performing a beam-search procedure. It is
hence possible to compute them for long utterances with high parallelization,
only limited by the number of TPU cores available. Generating the N-best, on
the other hand, requires sequential auto-regressive processing. For instance,
benchmarking this sequential path in the RNN-T (using an in-house server TPU
and the above dimensions) gives $1.8$ ms per output token, or $216$ ms per
utterance in the padded worst case, which does become the bottleneck after the
other layers are parallelized.
First Pass | Embedded Shape | Attend Audio | Downstream WER (Normal $\to$ Mod. Test)
---|---|---|---
RNN-T $N$-best | $\left[N\cdot U,E_{1}\right]$ | No | 5.4% $\to$ 6.3%
RNN-T $N$-best | $\left[N\cdot U,E_{1}\right]$ | Yes | 5.0% $\to$ 14.3%
Lego-Features | $\left[T,K\cdot E_{2}\right]$ | No | 5.1% $\to$ 5.1%
Table 4: Deliberation WER and Modularity Tests. Embedded Shapes discussed in
Section 4.3.1
## 5 Conclusions and Future Work
In this paper, we describe a simple recipe for exporting streaming-friendly
modular encoded representations and successfully test them with RNN-T and LAS
decoders. Overall, exporting the encoder output as top CTC-trained logits
introduces multiple benefits. The encoding achieves strong WER performance and
interchangeability is demonstrated through the modularity test. If regarded as
a representation for first-pass ASR prediction, the Lego-Features surpass the
N-best in quality, modularity, and generation speed.
To address resource-limited environments like on-device ASR, and to improve
latency, future research can explore using smaller Exporter and Importer
layers. Another avenue is to export CTC logits over phoneme/triphone/grapheme
vocabularies, or a combination thereof. Different types of Lego-Features can
be tested with various downstream tasks, like confidence models, speech
translation or spoken language understanding.
## References
* [1] G. Pundak and T. N. Sainath, “Lower frame rate neural network acoustic models,” in Proc. Interspeech, 2016.
* [2] B. Li, A. Gulati, J. Yu, et al., “A Better and Faster End-to-End Model for Streaming ASR,” in Proc. ICASSP, 2021.
* [3] Y. He, T. N. Sainath, R. Prabhavalkar, et al., “Streaming End-to-end Speech Recognition For Mobile Devices,” in Proc. ICASSP, 2019.
* [4] C.-C. Chiu, T. N. Sainath, Y. Wu, et al., “State-of-the-art Speech Recognition With Sequence-to-Sequence Models,” in Proc. ICASSP, 2018.
* [5] S. Kim, T. Hori, and S. Watanabe, “Joint CTC-attention based end-to-end speech recognition using multi-task learning,” in Proc. ICASSP, 2017.
* [6] J. Li, R. Zhao, H. Hu, and Y. Gong, “Improving RNN transducer modeling for end-to-end speech recognition,” in Proc. ASRU, 2019.
* [7] A. Zeyer, A. Merboldt, R. Schlüter, and H. Ney, “A new training pipeline for an improved neural transducer,” in Proc. Interspeech, 2020.
* [8] T. N. Sainath, Y. He, Narayanan, et al., “An Efficient Streaming Non-Recurrent On-Device End-to-End Model with Improvements to Rare-Word Modeling,” in Interspeech, 2021.
* [9] Tara N Sainath, Yanzhang He, Bo Li, Arun Narayanan, et al., “A streaming on-device end-to-end model surpassing server-side conventional model quality and latency,” in Proc. ICASSP. IEEE, 2020, pp. 6059–6063.
* [10] R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, “A comparison of sequence-to-sequence models for speech recognition,” in Proc. Interspeech, 2017.
* [11] Zhiyun Lu, Liangliang Cao, Yu Zhang, Chung-Cheng Chiu, and James Fan, “Speech sentiment analysis via pre-trained features from end-to-end asr models,” in Proc. ICASSP. IEEE, 2020, pp. 7149–7153.
* [12] Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater, “Pre-training on high-resource speech recognition improves low-resource speech-to-text translation,” arXiv preprint arXiv:1809.01431, 2018.
* [13] Siddharth Dalmia, Abdelrahman Mohamed, Mike Lewis, Florian Metze, and Luke Zettlemoyer, “Enforcing encoder-decoder modularity in sequence-to-sequence models,” arXiv preprint arXiv:1911.03782, 2019.
* [14] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, and Abdelrahman Mohamed, “Legonn: Building modular encoder-decoder models,” arXiv:2206.03318, 2022.
* [15] Ke Hu, Tara N Sainath, Ruoming Pang, and Rohit Prabhavalkar, “Deliberation model based two-pass end-to-end speech recognition,” in ICASSP. IEEE, 2020, pp. 7799–7803.
* [16] Luca Moschella, Valentino Maiorca, Marco Fumero, Antonio Norelli, Francesco Locatello, and Emanuele Rodolà, “Relative representations enable zero-shot latent space communication,” arXiv preprint arXiv:2209.15430, 2022.
* [17] Michael Gygli, Jasper Uijlings, and Vittorio Ferrari, “Towards reusable network components by learning compatible representations,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, pp. 7620–7629.
* [18] Ethan A Chi, Julian Salazar, and Katrin Kirchhoff, “Align-refine: Non-autoregressive speech recognition via iterative realignment,” arXiv preprint arXiv:2010.14233, 2020.
* [19] Weiran Wang, Ke Hu, and Tara N Sainath, “Deliberation of streaming rnn-transducer by non-autoregressive decoding,” in Proc. ICASSP. IEEE, 2022, pp. 7452–7456.
* [20] Weiran Wang, Ke Hu, and Tara N Sainath, “Streaming align-refine for non-autoregressive deliberation,” arXiv preprint arXiv:2204.07556, 2022.
* [21] Tara N Sainath, Yanzhang He, Arun Narayanan, Rami Botros, et al., “Improving the latency and quality of cascaded encoders,” in Proc. ICASSP. IEEE, 2022, pp. 8112–8116.
* [22] A. Gulati, J. Qin, C.-C. Chiu, et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proc. Interspeech, 2020.
* [23] A. Graves, S. Fernandez, F. Gomez, and J. Schmidhuber, “Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks,” in Proc. ICML, 2006.
* [24] A. Graves, “Sequence Transduction with Recurrent Neural Networks,” CoRR, vol. abs/1211.3711, 2012.
* [25] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, attend and spell,” CoRR, vol. abs/1508.01211, 2015.
* [26] A. Narayanan, R. Prabhavalkar, C.-C. Chiu, et al., “Recognizing Long-Form Speech Using Streaming End-to-End Models,” in Proc. ASRU, 2019.
* [27] M. Schuster and K. Nakajima, “Japanese and Korean voice search,” in Proc. ICASSP, 2012.
* [28] R. Botros and T.N. Sainath, “Tied & reduced rnn-t decoder,” in Proc. Interspeech, 2021.
* [29] T. N. Sainath, Y. He, B. Li, et al., “A Streaming On-Device End-To-End Model Surpassing Server-Side Conventional Model Quality and Latency,” in Proc. ICASSP, 2020.
* [30] D. Hwang, K. Sim, Z. Huo, and T. Strohman, “Pseudo Label Is Better Than Human Label,” in Proc. ICASSP, 2022.
* [31] H. Liao, E. McDermott, and A. Senior, “Large Scale Deep Neural Network Acoustic Modeling with Semi-supervised Training Data for YouTube Video Transcription,” in Proc. ASRU, 2013.
* [32] “Google ai principles,” https://ai.google/principles/.
* [33] C. Kim, A. Misra, K. Chin, et al., “Generation of Large-Scale Simulated Utterances in Virtual Rooms to Train Deep-Neural Networks for Far-Field Speech Recognition in Google Home,” in Proc. Interspeech, 2017.
* [34] J. Li, D. Yu, J. Huang, and Y. Gong, “Improving Wideband Speech Rcognition using Mixed-bandwidth Training Data in CD-DNN-HMM,” in Proc. SLT, 2012.
* [35] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E.D. Cubuk, and Q.V. Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in Proc. Interspeech, 2019.
* [36] Awni Hannun, “The label bias problem,” 2020.
* [37] Brian Yan, Siddharth Dalmia, Yosuke Higuchi, Graham Neubig, Florian Metze, Alan W Black, and Shinji Watanabe, “Ctc alignments improve autoregressive translation,” arXiv preprint arXiv:2210.05200, 2022.
# Simple finite elements and multigrid for efficient mass-consistent wind downscaling in a coupled fire-atmosphere model
J. Mandel 1, A. Farguell 2, A. K. Kochanski 2, D. V. Mallia 3, K. Hilburn 4
${}^{1}\,$University of Colorado Denver, Denver, CO
2San José State University, San José, CA
3University of Utah, Salt Lake City, UT
4Colorado State University, Fort Collins, CO
## 1 Introduction
In the coupled atmosphere-fire model WRF-SFIRE [6, 7], the Weather Research
and Forecasting (WRF) model [12] runs at 300 m–1 km horizontal resolution, while the
fire model runs at the resolution of 30m or finer. The wind has a fundamental
effect on fire behavior and the topography details have a strong effect on the
wind, but WRF does not see the topography on the fire grid scale. We want to
downscale the wind from WRF to account for the fine-scale terrain. For this
purpose, we fit the wind from WRF with a divergence-free flow over the
detailed terrain. Such methods, called mass-consistent approximations, were
originally proposed on regular grids [10, 11] for urban and complex terrain
modeling, with terrain and surface features modeled by excluding entire grid
cells from the domain. For fire applications, WindNinja [13] uses finite
elements on a terrain-following grid. The resulting equations are generally
solved by iterative methods such as SOR, which converge slowly, so use of GPUs
is of interest [2]. A multigrid method with a terrain-following grid by a
change of coordinates was proposed in [15].
The method proposed here is to be used in every time step of WRF-SFIRE in the
place of interpolation to the fire model grid. Therefore, it needs to have the
potential to (1) scale to hundreds or thousands of processors using WRF
parallel infrastructure [14]; (2) scale to domains of size at least 100 km by
100 km horizontally, with $3000\times 3000\times 15$ grid cells and more; (3)
have reasonable memory requirements per grid point; (4) not add to the cost of
the time step significantly when started from the solution in the previous
time step; and, (5) adapt to the problem automatically, with minimum or no
parameters to be set by the user.
## 2 Finite element formulation
Given vector field $\boldsymbol{u}_{0}$ on domain
$\Omega\subset\mathbb{R}^{d}$, subset $\Gamma\subset\partial\Omega$, and
$d\times d$ symmetric positive definite coefficient matrix
$\boldsymbol{A}=\boldsymbol{A}\left(\boldsymbol{x}\right)$, we want to find
the closest divergence-free vector field $\boldsymbol{u}$ by solving the
problem
$\min_{\boldsymbol{u}}\frac{1}{2}\int\limits_{\Omega}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)\cdot\boldsymbol{A}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)d\boldsymbol{x}\text{\quad
subject to }\operatorname{div}\boldsymbol{u}=0\text{ in }\Omega\text{ and
}\boldsymbol{u}\cdot\boldsymbol{n}=0\text{ on }\Gamma,$ (1)
where $\Gamma$ is the bottom of the domain (the surface), and
$\boldsymbol{A}\left(\boldsymbol{x}\right)$ is a $3\times 3$ diagonal matrix
with penalty constants $a_{1}^{2},a_{2}^{2},a_{3}^{2}$ on the diagonal.
Enforcing the constraints in (1) by a Lagrange multiplier $\lambda$, we obtain
the solution $\left(\boldsymbol{u},\lambda\right)$ as a stationary point of
the Lagrangean
$\mathcal{L}\left(\boldsymbol{u},\lambda\right)=\frac{1}{2}\int\limits_{\Omega}\boldsymbol{A}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)\cdot\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)d\boldsymbol{x}+\int\limits_{\Omega}\lambda\operatorname{div}\boldsymbol{u}d\boldsymbol{x}-\int\limits_{\Gamma}\lambda\boldsymbol{n}\cdot\boldsymbol{u}d\boldsymbol{s}.$
(2)
Eliminating $\boldsymbol{u}$ from the stationarity conditions
$\partial\mathcal{L}(\boldsymbol{u},\lambda)/\partial\lambda=0$ and
$\partial\mathcal{L}(\boldsymbol{u},\lambda)/\partial\boldsymbol{u}=0$ by
$\boldsymbol{u}=\boldsymbol{u}_{0}+\boldsymbol{A}^{-1}\operatorname{grad}\lambda$
(3)
leads to the generalized Poisson equation for Lagrange multiplier $\lambda$,
$-\operatorname{div}\boldsymbol{A}^{-1}\operatorname{grad}\lambda=\operatorname{div}\boldsymbol{u}_{0}\text{
on }\Omega,\quad\lambda=0\text{ on }\partial\Omega\setminus\Gamma,\text{
\quad}\boldsymbol{n\cdot A}^{-1}\operatorname{grad}\lambda=-\boldsymbol{n\cdot
u}_{0}\text{ on }\Gamma.$ (4)
Multiplication of (4) by a test function $\mu$, $\mu=0$ on
$\partial\Omega\setminus\Gamma$, and integration by parts yields the
variational form to find $\lambda$ such that $\lambda=0$ on
$\partial\Omega\setminus\Gamma$ and
$\int_{\Omega}\boldsymbol{A}^{-1}\operatorname{grad}\lambda\cdot\operatorname{grad}\mu\,d\boldsymbol{x}=-\int_{\Omega}\operatorname{grad}\mu\cdot\boldsymbol{u}_{0}d\boldsymbol{x}$
(5)
for all $\mu$ such that $\mu=0$ on $\partial\Omega\setminus\Gamma$. The
solution is then recovered from (3). We proceed formally here; see [5] for a
different derivation of (5) in a function-space setting.
The variational problem (5) is discretized by standard isoparametric 8-node
hexahedral finite elements, e.g., [4]. The integral on the left-hand side of
(5) is evaluated by tensor-product Gauss quadrature with two nodes in each
dimension, while for the right-hand side, one-node quadrature at the center of
the element is sufficient. The same code for the derivatives of a finite
element function is used to evaluate $\operatorname{grad}$ $\lambda$ in (3) at
the center of each element.
The unknown $\lambda$ is represented by its values at element vertices, and
the wind vector is represented naturally by its values at element centers. No
numerical differentiation of $\lambda$ from its nodal values, computation of
the divergence of the initial wind field $\boldsymbol{u}_{0}$, or explicit
implementation of the boundary condition on $\operatorname{grad}\lambda$ in
(4) is needed. These are all taken care of by the finite elements naturally.
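As an illustration, here is a minimal sketch (in Python, with our own notation) of the trilinear shape-function gradients of the 8-node hexahedral element, evaluated on the reference element as used to recover $\operatorname{grad}\lambda$ at the element center; the Jacobian of the isoparametric map is omitted for brevity:

```python
import numpy as np

# Reference coordinates of the 8 vertices of the hexahedral element.
VERTS = np.array([[i, j, k] for k in (-1, 1) for j in (-1, 1) for i in (-1, 1)])

def shape_gradients(xi, eta, zeta):
    """Gradients of the 8 trilinear shape functions
    N_a = (1/8)(1 + a1*xi)(1 + a2*eta)(1 + a3*zeta)
    at the reference point (xi, eta, zeta); returns an [8, 3] array."""
    g = np.empty((8, 3))
    for a, (a1, a2, a3) in enumerate(VERTS):
        g[a] = 0.125 * np.array([a1 * (1 + a2 * eta) * (1 + a3 * zeta),
                                 a2 * (1 + a1 * xi) * (1 + a3 * zeta),
                                 a3 * (1 + a1 * xi) * (1 + a2 * eta)])
    return g

# grad(lambda) at the element center from the 8 nodal values `lam`
# (up to the Jacobian of the map from the reference element).
lam = np.random.rand(8)
grad_ref = shape_gradients(0.0, 0.0, 0.0).T @ lam
```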
## 3 Multigrid iterations
The finite element method for (5) results in a system of linear equations
$Ku=f$. The values of the solution are defined on a grid, which we will call a
_fine grid_. One cycle of the multigrid method consists of several iterations
of a basic iterative method, such as Gauss-Seidel, called a _smoother_ ,
followed by a _coarse-grid correction_. A prolongation matrix $P$ is
constructed to interpolate values from a coarse grid, in the simplest case
consisting of every other node, to the fine grid. For a given approximate
solution $u$ after the smoothing, we seek an improved solution in the form
$u+Pu_{c}$ variationally, by solving
$P^{\top}K\left(u+Pu_{c}\right)=P^{\top}f$ (6)
for $u_{c}$, and obtain the coarse-grid correction procedure as
$f_{c}=P^{\top}\left(f-Ku\right)$ (form the coarse right-hand side)
$K_{c}=P^{\top}KP$ (form the coarse stiffness matrix)
$K_{c}u_{c}=f_{c}$ (solve the coarse-grid problem)
$u\leftarrow u+Pu_{c}$ (insert the coarse-grid correction)    (7)
The coarse grid correction is followed by several more smoothing steps, which
completes the multigrid cycle.
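A minimal sketch of one variational two-grid cycle following (7), with a Gauss-Seidel smoother and a direct coarse solve (dense NumPy matrices for brevity; not the production code):

```python
import numpy as np

def gauss_seidel(K, f, u, sweeps=2):
    """A few Gauss-Seidel sweeps on K u = f (the smoother)."""
    n = len(f)
    for _ in range(sweeps):
        for i in range(n):
            u[i] = (f[i] - K[i, :i] @ u[:i] - K[i, i + 1:] @ u[i + 1:]) / K[i, i]
    return u

def two_grid_cycle(K, f, u, P, sweeps=2):
    """One variational two-grid cycle: smooth, coarse-grid correction (7), smooth.
    P is the prolongation matrix; the coarse problem is solved directly here,
    but in the full method it is treated recursively."""
    u = gauss_seidel(K, f, u, sweeps)   # pre-smoothing
    f_c = P.T @ (f - K @ u)             # form the coarse right-hand side
    K_c = P.T @ K @ P                   # form the coarse (Galerkin) stiffness
    u_c = np.linalg.solve(K_c, f_c)     # solve the coarse-grid problem
    u = u + P @ u_c                     # insert the coarse-grid correction
    return gauss_seidel(K, f, u, sweeps)  # post-smoothing
```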
In the simplest case, $P$ is a linear interpolation and the coarse stiffness
matrix $K_{c}$ is the stiffness matrix for a coarse finite element
discretization on a grid with each coarse-grid element taking the place of a
$2\times 2\times 2$ agglomeration of fine-grid elements. That makes it
possible to apply the same method to the coarse-grid problem (7) recursively.
This process creates a hierarchy of coarser grids. Eventually, the coarsest
grid problem is solved by a direct method, or one can just do some more
iterations on it.
Multigrid methods gain their efficiency from the fact that simple iterative
methods like Gauss-Seidel change the values of the solution at a node from
differences of the values between this and neighboring nodes. When the error
values at neighboring nodes become close, the error can be well approximated
in the range of the prolongation $P$ and the coarse-grid correction can find
$u_{c}$ such that $u+Pu_{c}$ is a much better approximation of the solution.
For analysis of variational multigrid methods and further references, see [1,
8].
Multigrid methods are very efficient. For simple elliptic problems, such as
the Poisson equation on a regular grid, convergence rates of about $0.1$
(reduction of the error by a factor of $10$) at the cost of $4$ to $5$ Gauss-
Seidel sweeps on the finest grid are expected [3]. However, the convergence
rates get worse on more realistic grids, and adaptations are needed. We choose
as the smoother vertical sweeps of Gauss-Seidel from the bottom up to the top,
with red-black ordering horizontally into $4$ groups. For the base method, we
use $2\times 2\times 2$ coarsening and construct $P$ so that the vertices of
every $2\times 2\times 2$ agglomeration of elements interpolate to the fine-
grid nodes in the agglomeration, with the same weights as the trilinear
interpolation on a regular grid. The interpolation is still trilinear on a
stretched grid, but only approximately trilinear on a deformed terrain-
following grid.
The base method works as expected as long as some grid layers are not tightly
coupled. If they are, we mitigate the slower convergence by semicoarsening
[9]: After smoothing, the error is smoother in the tightly coupled
direction(s), which indicates that we should not coarsen the other
direction(s). When the grid is stretched vertically away from the ground, the
nodes are relatively closer and thus tightly coupled in the horizontal
direction. Similarly, when the penalty coefficient $a_{3}$ in the vertical
direction is larger than $a_{1}$ and $a_{2}$ in the horizontal directions, the
neighboring nodes in the vertical direction are tightly coupled numerically.
The algorithm we use to decide on coarsening is: Suppose that the penalty
coefficients are $a_{1}=a_{2}=1$ and $a_{3}\geq 1$, and at the bottom of the
grid, the grid spacing is $h_{1}=h_{2}$ (horizontal) and $h_{3}$ (vertical).
If $h_{3}/(h_{1}a_{3})>1/3$, coarsen in the horizontal directions by $2$,
otherwise do not coarsen. Then, replace $h_{1}$ and $h_{2}$ by their new
values, coarsened (multiplied by 2) or not, and for every horizontal layer from
the ground up, if $h_{3}/(h_{1}a_{3})<3$, coarsen about that layer
vertically, otherwise do not coarsen.
as logically cartesian, which is important for computational efficiency and
keeping the code simple, and it controls the convergence rate to remain up to
about $0.28$ with four smoothing steps per cycle.
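The coarsening rule above can be written compactly; a sketch with our own variable names and the thresholds from the text:

```python
def choose_coarsening(h1, h3_layers, a3, lo=1/3, hi=3):
    """Decide horizontal and per-layer vertical coarsening, following the rule
    in the text (a1 = a2 = 1 assumed; thresholds 1/3 and 3 from the text).

    h1: horizontal spacing at the bottom of the grid.
    h3_layers: vertical spacings per layer, ordered from the ground up.
    Returns (coarsen_horizontal, list of per-layer vertical flags)."""
    coarsen_h = h3_layers[0] / (h1 * a3) > lo  # compare at the bottom layer
    if coarsen_h:
        h1 *= 2                                # horizontal spacing after coarsening
    coarsen_v = [h3 / (h1 * a3) < hi for h3 in h3_layers]
    return coarsen_h, coarsen_v
```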
## 4 Conclusion
We have presented a simple and efficient finite element formulation of mass-
consistent approximation, and a multigrid iterative method with adaptive
semicoarsening, which maintains the convergence of iteration over a range of
grids and penalty coefficients. A prototype code is available at
https://github.com/openwfm/wrf-fire-matlab/tree/femwind/femwind.
Acknowledgement: This work has been supported by NSF grant ICER-1664175 and
NASA grant 80NSSC19K1091.
## References
* [1] R. E. Bank and T. Dupont: _An optimal order process for solving finite element equations_ , Math. Comp., 36 (1981), pp. 35–51.
* [2] B. Bozorgmehr, Z. Patterson, P. Willemsen, J. A. Gibbs, R. Stoll, J. J. Kim, and E. R. Pardyjak: _A CUDA-based implementation of a fast response urban wind model_. 100th American Meteorological Society Annual Meeting, 2020. https://ams.confex.com/ams/2020Annual/meetingapp.cgi/Paper/366583: accessed December 28, 2020.
* [3] A. Brandt: _Multi-level adaptive solutions to boundary-value problems_ , Math. Comp., 31 (1977), pp. 333–390.
* [4] T. J. R. Hughes: _The finite element method_ , Prentice Hall, Inc., Englewood Cliffs, NJ, 1987.
* [5] L. H. Juárez, M. L. Sandoval, J. López, and R. Reséndiz: _Mass-consistent wind field models: Numerical techniques by $L^{2}$ projection methods_, in Fluid Dynamics, Computational Modeling and Applications, L. H. Juárez, ed., IntechOpen, Rijeka, 2012, ch. 2, pp. 23–40.
* [6] J. Mandel, S. Amram, J. D. Beezley, G. Kelman, A. K. Kochanski, V. Y. Kondratenko, B. H. Lynn, B. Regev, and M. Vejmelka: _Recent advances and applications of WRF-SFIRE_ , Natural Hazards and Earth System Sciences, 14 (2014), pp. 2829–2845.
* [7] J. Mandel, J. D. Beezley, and A. K. Kochanski: _Coupled atmosphere-wildland fire modeling with WRF 3.3 and SFIRE 2011_ , Geoscientific Model Development, 4 (2011), pp. 591–610.
* [8] J. Mandel, S. McCormick, and R. Bank: _Variational multigrid theory_ , in Multigrid methods, vol. 3 of Frontiers Appl. Math., SIAM, Philadelphia, PA, 1987, pp. 131–177.
* [9] E. Morano, D. J. Mavriplis, and V. Venkatakrishnan: _Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems_ , SIAM J. Sci. Comput., 20 (1998), pp. 393–415.
* [10] C. A. Sherman: _A mass-consistent model for wind fields over complex terrain_ , Journal of Applied Meteorology, 17 (1978), pp. 312–319.
* [11] B. Singh, B. S. Hansen, M. J. Brown, and E. R. Pardyjak: _Evaluation of the QUIC-URB fast response urban wind model for a cubical building array and wide building street canyon_ , Environmental Fluid Mechanics, 8 (2008), pp. 281–312.
* [12] W. C. Skamarock, J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, M. G. Duda, X.-Y. Huang, W. Wang, and J. G. Powers: _A description of the Advanced Research WRF version 3_. NCAR Technical Note 475, 2008.
* [13] N. S. Wagenbrenner, J. M. Forthofer, B. K. Lamb, K. S. Shannon, and B. W. Butler: _Downscaling surface wind predictions from numerical weather prediction models in complex terrain with WindNinja_ , Atmospheric Chemistry and Physics, 16 (2016), pp. 5229–5241.
* [14] W. Wang, C. Bruyère, M. Duda, J. Dudhia, D. Gill, M. Kavulich, K. Werner, M. Chen, H.-C. Lin, J. Michalakes, S. Rizvi, X. Zhang, J. Berner, D. Munoz-Esparza, B. Reen, S. Ha, K. Fossell, J. D. Beezley, J. L. Coen, and J. Mandel: _ARW version 4 modeling system user’s guide_. National Center for Atmospheric Research, Boulder, CO, January 2019.
* [15] Y. Wang, C. Williamson, D. Garvey, S. Chang, and J. Cogan: _Application of a multigrid method to a mass-consistent diagnostic wind model_ , Journal of Applied Meteorology, 44 (2005), pp. 1078–1089.
# Momentum Centering and Asynchronous Update for Adaptive Gradient Methods
Juntang Zhuang1; Yifan Ding2; Tommy Tang3; Nicha Dvornek1;
Sekhar Tatikonda1; James S. Duncan1
1 Yale University; 2 University of Central Florida; 3 University of Illinois
at Urbana-Champaign
###### Abstract
We propose ACProp (Asynchronous-centering-Prop), an adaptive optimizer which
combines centering of the second momentum and asynchronous update (e.g. for the $t$-th
update, the denominator uses information up to step $t-1$, while the numerator uses
the gradient at the $t$-th step). ACProp has both strong theoretical properties and
empirical performance. With the example by Reddi et al. (2018), we show that
asynchronous optimizers (e.g. AdaShift, ACProp) have weaker convergence
condition than synchronous optimizers (e.g. Adam, RMSProp, AdaBelief); within
asynchronous optimizers, we show that centering of second momentum further
weakens the convergence condition. We demonstrate that ACProp has a
convergence rate of $O(\frac{1}{\sqrt{T}})$ for the stochastic non-convex
case, which matches the oracle rate and outperforms the
$O(\frac{\log T}{\sqrt{T}})$ rate of RMSProp and Adam. We validate ACProp in
extensive empirical studies: ACProp outperforms both SGD and other adaptive
optimizers in image classification with CNN, and outperforms well-tuned
adaptive optimizers in the training of various GAN models, reinforcement
learning and transformers. To sum up, ACProp has good theoretical properties
including weak convergence condition and optimal convergence rate, and strong
empirical performance including good generalization like SGD and training
stability like Adam. We provide the implementation at
https://github.com/juntang-zhuang/ACProp-Optimizer.
## 1 Introduction
Deep neural networks are typically trained with first-order gradient
optimizers due to their computational efficiency and good empirical
performance [1]. Current first-order gradient optimizers can be broadly
categorized into the stochastic gradient descent (SGD) [2] family and the
adaptive family. The SGD family uses a global learning rate for all
parameters, and includes variants such as Nesterov-accelerated SGD [3], SGD
with momentum [4] and the heavy-ball method [5]. Compared with the adaptive
family, SGD optimizers typically generalize better but converge slower, and
are the default for vision tasks such as image classification [6], object
detection [7] and segmentation [8].
The adaptive family uses element-wise learning rate, and the representatives
include AdaGrad [9], AdaDelta [10], RMSProp [11], Adam [12] and its variants
such as AdamW [13], AMSGrad [14], AdaBound [15], AdaShift [16], Padam [30],
RAdam [17] and AdaBelief [18]. Compared with the SGD family, the adaptive
optimizers typically converge faster and are more stable, hence are the
default for generative adversarial networks (GANs) [19], transformers [20],
and deep reinforcement learning [21].
We broadly categorize adaptive optimizers according to different criteria, as
in Table 1. (a) Centered vs. uncentered Most optimizers such as Adam and
AdaDelta use uncentered second momentum in the denominator; RMSProp-center
[11], SDProp [22] and AdaBelief [18] use square root of centered second
momentum in the denominator. AdaBelief [18] is shown to achieve good
generalization like the SGD family, fast convergence like the adaptive family,
and training stability in complex settings such as GANs. (b) Sync vs async The
synchronous optimizers typically use gradient $g_{t}$ in both numerator and
denominator, which leads to correlation between numerator and denominator;
most existing optimizers belong to this category. The asynchronous optimizers
decorrelate numerator and denominator (e.g. by using $g_{t}$ as numerator and
use $\\{g_{0},...g_{t-1}\\}$ in denominator for the $t$-th update), and is
shown to have weaker convergence conditions than synchronous optimizers [16].
Table 1: Categories of adaptive optimizers
 | Uncentered second momentum | Centered second momentum
---|---|---
Synchronous | Adam, RAdam, AdaDelta, RMSProp, Padam | RMSProp-center, SDProp, AdaBelief
Asynchronous | AdaShift | ACProp (ours)
We propose Asynchronous Centering Prop (ACProp), which combines centering of
second momentum with the asynchronous update. We show that ACProp has both
good theoretical properties and strong empirical performance. Our
contributions are summarized as below:
* •
Convergence condition (a) Async vs Sync We show that for the example by Reddi
et al. (2018), asynchronous optimizers (AdaShift, ACProp) converge for any
valid hyper-parameters, while synchronous optimizers (Adam, RMSProp et al.)
could diverge if the hyper-parameters are not carefully chosen. (b) Async-
Center vs Async-Uncenter Within the asynchronous optimizers family, by example
of an online convex problem with sparse gradients, we show that Async-Center
(ACProp) has weaker conditions for convergence than Async-Uncenter (AdaShift).
* •
Convergence rate We demonstrate that ACProp achieves a convergence rate of
$O(\frac{1}{\sqrt{T}})$ for stochastic non-convex problems, matching the
oracle of first-order optimizers [23], and outperforms the
$O(\frac{\log T}{\sqrt{T}})$ rate of Adam and RMSProp.
* •
Empirical performance We validate performance of ACProp in experiments: on
image classification tasks, ACProp outperforms SGD and AdaBelief, and
demonstrates good generalization performance; in experiments with transformer,
reinforcement learning and various GAN models, ACProp outperforms well-tuned
Adam, demonstrating high stability. ACProp often outperforms AdaBelief, and
achieves good generalization like SGD and training stability like Adam.
## 2 Overview of algorithms
### 2.1 Notations
* $x,x_{t}\in\mathbb{R}^{d}$:
$x$ is a $d-$dimensional parameter to be optimized, and $x_{t}$ is the value
at step $t$.
* $f(x),f^{*}\in\mathbb{R}$:
$f(x)$ is the scalar-valued function to be minimized, with optimal (minimal)
$f^{*}$.
* $\alpha_{t},\epsilon\in\mathbb{R}$:
$\alpha_{t}$ is the learning rate at step $t$. $\epsilon$ is a small number to
avoid division by 0.
* $g_{t}\in\mathbb{R}^{d}$:
The noisy observation of gradient $\nabla f(x_{t})$ at step $t$.
* $\beta_{1},\beta_{2}\in\mathbb{R}$:
Constants for exponential moving average, $0\leq\beta_{1},\beta_{2}<1$.
* $m_{t}\in\mathbb{R}^{d}$:
$m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}$. The Exponential Moving Average
(EMA) of observed gradient at step $t$.
* $\Delta g_{t}\in\mathbb{R}^{d}$:
$\Delta g_{t}=g_{t}-m_{t}$. The difference between observed gradient $g_{t}$
and EMA of $g_{t}$.
* $v_{t}\in\mathbb{R}^{d}$:
$v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2}$. The EMA of $g_{t}^{2}$.
* $s_{t}\in\mathbb{R}^{d}$:
$s_{t}=\beta_{2}s_{t-1}+(1-\beta_{2})(\Delta g_{t})^{2}$. The EMA of $(\Delta
g_{t})^{2}$.
Initialize $x_{0}$, $m_{0}\leftarrow 0$, $s_{0}\leftarrow 0$, $t\leftarrow 0$
While $x_{t}$ not converged:
$t\leftarrow t+1$
$g_{t}\leftarrow\nabla_{x}f_{t}(x_{t-1})$
$m_{t}\leftarrow\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}$
$s_{t}\leftarrow\beta_{2}s_{t-1}+(1-\beta_{2})(g_{t}-m_{t})^{2}$
$x_{t}\leftarrow\Pi_{\mathcal{F},\sqrt{s_{t}}}\big(x_{t-1}-\frac{\alpha}{\sqrt{s_{t}+\epsilon}}m_{t}\big)$
Algorithm 1 AdaBelief
Initialize $x_{0}$, $m_{0}\leftarrow 0$, $s_{0}\leftarrow 0$, $t\leftarrow 0$
While $x_{t}$ not converged:
$t\leftarrow t+1$
$g_{t}\leftarrow\nabla_{x}f_{t}(x_{t-1})$
$m_{t}\leftarrow\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}$
$x_{t}\leftarrow\Pi_{\mathcal{F},\sqrt{s_{t-1}}}\big(x_{t-1}-\frac{\alpha}{\sqrt{s_{t-1}+\epsilon}}g_{t}\big)$
$s_{t}\leftarrow\beta_{2}s_{t-1}+(1-\beta_{2})(g_{t}-m_{t})^{2}$
Algorithm 2 ACProp
### 2.2 Algorithms
In this section, we summarize the AdaBelief [18] method in Algo. 1 and ACProp
in Algo. 2. For ease of notation, all operations in Algo. 1 and Algo. 2
are element-wise, and we omit the bias-correction step of $m_{t}$ and $s_{t}$
for simplicity. $\Pi_{\mathcal{F}}$ represents the projection onto feasible
set $\mathcal{F}$.
We first introduce the notion of “sync (async)” and “center (uncenter)”. (a)
Sync vs Async The update on parameter $x_{t}$ can be generally split into a
numerator (e.g. $m_{t},g_{t}$) and a denominator (e.g.
$\sqrt{s_{t}},\sqrt{v_{t}}$). We call it “sync” if the denominator depends on
$g_{t}$, such as in Adam and RMSProp; and call it “async” if the denominator
is independent of $g_{t}$, for example, denominator uses information up to
step $t-1$ for the $t$-th step. (b) Center vs Uncenter The “uncentered” update
uses $v_{t}$, the exponential moving average (EMA) of $g_{t}^{2}$; while the
“centered” update uses $s_{t}$, the EMA of $(g_{t}-m_{t})^{2}$.
Adam (Sync-Uncenter) The Adam optimizer [12] stores the EMA of the gradient in
$m_{t}$, and stores the EMA of $g_{t}^{2}$ in $v_{t}$. For each step of the
update, Adam performs element-wise division between $m_{t}$ and
$\sqrt{v_{t}}$. Therefore, the term $\alpha_{t}\frac{1}{\sqrt{v_{t}}}$ can be
viewed as the element-wise learning rate. Note that $\beta_{1}$ and
$\beta_{2}$ are two scalars controlling the smoothness of the EMA for the
first and second moment, respectively. When $\beta_{1}=0$, Adam reduces to
RMSProp [24].
AdaBelief (Sync-Center) AdaBelief optimizer [18] is summarized in Algo. 1.
Compared with Adam, the key difference is that it replaces the uncentered
second moment $v_{t}$ (EMA of $g_{t}^{2}$) by an estimate of the centered
second moment $s_{t}$ (EMA of $(g_{t}-m_{t})^{2}$). The intuition is to view
$m_{t}$ as an estimate of the expected gradient: if the observation $g_{t}$
deviates much from the prediction $m_{t}$, then it takes a small step; if the
observation $g_{t}$ is close to the prediction $m_{t}$, then it takes a large
step.
AdaShift (Async-Uncenter) AdaShift [16] performs temporal decorrelation
between numerator and denominator. It uses information of
$\\{g_{t-n},...g_{t}\\}$ for the numerator, and uses
$\\{g_{0},...g_{t-n-1}\\}$ for the denominator, where $n$ is the “delay step”
controlling where to split sequence $\\{g_{i}\\}_{i=0}^{t}$. The numerator is
independent of denominator because each $g_{i}$ is only used in either
numerator or denominator.
ACProp (Async-Center) Our proposed ACProp is the asynchronous version of
AdaBelief and is summarized in Algo. 2. Compared to AdaBelief, the key
difference is that ACProp uses $s_{t-1}$ in the denominator for step $t$,
while AdaBelief uses $s_{t}$. Note that $s_{t}$ depends on $g_{t}$, while
$s_{t-1}$ uses history up to step $t-1$. This modification is important to
ensure that
$\mathbb{E}(g_{t}/\sqrt{s_{t-1}}|g_{0},...g_{t-1})=(\mathbb{E}g_{t})/\sqrt{s_{t-1}}$.
It’s also possible to use a delay step larger than 1 similar to AdaShift, for
example, use $EMA(\\{g_{i}\\}_{i=t-n}^{t})$ as numerator, and
$EMA(\\{(g_{i}-m_{i})^{2}\\}_{i=0}^{t-n-1})$ for denominator.
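A minimal NumPy sketch of the ACProp update in Algo. 2 (bias correction and the projection $\Pi_{\mathcal{F}}$ are omitted for brevity; variable and function names are ours):

```python
import numpy as np

def acprop_step(x, g, m, s, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ACProp step. The parameter update divides by sqrt(s_{t-1} + eps),
    i.e. the denominator is independent of g_t ("asynchronous"); only after
    the update is s refreshed with the centered term (g_t - m_t)^2."""
    m = beta1 * m + (1 - beta1) * g          # EMA of gradients (numerator)
    x = x - alpha * g / np.sqrt(s + eps)     # uses s_{t-1}: async update
    s = beta2 * s + (1 - beta2) * (g - m) ** 2  # centered second momentum
    return x, m, s

# Usage: initialize m = s = 0 and call once per observed gradient.
x, m, s = np.zeros(3), np.zeros(3), np.zeros(3)
x, m, s = acprop_step(x, np.array([0.1, -0.2, 0.3]), m, s)
```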
Figure 1: Numerical results for the example defined by Eq. (1). We set the
initial value as $x_{0}=0$, and run each optimizer for $10^{4}$ steps trying
different initial learning rates in
$\\{10^{-5},10^{-4},10^{-3},10^{-2},10^{-1},1.0\\}$, and set the learning rate
decays with $1/\sqrt{t}$. If there’s a proper initial learning rate, such that
the average distance between the parameter and its optimal value $x^{*}=-1$
for the last 1000 steps is below 0.01, then it’s marked as “converge" (orange
plus symbol), otherwise as “diverge” (blue circle). For each optimizer, we
sweep through different $\beta_{2}$ values in a log grid ($x$-axis), and sweep
through different values of $P$ in the definition of problem ($y$-axis). We
plot the result for $\beta_{1}=0.9$ here; for results with different
$\beta_{1}$ values, please refer to appendix. Our results indicate that in the
$(P,\beta_{2})$ plane, there’s a threshold curve beyond which sync-optimizers
(Adam, RMSProp, AdaBelief) will diverge; however, async-optimizers (ACProp,
AdaShift) always converge for any point in the $(P,\beta_{2})$ plane. Note
that for AdaShift, a larger delay step $n$ is possible to cause divergence
(see example in Fig. 2 with $n=10$). To validate that the “divergence” is not
due to numerical issues and sync-optimizers are drifting away from optimal, we
plot trajectories in Fig. 2
## 3 Analyze the conditions for convergence
We analyze the convergence conditions for different methods in this section.
We first analyze the counter example by Reddi et al. (2018) and show that
async-optimizers (AdaShift, ACProp) always converge
$\forall\beta_{1},\beta_{2}\in(0,1)$, while sync-optimizers (Adam, AdaBelief,
RMSProp et al.) would diverge if $(\beta_{1},\beta_{2})$ are not carefully
chosen; hence, async-optimizers have weaker convergence conditions than sync-
optimizers. Next, we compare async-uncenter (AdaShift) with async-center
(ACProp) and show that momentum centering further weakens the convergence
condition for sparse-gradient problems. Therefore, ACProp has weaker
convergence conditions than AdaShift and other sync-optimizers.
### 3.1 Sync vs Async
We show that for the example in [14], async-optimizers (ACProp, AdaShift) have
weaker convergence conditions than sync-optimizers (Adam, RMSProp, AdaBelief).
###### Lemma 3.1 (Thm.1 in [14]).
There exists an online convex optimization problem where sync-optimizers (e.g.
Adam, RMSProp) have non-zero average regret, and one example is
$f_{t}(x)=\begin{cases}Px, & \text{if }t\bmod P=1\\ -x, & \text{otherwise}\end{cases}\qquad x\in[-1,1],\ P\in\mathbb{N},\ P\geq 3$ (1)
###### Lemma 3.2 ([25]).
For problem (1) with any fixed $P$, there’s a threshold of $\beta_{2}$ above
which RMSProp converges.
Figure 2: Trajectories of $x$ for different optimizers on the problem defined
by Eq. (1). The initial point is $x_{0}=0$ and the optimum is $x^{*}=-1$. The
trajectories show that sync-optimizers (Adam, AdaBelief, RMSProp) drift away
from the optimum, validating that the divergence region in Fig. 1 is genuine
rather than an artifact of numerical issues. Async-optimizers (ACProp,
AdaShift) converge to the optimal value, but a large delay step $n$ in
AdaShift can cause non-convergence.
To better illustrate the two lemmas above, we conduct numerical experiments
on the problem defined by Eq. (1) and show the results in Fig. 1. Note that
the gradients over one period sum to $Px-(P-1)x=x$, hence the optimal point is
$x^{*}=-1$ since $x\in[-1,1]$. Starting from the initial value $x_{0}=0$, we
sweep through the $(P,\beta_{2})$ plane and plot the convergence results in
Fig. 1, with example trajectories in Fig. 2. A minimal simulation of this
sweep for a single $(P,\beta_{2})$ point is sketched below.
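The following hypothetical sketch toggles only whether the current gradient enters the denominator (parameter values are illustrative, not the exact experimental code):

```python
import numpy as np

def run_problem1(P=11, beta1=0.9, beta2=0.3, lr0=0.01,
                 steps=10_000, async_denom=True):
    """Problem (1): the gradient is P when t % P == 1, else -1.

    async_denom=True uses s_{t-1} in the denominator (ACProp-style);
    False uses s_t, so the current gradient leaks into the denominator
    (sync-style). Returns the mean of x over the last 1000 steps
    (the optimum is x* = -1).
    """
    x, m, s = 0.0, 0.0, 0.0
    xs = []
    for t in range(1, steps + 1):
        g = float(P) if t % P == 1 else -1.0
        s_prev = s
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        denom = np.sqrt(s_prev if async_denom else s) + 1e-8
        x = float(np.clip(x - (lr0 / np.sqrt(t)) * g / denom, -1.0, 1.0))
        xs.append(x)
    return float(np.mean(xs[-1000:]))

print(run_problem1(async_denom=True))   # near -1: converges
print(run_problem1(async_denom=False))  # typically drifts away from -1
```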
Lemma 3.1 tells half of the story: looking at each vertical line in the
subfigure of Fig. 1, that is, for each fixed hyper-parameter $\beta_{2}$,
there exists a sufficiently large $P$ such that Adam (and RMSProp) diverges.
Lemma 3.2 tells the other half: looking at each horizontal line in the
subfigure of Fig. 1, for each problem with a fixed period $P$, there exists a
sufficiently large $\beta_{2}$ beyond which Adam converges.
The complete story is to look at the $(P,\beta_{2})$ plane in Fig. 1: there is
a boundary between the convergence and divergence regions for sync-optimizers
(Adam, RMSProp, AdaBelief), while async-optimizers (ACProp, AdaShift) always
converge.
###### Lemma 3.3.
For the problem defined by Eq. (1), using learning rate schedule of
$\alpha_{t}=\frac{\alpha_{0}}{\sqrt{t}}$, async-optimizers (ACProp and
AdaShift with $n=1$) always converge
$\forall\beta_{1},\beta_{2}\in(0,1),\forall P\in\mathbb{N},P\geq 3$.
The proof is in the appendix. Note that for AdaShift, the proof of the
always-convergence property only holds when $n=1$; a larger $n$ can cause
divergence (e.g. $n=10$ causes divergence, as in Fig. 2). The
always-convergence property of ACProp and AdaShift comes from the unbiased
stepsize, while the stepsize of sync-optimizers is biased due to the
correlation between numerator and denominator. Taking RMSProp as an example
of a sync-optimizer, the update is
$-\alpha_{t}\frac{g_{t}}{\sqrt{v_{t}}}=-\alpha_{t}\frac{g_{t}}{\sqrt{(1-\beta_{2})(\beta_{2}^{t}g_{0}^{2}+...+\beta_{2}g_{t-1}^{2}+g_{t}^{2})}}$.
Since $g_{t}$ appears in both the numerator and denominator, a large $g_{t}$
does not necessarily generate a large stepsize. For the example in Eq. (1),
the optimizer observes a gradient of $-1$ for $P-1$ steps and a gradient of
$P$ once; due to the biased stepsize of sync-optimizers, the gradient of $P$
does not generate a sufficiently large stepsize to compensate for the effect
of the wrong gradients $-1$, hence causing non-convergence. For
async-optimizers, $g_{t}$ is not used in the denominator, so the stepsize is
unbiased and async-optimizers have the always-convergence property.
Remark Reddi et al. (2018) proposed AMSGrad, which tracks the element-wise
maximum of $v_{t}$ in order to achieve the always-convergence property.
However, tracking the maximum in the denominator generally produces a small
stepsize, which often harms empirical performance; a toy illustration
follows, and we demonstrate this experimentally in later sections in Fig. 6.
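As a toy illustration (a hypothetical numerical sketch, separate from the experiments in Fig. 6), a single rare large gradient permanently inflates AMSGrad's max-tracked denominator:

```python
# One outlier gradient of 100, followed by many small gradients of 1.
v, v_hat, beta2 = 0.0, 0.0, 0.999
for g in [100.0] + [1.0] * 10_000:
    v = beta2 * v + (1 - beta2) * g * g
    v_hat = max(v_hat, v)  # AMSGrad: the denominator never decreases
print(v, v_hat)  # v decays back toward ~1.0, but v_hat stays at 10.0
```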
Figure 3: Area of convergence for the problem in Eq. (2). The numerical
experiment is performed under the same setting as in Fig. 1. Our results
experimentally validate the claim that, compared with async-uncenter
(AdaShift), async-center (ACProp) has a larger convergence area in the
hyper-parameter space.
### 3.2 Async-Uncenter vs Async-Center
In the last section, we demonstrated that async-optimizers have weaker
convergence conditions than sync-optimizers. In this section, within the
async-optimizer family, we analyze the effect of centering second momentum. We
show that compared with async-uncenter (AdaShift), async-center (ACProp) has
weaker convergence conditions. We consider the following online convex
problem:
$f_{t}(x)=\begin{cases}(P/2)\,x, & t\bmod P=1\\ -x, & t\bmod P=P-2\\ 0, & \text{otherwise}\end{cases}\qquad P>3,\ P\in\mathbb{N},\ x\in[0,1].$ (2)
The initial point is $x_{0}=0.5$ and the optimal point is $x^{*}=0$. We have
the following result:
###### Lemma 3.4.
For the problem defined by Eq. (2), there exist hyper-parameter tuples
$(\beta_{1},\beta_{2},P)$ for which ACProp converges but AdaShift with $n=1$
diverges, but not vice versa.
We provide the proof in the appendix. Lemma 3.4 implies that ACProp has a
larger area of convergence than AdaShift, hence centering the second momentum
further weakens the convergence conditions. We validate this claim with
numerical experiments in Fig. 3; as a sanity check, we plot the trajectories
of different optimizers in Fig. 4. We observe that the convergence of
AdaShift is influenced by the delay step $n$, and there is no good criterion
for selecting $n$: Fig. 2 requires a small $n$ for convergence on problem
(1), while Fig. 4 requires a large $n$ for convergence on problem (2). ACProp
has a larger area of convergence, indicating that both the async update and
second-momentum centering help weaken the convergence conditions.
We provide an intuitive explanation of why momentum centering helps
convergence. Due to the periodicity of the problem, the optimizer behaves
almost periodically as $t\to\infty$. Within each period, the optimizer
observes one positive gradient $P/2$ and one negative gradient $-1$; as in
Fig. 5, between observations of non-zero gradients, the gradient is always 0.
Within each period, ACProp performs a positive update $P/(2\sqrt{s^{+}})$ and
a negative update $-1/\sqrt{s^{-}}$, where $s^{+}$ ($s^{-}$) is the value of
the denominator before observing the positive (negative) gradient; $v^{+}$
and $v^{-}$ are defined analogously for AdaShift. A net update in the correct
direction requires $\frac{P}{2\sqrt{s^{+}}}>\frac{1}{\sqrt{s^{-}}}$
(equivalently $s^{+}/s^{-}<P^{2}/4$).
When observing a 0 gradient, AdaShift updates
$v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})0^{2}$, while ACProp updates
$s_{t}=\beta_{2}s_{t-1}+(1-\beta_{2})(0-m_{t})^{2}$ with $m_{t}\neq 0$.
Therefore, $v^{-}$ decays exponentially to 0, while $s^{-}$ decays to a
non-zero constant, so $\frac{s^{+}}{s^{-}}<\frac{v^{+}}{v^{-}}$; hence ACProp
satisfies $s^{+}/s^{-}<P^{2}/4$ more easily and converges. The short
simulation below illustrates this effect.
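The sketch below (a hypothetical simulation, with parameter values chosen only for illustration) tracks both denominators on problem (2) and prints the two ratios:

```python
import numpy as np

def denominator_ratios(P=50, beta1=0.9, beta2=0.9, periods=500):
    """Track AdaShift's uncentered v_t and ACProp's centered s_t on
    problem (2), recording their values just before the positive
    gradient (v+, s+) and just before the negative gradient (v-, s-)."""
    m = v = s = 0.0
    vp = vm = sp = sm = None
    for t in range(1, periods * P + 2):
        if t % P == 1:
            vp, sp = v, s            # denominators before the +P/2 gradient
            g = P / 2
        elif t % P == P - 2:
            vm, sm = v, s            # denominators before the -1 gradient
            g = -1.0
        else:
            g = 0.0                  # zero gradient most of the time
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g           # uncentered
        s = beta2 * s + (1 - beta2) * (g - m) ** 2    # centered
    return vp / vm, sp / sm          # net progress needs ratio < P**2 / 4

print(denominator_ratios())  # s+/s- is smaller than v+/v- (cf. Figs. 12-13)
```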
Figure 4: Trajectories for problem defined by Eq. (2). Note that the optimal
point is $x^{*}=0$.
Figure 5: Value of uncentered second momentum $v_{t}$ and centered momentum
$s_{t}$ for problem (2).
## 4 Analysis on convergence rate
In this section, we show that ACProp converges at a rate of $O(1/\sqrt{T})$
in the stochastic non-convex case, which matches the oracle complexity [23]
for first-order optimizers and outperforms the $O(\log T/\sqrt{T})$ rate of
sync-optimizers (Adam, RMSProp and AdaBelief) [26, 25, 18]. We further show
that the upper bound on the regret of async-center (ACProp) outperforms that
of async-uncenter (AdaShift) by a constant factor.
For ease of analysis, we write the update as
$x_{t}=x_{t-1}-\alpha_{t}A_{t}g_{t}$, where $A_{t}$ is the diagonal
preconditioner. For SGD, $A_{t}=I$; for sync-optimizers (RMSProp),
$A_{t}=\frac{1}{\sqrt{v_{t}}+\epsilon}$; for AdaShift with $n=1$,
$A_{t}=\frac{1}{\sqrt{v_{t-1}}+\epsilon}$; for ACProp,
$A_{t}=\frac{1}{\sqrt{s_{t-1}}+\epsilon}$. For async-optimizers,
$\mathbb{E}[A_{t}g_{t}|g_{0},...g_{t-1}]=A_{t}\mathbb{E}g_{t}$; for
sync-optimizers this does not hold, because $g_{t}$ is used in $A_{t}$.
###### Theorem 4.1 (convergence for stochastic non-convex case).
Under the following assumptions:
* $f$ is continuously differentiable, $f$ is lower-bounded by $f^{*}$ and upper-bounded by $M_{f}$, and $\nabla f(x)$ is globally Lipschitz continuous with constant $L$:
$||\nabla f(x)-\nabla f(y)||\leq L||x-y||$ (3)
* For any iteration $t$, $g_{t}$ is an unbiased estimator of $\nabla f(x_{t})$ with variance bounded by $\sigma^{2}$, and the norm of $g_{t}$ is bounded by $M_{g}$:
$\mathbb{E}\big{[}g_{t}\big{]}=\nabla f(x_{t}),\ \ \ \ \mathbb{E}\big{[}||g_{t}-\nabla f(x_{t})||^{2}\big{]}\leq\sigma^{2}$ (4)
then for $\beta_{1},\beta_{2}\in[0,1)$, with learning rate schedule
$\alpha_{t}=\alpha_{0}t^{-\eta},\ \ \alpha_{0}\leq\frac{C_{l}}{LC_{u}^{2}},\ \ \eta\in[0.5,1)$
for the sequence $\{x_{t}\}$ generated by ACProp, we have
$\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{2}{C_{l}}\Big{[}(M_{f}-f^{*})\alpha_{0}T^{\eta-1}+\frac{LC_{u}^{2}\sigma^{2}\alpha_{0}}{2(1-\eta)}T^{-\eta}\Big{]}$
(5)
where $C_{l}$ and $C_{u}$ are scalars representing the lower and upper bounds
on $A_{t}$, i.e. $C_{l}I\preceq A_{t}\preceq C_{u}I$, where $A\preceq B$
means that $B-A$ is positive semi-definite.
Note that there are natural bounds $C_{u}\leq\frac{1}{\epsilon}$ and
$C_{l}\geq\frac{1}{2M_{g}}$, because $\epsilon$ is added to the denominator
to avoid division by 0 and $g_{t}$ is bounded by $M_{g}$. Thm. 4.1 implies
that ACProp has a convergence rate of $O(1/\sqrt{T})$ when $\eta=0.5$;
equivalently, in order to achieve $||\nabla f(x)||^{2}\leq\delta^{2}$, ACProp
requires at most $O(\delta^{-4})$ steps, as the worked substitution below
shows.
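For concreteness, setting $\eta=0.5$ in Eq. (5) gives $T^{\eta-1}=T^{-\eta}=T^{-1/2}$ and $\frac{1}{2(1-\eta)}=1$, so
$\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{2\alpha_{0}}{C_{l}}\Big{[}(M_{f}-f^{*})+LC_{u}^{2}\sigma^{2}\Big{]}T^{-1/2}=O(1/\sqrt{T}),$
and requiring the right-hand side to be at most $\delta^{2}$ gives $T=O(\delta^{-4})$.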
###### Theorem 4.2 (Oracle complexity [23]).
For a stochastic non-convex problem satisfying the assumptions in Thm. 4.1,
using only up to first-order gradient information, in the worst case any
algorithm requires at least $O(\delta^{-4})$ queries to find a
$\delta$-stationary point $x$ such that $||\nabla f(x)||^{2}\leq\delta^{2}$.
Optimal rate in big O Thm. 4.1 and Thm. 4.2 imply that async-optimizers
achieve a convergence rate of $O(1/\sqrt{T})$ for the stochastic non-convex
problem, which matches the oracle complexity and outperforms the
$O(\log T/\sqrt{T})$ rate of sync-optimizers (Adam [14], RMSProp [25],
AdaBelief [18]). Adam and RMSProp are shown to achieve the $O(1/\sqrt{T})$
rate only under the stricter condition that $\beta_{2,t}\to 1$ [27]. A
similar rate has been achieved by AVAGrad [28], and AdaGrad achieves a
similar rate as well [29]. Despite the same convergence rate, we show that
ACProp has better empirical performance.
Constants in the upper bound of regret Although async-center and
async-uncenter optimizers have the same convergence rate, with matching upper
and lower bounds in big-O notation, the constants in the upper bound on
regret differ. Thm. 4.1 implies that the upper bound on regret is an
increasing function of $1/C_{l}$ and $C_{u}$, with
$1/C_{l}=\sqrt{K_{u}}+\epsilon,\ \ C_{u}=1/(\sqrt{K_{l}}+\epsilon)$
where $K_{l}$ and $K_{u}$ are the lower and upper bounds on the second
momentum, respectively.
We analyze the constants in regret by analyzing $K_{l}$ and $K_{u}$. If we
assume the observed gradient $g_{t}$ follows some independent stationary
distribution, with mean $\mu$ and variance $\sigma^{2}$, then approximately
$\displaystyle\textit{Uncentered second momentum:
}1/C_{l}^{v}=\sqrt{K_{u}^{v}}+\epsilon\approx\sqrt{\mu^{2}+\sigma^{2}}+\epsilon$
(6) $\displaystyle\textit{Centered second momentum:
}1/C_{l}^{s}=\sqrt{K_{u}^{s}}+\epsilon\approx\sqrt{\sigma^{2}}+\epsilon$ (7)
During the early phase of training, in general $|\mu|\gg\sigma$, hence
$1/C_{l}^{s}\ll 1/C_{l}^{v}$, and the centered version (ACProp) can converge
faster than the uncentered version (AdaShift) by a constant factor of about
$\frac{\sqrt{\mu^{2}+\sigma^{2}}+\epsilon}{\sqrt{\sigma^{2}}+\epsilon}$.
During the late phase, $g_{t}$ is centered around 0 and $|\mu|\ll\sigma$,
hence $K_{l}^{v}$ (uncentered) and $K_{l}^{s}$ (centered) are both close to
0, and the $C_{u}$ term is similar for both versions. A toy numerical check
of this factor follows.
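As a toy check of Eqs. (6)-(7) under the i.i.d. stationary-gradient assumption (the values of $\mu$ and $\sigma$ here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 1.0                      # early phase: |mu| >> sigma
g = rng.normal(mu, sigma, 100_000)        # i.i.d. stationary gradient model

bound_uncentered = np.sqrt(np.mean(g ** 2))              # ~sqrt(mu^2 + sigma^2)
bound_centered = np.sqrt(np.mean((g - g.mean()) ** 2))   # ~sigma
print(bound_uncentered / bound_centered)  # ~5.1: the constant-factor gap
```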
Remark We emphasize that ACProp rarely encounters numerical issues caused by
a small denominator $s_{t}$, even though Eq. (7) implies a lower bound for
$s_{t}$ of about $\sigma^{2}$, which could be small in extreme cases. Note
that $s_{t}$ estimates a mixture of two quantities: the change in the true
gradient, $||\nabla f_{t}(x)-\nabla f_{t-1}(x)||^{2}$, and the noise in
$g_{t}$ as an observation of $\nabla f(x)$. Therefore, two conditions are
both required to achieve $s_{t}=0$: the true gradient $\nabla f_{t}(x)$
remains constant, and $g_{t}$ is a noise-free observation of
$\nabla f_{t}(x)$. Eq. (7) is based on the assumption that
$||\nabla f_{t}(x)-\nabla f_{t-1}(x)||^{2}=0$; if we further assume
$\sigma=0$, the problem reduces to a trivial ideal case: a linear loss
surface with clean gradient observations, which is rarely encountered in
practice. More discussion is in the appendix.
Figure 6: From left to right: (a) Mean value of the denominator for a 2-layer
MLP on the MNIST dataset. (b) Training loss of different optimizers for the
2-layer MLP model. (c) Performance of AdaShift for VGG-11 on CIFAR10 with
learning rates ranging from 1e-1 to 1e-5; we plot the performance of ACProp
with learning rate 1e-3 as a reference. Missing lines have accuracy below the
display threshold. All methods decay the learning rate by a factor of 10 at
the 150th epoch. (d) Performance of AMSGrad for VGG-11 on CIFAR10 with
learning rates varied under the same setting as (c).
Empirical validations We conducted experiments on the MNIST dataset with a
2-layer MLP. We plot the average value of $v_{t}$ for uncentered-type and
$s_{t}$ for centered-type optimizers; as Fig. 6(a,b) shows, we observe
$s_{t}\leq v_{t}$ and the centered-type optimizers (ACProp, AdaBelief)
converge faster, validating our analysis of the early phase. For epochs
$>10$, we observe that $\operatorname{min}s_{t}\approx\operatorname{min}v_{t}$,
validating our analysis of the late phase.
As in Fig. 6(a,b), the ratio $v_{t}/s_{t}$ decays during training, and in
fact it depends on the model structure and the dataset noise. Therefore, it
is hard in practice to compensate for the constants in the regret by simply
applying a larger learning rate to async-uncenter optimizers. As shown in
Fig. 6(c,d), for a VGG network on the CIFAR10 classification task, we tried
initial learning rates for AdaShift (async-uncenter) and AMSGrad ranging from
1e-1 to 1e-5, and their performance is always inferior to ACProp with a
learning rate of 1e-3. Please see Fig. 8 for a complete comparison across
hyper-parameters.
## 5 Experiments
We validate the performance of ACProp in various experiments, including image
classification with convolutional neural networks (CNN), reinforcement
learning with a deep Q-network (DQN), machine translation with a transformer,
and generative adversarial networks (GANs). We aim to test both
generalization performance and training stability: SGD-family optimizers are
typically the default for CNN models such as in image recognition [6] and
object detection [7] due to their better generalization than Adam, while
Adam is typically the default for GANs [19], reinforcement learning [21] and
transformers [20], mainly due to its better numerical stability and faster
convergence than SGD. We aim to validate that ACProp performs well in both
regimes.
Figure 7: Test accuracy ($mean\pm std$) on the CIFAR10 dataset. Left to
right: VGG-11, ResNet-34, DenseNet-121.
Figure 8: Test accuracy (%) of the VGG network on CIFAR10 under different
hyper-parameters. We tested learning rates in
$\{10^{-1},10^{-2},10^{-3},10^{-4}\}$ and $\epsilon\in\{10^{-5},...,10^{-9}\}$.
Figure 9: Reward curves (higher is better) of a DQN on the four-rooms
problem. We report the mean and standard deviation across 10 independent
runs.
Table 2: Top-1 accuracy of ResNet18 on ImageNet. ⋄ is reported in the PyTorch
documentation, ${\dagger}$ is reported in [30], $\ast$ is reported in [17],
‡ is reported in [18].
SGD | Padam | Adam | AdamW | RAdam | AdaShift | AdaBelief | ACProp
---|---|---|---|---|---|---|---
69.76⋄ (70.23†) | 70.07† | 66.54∗ | 67.93† | 67.62∗ | 65.28 | 70.08${\ddagger}$ | 70.46
Image classification with CNN We first conducted experiments on the CIFAR10
image classification task with VGG-11 [31], ResNet34 [6] and DenseNet-121
[32]. We performed extensive hyper-parameter tuning in order to fairly
compare the different optimizers: for SGD we set the momentum to 0.9, the
default for many tasks [6, 32], and searched the learning rate between 0.1
and $10^{-5}$ on a log grid; for the adaptive optimizers, including
AdaBelief, Adam, RAdam, AdamW and AdaShift, we searched the learning rate
between 0.01 and $10^{-5}$ on a log grid, and searched $\epsilon$ between
$10^{-5}$ and $10^{-10}$ on a log grid. We used a weight decay of 5e-2 for
AdamW and 5e-4 for the other optimizers. We report the $mean\pm std$ of the
best configuration of each optimizer in Fig. 7: for VGG and ResNet, ACProp
achieves results comparable with AdaBelief and outperforms the other
optimizers; for DenseNet, ACProp achieves the highest accuracy and
outperforms even AdaBelief by 0.5%. As shown in Table 2, for ResNet18 on
ImageNet, ACProp outperforms the other methods and achieves accuracy
comparable to the best reported SGD result in the literature, validating its
generalization performance. The search grids are sketched below.
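For reference, the search grids just described can be written compactly as follows (a sketch of the tuning protocol only; the variable names are ours, not from the experimental code):

```python
import numpy as np

sgd_lrs      = 10.0 ** np.arange(-1, -6, -1)   # 1e-1 ... 1e-5
adaptive_lrs = 10.0 ** np.arange(-2, -6, -1)   # 1e-2 ... 1e-5
eps_grid     = 10.0 ** np.arange(-5, -11, -1)  # 1e-5 ... 1e-10

# Each (lr, eps) pair is one full training run per adaptive optimizer.
adaptive_configs = [(lr, eps) for lr in adaptive_lrs for eps in eps_grid]
print(len(adaptive_configs))  # 24 runs per adaptive optimizer
```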
To evaluate robustness to hyper-parameters, we test the performance of the
optimizers under different hyper-parameters with the VGG network. We plot the
results for ACProp and AdaShift as an example in Fig. 8 and find that ACProp
is more robust to hyper-parameters and typically achieves higher accuracy
than AdaShift.
Table 3: BLEU score (higher is better) on machine translation with
Transformer
| Adam | RAdam | AdaShift | AdaBelief | ACProp
---|---|---|---|---|---
DE-EN | 34.66$\pm$0.014 | 34.76$\pm$0.003 | 30.18$\pm$0.020 | 35.17$\pm$0.015 | 35.35$\pm$0.012
EN-VI | 21.83$\pm$0.015 | 22.54$\pm$0.005 | 20.18$\pm$0.231 | 22.45$\pm$0.003 | 22.62$\pm$0.008
JA-EN | 33.33$\pm$0.008 | 32.23$\pm$0.015 | 25.24$\pm$0.151 | 34.38$\pm$0.009 | 33.70$\pm$0.021
RO-EN | 29.78$\pm$ 0.003 | 30.26 $\pm$ 0.011 | 27.86$\pm$0.024 | 30.03$\pm$0.012 | 30.27$\pm$0.007
Table 4: FID (lower is better) for GANs
| Adam | RAdam | AdaShift | AdaBelief | ACProp
---|---|---|---|---|---
DCGAN | 49.29$\pm$0.25 | 48.24$\pm$1.38 | 99.32$\pm$3.82 | 47.25$\pm$0.79 | 43.43$\pm$4.38
RLGAN | 38.18$\pm$0.01 | 40.61$\pm$0.01 | 56.18 $\pm$0.23 | 36.58$\pm$0.12 | 37.15$\pm$0.13
SNGAN | 13.14$\pm$0.10 | 13.00$\pm$0.04 | 26.62$\pm$0.21 | 12.70$\pm$0.17 | 12.44$\pm$0.02
SAGAN | 13.98$\pm$0.02 | 14.25$\pm$0.01 | 22.11$\pm$0.25 | 14.17$\pm$0.14 | 13.54$\pm$0.15
Table 5: Performance comparison between AVAGrad and ACProp. $\uparrow$
($\downarrow$) marks metrics where higher (lower) is better. ⋆ values are
reported in the AVAGrad paper [28].
| WideResNet Test Error ($\downarrow$) | Transformer BLEU ($\uparrow$) | GAN FID ($\downarrow$)
---|---|---|---
| CIFAR10 | CIFAR100 | DE-EN | RO-EN | DCGAN | SNGAN
AVAGrad | 3.80⋆$\pm$0.02 | 18.76⋆$\pm$0.20 | 30.23$\pm$0.024 | 27.73$\pm$0.134 | 59.32$\pm$3.28 | 21.02$\pm$0.14
ACProp | 3.67$\pm$0.04 | 18.72$\pm$0.01 | 35.35$\pm$0.012 | 30.27$\pm$0.007 | 43.34$\pm$4.38 | 12.44$\pm$0.02
Reinforcement learning with DQN We evaluated different optimizers on
reinforcement learning with a deep Q-network (DQN) [21] on the four-rooms
task [33]. We tuned the hyper-parameters in the same setting as in the
previous section. We report the mean and standard deviation of the reward
(higher is better) across 10 runs in Fig. 9. ACProp achieves the highest mean
reward, validating its numerical stability and good generalization.
Neural machine translation with Transformer We evaluated the performance of
ACProp on neural machine translation tasks with a transformer model [20].
For all optimizers, we set the learning rate to 0.0002 and search
$\beta_{1}\in\{0.9,0.99,0.999\}$, $\beta_{2}\in\{0.98,0.99,0.999\}$ and
$\epsilon\in\{10^{-5},10^{-6},...,10^{-16}\}$. As shown in Table 3, ACProp
achieves the highest BLEU score on 3 out of 4 tasks and consistently
outperforms a well-tuned Adam.
Generative Adversarial Networks (GAN) The training of GANs easily suffers
from mode collapse and numerical instability [34], hence it is a good test of
optimizer stability. We conducted experiments with Deep Convolutional GAN
(DCGAN) [35], Spectral-Norm GAN (SNGAN) [36], Self-Attention GAN (SAGAN) [37]
and Relativistic GAN (RLGAN) [38]. We set $\beta_{1}=0.5$ and search
$\beta_{2}$ and $\epsilon$ with the same schedule as in the previous section.
We report the FID [39] on the CIFAR10 dataset in Table 4, where a lower FID
represents better quality of generated images. ACProp achieves the best
overall FID score and outperforms well-tuned Adam.
Remark Besides AdaShift, we found another async-optimizer, AVAGrad [28].
Unlike other adaptive optimizers, AVAGrad is not scale-invariant, hence its
default hyper-parameters are very different from Adam-type ones
($lr=0.1,\epsilon=0.1$). We therefore searched AVAGrad's hyper-parameters
over a much larger range, with $\epsilon$ between 1e-8 and 100 on a log grid
and $lr$ between 1e-6 and 100 on a log grid. For the WideResNet experiments,
we replaced the optimizer in the official AVAGrad implementation with ACProp
and cite the results from the AVAGrad paper. As shown in Table 5, ACProp
consistently outperforms AVAGrad in CNN, Transformer, and GAN training.
## 6 Related Works
Besides the aforementioned methods, other variants of Adam include NosAdam
[40], Sadam [41], Adax [42], AdaBound [15] and Yogi [43]. ACProp can be
combined with other techniques such as SWATS [44], LookAhead [45] and norm
regularization similar to AdamP [46]. Regarding theoretical analysis, recent
research has provided more fine-grained frameworks [47, 48]. Besides
first-order methods, recent research also approximates second-order methods
in deep learning [49, 50, 51].
## 7 Conclusion
We propose ACProp, a novel first-order gradient optimizer which combines the
asynchronous update with centering of the second momentum. We demonstrate
that ACProp has good theoretical properties: ACProp has the
“always-convergence” property on the counterexample by Reddi et al. (2018),
while sync-optimizers (Adam, RMSProp) can diverge if hyper-parameters are not
carefully chosen; for problems with sparse gradients, async-centering
(ACProp) has a weaker convergence condition than async-uncentering
(AdaShift); ACProp achieves the optimal convergence rate $O(1/\sqrt{T})$,
outperforming the $O(\log T/\sqrt{T})$ rate of RMSProp (Adam), and achieves a
tighter upper bound on regret than AdaShift. In experiments, we validate that
ACProp has good empirical performance: it achieves good generalization like
SGD and fast convergence and training stability like Adam, and it often
outperforms both Adam and AdaBelief.
## Acknowledgments and Disclosure of Funding
This research is supported by NIH grant R01NS035193.
## References
* [1] Ruoyu Sun, “Optimization for deep learning: theory and algorithms,” arXiv preprint arXiv:1912.08957, 2019.
* [2] Herbert Robbins and Sutton Monro, “A stochastic approximation method,” The annals of mathematical statistics, pp. 400–407, 1951.
* [3] Yu Nesterov, “A method of solving a convex programming problem with convergence rate $O(1/k^{2})$,” in Sov. Math. Dokl, 1983.
* [4] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton, “On the importance of initialization and momentum in deep learning,” in International conference on machine learning, 2013, pp. 1139–1147.
* [5] Boris T Polyak, “Some methods of speeding up the convergence of iteration methods,” USSR Computational Mathematics and Mathematical Physics, vol. 4, no. 5, pp. 1–17, 1964.
* [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
* [7] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” arXiv preprint arXiv:1506.01497, 2015.
* [8] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
* [9] John Duchi, Elad Hazan, and Yoram Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of machine learning research, vol. 12, no. Jul, pp. 2121–2159, 2011.
* [10] Matthew D Zeiler, “Adadelta: an adaptive learning rate method,” arXiv preprint arXiv:1212.5701, 2012.
* [11] Alex Graves, “Generating sequences with recurrent neural networks,” arXiv preprint arXiv:1308.0850, 2013.
* [12] Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
* [13] Ilya Loshchilov and Frank Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101, 2017.
* [14] Sashank J Reddi, Satyen Kale, and Sanjiv Kumar, “On the convergence of adam and beyond,” International Conference on Learning Representations, 2018.
* [15] Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun, “Adaptive gradient methods with dynamic bound of learning rate,” arXiv preprint arXiv:1902.09843, 2019.
* [16] Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Weinan Zhang, and Yong Yu, “Adashift: Decorrelation and convergence of adaptive learning rate methods,” in International Conference on Learning Representations, 2019.
* [17] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han, “On the variance of the adaptive learning rate and beyond,” arXiv preprint arXiv:1908.03265, 2019.
* [18] Juntang Zhuang, Tommy Tang, Sekhar Tatikonda, Nicha Dvornek, Yifan Ding, Xenophon Papademetris, and James S Duncan, “Adabelief optimizer: Adapting stepsizes by the belief in observed gradients,” arXiv preprint arXiv:2010.07468, 2020.
* [19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
* [20] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol. 30, pp. 5998–6008, 2017.
* [21] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
* [22] Yasutoshi Ida, Yasuhiro Fujiwara, and Sotetsu Iwamura, “Adaptive learning rate via covariance matrix based preconditioning for deep neural networks,” arXiv preprint arXiv:1605.09593, 2016.
* [23] Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth, “Lower bounds for non-convex stochastic optimization,” arXiv preprint arXiv:1912.02365, 2019.
* [24] Geoffrey Hinton, “Rmsprop: Divide the gradient by a running average of its recent magnitude,” Coursera, 2012.
* [25] Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun, “{RMS}prop can converge with proper hyper-parameter,” in International Conference on Learning Representations, 2021.
* [26] Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong, “On the convergence of a class of adam-type algorithms for non-convex optimization,” arXiv preprint arXiv:1808.02941, 2018.
* [27] Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu, “A sufficient condition for convergences of adam and rmsprop,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11127–11135.
* [28] Pedro Savarese, David McAllester, Sudarshan Babu, and Michael Maire, “Domain-independent dominance of adaptive methods,” arXiv preprint arXiv:1912.01823, 2019.
* [29] Xiaoyu Li and Francesco Orabona, “On the convergence of stochastic gradient descent with adaptive stepsizes,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 983–992.
* [30] Jinghui Chen and Quanquan Gu, “Closing the generalization gap of adaptive gradient methods in training deep neural networks,” arXiv preprint arXiv:1806.06763, 2018.
* [31] Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
* [32] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
* [33] Richard S Sutton, Doina Precup, and Satinder Singh, “Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning,” Artificial intelligence, vol. 112, no. 1-2, pp. 181–211, 1999.
* [34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen, “Improved techniques for training gans,” in Advances in neural information processing systems, 2016, pp. 2234–2242.
* [35] Alec Radford, Luke Metz, and Soumith Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
* [36] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida, “Spectral normalization for generative adversarial networks,” arXiv preprint arXiv:1802.05957, 2018.
* [37] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena, “Self-attention generative adversarial networks,” in International conference on machine learning. PMLR, 2019, pp. 7354–7363.
* [38] Alexia Jolicoeur-Martineau, “The relativistic discriminator: a key element missing from standard gan,” arXiv preprint arXiv:1807.00734, 2018.
* [39] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter, “Gans trained by a two time-scale update rule converge to a local nash equilibrium,” in Advances in neural information processing systems, 2017, pp. 6626–6637.
* [40] Haiwen Huang, Chang Wang, and Bin Dong, “Nostalgic adam: Weighting more of the past gradients when designing the adaptive learning rate,” arXiv preprint arXiv:1805.07557, 2018.
* [41] Guanghui Wang, Shiyin Lu, Weiwei Tu, and Lijun Zhang, “Sadam: A variant of adam for strongly convex functions,” arXiv preprint arXiv:1905.02957, 2019.
* [42] Wenjie Li, Zhaoyang Zhang, Xinjiang Wang, and Ping Luo, “Adax: Adaptive gradient descent with exponential long term memory,” arXiv preprint arXiv:2004.09740, 2020.
* [43] Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar, “Adaptive methods for nonconvex optimization,” in Advances in neural information processing systems, 2018, pp. 9793–9803.
* [44] Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson, “Averaging weights leads to wider optima and better generalization,” arXiv preprint arXiv:1803.05407, 2018.
* [45] Michael Zhang, James Lucas, Jimmy Ba, and Geoffrey E Hinton, “Lookahead optimizer: k steps forward, 1 step back,” in Advances in Neural Information Processing Systems, 2019, pp. 9593–9604.
* [46] Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, and Jung-Woo Ha, “Adamp: Slowing down the slowdown for momentum optimizers on scale-invariant weights,” in Proceedings of the International Conference on Learning Representations (ICLR), Online, 2021, pp. 3–7.
* [47] Ahmet Alacaoglu, Yura Malitsky, Panayotis Mertikopoulos, and Volkan Cevher, “A new regret analysis for adam-type algorithms,” in International Conference on Machine Learning. PMLR, 2020, pp. 202–210.
* [48] Xiaoyu Li, Zhenxun Zhuang, and Francesco Orabona, “Exponential step sizes for non-convex optimization,” arXiv preprint arXiv:2002.05273, 2020.
* [49] James Martens, “Deep learning via hessian-free optimization.,” in ICML, 2010, vol. 27, pp. 735–742.
* [50] Zhewei Yao, Amir Gholami, Sheng Shen, Kurt Keutzer, and Michael W Mahoney, “Adahessian: An adaptive second order optimizer for machine learning,” arXiv preprint arXiv:2006.00719, 2020.
* [51] Xuezhe Ma, “Apollo: An adaptive parameter-wise diagonal quasi-newton method for nonconvex stochastic optimization,” arXiv preprint arXiv:2009.13586, 2020.
* [52] Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun, “RMSprop can converge with proper hyper-parameter,” ICLR, 2021.
###### Contents
1. 1 Introduction
2. 2 Overview of algorithms
1. 2.1 Notations
2. 2.2 Algorithms
3. 3 Analyze the conditions for convergence
1. 3.1 Sync vs Async
2. 3.2 Async-Uncenter vs Async-Center
4. 4 Analysis on convergence rate
5. 5 Experiments
6. 6 Related Works
7. 7 Conclusion
8. A Analysis on convergence conditions
1. A.1 Convergence analysis for Problem 1 in the main paper
1. A.1.1 Numerical validations
2. A.2 Convergence analysis for Problem 2 in the main paper
3. A.3 Numerical experiments
9. B Convergence Analysis for stochastic non-convex optimization
1. B.1 Problem definition and assumptions
2. B.2 Convergence analysis of Async-optimizers in stochastic non-convex optimization
1. B.2.1 Validation on numerical accuracy of sum of generalized harmonic series
3. B.3 Convergence analysis of Async-moment-optimizers in stochastic non-convex optimization
10. C Experiments
1. C.1 Centering of second momentum does not suffer from numerical issues
2. C.2 Image classification with CNN
3. C.3 Neural Machine Translation with Transformers
4. C.4 Generative adversarial networks
## Appendix A Analysis on convergence conditions
### A.1 Convergence analysis for Problem 1 in the main paper
###### Lemma A.1.
There exists an online convex optimization problem where Adam (and RMSProp)
has non-zero average regret; one such problem is of the form
$f_{t}(x)=\begin{cases}Px, & \text{if }t\bmod P=1\\ -x, & \text{otherwise}\end{cases}\qquad x\in[-1,1],\ P\in\mathbb{N},\ P\geq 3$ (8)
###### Proof.
See [14] Thm.1 for proof. ∎
###### Lemma A.2.
For the problem defined above, there is a threshold of $\beta_{2}$ above
which RMSProp converges.
###### Proof.
See [52] for details. ∎
###### Lemma A.3 (Lemma.3.3 in the main paper).
For the problem defined by Eq. (8), ACProp algorithm converges
$\forall\beta_{1},\beta_{2}\in(0,1),\forall P\in\mathbb{N},P\geq 3$.
###### Proof.
We analyze the limit behavior of the ACProp algorithm. Since the observed
gradient is periodic with integer period $P$, we analyze one period with
indices from $kP$ to $kP+P$, where $k$ is an integer going to $+\infty$.
From the update of ACProp, we observe that:
$\displaystyle m_{kP}$
$\displaystyle=(1-\beta_{1})\sum_{i=1}^{kP}\beta_{1}^{kP-i}\times(-1)+(1-\beta_{1})\sum_{j=0}^{k-1}\beta_{1}^{kP-(jP+1)}(P+1)$
(9) $\displaystyle\Big{(}\textit{For each observation with gradient
}P,\textit{we break it into }P=-1+(P+1)\Big{)}$
$\displaystyle=-(1-\beta_{1})\sum_{i=1}^{kP}\beta_{1}^{kP-i}+(1-\beta_{1})(P+1)\beta_{1}^{-1}\sum_{j=0}^{k-1}\beta_{1}^{P(k-j)}$
(10)
$\displaystyle=-(1-\beta_{1}^{kP})+(1-\beta_{1})(P+1)\beta_{1}^{P-1}\frac{1-\beta_{1}^{(k-1)P}}{1-\beta_{1}^{P}}$
(11) $\displaystyle\lim_{k\to\infty}m_{kP}$
$\displaystyle=-1+(P+1)(1-\beta_{1})\beta_{1}^{P-1}\frac{1}{1-\beta_{1}^{P}}=\frac{(P+1)\beta_{1}^{P-1}-P\beta_{1}^{P}-1}{1-\beta_{1}^{P}}$
(12) $\displaystyle\Big{(}\textit{Since }\beta_{1}\in[0,1)\Big{)}$
Next, we derive $\lim_{k\to\infty}S_{kP}$. Note that the observed gradient is
periodic and $\lim_{k\to\infty}m_{kP}=\lim_{k\to\infty}m_{kP+P}$, hence
$\lim_{k\to\infty}S_{kP}=\lim_{k\to\infty}S_{kP+P}$. Starting from index
$kP$, we derive the variables up to $kP+P$ with the ACProp algorithm.
$\displaystyle index=kP,$ $\displaystyle m_{kP}$ $\displaystyle,S_{kP}$ (13)
$\displaystyle index=kP+1,$ $\displaystyle m_{kP+1}$
$\displaystyle=\beta_{1}m_{0}+(1-\beta_{1})P$ (14) $\displaystyle S_{kP+1}$
$\displaystyle=\beta_{2}S_{kP}+(1-\beta_{2})(P-m_{kP})^{2}$ (15)
$\displaystyle index=kP+2,$ $\displaystyle m_{kP+2}$
$\displaystyle=\beta_{1}m_{kP+1}+(1-\beta_{1})\times(-1)$ (16)
$\displaystyle=\beta_{1}^{2}m_{kP}+(1-\beta_{1})\beta_{1}P+(1-\beta_{1})\times(-1)$
(17) $\displaystyle S_{kP+2}$
$\displaystyle=\beta_{2}S_{kP+1}+(1-\beta_{2})(-1-m_{kP+1})^{2}$ (18)
$\displaystyle=\beta_{2}^{2}S_{kP}+(1-\beta_{2})\beta_{2}(P-m_{kP})^{2}+(1-\beta_{2})\big{[}\beta_{1}(P-m_{kP})-(P+1)\big{]}^{2}$
(19) $\displaystyle index=kP+3,$ $\displaystyle m_{kP+3}$
$\displaystyle=\beta_{1}m_{kP+2}+(1-\beta_{1})\times(-1)$ (20)
$\displaystyle=\beta_{1}^{3}m_{kP}+(1-\beta_{1})\beta_{1}^{2}P+(1-\beta_{1})\beta_{1}\times(-1)+(1-\beta_{1})\times(-1)$
(21) $\displaystyle S_{kP+3}$
$\displaystyle=\beta_{2}S_{kP+2}+(1-\beta_{2})(-1-m_{kP+2})^{2}$ (22)
$\displaystyle=\beta_{2}^{3}S_{kP}+(1-\beta_{2})\beta_{2}^{2}(P-m_{kP})^{2}$
$\displaystyle+(1-\beta_{2})\beta_{2}\big{[}\beta_{1}(P-m_{kP})-(P+1)\big{]}^{2}(\beta_{2}+\beta_{1}^{2})$
(23) $\displaystyle index=kP+4,$ $\displaystyle m_{kP+4}$
$\displaystyle=\beta_{1}^{4}m_{kP}+(1-\beta_{1})\beta_{1}^{3}P+(-1)(1-\beta_{1})(\beta_{1}^{2}+\beta_{1}+1)$
(24) $\displaystyle S_{kP+4}$
$\displaystyle=\beta_{2}S_{kP+3}+(1-\beta_{2})(-1-m_{kP+3})^{2}$ (25)
$\displaystyle=\beta_{2}^{4}S_{kP}+(1-\beta_{2})\beta_{2}^{3}(P-m_{kP})^{2}$
$\displaystyle+(1-\beta_{2})\beta_{2}\big{[}\beta_{1}(P-m_{kP})-(P+1)\big{]}^{2}(\beta_{2}^{2}+\beta_{2}\beta_{1}^{2}+\beta_{1}^{4})$
(26) $\displaystyle\cdot\cdot\cdot$ $\displaystyle index=kP+P,$ $\displaystyle
m_{kP+P}$
$\displaystyle=\beta_{1}^{P}m_{kP}+(1-\beta_{1})\beta_{1}^{P-1}P+(-1)(1-\beta_{1})\big{[}\beta_{1}^{P-2}+\beta_{1}^{P-3}+...+1\big{]}$
(27)
$\displaystyle=\beta_{1}^{P}m_{kP}+(1-\beta_{1})\beta_{1}^{P-1}P+(\beta_{1}-1)\frac{1-\beta_{1}^{P-1}}{1-\beta_{1}}$ (28) $\displaystyle S_{kP+P}$
$\displaystyle=\beta_{2}^{P}S_{kP}+(1-\beta_{2})\beta_{2}^{P-1}(P-m_{kP})^{2}$
$\displaystyle+(1-\beta_{2})\big{[}\beta_{1}(P-m_{kP})-(P+1)\big{]}^{2}\big{(}\beta_{2}^{P-2}+\beta_{2}^{P-3}\beta_{1}^{2}+...+\beta_{2}^{0}\beta_{1}^{2P-4}\big{)}$
(29)
$\displaystyle=\beta_{2}^{P}S_{kP}+(1-\beta_{2})\beta_{2}^{P-1}(P-m_{kP})^{2}$
$\displaystyle+(1-\beta_{2})\big{[}\beta_{1}(P-m_{kP})-(P+1)\big{]}^{2}\beta_{2}^{P-2}\frac{1-(\beta_{1}^{2}/\beta_{2})^{P-1}}{1-(\beta_{1}^{2}/\beta_{2})}$
(30)
As $k$ goes to $+\infty$, we have
$\displaystyle\lim_{k\to\infty}m_{kP+P}$
$\displaystyle=\lim_{k\to\infty}m_{kP}$ (31)
$\displaystyle\lim_{k\to\infty}S_{kP+P}$
$\displaystyle=\lim_{k\to\infty}S_{kP}$ (32)
From Eq. (28) and Eq. (31) we have:
$m_{kP+P}=\frac{(P+1)\beta_{1}^{P-1}-P\beta_{1}^{P}-1}{1-\beta_{1}^{P}}$ (33)
which matches our result in Eq. (12). Similarly, from Eq. (30), taking the
limit $k\to\infty$ and combining with Eq. (32), we have
$\displaystyle\lim_{k\to\infty}S_{kP}=\frac{1-\beta_{2}}{1-\beta_{2}^{P}}\Bigg{[}\beta_{2}^{P-1}(P-\lim_{k\to\infty}m_{kP})^{2}+\big{[}\beta_{1}(P-\lim_{k\to\infty}m_{kP})-(P+1)\big{]}^{2}\beta_{2}^{P-2}\frac{1-(\beta_{1}^{2}/\beta_{2})^{P-1}}{1-(\beta_{1}^{2}/\beta_{2})}\Bigg{]}$
(34)
Since we have the exact expression for the limit, it is straightforward to
check that
$S_{i}\geq S_{kP},\ \ \forall i\in[kP+1,kP+P],i\in\mathbb{N},k\to\infty$ (35)
Intuitively, suppose for some stretch of time we only observe the constant
gradient $-1$ without observing the outlier gradient $P$; the longer this
stretch, the smaller the corresponding $S$ value, because $S$ records the
difference between observations. Since the outlier gradient $P$ was last
observed at index $kP+1-P$, index $kP$ has the longest distance from index
$kP+1-P$ without observing the outlier gradient. Therefore, $S_{kP}$ is the
smallest value within a period of length $P$ as $k$ goes to infinity.
For step $kP+1$ to $kP+P$, the update on parameter is:
$\displaystyle index=kP+1,-\Delta_{x}^{kP+1}$
$\displaystyle=\frac{\alpha_{0}}{\sqrt{kP+1}}\frac{P}{\sqrt{S_{kP}}+\epsilon}$
(36) $\displaystyle index=kP+2,-\Delta_{x}^{kP+2}$
$\displaystyle=\frac{\alpha_{0}}{\sqrt{kP+2}}\frac{-1}{\sqrt{S_{kP+1}}+\epsilon}$
(37) $\displaystyle...$ $\displaystyle index=kP+P,-\Delta_{x}^{kP+P}$
$\displaystyle=\frac{\alpha_{0}}{\sqrt{kP+P}}\frac{-1}{\sqrt{S_{kP+P-1}}+\epsilon}$
(38)
So the negative of the total update within this period is:
$\displaystyle\frac{\alpha_{0}}{\sqrt{kP+1}}\frac{P}{\sqrt{S_{kP}}+\epsilon}$
$\displaystyle-\underbrace{\Bigg{[}\frac{\alpha_{0}}{\sqrt{kP+2}}\frac{1}{\sqrt{S_{kP+1}}+\epsilon}+...+\frac{\alpha_{0}}{\sqrt{kP+P}}\frac{1}{\sqrt{S_{kP+P-1}}+\epsilon}\Bigg{]}}_{P-1\textit{ terms}}$ (39)
$\displaystyle\geq\frac{\alpha_{0}}{\sqrt{kP+1}}\frac{P}{\sqrt{S_{kP}}+\epsilon}-\underbrace{\Bigg{[}\frac{\alpha_{0}}{\sqrt{kP+1}}\frac{1}{\sqrt{S_{kP}}+\epsilon}+...+\frac{\alpha_{0}}{\sqrt{kP+1}}\frac{1}{\sqrt{S_{kP}}+\epsilon}\Bigg{]}}_{P-1\textit{
terms}}$ (40) $\displaystyle\Big{(}\textit{Since }S_{kP}\textit{ is the
minimum within the period}\Big{)}$
$\displaystyle=\frac{\alpha_{0}}{\sqrt{S_{kP}}+\epsilon}\frac{1}{\sqrt{kP+1}}$
(41)
where $\alpha_{0}$ is the initial learning rate. Note that the above result
holds for every period of length $P$ as $k$ grows. Therefore, for some $K$
such that for every $k>K$, $m_{kP}$ and $S_{kP}$ are close enough to their
limits, the total update after $K$ is:
$\sum_{k=K}^{\infty}\frac{\alpha_{0}}{\sqrt{S_{kP}}+\epsilon}\frac{1}{\sqrt{kP+1}}\approx\frac{\alpha_{0}}{\sqrt{\lim_{k\to\infty}S_{kP}}+\epsilon}\frac{1}{\sqrt{P}}\sum_{k=K}^{\infty}\frac{1}{\sqrt{k}}\ \ \textit{if $K$ is sufficiently large}$ (42)
where $\lim_{k\to\infty}S_{kP}$ is the constant determined by Eq. (34). Note
that this is the negative of the update, hence ACProp moves in the negative
direction, as expected for this problem. Moreover, since
$\sum_{k=K}^{\infty}\frac{1}{\sqrt{k}}\to\infty$, ACProp can travel
arbitrarily far in the correct direction if the algorithm runs long enough,
so the bias caused by the first $K$ steps vanishes with running time.
Furthermore, since $x$ lies in the bounded region $[-1,1]$, the update can
always be clipped if it falls outside this region. Therefore, for this
problem, ACProp always converges to
$x=-1,\ \forall\beta_{1},\beta_{2}\in(0,1)$. When $\beta_{2}=1$, the
denominator is never updated, ACProp reduces to SGD (with momentum), and it
is known to converge. ∎
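As a quick numerical check of the limit in Eq. (12) (a hypothetical verification script; $P$ and $\beta_{1}$ are arbitrary):

```python
def m_limit_check(P=7, beta1=0.9, k=2000):
    """Simulate m_t on problem (8) for k periods and compare the value
    of m_{kP} with the closed-form limit of Eq. (12)."""
    m = 0.0
    for t in range(1, k * P + 1):
        g = float(P) if t % P == 1 else -1.0
        m = beta1 * m + (1 - beta1) * g
    closed = ((P + 1) * beta1 ** (P - 1) - P * beta1 ** P - 1) \
             / (1 - beta1 ** P)
    return m, closed

print(m_limit_check())  # the two values agree to machine precision
```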
###### Lemma A.4.
For any constants $\beta_{1},\beta_{2}\in[0,1)$ such that
$\beta_{1}<\sqrt{\beta_{2}}$, there is a stochastic convex optimization
problem for which Adam does not converge to the optimal solution. One example
of such a stochastic problem is:
$f_{t}(x)=\begin{cases}Px & \text{with probability }\frac{1+\delta}{P+1}\\ -x & \text{with probability }\frac{P-\delta}{P+1}\end{cases}\qquad x\in[-1,1]$ (43)
###### Proof.
See Thm.3 in [14]. ∎
###### Lemma A.5.
For the stochastic problem defined by Eq. (43), ACProp converge to the optimal
solution, $\forall\beta_{1},\beta_{2}\in(0,1)$.
###### Proof.
The update at step $t$ is:
$\displaystyle\Delta_{x}^{t}=-\frac{\alpha_{0}}{\sqrt{t}}\frac{g_{t}}{\sqrt{S_{t-1}}+\epsilon}$ (44)
Taking the expectation conditioned on the observations up to step $t-1$, we
have:
$\displaystyle\mathbb{E}\Delta_{x}^{t}$
$\displaystyle=-\frac{\alpha_{0}}{\sqrt{t}}\frac{\mathbb{E}_{t}g_{t}}{\sqrt{S_{t-1}}+\epsilon}$
(45)
$\displaystyle=-\frac{\alpha_{0}}{\sqrt{t}\Big{(}\sqrt{S_{t-1}}+\epsilon\Big{)}}\mathbb{E}_{t}g_{t}$
(46)
$\displaystyle=-\frac{\alpha_{0}}{\sqrt{t}\Big{(}\sqrt{S_{t-1}}+\epsilon\Big{)}}\Big{[}P\frac{1+\delta}{P+1}-\frac{P-\delta}{P+1}\Big{]}$
(47)
$\displaystyle=-\frac{\alpha_{0}\delta}{\sqrt{t}\Big{(}\sqrt{S_{t-1}}+\epsilon\Big{)}}$
(48)
$\displaystyle\leq-\frac{\alpha_{0}\delta}{\sqrt{t}\Big{(}P+1+\epsilon\Big{)}}$
(49)
where the last inequality holds because $S_{t}\leq(P+1)^{2}$: $S_{t}$ is an
exponential moving average of the squared difference between the gradient and
its EMA, and the maximum such difference is $P+1$. Therefore, at every step
ACProp is expected to move in the negative direction; since
$\sum_{t=1}^{\infty}\frac{1}{\sqrt{t}}\to\infty$, and since $x$ can always be
clipped to $-1$ whenever $x<-1$, ACProp drifts $x$ to $-1$, which is the
optimal value. ∎
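The drift can also be checked numerically; the sketch below (with illustrative, hypothetical parameter values) runs the simplified ACProp update on the stochastic problem of Eq. (43):

```python
import numpy as np

def acprop_on_stochastic(P=10, delta=0.1, steps=1_000_000, seed=0,
                         beta1=0.9, beta2=0.9, lr0=0.1):
    """ACProp on Eq. (43); E[g_t] = delta > 0, so x should drift to -1."""
    rng = np.random.default_rng(seed)
    x = m = s = 0.0
    for t in range(1, steps + 1):
        g = float(P) if rng.random() < (1 + delta) / (P + 1) else -1.0
        step = (lr0 / np.sqrt(t)) * g / (np.sqrt(s) + 1e-8)  # uses s_{t-1}
        x = float(np.clip(x - step, -1.0, 1.0))
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
    return x

print(acprop_on_stochastic())  # close to the optimum x* = -1
```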
#### A.1.1 Numerical validations
We validate the analysis above with numerical experiments, plotting the
curves of $S_{t}$ and $g_{t}$ over multiple periods (as $k\to\infty$) in Fig.
10 and zooming in on a single period in Fig. 11. Note that the largest
gradient $P$ (normalized to 1 in the plots) appears at step $kP+1$, and $S$
attains its minimum at step $kP$ (i.e. $S_{kP}$ is the smallest value within
a period). Since the update for step $kP+1$ is $g_{kP+1}/\sqrt{S_{kP}}$, the
largest gradient is divided by the smallest denominator, hence the net update
within a period pushes $x$ towards the optimal point.
Figure 10: Behavior of $S_{t}$ and $g_{t}$ in ACProp of multiple periods for
problem (1). Note that as $k\to\infty$, the behavior of ACProp is periodic.
Figure 11: Behavior of $S_{t}$ and $g_{t}$ in ACProp of one period for problem
(1).
### A.2 Convergence analysis for Problem 2 in the main paper
###### Lemma A.6 (Lemma 3.4 in the main paper).
For the problem defined by Eq. (50), there exist hyper-parameter tuples
$(\beta_{1},\beta_{2},P)$ for which ACProp converges but AdaShift with $n=1$
diverges, but not vice versa.
$f_{t}(x)=\begin{cases}(P/2)\,x, & t\bmod P=1\\ -x, & t\bmod P=P-2\\ 0, & \text{otherwise}\end{cases}\qquad P>3,\ P\in\mathbb{N},\ x\in[0,1].$ (50)
###### Proof.
The proof is similar to that of Lemma A.3; we derive the limit behavior of
the different methods.
$\displaystyle index=kP,$ $\displaystyle m_{kP}$ $\displaystyle,v_{kP},s_{kP}$
$\displaystyle index=kP+1,$ $\displaystyle m_{kP+1}$
$\displaystyle=m_{kP}\beta_{1}+(1-\beta_{1})P/2$ (51) $\displaystyle v_{kP+1}$
$\displaystyle=v_{kP}\beta_{2}+(1-\beta_{2})P^{2}/4$ (52) $\displaystyle
s_{kP+1}$ $\displaystyle=s_{kP}\beta_{2}+(1-\beta_{2})(P/2-m_{kP})^{2}$ (53)
$\displaystyle...$ $\displaystyle index=kP+P-2,$ $\displaystyle m_{kP+P-2}$
$\displaystyle=m_{kP}\beta_{1}^{P-2}+(1-\beta_{1})\frac{P}{2}\beta_{1}^{P-3}+(1-\beta_{1})\times(-1)$
(54) $\displaystyle v_{kP+P-2}$
$\displaystyle=v_{kP}\beta_{2}^{P-2}+(1-\beta_{2})\frac{P^{2}}{4}\beta_{2}^{P-3}+(1-\beta_{2})$
(55) $\displaystyle s_{kP+P-2}$
$\displaystyle=s_{kP}\beta_{2}^{P-2}+(1-\beta_{2})\beta_{2}^{P-3}(\frac{P}{2}-m_{kP})^{2}+(1-\beta_{2})\beta_{2}^{P-4}m_{kP+1}^{2}+...$
$\displaystyle+(1-\beta_{2})\beta_{2}m_{kP+P-4}^{2}+(1-\beta_{2})(m_{kP+P-3}+1)^{2}$
(56) $\displaystyle index=kP+P-1,$ $\displaystyle m_{kP+P-1}$
$\displaystyle=m_{kP+P-2}\beta_{1}$ (57) $\displaystyle v_{kP+P-1}$
$\displaystyle=v_{kP+P-2}\beta_{2}$ (58) $\displaystyle s_{kP+P-1}$
$\displaystyle=s_{kP}\beta_{2}^{P-1}+(1-\beta_{2})\beta_{2}^{P-1}(\frac{P}{2}-m_{kP})^{2}+(1-\beta_{2})\beta_{2}^{P-3}m_{kP+1}^{2}+...$
$\displaystyle+(1-\beta_{2})\beta_{2}^{2}m_{kP+P-4}^{2}+(1-\beta_{2})\beta_{2}(m_{kP+P-3}+1)^{2}+(1-\beta_{2})m_{kP+P-2}^{2}$
(59) $\displaystyle index=kP+P,$ $\displaystyle m_{kP+P}$
$\displaystyle=m_{kP}\beta_{1}^{P}+(1-\beta_{1})\frac{P}{2}\beta_{1}^{P-1}+(1-\beta_{1})(-1)\beta_{1}^{2}$
(60) $\displaystyle v_{kP+P}$
$\displaystyle=v_{kP}\beta_{2}^{P}+(1-\beta_{2})\frac{P^{2}}{4}\beta_{2}^{P-1}+(1-\beta_{2})\beta_{2}^{2}$
(61) $\displaystyle s_{kP+P}$
$\displaystyle=s_{kP}\beta_{2}^{P}+(1-\beta_{2})\beta_{2}^{P-1}(\frac{P}{2}-m_{kP})^{2}+(1-\beta_{2})\beta_{2}^{P-2}m_{kP+1}^{2}+...$
$\displaystyle+(1-\beta_{2})\beta_{2}^{3}m_{kP+P-4}^{2}+(1-\beta_{2})\beta_{2}^{2}(m_{kP+P-3}+1)^{2}$
$\displaystyle+(1-\beta_{2})m_{kP+P-2}^{2}\beta_{2}+(1-\beta_{2})m_{kP+P-1}^{2}$
(62)
Next, we derive the exact expression using the fact that the problem is
periodic, hence
$\lim_{k\to\infty}m_{kP}=\lim_{k\to\infty}m_{kP+P},\lim_{k\to\infty}s_{kP}=\lim_{k\to\infty}s_{kP+P},\lim_{k\to\infty}v_{kP}=\lim_{k\to\infty}v_{kP+P}$,
hence we have:
$\displaystyle\lim_{k\to\infty}m_{kP}$
$\displaystyle=\lim_{k\to\infty}m_{kP}\beta_{1}^{P}+(1-\beta_{1})\frac{P}{2}\beta_{1}^{P-1}+(1-\beta_{1})(-1)\beta_{1}^{2}$
(63) $\displaystyle\lim_{k\to\infty}m_{kP}$
$\displaystyle=\frac{1-\beta_{1}}{1-\beta_{1}^{P}}\Big{[}\frac{P}{2}\beta_{1}^{P-1}-\beta_{1}^{2}\Big{]}$
(64) $\displaystyle\lim_{k\to\infty}m_{kP-1}$
$\displaystyle=\frac{1}{\beta_{1}}\lim_{k\to\infty}m_{kP}$ (65)
$\displaystyle\lim_{k\to\infty}m_{kP-2}$
$\displaystyle=\frac{1}{\beta_{1}}\Big{[}\lim_{k\to\infty}m_{kP-1}-(1-\beta_{1})0\Big{]}$
(66) $\displaystyle\lim_{k\to\infty}m_{kP-3}$
$\displaystyle=\frac{1}{\beta_{1}}\Big{[}\lim_{k\to\infty}m_{kP-2}-(1-\beta_{1})(-1)\Big{]}$
(67)
Similarly, we can get
$\displaystyle\lim_{k\to\infty}v_{kP}$
$\displaystyle=\frac{1-\beta_{2}}{1-\beta_{2}^{P}}\Big{[}\frac{P^{2}}{4}\beta_{2}^{P-1}+\beta_{2}^{2}\Big{]}$
(68) $\displaystyle\lim_{k\to\infty}v_{kP-1}$
$\displaystyle=\frac{1}{\beta_{2}}\lim_{k\to\infty}v_{kP}$ (69)
$\displaystyle\lim_{k\to\infty}v_{kP-2}$
$\displaystyle=\frac{1}{\beta_{2}}\lim_{k\to\infty}v_{kP-1}$ (70)
$\displaystyle\lim_{k\to\infty}v_{kP-3}$
$\displaystyle=\frac{1}{\beta_{2}}\Big{[}\lim_{k\to\infty}v_{kP-2}-(1-\beta_{2})\times
1^{2}\Big{]}$ (71)
For ACProp, we have the following results:
$\displaystyle\lim_{k\to\infty}s_{kP}$
$\displaystyle=\lim_{k\to\infty}\frac{1-\beta_{2}}{1-\beta_{2}^{P}}\Big{[}\beta_{2}^{P-4}(\frac{P}{2}-m_{kP})^{2}+\beta_{2}^{3}\frac{\beta_{2}^{P-5}-\beta_{1}^{2(P-4)}\beta_{2}}{1-\beta_{1}^{2}\beta_{2}}+\beta_{2}^{2}(m_{kP+P-3}+1)^{2}$
$\displaystyle+\beta_{2}m_{kP+P-2}^{2}+m_{kP+P-1}^{2}\Big{]}$ (72)
$\displaystyle\lim_{k\to\infty}s_{kP-1}$
$\displaystyle=\lim_{k\to\infty}\frac{1}{\beta_{2}}\Big{[}s_{kP}-(1-\beta_{2})m_{kP}^{2}\Big{]}$
(73) $\displaystyle\lim_{k\to\infty}s_{kP-2}$
$\displaystyle=\lim_{k\to\infty}\frac{1}{\beta_{2}}\Big{[}s_{kP-1}-(1-\beta_{2})m_{kP-1}^{2}\Big{]}$
(74) $\displaystyle\lim_{k\to\infty}s_{kP-3}$
$\displaystyle=\lim_{k\to\infty}\frac{1}{\beta_{2}}\Big{[}s_{kP-2}-(1-\beta_{2})(m_{kP-2}+1)^{2}\Big{]}$
(75)
Within each period, ACProp performs a positive update $P/(2\sqrt{s^{+}})$ and
a negative update $-1/\sqrt{s^{-}}$, where $s^{+}$ ($s^{-}$) is the value of
the denominator before observing the positive (negative) gradient, and
similarly for $v^{+}$ and $v^{-}$ in AdaShift; here
$s^{+}=s_{kP},s^{-}=s_{kP-3},v^{+}=v_{kP},v^{-}=v_{kP-3}$. A net update in
the correct direction requires $\frac{P}{2\sqrt{s^{+}}}>\frac{1}{\sqrt{s^{-}}}$
(or $s^{+}/s^{-}<P^{2}/4$). Since we have the exact expressions for these
terms in the limit, it is straightforward to verify that
$s^{+}/s^{-}\leq v^{+}/v^{-}$ (i.e. the value
$\frac{s^{+}}{s^{-}}-\frac{v^{+}}{v^{-}}$ is negative, as in Figs. 12 and
13), hence ACProp satisfies the convergence condition more easily. ∎
Figure 12: Value of $\frac{s^{+}}{s^{-}}-\frac{v^{+}}{v^{-}}$ when
$\beta_{1}=0.2$
Figure 13: Value of $\frac{s^{+}}{s^{-}}-\frac{v^{+}}{v^{-}}$ when
$\beta_{1}=0.9$
### A.3 Numerical experiments
We conducted additional experiments to validate the previous claims. We plot
the area of convergence for different $\beta_{1}$ values on problem (1) in
Figs. 14 to 16, validating the always-convergence property of ACProp for
different values of $\beta_{1}$. We also plot the area of convergence for
problem (2) defined by Eq. (50); the results are shown in Figs. 17 to 19.
Note that for this problem the always-convergence property does not hold, but
ACProp has a much larger area of convergence than AdaShift.
Figure 14: Numerical experiments on problem (1) with $\beta_{1}=0.5$
Figure 15: Numerical experiments on problem (1) with $\beta_{1}=0.5$
Figure 16: Numerical experiments on problem (1) with $\beta_{1}=0.9$
Figure 17: Numerical experiments on problem (43) with $\beta_{1}=0.85$
Figure 18: Numerical experiments on problem (43) with $\beta_{1}=0.9$
Figure 19: Numerical experiments on problem (43) with $\beta_{1}=0.95$
(a) Trajectories of AdaShift with various $n$ for problem (1). Note that the
optimum is $x^{*}=-1$. Convergence on problem (1) requires a small delay step
$n$, while convergence on problem (2) requires a large $n$, hence there is no
good criterion for selecting an optimal $n$.
(b) Trajectories of AdaShift with various $n$ for problem (43). Note that the
optimum is $x^{*}=0.0$; the trajectories oscillate at high frequency and
hence appear to span an area.
## Appendix B Convergence Analysis for stochastic non-convex optimization
### B.1 Problem definition and assumptions
The problem is defined as:
$\operatorname{min}_{x\in\mathbb{R}^{d}}f(x)=\mathbb{E}[F(x,\xi)]$ (77)
where $x$ typically represents parameters of the model, and $\xi$ represents
data which typically follows some distribution.
We mainly consider the stochastic non-convex case, with assumptions below.
1. A.1
$f$ is continuously differentiable and lower-bounded by $f^{*}$. $\nabla
f(x)$ is globally Lipschitz continuous with constant $L$:
$||\nabla f(x)-\nabla f(y)||\leq L||x-y||$ (78)
2. A.2
For any iteration $t$, $g_{t}$ is an unbiased estimator of $\nabla f(x_{t})$
with variance bounded by $\sigma^{2}$. The norm of $g_{t}$ is upper-bounded
by $M_{g}$.
$\displaystyle(a)\ \ \ \ \mathbb{E}g_{t}=\nabla f(x_{t})$ (79)
$\displaystyle(b)\ \ \ \ \mathbb{E}\big{[}||g_{t}-\nabla
f(x_{t})||^{2}\big{]}\leq\sigma^{2}$ (80)
### B.2 Convergence analysis of Async-optimizers in stochastic non-convex
optimization
###### Theorem B.1 (Thm.4.1 in the main paper).
Under assumptions A.1-2, assume $f$ is upper bounded by $M_{f}$, with learning
rate schedule as
$\alpha_{t}=\alpha_{0}t^{-\eta},\ \ \alpha_{0}\leq\frac{C_{l}}{LC_{u}^{2}},\ \
\eta\in[0.5,1)$ (81)
the sequence generated by
$x_{t+1}=x_{t}-\alpha_{t}A_{t}g_{t}$ (82)
satisfies
$\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{2}{C_{l}}\Big{[}(M_{f}-f^{*})\alpha_{0}T^{\eta-1}+\frac{LC_{u}^{2}\sigma^{2}\alpha_{0}}{2(1-\eta)}T^{-\eta}\Big{]}$
(83)
where $C_{l}$ and $C_{u}$ are scalars representing the lower and upper bound
for $A_{t}$, e.g. $C_{l}I\preceq A_{t}\preceq C_{u}I$, where $A\preceq B$
represents $B-A$ is semi-positive-definite.
###### Proof.
Let
$\delta_{t}=g_{t}-\nabla f(x_{t})$ (84)
then by A.2, $\mathbb{E}\delta_{t}=0$.
$\displaystyle f(x_{t+1})$ $\displaystyle\leq f(x_{t})+\Big{\langle}\nabla
f(x_{t}),x_{t+1}-x_{t}\Big{\rangle}+\frac{L}{2}\Big{|}\Big{|}x_{t+1}-x_{t}\Big{|}\Big{|}^{2}$
(85) $\displaystyle\Big{(}\textit{by L-smoothness of }f(x)\Big{)}$
$\displaystyle=f(x_{t})-\alpha_{t}\Big{\langle}\nabla
f(x_{t}),A_{t}g_{t}\Big{\rangle}+\frac{L}{2}\alpha_{t}^{2}\Big{|}\Big{|}A_{t}g_{t}\Big{|}\Big{|}^{2}$
(86) $\displaystyle=f(x_{t})-\alpha_{t}\Big{\langle}\nabla
f(x_{t}),A_{t}\big{(}\delta_{t}+\nabla
f(x_{t})\big{)}\Big{\rangle}+\frac{L}{2}\alpha_{t}^{2}\Big{|}\Big{|}A_{t}g_{t}\Big{|}\Big{|}^{2}$
(87) $\displaystyle\leq f(x_{t})-\alpha_{t}\Big{\langle}\nabla
f(x_{t}),A_{t}\nabla f(x_{t})\Big{\rangle}-\alpha_{t}\Big{\langle}\nabla
f(x_{t}),A_{t}\delta_{t}\Big{\rangle}+\frac{L}{2}\alpha_{t}^{2}C_{u}^{2}\Big{|}\Big{|}g_{t}\Big{|}\Big{|}^{2}$
(88)
Taking the expectation on both sides of Eq. (88), conditioned on
$\xi_{[t-1]}=\{x_{1},x_{2},...,x_{t-1}\}$, and noting that $A_{t}$ is a
constant given $\xi_{[t-1]}$, we have
$\displaystyle\mathbb{E}\big{[}f(x_{t+1})|x_{1},...x_{t}\big{]}$
$\displaystyle\leq f(x_{t})-\alpha_{t}\Big{\langle}\nabla f(x_{t}),A_{t}\nabla
f(x_{t})\Big{\rangle}+\frac{L}{2}\alpha_{t}^{2}C_{u}^{2}\mathbb{E}\Big{|}\Big{|}g_{t}\Big{|}\Big{|}^{2}$
(89) $\displaystyle\Big{(}A_{t}\textit{ is independent of }g_{t}\textit{ given }\{x_{1},...,x_{t-1}\},\textit{ and }\mathbb{E}\delta_{t}=0\Big{)}$
In order to bound RHS of Eq. (89), we first bound
$\mathbb{E}\big{[}||g_{t}||^{2}\big{]}$.
$\displaystyle\mathbb{E}\Big{[}\Big{|}\Big{|}g_{t}\Big{|}\Big{|}^{2}\Big{|}x_{1},...x_{t}\Big{]}$
$\displaystyle=\mathbb{E}\Big{[}\Big{|}\Big{|}\nabla
f(x_{t})+\delta_{t}\Big{|}\Big{|}^{2}\Big{|}x_{1},...x_{t}\Big{]}$ (90)
$\displaystyle=\mathbb{E}\Big{[}\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}\Big{|}x_{1},...x_{t}\Big{]}+\mathbb{E}\Big{[}\Big{|}\Big{|}\delta_{t}\Big{|}\Big{|}^{2}\Big{|}x_{1},...x_{t}\Big{]}+2\mathbb{E}\Big{[}\Big{\langle}\delta_{t},\nabla f(x_{t})\Big{\rangle}\Big{|}x_{1},...x_{t}\Big{]}$ (91)
$\displaystyle\leq\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}+\sigma^{2}$
(92) $\displaystyle\Big{(}\textit{By {A.2}, and }\nabla f(x_{t})\textit{ is a
constant given }x_{t}\Big{)}$
Plugging Eq. (92) into Eq. (89), we have
$\displaystyle\mathbb{E}\Big{[}f(x_{t+1})\Big{|}x_{1},...x_{t}\Big{]}$
$\displaystyle\leq f(x_{t})-\alpha_{t}\Big{\langle}\nabla f(x_{t}),A_{t}\nabla
f(x_{t})\Big{\rangle}+\frac{L}{2}C_{u}^{2}\alpha_{t}^{2}\Big{[}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}+\sigma^{2}\Big{]}$ (93)
$\displaystyle=f(x_{t})-\Big{(}\alpha_{t}C_{l}-\frac{LC_{u}^{2}}{2}\alpha_{t}^{2}\Big{)}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}+\frac{LC_{u}^{2}\sigma^{2}}{2}\alpha_{t}^{2}$ (94)
By the learning-rate condition $0<\alpha_{t}\leq\frac{C_{l}}{LC_{u}^{2}}$ in Eq. (81), we have
$\displaystyle\alpha_{t}C_{l}-\frac{LC_{u}^{2}\alpha_{t}^{2}}{2}=\alpha_{t}\Big{(}C_{l}-\frac{LC_{u}^{2}\alpha_{t}}{2}\Big{)}\geq\alpha_{t}\frac{C_{l}}{2}$
(95)
Combining Eq. (94) and Eq. (95), we have
$\displaystyle\frac{\alpha_{t}C_{l}}{2}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}$
$\displaystyle\leq\Big{(}\alpha_{t}C_{l}-\frac{LC_{u}^{2}\alpha_{t}^{2}}{2}\Big{)}\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}$ (96) $\displaystyle\leq
f(x_{t})-\mathbb{E}\Big{[}f(x_{t+1})\Big{|}x_{1},...x_{t}\Big{]}+\frac{LC_{u}^{2}\sigma^{2}}{2}\alpha_{t}^{2}$
(97)
Then we have
$\displaystyle\frac{C_{l}}{2}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{1}{\alpha_{t}}f(x_{t})-\frac{1}{\alpha_{t}}\mathbb{E}\Big{[}f(x_{t+1})\Big{|}x_{1},...x_{t}\Big{]}+\frac{LC_{u}^{2}\sigma^{2}}{2}\alpha_{t}$
(98)
Performing a telescoping sum over Eq. (98), and recursively taking conditional expectations over the history $\{x_{i}\}_{i=1}^{T}$, we have
$\displaystyle\frac{C_{l}}{2}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}$
$\displaystyle\leq\sum_{t=1}^{T}\frac{1}{\alpha_{t}}\Big{(}\mathbb{E}f(x_{t})-\mathbb{E}f(x_{t+1})\Big{)}+\frac{LC_{u}^{2}\sigma^{2}}{2}\sum_{t=1}^{T}\alpha_{t}$
(99)
$\displaystyle=\frac{\mathbb{E}f(x_{1})}{\alpha_{1}}-\frac{\mathbb{E}f(x_{T+1})}{\alpha_{T}}+\sum_{t=2}^{T}\Big{(}\frac{1}{\alpha_{t}}-\frac{1}{\alpha_{t-1}}\Big{)}\mathbb{E}f(x_{t})+\frac{LC_{u}^{2}\sigma^{2}}{2}\sum_{t=1}^{T}\alpha_{t}$
(100)
$\displaystyle\leq\frac{M_{f}}{\alpha_{1}}-\frac{f^{*}}{\alpha_{T}}+M_{f}\sum_{t=2}^{T}\Big{(}\frac{1}{\alpha_{t}}-\frac{1}{\alpha_{t-1}}\Big{)}+\frac{LC_{u}^{2}\sigma^{2}}{2}\sum_{t=1}^{T}\alpha_{t}$
(101)
$\displaystyle\leq\frac{M_{f}-f^{*}}{\alpha_{T}}+\frac{LC_{u}^{2}\sigma^{2}}{2}\sum_{t=1}^{T}\alpha_{t}$
(102)
$\displaystyle\leq\frac{M_{f}-f^{*}}{\alpha_{0}}T^{\eta}+\frac{LC_{u}^{2}\sigma^{2}\alpha_{0}}{2}\Big{(}\zeta(\eta)+\frac{T^{1-\eta}}{1-\eta}+\frac{1}{2}T^{-\eta}\Big{)}$
(103) $\displaystyle\Big{(}\textit{By the asymptotic expansion of the generalized harmonic sum},$
$\displaystyle\sum_{k=1}^{n}\frac{1}{k^{s}}\sim\zeta(s)+\frac{n^{1-s}}{1-s}+\frac{1}{2n^{s}}+O(n^{-s-1}),$
(104) $\displaystyle\textit{where }\zeta(s)\textit{ is the Riemann zeta function}.\Big{)}$
Then we have
$\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{2}{C_{l}}\Big{[}\frac{M_{f}-f^{*}}{\alpha_{0}}T^{\eta-1}+\frac{LC_{u}^{2}\sigma^{2}\alpha_{0}}{2(1-\eta)}T^{-\eta}\Big{]}$
(105)
∎
#### B.2.1 Validation of the numerical accuracy of the generalized harmonic series approximation
We performed experiments to test the accuracy of the analytical expression for the sum of the generalized harmonic series. We numerically compute $\sum_{i=1}^{N}\frac{1}{i^{\eta}}$ for $\eta$ varying from 0.5 to 0.999, and for $N$ ranging from $10^{3}$ to $10^{7}$ on a log grid. We compute the error of the analytical expression in Eq. (104) and plot it in Fig. 21. Note that the $y$-axis is in units of $10^{-7}$, while the sum itself is typically of order $10^{3}$; this implies that the expression in Eq. (104) is very accurate, with a relative error on the order of $10^{-10}$. Furthermore, the expression remains accurate even when $\eta=0.5$.
Figure 21: The error between numerical sum for
$\sum_{i=1}^{N}\frac{1}{i^{\eta}}$ and the analytical form.
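The check above is straightforward to reproduce. Below is a minimal sketch of the comparison, assuming SciPy's `zeta` (recent SciPy versions evaluate the Riemann zeta function for arguments below 1); all names are illustrative.

```python
import numpy as np
from scipy.special import zeta  # Riemann zeta; needs a recent SciPy for s < 1

def harmonic_sum_exact(N, eta):
    # Direct numerical evaluation of sum_{i=1}^N 1 / i^eta.
    i = np.arange(1, N + 1, dtype=np.float64)
    return np.sum(i ** (-eta))

def harmonic_sum_approx(N, eta):
    # Analytical approximation from Eq. (104):
    # zeta(eta) + N^(1-eta) / (1-eta) + 1 / (2 N^eta).
    return zeta(eta) + N ** (1.0 - eta) / (1.0 - eta) + 0.5 * N ** (-eta)

for eta in [0.5, 0.7, 0.9, 0.999]:
    for N in [10**3, 10**5, 10**7]:
        err = harmonic_sum_exact(N, eta) - harmonic_sum_approx(N, eta)
        print(f"eta={eta}, N={N:.0e}: error={err:.3e}")
```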
### B.3 Convergence analysis of Async-moment-optimizers in stochastic non-
convex optimization
###### Lemma B.2.
Let $m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}$, and let $a_{t}\in\mathbb{R}^{d}$ be an arbitrary sequence of vectors; then
$\Big{\langle}a_{t},g_{t}\Big{\rangle}=\frac{1}{1-\beta_{1}}\Big{(}\Big{\langle}a_{t},m_{t}\Big{\rangle}-\Big{\langle}a_{t-1},m_{t-1}\Big{\rangle}\Big{)}+\Big{\langle}a_{t-1},m_{t-1}\Big{\rangle}+\frac{\beta_{1}}{1-\beta_{1}}\Big{\langle}a_{t-1}-a_{t},m_{t-1}\Big{\rangle}$
(106)
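Since Lemma B.2 is a purely algebraic identity, it can be sanity-checked numerically. A minimal sketch, using random vectors and the momentum recursion above:

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta1 = 5, 0.9

# Random vectors a_{t-1}, a_t, g_t, and momenta m_{t-1}, m_t.
a_prev, a_t = rng.normal(size=d), rng.normal(size=d)
m_prev, g_t = rng.normal(size=d), rng.normal(size=d)
m_t = beta1 * m_prev + (1 - beta1) * g_t  # the recursion in Lemma B.2

lhs = a_t @ g_t
rhs = ((a_t @ m_t - a_prev @ m_prev) / (1 - beta1)
       + a_prev @ m_prev
       + beta1 / (1 - beta1) * (a_prev - a_t) @ m_prev)
assert np.isclose(lhs, rhs)  # identity (106) holds for any such sequences
```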
###### Theorem B.3.
Under assumptions 1-4, with $\beta_{1}<1$ and $\beta_{2}<1$, assume further that $A_{t+1}\leq A_{t}$ element-wise (which can be achieved by tracking the maximum of $s_{t}$, as in AMSGrad), that $f$ is upper bounded by $M_{f}$, and that $||g_{t}||_{\infty}\leq M_{g}$. With the learning rate schedule
$\alpha_{t}=\alpha_{0}t^{-\eta},\ \ \alpha_{0}\leq\frac{C_{l}}{LC_{u}^{2}},\ \ \eta\in(0.5,1]$ (107)
the sequence generated by
$x_{t+1}=x_{t}-\alpha_{t}A_{t}m_{t}$ (108)
satisfies
$\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{1}{\alpha_{0}C_{l}}T^{\eta-1}\Big{[}M_{f}-f^{*}+EM_{g}^{2}\Big{]}$
(109)
where
$E=\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\frac{1}{1-\beta_{1}}\alpha_{0}M_{g}+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\alpha_{0}^{2}C_{u}^{2}\frac{2\eta}{2\eta-1}$
(110)
###### Proof.
Let $a_{t}=\alpha_{t}A_{t}\nabla f(x_{t})$ and let $a_{0}=a_{1}$; applying Lemma B.2 and summing over $t$, we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}a_{t},g_{t}\Big{\rangle}$
$\displaystyle=\frac{1}{1-\beta_{1}}\Big{\langle}a_{T},m_{T}\Big{\rangle}+\sum_{t=1}^{T}\Big{\langle}a_{t-1},m_{t-1}\Big{\rangle}+\frac{\beta_{1}}{1-\beta_{1}}\sum_{t=1}^{T}\Big{\langle}a_{t-1}-a_{t},m_{t-1}\Big{\rangle}$
(111)
$\displaystyle=\frac{\beta_{1}}{1-\beta_{1}}\Big{\langle}a_{T},m_{T}\Big{\rangle}+\sum_{t=1}^{T}\Big{\langle}a_{t},m_{t}\Big{\rangle}+\frac{\beta_{1}}{1-\beta_{1}}\sum_{t=0}^{T-1}\Big{\langle}a_{t}-a_{t+1},m_{t}\Big{\rangle}$
(112)
First we derive a lower bound for the LHS of Eq. (112).
$\displaystyle\Big{\langle}a_{t},g_{t}\Big{\rangle}$ $\displaystyle=\Big{\langle}\alpha_{t}A_{t}\nabla f(x_{t}),g_{t}\Big{\rangle}$
(113) $\displaystyle=\Big{\langle}\alpha_{t}A_{t}\nabla
f(x_{t})-\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),g_{t}\Big{\rangle}+\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),g_{t}\Big{\rangle}$ (114)
$\displaystyle=\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),g_{t}\Big{\rangle}-\Big{\langle}(\alpha_{t-1}A_{t-1}-\alpha_{t}A_{t})\nabla
f(x_{t}),g_{t}\Big{\rangle}$ (115)
$\displaystyle\geq\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),g_{t}\Big{\rangle}-\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}_{\infty}\Big{|}\Big{|}\alpha_{t-1}A_{t-1}-\alpha_{t}A_{t}\Big{|}\Big{|}_{1}\Big{|}\Big{|}g_{t}\Big{|}\Big{|}_{\infty}$
(116) $\displaystyle\Big{(}\textit{By H\"{o}lder's inequality}\Big{)}$
$\displaystyle\geq\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),g_{t}\Big{\rangle}-M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{t-1}A_{t-1}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{t}A_{t}\Big{|}\Big{|}_{1}\Big{)}$
(117) $\displaystyle\Big{(}\textit{Since
}\Big{|}\Big{|}g_{t}\Big{|}\Big{|}_{\infty}\leq
M_{g},\alpha_{t-1}\geq\alpha_{t}>0,A_{t-1}\geq A_{t}>0\textit{ element-
wise}\Big{)}$ (118)
Performing a telescoping sum, we have
$\sum_{t=1}^{T}\Big{\langle}a_{t},g_{t}\Big{\rangle}\geq\sum_{t=1}^{T}\Big{\langle}\alpha_{t-1}A_{t-1}\nabla f(x_{t}),g_{t}\Big{\rangle}-M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{0}A_{0}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{T}A_{T}\Big{|}\Big{|}_{1}\Big{)}$
(119)
Next, we derive an upper bound for $\sum_{t=1}^{T}\Big{\langle}a_{t},g_{t}\Big{\rangle}$ by upper-bounding the RHS of Eq. (112). We bound each part in turn.
$\displaystyle\Big{\langle}a_{t},m_{t}\Big{\rangle}$
$\displaystyle=\Big{\langle}\alpha_{t}A_{t}\nabla
f(x_{t}),m_{t}\Big{\rangle}=\Big{\langle}\nabla
f(x_{t}),\alpha_{t}A_{t}m_{t}\Big{\rangle}$ (120)
$\displaystyle=\Big{\langle}\nabla f(x_{t}),x_{t}-x_{t+1}\Big{\rangle}$ (121)
$\displaystyle\leq
f(x_{t})-f(x_{t+1})+\frac{L}{2}\Big{|}\Big{|}x_{t+1}-x_{t}\Big{|}\Big{|}^{2}\Big{(}\textit{By
L-smoothness of }f\Big{)}$ (122)
Performing a telescoping sum, we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}a_{t},m_{t}\Big{\rangle}\leq f(x_{1})-f(x_{T+1})+\frac{L}{2}\sum_{t=1}^{T}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}$
(123) $\displaystyle\Big{\langle}a_{t}-a_{t+1},m_{t}\Big{\rangle}$
$\displaystyle=\Big{\langle}\alpha_{t}A_{t}\nabla
f(x_{t})-\alpha_{t+1}A_{t+1}\nabla f(x_{t+1}),m_{t}\Big{\rangle}$ (124)
$\displaystyle=\Big{\langle}\alpha_{t}A_{t}\nabla f(x_{t})-\alpha_{t}A_{t}\nabla f(x_{t+1}),m_{t}\Big{\rangle}$ $\displaystyle+\Big{\langle}\alpha_{t}A_{t}\nabla f(x_{t+1})-\alpha_{t+1}A_{t+1}\nabla f(x_{t+1}),m_{t}\Big{\rangle}$ (125)
$\displaystyle=\Big{\langle}\nabla f(x_{t})-\nabla
f(x_{t+1}),\alpha_{t}A_{t}m_{t}\Big{\rangle}+\Big{\langle}(\alpha_{t}A_{t}-\alpha_{t+1}A_{t+1})\nabla
f(x_{t}),m_{t}\Big{\rangle}$ (126) $\displaystyle=\Big{\langle}\nabla
f(x_{t})-\nabla f(x_{t+1}),x_{t}-x_{t+1}\Big{\rangle}+\Big{\langle}\nabla
f(x_{t}),(\alpha_{t}A_{t}-\alpha_{t+1}A_{t+1})m_{t}\Big{\rangle}$ (127)
$\displaystyle\leq
L\Big{|}\Big{|}x_{t+1}-x_{t}\Big{|}\Big{|}^{2}+\Big{\langle}\nabla
f(x_{t}),(\alpha_{t}A_{t}-\alpha_{t+1}A_{t+1})m_{t}\Big{\rangle}$ (128)
$\displaystyle\Big{(}\textit{By smoothness of }f\Big{)}$ $\displaystyle\leq
L\Big{|}\Big{|}x_{t+1}-x_{t}\Big{|}\Big{|}^{2}+\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}_{\infty}\Big{|}\Big{|}\alpha_{t}A_{t}-\alpha_{t+1}A_{t+1}\Big{|}\Big{|}_{1}\Big{|}\Big{|}m_{t}\Big{|}\Big{|}_{\infty}$
(129) $\displaystyle\Big{(}\textit{By H\"{o}lder's inequality}\Big{)}$
$\displaystyle\leq
L\Big{|}\Big{|}x_{t+1}-x_{t}\Big{|}\Big{|}^{2}+M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{t}A_{t}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{t+1}A_{t+1}\Big{|}\Big{|}_{1}\Big{)}$
(130) $\displaystyle\Big{(}\textit{Since }\alpha_{t}\geq\alpha_{t+1}\geq
0,A_{t}\geq A_{t+1}\geq 0,\textit{element-wise}\Big{)}$ (131)
Performing a telescoping sum, we have
$\displaystyle\sum_{t=1}^{T-1}\Big{\langle}a_{t}-a_{t+1},m_{t}\Big{\rangle}\leq L\sum_{t=1}^{T-1}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}+M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{T}A_{T}\Big{|}\Big{|}_{1}\Big{)}$
(132)
We also have
$\displaystyle\Big{\langle}a_{T},m_{T}\Big{\rangle}$
$\displaystyle=\Big{\langle}\alpha_{T}A_{T}\nabla f(x_{T}),m_{T}\Big{\rangle}=\Big{\langle}\nabla f(x_{T}),\alpha_{T}A_{T}m_{T}\Big{\rangle}$ (133) $\displaystyle\leq L\frac{1-\beta_{1}}{\beta_{1}}\Big{|}\Big{|}\alpha_{T}A_{T}m_{T}\Big{|}\Big{|}^{2}+\frac{\beta_{1}}{4L(1-\beta_{1})}\Big{|}\Big{|}\nabla f(x_{T})\Big{|}\Big{|}^{2}$ (134) $\displaystyle\Big{(}\textit{By Young's inequality}\Big{)}$
$\displaystyle\leq L\frac{1-\beta_{1}}{\beta_{1}}\Big{|}\Big{|}\alpha_{T}A_{T}m_{T}\Big{|}\Big{|}^{2}+\frac{\beta_{1}}{4L(1-\beta_{1})}M_{g}^{2}$
(135)
Combining Eq. (123), Eq. (132) and Eq. (135) with Eq. (112), we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}a_{t},g_{t}\Big{\rangle}$
$\displaystyle\leq\frac{\beta_{1}}{1-\beta_{1}}\Big{\langle}a_{T},m_{T}\Big{\rangle}+f(x_{1})-f(x_{T+1})+\frac{L}{2}\sum_{t=1}^{T}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}$
$\displaystyle+\frac{\beta_{1}}{1-\beta_{1}}L\sum_{t=1}^{T-1}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}+\frac{\beta_{1}}{1-\beta_{1}}M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{T}A_{T}\Big{|}\Big{|}_{1}\Big{)}$
(136) $\displaystyle\leq f(x_{1})-f(x_{T+1})+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\sum_{t=1}^{T}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}$
$\displaystyle+\Big{(}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\frac{\beta_{1}}{1-\beta_{1}}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}\Big{)}M_{g}^{2}$
(137)
Combining Eq. (119) and Eq. (137), we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}\alpha_{t-1}A_{t-1}\nabla f(x_{t}),g_{t}\Big{\rangle}$ $\displaystyle-M_{g}^{2}\Big{(}\Big{|}\Big{|}\alpha_{0}A_{0}\Big{|}\Big{|}_{1}-\Big{|}\Big{|}\alpha_{T}A_{T}\Big{|}\Big{|}_{1}\Big{)}\leq\sum_{t=1}^{T}\Big{\langle}a_{t},g_{t}\Big{\rangle}$
$\displaystyle\leq f(x_{1})-f(x_{T+1})+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\sum_{t=1}^{T}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}$
$\displaystyle+\Big{(}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\frac{\beta_{1}}{1-\beta_{1}}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}\Big{)}M_{g}^{2}$
(138)
Hence we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}\alpha_{t-1}A_{t-1}\nabla f(x_{t}),g_{t}\Big{\rangle}\leq f(x_{1})-f(x_{T+1})+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\sum_{t=1}^{T}\Big{|}\Big{|}\alpha_{t}A_{t}m_{t}\Big{|}\Big{|}^{2}$
$\displaystyle+\Big{(}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\Big{|}\Big{|}\alpha_{0}A_{0}\Big{|}\Big{|}_{1}+\frac{\beta_{1}}{1-\beta_{1}}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}\Big{)}M_{g}^{2}$
(139) $\displaystyle\leq f(x_{1})-f^{*}+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\alpha_{0}^{2}M_{g}^{2}C_{u}^{2}\sum_{t=1}^{T}t^{-2\eta}$
$\displaystyle+\Big{(}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\Big{|}\Big{|}\alpha_{0}A_{0}\Big{|}\Big{|}_{1}+\frac{\beta_{1}}{1-\beta_{1}}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}\Big{)}M_{g}^{2}$
(140) $\displaystyle\leq f(x_{1})-f^{*}$
$\displaystyle+M_{g}^{2}\Big{[}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\Big{|}\Big{|}\alpha_{0}A_{0}\Big{|}\Big{|}_{1}+\frac{\beta_{1}}{1-\beta_{1}}\Big{|}\Big{|}\alpha_{1}A_{1}\Big{|}\Big{|}_{1}+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\alpha_{0}^{2}C_{u}^{2}\frac{2\eta}{2\eta-1}\Big{]}$
(141) $\displaystyle\Big{(}\textit{using }\sum_{t=1}^{T}t^{-2\eta}\leq 1+\int_{1}^{T}t^{-2\eta}\mathrm{d}t\leq\frac{2\eta}{2\eta-1}\textit{ for }\eta>0.5\Big{)}$
$\displaystyle\leq f(x_{1})-f^{*}+M_{g}^{2}\underbrace{\Big{[}\frac{\beta_{1}^{2}}{4L(1-\beta_{1})^{2}}+\frac{1}{1-\beta_{1}}\alpha_{0}M_{g}+\Big{(}\frac{\beta_{1}}{1-\beta_{1}}+\frac{1}{2}\Big{)}L\alpha_{0}^{2}C_{u}^{2}\frac{2\eta}{2\eta-1}\Big{]}}_{E}$
(142)
Taking expectations on both sides, we have
$\displaystyle\sum_{t=1}^{T}\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),\nabla
f(x_{t})\Big{\rangle}\leq\mathbb{E}f(x_{1})-f^{*}+EM_{g}^{2}\leq
M_{f}-f^{*}+EM_{g}^{2}$ (143)
Note that $\alpha_{t}$ decays monotonically with $t$; hence
$\displaystyle\sum_{t=1}^{T}\Big{\langle}\alpha_{t-1}A_{t-1}\nabla
f(x_{t}),\nabla f(x_{t})\Big{\rangle}$
$\displaystyle\geq\alpha_{0}T^{-\eta}\sum_{t=1}^{T}\Big{\langle}A_{t-1}\nabla
f(x_{t}),\nabla f(x_{t})\Big{\rangle}$ (144)
$\displaystyle\geq\alpha_{0}T^{1-\eta}C_{l}\Big{[}\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\Big{]}$ (145)
Combining Eq. (143) and Eq. (145), and recalling that $f$ is upper bounded by $M_{f}$, we have
$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\Big{|}\Big{|}\nabla
f(x_{t})\Big{|}\Big{|}^{2}\leq\frac{1}{\alpha_{0}C_{l}}T^{\eta-1}\Big{[}M_{f}-f^{*}+EM_{g}^{2}\Big{]}$
(146)
∎
## Appendix C Experiments
Figure 22: Behavior of ACProp for optimization of the function $f(x)=|x|$ with
$lr=0.00001$.
Figure 23: Behavior of ACProp for optimization of the function $f(x)=|x|$ with
$lr=0.01$.
### C.1 Centering of second momentum does not suffer from numerical issues
Note that the centered second momentum $s_{t}$ does not suffer from numerical issues in practice. The intuition that “$s_{t}$ is an estimate of the variance of the gradient” rests on the strong assumption that the gradient follows a stationary distribution, i.e. that the true gradient $\nabla f_{t}(x)$ remains constant in $t$. In fact, $s_{t}$ tracks $EMA\big{(}(g_{t}-m_{t})^{2}\big{)}$, which combines two contributions: the change in the true gradient, $||\nabla f_{t+1}(x)-\nabla f_{t}(x)||^{2}$, and the noise in the gradient observation, $||g_{t}-\nabla f_{t}(x)||^{2}$. In practice, especially in deep learning, gradients are noisy, hence $s_{t}$ does not take extremely small values.
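This decomposition is easy to see in a small simulation. The sketch below tracks $s_{t}=EMA\big{(}(g_{t}-m_{t})^{2}\big{)}$ for a constant true gradient, with and without observation noise; the hyper-parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2 = 0.9, 0.999

def final_s(grads):
    # s_t = EMA((g_t - m_t)^2), the centered second momentum discussed above.
    m = s = 0.0
    for g in grads:
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
    return s

T = 50_000
# (a) constant, noise-free gradient: s_t decays towards 0.
print(final_s(np.ones(T)))
# (b) the same true gradient observed with noise: s_t settles near the
#     noise variance (~0.01 here), so it does not become extremely small.
print(final_s(1.0 + 0.1 * rng.normal(size=T)))
```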
Next, we consider the idealized case where the observation $g_{t}$ is noiseless, and conduct experiments to show that centering the second momentum does not suffer from numerical issues. Consider the function $f(x)=|x|$ with initial value $x_{0}=100$; we plot the trajectories and stepsizes of various optimizers in Fig. 22 and Fig. 23 with initial learning rates $lr=0.00001$ and $lr=0.01$, respectively. Note that ACProp and AdaBelief take a large step in the initial phase, because a constant gradient is observed without noise. But the gradient remains constant only within half of the plane; when the iterate crosses the boundary $x=0$, the gradient is reversed, hence $||\nabla f_{t+1}(x)-\nabla f_{t}(x)||^{2}\neq 0$, and $s_{t}$ becomes non-zero when the iterate hits a valley in the loss surface. Therefore, the stepsizes of ACProp and AdaBelief automatically decrease when they reach the local minimum. As shown in Fig. 22 and Fig. 23, ACProp and AdaBelief do not take any extremely large stepsizes for either a very large (0.01) or a very small (0.00001) learning rate, and they automatically decrease their stepsizes near the optimum. We do not observe any numerical issues even for noise-free piecewise-linear functions. If the function is not piecewise linear, or the gradient does not remain constant within any connected set, then $||\nabla f_{t+1}(x)-\nabla f_{t}(x)||^{2}\neq 0$ almost everywhere, and the numerical issue never arises.
The only possible case where centering the second momentum causes a numerical issue has to satisfy two conditions simultaneously: (1) $||\nabla f_{t+1}(x)-\nabla f_{t}(x)||^{2}=0,\forall t$, and (2) $g_{t}$ is a noise-free observation of $\nabla f(x)$. This is the trivial case where the loss surface is linear and the gradient is noise-free, which is almost never encountered in practice. Furthermore, in this case $s_{t}=0$ and ACProp reduces to SGD with stepsize $1/\epsilon$. But since the optimum is $-\infty$, attained as $x\to\pm\infty$, taking a large stepsize of $1/\epsilon$ is still acceptable in this trivial case.
Table 6: Hyper-parameters for ACProp in various experiments

| Experiment | lr | beta1 | beta2 | eps |
|---|---|---|---|---|
| ImageNet | 1e-3 | 0.9 | 0.999 | 1e-12 |
| GAN | 2e-4 | 0.5 | 0.999 | 1e-16 |
| Transformer | 5e-4 | 0.9 | 0.999 | 1e-16 |
### C.2 Image classification with CNN
We performed extensive hyper-parameter tuning in order to better compare the performance of different optimizers: for SGD we set the momentum to 0.9, the default for many cases, and searched the learning rate between 0.1 and $10^{-5}$ on a log grid; for the adaptive optimizers, including AdaBelief, Adam, RAdam, AdamW and AdaShift, we searched the learning rate between 0.01 and $10^{-5}$ on a log grid, and searched $\epsilon$ between $10^{-5}$ and $10^{-10}$ on a log grid. We used a weight decay of 5e-2 for AdamW, and 5e-4 for the other optimizers. Our experiments are based on the official code for AdaBound and AdaBelief (https://github.com/juntang-zhuang/Adabelief-Optimizer).
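For concreteness, the search protocol amounts to a simple log-grid sweep; the sketch below uses a placeholder `train_and_eval` (hypothetical), since a real run would train the full model for each configuration.

```python
import itertools
import math

def train_and_eval(lr, eps):
    # Placeholder returning a validation score; a real implementation would
    # train the network with these hyper-parameters. A dummy value is
    # returned here so the sketch runs end to end.
    return -abs(math.log10(lr) + 3) - abs(math.log10(eps) + 8)

lrs  = [10.0 ** k for k in range(-5, -1)]   # 1e-5 ... 1e-2, log grid
epss = [10.0 ** k for k in range(-10, -4)]  # 1e-10 ... 1e-5, log grid
best = max(itertools.product(lrs, epss),
           key=lambda cfg: train_and_eval(*cfg))
print("best (lr, eps):", best)
```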
(a) VGG11 on Cifar10
(b) ResNet34 on Cifar10
(c) DenseNet121 on Cifar10
(d) VGG11 on Cifar10
(e) ResNet34 on Cifar10
(f) DenseNet121 on Cifar10
Figure 24: Training (top row) and test (bottom row) accuracy of CNNs on
Cifar10 dataset.
Figure 25: The training and test accuracy curve of VGG11 on CIFAR10 with
different $\beta_{1}$ values.
Figure 26: The training and test accuracy curve of VGG11 on CIFAR10 with
different $\beta_{2}$ values.
Figure 27: Test accuracy of VGG-11 on CIFAR10 trained under various hyper-
parameter settings with different optimizers
We further test the robustness of ACProp to values of hyper-parameters
$\beta_{1}$ and $\beta_{2}$. Results are shown in Fig. 25 and Fig. 26
respectively. ACProp is robust to different values of $\beta_{1}$, and is more
sensitive to values of $\beta_{2}$.
Figure 28: BLEU score on validation set of a Transformer-base trained with
ACProp and Adam
### C.3 Neural Machine Translation with Transformers
We conducted experiments on Neural Machine Translation (NMT) with transformer models. Our experiments on the IWSLT14 DE-EN task are based on the 6-layer transformer-base model in the fairseq implementation (https://github.com/pytorch/fairseq). For all methods, we use a learning rate of 0.0002 and the standard inverse-square-root learning rate schedule with 4,000 warmup steps. For the other tasks, our experiments are based on an open-source implementation (https://github.com/DevSinghSachan/multilingual_nmt) using a 1-layer Transformer model. We plot the BLEU score on the validation set as a function of training epoch in Fig. 28; ACProp consistently outperforms Adam throughout training.
### C.4 Generative adversarial networks
The training of GANs easily suffers from mode collapse and numerical instability [34], and is hence a good test for the stability of optimizers. We conducted experiments with Deep Convolutional GAN (DCGAN) [35], Spectral-Norm GAN (SNGAN) [36], Self-Attention GAN (SAGAN) [37] and Relativistic-GAN (RLGAN) [38]. We set $\beta_{1}=0.5$, and searched for $\beta_{2}$ and $\epsilon$ with the same schedule as in the previous section. Our experiments are based on an open-source implementation (https://github.com/POSTECH-CVLab/PyTorch-StudioGAN).
Figure 29: Samples generated by the SN-GAN trained with ACProp. Figure 30: Samples generated by the SA-GAN trained with ACProp. Figure 31: Samples generated by the DC-GAN trained with ACProp. Figure 32: Samples generated by the RL-GAN trained with ACProp.
# Small batch deep reinforcement learning
Johan Obando-Ceron (Mila, Université de Montréal) <EMAIL_ADDRESS>
Marc G. Bellemare (Mila, Université de Montréal) <EMAIL_ADDRESS>
Pablo Samuel Castro (Google DeepMind; Mila, Université de Montréal) <EMAIL_ADDRESS>
Work done during an internship at Google DeepMind.
###### Abstract
In value-based deep reinforcement learning with replay memories, the batch
size parameter specifies how many transitions to sample for each gradient
update. Although critical to the learning process, this value is typically not
adjusted when proposing new algorithms. In this work we present a broad
empirical study that suggests reducing the batch size can result in a number
of significant performance gains; this is surprising, as the general tendency
when training neural networks is towards larger batch sizes for improved
performance. We complement our experimental findings with a set of empirical
analyses towards better understanding this phenomenon.
## 1 Introduction
One of the central concerns for deep reinforcement learning (RL) is how to
efficiently make the most use of the collected data for policy improvement.
This is particularly important in online settings, where RL agents learn while
interacting with an environment, as interactions can be expensive. Since the
introduction of DQN (Mnih et al., 2015), one of the core components of most
modern deep RL algorithms is the use of a finite replay memory where
experienced transitions are stored. During learning, the agent samples mini-
batches from this memory to update its network parameters.
Since the policy used to collect transitions is changing throughout learning,
the replay memory contains data coming from a mixture of policies (that differ
from the agent’s current policy), and results in what is known as off-policy
learning. In contrast with training data for supervised learning problems,
online RL data is highly non-stationary. Still, at any point during training
the replay memory exhibits a distribution over transitions, which the agent
samples from at each learning step. The number of sampled transitions at each
learning step is known as the batch size, and is meant to produce an unbiased
estimator of the underlying data distribution. Thus, in theory, larger batch
sizes should be more accurate representations of the true distribution.
Some in the supervised learning community suggest that learning with large
batch sizes leads to better optimization (Shallue et al., 2019), since smaller
batches yield noisier gradient estimations. Contrastingly, others have
observed that larger batch sizes tend to converge to “sharper” optimization
landscapes, which can result in worsened generalization (Keskar et al., 2017);
smaller batches, on the other hand, seem to result in “flatter” landscapes,
resulting in better generalization.
Learning dynamics in deep RL are drastically different than those observed in
supervised learning, in large part due to the data non-stationarity mentioned
above. Given that the choice of batch size will have a direct influence on the
agent’s sample efficiency and ultimate performance, developing a better
understanding of its impact is critical. Surprisingly, to the best of our
knowledge there have been no studies exploring the impact of the choice of
batch size in deep RL. Most recent works have focused on related questions,
such as the number of gradient updates per environment step (Nikishin et al.,
2022; D’Oro et al., 2023; Sokar et al., 2023), but have kept the batch size
fixed.
In this work we conduct a broad empirical study of batch size in online value-
based deep reinforcement learning. We uncover the surprising finding that
reducing the batch size seems to provide substantial performance benefits and
computational savings. We showcase this finding in a variety of agents and
training regimes (section 3), and conduct in-depth analyses of the possible
causes (section 4). The impact of our findings and analyses go beyond the
choice of the batch size hyper-parameter, and help us develop a better
understanding of the learning dynamics in online deep RL.
## 2 Background
Figure 1: Evaluating QR-DQN (Dabney et al., 2018a) with varying batch sizes
over all 60 Atari 2600 games. (Left) Average improvement obtained when using a
batch size of 8 over 32 (default); (Right) Aggregate Interquartile Mean
(Agarwal et al., 2021) of human normalized scores. All games run for 3 seeds,
with shaded areas displaying 95% stratified bootstrap confidence intervals.
A reinforcement learning problem is typically formulated as a Markov decision
process (MDP), which consists of a 5-tuple
$\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma\rangle$, where
$\mathcal{S}$ denotes the state space, $\mathcal{A}$ denotes the actions,
$\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow Dist(\mathcal{S})$
encodes the transition dynamics,
$\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is the reward
function, and $\gamma\in[0,1)$ is a discount factor. The aim is to learn a
policy $\pi_{\theta}:\mathcal{S}\mapsto\mathcal{A}$ parameterized by $\theta$
such that the expected sum of discounted rewards
$\mathbb{E}_{\pi_{\theta}}\left[\sum_{t=1}^{\infty}\gamma^{t}r_{t}\right]$ is
maximized; here, the state-action trajectory
$\left(\mathbf{s}_{0},\mathbf{a}_{0},\mathbf{s}_{1},\mathbf{a}_{1},\ldots\right)$
is obtained by sampling an action
$\mathbf{a}_{t}\sim\pi_{\theta}\left(\cdot\mid\mathbf{s}_{t}\right)$ and
reaching state
$\mathbf{s}_{t+1}\sim\mathcal{P}\left(\cdot\mid\mathbf{s}_{t},\mathbf{a}_{t}\right)$
at each decision step $t$, and
$r_{t}\sim\mathcal{R}\left(\cdot\mid\mathbf{s}_{t},\mathbf{a}_{t}\right)$.
In value-based methods, the policy is obtained as the argmax of a learned
$Q$-function:
$\pi_{\theta}(s)\equiv\arg\max_{a\in\mathcal{A}}Q_{\theta}(s,a)$. This
function aims to approximate the optimal state-action values $Q^{*}$, defined via the well-known Bellman recurrence $Q^{*}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{E}\left[\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t})+\gamma\max_{\mathbf{a}^{\prime}}Q^{*}\left(\mathbf{s}_{t+1},\mathbf{a}^{\prime}\right)\right]$, and is typically learned using $Q$-learning (Watkins and Dayan, 1992; Sutton and Barto, 2018).
To deal with large state spaces, such as all possible images in an Atari 2600
game, Mnih et al. (2015) introduced DQN, which combined Q-learning with deep
neural networks to represent $Q_{\theta}$. A large replay buffer $D$ is
maintained to store experienced transitions, from which mini-batches are
sampled to perform learning updates (Lin, 1992). Specifically, temporal
difference learning is used to update the network parameters with the
following loss function:
$L(\theta)=\mathbb{E}_{(s_{t},a_{t},r_{t},s_{t+1})\sim D}\left[\left(r_{t}+\gamma\max_{a^{\prime}\in\mathcal{A}}Q_{\bar{\theta}}(s_{t+1},a^{\prime})-Q_{\theta}(s_{t},a_{t})\right)^{2}\right]$.
Here $Q_{\bar{\theta}}$ is a target network, a delayed copy of $Q_{\theta}$ whose parameters are synced with $Q_{\theta}$ less frequently than $Q_{\theta}$ is updated.
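In code, this loss amounts to a few lines. A minimal NumPy sketch, assuming the Q-networks are callables returning per-action values (terminal-state masking omitted for brevity):

```python
import numpy as np

def dqn_loss(q_net, target_q_net, batch, gamma=0.99):
    # batch: transitions (s, a, r, s_next) sampled from the replay buffer D.
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    q_sa = q_net(s)[np.arange(len(a)), a]                     # Q_theta(s_t, a_t)
    td_target = r + gamma * target_q_net(s_next).max(axis=1)  # uses Q_bar
    return np.mean((td_target - q_sa) ** 2)                   # squared TD error

# Toy usage with a random linear Q-function over 4 features and 3 actions.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
q = lambda states: states @ W
batch = {"s": rng.normal(size=(32, 4)),
         "a": rng.integers(0, 3, size=32),
         "r": rng.normal(size=32),
         "s_next": rng.normal(size=(32, 4))}
print(dqn_loss(q, q, batch))
```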
Since the introduction of DQN, there have been a number of algorithmic
advances in deep RL agents, in particular those which make use of
distributional RL (Bellemare et al., 2017), introduced with the C51 algorithm.
The Rainbow agent combined C51 with other advances such as multi-step learning
and prioritized replay sampling (Hessel et al., 2018). Different ways of
parameterizing return distributions were proposed in the form of the IQN
(Dabney et al., 2018b) and QR-DQN (Dabney et al., 2018a) algorithms. For
reasons which will be clarified below, most of our evaluations and analyses
were conducted with the QR-DQN agent.
## 3 The small batch effect on agent performance
In this section we showcase the performance gains that arise when training
with smaller batch sizes. We do so first with four standard value-based agents
(§3.1), with varying architectures (§3.2), agents optimized for sample
efficiency (§3.3), and with extended training (§3.4). Additionally, we explore
the impact of reduced batch sizes on exploration (§3.5) and computational cost
(§3.6).
Experimental setup: We use the Jax implementations of RL agents, with their
default hyper-parameter values, provided by the Dopamine library (Castro et al., 2018; code available at https://github.com/google/dopamine), applied to the Arcade Learning Environment (ALE) (Bellemare et al., 2013); Dopamine uses sticky actions by default (Machado et al., 2018). It is worth noting that the default batch size is $32$, which, for clarity, we indicate in black in all the plots below. We evaluate our agents on 20
games chosen by Fedus et al. (2020) for their analysis of replay ratios,
picked to offer a diversity of difficulty and dynamics. To reduce the
computational burden, we ran most of our experiments for 100 million frames
(as opposed to the standard 200 million). For evaluation, we follow the
guidelines of Agarwal et al. (2021). Specifically, we run 3 independent seeds
for each experiment and report the human-normalized interquartile mean (IQM),
aggregated over the 20 games, configurations, and seeds, with the 95%
stratified bootstrap confidence intervals. Note that this means that for most
of the aggregate results presented here, we are reporting mean and confidence
intervals over 60 independent seeds. All experiments were run on NVIDIA Tesla
P100 GPUs.
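For reference, the aggregate metric itself is simple to compute; a sketch of the interquartile mean over a score matrix (the stratified bootstrap intervals of Agarwal et al. (2021) are omitted here):

```python
import numpy as np

def iqm(scores):
    # Interquartile mean: the average of the middle 50% of the scores,
    # i.e. after discarding the bottom and top 25%.
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

# scores: (games x seeds) human-normalized scores; random here for illustration.
rng = np.random.default_rng(0)
print(iqm(rng.normal(1.0, 0.5, size=(20, 3))))
```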
### 3.1 Standard agents
Figure 2: IQM for human normalized scores with varying neural network
architectures over 20 games, with 3 seeds per experiment. Shaded areas
represent 95% stratified bootstrap confidence intervals.
We begin by investigating the impact reducing the batch size can have on four
popular value-based agents, which were initially benchmarked on the ALE suite:
DQN (Mnih et al., 2015), Rainbow (Hessel et al., 2018) (Note that Dopamine
uses a “compact” version of the original Rainbow agent, including only multi-
step updates, prioritized replay, and C51), QR-DQN (Dabney et al., 2018a), and
IQN (Dabney et al., 2018b). In Figure 3 we can observe that, in general,
reduced batch size results in improved performance. The notable exception is
DQN, for which we provide an analysis and explanation for why this is the case
below. To verify that our results are not a consequence of the set of 20 games
used in our analyses, we ran QR-DQN (where the effect is most observed) over
the full 60 games in the suite and report the results in Figure 19.
Remarkably, a batch size of 8 results in significant gains on 38 out of the
full 60 games, for an average performance improvement of 98.25%.
Figure 3: IQM for human normalized scores for DQN, Rainbow, QR-DQN, and IQN
over 20 games. All games run with 3 independent seeds, shaded areas
representing 95% confidence intervals.
### 3.2 Varying architectures
Although the CNN architecture originally introduced by DQN (Mnih et al., 2015)
has been the backbone for most deep RL networks, there have been some recent
works exploring the effects of varying architectures (Espeholt et al., 2018;
Agarwal et al., 2022; Sokar et al., 2023). We investigate the small batch
effect by varying the QR-DQN architecture in two ways: (1) expanding the
convolutional widths by 4 times (resulting in a substantial increase in the
number of parameters), and (2) using the Resnet architecture proposed by
Espeholt et al. (2018) (which results in a similar number of parameters to the
original CNN architecture, but is a deeper network). In Figure 2 we can
observe that not only do reduced batch sizes yield improved performance, but
they are better able to leverage the increased number of parameters (CNNx4)
and the increased depth (Resnet).
### 3.3 Atari 100k agents
There has been an increased interest in evaluating Atari agents on very few
environment interactions, for which Kaiser et al. (2020) proposed the 100k
benchmark (here, 100k refers to agent steps, or 400k environment frames, due to frame skipping in the standard training setup). We evaluate the effect of
reduced batch size on three of the most widely used agents for this regime:
Data-efficient Rainbow (DER), a version of the Rainbow algorithm with hyper-
parameters tuned for faster early learning (van Hasselt et al., 2019);
DrQ($\epsilon$), which is a variant of DQN that uses data augmentation
(Agarwal et al., 2021); and SPR, which incorporates self-supervised learning
to improve sample efficiency (Schwarzer et al., 2020). For this setting we evaluate on the standard 26 games for this benchmark (Kaiser et al., 2020), aggregated over 6 independent trials.
Figure 4: Measured IQM of human-normalized scores on the 26 100k benchmark
games, with varying batch sizes, of DER, SPR, and DrQ($\epsilon$). We evaluate
performance at 100k agent steps (or 400k environment frames), and at 30
million environment frames, run with 6 independent seeds for each experiment,
and shaded areas display 95% confidence intervals.
In Figure 4 we include results both at the 100k benchmark (left side of
plots), and when trained for 30 million frames. Our intent is to evaluate the
batch size effect on agents that were optimized for a different training
regime. We can see that although there is little difference at 100k, there is a much more pronounced effect when trained for longer. This finding suggests that reduced batch sizes enable continued performance improvements under longer training.
Figure 5: Measuring IQM for human-normalized scores when training for 200
million frames. Results aggregated over 20 games, where each experiment was
run with 3 independent seeds and we report 95% confidence intervals.
### 3.4 Training Stability
To further investigate whether reduced batch sizes enable continual
improvements with longer training, we extend the training of QR-DQN up to the
standard 200 million frames. In Figure 5 we can see that training performance
tends to plateau for the higher batch sizes. In contrast, the smaller batch
sizes seem to be able to continuously improve their performance.
### 3.5 Impact on exploration
The simplest and most widely used approach for exploration is to select
actions randomly with a probability $\epsilon$, as opposed to selecting them
greedily from the current $Q_{\theta}$ estimate. The increased variance
resulting from reduced batch sizes (as we will explore in more depth below)
may also result in a natural form of exploration. To investigate this, we set
the target $\epsilon$ value to $0.0$ for QR-DQN (note that we follow the training schedule of Mnih et al. (2015), where the $\epsilon$ value begins at $1.0$ and is linearly decayed to its target value over the first million environment frames). In Figure 6 we compare performance across four known hard exploration games (Bellemare et al., 2016; Taiga et al., 2020) and observe that reduced batch sizes tend to result in improved performance on these games.
Many methods have been proposed to address the exploitation-exploration
dilemma, and some techniques emphasize exploration by adding noise directly to
the parameter space of agents (Fortunato et al., 2018; Plappert et al., 2018;
Hao et al., 2023; Eberhard et al., 2023), which inherently adds variance to
the learning process. Our analyses show that increasing variance by reducing
the batch size may result in similar beneficial exploratory effects, as the
mentioned works suggest.
Figure 6: Left: Performance of QR-DQN on four hard exploration games with a
target $\epsilon$ value of $0.0$, and with varying batch sizes. Right:
Aggregate IQM of human-normalized scores over 20 games with a target
$\epsilon$ value of $0.0$. In all the plots 3 independent seeds were used for
each game/batch-size configuration, with shaded areas representing 95%
confidence intervals.
### 3.6 Computational impact
Empirical advances in deep reinforcement learning are generally measured with respect to sample efficiency, that is, the number of environment interactions required before achieving a certain level of performance. This metric, however, fails to capture computational differences between algorithms. If two algorithms have the same performance with respect to environment interactions, but one takes twice as long to perform each training step, one would clearly opt for the faster of the two. This important distinction is nevertheless largely overlooked in the standard evaluation methodologies used by the DRL community.
Figure 7: Measuring wall-time versus IQM of human-normalized scores when varying batch sizes in DQN (with $n$-step set to 3), Rainbow, QR-DQN, and IQN over 20 games. Each experiment had 3 independent runs; shaded areas show 95% confidence intervals.
We have already demonstrated the performance benefits obtained when reducing the batch size; an additional important consequence is the reduction in training wall-time. Figure 7 demonstrates that not only can we obtain
better performance with a reduced batch size, but we can do so at a fraction
of the runtime. As a concrete example, when changing the batch size of QR-DQN
from the default value of 32 to 8, we achieve both a 50% performance increase
and a 29% speedup in wall-time. It may seem surprising that smaller batch
sizes have a faster runtime, since larger batches presumably make better use
of GPU parallelism. However, as pointed out by Masters and Luschi (2018), the
speedups may be a result of a smaller memory footprint, enabling better
machine throughput.
Considering the unsustainable increase in computational requirements, progress in deep learning demands more compute-efficient training methods. A natural direction is to eliminate algorithmic inefficiencies in the learning process, aiming to reduce the time, energy consumption and carbon footprint associated with training these models (Bartoldson et al., 2023; Chen et al., 2021). Figure 14 illustrates the wall-time reduction when using high-capacity neural networks with smaller batch sizes. This motivates a fundamental trade-off in the choice of batch size, and in how we benchmark deep reinforcement learning algorithms.
Key observations on reduced batch sizes:
• They generally improve performance, as evaluated across a variety of agents and network architectures.
• When trained for longer, the performance gains continue rather than plateauing.
• They seem to have a beneficial effect on exploration.
• They result in faster training, as measured by wall-time.
## 4 Understanding the small batch effect
Having demonstrated the performance benefits arising from a reduced batch size
across a wide range of tasks, in this section we seek to gain some insight
into possible causes. We will focus on QR-DQN, as this is the agent where the
small batch effect is most pronounced (Figure 3). We begin by investigating
possible confounding factors for the small batch effect, and then provide
analyses on the effect of reduced batch sizes on network dynamics.
### 4.1 Relation to other hyperparameters
Figure 8: Varying batch sizes for different learning rate values. Results show the aggregate IQM of human-normalized scores over 20 games for QR-DQN.
#### Learning rates
It is natural to wonder whether a better-tuned learning rate could produce the same effect as simply reducing the batch size. In Figure 8 we explore a variety of learning rates and observe that, although performance is relatively stable with a batch size of 32, it is unable to reach the gains obtained with a batch size of 8 or 16. Figure 8 also shows that the smaller the learning rate, the larger the batch size needs to be, and thus the longer training takes. This result aligns well with the findings of Wilson and Martinez (2003).
#### Second order optimizer effects
All our experiments, like most modern RL agents, use the Adam optimizer
(Kingma and Ba, 2015), a variant of stochastic gradient descent (SGD) that
adapts its learning rate based on the first- and second-order moments of the
gradients, as estimated from mini-batches used for training. It is thus
possible that smaller batch sizes have a second-order effect on the learning-
rate adaptation that benefits agent performance. To investigate this we
evaluated, for each training step, performing multiple gradient updates on
subsets of the original sampled batch; we define the parameter $BatchDivisor$
as the number of gradient updates and dividing factor (where a value of 1 is
the default setting). Thus, for a $BatchDivisor$ of 4, we would perform 4
gradient updates with subsets of size 8 instead of a single gradient update
with a mini-batch of size 32. With an optimizer like SGD this has no effect
(as they are mathematically equivalent), but we may see differing performance
due to Adam’s adaptive learning rates. Figure 9 demonstrates that, while there
are differences, these are not consistent nor significant enough to explain
the performance boost.
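A sketch of the $BatchDivisor$ protocol is shown below; `optimizer_step` is a placeholder for one optimizer update (e.g. one Adam step) on the given subset.

```python
import numpy as np

def train_step(batch, batch_divisor, optimizer_step):
    # BatchDivisor = 1 recovers the default: one update on the full batch of 32.
    # BatchDivisor = 4 performs 4 sequential updates on subsets of size 8.
    for sub_batch in np.array_split(batch, batch_divisor):
        optimizer_step(sub_batch)

# Toy demonstration: count the updates and subset sizes.
batch = np.arange(32)
train_step(batch, 4, lambda sb: print("update on", len(sb), "transitions"))
```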
Figure 9: Varying the number of gradient updates per training step, for a fixed batch size of 32. Left: performance of QR-DQN on three games with different $BatchDivisor$ values. Right: aggregate IQM of human-normalized scores over 20 games for QR-DQN.
#### Relationship with multi-step learning
In Figure 3 we observed that DQN was the only agent where reducing batch size
did not improve performance. Recalling that the Dopamine version of Rainbow simply adds three components to the base DQN agent, we follow the analyses of Hessel et al. (2018) and Ceron and Castro (2021). Specifically, in Figure 10 (top row) we simultaneously add these components to DQN (top left plot) and remove them from Rainbow (top center plot). Remarkably, batch size is inversely correlated with performance only when multi-step returns are used. Given that DQN is the only agent considered here without multi-step learning, this explains the anomalous results in Figure 3.
Indeed, as the right panel of Figure 10 (top row) shows, adding multi-step
learning to DQN results in improved performance with smaller batch sizes. To
further investigate the relationship between batch size and multi-step
returns, in Figure 10 (bottom row) we evaluate varying both batch sizes and
$n$-step values for DQN, Rainbow, and QR-DQN. We can observe that smaller
batch sizes suffer less from degrading performance as the $n$-step value is
increased.
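For reference, the $n$-step target replaces the one-step TD target with a discounted sum of rewards plus a bootstrapped tail; a minimal sketch (termination handling omitted):

```python
import numpy as np

def n_step_target(rewards, q_bootstrap, gamma=0.99):
    # sum_{k=0}^{n-1} gamma^k r_{t+k}  +  gamma^n * max_a' Q_bar(s_{t+n}, a')
    n = len(rewards)
    discounts = gamma ** np.arange(n)
    return float(np.dot(discounts, rewards) + gamma ** n * q_bootstrap)

print(n_step_target([1.0, 0.0, 1.0], q_bootstrap=5.0))  # n = 3, as in Rainbow
```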
Figure 10: Measured IQM human normalized scores over 20 games with 3
independent seeds for each configuration, displaying 95% stratified bootstrap
confidence intervals. Top left: Adding components to DQN; Top center: Removing
components from Rainbow. Top right: Aggregate DQN performance with $n$-step of
3. Bottom: Varying batch sizes and $n$-steps in DQN (left), Rainbow (center),
and QR-DQN (right).
Key insights:
• The small batch effect does not seem to be a consequence of a sub-optimal choice of learning rate for the default batch size of 32.
• The small batch effect does not arise due to beneficial interactions with the Adam optimizer.
• The small batch effect appears to be more pronounced with multi-step learning.
• When increasing the update horizon in multi-step learning, smaller batches produce better results.
### 4.2 Analysis of network optimization dynamics
In this section we will focus on three representative games (Asteroids,
DemonAttack, and SpaceInvaders), and include results for more games in the
supplemental material. In Figure 11 we present the training returns as well as
a variety of metrics we collected for our analyses. We will discuss each in
more detail below. The first column in this figure displays the training
returns for each game, where we can observe the inverse correlation between
batch size and performance.
Figure 11: Empirical analyses for three representative games with varying
batch sizes. From left to right: training returns, aggregate loss variance,
average gradient norm, average representation norm, $srank$ (Kumar et al.,
2021a), and dormant neurons (Sokar et al., 2023). All results averaged over 3
seeds, shaded areas represent 95% confidence intervals.
#### Variance of updates
Intuition suggests that as we decrease the batch size, we will observe an
increase in the variance of our updates as our gradient estimates will be
noisier. This is confirmed in the second column of Figure 11, where we see an
increased variance with reduced batch size. A natural question is whether
directly increasing variance results in improved performance, thereby
(partially) explaining the results with reduced batch size. To investigate, we
added Gaussian noise (at varying scales) to the learning target
$Q_{\bar{\theta}}$ (see section 2 for definition). As Figure 12 demonstrates,
simply adding noise to the target does provide benefits, albeit with some
variation across games.
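The perturbation used in this experiment amounts to one line on top of the standard TD target; a sketch, where `noise_scale` is the varied parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_td_target(r, q_next_max, gamma=0.99, noise_scale=0.1):
    # Standard TD target r + gamma * max_a' Q_bar(s', a'), perturbed by
    # zero-mean Gaussian noise whose scale is varied in Figure 12.
    target = r + gamma * q_next_max
    return target + noise_scale * rng.normal(size=np.shape(target))

print(noisy_td_target(np.zeros(4), np.ones(4)))
```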
Figure 12: Adding noise of varying scales to the learning target with the default batch size of 32. Left: performance of QR-DQN on three games with different target noise scale values. Right: aggregate IQM of human-normalized scores over 20 games for QR-DQN.
#### Gradient and representation norms
Keskar et al. (2017) and Zhao et al. (2022) both argue that smaller gradient
norms can lead to improved generalization and performance, in part due to less
“sharp” optimization landscapes. In Figure 11 (third column) we can see that
batch size is, in fact, correlated with gradient norms, which may be an
important factor in the improved performance. In Appendix D, we conducted
experiments on a different subset of games, and observed a consistent trend:
better performance is achieved with smaller batch sizes and gradient norms.
There have been a number of recent works suggesting that RL representations, taken to be the output of the convolutional layers in our networks (a common interpretation used recently, for example, by Castro et al. (2021), Gogianu et al. (2021), and Farebrother et al. (2023)), yield better agent performance when their norms are smaller. Gogianu et al. (2021) demonstrated
that normalizing representations yields improved agent performance as a result
of a change to optimization dynamics; Kumar et al. (2021b) further observed
that smaller representation norms can help mitigate feature co-adaptation,
which can degrade agent performance in the offline setting. As Figure 11
(fourth column) shows, the norms of the representations are correlated with
batch size, which aligns well with the works just mentioned.
#### Effect on network expressivity and plasticity
Kumar et al. (2021a) introduced the notion of the effective rank of the representation, $srank_{\delta}(\phi)$, where $\delta$ is a threshold parameter (we use the same value of $0.01$ as Kumar et al. (2021a)), and argued that it is correlated with a network's expressivity: a reduction in effective rank results in an implicit under-parameterization. The authors provide evidence that bootstrapping is the likeliest cause of effective rank collapse (and reduced performance).
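Concretely, $srank_{\delta}(\phi)$ is computed from the singular values of the feature matrix: it is the smallest $k$ such that the top-$k$ singular values capture a $(1-\delta)$ fraction of the total spectral mass. A minimal sketch:

```python
import numpy as np

def srank(features, delta=0.01):
    # Effective rank (Kumar et al., 2021a): smallest k such that the top-k
    # singular values account for a (1 - delta) fraction of the spectrum.
    sigma = np.linalg.svd(features, compute_uv=False)
    cum = np.cumsum(sigma) / np.sum(sigma)
    return int(np.searchsorted(cum, 1.0 - delta) + 1)

# features: (batch x d) penultimate-layer outputs; random example here.
rng = np.random.default_rng(0)
print(srank(rng.normal(size=(256, 64))))
```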
Figure 13: Gradient covariance matrices for Asteroids (left) and SpaceInvaders (right). In environments where a smaller batch size significantly improves performance, it also induces weaker gradient correlation and less gradient interference.
Interestingly, in Figure 11 (fifth column) we see that with smaller batch
sizes $srank$ collapse occurs earlier in training than with larger batch
sizes. Given that there is mounting evidence that deep RL networks tend to
overfit during training (Dabney et al., 2021; Nikishin et al., 2022; Sokar et
al., 2023), it is possible that the network is better able to adapt to an
earlier rank collapse than to a later one.
To further investigate the effects on network expressivity, we measured the
fraction of dormant neurons (neurons with near-zero activations). Sokar et al.
(2023) demonstrated that deep RL agents suffer from an increase in the number
of dormant neurons in their network; further, the higher the level of dormant
neurons, the worse the performance. In Figure 11 (rightmost column) we can see
that, although the relationship with batch size is not as clear as with some
of the other metrics, smaller batch sizes appear to have a much milder
increase in their frequency. Further, there does appear to be a close
relationship with the measured $srank$ findings above. Lyle et al. (2023)
evaluated the covariance structure of the gradients to revisit the network’s
loss landscape, and argue that weaker gradient correlation and less gradient
interference improve performance. We observe similar results in the gradient covariance heat maps shown in Figure 13 and Figure 16, where gradients appear to be largely colinear when using larger batch size values (dark red indicates strong negative correlation, dark blue strong positive correlation).
Key insights:
• Reduced batch sizes result in increased variance of losses and gradients. This increased variance can have a beneficial effect during training.
• Smaller batch sizes result in smaller gradient and representation norms, which tend to result in improved performance.
• Smaller batch sizes seem to result in networks that are both more expressive and have greater plasticity.
## 5 Related work
There is a considerable amount of literature on understanding the effect of
batch size in supervised learning settings. Keskar et al. (2016) presented
quantitative experiments that support the view that large-batch methods tend
to converge to sharp minimizers of the training and testing functions, and, as has been shown in the optimization community, sharp minima tend to lead to poorer generalization. Masters and Luschi (2018) support this finding,
presenting an empirical study of stochastic gradient descent’s performance,
and reviewing the underlying theoretical assumptions surrounding smaller
batches. They conclude that using smaller batch sizes achieves the best
training stability and generalization performance. Additionally, Golmant et
al. (2018) reported that across a wide range of network architectures and
problem domains, increasing the batch size yields no decrease in wall-clock
time to convergence for either train or test loss.
Although batch size is central to deep reinforcement learning algorithms, it
has not been extensively studied. One of the few results in this space is the
work by Stooke and Abbeel (2018), where they argued that larger batch sizes
can lead to improved performance when training in distributed settings. Our
work finds the opposite effect: smaller batch sizes tend to improve
performance; this suggests that empirical findings may not directly carry over
between single-agent and distributed training scenarios. Islam et al. (2017)
and Hilton et al. (2022) have investigated the role of batch size in on-policy
algorithms. The latter demonstrates how to make these algorithms batch size-
invariant, aiming to sustain training efficiency at small batch sizes.
Lahire et al. (2021) cast the replay buffer sampling problem as an importance sampling one, allowing it to perform well when using large batches. Fedus et al.
(2020) presented a systematic and extensive analysis of experience replay in
Q-learning methods, focusing on two fundamental properties: the replay
capacity and the ratio of learning updates to experience collected (e.g. the
replay ratio). Although their findings are complementary to ours, further
investigation into the interplay of batch size and replay ratio is an
interesting avenue for future work. Finally, there have been a number of
recent works investigating network plasticity (Schwarzer et al., 2023; D’Oro
et al., 2023; Sokar et al., 2023; Nikishin et al., 2022), but all have kept
the batch size fixed.
Wołczyk and Krutsylo (2021) investigate the dynamics of experience replay in
online continual learning, and focus on the effect of batch size choice when
sampling from a replay buffer. They find that smaller batches are better at
preventing forgetting than using larger batches, contrary to the intuitive
assumption that it is better to recall more samples from the past to avoid
forgetting. Additionally, the authors show that this phenomenon does not
disappear under learning rate tuning. Their settings are similar to those used
to generate Figure 3 in (Sokar et al., 2023), and suggest that target non-
stationarity (e.g. bootstrapping) may have a role to play in explaining the
small batch size effect we are observing.
## 6 Conclusions
In online deep RL, the amount of data sampled during each training step is
crucial to an agent’s learning effectiveness. Common intuition would lead one
to believe that larger batches yield better estimates of the data distribution
and yield computational savings due to data parallelism on GPUs. Our findings
here suggest the opposite: the batch size parameter generally alters the
agent’s learning curves in surprising ways, and reducing the batch size below
its standard value is often beneficial.
From a practical perspective, our experimental results make it clear that the
effect of batch size on performance is substantially more complex than in
supervised learning. Beyond the obvious performance and wall-time gains we
observe, changing the batch size appears to have knock-on effects on
exploration as well as asymptotic behaviour. Figure 8 hints at a complex
relationship between learning rate and batch size, suggesting the potential
usefulness of “scaling laws” for adjusting these parameters appropriately.
Conversely, our results also highlight a number of theoretically-unexplained
effects in deep reinforcement learning. For example, one would naturally
expect that decreasing the batch size should increase variance, and eventually
affect prediction accuracy. That its effect on performance, both transient and
asymptotic, should so critically depend on the degree to which bootstrapping
occurs (as in $n$-step returns; Figure 10), suggests that gradient-based
temporal-difference learning algorithms need a fundamentally different
analysis from supervised learning methods.
#### Future Work
Our focus in this paper has been on value-based online methods. This raises
the question of whether our findings carry over to actor-critic methods, and
different training scenarios such as offline RL (Levine et al., 2020) and
distributed training (Stooke and Abbeel, 2018). While similar findings are
likely for actor-critic methods, the dynamics are sufficiently different in
offline RL and in distributed training that it would likely require a
different investigative and analytical approach. It is also an interesting
direction to explore adaptive schemes that dynamically vary the batch size
during training. Our experiments used a constant batch size, so further
research is needed to determine whether it is advantageous to reduce the batch
size over time in practice, as well as how quickly it should be reduced.
Our work has broader implications than just the choice of the batch size
hyper-parameter. For instance, our findings on the impact of variance on
performance suggest a promising avenue for new algorithmic innovations via the
explicit injection of variance. Most exploration algorithms are designed for
tabular settings and then adapted for deep networks; our results in section
3.5 suggest there may be opportunities for exploratory algorithms designed
specifically for use with neural networks. We hope our analyses can prove
useful for further advances in the development and understanding of deep
networks for reinforcement learning.
#### Acknowledgements.
Many thanks to Georg Ostrovski and Gopeshh Subbaraj for their feedback on an
earlier draft of this paper. We also acknowledge Max Schwarzer, Adrien Ali
Taiga, Rishabh Agarwal and Jesse Farebrother for useful discussions, as well
as the rest of the DeepMind Montreal team for their feedback on this work. The
authors would also like to thank the anonymous reviewers for useful feedback
on this paper. Last but not least, we would also like to thank the Python
community (Van Rossum and Drake Jr, 1995; Oliphant, 2007) for developing tools
that enabled this work, including NumPy (Harris et al., 2020), Matplotlib
(Hunter, 2007) and JAX (Bradbury et al., 2018).
#### Broader impact
Although the work presented here is mostly of an academic nature, it aids in the development of more capable autonomous agents. While our contributions do not directly contribute to any negative societal impacts, we urge the community to consider these when building on our research.
## References
* Agarwal et al. [2021] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Deep reinforcement learning at the edge of the statistical precipice. In _Thirty-Fifth Conference on Neural Information Processing Systems_ , 2021.
* Agarwal et al. [2022] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Beyond tabula rasa: Reincarnating reinforcement learning. In _Thirty-Sixth Conference on Neural Information Processing Systems_ , 2022.
* Bartoldson et al. [2023] Brian R Bartoldson, Bhavya Kailkhura, and Davis Blalock. Compute-efficient deep learning: Algorithmic trends and opportunities. _Journal of Machine Learning Research_ , 24:1–77, 2023.
* Bellemare et al. [2013] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. _Journal of Artificial Intelligence Research_ , 47:253–279, June 2013. doi: 10.1613/jair.3912.
* Bellemare et al. [2016] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper_files/paper/2016/file/afda332245e2af431fb7b672a68b659d-Paper.pdf.
* Bellemare et al. [2017] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_ , ICML’17, page 449–458, 2017.
* Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, et al. JAX: composable transformations of Python+NumPy programs, 2018.
* Castro et al. [2018] Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G. Bellemare. Dopamine: A Research Framework for Deep Reinforcement Learning, 2018. URL http://arxiv.org/abs/1812.06110.
* Castro et al. [2021] Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, and Mark Rowland. MICo: Learning improved representations via sampling-based state similarity for Markov decision processes. In _Advances in Neural Information Processing Systems_ , 2021.
* Ceron and Castro [2021] Johan Samir Obando Ceron and Pablo Samuel Castro. Revisiting rainbow: Promoting more insightful and inclusive deep reinforcement learning research. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 1373–1383. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/ceron21a.html.
* Chen et al. [2021] Lili Chen, Kimin Lee, Aravind Srinivas, and Pieter Abbeel. Improving computational efficiency in visual reinforcement learning via stored embeddings. _Advances in Neural Information Processing Systems_ , 34:26779–26791, 2021.
* Dabney et al. [2018a] W. Dabney, M. Rowland, Marc G. Bellemare, and R. Munos. Distributional reinforcement learning with quantile regression. In _AAAI_ , 2018a.
* Dabney et al. [2018b] Will Dabney, Georg Ostrovski, David Silver, and Remi Munos. Implicit quantile networks for distributional reinforcement learning. In _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pages 1096–1105. PMLR, 2018b.
* Dabney et al. [2021] Will Dabney, Andre Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, and David Silver. The value-improvement path: Towards better representations for reinforcement learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2021.
* D’Oro et al. [2023] Pierluca D’Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, and Aaron Courville. Sample-efficient reinforcement learning by breaking the replay ratio barrier. In _The Eleventh International Conference on Learning Representations_ , 2023. URL https://openreview.net/forum?id=OpC-9aBBVJe.
* Eberhard et al. [2023] Onno Eberhard, Jakob Hollenstein, Cristina Pinneri, and Georg Martius. Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In _The Eleventh International Conference on Learning Representations_ , 2023. URL https://openreview.net/forum?id=hQ9V5QN27eS.
* Espeholt et al. [2018] Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures. In _Proceedings of the 35th International Conference on Machine Learning)_ , ICML’18, 2018.
* Farebrother et al. [2023] Jesse Farebrother, Joshua Greaves, Rishabh Agarwal, Charline Le Lan, Ross Goroshin, Pablo Samuel Castro, and Marc G Bellemare. Proto-value networks: Scaling representation learning with auxiliary tasks. In _The Eleventh International Conference on Learning Representations_ , 2023. URL https://openreview.net/forum?id=oGDKSt9JrZi.
* Fedus et al. [2020] William Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark Rowland, and Will Dabney. Revisiting fundamentals of experience replay. In _International Conference on Machine Learning_ , pages 3061–3071. PMLR, 2020.
* Fortunato et al. [2018] Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alexander Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and Shane Legg. Noisy networks for exploration. In _Proceedings of the International Conference on Representation Learning (ICLR 2018)_ , Vancouver (Canada), 2018.
* Gogianu et al. [2021] Florin Gogianu, Tudor Berariu, Mihaela C Rosca, Claudia Clopath, Lucian Busoniu, and Razvan Pascanu. Spectral normalisation for deep reinforcement learning: An optimisation perspective. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 3734–3744. PMLR, 18–24 Jul 2021.
* Golmant et al. [2018] Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W Mahoney, and Joseph Gonzalez. On the computational inefficiency of large batch sizes for stochastic gradient descent. _arXiv preprint arXiv:1811.12941_ , 2018.
* Hao et al. [2023] Jianye Hao, Tianpei Yang, Hongyao Tang, Chenjia Bai, Jinyi Liu, Zhaopeng Meng, Peng Liu, and Zhen Wang. Exploration in deep reinforcement learning: From single-agent to multiagent domain. _IEEE Transactions on Neural Networks and Learning Systems_ , pages 1–21, 2023. doi: 10.1109/tnnls.2023.3236361. URL https://doi.org/10.1109%2Ftnnls.2023.3236361.
* Harris et al. [2020] Charles R Harris, K Jarrod Millman, Stéfan J Van Der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. Array programming with numpy. _Nature_ , 585(7825):357–362, 2020.
* Hessel et al. [2018] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining Improvements in Deep Reinforcement learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2018.
* Hilton et al. [2022] Jacob Hilton, Karl Cobbe, and John Schulman. Batch size-invariance for policy optimization. _Advances in Neural Information Processing Systems_ , 35:17086–17098, 2022.
* Hunter [2007] John D Hunter. Matplotlib: A 2d graphics environment. _Computing in science & engineering_, 9(03):90–95, 2007.
* Islam et al. [2017] Riashat Islam, Peter Henderson, Maziar Gomrokchi, and Doina Precup. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control, 2017.
* Kaiser et al. [2020] Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model based reinforcement learning for atari. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=S1xCPJHtDB.
* Keskar et al. [2016] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. _arXiv preprint arXiv:1609.04836_ , 2016.
* Keskar et al. [2017] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In _International Conference on Learning Representations_ , 2017. URL https://openreview.net/forum?id=H1oyRlYgg.
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , 2015. URL http://arxiv.org/abs/1412.6980.
* Kumar et al. [2021a] Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. Implicit under-parameterization inhibits data-efficient deep reinforcement learning. In _International Conference on Learning Representations_ , 2021a. URL https://openreview.net/forum?id=O9bnihsFfXU.
* Kumar et al. [2021b] Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. Dr3: Value-based deep reinforcement learning requires explicit regularization. In _International Conference on Learning Representations_ , 2021b.
* Lahire et al. [2021] Thibault Lahire, Matthieu Geist, and Emmanuel Rachelson. Large batch experience replay. In _International Conference on Machine Learning_ , 2021. URL https://api.semanticscholar.org/CorpusID:238259488.
* Levine et al. [2020] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. _arXiv preprint arXiv:2005.01643_ , 2020.
* Lin [1992] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. _Mach. Learn._ , 8(3–4):293–321, May 1992\.
* Lyle et al. [2023] Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, and Will Dabney. Understanding plasticity in neural networks. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, _Proceedings of the 40th International Conference on Machine Learning_ , volume 202 of _Proceedings of Machine Learning Research_ , pages 23190–23211. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/lyle23b.html.
* Machado et al. [2018] Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. _J. Artif. Int. Res._ , 61(1):523–562, January 2018. ISSN 1076-9757.
* Masters and Luschi [2018] Dominic Masters and Carlo Luschi. Revisiting small batch training for deep neural networks. _ArXiv_ , abs/1804.07612, 2018.
* Mnih et al. [2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_ , 518(7540):529–533, February 2015.
* Nikishin et al. [2022] Evgenii Nikishin, Max Schwarzer, Pierluca D’Oro, Pierre-Luc Bacon, and Aaron Courville. The primacy bias in deep reinforcement learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 16828–16847. PMLR, 17–23 Jul 2022.
* Oliphant [2007] Travis E. Oliphant. Python for scientific computing. _Computing in Science & Engineering_, 9(3):10–20, 2007. doi: 10.1109/MCSE.2007.58.
* Plappert et al. [2018] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=ByBAl2eAZ.
* Schwarzer et al. [2020] Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. In _International Conference on Learning Representations_ , 2020.
* Schwarzer et al. [2023] Max Schwarzer, Johan Samir Obando Ceron, Aaron Courville, Marc G Bellemare, Rishabh Agarwal, and Pablo Samuel Castro. Bigger, better, faster: Human-level atari with human-level efficiency. In _International Conference on Machine Learning_ , pages 30365–30380. PMLR, 2023.
* Shallue et al. [2019] Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl. Measuring the effects of data parallelism on neural network training. _Journal of Machine Learning Research_ , 20(112):1–49, 2019. URL http://jmlr.org/papers/v20/18-789.html.
* Sokar et al. [2023] Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, and Utku Evci. The dormant neuron phenomenon in deep reinforcement learning. In _ICML_ , 2023.
* Stooke and Abbeel [2018] Adam Stooke and Pieter Abbeel. Accelerated methods for deep reinforcement learning. _CoRR_ , abs/1803.02811, 2018. URL http://arxiv.org/abs/1803.02811.
* Sutton and Barto [2018] Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_. MIT press, 2018.
* Taiga et al. [2020] Adrien Ali Taiga, William Fedus, Marlos C. Machado, Aaron Courville, and Marc G. Bellemare. On bonus based exploration methods in the arcade learning environment. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=BJewlyStDr.
* van Hasselt et al. [2019] Hado P van Hasselt, Matteo Hessel, and John Aslanides. When to use parametric models in reinforcement learning? In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/1b742ae215adf18b75449c6e272fd92d-Paper.pdf.
* Van Rossum and Drake Jr [1995] Guido Van Rossum and Fred L Drake Jr. _Python reference manual_. Centrum voor Wiskunde en Informatica Amsterdam, 1995.
* Watkins and Dayan [1992] Christopher JCH Watkins and Peter Dayan. Q-learning. _Machine learning_ , 8(3):279–292, 1992.
* Wilson and Martinez [2003] D.Randall Wilson and Tony R. Martinez. The general inefficiency of batch training for gradient descent learning. _Neural Networks_ , 16(10):1429–1451, 2003. ISSN 0893-6080. doi: https://doi.org/10.1016/S0893-6080(03)00138-2. URL https://www.sciencedirect.com/science/article/pii/S0893608003001382.
* Wołczyk and Krutsylo [2021] Maciej Wołczyk and Andrii Krutsylo. Remember more by recalling less: Investigating the role of batch size in continual learning with experience replay (student abstract). In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 35(18):15923–15924, 2021.
* Zhao et al. [2022] Yang Zhao, Hao Zhang, and Xiuyuan Hu. Penalizing gradient norm for efficiently improving generalization in deep learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 26982–26992. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/zhao22i.html.
## Appendix A Code availability
Our experiments were built on open source code, mostly from the Dopamine
repository. The root directory for these is
https://github.com/google/dopamine/tree/master/dopamine/, and we specify the
subdirectories below:
* DQN, Rainbow, QR-DQN and IQN agents from /jax/agents/
* Atari-100k agents from /labs/atari-100k/
* Batch size from /jax/agents/quantile/configs/quantile.gin (line 36)
* Exploration $\epsilon=0$ from /jax/agents/quantile/configs/quantile.gin (line 16)
* Resnet from /labs/offline-rl/jax/networks.py (line 108)
* Dormant neurons metric from /labs/redo/
For the srank metric experiments we used code from:
https://github.com/google-research/google-research/blob/master/generalization_representations_rl_aistats22/coherence/coherence_compute.py
## Appendix B Atari 2600 games used
Most of our experiments were run with 20 games from the ALE suite [Bellemare
et al., 2013], as suggested by Fedus et al. [2020]. However, for the Atari
100k agents (subsection 3.3), we used the standard set of 26 games [Kaiser et
al., 2020] to be consistent with the benchmark. Finally, we also ran some
experiments with the full set of 60 games. The specific games are detailed
below.
20 game subset: AirRaid, Asterix, Asteroids, Bowling, Breakout, DemonAttack,
Freeway, Gravitar, Jamesbond, MontezumaRevenge, MsPacman, Pong, PrivateEye,
Qbert, Seaquest, SpaceInvaders, Venture, WizardOfWor, YarsRevenge, Zaxxon.
26 game subset: Alien, Amidar, Assault, Asterix, BankHeist, BattleZone,
Boxing, Breakout, ChopperCommand, CrazyClimber, DemonAttack, Freeway,
Frostbite, Gopher, Hero, Jamesbond, Kangaroo, Krull, KungFuMaster, MsPacman,
Pong, PrivateEye, Qbert, RoadRunner, Seaquest, UpNDown.
60 game set: The 26 games above in addition to: AirRaid, Asteroids, Atlantis,
BeamRider, Berzerk, Bowling, Carnival, Centipede, DoubleDunk, ElevatorAction,
Enduro, FishingDerby, Gravitar, IceHockey, JourneyEscape, MontezumaRevenge,
NameThisGame, Phoenix, Pitfall, Pooyan, Riverraid, Robotank, Skiing, Solaris,
SpaceInvaders, StarGunner, Tennis, TimePilot, Tutankham, Venture,
VideoPinball, WizardOfWor, YarsRevenge, Zaxxon.
## Appendix C Wall-time versus IQM of human-normalized scores
Figure 14: Measuring wall-time versus IQM of human-normalized scores when
varying batch sizes and neural network architectures over 20 games in QR-DQN.
Each experiment had 3 independent runs, and we report 95% confidence
intervals.
## Appendix D Average gradient norm
Figure 15: Empirical analyses for 5 representative games with varying batch
sizes. Top: training returns, Bottom: average gradient norm. Results averaged
over 3 seeds, shaded areas represent 95% confidence intervals.
## Appendix E Gradient covariance
Figure 16: Gradient covariance plots for 6 representative games, which
highlight the role of the gradient structure with varying batch sizes. We find
that smaller batch sizes significantly improve performance and induce less
gradient interference and weaker gradient correlation.
## Appendix F Second order optimizer effects
Figure 17: Evaluating multiple gradient updates per training step on QR-DQN,
training curves for all games. Results averaged over 3 seeds, shaded areas
represent 95% confidence intervals.
## Appendix G Variance of updates
Figure 18: Evaluating the effect of adding target noise to QR-DQN, learning
curves for all games. Results averaged over 3 seeds, shaded areas represent
95% confidence intervals.
## Appendix H Results on the full ALE suite
We additionally provide complete results for all games using QR-DQN agent in
Figure 19.
Figure 19: Training curves for QR-DQN agent. The results for all games are
over 3 independent runs.
## Appendix I Varying architectures
Figure 20: Evaluating the effect of CNNx4 to QR-DQN, learning curves for all
games. Results averaged over 3 seeds, shaded areas represent 95% confidence
intervals. Figure 21: Evaluating the effect of Resnet to QR-DQN, learning
curves for all games. Results averaged over 3 seeds, shaded areas represent
95% confidence intervals.
## Appendix J Training Stability
Figure 22: Measuring IQM for human-normalized scores when training for 200
million frames using IQN [Dabney et al., 2018b]. Results aggregated over 20
games, where each experiment was run with 3 independent seeds and we report
95% confidence intervals. Figure 23: Learning curves for individual games,
when trained for 200 million frames using IQN [Dabney et al., 2018b]. Results
aggregated over 3 seeds, reporting 95% confidence intervals. Figure 24:
Learning curves for individual games, when trained for 200 million frames
using QR-DQN [Dabney et al., 2018a]. Results aggregated over 3 seeds,
reporting 95% confidence intervals.
# A nice acyclic matching on the nerve of the partition lattice
Ralf Donau, Fachbereich Mathematik, Universität Bremen, Bibliothekstraße 1,
28359 Bremen, Germany.<EMAIL_ADDRESS>
###### Abstract
The author has already proven that the space $\Delta(\Pi_{n})/G$ is homotopy
equivalent to a wedge of spheres of dimension $n-3$ for all natural numbers
$n\geq 3$ and all subgroups $G\subset S_{1}\times S_{n-1}$. We wish to
construct an acyclic matching on $\Delta(\Pi_{n})/G$ that allows us to give a
basis of its cohomology. This is also a more elementary approach to
determining the number of spheres. Furthermore we give a description of the
group action by an action on the spheres. We also obtain another result that
we call Equivariant Patchwork Theorem.
###### keywords:
Discrete Morse Theory, Regular trisp, Acyclic matching, Equivariant homotopy
Journal: Topology and its Applications; arXiv.org
## 1 Introduction
Let $n\geq 3$ and let $\Pi_{n}$ denote the poset consisting of all partitions
of $[n]:=\\{1,\dots,n\\}$ ordered by refinement, such that the finer partition
is the smaller partition. Let $\overline{\Pi}_{n}$ denote the poset obtained
from $\Pi_{n}$ by removing both the smallest and greatest element, which are
$\\{\\{1\\},\dots,\\{n\\}\\}$ and $\\{[n]\\}$, respectively. We consider
$\overline{\Pi}_{n}$ as a category, which is acyclic, and define
$\Delta(\overline{\Pi}_{n})$ to be the nerve of the acyclic category
$\overline{\Pi}_{n}$, which is a regular trisp, see [8, Chapter 10]. The
symmetric group $S_{n}$ operates on $\Delta(\overline{\Pi}_{n})$ in a natural
way.
It is well-known that $\Delta(\overline{\Pi}_{n})$ is homotopy equivalent to a
wedge of spheres of dimension $n-3$. The following two theorems are the first
results concerning the topology of the quotient
$\Delta(\overline{\Pi}_{n})/G$, where $G$ is a non-trivial subgroup of
$S_{n}$.
###### Theorem 1.1 (Kozlov, [7]).
For any $n\geq 3$, the topological space $\Delta(\overline{\Pi}_{n})/S_{n}$ is
contractible.
We set $S_{1}\times S_{n-1}:=\\{\sigma\in S_{n}\mid\sigma(1)=1\\}$.
###### Theorem 1.2 (Donau, [2]).
Let $n\geq 3$ and $G\subset S_{1}\times S_{n-1}$ be a subgroup, then the
topological space $\Delta(\overline{\Pi}_{n})/G$ is homotopy equivalent to a
wedge of $k$ spheres of dimension $n-3$, where $k$ is the index of $G$ in
$S_{1}\times S_{n-1}$.
This leads to a general question of determining the homotopy type of
$\Delta(\overline{\Pi}_{n})/G$ for an arbitrary subgroup $G\subset S_{n}$. One
might conjecture that $\Delta(\overline{\Pi}_{n})/G$ is homotopy equivalent to
a wedge of spheres for any $n\geq 3$ and any subgroup $G\subset S_{n}$. But
unfortunately this statement is not true as the following example will show:
Let $p\geq 5$ be a prime number and let $C_{p}$ denote the subgroup of $S_{p}$
that is generated by the cycle $(1,2,\dots,p)$. Then the fundamental group of
$\Delta(\overline{\Pi}_{p})/C_{p}$ is isomorphic to
$\mathbbm{Z}/p\mathbbm{Z}$. In particular $\Delta(\overline{\Pi}_{p})/C_{p}$
cannot be homotopy equivalent to a wedge of spheres. A proof, which uses facts
about covering spaces (see [6, Chapter 1.3]), can be found in [3].
In this paper we construct an $(S_{1}\times S_{n-1})$-equivariant acyclic
matching on the face poset ${\cal F}(\Delta(\overline{\Pi}_{n}))$ of
$\Delta(\overline{\Pi}_{n})$ for $n\geq 3$ such that we have a description of
the critical simplices. This induces an acyclic matching on
$\Delta(\overline{\Pi}_{n})/G$ for any subgroup $G\subset S_{1}\times
S_{n-1}$. Another benefit of having a description of the critical simplices is
that we can easily give cocycle representatives of the generators of the
cohomology, which can be useful for further analysis of the cohomology.
Equivariant acyclic matchings are also useful to find equivariant homotopies
between spaces, since there exists an equivariant version of the Main Theorem
of Discrete Morse Theory, see [5]. For the construction of an equivariant
acyclic matching we have similar tools as in Discrete Morse Theory. An
equivariant closure operator induces an equivariant trisp closure map which
induces an equivariant acyclic matching. A detailed description of the non-
equivariant versions of these tools can be found in [7].
## 2 Discrete Morse Theory
The definitions of regular trisps, partial matchings, acyclic matchings and
foundations of Discrete Morse Theory can be found in [4, 7, 8]. The following
two theorems of Discrete Morse Theory are frequently used in our proofs.
###### Theorem 2.1 (Patchwork Theorem).
Let $\varphi:P\longrightarrow Q$ be an order-preserving map and assume we have
acyclic matchings on the subposets $\varphi^{-1}(q)$ for all $q\in Q$. Then
the union of these matchings is an acyclic matching on $P$.
###### Theorem 2.2.
Let $\Delta$ be a finite regular trisp and let $M$ be an acyclic matching on
the poset ${\cal F}(\Delta)\setminus\\{\hat{0}\\}$. Let $c_{i}$ denote the
number of critical $i$-dimensional simplices of $\Delta$. Then $\Delta$ is
homotopy equivalent to a CW complex with $c_{i}$ cells of dimension $i$.
The proofs of Theorems 2.1 and 2.2 as well as further facts on Discrete Morse
Theory can be found in [8, Chapter 11].
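For concreteness, recall that a partial matching on a face poset is a set of pairs $(a,b)$ of cover relations, and it is acyclic if orienting every unmatched cover downward and every matched cover upward yields a digraph without directed cycles. The following Python sketch (our illustration of this definition only, not part of the constructions below) performs this check on an explicitly given list of cover relations.

```python
def is_acyclic_matching(covers, matching):
    """Check acyclicity of a partial matching in the Discrete Morse sense.

    covers: iterable of pairs (a, b) with a covered by b in the poset.
    matching: subset of `covers`; assumed to be a valid partial matching,
    i.e. every element occurs in at most one pair.
    """
    matching = set(matching)
    nodes, adj = set(), {}
    for (a, b) in covers:
        nodes.update((a, b))
        # matched covers are oriented upward a -> b, all others downward b -> a
        u, v = (a, b) if (a, b) in matching else (b, a)
        adj.setdefault(u, []).append(v)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def has_cycle_from(u):
        color[u] = GRAY
        for v in adj.get(u, []):
            if color[v] == GRAY or (color[v] == WHITE and has_cycle_from(v)):
                return True
        color[u] = BLACK
        return False

    return not any(color[u] == WHITE and has_cycle_from(u) for u in nodes)
```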
## 3 An Equivariant Patchwork Theorem
We wish to construct an equivariant acyclic matching on a poset by gluing
together smaller equivariant acyclic matchings on parts of the poset. This is
similar to Theorem 2.1 with the difference that we also create copies of these
matchings in our construction, see Figure 1.
###### Definition 3.1.
Let $P$ be a poset and let $G$ be a group acting on $P$. Let $M$ be an acyclic
matching on $P$. We call $M$ a _$G$-equivariant acyclic matching_ if
$(a,b)\in M$ implies $(ga,gb)\in M$ for all $g\in G$ and $a,b\in P$.
Let $G$ be a group acting on some posets $P$ and $Q$. For an element $q\in Q$
we set $G_{q}:=\\{g\in G\mid gq=q\\}$, known as the stabilizer subgroup of
$q$.
###### Proposition 3.2.
Let $\varphi:P\longrightarrow Q$ be an order-preserving $G$-map and let
$R\subset Q$ be a subset such that $R$ contains exactly one representative for
each orbit in $Q$. Assume for each $r\in R$ we have a $G_{r}$-equivariant
acyclic matching $M_{r}$ on $\varphi^{-1}(r)$. For $r\in R$, let $C_{r}$
denote the set of critical elements of $M_{r}$. Then we have an
$G$-equivariant acyclic matching on $P$ such that
$\bigcup_{g\in G,r\in R}gC_{r}$
is the set of critical elements.
Let $r\in R$ and assume $G_{r}$ acts transitively on $C_{r}$. Then $G$ acts
transitively on
$\bigcup_{g\in G}gC_{r}\,.$
###### Proof.
We define acyclic matchings on the fibers of $\varphi$ as follows. For each
$q\in Q$ we choose $r\in R$ and $g\in G$ with $gr=q$. The map
$g:\varphi^{-1}(r)\longrightarrow\varphi^{-1}(q)$, which is an isomorphism of
posets, induces an acyclic matching on $\varphi^{-1}(q)$. If we choose another
$h\in G$ with $hr=q$, then we obtain the same matching. By Theorem 2.1 the
union of these acyclic matchings is an acyclic matching which is
$G$-equivariant by construction. The second statement is easy to see. ∎
Figure 1: A simple example: A $\mathbbm{Z}_{2}$-equivariant acyclic matching
composed of acyclic matchings on the fibers of $0$ and $1$. The matching pair
in the fiber of $1$ is copied to the fiber of $2$. $\mathbbm{Z}_{2}$ acts on
both posets by reflection across the vertical line.
###### Remark 3.3.
Let $G$ be a group acting on a regular trisp $\Delta$. Assume we have a
$G$-equivariant acyclic matching $M$ on ${\cal
F}(\Delta)\setminus\\{\hat{0}\\}$. Let $C$ be the set of critical simplices.
Clearly we have an action of $G$ on $C$. Let $H\subset G$ be a subgroup. Then
$M/H$ is an acyclic matching on ${\cal F}(\Delta/H)\setminus\\{\hat{0}\\}$,
where $C/H$ is the set of critical simplices. In particular, if $\Delta$ is
$G$-collapsible, then $\Delta/H$ is collapsible. Furthermore if $H$ is a
normal subgroup, then the acyclic matching $M/H$ is $(G/H)$-equivariant.
We also have an equivariant version of the Main Theorem of Discrete Morse
Theory.
###### Theorem 3.4 (Freij, [5]).
Let $G$ be a finite group. Let $\Delta$ be a finite regular $G$-trisp and let
$M$ be a $G$-equivariant acyclic matching on the poset ${\cal
F}(\Delta)\setminus\\{\hat{0}\\}$. Let $c_{i}$ denote the number of critical
$i$-dimensional simplices of $\Delta$. Then $\Delta$ is $G$-homotopy
equivalent to a $G$-CW complex where the cells correspond to the critical
simplices of $M$ and the action of $G$ is the same as the action on $\Delta$
restricted to the critical simplices of $M$.
## 4 The main result
Let $n\geq 3$ be a fixed natural number.
###### Definition 4.1.
Let $A$ be the set of all vertices of $\Delta(\overline{\Pi}_{n})$ where all
blocks not containing $1$ are singleton. We define the following set of
simplices of $\Delta(\overline{\Pi}_{n})$, see Figure 2.
$C_{n}:=\\{\sigma\in{\cal
F}(\Delta(\overline{\Pi}_{n}))\mid\text{$V(\sigma)\subset A$ and
$\dim\sigma=n-3$}\\}$
$V(\sigma)$ denotes the set of vertices of $\sigma$ and ${\cal
F}(\Delta(\overline{\Pi}_{n}))$ denotes the face poset of
$\Delta(\overline{\Pi}_{n})$. Furthermore we set $\alpha_{n}$ to the vertex
$\\{\\{1\\},\\{2,\dots,n\\}\\}$.
Figure 2: A simplex in $C_{5}$ which has dimension $2$.
###### Remark 4.2.
The cardinality of $C_{n}$ is $(n-1)!$: a simplex in $C_{n}$ is a chain of $n-2$ vertices in $A$, and such a chain corresponds to the order in which $n-2$ of the $n-1$ elements $2,\dots,n$ successively join the block containing $1$, giving $(n-1)(n-2)\cdots 2=(n-1)!$ possibilities.
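This count is easy to verify by brute force for small $n$; the following Python sketch (our illustration, identifying a vertex in $A$ with the set of elements sharing the block of $1$) enumerates the chains described in Definition 4.1.

```python
from itertools import permutations
from math import factorial

def count_C_n(n):
    """Count the (n-3)-simplices with all vertices in A, i.e. the chains
    S_1 < ... < S_{n-2} of subsets of {2,...,n} with |S_i| = i, where S_i
    is the set of elements sharing the block of 1."""
    chains = set()
    for order in permutations(range(2, n + 1), n - 2):
        # inserting the elements of `order` one at a time yields a chain
        chains.add(tuple(frozenset(order[:i + 1]) for i in range(n - 2)))
    return len(chains)

for n in range(3, 8):
    assert count_C_n(n) == factorial(n - 1)
```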
###### Proposition 4.3.
There exists an $(S_{1}\times S_{n-1})$-equivariant acyclic matching on ${\cal
F}(\Delta(\overline{\Pi}_{n}))$, such that $C_{n}\cup\\{\alpha_{n}\\}$ is the
set of critical simplices.
Let $V$ be the set of all vertices where the block containing $1$ has exactly
two elements and any other block is singleton. Such a vertex can be written as
$v_{k}:=\\{\\{1,k\\},\\{2\\},\dots,\widehat{\\{k\\}},\dots,\\{n\\}\\}$
with $k\in\\{2,\ldots,n\\}$. The element with the hat above is omitted.
We define a poset $P:=V\cup\\{0\\}$ such that $0$ is the smallest element of
$P$ and the only element that is comparable with some other element. That
means that for $x,y\in P$, $x<y$ implies $x=0$. We define an order-preserving map
$\varphi:{\cal F}(\Delta(\overline{\Pi}_{n}))\longrightarrow P$ as follows.
Let $\sigma\in{\cal F}(\Delta(\overline{\Pi}_{n}))$, then we map $\sigma$ to
$0$ if $V(\sigma)\cap V=\emptyset$. Otherwise we map $\sigma$ to the special
vertex of $V$ that belongs to $\sigma$, which is unique. Notice that
$S_{1}\times S_{n-1}$ acts on $P$ in a natural way and $\varphi$ is an
$(S_{1}\times S_{n-1})$-map. $P$ has two orbits: one consists of the single
element $0$, and the other may be represented by $v_{n}$.
###### Lemma 4.4.
There exists an $(S_{1}\times S_{n-1})$-equivariant acyclic matching on
$\varphi^{-1}(0)$, such that $\alpha_{n}$ is the only critical simplex.
###### Proof.
The proof of Lemma 4.4 is the same as the proof of Lemma 3.2 in [2] for the
case $G=\\{\operatorname{id}_{[n]}\\}$. It is easy to see that the acyclic
matching, that is constructed in this proof, is $(S_{1}\times
S_{n-1})$-equivariant. ∎
###### Proof of Proposition 4.3.
It is easy to see that the statement is true for $n=3$. Now we assume $n>3$
and proceed by induction.
We define acyclic matchings on $\varphi^{-1}(0)$ and $\varphi^{-1}(v_{n})$ as
follows. By Lemma 4.4 there exists an $(S_{1}\times S_{n-1})$-equivariant
acyclic matching on $\varphi^{-1}(0)$, where $\alpha_{n}$ is the only critical
simplex.
We define a map
$\psi:{\cal
F}(\Delta(\overline{\Pi}_{n-1}))\longrightarrow\varphi^{-1}(v_{n})\setminus\\{v_{n}\\}$
as follows. We add $n$ to the block that contains $1$ in each partition and
append $v_{n}$ to the bottom of the chain, see Figure 3. The map $\psi$ is an
isomorphism of posets. A more general definition of $\psi$ as well as a
detailed description of its inverse can be found in the proof of Lemma 4.1 in
[2].
Figure 3: Example for the map $\psi$, where $n=5$.
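Concretely, representing a partition as a frozenset of frozensets and a simplex as the tuple of partitions in its chain, $\psi$ may be sketched as follows (our illustration only):

```python
def psi(chain, n):
    """Sketch of the map psi: add n to the block containing 1 in every
    partition of [n-1] occurring in `chain`, then append the vertex v_n
    at the bottom of the chain."""
    def add_n(partition):
        return frozenset(block | {n} if 1 in block else block
                         for block in partition)
    v_n = frozenset([frozenset({1, n})]
                    + [frozenset({k}) for k in range(2, n)])
    return (v_n,) + tuple(add_n(p) for p in chain)
```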
Via $\psi$ we get an acyclic matching $M$ on $\varphi^{-1}(v_{n})$, where the
set of critical simplices consists of the simplices in $\psi[C_{n-1}]$
together with one critical simplex $s_{n}$ of dimension $1$, consisting of the
two vertices $v_{n}$ and $\\{\\{1,n\\},\\{2,\dots,(n-1)\\}\\}$. Additionally
we have the critical simplex that has only the vertex $v_{n}$. Finally we
match $v_{n}$ with $s_{n}$.
We have to show that $\sigma(v_{n})=v_{n}$ and $(a,b)\in M$ implies $(\sigma
a,\sigma b)\in M$ for all $\sigma\in S_{1}\times S_{n-1}$ and all
$a,b\in\varphi^{-1}(v_{n})$. Let $\sigma\in S_{1}\times S_{n-1}$.
$\sigma(v_{n})=v_{n}$ implies $\sigma(1)=1$ and $\sigma(n)=n$. We define a
$\widetilde{\sigma}\in S_{1}\times S_{n-2}$ by setting
$\widetilde{\sigma}(x):=\sigma(x)$ for $1\leq x\leq n-1$. Notice
$\sigma\psi=\psi\widetilde{\sigma}$ which implies
$\psi^{-1}\sigma=\widetilde{\sigma}\psi^{-1}$. Let $(a,b)\in M$. Clearly we
have $(v_{n},s_{n})=(\sigma v_{n},\sigma s_{n})$, hence we assume
$a\not=v_{n}$ and $b\not=s_{n}$. By the induction hypothesis, we have an
acyclic matching $\widetilde{M}$ on ${\cal F}(\Delta(\overline{\Pi}_{n-1}))$
which is $(S_{1}\times S_{n-2})$-equivariant. By the construction of $M$ we
have $(\psi^{-1}(a),\psi^{-1}(b))\in\widetilde{M}$. This implies
$(\widetilde{\sigma}\psi^{-1}(a),\widetilde{\sigma}\psi^{-1}(b))\in\widetilde{M}$,
hence $(\sigma a,\sigma b)\in M$.
By Proposition 3.2 there exists an $(S_{1}\times S_{n-1})$-equivariant acyclic
matching on ${\cal F}(\Delta(\overline{\Pi}_{n}))$ such that
$\left(\bigcup_{g\in S_{1}\times
S_{n-1}}g\psi[C_{n-1}]\right)\cup\\{\alpha_{n}\\}$
is the set of critical elements. It is easy to see that this set equals
$C_{n}\cup\\{\alpha_{n}\\}$. ∎
###### Corollary 4.5.
Let $G\subset S_{1}\times S_{n-1}$ be a subgroup. Then there exists an acyclic
matching on ${\cal F}(\Delta(\overline{\Pi}_{n}))/G$, such that the set of
critical simplices consists of the simplices in $C_{n}/G$ and $\alpha_{n}/G$.
###### Proof.
Apply Remark 3.3. ∎
###### Example 4.6.
Assume $G=S_{1}\times S_{n-1}$. The vertices of
$\Delta(\overline{\Pi}_{n})/S_{1}\times S_{n-1}$ can be indexed with number
partitions of $n$, which we may write as $v_{0}\oplus v_{1}+\dots+v_{r}$, and
which distinguish the first number, i.e. $\oplus$ is non-commutative. The number on
the left side of $\oplus$, that is $v_{0}$, corresponds to the block that
contains $1$. There exists an acyclic matching on the poset ${\cal
F}(\Delta(\overline{\Pi}_{n})/S_{1}\times S_{n-1})$, where the set of critical
simplices consists of the vertex $1\oplus(n-1)$ and the unique simplex
$\sigma$ whose vertices are $v_{0}\oplus 1^{n-v_{0}}$ with
$v_{0}=2,\dots,n-1$, which has dimension $n-3$.
A slightly different proof of the result in Example 4.6, as well as a detailed
description of $\Delta(\overline{\Pi}_{n})/S_{1}\times S_{n-1}$, can be found
in [1].
## 5 Applications
Let $n\geq 3$ be a fixed natural number.
###### Corollary 5.1.
The topological space $\Delta(\overline{\Pi}_{n})$ is $(S_{1}\times
S_{n-1})$-homotopy equivalent to a wedge of $(n-1)!$ spheres of dimension
$n-3$. The spheres are indexed with the simplices in $C_{n}$, which induces an
action of $S_{1}\times S_{n-1}$ on the $(n-1)!$ spheres.
###### Proof.
Apply Theorem 3.4. ∎
###### Lemma 5.2.
$S_{1}\times S_{n-1}$ acts freely and transitively on $C_{n}$.
###### Proof.
Since $C_{n}=\bigcup_{g\in S_{1}\times S_{n-1}}g\psi[C_{n-1}]$, the action is
transitive, which follows inductively by the second statement of Proposition
3.2.
By Remark 4.2 the cardinality of $C_{n}$ is $(n-1)!$ which equals the
cardinality of $S_{1}\times S_{n-1}$. Hence the action is free. ∎
Let $G\subset S_{1}\times S_{n-1}$ be an arbitrary subgroup.
###### Remark 5.3.
The cardinality of $C_{n}/G$ is the index of $G$ in $S_{1}\times S_{n-1}$.
###### Proof.
Apply Lemma 5.2. ∎
Now Theorem 1.2 follows as a corollary. We can either apply Corollary 4.5 or
Corollary 5.1.
###### Corollary 5.4.
The topological space $\Delta(\overline{\Pi}_{n})/G$ is homotopy equivalent to
a wedge of spheres of dimension $n-3$. The number of spheres is the index of
$G$ in $S_{1}\times S_{n-1}$.
## Acknowledgments
The author would like to thank Dmitry N. Kozlov for this interesting problem,
Ragnar Freij and Giacomo d’Antonio for the helpful discussions.
## References
* [1] R. Donau, On a quotient topology of the partition lattice with forbidden block sizes, Topology and its Applications 159 (8) (2012), pp. 2052-2057.
* [2] R. Donau, Quotients of the order complex $\Delta(\overline{\Pi}_{n})$ by subgroups of the Young subgroup $S_{1}\times S_{n-1}$, Topology and its Applications 157 (16) (2010), pp. 2476-2479.
* [3] R. Donau, Quotients of the topology of the partition lattice which are not homotopy equivalent to wedges of spheres, arXiv:1202.4368v2 [math.AT] (2012).
* [4] R. Forman, Morse theory for cell complexes, Adv. Math. 134 (1) (1998), pp. 90-145.
* [5] R. Freij, Equivariant discrete Morse theory, Discrete Mathematics 309 (12) (2009), pp. 3821-3829.
* [6] A. Hatcher, Algebraic Topology, Cambridge University Press, 2008.
* [7] D.N. Kozlov, Closure maps on regular trisps, Topology and its Applications 156 (15) (2009), pp. 2491-2495.
* [8] D.N. Kozlov, Combinatorial Algebraic Topology, Algorithms and Computation in Mathematics 21, Springer-Verlag Berlin Heidelberg, 2008.
# Spectrum of a class of matrices and its applications

Lihua You (corresponding author,<EMAIL_ADDRESS>), Man Yang (<EMAIL_ADDRESS>), Jinxi Li (<EMAIL_ADDRESS>) and Liyong Ren (<EMAIL_ADDRESS>)

(School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, P.R. China)

L. You's research is supported by the Zhujiang Technology New Star Foundation of Guangzhou (Grant No. 2011J2200090) and the Program on International Cooperation and Innovation, Department of Education, Guangdong Province (Grant No. 2012gjhz0007).
Abstract: In this paper, we give the spectrum of a matrix by using the
quotient matrix; we then apply this result to various matrices associated with
a graph or a digraph, including the adjacency matrix, the (signless) Laplacian
matrix, the distance matrix and the distance (signless) Laplacian matrix, to
obtain some known and new results. Moreover, we propose some problems for
further research.
AMS Classification: 05C50, 05C35, 05C20, 15A18
Keywords: Matrix; Quotient matrix; Graph; Digraph; Spectrum; Spectral radius
## 1 Introduction
We begin by recalling some definitions. Let $M$ be an $n\times n$ matrix,
$\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ be the eigenvalues of $M$. It is
obvious that the eigenvalues may be complex numbers since $M$ is not symmetric
in general. We usually assume that
$|\lambda_{1}|\geq|\lambda_{2}|\geq\ldots\geq|\lambda_{n}|$. The spectral
radius of $M$ is defined as $\rho(M)=|\lambda_{1}|$, i.e., it is the largest
modulus of the eigenvalues of $M$. If $M$ is a nonnegative matrix, it follows
from the Perron-Frobenius theorem that the spectral radius $\rho(M)$ is an
eigenvalue of $M$. If $M$ is a nonnegative irreducible matrix, it follows from
the Perron-Frobenius theorem that $\rho(M)=\lambda_{1}$ is a simple
eigenvalue.
Let $G$ be a connected graph with vertex set $V(G)$ and edge set $E(G)$. Let
$A(G)=(a_{ij})$ denote the adjacency matrix of $G$, where $a_{ij}$ is equal to
the number of edges $v_{i}v_{j}$. The spectral radius of $A(G)$, denoted by
$\rho(G)$, is called the spectral radius of $G$. Let
$diag(G)=diag(d_{1},d_{2},\ldots,d_{n})$ be the diagonal matrix with degree of
the vertices of $G$ and $Q(G)=diag(G)+A(G)$ be the signless Laplacian matrix
of $G$, $L(G)=diag(G)-A(G)$ be the Laplacian matrix of $G$. The spectral
radius of $Q(G)$, denoted by $q(G)$, is called the signless Laplacian spectral
radius of $G$. The spectral radius of $L(G)$, denoted by $\mu(G)$, is called
the Laplacian spectral radius of $G$.
For $u,v\in V(G)$, the distance between $u$ and $v$, denoted by $d_{G}(u,v)$
or $d_{uv}$, is the length of the shortest path connecting them in $G$. For
$u\in V(G)$, the transmission of vertex $u$ in $G$ is the sum of distances
between $u$ and all other vertices of $G$, denoted by $Tr_{G}(u)$.
Let $G$ be a connected graph with vertex set
$V(G)=\\{v_{1},v_{2},\ldots,v_{n}\\}$. The distance matrix of $G$ is the
$n\times n$ matrix $\mathcal{D}(G)=(d_{ij})$ where $d_{ij}=d_{v_{i}v_{j}}$.
The distance spectral radius of $G$, denoted by $\rho^{\mathcal{D}}(G)$, is
the spectral radius of $\mathcal{D}(G)$, which is the largest eigenvalue of
$\mathcal{D}(G)$.
In fact, for $1\leq i\leq n$, the transmission of vertex $v_{i}$,
$Tr_{G}(v_{i})$ is just the $i$-th row sum of $\mathcal{D}(G)$. Let
$Tr(G)=diag(Tr_{G}(v_{1}),Tr_{G}(v_{2}),\ldots,Tr_{G}(v_{n}))$ be the diagonal
matrix of vertex transmission of $G$. M. Aouchiche and P. Hansen [1]
introduced the Laplacian and the signless Laplacian for the distance matrix of
a connected graph. The matrix $\mathcal{L}(G)=Tr(G)-\mathcal{D}(G)$ is called
the distance Laplacian of $G$, while the matrix
$\mathcal{Q}(G)=Tr(G)+\mathcal{D}(G)$ is called the distance signless
Laplacian matrix of $G$. It is obvious that $\mathcal{Q}(G)$ is irreducible,
nonnegative, symmetric and positive semidefinite. The distance signless
Laplacian spectral radius of $G$, denoted by $q^{\mathcal{D}}(G)$, is the
spectral radius of $\mathcal{Q}(G)$, which is the largest eigenvalue of
$\mathcal{Q}(G)$. The spectral radius of $\mathcal{L}(G)$, denoted by
$\mu^{\mathcal{D}}(G)$, is called the distance Laplacian spectral radius of
$G$.
Since $G$ is a connected graph, then $A(G)$, $Q(G)$, $\mathcal{D}(G)$ and
$\mathcal{Q}(G)$ are nonnegative irreducible matrices, it follows from the
Perron-Frobenius theorem that $\rho(G)$, $q(G)$, $\rho^{\mathcal{D}}(G)$ and
$q^{\mathcal{D}}(G)$ are real numbers and there is a positive unit eigenvector
corresponding to $\rho(G)$, $q(G)$, $\rho^{\mathcal{D}}(G)$ and
$q^{\mathcal{D}}(G)$, respectively.
Let $\overrightarrow{G}=(V(\overrightarrow{G}),E(\overrightarrow{G}))$ be a
digraph, where $V(\overrightarrow{G})=\\{v_{1},v_{2},\ldots,v_{n}\\}$ and
$E(\overrightarrow{G})$ are the vertex set and arc set of
$\overrightarrow{G}$, respectively. A digraph $\overrightarrow{G}$ is simple
if it has no loops and multiple arcs. A digraph $\overrightarrow{G}$ is
strongly connected if for every pair of vertices $v_{i},v_{j}\in
V(\overrightarrow{G})$, there are directed paths from $v_{i}$ to $v_{j}$ and
from $v_{j}$ to $v_{i}$. In this paper, we consider finite, simple strongly
connected digraphs.
Let $\overrightarrow{G}$ be a digraph. If two vertices are connected by an
arc, then they are called adjacent. For $e=(v_{i},v_{j})\in
E(\overrightarrow{G})$, $v_{i}$ is the tail (the initial vertex) of $e$,
$v_{j}$ is the head (the terminal vertex) of $e$.
Let $N^{-}_{\overrightarrow{G}}(v_{i})=\\{v_{j}\in
V(\overrightarrow{G})|(v_{j},v_{i})\in E(\overrightarrow{G})\\}$ and
$N^{+}_{\overrightarrow{G}}(v_{i})=\\{v_{j}\in V(\overrightarrow{G})|$
$(v_{i},v_{j})\in E(\overrightarrow{G})\\}$ denote the in-neighbors and out-
neighbors of $v_{i}$, respectively.
For a digraph $\overrightarrow{G}$, let $A(\overrightarrow{G})=(a_{ij})$
denote the adjacency matrix of $\overrightarrow{G}$, where $a_{ij}$ is equal
to the number of arcs $(v_{i},v_{j})$. The spectral radius of
$A(\overrightarrow{G})$, denoted by $\rho(\overrightarrow{G})$, is called the
spectral radius of $\overrightarrow{G}$.
Let $diag(\overrightarrow{G})=diag(d^{+}_{1},d^{+}_{2},\ldots,d^{+}_{n})$ be
the diagonal matrix with outdegree of the vertices of $\overrightarrow{G}$ and
$Q(\overrightarrow{G})=diag(\overrightarrow{G})+A(\overrightarrow{G})$ be the
signless Laplacian matrix of $\overrightarrow{G}$,
$L(\overrightarrow{G})=diag(\overrightarrow{G})-A(\overrightarrow{G})$ be the
Laplacian matrix of $\overrightarrow{G}$. The spectral radius of
$Q(\overrightarrow{G})$, $\rho(Q(\overrightarrow{G}))$, denoted by
$q(\overrightarrow{G})$, is called the signless Laplacian spectral radius of
$\overrightarrow{G}$.
For $u,v\in V(\overrightarrow{G})$, the distance from $u$ to $v$, denoted by
$d_{\overrightarrow{G}}(u,v)$ or $d_{uv}$, is the length of the shortest
directed path from $u$ to $v$ in ${\overrightarrow{G}}$. For $u\in
V({\overrightarrow{G}})$, the transmission of vertex $u$ in
${\overrightarrow{G}}$ is the sum of distances from $u$ to all other vertices
of ${\overrightarrow{G}}$, denoted by $Tr_{{\overrightarrow{G}}}(u)$.
Let ${\overrightarrow{G}}$ be a connected digraph with vertex set
$V({\overrightarrow{G}})=\\{v_{1},v_{2},\ldots,v_{n}\\}$. The distance matrix
of ${\overrightarrow{G}}$ is the $n\times n$ matrix
$\mathcal{D}({\overrightarrow{G}})=(d_{ij})$ where
$d_{ij}=d_{\overrightarrow{G}}(v_{i},v_{j})$. The distance spectral radius of
$\overrightarrow{G}$, denoted by $\rho^{\mathcal{D}}(\overrightarrow{G})$, is
the spectral radius of $\mathcal{D}(\overrightarrow{G})$.
In fact, for $1\leq i\leq n$, the transmission of vertex $v_{i}$,
$Tr_{\overrightarrow{G}}(v_{i})$ is just the $i$-th row sum of
$\mathcal{D}(\overrightarrow{G})$. Let
$Tr(\overrightarrow{G})=diag(Tr_{\overrightarrow{G}}(v_{1}),Tr_{\overrightarrow{G}}(v_{2}),\ldots,Tr_{\overrightarrow{G}}(v_{n}))$
be the diagonal matrix of vertex transmission of $\overrightarrow{G}$. The
distance signless Laplacian matrix of ${\overrightarrow{G}}$ is the $n\times
n$ matrix, defined similarly to the undirected case of Aouchiche and Hansen, as
$\mathcal{Q}({\overrightarrow{G}})=Tr({\overrightarrow{G}})+\mathcal{D}({\overrightarrow{G}}).$
Let
$\mathcal{L}(\overrightarrow{G})=Tr({\overrightarrow{G}})-\mathcal{D}({\overrightarrow{G}})$
be the distance Laplacian matrix of $\overrightarrow{G}$. The distance
signless Laplacian spectral radius of $\overrightarrow{G}$,
$\rho(\mathcal{Q}(\overrightarrow{G}))$, denoted by
$q^{\mathcal{D}}(\overrightarrow{G})$, is the spectral radius of
$\mathcal{Q}(\overrightarrow{G})$.
Since $\overrightarrow{G}$ is a simple strongly connected digraph, then
$A(\overrightarrow{G})$, $Q(\overrightarrow{G})$,
$\mathcal{D}({\overrightarrow{G}})$ and $\mathcal{Q}({\overrightarrow{G}})$
are nonnegative irreducible matrices. It follows from the Perron-Frobenius
theorem that $\rho(\overrightarrow{G})$,
$\rho(Q(\overrightarrow{G}))=q(\overrightarrow{G})$,
$\rho^{\mathcal{D}}(\overrightarrow{G})$ and
$\rho(\mathcal{Q}(\overrightarrow{G}))=q^{\mathcal{D}}(\overrightarrow{G})$
are positive real numbers and there is a positive unit eigenvector
corresponding to $\rho(\overrightarrow{G})$, $q(\overrightarrow{G})$,
$\rho^{\mathcal{D}}(\overrightarrow{G})$ and
$q^{\mathcal{D}}(\overrightarrow{G})$, respectively.
For a connected graph $G=(V(G),E(G))$, the vertex connectivity, denoted by
$\kappa(G)$, is the minimum number of vertices whose deletion leaves the
resulting graph disconnected. Clearly, if $G$ is a connected graph on $n$
vertices, then $1\leq\kappa(G)\leq n-1$. Similarly, for a strongly
connected digraph
$\overrightarrow{G}=(V(\overrightarrow{G}),E(\overrightarrow{G}))$, the vertex
connectivity, denoted by $\kappa(\overrightarrow{G})$, is the minimum number
of vertices whose deletion leaves the resulting digraph non-strongly
connected. Clearly, if $\overrightarrow{G}$ is a strongly connected digraph
with $n$ vertices, then $1\leq\kappa(\overrightarrow{G})\leq n-1$.
There is a large literature on the connectivity of graphs and digraphs. For
early work, see [27], where Ye-Fan-Liang characterize the graphs with the
minimal least eigenvalue among all graphs with given vertex connectivity or edge
connectivity. In 2010, Ye-Fan-Wang [28] characterize the graphs with maximum
signless Laplacian or adjacency spectral radius among all graphs with fixed
order and given vertex or edge connectivity. Liu [24] characterized the
minimal distance spectral radius of simple connected graphs with given vertex
connectivity, or matching number, or chromatic number, respectively. Brualdi
[4] wrote a stimulating survey on this topic.
In 2012, Lin-Shu-Wu-Yu [20] establish some upper or lower bounds for digraphs
with some given graph parameters, such as clique number, girth, and vertex
connectivity, and characterize the corresponding extremal graphs, give the
exact value of the spectral radii of those digraphs. Besides, Lin-Yang-Zhang-
Shu [22] characterize the extremal digraphs (graphs) with minimum distance
spectral radius among all digraphs (graphs) with given vertex (edge)
connectivity.
In 2013, Lin-Drury [18] characterize the extremal digraphs which attain the
maximum Perron root of digraphs with given arc connectivity and number of
vertices. Lin-Shu [21] determine the extremal digraph with the minimal
distance spectral radius with given arc connectivity. Xing-Zhou [26] determine
the graphs with minimal distance signless Laplacian spectral radius among the
connected graphs with fixed number of vertices and connectivity. Oscar Rojo
and Eber Lenes [25] obtained a sharp upper bound on the incidence energy of
graphs in terms of connectivity. Furthermore, some upper or lower bounds were
obtained by the outdegrees and the average 2-outdegrees [6, 11].
In 2014, Hong-You [13] determine the digraphs with maximal signless Laplacian
spectral radius among the strongly connected digraphs with given vertex
connectivity. On the other hand, some extremal digraphs which attain the
maximum or minimum spectral radius, the signless Laplacian spectral radius,
the distance spectral radius, or the distance signless Laplacian spectral
radius of digraphs with given parameters, such as given vertex connectivity,
given arc connectivity, given dichromatic number, given clique number, given
girth and so on, were characterized, see e.g. [18, 21, 20, 22, 26].
In this paper, we give the spectrum of a matrix using the quotient matrix, and
also apply these results to various matrices associated with graphs and
digraphs as mentioned above. Some known results are improved.
## 2 Some preliminaries
###### Definition 2.1.
([3, 14]) Let $A=(a_{ij}),B=(b_{ij})$ be $n\times n$ matrices. If $a_{ij}\leq
b_{ij}$ for all $i$ and $j$, then $A\leq B$. If $A\leq B$ and $A\neq B$, then
$A<B$. If $a_{ij}<b_{ij}$ for all $i$ and $j$, then $A\ll B$.
###### Lemma 2.2.
([3, 14]) Let $A,B$ be $n\times n$ matrices with the spectral radius $\rho(A)$
and $\rho(B)$. If $0\leq A\leq B$, then $\rho(A)\leq\rho(B)$. Furthermore, if
$B$ is irreducible and $0\leq A<B$, then $\rho(A)<\rho(B)$.
By Lemma 2.2, we have the following results in terms of digraphs.
###### Corollary 2.3.
Let $\overrightarrow{G}$ be a digraph and $\overrightarrow{H}$ be a spanning
subdigraph of $\overrightarrow{G}$. Then
(i) $\rho(\overrightarrow{H})\leq\rho(\overrightarrow{G})$,
$q(\overrightarrow{H})\leq q(\overrightarrow{G})$.
(ii) If $\overrightarrow{G}$ is strongly connected, and $\overrightarrow{H}$
is a proper subdigraph of $\overrightarrow{G}$, then
$\rho(\overrightarrow{H})<\rho(\overrightarrow{G})$,
$q(\overrightarrow{H})<q(\overrightarrow{G})$.
(iii) If $\overrightarrow{G}$ and $\overrightarrow{H}$ are strongly
connected, then
$\rho^{\mathcal{D}}(\overrightarrow{H})\geq\rho^{\mathcal{D}}(\overrightarrow{G})$,
$q^{\mathcal{D}}(\overrightarrow{H})\geq q^{\mathcal{D}}(\overrightarrow{G})$.
(iv) If $\overrightarrow{H}$ is a proper subdigraph of $\overrightarrow{G}$,
then
$\rho^{\mathcal{D}}(\overrightarrow{H})>\rho^{\mathcal{D}}(\overrightarrow{G})$,
$q^{\mathcal{D}}(\overrightarrow{H})>q^{\mathcal{D}}(\overrightarrow{G})$.
The analogous results for undirected graphs also hold.
###### Lemma 2.4.
([3]) If $A$ is an $n\times n$ nonnegative matrix with the spectral radius
$\rho(A)$ and row sums $r_{1},r_{2},\ldots,r_{n}$, then $\min\limits_{1\leq
i\leq n}r_{i}\leq\rho(A)\leq\max\limits_{1\leq i\leq n}r_{i}$. Moreover, if
$A$ is irreducible, then one of the equalities holds if and only if the row
sums of $A$ are all equal.
By Lemma 2.4, we have
$\rho(\overset{\longleftrightarrow}{K_{n}})=\rho^{\mathcal{D}}(\overset{\longleftrightarrow}{K_{n}})=n-1$,
$q(\overset{\longleftrightarrow}{K_{n}})=q^{\mathcal{D}}(\overset{\longleftrightarrow}{K_{n}})=2(n-1)$;
$\rho(\overrightarrow{C_{n}})=1$, $q(\overrightarrow{C_{n}})=2$,
$\rho^{\mathcal{D}}(\overrightarrow{C_{n}})=\frac{n(n-1)}{2}$,
$q^{\mathcal{D}}(\overrightarrow{C_{n}})=n(n-1)$. Then by Corollary 2.3, we
have
###### Corollary 2.5.
Let $\overrightarrow{G}$ be a strongly connected digraph with $n$ vertices.
Then
$\rho(\overrightarrow{G})\leq n-1,\quad q(\overrightarrow{G})\leq
2n-2,\quad\rho^{\mathcal{D}}(\overrightarrow{G})\geq n-1,\quad
q^{\mathcal{D}}(\overrightarrow{G})\geq 2n-2,$
with equality if and only if
$\overrightarrow{G}\cong\overset{\longleftrightarrow}{K_{n}}$.
###### Corollary 2.6.
Let $\overrightarrow{G}$ be a strongly connected digraph with $n$ vertices.
Then
$\rho(\overrightarrow{G})\geq 1,\quad q(\overrightarrow{G})\geq
2,\quad\rho^{\mathcal{D}}(\overrightarrow{G})\leq\frac{n(n-1)}{2},\quad
q^{\mathcal{D}}(\overrightarrow{G})\leq n(n-1),$
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{C_{n}}.$
###### Proof.
In [21], Theorem 3.2 shows that
$\rho^{\mathcal{D}}(\overrightarrow{G})\leq\frac{n(n-1)}{2},$ and the equality
holds if and only if $\overrightarrow{G}\cong\overrightarrow{C_{n}}.$ Now we
only show $q^{\mathcal{D}}(\overrightarrow{G})\leq n(n-1)$ and the equality
holds if and only if $\overrightarrow{G}\cong\overrightarrow{C_{n}}.$
If $\overrightarrow{G}$ has a Hamiltonian dicycle, we have
$q^{\mathcal{D}}(\overrightarrow{G})\leq
q^{\mathcal{D}}(\overrightarrow{C_{n}})$ and the equality holds if and only if
$\overrightarrow{G}\cong\overrightarrow{C_{n}}$ by Corollary 2.3.
Now suppose that $\overrightarrow{G}$ does not contain a Hamiltonian dicycle. Note that
$\max\limits_{1\leq i\leq n}r_{i}\leq n(n-1)$. If $\max\limits_{1\leq i\leq
n}r_{i}<n(n-1)$, then
$q^{\mathcal{D}}(\overrightarrow{G})\leq\max\limits_{1\leq i\leq
n}r_{i}<n(n-1)=q^{\mathcal{D}}(\overrightarrow{C_{n}})$ by Lemma 2.4.
If $\max\limits_{1\leq i\leq n}r_{i}=n(n-1)$, then $\overrightarrow{G}$
contains a vertex $v_{1}$ such that $2Tr_{\overrightarrow{G}}(v_{1})=n(n-1)$,
so $\overrightarrow{G}$ contains a Hamiltonian dipath $P$ initiating at
$v_{1}$. Suppose that $P=v_{1}\rightarrow v_{2}\rightarrow\ldots\rightarrow
v_{n}$ is the Hamiltonian dipath initiating at $v_{1}$. Then there is no arc
$(v_{i},v_{j})\in E(\overrightarrow{G})$ with $j-i\geq 2$, since
$Tr_{\overrightarrow{G}}(v_{1})=\frac{n(n-1)}{2}$. Since $\overrightarrow{G}$
is strongly connected and does not contain a Hamiltonian dicycle, there exists
a dipath $P^{\prime}$ from $v_{n}$ to $v_{1}$, and thus there exists some
vertex, say $v_{k}$ ($k\neq n$), which is adjacent to $v_{1}$, that is,
$(v_{k},v_{1})\in E(\overrightarrow{G})$. Since $v_{k}$ is on the Hamiltonian
dipath $P$, we have $(v_{k},v_{k+1})\in E(\overrightarrow{G})$. Hence
$r_{k}\leq
2(1+1+2+\ldots+n-2)<2(1+2+\ldots+n-1)=2Tr_{\overrightarrow{G}}(v_{1})=n(n-1),$
it implies that the row sums of $\mathcal{Q}(\overrightarrow{G})$ are not
equal. Then by Lemma 2.4, we have
$q^{\mathcal{D}}(\overrightarrow{G})<q^{\mathcal{D}}(\overrightarrow{C_{n}}).$
Combining the above arguments, we complete the proof. ∎
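The equality case can also be confirmed numerically: on $\overrightarrow{C_{n}}$ the distance from $v_{i}$ to $v_{j}$ is $(j-i)\bmod n$, so every row of $\mathcal{Q}(\overrightarrow{C_{n}})$ sums to $n(n-1)$ and Lemma 2.4 applies with equality. The sketch below (our illustration) checks this with numpy.

```python
import numpy as np

def qD_directed_cycle(n):
    """Spectral radius of the distance signless Laplacian of the directed
    cycle C_n, whose distance matrix is d_ij = (j - i) mod n."""
    D = np.array([[(j - i) % n for j in range(n)] for i in range(n)])
    Q = np.diag(D.sum(axis=1)) + D
    return max(abs(np.linalg.eigvals(Q)))

for n in range(3, 9):
    assert np.isclose(qD_directed_cycle(n), n * (n - 1))
```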
## 3 The spectrum of a matrix
Let $I_{p}$ be the $p\times p$ identity matrix and $J_{p,q}$ be the $p\times
q$ matrix in which every entry is $1$, or simply $J_{p}$ if $p=q$. Let $M$ be
a matrix of order $n$, $\sigma(M)$ be the spectrum of the matrix $M$,
$P_{M}(\lambda)=\det(\lambda I_{n}-M)$ be the characteristic polynomial of the matrix $M$.
###### Definition 3.1.
([23]) Let $M$ be a real matrix of order $n$ described in the following block
form
$M=\left(\begin{array}[]{ccc}M_{11}&\cdots&M_{1t}\\\ \vdots&\ddots&\vdots\\\
M_{t1}&\cdots&M_{tt}\\\ \end{array}\right),$ (3.1)
where the diagonal blocks $M_{ii}$ are $n_{i}\times n_{i}$ matrices for any
$i\in\\{1,2,\ldots,t\\}$ and $n=n_{1}+\ldots+n_{t}$. For any
$i,j\in\\{1,2,\ldots,t\\}$, let $b_{ij}$ denote the average row sum of
$M_{ij}$, i.e. $b_{ij}$ is the sum of all entries in $M_{ij}$ divided by the
number of rows. Then $B(M)=(b_{ij})$ (simply by $B$) is called the quotient
matrix of $M$. If in addition for each pair $i,j$, $M_{ij}$ has constant row
sum, then $B(M)$ is called the equitable quotient matrix of $M$.
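A minimal computational sketch (ours) of Definition 3.1, which builds the quotient matrix $B(M)$ from $M$ and the block sizes $n_{1},\ldots,n_{t}$:

```python
import numpy as np

def quotient_matrix(M, block_sizes):
    """Quotient matrix B(M) of Definition 3.1: b_ij is the sum of all
    entries of the block M_ij divided by its number of rows n_i."""
    idx = np.cumsum([0] + list(block_sizes))
    t = len(block_sizes)
    B = np.zeros((t, t))
    for i in range(t):
        for j in range(t):
            block = M[idx[i]:idx[i + 1], idx[j]:idx[j + 1]]
            B[i, j] = block.sum() / block_sizes[i]
    return B
```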
###### Lemma 3.2.
Let $M=(m_{ij})_{n\times n}$ be defined as (3.1), and for any
$i,j\in\\{1,2,\ldots,t\\}$, let the row sum of each block $M_{ij}$ be constant. Let
$B=B(M)=(b_{ij})$ be the equitable quotient matrix of $M$, and $\lambda$ be an
eigenvalue of $B$. Then $\lambda$ is also an eigenvalue of $M$.
###### Proof.
Let $By=\lambda y$ where $y=(y_{1},y_{2},\ldots,y_{t})^{T}$. Define
$Y=(y_{11},\ldots,y_{1,n_{1}},\ldots,y_{t1},\ldots,y_{t,n_{t}})^{T}$ by the
relation $y_{i1}=y_{i2}=\ldots=y_{i,n_{i}}=y_{i}$ for each
$i\in\\{1,2,\ldots,t\\}$. For any $i\in\\{1,2,\ldots,t\\}$ and
$k\in\\{1,2,\ldots,n_{i}\\}$, let $M_{i}(k)$ be the $k$-th row of the $i$-th
row of blocks $(M_{i1},\ldots,M_{it})$; that is, $M_{i}(k)$ is the $l$-th row of
$M$ where $l=n_{1}+\ldots+n_{i-1}+k$. Then by
$M_{i}(k)Y=(MY)_{l}=\sum\limits_{j=1}^{n_{1}}m_{lj}y_{1}+\sum\limits_{j=n_{1}+1}^{n_{1}+n_{2}}m_{lj}y_{2}+\ldots+\sum\limits_{j=n_{1}+\ldots+n_{t-1}+1}^{n_{1}+\ldots+n_{t}}m_{lj}y_{t}$
and the definition of $b_{ij}$ for each $i,j\in\\{1,2,\ldots,t\\}$, we have
$\lambda Y_{l}=\lambda y_{ik}=\lambda
y_{i}=(By)_{i}=\sum\limits_{j=1}^{t}b_{ij}y_{j}=M_{i}(k)Y=(MY)_{l}.$
Thus $MY=\lambda Y$, and the proof is complete. ∎
###### Example 3.1.
Let $G=(V,E)$ be the Petersen graph shown in Figure 1. Let $\\{V_{1},V_{2}\\}$ be a
partition of $V=\\{1,2,\ldots,10\\}$, where $V_{1}=\\{1,2,3,4,5\\}$ and
$V_{2}=\\{6,7,8,9,10\\}$. Then the equitable quotient matrices
$B(A),B(L),B(Q),B(\mathcal{D}),B(\mathcal{L}),B(\mathcal{Q})$ corresponding to
the adjacency matrix $A(G)$, the Laplacian matrix $L(G)$, the signless
Laplacian matrix $Q(G)$, the distance matrix $\mathcal{D}(G)$, the distance
Laplacian matrix $\mathcal{L}(G)$, the distance signless Laplacian matrix
$\mathcal{Q}(G)$, respectively, are as follows:
Figure 1. The Petersen graph.
$B(A)=\left(\begin{array}[]{lcr}2&1\\\ 1&2\\\ \end{array}\right),\qquad
B(L)=\left(\begin{array}[]{lcr}1&-1\\\ -1&1\\\ \end{array}\right),\qquad
B(Q)=\left(\begin{array}[]{lcr}5&1\\\ 1&5\\\ \end{array}\right),$
$B(\mathcal{D})=\left(\begin{array}[]{lcr}6&9\\\ 9&6\\\
\end{array}\right),\qquad B(\mathcal{L})=\left(\begin{array}[]{lcr}9&-9\\\
-9&9\\\ \end{array}\right),\qquad
B(\mathcal{Q})=\left(\begin{array}[]{lcr}21&9\\\ 9&21\\\ \end{array}\right).$
Then
$\rho(B(A))=3,\rho(B(L))=2,\rho(B(Q))=6,\rho(B(\mathcal{D}))=15,\rho(B(\mathcal{L}))=18,\rho(B(\mathcal{Q}))=30,$
while direct calculation gives
$\rho(G)=3,\quad\mu(G)=5,\quad q(G)=6,\quad\rho^{\mathcal{D}}(G)=15,\quad\mu^{\mathcal{D}}(G)=18,\quad q^{\mathcal{D}}(G)=30.$
We see that the largest eigenvalue of the equitable quotient matrix $B(M)$
equals the largest eigenvalue of $M$ when $M$ is the adjacency matrix $A(G)$, the
signless Laplacian matrix $Q(G)$, the distance matrix $\mathcal{D}(G)$, the
distance Laplacian matrix $\mathcal{L}(G)$ or the distance signless Laplacian
matrix $\mathcal{Q}(G)$ of a graph $G$, whereas the two largest eigenvalues
differ when $M$ is the Laplacian matrix $L(G)$.
###### Lemma 3.3.
Let $M$ be defined as (3.1), and for any $i,j\in\\{1,2\ldots,t\\}$,
$M_{ii}=l_{i}J_{n_{i}}+p_{i}I_{n_{i}},$ $M_{ij}=s_{ij}J_{n_{i},n_{j}}$ for
$i\not=j$, where $l_{i},p_{i},s_{ij}$ are real numbers, $B=B(M)$ be the
quotient matrix of $M$. Then
$\sigma(M)=\sigma(B)\cup\\{p_{i}^{[n_{i}-1]}\mid i=1,2\ldots,t\\},$ (3.2)
where $\lambda^{[t]}$ means that $\lambda$ is an eigenvalue with multiplicity
$t$.
###### Proof.
It is obvious that for any $i,j\in\\{1,2\ldots,t\\}$, $M_{ij}$ has constant
row sum, so $B$ is the equitable quotient matrix of $M$. Then
$\sigma(B)\subseteq\sigma(M)$ by Lemma 3.2.
On the other hand, we note that
$\sigma(l_{i}J_{n_{i}}+p_{i}I_{n_{i}})=\\{l_{i}n_{i}+p_{i},p_{i}^{[n_{i}-1]}\\}$,
where $l_{i}J_{n_{i}}+p_{i}I_{n_{i}}$ has the all-one vector $J_{n_{i},1}$
such that
$(l_{i}J_{n_{i}}+p_{i}I_{n_{i}})J_{n_{i},1}=(l_{i}n_{i}+p_{i})J_{n_{i},1}$,
and its all other eigenvectors corresponding to eigenvalue $p_{i}$ are
orthogonal to $J_{n_{i},1}$.
Let $x$ be any eigenvector such that
$(l_{i}J_{n_{i}}+p_{i}I_{n_{i}})x=p_{i}x$. Then $x^{T}J_{n_{i},1}=0$, and
$(\mathbf{0}_{1,n_{1}},\ldots,x^{T},\ldots,\mathbf{0}_{1,n_{t}})^{T}$ is an
eigenvector of $M$ corresponding to the eigenvalue $p_{i}$. Therefore $p_{i}$
is an eigenvalue of $M$ with multiplicity at least $n_{i}-1$, and thus we
obtain at least $\sum_{i=1}^{t}(n_{i}-1)=n-t$ eigenvalues of $M$; that is,
$\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}\subseteq\sigma(M)$.
Therefore
$\sigma(B)\cup\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}\subseteq\sigma(M)$
by Lemma 3.2, and
$|\sigma(M)|\leq|\sigma(B)|+|\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}|=n$
by $|\sigma(B)|=t$ and
$|\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}|=n-t$.
If there exists some $p_{i}\in\sigma(B)$ with
$i\in\\{1,2,\ldots,t\\}$, then by the proof of Lemma 3.2 we have $MY=p_{i}Y$ with
$Y=(y_{11},\ldots,y_{1,n_{1}},\ldots,y_{t1},\ldots,y_{t,n_{t}})^{T}$, where
$y_{i1}=y_{i2}=\ldots=y_{i,n_{i}}=y_{i}$ for each $i\in\\{1,2,\ldots,t\\}$.
Then
$(\mathbf{0}_{1,n_{1}},\ldots,x^{T},\ldots,\mathbf{0}_{1,n_{t}})Y=y_{i}(x^{T}J_{n_{i},1})=0,$
which implies that the eigenvectors corresponding to the eigenvalue $p_{i}$ of
$B$ and those corresponding to the eigenvalue $p_{i}$ in
$\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}$ are orthogonal. Hence
$|\sigma(M)|=|\sigma(B)|+|\\{p_{1}^{[n_{1}-1]},\ldots,p_{t}^{[n_{t}-1]}\\}|=n$
and thus (3.2) holds. ∎
###### Example 3.2.
Let $G=K_{n_{1},n_{2},\ldots,n_{t}}$ be a complete t-partite graph with $n$
vertices for $t\geq 2$, the adjacency matrix $A=A(G)$, the Laplacian matrix
$L=L(G)$, the signless Laplacian matrix $Q=Q(G)$, the distance matrix
$\mathcal{D}=\mathcal{D}(G),$ the distance Laplacian matrix
$\mathcal{L}=\mathcal{L}(G)$ and the distance signless Laplacian matrix
$\mathcal{Q}(G)$ of $G=K_{n_{1},n_{2},\ldots,n_{t}}$ are as follows:
(1). $A=M$, where $l_{i}=p_{i}=0,s_{ij}=1$ for $i\not=j$ where
$i,j\in\\{1,2,\ldots,t\\}.$
(2). $L=M$, where $l_{i}=0,p_{i}=n-n_{i},s_{ij}=-1$ for $i\not=j$ where
$i,j\in\\{1,2,\ldots,t\\}.$
(3). $Q=M$, where $l_{i}=0,p_{i}=n-n_{i},s_{ij}=1$ for $i\not=j$ where
$i,j\in\\{1,2,\ldots,t\\}.$
(4). $\mathcal{D}=M$, where $l_{i}=2,p_{i}=-2,s_{ij}=1$ for $i\not=j$ where
$i,j\in\\{1,2,\ldots,t\\}.$
(5). $\mathcal{L}=M$, where $l_{i}=-2,p_{i}=n+n_{i},s_{ij}=-1$ for $i\not=j$
where $i,j\in\\{1,2,\ldots,t\\}.$
(6). $\mathcal{Q}=M$, where $l_{i}=2,p_{i}=n+n_{i}-4,s_{ij}=1$ for $i\not=j$
where $i,j\in\\{1,2,\ldots,t\\}.$
It is obvious that for any $i,j\in\\{1,2\ldots,t\\}$, $M_{ij}$ has constant
row sum. Then the corresponding equitable quotient matrices are as follows:
$B(A)=\left(\begin{array}[]{cccc}0&n_{2}&\cdots&n_{t}\\\
n_{1}&0&\cdots&n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\ n_{1}&n_{2}&\cdots&0\\\
\end{array}\right),\qquad
B(L)=\left(\begin{array}[]{cccc}n-n_{1}&-n_{2}&\cdots&-n_{t}\\\
-n_{1}&n-n_{2}&\cdots&-n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\
-n_{1}&-n_{2}&\cdots&n-n_{t}\\\ \end{array}\right),$
$B(Q)=\left(\begin{array}[]{cccc}n-n_{1}&n_{2}&\cdots&n_{t}\\\
n_{1}&n-n_{2}&\cdots&n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\
n_{1}&n_{2}&\cdots&n-n_{t}\\\ \end{array}\right),\quad
B(\mathcal{D})=\left(\begin{array}[]{cccc}2n_{1}-2&n_{2}&\cdots&n_{t}\\\
n_{1}&2n_{2}-2&\cdots&n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\
n_{1}&n_{2}&\cdots&2n_{t}-2\\\ \end{array}\right),$
$B(\mathcal{L})=\left(\begin{array}[]{cccc}n-n_{1}&-n_{2}&\cdots&-n_{t}\\\
-n_{1}&n-n_{2}&\cdots&-n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\
-n_{1}&-n_{2}&\cdots&n-n_{t}\\\ \end{array}\right),$
$B(\mathcal{Q})=\left(\begin{array}[]{cccc}n+3n_{1}-4&n_{2}&\cdots&n_{t}\\\
n_{1}&n+3n_{2}-4&\cdots&n_{t}\\\ \vdots&\vdots&\ddots&\vdots\\\
n_{1}&n_{2}&\cdots&n+3n_{t}-4\\\ \end{array}\right).$
By Lemma 3.3, we have
(1).
$P_{A}(\lambda)=\lambda^{n-t}P_{B(A)}(\lambda)=\lambda^{n-t}[\prod\limits_{i=1}^{t}(\lambda+n_{i})-\sum\limits_{i=1}^{t}n_{i}\prod\limits_{j=1,j\neq
i}^{t}(\lambda+n_{j})].$
(2).
$P_{L}(\lambda)=\prod\limits_{i=1}^{t}(\lambda-n+n_{i})^{n_{i}-1}P_{B(L)}(\lambda)=\lambda(\lambda-n)^{t-1}\prod\limits_{i=1}^{t}(\lambda-n+n_{i})^{n_{i}-1}.$
(3).
$P_{Q}(\lambda)=\prod\limits_{i=1}^{t}(\lambda-n+n_{i})^{n_{i}-1}P_{B(Q)}(\lambda)$
$=\prod\limits_{i=1}^{t}(\lambda-n+n_{i})^{n_{i}-1}[\prod\limits_{i=1}^{t}(\lambda-n+2n_{i})-\sum\limits_{i=1}^{t}n_{i}\prod\limits_{j=1,j\neq
i}^{t}(\lambda-n+2n_{j})].$
(4). $P_{\mathcal{D}}(\lambda)=(\lambda+2)^{n-t}P_{B(\mathcal{D})}(\lambda)$
$=(\lambda+2)^{n-t}[\prod\limits_{i=1}^{t}(\lambda-
n_{i}+2)-\sum\limits_{i=1}^{t}n_{i}\prod\limits_{j=1,j\neq i}^{t}(\lambda-
n_{j}+2)].$
(5). $P_{\mathcal{L}}(\lambda)=\prod\limits_{i=1}^{t}(\lambda-n-
n_{i})^{n_{i}-1}P_{B(\mathcal{L})}(\lambda)=\lambda(\lambda-n)^{t-1}\prod\limits_{i=1}^{t}(\lambda-
n-n_{i})^{n_{i}-1}.$
(6). $P_{\mathcal{Q}}(\lambda)=\prod\limits_{i=1}^{t}(\lambda-n-
n_{i}+4)^{n_{i}-1}P_{B(\mathcal{Q})}(\lambda)$
$=\prod\limits_{i=1}^{t}(\lambda-n-
n_{i}+4)^{n_{i}-1}[\prod\limits_{i=1}^{t}(\lambda-n-2n_{i}+4)-\sum\limits_{i=1}^{t}n_{i}\prod\limits_{j=1,j\neq
i}^{t}(\lambda-n-2n_{j}+4)].$
Thus we obtain the spectra of $L$ and $\mathcal{L}$
immediately. In fact,
$\sigma(L)=\\{0,n^{[t-1]},(n-n_{i})^{[n_{i}-1]},i\in\\{1,2,\ldots,t\\}\\}$,
and
$\sigma(\mathcal{L})=\\{0,n^{[t-1]},(n+n_{i})^{[n_{i}-1]},i\in\\{1,2,\ldots,t\\}\\}.$
A block of $G$ is a maximal connected subgraph of $G$ that has no cut-vertex.
A graph $G$ is a clique tree if each block of $G$ is a clique. We call
$\mathbb{K}_{u,n_{2},\ldots,n_{k+1}}$ a clique star if it is obtained from the
star $K_{1,k}$ by replacing each edge by a clique $K_{n_{i}}$ such that
$V(K_{n_{i}})\cap V(K_{n_{j}})=\\{u\\}$ for $i\neq j$ and $i,j\in\\{2,\ldots,k+1\\}.$
###### Example 3.3.
Let $G=\mathbb{K}_{u,n_{2},\ldots,n_{k+1}}$, where $n_{1}=|\\{u\\}|=1$,
$n_{i}\geq 2$ for any $i\in\\{2,\ldots,k+1\\}$ and
$n=n_{1}+n_{2}+n_{3}+\ldots+n_{k+1}-k$. Then the adjacency matrix $A=A(G)$,
the Laplacian matrix $L=L(G)$, the signless Laplacian matrix $Q=Q(G)$, the
distance matrix $\mathcal{D}=\mathcal{D}(G),$ the distance Laplacian matrix
$\mathcal{L}=\mathcal{L}(G)$ and the distance signless Laplacian matrix
$\mathcal{Q}(G)$ of $G=\mathbb{K}_{u,n_{2},\ldots,n_{k+1}}$ are as follows.
(1). $A=M$, where $l_{1}=p_{1}=0$ and $l_{i}=1,p_{i}=-1$ for $i\not=1$;
$s_{ij}=1$ for $i=1$ or $j=1$, and $s_{ij}=0$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
(2). $L=M$, where $l_{1}=n-1,p_{1}=0$ and $l_{i}=-1,p_{i}=n_{i}$ for
$i\not=1$; $s_{ij}=-1$ for $i=1$ or $j=1$, and $s_{ij}=0$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
(3). $Q=M$, where $l_{1}=n-1,p_{1}=0$ and $l_{i}=1,p_{i}=n_{i}-2$ for
$i\not=1$; $s_{ij}=1$ for $i=1$ or $j=1$, and $s_{ij}=0$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
(4). $\mathcal{D}=M$, where $l_{1}=0,p_{1}=0$ and $l_{i}=1,p_{i}=-1$ for
$i\not=1$; $s_{ij}=1$ for $i=1$ or $j=1$, and $s_{ij}=2$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
(5). $\mathcal{L}=M$, where $l_{1}=n-1,p_{1}=0$ and $l_{i}=-1,p_{i}=2n-n_{i}$
for $i\not=1$; $s_{ij}=-1$ for $i=1$ or $j=1$, and $s_{ij}=-2$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
(6). $\mathcal{Q}=M$, where $l_{1}=n-1,p_{1}=0$ and $l_{i}=1,p_{i}=2n-n_{i}-2$
for $i\not=1$; $s_{ij}=1$ for $i=1$ or $j=1$, and $s_{ij}=2$ for any
$i,j\in\\{2,\ldots,k+1\\}$ with $i\neq j$.
It is obvious that for any $i,j\in\\{2,\ldots,k+1\\}$, $M_{ij}$ has constant
row sum. Then the corresponding equitable quotient matrices are as follows:
$B(A)=\left(\begin{array}[]{cccc}0&n_{2}-1&\cdots&n_{k+1}-1\\\ 1&n_{2}-2&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&0&\cdots&n_{k+1}-2\\\ \end{array}\right),\qquad
B(L)=\left(\begin{array}[]{cccc}n-1&1-n_{2}&\cdots&1-n_{k+1}\\\ -1&1&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ -1&0&\cdots&1\\\ \end{array}\right),$
$B(Q)=\left(\begin{array}[]{cccc}n-1&n_{2}-1&\cdots&n_{k+1}-1\\\ 1&2n_{2}-3&\cdots&0\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&0&\cdots&2n_{k+1}-3\\\ \end{array}\right),\qquad
B(\mathcal{D})=\left(\begin{array}[]{cccc}0&n_{2}-1&\cdots&n_{k+1}-1\\\ 1&n_{2}-2&\cdots&2(n_{k+1}-1)\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&2(n_{2}-1)&\cdots&n_{k+1}-2\\\ \end{array}\right),$
$B(\mathcal{L})=\left(\begin{array}[]{cccc}n-1&1-n_{2}&\cdots&1-n_{k+1}\\\ -1&2n-2n_{2}+1&\cdots&-2(n_{k+1}-1)\\\ \vdots&\vdots&\ddots&\vdots\\\ -1&-2(n_{2}-1)&\cdots&2n-2n_{k+1}+1\\\ \end{array}\right),$
$B(\mathcal{Q})=\left(\begin{array}[]{cccc}n-1&n_{2}-1&\cdots&n_{k+1}-1\\\ 1&2n-3&\cdots&2(n_{k+1}-1)\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&2(n_{2}-1)&\cdots&2n-3\\\ \end{array}\right).$
By Lemma 3.3, we have
(1). $P_{A}(\lambda)=(\lambda+1)^{n-k-1}P_{B(A)}(\lambda)$
$=(\lambda+1)^{n-k-1}[\lambda\prod\limits_{i=2}^{k+1}(\lambda-
n_{i}+2)-\sum\limits_{i=2}^{k+1}(n_{i}-1)\prod\limits_{j=2,j\neq
i}^{k+1}(\lambda-n_{j}+2)].$
(2). $P_{L}(\lambda)=\prod\limits_{i=2}^{k+1}(\lambda-
n_{i})^{n_{i}-2}P_{B(L)}(\lambda)=\lambda(\lambda-n)(\lambda-1)^{k-1}\prod\limits_{i=2}^{k+1}(\lambda-
n_{i})^{n_{i}-2}.$
(3). $P_{Q}(\lambda)=\prod\limits_{i=2}^{k+1}(\lambda-
n_{i}+2)^{n_{i}-2}P_{B(Q)}(\lambda)$
$=\prod\limits_{i=2}^{k+1}(\lambda-
n_{i}+2)^{n_{i}-2}[\lambda\prod\limits_{i=2}^{k+1}(\lambda-2n_{i}+3)-\sum\limits_{i=2}^{k+1}(n_{i}-1)\prod\limits_{j=2,j\neq
i}^{k+1}(\lambda-2n_{j}+3)].$
(4). $P_{\mathcal{D}}(\lambda)=(\lambda+1)^{n-k-1}P_{B(\mathcal{D})}(\lambda)$
$=(\lambda+1)^{n-k-1}[\lambda\prod\limits_{i=2}^{k+1}(\lambda+n_{i})-(2\lambda+1)\sum\limits_{i=2}^{k+1}(n_{i}-1)\prod\limits_{j=2,j\neq
i}^{k+1}(\lambda+n_{j})].$
(5).
$P_{\mathcal{L}}(\lambda)=\prod\limits_{i=2}^{k+1}(\lambda-2n+n_{i})^{n_{i}-2}P_{B(\mathcal{L})}(\lambda)=\lambda(\lambda-n)(\lambda-2n+1)^{k-1}\prod\limits_{i=2}^{k+1}(\lambda-2n+n_{i})^{n_{i}-2}.$
(6).
$P_{\mathcal{Q}}(\lambda)=\prod\limits_{i=2}^{k+1}(\lambda-2n+n_{i}+2)^{n_{i}-2}P_{B(\mathcal{Q})}(\lambda)$
$=\prod\limits_{i=2}^{k+1}(\lambda-2n+n_{i}+2)^{n_{i}-2}[(\lambda-n+1)\prod\limits_{i=2}^{k+1}(\lambda-2n+2n_{i}+1)$
$-(2\lambda-2n+3)\sum\limits_{i=2}^{k+1}(n_{i}-1)\prod\limits_{j=2,j\neq
i}^{k+1}(\lambda-2n+2n_{j}+1)].$
Thus we obtain the spectra of $L$ and $\mathcal{L}$
immediately. In fact,
$\sigma(L)=\\{0,n,1^{[k-1]},n_{i}^{[n_{i}-2]},i\in\\{2,3,\ldots,k+1\\}\\}$ and
$\sigma(\mathcal{L})=\\{0,n,(2n-1)^{[k-1]},(2n-n_{i})^{[n_{i}-2]},i\in\\{2,3,\ldots,k+1\\}\\}.$
Motivated by Lemma 3.2, Example 3.1 and Examples 3.2–3.3, we propose the following
conjecture for further research.
###### Conjecture 3.4.
Let $M$ be a nonnegative matrix, $B(M)$ be the equitable quotient matrix of
$M$. Then the largest eigenvalue of $B(M)$ is the largest eigenvalue of $M$.
Consider two sequences of real numbers:
$\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}$, and
$\mu_{1}\geq\mu_{2}\geq...\geq\mu_{m}$ with $m<n$. The second sequence is said
to interlace the first one whenever
$\lambda_{i}\geq\mu_{i}\geq\lambda_{n-m+i}$ for $i=1,2,...,m$. The interlacing
is called tight if there exists an integer $k\in[1,m]$ such that
$\lambda_{i}=\mu_{i}$ hold for $1\leq i\leq k$ and $\lambda_{n-m+i}=\mu_{i}$
hold for $k+1\leq i\leq m$.
###### Lemma 3.5.
([12]) Let $M$ be a symmetric matrix and have the block form as (3.1), $B$ be
the quotient matrix of $M$. Then
(1) The eigenvalues of $B$ interlace the eigenvalues of $M$.
(2) If the interlacing is tight, then $B$ is the equitable quotient matrix of $M$.
By Lemmas 3.2-3.5, we have the following result immediately.
###### Theorem 3.1.
Let $M=(m_{ij})_{n\times n}$ be a symmetric matrix and defined as (3.1),
$B=B(M)$ be the quotient matrix of $M$, and
$\mu_{1}\geq\mu_{2}\geq...\geq\mu_{m}$ be all eigenvalues of $B$. Then
$\mu_{1},\mu_{2},...,\mu_{m}$ are eigenvalues of $M$ if and only if $B$ is the
equitable quotient matrix of $M$.
## 4 Spectral radius of strongly connected digraphs with given connectivity
Let $\Omega(n,k)$ be the set of all simple strongly connected digraphs on $n$
vertices with vertex connectivity $k$. Let
$\overrightarrow{G}_{1}\bigtriangledown\overrightarrow{G}_{2}$ denote the
digraph $G=(V,E)$ obtained from two disjoint digraphs
$\overrightarrow{G}_{1}$, $\overrightarrow{G}_{2}$ with vertex set
$V=V(\overrightarrow{G}_{1})\cup V(\overrightarrow{G}_{2})$ and arc set
$E=E(\overrightarrow{G}_{1})\cup
E(\overrightarrow{G}_{2})\cup\\{(u,v),(v,u)|u\in
V(\overrightarrow{G}_{1}),v\in V(\overrightarrow{G}_{2})\\}.$
Let $p,k$ be integers with $1\leq k\leq n-2$ and $1\leq p\leq n-k-1$, and let
$\overrightarrow{K}(n,k,p)$ denote the digraph
$\overrightarrow{K_{k}}\bigtriangledown(\overrightarrow{K_{p}}\cup\overrightarrow{K}_{n-p-k})\cup
E,$ where
$E=\\{(u,v)|u\in\overrightarrow{K_{p}},v\in\overrightarrow{K}_{n-p-k}\\}$.
Clearly, $\overrightarrow{K}(n,k,p)\in\Omega(n,k).$ Then the adjacency matrix,
the signless Laplacian matrix, the distance matrix, the distance signless
Laplacian matrix of $\overrightarrow{K}(n,k,p)$ are as follows, where
$q=n-p-k$.
$A(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}-I_{p}&J_{p,k}&J_{p,q}\\\
J_{k,p}&J_{k}-I_{k}&J_{k,q}\\\ \mathbf{0}_{q,p}&J_{q,k}&J_{q}-I_{q}\\\
\end{array}\right),$
$Q(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}+(n-2)I_{p}&J_{p,k}&J_{p,q}\\\
J_{k,p}&J_{k}+(n-2)I_{k}&J_{k,q}\\\
\mathbf{0}_{q,p}&J_{q,k}&J_{q}+(n-p-2)I_{q}\\\ \end{array}\right),$
$\mathcal{D}(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}-I_{p}&J_{p,k}&J_{p,q}\\\
J_{k,p}&J_{k}-I_{k}&J_{k,q}\\\ 2J_{q,p}&J_{q,k}&J_{q}-I_{q}\\\
\end{array}\right),$
$\mathcal{Q}(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}+(n-2)I_{p}&J_{p,k}&J_{p,q}\\\
J_{k,p}&J_{k}+(n-2)I_{k}&J_{k,q}\\\ 2J_{q,p}&J_{q,k}&J_{q}+(n+p-2)I_{q}\\\
\end{array}\right).$
###### Proposition 4.1.
([2]) Let $\overrightarrow{G}$ be a strongly connected digraph with vertex
connectivity $k$. Suppose that $S$ is a $k$-vertex cut of $\overrightarrow{G}$
and
$\overrightarrow{G}_{1},\overrightarrow{G}_{2},\ldots,\overrightarrow{G}_{t}$
are the strongly connected components of $\overrightarrow{G}-S$. Then there
exists an ordering of
$\overrightarrow{G}_{1},\overrightarrow{G}_{2},\ldots,\overrightarrow{G}_{t}$
such that for $1\leq i\leq t$ and $v\in V(\overrightarrow{G}_{i})$, every tail
of $v$ is in $\bigcup\limits_{j=1}^{i-1}\overrightarrow{G}_{j}$.
###### Remark 4.2.
By Proposition 4.1, we know that $\overrightarrow{G}_{1}$ is the strongly
connected component of $\overrightarrow{G}-S$ whose vertices have no
in-neighbors in $\overrightarrow{G}-S-\overrightarrow{G}_{1}$. Let
$\overrightarrow{G}_{2}=\overrightarrow{G}-S-\overrightarrow{G}_{1}$. We add
arcs to $\overrightarrow{G}$ until the subdigraphs induced by
$V(\overrightarrow{G}_{1})\cup S$ and by $V(\overrightarrow{G}_{2})\cup S$
both become complete digraphs, and we add the arc $(u,v)$ for every $u\in
V(\overrightarrow{G}_{1})$ and every $v\in V(\overrightarrow{G_{2}})$; denote
the new digraph by $\overrightarrow{H}$. Clearly,
$\overrightarrow{H}=\overrightarrow{K}(n,k,p)\in\Omega(n,k)$ for some $p$ with
$1\leq p\leq n-k-1$. Since $\overrightarrow{G}$ is a spanning subdigraph of
$\overrightarrow{H}$, by Corollary 2.3 we have
$\rho(\overrightarrow{G})\leq\rho(\overrightarrow{K}(n,k,p))$,
$q(\overrightarrow{G})\leq q(\overrightarrow{K}(n,k,p))$,
$\rho^{\mathcal{D}}(\overrightarrow{G})\geq\rho^{\mathcal{D}}(\overrightarrow{K}(n,k,p))$
and $q^{\mathcal{D}}(\overrightarrow{G})\geq
q^{\mathcal{D}}(\overrightarrow{K}(n,k,p))$. Thus the extremal digraphs which
achieve the maximal (signless Laplacian) spectral radius and the minimal
distance (signless Laplacian) spectral radius in $\Omega(n,k)$ must be some
$\overrightarrow{K}(n,k,p)$ for $1\leq p\leq n-k-1$.
###### Theorem 4.1.
Let $n,k$ be given positive integers with $1\leq k\leq n-2$,
$\overrightarrow{G}\in\Omega(n,k).$ Then
(i). ([20])
$\rho(\overrightarrow{G})\leq\frac{n-2+\sqrt{(n-2)^{2}+4k}}{2},$ (4.1)
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1)$ or
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1).$
(ii). ([13])
$q(\overrightarrow{G})\leq\frac{2n+k-3+\sqrt{(2n-k-3)^{2}+4k}}{2},$ (4.2)
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1).$
(iii). ([22])
$\rho^{\mathcal{D}}(\overrightarrow{G})\geq\frac{n-2+\sqrt{(n+2)^{2}-4k-8}}{2},$
(4.3)
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1)$ or
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1)$.
(iv).
$q^{\mathcal{D}}(\overrightarrow{G})\geq\frac{3n-3+\sqrt{(n+3)^{2}-8k-16}}{2},$
(4.4)
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1).$
###### Proof.
Now we show (i) holds. We apply Lemma 3.3 to $A=A(\overrightarrow{K}(n,k,p))$.
Since $t=3$, $l_{i}=1,p_{i}=-1$ for $1\leq i\leq 3$, $s_{31}=0$ and $s_{ij}=1$
for all other pairs $i,j\in\\{1,2,3\\}$ with $i\not=j$, we have
$\sigma(A)=\sigma(B(A))\cup\\{(-1)^{[n-3]}\\}$, where the corresponding
equitable quotient matrix of $A$ is
$B(A)=\left(\begin{array}[]{ccc}p-1&k&q\\\ p&k-1&q\\\ 0&k&q-1\\\
\end{array}\right),$
the eigenvalues of $B(A)$ are
$-1,\frac{n-2\pm\sqrt{4p^{2}-4(n-k)p+n^{2}}}{2}$. Thus
$\rho(A)=\frac{n-2+\sqrt{4p^{2}-4(n-k)p+n^{2}}}{2}$.
It is obvious that
$\frac{n-2+\sqrt{4p^{2}-4(n-k)p+n^{2}}}{2}\leq\frac{n-2+\sqrt{(n-2)^{2}+4k}}{2}$,
and equality holds if and only if $p=1$ or $p=n-k-1$. Thus (4.1) holds and
equality holds if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1)$ or
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1).$
Now we show (ii) holds. Similarly, we apply Lemma 3.3 to
$Q=Q(\overrightarrow{K}(n,k,p))$. Since $t=3$, $l_{i}=1$ for $1\leq i\leq 3$,
$p_{1}=p_{2}=n-2$, $p_{3}=n-p-2,$ $s_{31}=0$ and $s_{ij}=1$ for all other pairs
$i,j\in\\{1,2,3\\}$ with $i\not=j$, we have
$\sigma(Q)=\sigma(B(Q))\cup\\{(n-2)^{[p+k-2]},(n-p-2)^{[q-1]}\\},$ where the
corresponding equitable quotient matrix of $Q$ is
$B(Q)=\left(\begin{array}[]{ccc}p+n-2&k&q\\\ p&k+n-2&q\\\ 0&k&q+n-p-2\\\
\end{array}\right),$
the eigenvalues of $B(Q)$ are
$n-2,\frac{(3n-p-4)\pm\sqrt{(n-3p)^{2}+8pk}}{2}$. Thus
$\rho(Q)=\frac{3n-p-4+\sqrt{(n-3p)^{2}+8pk}}{2}$.
By the same argument as in the proof of Theorem 7.6 in [13], we can show that (4.2) holds by proving
$\frac{3n-p-4+\sqrt{(n-3p)^{2}+8pk}}{2}\leq\frac{2n+k-3+\sqrt{(2n-k-3)^{2}+4k}}{2}$
for $1\leq p\leq n-k-1$, and equality holds if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1).$
Now we show (iii) holds. We apply Lemma 3.3 to
$\mathcal{D}=\mathcal{D}(\overrightarrow{K}(n,k,p))$. Since $l_{i}=1,p_{i}=-1$
for $1\leq i\leq 3$, $s_{31}=2$ and $s_{ij}=1$ for all other pairs $i,j\in\\{1,2,3\\}$
with $i\not=j$, we have
$\sigma(\mathcal{D})=\sigma(B(\mathcal{D}))\cup\\{(-1)^{[n-3]}\\},$ and the
corresponding equitable quotient matrix of $\mathcal{D}$ is
$B(\mathcal{D})=\left(\begin{array}[]{ccc}p-1&k&q\\\ p&k-1&q\\\ 2p&k&q-1\\\
\end{array}\right),$
the eigenvalues of $B(\mathcal{D})$ are
$-1,\frac{(n-2)\pm\sqrt{-4p^{2}+4(n-k)p+n^{2}}}{2}$. Thus
$\rho(\mathcal{D})=\frac{n-2+\sqrt{-4p^{2}+4(n-k)p+n^{2}}}{2}$.
It is obvious that
$\frac{n-2+\sqrt{-4p^{2}+4(n-k)p+n^{2}}}{2}\geq\frac{n-2+\sqrt{(n+2)^{2}-4k-8}}{2}$,
and equality holds if and only if $p=1$ or $p=n-k-1$. Thus (4.3) holds and
equality holds if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1)$ or
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,n-k-1).$
Now we show (iv) holds. We apply Lemma 3.3 to
$\mathcal{Q}=\mathcal{Q}(\overrightarrow{K}(n,k,p))$. Since
$l_{1}=l_{2}=l_{3}=1,$ $p_{1}=p_{2}=n-2,$ $p_{3}=n+p-2,$ $s_{31}=2$ and
$s_{ij}=1$ for all other pairs $i,j\in\\{1,2,3\\}$ with $i\not=j$, we have
$\sigma(\mathcal{Q})=\sigma(B(\mathcal{Q}))\cup\\{(n-2)^{[p+k-2]},(n+p-2)^{[q-1]}\\},$
and the corresponding equitable quotient matrix of $\mathcal{Q}$ is
$B(\mathcal{Q})=\left(\begin{array}[]{ccc}p+n-2&k&q\\\ p&k+n-2&q\\\
2p&k&q+n+p-2\\\ \end{array}\right),$
the eigenvalues of $B(\mathcal{Q})$ are
$n-2,\frac{(3n+p-4)\pm\sqrt{(n+3p)^{2}-16p^{2}-8kp}}{2}$. Thus
$\rho(\mathcal{Q})=\frac{3n+p-4+\sqrt{(n+3p)^{2}-16p^{2}-8kp}}{2}.$
Now we show
$\frac{3n+p-4+\sqrt{(n+3p)^{2}-16p^{2}-8kp}}{2}\geq\frac{3n-3+\sqrt{(n+3)^{2}-8k-16}}{2}$
for $1\leq p\leq n-k-1$, and the equality holds if and only if $p=1$. Let
$f(p)=\frac{3n+p-4+\sqrt{(n+3p)^{2}-16p^{2}-8kp}}{2}$. Then
$\frac{\partial^{2}f(p)}{\partial
p^{2}}=\frac{-4((2n-k)(n-k)+k^{2})}{((n+3p)^{2}-16p^{2}-8kp)^{\frac{3}{2}}}<0.$
Thus, for fixed $n$ and $k$, the minimal value of $f(p)$ must be taken at
either $p=1$ or $p=n-k-1$. Let $\alpha=k^{2}-6k-7+8n$ and
$\beta=n^{2}+6n-7-8k$. Then
$2[f(n-k-1)-f(1)]=n-k-2+\sqrt{\alpha}-\sqrt{\beta}=(n-k-2)(1-\frac{n+k}{\sqrt{\alpha}+\sqrt{\beta}}).$
We may assume that $n>k+2$, since in the case $k=n-2$ there is only one value of
$p$ under consideration. Now suppose, for contradiction, that
$f(n-k-1)-f(1)\leq 0$. Then
$\sqrt{\alpha}+\sqrt{\beta}\leq n+k,\qquad\sqrt{\alpha}-\sqrt{\beta}\leq-n+k+2.$
Hence $\sqrt{\alpha}\leq k+1$ and $\alpha\leq(k+1)^{2}$, which reduces to
$k\geq n-1$, out of range. Thus $f(n-k-1)>f(1)$ and
$q^{\mathcal{D}}(\overrightarrow{G})\geq
f(1)=q^{\mathcal{D}}(\overrightarrow{K}(n,k,1))=\frac{3n-3+\sqrt{(n+3)^{2}-8k-16}}{2}$,
with equality if and only if
$\overrightarrow{G}\cong\overrightarrow{K}(n,k,1).$ ∎
It is natural to ask whether similar results hold for the Laplacian
spectral radius or the distance Laplacian spectral radius in $\Omega(n,k)$.
In fact, we can obtain the spectrum of the Laplacian matrix and of the
distance Laplacian matrix of $\overrightarrow{K}(n,k,p)$ immediately.
###### Proposition 4.3.
Let $\overrightarrow{K}(n,k,p)$ be defined as before. Then
(i). $\sigma(L(\overrightarrow{K}(n,k,p)))=\\{0,n^{[p+k-1]},(n-p)^{[q]}\\}.$
(ii).
$\sigma(\mathcal{L}(\overrightarrow{K}(n,k,p)))=\\{0,n^{[p+k-1]},(n+p)^{[q]}\\}.$
###### Proof.
Firstly, the Laplacian matrix $L(\overrightarrow{K}(n,k,p))$ and the distance
Laplacian matrix $\mathcal{L}(\overrightarrow{K}(n,k,p))$ of
$\overrightarrow{K}(n,k,p)$ are the following matrices, where $q=n-p-k$.
$L=L(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{ccc}-J_{p}+nI_{p}&-J_{p,k}&-J_{p,q}\\\
-J_{k,p}&-J_{k}+nI_{k}&-J_{k,q}\\\
\mathbf{0}_{q,p}&-J_{q,k}&-J_{q}+(n-p)I_{q}\\\ \end{array}\right),$
$\mathcal{L}=\mathcal{L}(\overrightarrow{K}(n,k,p))=\left(\begin{array}[]{ccc}-J_{p}+nI_{p}&-J_{p,k}&-J_{p,q}\\\
-J_{k,p}&-J_{k}+nI_{k}&-J_{k,q}\\\ -2J_{q,p}&-J_{q,k}&-J_{q}+(n+p)I_{q}\\\
\end{array}\right).$
Then the corresponding equitable quotient matrices are as follows:
$B(L)=\left(\begin{array}[]{ccc}n-p&-k&-q\\\ -p&n-k&-q\\\ 0&-k&k\\\
\end{array}\right),\qquad
B(\mathcal{L})=\left(\begin{array}[]{lcr}n-p&-k&-q\\\ -p&n-k&-q\\\
-2p&-k&n+p-q\\\ \end{array}\right).$
Then by Lemma 3.3 and a direct calculation, we obtain (i) and (ii). ∎
## 5 Spectral radius of connected graphs with given connectivity
Let $\mathcal{C}(n,k)$ be the set of all simple connected graphs on $n$
vertices with vertex connectivity $k$. Let ${G_{1}}\bigtriangledown{G_{2}}$
denote the graph $G=(V,E)$ obtained from two disjoint graphs ${G_{1}}$,
${G_{2}}$ by joining each vertex of $G_{1}$ to each vertex of $G_{2}$ with
vertex set $V=V({G}_{1})\cup V({G}_{2})$ and edge set $E=E({G}_{1})\cup
E({G}_{2})\cup\\{uv|u\in V({G}_{1}),v\in V({G}_{2})\\}$.
Let $p,k$ be integers with $1\leq k\leq n-2,1\leq p\leq n-k-1,$ and
${K}(n,k,p)$ be the graph ${K_{k}}\bigtriangledown({K_{p}}\cup{K}_{n-p-k})$.
Clearly, ${K}(n,k,p)\in\mathcal{C}(n,k).$ Then the adjacency matrix, the
signless Laplacian matrix, the distance matrix, the distance signless
Laplacian matrix of ${K}(n,k,p)$ are as follows, where $q=n-p-k$.
$A({K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}-I_{p}&J_{p,k}&\mathbf{0}_{p,q}\\\
J_{k,p}&J_{k}-I_{k}&J_{k,q}\\\ \mathbf{0}_{q,p}&J_{q,k}&J_{q}-I_{q}\\\
\end{array}\right),$
$Q({K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}+(p+k-2)I_{p}&J_{p,k}&\mathbf{0}_{p,q}\\\
J_{k,p}&J_{k}+(n-2)I_{k}&J_{k,q}\\\
\mathbf{0}_{q,p}&J_{q,k}&J_{q}+(n-p-2)I_{q}\\\ \end{array}\right),$
$\mathcal{D}({K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}-I_{p}&J_{p,k}&2J_{p,q}\\\
J_{k,p}&J_{k}-I_{k}&J_{k,q}\\\ 2J_{q,p}&J_{q,k}&J_{q}-I_{q}\\\
\end{array}\right),$
$\mathcal{Q}({K}(n,k,p))=\left(\begin{array}[]{lcr}J_{p}+(n+q-2)I_{p}&J_{p,k}&2J_{p,q}\\\
J_{k,p}&J_{k}+(n-2)I_{k}&J_{k,q}\\\ 2J_{q,p}&J_{q,k}&J_{q}+(n+p-2)I_{q}\\\
\end{array}\right).$
###### Remark 5.1.
Let ${G}$ be a connected graph with vertex connectivity $k$. Suppose that $S$
is a $k$-vertex cut of ${G}$, and $G_{1}$ is a connected component of $G-S$.
Let ${G_{2}}={G}-S-{G_{1}}$. We add edges to ${G}$ until the subgraphs induced
by $V({G_{1}})\cup S$ and by $V({G_{2}})\cup S$ both become complete graphs;
denote the new graph by ${H}$. Clearly, ${H}={K}(n,k,p)\in\mathcal{C}(n,k)$
for some $p$ with $1\leq p\leq n-k-1$. Since ${G}$ is a spanning subgraph of
${H}$, by Corollary 2.3
we have $\rho({G})\leq\rho({K}(n,k,p))$, $q({G})\leq q({K}(n,k,p))$,
$\rho^{\mathcal{D}}({G})\geq\rho^{\mathcal{D}}({K}(n,k,p))$ and
$q^{\mathcal{D}}({G})\geq q^{\mathcal{D}}({K}(n,k,p))$. Thus the extremal
graphs which achieve the maximal (signless Laplacian) spectral radius and the
minimal distance (signless Laplacian) spectral radius in $\mathcal{C}(n,k)$
must be some $K(n,k,p)$ for $1\leq p\leq n-k-1$.
###### Theorem 5.1.
Let $n,k$ be given positive integers with $1\leq k\leq n-2$,
${G}\in\mathcal{C}(n,k).$ Then
(i) ([28]) $\rho({G})\leq\rho(K(n,k,1)),$ and $\rho({K}(n,k,1))$ is the
largest root of equation (5.1):
$\lambda^{3}-(n-3)\lambda^{2}-(n+k-2)\lambda+k(n-k-2)=0,$ (5.1)
with equality if and only if $G=K(n,k,1).$
(ii) ([28])
$q({G})\leq q(K(n,k,1))=\frac{2n+k-4+\sqrt{(2n-k-4)^{2}+8k}}{2},$ (5.2)
with equality if and only if $G=K(n,k,1).$
(iii) ([22]) $\rho^{\mathcal{D}}({G})\geq\rho^{\mathcal{D}}(K(n,k,1)),$ and
$\rho^{\mathcal{D}}({K}(n,k,1))$ is the largest root of equation (5.3):
$\lambda^{3}-(n-3)\lambda^{2}-(5n-3k-6)\lambda+kn-k^{2}+2k-4n+4=0,$ (5.3)
with equality if and only if $G=K(n,k,1).$
(iv) ([26]) $q^{\mathcal{D}}({G})\geq q^{\mathcal{D}}(K(n,k,1)),$ and
$q^{\mathcal{D}}({K}(n,k,1))$ is the largest root of equation (5.4):
$\lambda^{3}-(5n-k-6)\lambda^{2}+(8n^{2}-19kn-24n+8k+16)\lambda-4n^{3}+2(k+10)n^{2}-2(5k+16)n+12k+16=0,$
(5.4)
with equality if and only if $G=K(n,k,1).$
###### Proof.
Firstly, we show (i) holds. We apply Lemma 3.3 to $A=A({K}(n,k,p))$. Since
$t=3$, $l_{1}=l_{2}=l_{3}=1,$ $p_{1}=p_{2}=p_{3}=-1,$ $s_{13}=s_{31}=0$ and
$s_{12}=s_{21}=s_{23}=s_{32}=1$, we have
$\sigma(A)=\sigma(B(A))\cup\\{(-1)^{[n-3]}\\},$ where the corresponding
equitable quotient matrix of $A$ is $B(A)=\left(\begin{array}[]{ccc}p-1&k&0\\\
p&k-1&q\\\ 0&k&q-1\\\ \end{array}\right),$ the eigenvalues of $B(A)$ are the
roots of the equation
$\lambda^{3}-(n-3)\lambda^{2}+(pq-2n+3)\lambda+pq-n+pqk+1=0.$ (5.5)
It is obvious that $\rho(A({K}(n,k,p)))$ is the largest root of the equation
(5.5).
Now we show that $\rho(A({K}(n,k,1)))=\max\\{\rho(A({K}(n,k,p)))|1\leq p\leq
n-k-1\\}$. We note that $p+q=n-k$ and ${K}(n,k,p)\cong{K}(n,k,q)$, so without
loss of generality we assume that $q\geq p\geq 1$. Let
$f_{p,q}(\lambda)=\lambda^{3}-(n-3)\lambda^{2}+(pq-2n+3)\lambda+pq-n+pqk+1$.
Let $H={K_{k}}\bigtriangledown({K_{p-1}}\cup{K}_{q+1})={K}(n,k,p-1)$.
Obviously, $H\in\mathcal{C}(n,k)$ and $\rho(H)$ is the largest root of
$f_{p-1,q+1}(\lambda)=0$, then
$f_{p,q}(\lambda)-f_{p-1,q+1}(\lambda)$
$=pq\lambda+pq+pqk-(p-1)(q+1)\lambda-(p-1)(q+1)-(p-1)(q+1)k$
$=(q+1-p)(\lambda+k+1)>0$,
and
$f_{p,q}(\rho(H))=f_{p,q}(\rho(H))-f_{p-1,q+1}(\rho(H))>0=f_{p,q}(\rho({K}(n,k,p))).$
This implies $\rho(H)=\rho({K}(n,k,p-1))>\rho({K}(n,k,p))$. Thus
$\rho({G})\leq\rho(K(n,k,1))$, $\rho({K}(n,k,1))$ is the largest root of the
equation (5.1), and $\rho({G})=\rho(K(n,k,1))$ if and only if $G=K(n,k,1).$
Second, we show (ii) holds. We apply Lemma 3.3 to $Q=Q({K}(n,k,p))$. Since
$t=3$, $l_{1}=l_{2}=l_{3}=1,$ $p_{1}=p+k-2,$ $p_{2}=n-2,$ $p_{3}=n-p-2,$
$s_{13}=s_{31}=0$ and $s_{12}=s_{21}=s_{23}=s_{32}=1$, we have
$\sigma(Q)=\sigma(B(Q))\cup\\{(p+k-2)^{[p-1]},(n-2)^{[k-1]},(n-p-2)^{[q-1]}\\}$
where the corresponding equitable quotient matrix of $Q$ is
$B(Q)=\left(\begin{array}[]{ccc}2p+k-2&k&0\\\ p&k+n-2&q\\\ 0&k&q+n-p-2\\\
\end{array}\right),$
the eigenvalues of $B(Q)$ are
$n-2,n-2+\frac{k}{2}\pm\frac{1}{2}\sqrt{(k-2n)^{2}+16p(k-n+p)}$. Thus
$\rho(Q)=n-2+\frac{k}{2}+\frac{1}{2}\sqrt{(k-2n)^{2}+16p(k-n+p)}.$
Let $f(p)=n-2+\frac{k}{2}+\frac{1}{2}\sqrt{(k-2n)^{2}+16p(k-n+p)}$, then
$f(1)=f(n-k-1)=\max\\{f(p)|1\leq p\leq n-k-1\\}.$ Therefore,
$q({G})\leq\frac{2n+k-4+\sqrt{(2n-k-4)^{2}+8k}}{2},$ and we complete the proof
of (ii) since ${K}(n,k,1)\cong{K}(n,k,n-k-1)$.
Third, we show (iii) holds. We apply Lemma 3.3 to
$\mathcal{D}=\mathcal{D}({K}(n,k,p))$. Since $t=3$, $l_{1}=l_{2}=l_{3}=1,$
$p_{1}=p_{2}=p_{3}=-1,$ $s_{13}=s_{31}=2$ and $s_{12}=s_{21}=s_{23}=s_{32}=1$,
we have $\sigma(\mathcal{D})=\sigma(B(\mathcal{D}))\cup\\{(-1)^{[n-3]}\\}$
where the corresponding equitable quotient matrix of $\mathcal{D}$ is
$B(\mathcal{D})=\left(\begin{array}[]{ccc}p-1&k&2q\\\ p&k-1&q\\\ 2p&k&q-1\\\
\end{array}\right),$
the eigenvalues of $B(\mathcal{D})$ are the roots of the equation:
$\lambda^{3}-(n-3)\lambda^{2}-(3pq+2n-3)\lambda+pqk-3pq-n+1=0.$ (5.6)
It is obvious that $\rho(\mathcal{D}({K}(n,k,p)))$ is the largest root of the
equation (5.6).
Similar to the proof of (i), we can show that (iii) holds; we omit the details.
Finally, we show (iv) holds. We apply Lemma 3.3 to
$\mathcal{Q}=\mathcal{Q}({K}(n,k,p))$. Since $t=3$, $l_{1}=l_{2}=l_{3}=1,$
$p_{1}=n+q-2,$ $p_{2}=n-2,$ $p_{3}=n+p-2,$ $s_{13}=s_{31}=2$ and
$s_{12}=s_{21}=s_{23}=s_{32}=1$, we have
$\sigma(\mathcal{Q})=\sigma(B(\mathcal{Q}))\cup\\{(n+q-2)^{[p-1]},(n-2)^{[k-1]},(n+p-2)^{[q-1]}\\}$
where the corresponding equitable quotient matrix of $\mathcal{Q}$ is
$B(\mathcal{Q})=\left(\begin{array}[]{lcr}n+p+q-2&k&2q\\\ p&n+k-2&q\\\
2p&k&n+p+q-2\\\ \end{array}\right),$
the eigenvalues of $B(\mathcal{Q})$ are the roots of the equation:
$\lambda^{3}-(5p+5q+4k-6)\lambda^{2}+(8p^{2}+8q^{2}+5k^{2}+12pq+13pk+13qk-20p-20q-16k+12)\lambda-4p^{3}-4q^{3}-2k^{3}-8p^{2}q-8pq^{2}-10p^{2}k-10q^{2}k-8pk^{2}-8qk^{2}-16pqk+16p^{2}+16q^{2}+10k^{2}+24pq+26pk+26qk-20p-20q-16k+8=0.$
Similar to the proof of (i), we can show that (iv) holds; we omit the details. ∎
It is natural to ask whether similar results hold for the Laplacian
spectral radius or the distance Laplacian spectral radius in
$\mathcal{C}(n,k)$. In fact, we can obtain the spectrum of the
Laplacian matrix and of the distance Laplacian matrix of $K(n,k,p)$ immediately.
###### Proposition 5.2.
Let $K(n,k,p)$ be defined as before. Then
(i). $\sigma(L(K(n,k,p)))=\\{0,k,n^{[k]},(p+k)^{[p-1]},(q+k)^{[q-1]}\\}.$
(ii).
$\sigma(\mathcal{L}(K(n,k,p)))=\\{0,n+p+q,n^{[k]},(n+q)^{[p-1]},(n+p)^{[q-1]}\\}.$
###### Proof.
Firstly, the Laplacian matrix $L(K(n,k,p))$ and the distance Laplacian matrix
$\mathcal{L}(K(n,k,p))$ of $K(n,k,p)$ are the following matrices, where
$q=n-p-k$.
$L=L(K(n,k,p))=\left(\begin{array}[]{ccc}-J_{p}+(p+k)I_{p}&-J_{p,k}&\mathbf{0}_{p,q}\\\
-J_{k,p}&-J_{k}+nI_{k}&-J_{k,q}\\\
\mathbf{0}_{q,p}&-J_{q,k}&-J_{q}+(q+k)I_{q}\\\ \end{array}\right),$
$\mathcal{L}=\mathcal{L}(K(n,k,p))=\left(\begin{array}[]{ccc}-J_{p}+(n+q)I_{p}&-J_{p,k}&-2J_{p,q}\\\
-J_{k,p}&-J_{k}+nI_{k}&-J_{k,q}\\\ -2J_{q,p}&-J_{q,k}&-J_{q}+(n+p)I_{q}\\\
\end{array}\right).$
Then the corresponding equitable quotient matrices are as follows:
$B(L)=\left(\begin{array}[]{ccc}k&-k&0\\\ -p&n-k&-q\\\ 0&-k&k\\\
\end{array}\right),\qquad
B(\mathcal{L})=\left(\begin{array}[]{ccc}n+q-p&-k&-2q\\\ -p&n-k&-q\\\
-2p&-k&n+p-q\\\ \end{array}\right).$
Then by Lemma 3.3 and a direct calculation, we obtain (i) and (ii). ∎
## References
* [1] M. Aouchiche, P. Hansen, Two Laplacians for the distance matrix of a graph, Linear Algebra Appl. 439 (2013) 21–33.
* [2] J.A. Bondy, U.S.R. Murty, Graph Theory with Applications, Macmillan, London, 1976.
* [3] Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, New York: Academic Press, 1979.
* [4] R. Brualdi, Spectra of digraphs, Linear Algebra Appl. 432 (2010) 2181–2213.
* [5] Andries E. Brouwer, Willem H. Haemers, Spectra of graphs - Monograph -, Springer, 2011.
* [6] S. Burcu Bozkurt and Durmus Bozkurt, On the signless Laplacian spectral radius of digraphs, Ars Combinatoria, 108 (2013) 193–200.
* [7] D. Cvetkovic, P. Rowlinson, S. Simic, An Introduction to the Theory of Graph Spectra, London, 2010.
* [8] Y.Y. Chen, H.Q. Lin, J.L. Shu, Sharp upper bounds on the distance spectral radius of a graph, Linear Algebra Appl. 439 (2013) 2659–2666.
* [9] S.W. Drury, H.Q. Lin, Extremal digraphs with given clique number, Linear Algebra Appl. 439 (2013) 328–345.
* [10] Roman Drnovšek, The spread of the spectrum of a nonnegative matrix with a zero diagonal element, Linear Algebra Appl. 439 (2013) 2381–2387.
* [11] X. Duan, B. Zhou, Sharp bounds on the spectral radius of a nonnegative matrix, Linear Algebra Appl. 439 (2013) 2961–2970.
* [12] W.H. Haemers, Interlacing eigenvalues and graphs, Linear Algebra Appl. 226–228 (1995) 593–616.
* [13] W.X. Hong, L.H. You, Spectral radius and signless Laplacian spectral radius of strongly connected digraphs, Linear Algebra Appl. 457 (2014) 93–113.
* [14] R.A. Horn, Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1986.
* [15] G. Indulal, I. Gutman, On the distance spectra of some graphs, Mathematical Communications, 13 (2008) 123–131.
* [16] J.B. Jensen, G. Gutin, Digraphs Theory, Algorithms and Applications, Springer, 2001.
* [17] H.Q. Lin, Y. Hong, J.F. Wang, J.L. Shu, On the distance spectrum of graphs, Linear Algebra Appl. 439 (2013) 1662–1669.
* [18] H.Q. Lin, S.W. Drury, The maximum Perron roots of digraphs with some given parameters, Discrete Math. 313 (2013) 2607–2613.
* [19] H.Q. Lin, R.F. Liu, X.W. Lu, The inertia and energy of the distance matrix of a connected graph, Linear Algebra Appl. 467 (2015) 29–39.
* [20] H.Q. Lin, J.L. Shu, Y.R. Wu, G.L. Yu, Spectral radius of strongly connected digraphs, Discrete Math. 312 (2012) 3663–3669.
* [21] H.Q. Lin, J.L. Shu, The distance spectral radius of digraphs, Discrete Applied Math. 161 (2013) 2537–2543.
* [22] H.Q. Lin, W.H. Yang, H.L. Zhang, J.L. Shu, Distance spectral radius of digraphs with given connectivity, Discrete Math. 312 (2012) 1849–1856.
* [23] Chia-an Liu, Chih-wen Weng, Spectral radius of bipartite graphs, arXiv:1402.5621v1 [math.CO] 23 Feb 2014.
* [24] Z.Z. Liu, On spectral radius of the distance matrix, Appl. Anal. Discrete Math. 4 (2010) 269–277.
* [25] O. Rojo, E. Lenes, A sharp upper bound on the incidence energy of graphs in terms of connectivity, Linear Algebra Appl. 438 (2013) 1485–1493.
* [26] R. Xing, B. Zhou and J. Li, On the distance signless Laplacian spectral radius of graphs, Linear and Multilinear Algebra, 62 (2014) 1377–1387.
* [27] M.L. Ye, Y.Z. Fan, D. Liang, The least eigenvalue of graphs with given connectivity, Linear Algebra Appl. 430 (2009) 1375–1379.
* [28] M.L. Ye, Y.Z. Fan, H.F. Wang, Maximizing signless Laplacian or adjacency spectral radius of graphs subject to fixed connectivity, Linear Algebra Appl. 433 (2010) 1180–1186.
* [29] X.L. Zhang, The inertia and energy of the distance matrix of complete $k$-partite graphs, Linear Algebra Appl. 450 (2014) 108–120.
# Kinetic Exchange Income Distribution Models with Saving Propensities:
Inequality Indices and Self-Organised Poverty Lines
Sanjukta Paul (Satyendra Nath Bose National Centre for Basic Sciences,
Block-JD, Salt Lake, Kolkata-700106, India), Sudip Mukherjee (Department of
Physics, Barasat Government College, Kolkata 700124, India; Saha Institute of
Nuclear Physics, Kolkata 700064, India), Bijin Joseph (St. Xavier's College,
Mumbai 400001, India), Asim Ghosh (Department of Physics, Raghunathpur
College, Raghunathpur, Purulia 723133, India), Bikas K. Chakrabarti (Saha
Institute of Nuclear Physics, Kolkata 700064, India; Economic Research Unit,
Indian Statistical Institute, Kolkata 700108, India)
###### Abstract
We report the numerical results for the steady state income or wealth
distribution $P(m)$ and the resulting inequality measures (Gini $g$ and
Kolkata $k$ indices) in the kinetic exchange models of market dynamics. We
study the variations of $P(m)$ and of the indices $g$ and $k$ with the saving
propensity $\lambda$ of the agents, with two different kinds of trade (kinetic
exchange) dynamics: one where the exchange occurs between randomly chosen
pairs of agents, and the other where one of the agents in the chosen pair is the
poorest of all and the other agent is randomly picked from the rest (where,
in the steady state, a self-organized poverty level or SOPL appears). These
studies have also been made for two different kinds of saving behavior: one
where each agent has the same value of $\lambda$ (constant over time), and the
other where $\lambda$ for each agent can take two values (0 and 1) and changes
randomly, maintaining a fraction of time $\rho(<1)$ of choosing $\lambda=1$. We
also study the nature of the distributions $P(m)$, the values of the inequality
indices ($g$ and $k$) and the SOPL as $\lambda$ and $\rho$ vary. We find
that the inequality decreases with increasing savings ($\lambda$).
## I Introduction
The kinetic theory of gases, more than a century old and the first successful
classical many-body theory of condensed matter physics, has recently been
applied in econophysics and sociophysics (see e.g., sinha2010econophysics ;
chakrabarti2006econophysics ) in the modelling of different socio-economic
contexts. These two-body exchange dynamics studies have been extensively
developed in the context of modeling income or wealth distributions in a
society (see e.g., yakovenko2009colloquium ; chakrabarti2013econophysics ;
pareschi2013interacting ). For further extensions of these kinetic exchange models
to social opinion formation studies, see e.g., sen2014sociophysics .
In generic kinetic exchange models of income or wealth distributions in a
society, one studies a system of $N$ agents who interact among themselves
through two-agent money ($m$) conserving stochastic trade (scattering)
processes, where each one saves a fraction $\lambda$ of their money or wealth
at each trade (exchange) or instant of time chakraborti2000statistical ;
chatterjee2004pareto . The resulting steady state distributions $P(m)$ of
money, for different values of the saving propensities $\lambda$ are compared
with the available data (see e.g., chakrabarti2013econophysics ;
chatterjee2007kinetic ). One can also study the effect of a modification of the
kinetic exchange dynamics such that one of the agents in any chosen pair
participating in the trading (or scattering) process has the lowest money or
wealth at that point of time, while the other agent is selected randomly from
the rest of the population, with no constraint on the value of money
possessed pianegonda2003wealth ; ghosh2011threshold . Alternatively, one can
also choose the pair of agents based on their total wealth, such that one of
them has money below an arbitrarily chosen poverty-line and the other one,
selected randomly from the whole population, can have any value of money. The
kinetic exchange dynamics is continued until no agent is left with money below
a new Self-Organized Poverty-Line or SOPL ghosh2011threshold (see also
iglesias2010simple ; chakrabarti2021development ). Then by varying $\lambda$,
it is investigated whether the SOPL can be shifted upwards.
The resulting inequalities can be measured here by determining the Most
Probable Income (MPI), given by the location of the maximum of the
distribution $P(m)$, or by the location of the SOPL, below which $P(m)=0$,
together with the determination of the values of Gini ($g$) and Kolkata ($k$)
indices (see e.g., chakrabarti2021development ; banerjee2020inequality ). Both
the indices, Gini (oldest and most popular one) and Kolkata (introduced in
ghosh2014inequality , see banerjee2020inequality for a recent review), are
based on the Lorenz curve or function (see chakrabarti2021development ;
banerjee2020inequality ) $L(x)$, giving the cumulative fraction
($L=\int_{0}^{m}m^{\prime}P(m^{\prime})\,dm^{\prime}/[\int_{0}^{\infty}m^{\prime}P(m^{\prime})\,dm^{\prime}]$) of (total accumulated)
income or wealth possessed by the fraction
($x=\int_{0}^{m}P(m^{\prime})\,dm^{\prime}/[\int_{0}^{\infty}P(m^{\prime})\,dm^{\prime}]$) of the population, when
counted from the poorest to the richest (see Fig. 1).
If the income (wealth) of every agent is identical, then $L(x)$ will be a
linear function represented by the diagonal (in Fig. 1) passing through the
origin. This diagonal, defined by $L(x)=x$, is called the equality line. The
Gini coefficient ($g$) is given by the area between the Lorenz curve $L(x)$
and the equality line (normalized by the area under the equality line): $g$ = 0
corresponds to equality and $g$ = 1 corresponds to extreme inequality.
Kolkata index or $k$-index is given by the ordinate value $k$ of the
intersecting point of the Lorenz curve and the diagonal perpendicular to the
equality line. By construction (see Fig. 1), $1-L(k)=k$; that is, a fraction $k$
of the total wealth is possessed by the richest $(1-k)$ fraction of the population. As
such, it gives a quantitative generalization of the approximately established
(phenomenological) 80–20 law of Pareto (see e.g., banerjee2020inequality ),
indicating that typically about $80\%$ wealth is possessed by only $20\%$ of
the richest population in any economy. Now defining the complementary Lorenz
function $L^{c}(x)\equiv[1-L(x)]$, one gets $k$ as its (nontrivial) fixed
point (while Lorenz function $L(x)$ itself has trivial fixed points at $x$ = 0
and 1). As such, $k$ = 0.5 corresponds to complete equality and $k$ = 1
corresponds to extreme inequality. As an example, both $g$ and $k$ may be
exactly calculated or estimated in the well known case of Gibbs distribution
(normalized) $P(m)=\exp(-m)$: With
$x=\int_{0}^{m}\exp(-m^{\prime})dm^{\prime}=1-\exp(-m)$, giving $m=-\ln(1-x)$,
and $L=\int_{0}^{m}m^{\prime}\exp(-m^{\prime})dm^{\prime}$ =
$1-(m+1)\exp(-m)$, giving $L(x)=$ $1-(1-x)[1-\ln(1-x)]$. As the area under the
equality line is 1/2, the Gini index $g=2\int_{0}^{1}[x-L(x)]dx=1/2$, and the
Kolkata index $k$ for this Gibbs (exponential) distribution is given by the
self-consistent equation $1-k=L(k)$, or $1-2k=(1-k)[\ln(1-k)]$, giving $k\simeq
0.68$.
Figure 1: The Lorenz curve $L(x)$ represents the fraction of overall income or
wealth assumed by the bottom $x\%$ fraction of the people. The Gini
coefficient is the ratio of the area that lies between the line of equality
and Lorenz curve over the total area under the line of equality (Gini index
$g=2s$). The complementary Lorenz function $L^{c}(x)\equiv 1-L(x)$ is
represented by the green line. The $k$ index is the ordinate value of the
intersecting point of the Lorenz curve and the diagonal perpendicular to the
equality line. Therefore, the index value $k$ implies that $k$ fraction of
poorest people possess only $1-k$ fraction of the total income.
In this paper we numerically study the variations of income or wealth
distribution $P(m)$ generated from the kinetic exchange models described below
(section II) and extract the variations in the values of Gini $g$ and Kolkata
$k$ indices with the saving propensity $\lambda$ of the agents, with two
different kinds of trade or (kinetic) exchange dynamics (see e.g.,
sinha2020econophysics for a pedagogic introduction to the simulations of kinetic exchange
models). First, where the exchange occurs among randomly chosen pairs of
agents (where the most probable income in the steady state increases with
increasing saving propensity $\lambda$) and second, where one of the agents or
traders in the chosen pair is the poorest one at that point of time (trade,
exchange or scattering) while the other one is randomly picked up from the
rest (where, in the steady state, a self-organized minimum poverty level or
SOPL of income or wealth appears). These studies have also been done for two
different kinds of saving behaviors. First, where each agent has the same
value of $\lambda$ (constant over time) and varies from zero (exactly soluble;
Gibbs limit) to values of $\lambda$ infinitesimally smaller than unity (where
dynamics stops). Second where $\lambda$ for each agent can take two values,
zero or unity, and it changes randomly over time or trade, with the fraction
$\rho(<1)$ of time any agent chooses $\lambda=1$. Along with the nature of
steady state distributions $P(m)$, we have studied the variations in the
values of the inequality indices ($g$ and $k$) and the location of the Most
Probable Income or wealth (MPI) or the Self-Organized Poverty Line (SOPL, if
any) of income or wealth as the saving propensity $\lambda$ of the agents
(fixed over time) and the time fraction $\rho$ of choosing the saving
propensity $\lambda=1$ over the other choice $\lambda=0$ are varied.
The rest of the paper is organized as follows. In section II, we describe the
kinetic-exchange model of trade between the agents followed by calculation of
wealth distribution and inequality indices obtained through Monte Carlo
simulations by invoking different kinds of kinetic exchange dynamics. In
section III we summarize our work and conclude with useful discussions.
## II Models and Simulation Results
In this section, we will discuss the numerical studies of the kinetic exchange
models of income distribution (among agents or traders having saving
propensities $\lambda$), employing two kinds of dynamics. One is for the
straightforward randomly chosen pair-wise money conserving kinetic exchange
process (with distributions having most probable income or MPI ), and the
other is for similar money conserving kinetic exchange between the agents of a
pair, where one agent always has the lowest income and the other one is chosen
randomly from the rest (with the resulting distributions having self-organized
poverty line or SOPL). The agents are considered to have the same (and fixed
over time) saving propensities $\lambda$ (values in the range 0 to $1_{-}$) in
one case and in the other case every agent has a choice of $\lambda$ between 0
and 1, with the average fraction $\rho$ (maximum value $1_{-}$) of choosing
$\lambda=1$. The resulting income distributions are characterized by the
values of MPI or SOPL (whichever applicable) and of the inequality indices
Gini ($g$) and Kolkata ($k$), introduced earlier.
We performed numerical simulations with fixed number of agents $N$ and total
money $M=N$ for both the models. In our simulation, one Monte Carlo step is
equivalent to $N$ pairwise exchanges. We take $N=1000$ agents and total money
$M=N=1000$, initially distributed over all the agents uniformly. The steady
state distribution is measured over $10^{3}$ Monte Carlo time steps after
relaxing $10^{6}$ Monte Carlo time steps for equilibration.
Figure 2: The pure kinetic exchange dynamics: (a) The steady state income
distributions $P(m)$ for different values of the saving propensity $\lambda$ are shown in the
plot. (Inset) The same distributions are shown in semi-log scale indicating an
exponential nature of the tail end of the distributions. (b) The variation of
Kolkata index ($k$), Gini index ($g$) and Most Probable Income (MPI) are shown
against fixed saving propensity $\lambda$ (maximum value of $\lambda$ is
$1_{-}$).
### II.1 Uniform saving income exchange models and inequality indices
In this model, we consider a conserved system where the total money $M$ and the
total number of agents $N$ are fixed. Each agent $i$ possesses money $m_{i}(t)$
at time $t$, and in any interaction a pair of agents $i$ and $j$ exchange their
money such that their total money is conserved. For a fixed saving propensity
$\lambda$ of the agents, the exchange of money between a randomly chosen pair
of agents can be expressed as
$\begin{split}m_{i}(t+1)&=\lambda
m_{i}(t)+\epsilon_{ij}((1-\lambda)(m_{i}(t)+m_{j}(t)))\\\ m_{j}(t+1)&=\lambda
m_{j}(t)+(1-\epsilon_{ij})((1-\lambda)(m_{i}(t)+m_{j}(t)))\end{split}$ (1)
where $0\leq\epsilon_{ij}\leq 1$ is a random fraction varying in every
interaction.
The steady state income distributions $P(m)$ for different fixed values of the
saving propensity $\lambda$ are shown in Fig. 2(a). For $\lambda=0$, the
steady state follows a Gibbs distribution and the Most Probable Income (MPI)
is at $m=0$. The MPI per agent shifts from $m=0$ to $m=1$ as
$\lambda\to 1$. Furthermore, the semi-log plot (shown in the inset of Fig. 2(a))
indicates an exponential nature for the tail end of the distribution.
In Fig. 2(b), we show the variation of the Kolkata index ($k$), the Gini index ($g$)
and the Most Probable Income (MPI) against the saving propensity $\lambda$. The Gini
coefficient diminishes from $0.5$ to $0$ as $\lambda$ increases from
$0$ to $1$. Similarly, the $k$-index value reduces from $0.68$ to $0.5$ as
$\lambda$ increases from $0$ to $1$.
### II.2 Self-organized minimum poverty line model and inequality indices
Here we consider a model where one of the agents in the chosen pair is
necessarily the poorest at that point of time and the other one is randomly
chosen from the rest (where, in the steady state, a self-organized minimum
poverty level or SOPL appears). The exchange of money will follow the same
rule as described by Eqn. 1.
In Fig. 3(a), the steady state income distributions $P(m)$ for different
$\lambda$ values are shown, and the same distributions are shown in semi-log
scale in the inset, indicating an exponential nature of the tail end of the
distributions. In Fig. 3(b), the variation of the Kolkata index ($k$), the Gini index
($g$) and the Self-Organized Poverty Line (SOPL) is shown against the saving
propensity $\lambda$. The figure indicates that the inequality of the distribution
diminishes as $\lambda\to 1$ and that the SOPL rises to 1 as $\lambda\to 1$.
Figure 3: Self-Organized Poverty Line model: (a) Steady state income
distributions $P(m)$ for different fixed values of the saving propensity $\lambda$ are shown. (Inset)
The same distributions are shown in semi-log scale indicating an exponential
nature of the tail end of the distributions. (b) The variation of Kolkata
index ($k$), Gini index ($g$) and location of the Self-Organized Poverty Line
(SOPL) are shown against fixed saving propensity $\lambda$ (maximum value of
$\lambda=1_{-}$).
### II.3 Indices for the pure kinetic exchange model with two choices of $\lambda$
Here we consider the same exchange dynamics as described in subsection II.1,
the only difference being that each agent has two choices of $\lambda$ over
time. In our study, the agents can take the saving propensity either 1 (with
probability $\rho$) or 0 (with probability $1-\rho$) over time.
In Fig. 4(a), the steady state income distribution $P(m)$ is shown for
different values of the probability $\rho$. We observe that the most probable income (MPI)
occurs at $m=0$, and the semi-log plots of the distributions indicate the
exponential nature of the tail end of the distribution (see inset of Fig.
4(a)). The Kolkata index ($k$) and Gini index ($g$) rise slowly against $\rho$
(see Fig. 4(b)).
Figure 4: The pure kinetic exchange model with two choices of $\lambda$. (a)
Steady state income distributions $P(m)$ for different values of the probability $\rho$
are shown. (Inset) The same distributions are shown in semi-log scale
indicating an exponential nature of the tail end of the distributions. (b) The
variation of Kolkata index ($k$) and Gini index ($g$) are shown against the
probability for taking $\lambda=1$ i.e. $\rho$ (maximum value of
$\rho=1_{-}$).
### II.4 Self-organized minimum poverty line model: Indices for two choices
of $\lambda$
As before, we consider the same dynamics as described in subsection II.2, but
the difference is that each agent has two choices of $\lambda$ over time. In
our study, the agents can take the saving propensity either 1 (with
probability $\rho$) or 0 (with probability $1-\rho$) over time. In Fig. 5(a),
the steady state income distribution $P(m)$ is shown for different values of the probability
$\rho$. The semi-log plots of the distributions indicate the exponential
nature of the distribution (see inset of Fig. 5(a)). The variation of the Kolkata
index ($k$), the Gini index ($g$) and the Self-Organized Poverty Line (SOPL) is
shown against $\rho$ in Fig. 5(b). A very slow increasing trend of the
inequality indices with $\rho$ can be observed here, while the SOPL of the
distribution slowly decreases with $\rho$.
Figure 5: Self-organized minimum poverty line model for two choices of saving
propensity: (a) The steady state income distributions $P(m)$ for different values
of the probability $\rho$ are shown. (Inset) The same distributions are shown in
semi-log scale indicating an exponential nature of the tail end of the
distributions. (b) The variation of the Kolkata index ($k$), the Gini index ($g$) and
the Self-Organized Poverty Line (SOPL) is shown against $\rho$.
A very slow increasing trend of the inequality indices and a slow
decreasing trend of SOPL against $\rho$ can be observed here (maximum value of
$\rho=1_{-}$).
## III Summary & Discussion
We have studied here numerically the variations of the income or wealth
distribution $P(m)$ generated from the kinetic exchange models described in section II,
and extracted the variations in the values of the Gini $g$ and Kolkata $k$ indices
with the saving propensity $\lambda$ of the agents, with two different kinds
of trade or (kinetic) exchange dynamics. First, where the exchange occurs
among randomly chosen pairs of agents (where the most probable income in the
steady state increases with increasing saving propensity $\lambda$) and
second, where one of the agents or traders in the chosen pair is the poorest
one at that point of time (trade, exchange or scattering) while the other one
is randomly picked up from the rest (where, in the steady state, a self-
organized minimum poverty level or SOPL of income or wealth appears). These
studies have also been made for two different kinds of saving behaviors.
First, where each agent has the same value of $\lambda$ (constant over time)
that varies from zero (exactly soluble; Gibbs limit) to values infinitesimally
smaller than unity (where dynamics stops). Second, where $\lambda$ for each
agent can take two values, zero and unity, and it changes randomly over time
or trade, with the fraction $\rho(<1)$ of time any agent chooses $\lambda=1$.
Along with the nature of steady state distributions $P(m)$, we have studied
the variations in the values of the inequality indices ($g$ and $k$) and the
location of the Most Probable Income (MPI) or the Self-Organized Poverty Line
(SOPL, if any) of income or wealth as the saving propensity $\lambda$ of the
agents (fixed over time) and as the time fraction $\rho$ of choosing the
saving propensity $\lambda=1$ over the other choice $\lambda=0$.
As shown in Figs. 2-5, the most-probable income or MPI (where $P(m)$ is
highest) or the self-organized poverty line, the SOPL (below which $P(m)=0$
and usually the MPI coincides with the SOPL) increases with increasing saving
propensity or $\lambda$. Generally speaking, in all these fixed saving
propensity cases (see Figs. 2 and 3), the income or wealth inequalities, as
measured by the index values of Gini $g$ and Kolkata $k$ (= 0.5 and $\simeq$
0.68 respectively, in the pure kinetic exchange or Gibbs case, discussed
analytically in the Introduction) decreases with increasing saving propensity
($\lambda$) of the agents. This encouraging observation from kinetic exchange
models may have important economic policy implications.
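As an illustration of how these index values can be extracted in practice, the following sketch (our own illustration) computes $g$ from the standard sorted-sample formula and $k$ as the fixed point $L(k)=1-k$ of the Lorenz function $L$; for an exponentially distributed (Gibbs) sample one should recover $g=0.5$ and $k\simeq 0.68$ as quoted above.

```python
import numpy as np

def gini(m):
    m = np.sort(m)
    n = len(m)
    i = np.arange(1, n + 1)
    # standard formula for the Gini coefficient of a sorted sample
    return 2 * np.sum(i * m) / (n * np.sum(m)) - (n + 1) / n

def kolkata(m):
    m = np.sort(m)
    lorenz = np.cumsum(m) / np.sum(m)      # Lorenz function L(p), poorest first
    p = np.arange(1, len(m) + 1) / len(m)
    return p[np.argmin(np.abs(lorenz - (1 - p)))]  # solve L(k) = 1 - k

rng = np.random.default_rng(2)
sample = rng.exponential(size=10**6)       # Gibbs (exponential) distribution
print(gini(sample), kolkata(sample))       # ~0.5 and ~0.68
```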
## Acknowledgments
BKC is thankful to the Indian National Science Academy for their Senior
Scientist Research Grant. BJ is grateful to the Saha Institute of Nuclear
Physics for the award of their Undergraduate Associateship.
# On the diffusion approximation of the stationary radiative transfer equation
with absorption and emission
Elena Demattè and Juan J.L. Velázquez
Institute for Applied Mathematics, University of Bonn, 53115 Bonn, Germany.
###### Abstract
We study the situation in which the distribution of temperature of a body is
due to its interaction with radiation. We consider the boundary value problem
for the stationary radiative transfer equation under the assumption of local
thermodynamic equilibrium. We study the diffusion equilibrium approximation in
the absence of scattering. We consider an absorption coefficient independent of
the frequency $\nu$ (the so-called Grey approximation) and the limit when the
photons' mean free path tends to zero, i.e. the absorption coefficient tends
to infinity. We show that the density of radiative energy $u$, which is
proportional to the fourth power of the temperature due to the Stefan-
Boltzmann law, solves in the limit an elliptic equation whose boundary value
can be determined uniquely in terms of the original boundary condition. We
derive formally, with the method of matched asymptotic expansions, the
boundary condition for the limit problem, and we prove rigorously the
convergence to the solution of the limit problem with a careful analysis of
some non-local integral operators. The method developed here allows us to
prove all the results using only maximum principle arguments.
Acknowledgments: The authors gratefully acknowledge the financial support of
the collaborative research centre The mathematics of emerging effects (CRC
1060, Project-ID 211504053) and the Bonn International Graduate School of
Mathematics (BIGS) at the Hausdorff Center for Mathematics, funded through the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under
Germany's Excellence Strategy – EXC-2047/1 – 390685813.
Keywords: Radiative transfer equation, diffusion approximation, stationary
solution, maximum principle, boundary layers.
Statements and Declarations: The authors have no relevant financial or non-
financial interests to disclose.
Data availability: Data sharing not applicable to this article as no datasets
were generated or analysed during the current study.
###### Contents
1. 1 Introduction
1. 1.1 Motivation and previous results
2. 1.2 Structure of the paper and notation
2. 2 Derivation of the limit problem
1. 2.1 Formal derivation of the limit problem in the diffusive equilibrium approximation
2. 2.2 Formal derivation of boundary condition for the limit problem in the diffusive equilibrium approximation
3. 2.3 Some properties of the kernel
3. 3 The boundary condition for the limit problem
1. 3.1 The homogeneous equation
2. 3.2 Well-posedness theory for the inhomogeneous equation
3. 3.3 Asymptotic behavior of the bounded solution of the inhomogeneous equation
4. 4 Rigorous proof of the diffusion equilibrium approximation for constant absorption coefficient
1. 4.1 Derivation of the equation for $u^{\varepsilon}$
2. 4.2 Uniform boundedness of $u^{\varepsilon}$
3. 4.3 Estimates of $u^{\varepsilon}-\overline{u}$ near the boundary $\partial\Omega$
4. 4.4 Convergence of $u^{\varepsilon}$ to the solution of the new boundary value problem
5. 5 Diffusion approximation for space dependent absorption coefficient
1. 5.1 The limit problem and the boundary layer equation
2. 5.2 Rigorous proof of the convergence: equation for $u^{\varepsilon}$ and properties of the kernel
3. 5.3 Rigorous proof of the convergence: uniform boundedness of $u^{\varepsilon}$
4. 5.4 Rigorous proof of the convergence: estimates of $u^{\varepsilon}-\overline{u}$ near the boundary $\partial\Omega$
5. 5.5 Rigorous proof of the convergence of $u^{\varepsilon}$ to the solution of the new boundary value problem
## 1 Introduction
The radiative transfer equation is the kinetic equation which describes the
distribution of energy and directions of motion of a set of photons, which can
be absorbed and scattered by a medium. This equation can be used to describe
the transfer of heat in a material due to radiative processes. The radiative
transfer equation can be written in its most general form as
$\frac{1}{c}\partial_{t}I_{\nu}(x,n,t)+n\cdot\nabla_{x}I_{\nu}(x,n,t)=\alpha_{\nu}^{e}-\alpha_{\nu}^{a}I_{\nu}(x,n,t)-\alpha_{\nu}^{s}I_{\nu}(x,n,t)+\alpha_{\nu}^{s}\int_{\mathbb{S}^{2}}K(n,n^{\prime})I_{\nu}(x,n^{\prime},t)dn^{\prime}.$
(1.1)
We denote by $I_{\nu}(x,n,t)$ the intensity of radiation (i.e. radiating
energy) of frequency $\nu$ at position $x\in\Omega$, in direction
$n\in\mathbb{S}^{2}$ and at time $t\geq 0$. The coefficients
$\alpha_{\nu}^{a}$, $\alpha_{\nu}^{e}$ and $\alpha_{\nu}^{s}$ are,
respectively, the absorption, the emission and the scattering coefficients. In
the scattering term the kernel is normalized such that
$\int_{\mathbb{S}^{2}}K(n,n^{\prime})dn^{\prime}=1$. The speed of light is
denoted by $c$.
In this paper we focus on the stationary problem and on processes, where the
scattering is negligible. Therefore the equation we will study reduces to
$n\cdot\nabla_{x}I_{\nu}\left(x,n\right)=\alpha_{\nu}^{e}-\alpha^{a}_{\nu}I_{\nu}\left(x,n\right).$
(1.2)
In this article we consider the situation of local thermal equilibrium (LTE),
which means that at every point $x\in\Omega$ there is a well-defined
temperature $T(x)\geq 0$. According to Kirchhoff's law (cf. [37]) this yields
the following relation between the absorption and emission coefficients
$\alpha_{\nu}^{e}(x)=\alpha_{\nu}^{a}(x)B_{\nu}(T(x)),$
where $B_{\nu}(T(x))=\frac{2h\nu^{3}}{c^{2}}\frac{1}{e^{\frac{h\nu}{kT}}-1}$
is the Planck emission of a black body at temperature $T(x)$, with $h$ the
Planck constant and $k$ the Boltzmann constant. Moreover, it is well-known that
$\int_{0}^{\infty}B_{\nu}\left(T(x)\right)\;d\nu=\sigma T^{4}(x),$ (1.3)
where $\sigma=\frac{2\pi^{4}k^{4}}{15h^{3}c^{2}}$ is the Stefan-Boltzmann
constant. For simplicity, from now on we will denote the absorption
coefficient $\alpha_{\nu}^{a}$ by $\alpha_{\nu}$.
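The relation (1.3) follows from the substitution $x=\frac{h\nu}{kT}$, which gives $\int_{0}^{\infty}B_{\nu}(T)\;d\nu=\frac{2k^{4}T^{4}}{h^{3}c^{2}}\int_{0}^{\infty}\frac{x^{3}}{e^{x}-1}\;dx$, where the remaining integral equals $\frac{\pi^{4}}{15}$. A quick numerical sanity check of this last value (our own illustration, using scipy) is:

```python
import numpy as np
from scipy.integrate import quad

# the tail beyond x = 100 is negligible, so a finite upper limit suffices
val, err = quad(lambda x: x**3 / np.expm1(x), 0, 100)
print(val, np.pi**4 / 15)   # both ~6.49394
```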
The solution $I_{\nu}(x,n)$ of (1.2) can be used to compute the flux of energy
at each point $x\in\Omega$ of the material, which is given by
$\mathcal{F}(x):=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;n\>I_{\nu}\left(x,n\right).$
In the stationary case, if the temperature is independent of time at every
point, the total incoming and outgoing flux of energy should balance. In
mathematical terms this can be formulated by the condition for the flux of
energy to be divergence free, i.e.
$\nabla_{x}\cdot\mathcal{F}(x)=0.$
This situation is referred to in the physical literature as pointwise radiative
equilibrium.
We study the situation when the radiation comes from a source at a very large,
formally infinite, distance. This can be formalized in mathematical terms by
means of the boundary condition
$I_{\nu}\left(x,n\right)=g_{\nu}\left(n\right)\geq 0$ (1.4)
if $x\in\partial\Omega$ and $n\cdot N_{x}<0$ for $N_{x}$ the outer normal
vector of the boundary at point $x\in\partial\Omega$. Throughout this paper we
will consider $\Omega\subset\mathbb{R}^{3}$ to be a bounded convex domain with
$C^{3}$-boundary.
We are concerned in this paper with the study of the diffusion approximation
that arises in optically thick media. This means that we consider the case
when the optical depth is very large compared to the characteristic length of
the system. Hence, we rescale and for $\varepsilon\ll 1$ we consider the
following boundary value problem
$\begin{cases}n\cdot\nabla_{x}I_{\nu}\left(x,n\right)=\frac{\alpha_{\nu}(x)}{\varepsilon}\left(B_{\nu}\left(T\left(x\right)\right)-I_{\nu}\left(x,n\right)\right)&x\in\Omega,\\\
\nabla_{x}\cdot\mathcal{F}=0&x\in\Omega,\\\
I_{\nu}\left(x,n\right)=g_{\nu}\left(n\right)&x\in\partial\Omega\text{ and
}n\cdot N_{x}<0.\end{cases}$ (1.5)
For the solution to this equation we will prove that the intensity of
radiation $I_{\nu}(x,n)$ is approximately the Planck distribution
$B_{\nu}(T(x))$ with the local temperature at each point $x\in\Omega$, i.e. we
will show
will show
$I_{\nu}^{\varepsilon}(x,n)\to B_{\nu}(T(x))\;\;\;\;\;\;\;\;\text{ as
}\varepsilon\to 0.$ (1.6)
Notice, however, that this approximation cannot be expected to hold for points
$x$ that are close to the boundary $\partial\Omega$. The situation in which
(1.6) holds is denoted in the physical literature as the diffusion equilibrium
approximation (see e.g. [24] and [37]). More precisely, we will consider the
limit problem when $\varepsilon\to 0$ and we will rigorously prove that it is
given by a Dirichlet problem for the heat equation for the temperature, with
boundary value uniquely determined by the incoming source $g_{\nu}(n)$ and the
outer normal $N_{x}$ for $x\in\partial\Omega$. The main result we will prove
in this paper is for the so-called Grey approximation, i.e. the case when the
absorption coefficient is independent of the frequency $\nu$. The main reason
is that some of the estimates are already very technical in this case.
Hopefully, the type of methods we are developing in this paper can be extended
to the non-Grey case.
###### Theorem 1.1.
Let $\alpha_{\nu}(x)=\alpha(x)$ be independent of $\nu$, $\alpha\in
C^{3}\left(\Omega\right)$, $g_{\nu}\geq 0$ with
$\int_{0}^{\infty}g_{\nu}(n)\;d\nu\in L^{\infty}\left(\mathbb{S}^{2}\right)$ in
(1.5), and let $\Omega$ be bounded and convex with $C^{3}$-boundary and
strictly positive curvature. Let $T_{\varepsilon}$ be the temperature
associated to the intensity $I_{\nu}$ solving the boundary value problem
(1.5). Then there exists a functional
$T_{\Omega}:L^{\infty}\left(\mathbb{S}^{2},L^{1}\left(\mathbb{R}_{+}\right)\right)\to
C\left(\partial\Omega\right)$ which maps $g_{\nu}$ to a continuous function
$T_{\Omega}[g_{\nu}](p)$ on the boundary $p\in\partial\Omega$ such that
$T_{\varepsilon}(x)\to T(x)$
uniformly on every compact subset of $\Omega$, where $T$ is the solution to
the Dirichlet problem
$\begin{cases}-\operatorname*{div}\left(\frac{\sigma 4T^{3}}{\alpha}\nabla
T\right)=0&x\in\Omega,\\\
T(p)=T_{\Omega}[g_{\nu}](p)&p\in\partial\Omega.\end{cases}$
### 1.1 Motivation and previous results
The computation of the distribution of temperature of matter interacting with
radiation is an important issue in many physical applications and, in
addition, it raises interesting mathematical questions. The kinetic equation
describing the interaction of matter with radiation is the radiative transfer
equation. A detailed explanation of its derivation and its main properties can
be found in [5, 24, 26, 31, 37]. In particular, in [24, 37] there is an
extensive discussion about the diffusion equilibrium approximation and the
situations where it can be expected or not.
Since the early result by Compton [6] in 1922, the interaction of a gas with
radiation has been extensively studied. Milne, for example, studied a
simplified model where the radiation is monochromatic and the gas density
depends only on one space variable (cf. [25]).
A question which has been much studied in the mathematical literature is the
situation in which $\alpha_{\nu}^{e}=\alpha_{\nu}^{a}=0$ in (1.1), i.e. the
interaction between matter and radiation is due to scattering only. In this
case the problem reduces to
$\frac{1}{c}\partial_{t}I_{\nu}(x,n,t)+n\cdot\nabla_{x}I_{\nu}(x,n,t)=-\alpha_{\nu}^{s}(x)I_{\nu}(x,n,t)+\alpha_{\nu}^{s}(x)\int_{\mathbb{S}^{2}}K(n,n^{\prime})I_{\nu}(x,n^{\prime},t)dn^{\prime}.$
(1.7)
The same equation arises also in the study of neutron transport, a problem
which has been extensively studied in mathematics.
It turns out that in the Grey approximation the problem (1.5) can be reduced
exactly to the study of a particular neutron transport equation, namely the
case when the kernel $K$ is constant equal to $1$. Indeed, denoting
$J(x,n)=\int_{0}^{\infty}I_{\nu}(x,n)\;d\nu$ and combining the first two
equations of (1.5) we obtain
$\int_{0}^{\infty}B_{\nu}(T(x))\;d\nu=\fint_{\mathbb{S}^{2}}J(x,n)\;dn=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}J(x,n)\;dn$.
Hence, equation (1.5) is equivalent to the study of
$\begin{cases}n\cdot\nabla_{x}J(x,n)=\frac{\alpha(x)}{\varepsilon}\left(\fint_{\mathbb{S}^{2}}J(x,n)\;dn-J(x,n)\right)&\text{
if }x\in\Omega,\\\ J(x,n)=\int_{0}^{\infty}g_{\nu}(n)\;d\nu&\text{ if
}x\in\partial\Omega,\;n\cdot N_{x}<0.\end{cases}$ (1.8)
However, the equivalence between (1.5) and (1.8) does not hold in the non-Grey
case. The properties of equation (1.8) as well as the diffusion approximation
limit have been studied for a long time, starting with the seminal paper [4]
of 1979, where the stationary version of (1.7) was studied. In that work the
authors proved the diffusion approximation for the neutron transport equation
using a stochastic method. The result they obtained for $J$ would imply in
particular our main Theorem 1.1.
More recently, in a series of papers [16, 33, 34, 35, 36] Yan Guo and Lei Wu
have studied the diffusion approximation of both the stationary and the time
dependent neutron transport equation (1.7) when $K\equiv 1$ and
$\alpha^{s}_{\nu}(x)\equiv 1$, independent of $x$, for different classes of
boundary conditions in $2$ and $3$ dimensions, in bounded convex domains or
annuli (in 2D). In particular, the result in [34] implies again the main
Theorem 1.1 when $\alpha\equiv 1$. Their proof relies on PDE methods and not
on a stochastic approach. Moreover, they also computed the geometric
approximation of the structure of the boundary layer.
The main goal of this paper is to develop a method which allows us to obtain
diffusive limit approximations like the one in Theorem 1.1 for the radiative
transfer equation (1.1) using PDE methods that rely only on maximum principle
tools. These tools are different from those used by Guo and Wu. Specifically,
the method in [16, 33, 34, 35, 36] relies on the $L^{2}$-$L^{p}$-$L^{\infty}$
estimates that were introduced for the analysis of kinetic equations by Yan
Guo in [15]. In particular, that method is based on estimates of the velocity
distribution $J$. Our approach is based on the direct derivation of estimates
for the temperature $T(x)$ associated to a given distribution of radiation
$I_{\nu}(x,n)$. More precisely, equation (1.5) can be reformulated as a
non-local integral equation for the temperature (cf. [21]). In the case of the
Grey approximation we have the following equation for $u(x)=4\pi\sigma
T^{4}(x)$
$u(x)-\int_{\Omega}K_{\varepsilon}(x,\eta)u(\eta)\;d\eta=S(x),$ (1.9)
where the precise form of the kernel $K_{\varepsilon}$ and of the source
$S(x)$ are discussed in Sections 4 and 5.
Equation (1.9) can be thought of as a non-local elliptic equation which, in
particular, satisfies good properties such as the maximum principle.
Specifically, our proof relies only on finding suitable supersolutions and
applying the maximum principle. These supersolutions are constructed by
mimicking particular solutions of elliptic equations with constant
coefficients. They also give insight into the behavior of the solution near
the boundary $\partial\Omega$. Our hope is that the method developed in this
paper can be extended to the non-Grey case, at least for some suitable choice
of $\alpha_{\nu}(x)$. One reason why this should be possible is that [21]
shows how to solve the non-local equation (1.9) for some class of non-Grey
problems.
Another type of diffusion approximation for (1.1) is the one in [13, 14], in
which the situation when $\alpha_{\nu}^{s}\to\infty$ while $\alpha_{\nu}^{e}$
and $\alpha_{\nu}^{a}$ remain bounded has been considered, combined with the
equation for balancing the energy, either in the one dimensional case or in
the whole space.
The well-posedness and the diffusion approximation of the time dependent
problem (1.7) in the framework of $L^{1}$-functions, using the theory of
$m$-accretive operators, has been studied in a series of papers [2, 3].
Seemingly, although the techniques in these papers allow one to develop a
theory for the time dependent problem, they do not provide information about
the stationary solution.
Some versions of the stationary problem involving the radiative transfer
equation can be found in [22, 23, 27, 32]. The problems studied in these
papers also include heat conduction and different types of boundary conditions
from the ones in our model (for a more detailed discussion see [21]).
It is important to emphasize that equation (1.5) is very different in the non-
Grey case from the scattering problem (1.7), in the sense that the system
(1.5) provides an equation for the temperature. Specifically, the equation
$\nabla_{x}\cdot\mathcal{F}=0$ is automatically satisfied in the stationary
version of (1.7). Physically, this is due to the fact that the radiation
arriving at every point is just deflected. Equation (1.5) plays the same role
as the Laplace equation in order to describe the stationary distribution of
temperature in systems where the energy is transported by means of heat
conduction. In the case of (1.5) the energy is transported by means of
radiation, which results in non-locality in the determination of the
temperature distribution. The fact that the determination of the temperature
in a body where the energy is transported by radiation is non-local was first
formulated in [18]. Since the approximation (1.6) fails at the boundary,
boundary layers appear in which the intensity of radiation
$I_{\nu}^{\varepsilon}$ differs from the Planck distribution $B_{\nu}(T)$.
Hence, a careful analysis must be made of these boundary layers, where the
radiation is out of equilibrium. This will be essential in order to determine
the functional $T_{\Omega}$ in Theorem 1.1, which defines the temperature at
every point of the boundary.
Finally, we mention that one can consider more complicated interactions
between radiation and matter, for instance when the matter that interacts with
radiation is a moving fluid (cf. [11, 12, 24, 37]). The case when the
interacting medium is a Boltzmann gas whose molecules can be in different
energy layers has been considered in [8, 20, 26, 30].
### 1.2 Structure of the paper and notation
We aim to prove Theorem 1.1. In Section 2 we will formally derive the limit
problem using ideas from the theory of matched asymptotic expansions and from
boundary layers theory. Sections 3 and 4 deal with the case of constant
absorption coefficient $\alpha\equiv 1$, while Section 5 shows the diffusion
approximation in the case of space dependent coefficient $\alpha\in
C^{3}(\Omega)$. In Section 3 we will study the properties of the solution for
the non-local integral equation describing the distribution of energy at the
boundary layer. In particular, this will allow us to construct in the case of
$\alpha\equiv 1$ the functional $T_{\Omega}$ of Theorem 1.1 which assigns the
boundary value of the limit solution at every point of the boundary. Important
tools we will use for the well-posedness are the maximum principle and a
combination of Fourier methods with the classical tools of sub- and
supersolutions, which resembles the Perron method for the Laplace equation.
For the asymptotic theory we use the theory of the Fourier transform for
distributions.
Section 4 deals with the rigorous proof of the convergence to the diffusion
equilibrium approximation for the limit problem in the constant absorption
coefficient case. Here the main tool is the maximum principle for the non-
local integral operator we can construct for the boundary value problem (1.5).
Finally, in Section 5 we consider the more general Grey approximation in which
$\alpha\in C^{3}(\overline{\Omega})$ is not constant. We will derive formally
the limit problem and then we will extend the theory developed in Sections 3
and 4 for this case. We will hence prove again by means of the maximum
principle and suitable supersolutions the convergence to the diffusion
equilibrium approximation.
We introduce here some notation we will use throughout this paper. First of
all, $\Omega\subset\mathbb{R}^{3}$ is an open bounded convex domain with
$C^{3}$-boundary and strictly positive curvature. In order to avoid
meaningless constants we assume without loss of generality that
$0\in\overline{\Omega}$. $N_{x}$ indicates always the outer normal vector for
a point $x\in\partial\Omega$.
We assume $\Omega$ to be convex in order to simplify some geometrical
arguments. First of all, this assumption implies that for every point
$p\in\partial\Omega$ the tangent plane to the boundary at $p$ divides the
space $\mathbb{R}^{3}$ into two disjoint half-spaces, one of them containing
the whole domain $\Omega$. This will be used several times in the definition,
for every point $p\in\partial\Omega$, of the isometric transformation mapping
$p$ to $0$ and $\Omega$ into the positive half-space
$\mathbb{R}_{+}\times\mathbb{R}^{2}$. The assumption of convexity can be
relaxed and the geometrical estimates should still hold, but we would need a
more careful analysis of the geometry of the problem.
Moreover, for $g_{\nu}(n)\geq 0$ with $\int_{0}^{\infty}g_{\nu}(n)\;d\nu\in
L^{\infty}\left(\mathbb{S}^{2}\right)$ we define the norms
$\Arrowvert
g\Arrowvert_{1}:=\int_{0}^{\infty}\int_{\mathbb{S}^{2}}g_{\nu}(n)\;d\nu\;dn$
(1.10)
and
$\Arrowvert
g\Arrowvert_{\infty}:=\sup\limits_{n\in\mathbb{S}^{2}}\left(\int_{0}^{\infty}g_{\nu}(n)\;d\nu\right).$
(1.11)
###### Remark.
The reason why we are assuming the seemingly restrictive boundary condition
(1.4) is that we are supposing that the source of radiation is placed at
infinity. We can obtain results analogous to the ones in this paper if we
consider the more general boundary condition $g_{\nu}(n,x)$ depending also on
$x\in\partial\Omega$. In addition to the assumptions above we need to require
$g_{\nu}(n,x)$ to be a $C^{1}$-function with respect to $x\in\partial\Omega$.
Figure 1: Representation of the boundary value problem.
For any point $p\in\partial\Omega$ we choose a fixed isometry mapping $p$ to
$0$ and the vector $N_{p}$ to $-e_{1}$. We will denote this rigid motion by
$\mathcal{R}_{p}:\mathbb{R}^{3}\to\mathbb{R}^{3}$ with the following
properties
$\mathcal{R}_{p}(p)=0\;\;\;\;\;\;\;\text{ and
}\;\;\;\;\;\;\;\mathcal{R}_{p}(N_{p})=-e_{1}.$ (1.12)
Finally, we define by
$\begin{split}\pi_{\partial\Omega}:\left\\{x\in\mathbb{R}^{3}:\text{dist}(x,\partial\Omega)<\delta\right\\}&\to\partial\Omega\\\
x&\mapsto\pi_{\partial\Omega}(x)\end{split}$ (1.13)
the projection to the unique closest point on the boundary $\partial\Omega$.
This function is continuous and well-defined in a small neighborhood of
$\partial\Omega$, i.e. for $\delta>0$ small enough.
## 2 Derivation of the limit problem
### 2.1 Formal derivation of the limit problem in the diffusive equilibrium
approximation
We first recall how to obtain formally the equation in the interior for the
limit problem. First of all we expand the intensity of radiation
$I_{\nu}\left(x,n\right)=f_{\nu}^{0}\left(x,n\right)+\varepsilon
f_{\nu}^{1}\left(x,n\right)+\varepsilon^{2}f_{\nu}^{2}\left(x,n\right)+\dots$
(2.1)
Substituting it in the first equation of (1.5) and identifying the terms
containing $\varepsilon^{-1}$ and $\varepsilon^{0}$ we see
$f_{\nu}^{0}\left(x,n\right)=B_{\nu}\left(T\left(x\right)\right)$
and
$f_{\nu}^{1}\left(x,n\right)=-\frac{1}{\alpha_{\nu}(x)}n\cdot\nabla_{x}B_{\nu}\left(T\left(x\right)\right).$
Using the second equation in (1.5) and the expansion in (2.1) we deduce
$\begin{split}0&=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;n\cdot\nabla_{x}I_{\nu}\left(x,n\right)\\\
&=\operatorname*{div}\left[\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;nB_{\nu}\left(T\left(x\right)\right)\right]-\varepsilon\operatorname*{div}\left[\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\>\left(n\otimes
n\right)\frac{1}{\alpha_{\nu}(x)}\nabla_{x}B_{\nu}\left(T\left(x\right)\right)\right]\\\
&=-\varepsilon\frac{4}{3}\pi\operatorname*{div}\left(\left(\int_{0}^{\infty}d\nu\frac{1}{\alpha_{\nu}(x)}\nabla_{x}B_{\nu}\left(T\left(x\right)\right)\right)\right),\end{split}$
where we used
$\int_{\mathbb{S}^{2}}dn(n\otimes n)=\frac{4}{3}\pi\text{Id}\;\;\;\;\;\text{
and }\;\;\;\;\;\int_{\mathbb{S}^{2}}dn\;n=0.$
Therefore,
$\operatorname*{div}\left(\kappa\left(T\right)\nabla_{x}T\right)=0,$ (2.2)
where
$\kappa\left(T\right):=\int_{0}^{\infty}d\nu\frac{\partial_{T}B_{\nu}\left(T\left(x\right)\right)}{\alpha_{\nu}(x)}.$
In the particular case of the Grey approximation, when $\alpha_{\nu}(x)=1$, we
have $\kappa(T)=4\sigma T^{3}(x)$. Then, defining $u(x):=4\pi\sigma T^{4}(x)$
and noticing that $4\sigma T^{3}\nabla_{x}T=\sigma\nabla_{x}\left(T^{4}\right)=\frac{1}{4\pi}\nabla_{x}u$,
equation (2.2) reduces to
$\Delta u=0.$
This is the limit problem we will study.
### 2.2 Formal derivation of boundary condition for the limit problem in the
diffusive equilibrium approximation
In order to obtain the intensity of radiation close to the boundary of
$\Omega$ we derive a boundary layer equation, whose solution will be used to
determine the value of the temperature at the boundary by means of a matching
argument. Suppose that $x_{0}\in\partial\Omega$; without loss of generality we
can assume $x_{0}=0$ and $N_{x_{0}}=N=-e_{1}$, using the rigid motion
$\mathcal{R}_{x_{0}}$ defined in (1.12) and putting
$\overline{g}_{\nu}(n):=g_{\nu}\left(\mathcal{R}^{-1}_{x_{0}}(n)\right)$. We
rescale $x=\varepsilon y$, where $y\in\frac{1}{\varepsilon}\Omega$. At leading
order we have $\alpha_{\nu}(x)=\alpha_{\nu}(\varepsilon
y)=\alpha_{\nu}(0)+\mathcal{O}(\varepsilon)$. Taking $\varepsilon\to 0$ we
obtain that the intensity of radiation satisfies
$\begin{cases}n\cdot\nabla_{y}I_{\nu}\left(y,n\right)=\alpha_{\nu}(0)\left(B_{\nu}\left(T\left(y\right)\right)-I_{\nu}\left(y,n\right)\right)&y\in\mathbb{R}_{+}\times\mathbb{R}^{2}\\\
\nabla_{y}\cdot\mathcal{F}=0&y\in\mathbb{R}_{+}\times\mathbb{R}^{2}\\\
I_{\nu}\left(y,n\right)=\overline{g}_{\nu}\left(n\right)&y\in\\{0\\}\times\mathbb{R}^{2}\text{
and }n\cdot N<0\end{cases}$ (2.3)
The first equation can be solved for $I_{\nu}$ using the method of
characteristics. Given $y\in\mathbb{R}_{+}\times\mathbb{R}^{2}$ and
$n\in\mathbb{S}^{2}$ with $n\cdot N<0$ we call $Y(y,n)$ the unique point
belonging to
$\partial\left(\mathbb{R}_{+}\times\mathbb{R}^{2}\right)=\\{0\\}\times\mathbb{R}^{2}$
such that
$y=Y(y,n)+s(y,n)n,$
where $s(y,n)=\left|y-Y(y,n)\right|$. Notice that $s(y,n)$ is the distance to
the first intersection point of the boundary $\\{0\\}\times\mathbb{R}^{2}$
with the half line $\\{y-tn:t>0\\}$. For $n\cdot N\geq 0$ we define
$s(y,n)=\infty$. Solving the equation by characteristics we obtain
$\begin{split}I_{\nu}\left(y,n\right)=&\overline{g}_{\nu}(n)e^{-\alpha_{\nu}(0)s(y,n)}\raisebox{2.0pt}{$\chi$}_{n\cdot
N<0}+\int_{0}^{s(y,n)}\;e^{-\alpha_{\nu}(0)t}\alpha_{\nu}(0)B_{\nu}\left(T\left({y-tn}\right)\right)dt.\end{split}$
Using the second equation in the rescaled problem (2.3) we calculate
$\begin{split}0=&\operatorname*{div}\left[\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;n\overline{g}_{\nu}(n)e^{-\alpha_{\nu}(0)s(y,n)}+\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\int_{0}^{s(y,n)}dt\;ne^{-\alpha_{\nu}(0)t}\alpha_{\nu}(0)B_{\nu}\left(T\left({y-tn}\right)\right)\right]\\\
=&-\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;\overline{g}_{\nu}(n)\alpha_{\nu}(0)n\cdot\nabla_{y}s(y,n)e^{-\alpha_{\nu}(0)s(y,n)}\\\
&+\operatorname*{div}\left(\int_{0}^{\infty}d\nu\int_{\mathbb{R}_{+}\times\mathbb{R}^{2}}d\eta\;\frac{y-\eta}{\left|y-\eta\right|^{3}}e^{-\alpha_{\nu}(0)\left|y-\eta\right|}\alpha_{\nu}(0)B_{\nu}\left(T\left({\eta}\right)\right)\right)\\\
=&-\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;\overline{g}_{\nu}(n)\alpha_{\nu}(0)e^{-\alpha_{\nu}(0)s(y,n)}+4\pi\int_{0}^{\infty}d\nu\;\alpha_{\nu}(0)B_{\nu}\left(T\left(y\right)\right)\\\
&-\int_{0}^{\infty}d\nu\int_{\mathbb{R}_{+}\times\mathbb{R}^{2}}d\eta\;\frac{\alpha_{\nu}^{2}(0)}{\left|y-\eta\right|^{2}}e^{-\alpha_{\nu}(0)\left|y-\eta\right|}B_{\nu}\left(T\left({\eta}\right)\right).\end{split}$
(2.4)
The second equality holds via the spherical change of variable
$\begin{split}\mathbb{S}^{2}\times\mathbb{R}_{+}&\to\mathbb{R}_{+}\times\mathbb{R}^{2}\\\
(n,t)&\mapsto\eta=y-tn\end{split}$
so that $n=\frac{y-\eta}{\left|y-\eta\right|}$. For the third equality we use
on the one hand that
$\operatorname*{div}_{y}\left(\frac{y-\eta}{\left|y-\eta\right|^{3}}\right)=4\pi\delta(y-\eta)$
and on the other hand that $n\cdot\nabla_{y}s(y,n)=1$. The latter can be seen
from the fact that
$Y(y,n)+\left(s(y,n)+t\right)n=y+tn=Y(y+tn,n)+s(y+tn,n)n.$
This implies that $Y(y+tn,n)$ is constant in $t$, so that
$s(y+tn,n)=s(y,n)+t$ and therefore
$1=\partial_{t}s(y+tn,n)=\left(\nabla_{y}s(y+tn,n)\right)\cdot n$. We assume
now that the temperature depends only on the first variable. This can be
expected because we are considering limits for $\varepsilon\ll 1$, and hence
the temperature can be considered to depend only on the distance to the point
$x_{0}$, which is approximated by the first variable in this setting. After
the change of variables $\xi=(y_{2}-\eta_{2},y_{3}-\eta_{3})$ and writing,
with a slight abuse of notation, $y-\eta:=y_{1}-\eta_{1}$, the last integral
in (2.4) can be written as
$\int_{0}^{\infty}d\nu\int_{\mathbb{R}_{+}}d\eta\int_{\mathbb{R}^{2}}d\xi\;\alpha_{\nu}^{2}(0)\frac{e^{-\alpha_{\nu}(0)\sqrt{(y-\eta)^{2}+|\xi|^{2}}}}{(y-\eta)^{2}+|\xi|^{2}}B_{\nu}\left(T\left(\eta\right)\right).$
Using polar coordinates we obtain
$\begin{split}\int_{\mathbb{R}^{2}}d\xi\;\frac{e^{-\alpha_{\nu}(0)\sqrt{(y-\eta)^{2}+|\xi|^{2}}}}{(y-\eta)^{2}+|\xi|^{2}}=&\pi\int_{|y-\eta|^{2}}^{\infty}dx\;\frac{e^{-\alpha_{\nu}(0)\sqrt{x}}}{x}=2\pi\int_{\alpha_{\nu}(0)|y-\eta|}^{\infty}dt\;\frac{e^{-t}}{t}=4\pi
K(\alpha_{\nu}(0)|y-\eta|),\end{split}$ (2.5)
where we will denote $K(x)=\frac{1}{2}\int_{|x|}^{\infty}dt\;\frac{e^{-t}}{t}$
as the normalized exponential integral.
Notice that $s(y,n)=\frac{y_{1}}{\left|n\cdot N\right|}$ if $n\cdot N<0$. We
can summarize the equation that the temperature satisfies in the non-Grey
approximation as follows:
$\begin{split}&\int_{0}^{\infty}d\nu\;\alpha_{\nu}(0)B_{\nu}\left(T(y_{1})\right)-\int_{0}^{\infty}d\nu\int_{0}^{\infty}d\eta_{1}\;\alpha_{\nu}^{2}(0)K\left(\alpha_{\nu}(0)\left|y_{1}-\eta_{1}\right|\right)B_{\nu}\left(T\left(\eta_{1}\right)\right)\\\
=&\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;\overline{g}_{\nu}(n)\alpha_{\nu}(0)e^{-\alpha_{\nu}(0)\frac{y_{1}}{\left|n\cdot
N\right|}}.\end{split}$ (2.6)
In the particular case of the Grey approximation, when $\alpha\equiv 1$, using
that $u(y)=4\pi\sigma T^{4}(y)$ and property (1.3) we can simplify equation
(2.6) to
$u(y_{1})-\int_{0}^{\infty}d\eta\;K(y_{1}-\eta)u(\eta)=\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;\overline{g}_{\nu}(n)e^{-\frac{y_{1}}{\left|n\cdot N\right|}}.$ (2.7)
On some occasions, when the dependence of the boundary layer function $u$ on
the point $p\in\partial\Omega$ is needed, we will use the notation
$\overline{u}(y_{1},p)$, where this function solves, according to the rigid
motion $\mathcal{R}_{p}$ in (1.12),
$\overline{u}(y_{1},p)-\int_{0}^{\infty}d\eta\;K(y_{1}-\eta)\overline{u}(\eta,p)=\int_{0}^{\infty}d\nu\int_{n\cdot
N_{p}<0}dn\;g_{\nu}(n)e^{-\frac{y_{1}}{\left|n\cdot N_{p}\right|}}.$ (2.8)
For the rest of Section 2 and Section 3 we will focus on the study of
$\overline{u}(y_{1},p)$ for an arbitrary given $p\in\partial\Omega$; hence we
will write $u(y_{1})=\overline{u}(y_{1},p)$ and $N=N_{p}$. In order to
simplify the reading, from now on we set
$G(x)=\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;\overline{g}_{\nu}(n)e^{-\frac{x}{\left|n\cdot
N\right|}}\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}$ and, if we want to stress
the dependence on $p\in\partial\Omega$, we write
$G_{p}(x)=\int_{0}^{\infty}d\nu\int_{n\cdot
N_{p}<0}dn\;g_{\nu}(n)e^{-\frac{x}{\left|n\cdot
N_{p}\right|}}\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}$.
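For orientation, consider the hypothetical case of isotropic incoming radiation, i.e. $\overline{g}_{\nu}(n)=g_{\nu}$ independent of $n$. Writing $\mu=\left|n\cdot N\right|$ and $dn=2\pi\,d\mu$ on the incoming hemisphere, the source becomes
$G(x)=2\pi\left(\int_{0}^{\infty}g_{\nu}\;d\nu\right)\int_{0}^{1}e^{-\frac{x}{\mu}}\;d\mu=2\pi\left(\int_{0}^{\infty}g_{\nu}\;d\nu\right)E_{2}(x),$
where $E_{2}(x)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{2}}\;dt$ is the exponential integral of order $2$. In particular, $G$ is then continuous for $x>0$ and decays at least like $e^{-x}$, which is the type of bound used below.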
From now on until Section 5 we consider the case of constant absorption
coefficient $\alpha\equiv 1$.
### 2.3 Some properties of the kernel
We consider the kernel $K$ introduced in Section 2.2. We remark that
$K(x)=\frac{1}{2}E_{1}(|x|)$, where $E_{1}$ is the standard exponential
integral function (see [1]). We collect some properties of the normalized
exponential integral.
###### Proposition 2.1.
The function $K$ satisfies $\int_{-\infty}^{\infty}dx\;K(x)=1$, $K\in
L^{{1}}\left(\mathbb{R}\right)\cap L^{{2}}\left(\mathbb{R}\right)$ and the
following estimate holds
$\frac{1}{4}e^{-|x|}\ln(1+\frac{2}{|x|})\leq
K(x)\leq\frac{1}{2}e^{-|x|}\ln(1+\frac{1}{|x|}).$
Moreover, the Fourier transform of $K$ is
$\hat{K}(\xi)=\frac{1}{\sqrt{2\pi}}\frac{\arctan(\xi)}{\xi}$.
###### Proof.
Since $K$ is even and non-negative we can calculate, applying Tonelli's
Theorem,
$\int_{-\infty}^{\infty}K(s)\;ds=2\int_{0}^{\infty}K(s)\;ds=\int_{0}^{\infty}\int_{s}^{\infty}\frac{e^{-t}}{t}\;dt\;ds=\int_{0}^{\infty}\frac{e^{-t}}{t}\int_{0}^{t}\;ds\;dt=\int_{0}^{\infty}e^{-t}\;dt=1.$
This proves also that $K\in L^{{1}}\left(\mathbb{R}\right)$.
For the square integrability we refer to equation 5.1.33 in [1] and see
$\int_{\mathbb{R}}\left|K(x)\right|^{2}\;dx=\ln(2)$. Estimate 5.1.20 in [1]
also implies $\frac{1}{4}e^{-|x|}\ln(1+\frac{2}{|x|})\leq
K(x)\leq\frac{1}{2}e^{-|x|}\ln(1+\frac{1}{|x|})$.
We now move to the computation of the Fourier transform of the kernel $K$. The
kernel is an even function, hence we compute
$\begin{split}\hat{K}(\xi)=&\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-i\xi
x}K(x)\;dx=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}\cos\left(\xi
x\right)\int_{x}^{\infty}\frac{e^{-t}}{t}\;dt\;dx\\\
=&\frac{1}{\sqrt{2\pi}}\frac{1}{\xi}\int_{0}^{\infty}\frac{e^{-t}}{t}\sin(\xi
t)\;dt=\frac{1}{\sqrt{2\pi}}\frac{\arctan(\xi)}{\xi}.\end{split}$
The last identity can be justified noticing that
$F(\xi)=\int_{0}^{\infty}\frac{e^{-t}}{t}\sin(\xi t)\;dt$ has derivative
$F^{\prime}(\xi)=\frac{1}{\xi^{2}+1}$. ∎
The following calculation will also be very useful in the next section.
###### Proposition 2.2.
Let $x>0$. Then we can compute
$\int_{-x}^{\infty}K(s)\;ds=1-\frac{e^{-x}}{2}+xK(x);$ (2.9)
$\int_{x}^{\infty}K(s)\;ds=\frac{e^{-x}}{2}-xK(x);$ (2.10)
$\int_{-x}^{\infty}sK(s)\;ds=\int_{x}^{\infty}sK(s)\;ds=\frac{xe^{-x}}{4}+\frac{e^{-x}}{4}-\frac{x^{2}}{2}K(x);$
(2.11)
###### Proof.
The proof relies on basic integral computations. We have to compute several
integrals, changing the order of integration by Tonelli's Theorem and
integrating by parts. We assume $x>0$. We prove only (2.9), since all the
other formulas can be obtained in a similar way.
$\begin{split}\int_{-x}^{\infty}K(s)\;ds=&\frac{1}{2}\int_{-x}^{0}\int_{|s|}^{\infty}\frac{e^{-t}}{t}\;dt\;ds+\frac{1}{2}\int_{0}^{\infty}\int_{s}^{\infty}\frac{e^{-t}}{t}\;dt\;ds\\\
=&\frac{1}{2}\int_{0}^{x}\frac{e^{-t}}{t}\int_{0}^{t}\;ds\;dt+\frac{1}{2}\int_{x}^{\infty}\frac{e^{-t}}{t}\int_{0}^{x}\;ds\;dt+\frac{1}{2}\\\
=&1-\frac{e^{-x}}{2}+xK(x).\end{split}$
∎
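The identities above are easy to test numerically; the following sketch (our own check, assuming scipy is available, with $E_{1}$ given by scipy.special.exp1) verifies the unit mass of $K$, the formula for $\hat{K}$, and identity (2.10).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

K = lambda x: 0.5 * exp1(abs(x))

# unit mass; the breakpoint avoids evaluating the log-singularity at 0
mass, _ = quad(K, -50, 50, points=[0])
print(mass)                                           # ~1.0

xi = 2.0                                              # Fourier transform at xi
ft, _ = quad(lambda x: np.cos(xi * x) * K(x), -50, 50, points=[0])
print(ft / np.sqrt(2 * np.pi), np.arctan(xi) / (xi * np.sqrt(2 * np.pi)))

x = 1.0                                               # identity (2.10)
lhs, _ = quad(K, x, 50)
print(lhs, np.exp(-x) / 2 - x * K(x))
```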
## 3 The boundary condition for the limit problem
We now start with the boundary layer analysis. This boundary layer problem,
known in the literature as the Milne problem, has been studied with different
approaches, e.g. [2, 3, 7, 10, 19]. We will present another proof of the
boundary layer analysis for the equation (2.7) for the temperature. The proof
uses a combination of comparison arguments and Fourier analysis. In addition,
instead of considering the intensity of radiation, the analysis is made
directly for the temperature.
Our aim is now to solve equation (2.7). Indeed, according to the method of
matched asymptotic expansions, we expect the boundary condition for the limit
problem to be the limit of $u$ as $x\to\infty$ for every point
$p\in\partial\Omega$. In order to simplify the notation we call
$\mathcal{L}\left(u\right)(x):=u(x)-\int_{0}^{\infty}dy\;K\left(x-y\right)u(y)$
and
$\overline{\mathcal{L}}\left(u\right)(x):=u(x)-\int_{-\infty}^{\infty}dy\;K\left(x-y\right)u(y)$.
### 3.1 The homogeneous equation
We start with the study of the homogeneous equation, i.e. (2.7) with
$G(x)\equiv 0$. We will show, using the maximum principle, that any bounded
solution is the trivial solution $u\equiv 0$. We will use the following
version of the maximum principle for the non-local operator $\mathcal{L}$.
###### Lemma 3.1.
Let $\overline{u}\in C\left([0,\infty)\right)$ with
$\lim\limits_{x\to\infty}\overline{u}(x)\in[0,\infty]$ be a supersolution of
(2.7), i.e.
$\begin{cases}\overline{u}(x)-\int_{0}^{\infty}dy\;K\left(x-y\right)\overline{u}(y)\geq
0&x>0,\\\ \overline{u}(x)=0&x<0.\end{cases}$
Then $\overline{u}\geq 0$ for all $x\geq 0$.
###### Proof.
Let us assume the contrary, i.e. that there exists some $x\in[0,\infty)$ such
that $\overline{u}(x)<0$. Since $\overline{u}$ is continuous in $[0,\infty)$
and it has a non-negative (possibly infinite) limit at infinity,
$\overline{u}$ attains its global minimum in $[0,\infty)$, i.e. there exists
some $x_{0}\in[0,\infty)$ such that
$\overline{u}(x_{0})=\inf_{x\in[0,\infty)}\overline{u}(x)<0$. Since
$\overline{u}$ is a supersolution we can calculate
$\begin{split}0\leq&\mathcal{L}(\overline{u})(x_{0})=\overline{u}(x_{0})-\int_{0}^{\infty}dy\;K\left(x_{0}-y\right)\overline{u}(y)\\\
=&\int_{-\infty}^{\infty}dy\;K\left(x_{0}-y\right)\overline{u}(x_{0})-\int_{0}^{\infty}dy\;K\left(x_{0}-y\right)\overline{u}(y)\\\
=&\int_{-\infty}^{0}dy\;K\left(x_{0}-y\right)\overline{u}(x_{0})+\int_{0}^{\infty}dy\;K\left(x_{0}-y\right)\left(\overline{u}(x_{0})-\overline{u}(y)\right)\\\
<&0,\end{split}$
where we used the positivity of $K\left(x_{0}-y\right)$, the fact that the
integral of the kernel $K$ is $1$ and the fact that $\overline{u}(x_{0})$ is
the minimum of $\overline{u}$ and it is strictly negative. This leads to a
contradiction and thus we conclude the proof. ∎
With the maximum principle we can now show the following theorem on the
triviality of the solution to the homogeneous equation.
###### Theorem 3.1.
Assume $u$ is a bounded solution to
$\overline{\mathcal{L}}(u)(x)=0$ (3.1)
with $u(x)\equiv 0$ for $x<0$. Then $u=0$ for almost every $x\in\mathbb{R}$.
###### Proof.
We will construct a supersolution $\overline{u}$ which converges to infinity
and we will apply Lemma 3.1 to the supersolutions $\varepsilon\overline{u}-u$
and $u+\varepsilon\overline{u}$. First of all we see that for $x>0$ the
bounded solution $u$ is continuous; indeed $u(x)=K*u(x)$. Since $K\in
L^{{1}}\left(\mathbb{R}\right)$ and $u\in L^{{\infty}}\left(\mathbb{R}\right)$,
the convolution is a continuous bounded function. Moreover, we can extend $u$
continuously to $x=0$. Indeed, we define
$u(0)=\lim\limits_{x\to 0}\int_{0}^{\infty}dy\;K\left(x-y\right)u(y).$
This limit exists because we can apply the generalized dominated convergence
theorem, using that $K\left(x-y\right)\to K\left(y\right)$ as $x\to 0$
pointwise and in $L^{{1}}\left(\mathbb{R}\right)$.
We consider now the function
$\overline{u}(x)=\begin{cases}1+x&x\geq 0\\\ 0&x<0\end{cases}$
$\overline{u}$ is a supersolution. It is indeed possible to calculate
$\mathcal{L}(\overline{u})(x)$. Let $x\geq 0$. Then
$\mathcal{L}(\overline{u})=\mathcal{L}(Id)+\mathcal{L}(1)$. By a simple
calculation we get on the one hand
$\begin{split}\mathcal{L}(Id)(x)=&x-\int_{0}^{\infty}dy\;K\left(x-y\right)y=x-\int_{-x}^{\infty}dy\;K\left(y\right)(x+y)=\frac{x}{4}e^{-x}-\frac{e^{-x}}{4}-\frac{x^{2}}{2}K(x)\end{split}$
and on the other hand
$\begin{split}\mathcal{L}(1)(x)=&1-\int_{0}^{\infty}dy\;K\left(x-y\right)=1-\int_{-x}^{\infty}dy\;K\left(y\right)=\frac{e^{-x}}{2}-xK(x).\end{split}$
Therefore we want to show that the function
$f(x):=\mathcal{L}(\overline{u})(x)=\frac{e^{-x}}{4}(1+x)-\frac{x}{2}K(x)(2+x)$
is non-negative for all $x\geq 0$. It is not difficult to see that
$f(0)=\frac{1}{4}>0$ and that $\lim\limits_{x\to\infty}f(x)=0$. Moreover, we
can consider the derivative
$\begin{split}f^{\prime}(x)=&\frac{1}{2}\left(e^{-x}-K(x)(2x+2)\right)\leq\frac{1}{2}\left(e^{-x}-\frac{e^{-x}}{2}\ln\left(1+\frac{2}{x}\right)(x+1)\right)\leq
0.\end{split}$
The first inequality is given by the estimate of Proposition 2.1 and the
second one is due to the well-known estimate $\ln(1+x)\geq\frac{2x}{2+x}$. The
non-positivity of the derivative implies that $f$ is monotonically decreasing,
and therefore $\mathcal{L}(\overline{u})(x)=f(x)\geq 0$ for all $x\geq 0$.
Let now $\varepsilon>0$ be arbitrary. We know that $u$ is bounded and
$\overline{u}$ converges to infinity; moreover, both $u$ and $\overline{u}$
are continuous in $[0,\infty)$. Also, $u$ is a solution of the homogeneous
equation and the operator $\mathcal{L}$ is linear. Therefore we can apply
Lemma 3.1 to the supersolutions $\varepsilon\overline{u}-u$ and
$u+\varepsilon\overline{u}$ and get that
$\inf_{x\in[0,\infty)}\left[\varepsilon\overline{u}(x)-u(x)\right]\geq 0$ and
$\inf_{x\in[0,\infty)}\left[\varepsilon\overline{u}(x)+u(x)\right]\geq 0$.
This implies that for any $x\in\mathbb{R}$ the following holds:
$-\varepsilon\overline{u}(x)\leq u(x)\leq\varepsilon\overline{u}(x).$
Since $\varepsilon$ was arbitrary we conclude $u(x)=0$ for all
$x\in\mathbb{R}$. ∎
### 3.2 Well-posedness theory for the inhomogeneous equation
We can now move to the well-posedness theory for the inhomogeneous equation,
for which the next theorem is the main result.
###### Theorem 3.2.
Let $H:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be a continuous function bounded by an
exponential function, i.e. $|H(x)|\leq
Ce^{-Ax}\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}$ for some $C,A>0$. Then there
exists a unique bounded solution to the equation
$\begin{cases}u(x)-\int_{0}^{\infty}dy\;K\left(x-y\right)u(y)=H(x)&x>0,\\\
u(x)=0&x<0.\end{cases}$ (3.2)
Moreover, $u$ is continuous on $(0,\infty)$.
###### Proof.
The assumption on the exponential decay of $H$ yields $H\in
L^{1}\left(\mathbb{R}\right)\cap L^{2}\left(\mathbb{R}\right)\cap
L^{\infty}\left(\mathbb{R}\right)$. In order to find a bounded solution for
(3.2) we will follow several steps. We will look for functions $\tilde{u}$ and
$v$ solutions of the following equations
$\tilde{u}(x)-\int_{-\infty}^{\infty}K(x-y)\tilde{u}(y)\;dy=\overline{H}(x):=H(x)-H(-x)\;\;\;\;\;\;\;\;x\in\mathbb{R}$
and
$\begin{cases}v(x)-\int_{-\infty}^{\infty}K(x-y)v(y)\;dy=0,&x>0\\\
v(x)=-\tilde{u}(x)&x<0.\end{cases}$ (3.3)
Then $u=\tilde{u}+v$ will be the desired solution.
Step 1: Construction of $\tilde{u}$.
We can construct the solution $\tilde{u}$ via a Fourier method. First of all
we notice that any affine function is a solution to the homogeneous equation
in the whole space $\mathbb{R}$. This is because
$\int_{-\infty}^{\infty}K(x)\;dx=1$ and $\int_{-\infty}^{\infty}xK(x)\;dx=0$.
Since by assumption $H\in L^{{2}}\left(\mathbb{R}\right)$, also
$\overline{H}\in L^{{2}}\left(\mathbb{R}\right)$. We define for an integrable
function $f$ the $k$-th moment as
$m_{k}\left(f\right)=\int_{-\infty}^{\infty}x^{k}f(x)\;dx$, assuming it
exists. Then clearly by construction $m_{0}\left(\overline{H}\right)=0$, while
$m_{1}\left(\overline{H}\right)=2\int_{0}^{\infty}xH(x)\;dx\geq 0$. Moreover,
since $\overline{H}$ has exponential decay, all moments
$m_{k}\left(\overline{H}\right)<\infty$ are finite.
We define also the function
$F(x)=\overline{\mathcal{L}}\left(\frac{3}{2}\text{sgn}\right)(x)$. It can be
computed that
$F(x)=\frac{3}{2}\left(\text{sgn}(x)-\int_{-\infty}^{\infty}K(x-y)\text{sgn}(y)\;dy\right)=\frac{3}{2}\left(\text{sgn}(x)-\int_{-x}^{x}K(y)\;dy\right).$
It is not difficult to see that $F(0)=0$, $\lim\limits_{|x|\to\infty}F(x)=0$
and that $F$ is a piecewise continuous function with a jump discontinuity at
$0$. Therefore $F(x)$ is bounded. We proceed with the construction of
$\tilde{u}$.
We can write it as $\tilde{u}=u^{(1)}+u^{(2)}+a+bx$, where
$u^{(1)}(x)=m_{1}\left(\overline{H}\right)\frac{3}{2}\text{sgn}(x)$ solves the
equation
$\overline{\mathcal{L}}\left(u^{(1)}\right)(x)=m_{1}\left(\overline{H}\right)F(x)\;\;\;\;\;\;\;\;\;x\in\mathbb{R}$
and $u^{(2)}$ solves
$\overline{\mathcal{L}}\left(u^{(2)}\right)(x)=\overline{H}(x)-m_{1}\left(\overline{H}\right)F(x)\;\;\;\;\;\;\;\;\;x\in\mathbb{R}.$
(3.4)
Applying now the Fourier transform to the equation (3.4), recalling the
convolution rule and the Fourier transforms of the kernel $K$ and of the sgn
function, we get, first in the distributional sense,
$\hat{u}^{(2)}(s)\left(\frac{s-\arctan(s)}{s}\right)=\mathcal{F}(\overline{H})(s)+\frac{3m_{1}\left(\overline{H}\right)}{\sqrt{2\pi}}\frac{i}{s}\frac{s-\arctan(s)}{s}.$
(3.5)
The Fourier transform of $\overline{H}$ is $C^{\infty}$, since $\overline{H}$
has exponential decay and therefore all its moments are finite. Hence there
exists a function $\tilde{H}$ with $\tilde{H}(0)=\tilde{H}^{\prime}(0)=0$ such
that
$\mathcal{F}(\overline{H})(s)=-\frac{i}{\sqrt{2\pi}}m_{1}\left(\overline{H}\right)s+\tilde{H}(s)$,
since $m_{0}\left(\overline{H}\right)=0$ and by definition
$\mathcal{F}(\overline{H})^{\prime}(s)\big{|}_{s=0}=-\frac{i}{\sqrt{2\pi}}m_{1}\left(\overline{H}\right)$.
We can therefore first find $u^{(2)}$ formally, analyzing its Fourier transform:
$\begin{split}\hat{u}^{(2)}(s)=&\frac{s}{s-\arctan(s)}\mathcal{F}(\overline{H})(s)+\frac{3m_{1}\left(\overline{H}\right)}{\sqrt{2\pi}}\frac{i}{s}\\\
=&-\frac{is^{2}}{s-\arctan(s)}\frac{m_{1}\left(\overline{H}\right)}{\sqrt{2\pi}}+\frac{3m_{1}\left(\overline{H}\right)}{\sqrt{2\pi}}\frac{i}{s}+\tilde{H}(s)\frac{s}{s-\arctan(s)}\\\
=&\mathbb{H}(s).\end{split}$ (3.6)
It is important to notice that $\lim\limits_{s\to
0}\left(\frac{s^{2}}{s-\arctan(s)}-\frac{3}{s}\right)=0$, since
$\frac{s}{s-\arctan(s)}=\frac{3}{s^{2}}+\frac{9}{5}+O(s^{2})$ near zero. Using
L'Hôpital's rule we see also that $\lim\limits_{s\to
0}\tilde{H}(s)\frac{s}{s-\arctan(s)}$ is finite. On the other hand,
$\frac{s}{s-\arctan(s)}$ is bounded for $|s|>1$. Since
$\mathcal{F}(\overline{H})(s)$ and $\frac{1}{s}$ are both square integrable
for $|s|>1$, and since $\mathbb{H}$ is bounded near $0$, we conclude that
$\mathbb{H}\in L^{{2}}\left(\mathbb{R}\right)$. Therefore also
$\hat{u}^{(2)}$, defined in (3.6), is square integrable. We can hence invert
it:
$u^{(2)}(x):=\mathcal{F}^{-1}\left(\mathbb{H}\right)(x)\in
L^{{2}}\left(\mathbb{R}\right).$
Since this function solves (3.5) not only in the distributional sense but also
pointwise almost everywhere, we can conclude rigorously that the function
defined in (3.6) is indeed the desired $u^{(2)}$ solving (3.4). Moreover,
$u^{(2)}=K*u^{(2)}+\overline{H}-m_{1}\left(\overline{H}\right)F$, and since
both $K$ and $u^{(2)}$ itself are square integrable and both $\overline{H}$
and $F$ are bounded, also $u^{(2)}$ is bounded. We therefore conclude this
step by defining
$\tilde{u}(x)=\frac{3}{2}m_{1}\left(\overline{H}\right)\text{sgn}(x)+a+bx+u^{(2)}(x).$
(3.7)
Step 2: Construction of $v$.
We recall that the equation $v$ shall solve is (3.3). As we found out in the
first step,
$\tilde{u}=\frac{3}{2}m_{1}\left(\overline{H}\right)\text{sgn}(x)+a+bx+u^{(2)}(x)$.
As we already pointed out, affine functions are always solutions of the
homogeneous equation in the whole space $\mathbb{R}$. Therefore, we shall look
for a function of the form
$v(x)=\frac{3}{2}m_{1}\left(\overline{H}\right)-a-bx+v^{(2)}(x)$ (3.8)
where $v^{(2)}$ solves, similarly as above,
$\begin{cases}v^{(2)}(x)-\int_{-\infty}^{\infty}K(x-y)v^{(2)}(y)\;dy=0&x>0,\\\
v^{(2)}(x)=-u^{(2)}(x)&x<0.\end{cases}$ (3.9)
We proceed now iteratively constructing the desired solution. We call $B>0$
the constant such that $\left\Arrowvert u^{(2)}\right\Arrowvert_{\infty}\leq
B$ and we define $\overline{v}=B$ and $\underline{v}=-B$. Inductively we
define $v_{0}:=\underline{v}$ and for $k\geq 1$ we set
$v_{k}(x)=\begin{cases}-u^{(2)}(x)&x<0,\\\
\int_{-\infty}^{\infty}K\left(x-y\right)v_{k-1}(y)\;dy&x>0.\\\ \end{cases}$
We claim that $\underline{v}=v_{0}\leq v_{1}\leq v_{2}\leq...\leq v_{k}\leq
v_{k+1}\leq...$ and that $v_{k}\leq\overline{v}$ for all $k\in\mathbb{N}$.
Clearly for $k=0$ both statements hold. On the one hand, since
$\int_{-\infty}^{\infty}K\left(x-y\right)v_{0}(y)\;dy=-B$ we see that
$v_{1}(x)-v_{0}(x)=\begin{cases}-u^{(2)}(x)+B\geq 0&x<0,\\\ 0&x>0,\\\
\end{cases}$
and on the other hand by definition we have $\overline{v}-\underline{v}=2B\geq
0$. We see also that $\overline{v}-v_{1}\geq 0$; indeed
$\overline{v}(x)-v_{1}(x)=\begin{cases}B+u^{(2)}(x)\geq 0&x<0,\\\ 2B&x>0.\\\
\end{cases}$
We now prove inductively that $v_{k}\geq v_{k-1}$ and $\overline{v}\geq
v_{k}$. Hence, we assume that these inequalities are satisfied for $k$ and we
prove them for $k+1$. Indeed, this just follows from the identities
$v_{k+1}(x)-v_{k}(x)=\begin{cases}0&x<0,\\\
\int_{-\infty}^{\infty}K\left(x-y\right)\left(v_{k}(y)-v_{k-1}(y)\right)\;dy\geq
0&x>0,\\\ \end{cases}$
$\overline{v}(x)-v_{k+1}(x)=\begin{cases}B+u^{(2)}(x)\geq 0&x<0,\\\
\int_{-\infty}^{\infty}K\left(x-y\right)\left(B-v_{k}(y)\right)\;dy\geq
0&x>0,\\\ \end{cases}$
where we used again that the integral of the kernel $K$ over the whole line is
$1$. Therefore the sequence $v_{k}(x)$ is increasing and bounded, which means
that it has a pointwise limit. By the dominated convergence theorem, the limit
$v^{(2)}(x):=\lim\limits_{k\to\infty}v_{k}(x)$
solves the equation (3.9), and it is bounded by construction.
Step 3: Properties of $u$.
Now we are ready to write down the whole solution. As we remarked at the
beginning, $u=\tilde{u}+v$, where $\tilde{u}$ solves the whole-line equation
from Step 1 and $v$ solves (3.3) as in Step 2. Therefore, by (3.8) and (3.7),
$u(x)=\begin{cases}6m_{1}\left(H\right)+u^{(2)}(x)+v^{(2)}(x)&x>0,\\\
0&x<0,\end{cases}$
solves the original problem (3.2), and it is bounded by construction (note
that $3m_{1}(\overline{H})=6m_{1}(H)$). Moreover, since $K$ is integrable and
$H$ is continuous on $[0,\infty)$, also $u=K*u+H$ is continuous on
$[0,\infty)$.
Step 4: Uniqueness.
Let us assume that $u_{1}$ and $u_{2}$ are two bounded solutions to the
problem (3.2). Then $u_{1}-u_{2}$ is a bounded continuous solution to the
homogeneous problem (3.1). Therefore, by Theorem 3.1, $u_{1}-u_{2}=0$. Hence
there exists a unique bounded solution $u$ to the inhomogeneous problem (3.2).
This concludes the proof. ∎
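For illustration, the constructive scheme of the proof can be mimicked numerically. The following sketch (our own illustration, not part of the proof) discretizes the half-line equation (3.10) on a truncated grid $[0,L]$, assigns the diagonal cell its exact kernel mass computed from Proposition 2.2, and solves the resulting linear system for the hypothetical isotropic source $G(x)=2\pi E_{2}(x)$ from the example at the end of Section 2.2 (with $\int_{0}^{\infty}g_{\nu}\,d\nu=1$); the monotone iteration of Step 2 converges to the same discrete profile.

```python
import numpy as np
from scipy.special import exp1, expn

L, n = 20.0, 2000
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
G = 2 * np.pi * expn(2, x)                        # isotropic source 2*pi*E_2(x)

# kernel matrix K(x_i - x_j) * dx, with the singular diagonal replaced by
# the exact mass of K over one grid cell: 1 - exp(-a) + a*E_1(a), a = dx/2
D = np.abs(x[:, None] - x[None, :])
Kmat = 0.5 * exp1(np.where(D > 0, D, 1.0)) * dx
np.fill_diagonal(Kmat, 0.0)
a = dx / 2
Kmat += np.eye(n) * (a * exp1(a) + 1.0 - np.exp(-a))

u = np.linalg.solve(np.eye(n) - Kmat, G)          # (I - K) u = G
print(u[0], u[n // 2])    # boundary value and the plateau reached in the bulk
```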
###### Corollary 3.1.
Let $p\in\partial\Omega$ and $G_{p}(x)$ as defined in (2.8). Let
$g_{\nu}(n)\geq 0$ and assume $\int_{0}^{\infty}d\nu\;g_{\nu}(n)\in
L^{\infty}\left(\mathbb{S}^{2}\right)$. Then there exists a unique bounded
solution to the equation
$\begin{cases}u(x)-\int_{0}^{\infty}dy\;K\left(x-y\right)u(y)=G_{p}(x)&x>0,\\\
u(x)=0&x<0.\end{cases}$ (3.10)
Moreover, $u$ is continuous on $(0,\infty)$.
###### Proof.
By assumption $G_{p}$ is continuous for $x>0$ and $|G_{p}(x)|\leq\Arrowvert
g\Arrowvert_{1}e^{-x}\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}$. Hence we can apply
Theorem 3.2. ∎
It is also possible to show that the bounded solution $u$ is non-negative.
###### Lemma 3.2.
Let $u$ be the unique bounded solution to (3.10). Then $u(x)\geq 0$ for all
$x\in\mathbb{R}$.
###### Proof.
The proof is very similar to the proof of Theorem 3.1. We consider the
supersolution
$\overline{u}(x)=\begin{cases}1+x&x\geq 0,\\\ 0&x<0.\end{cases}$
As we have seen before, $u=K*u+G$ is continuous in $[0,\infty)$. Moreover,
since $G\geq 0$ for $x\geq 0$, $u$ is a supersolution too. Let now
$\varepsilon>0$ be arbitrary and consider the supersolution
$\varepsilon\overline{u}+u$. It is continuous in $[0,\infty)$ and, since $u$
is bounded, it converges to infinity as $x\to\infty$. Therefore Lemma 3.1
implies that
$\inf_{x\in[0,\infty)}\left(\varepsilon\overline{u}(x)+u(x)\right)\geq 0.$
Hence $u\geq-\varepsilon\overline{u}$, and since $\varepsilon>0$ was arbitrary
we conclude $u\geq 0$. ∎
###### Remark.
Theorem 3.2 can also be proved using the Wiener-Hopf solution formula for the
problem (2.7) given in [17]. In this way one obtains an explicit formula, which
not only ensures the well-posedness of the planar problem we are studying but
also directly shows the existence of a limit of the solution $u$ as
$x\to\infty$. However, the Wiener-Hopf method produces a complicated formula
whose analysis requires careful complex-variable arguments. We have preferred
the soft approach above, which in particular allows us to prove some relevant
properties of the solution, such as its positivity.
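The soft construction above is also easy to reproduce numerically. The following minimal sketch assumes, purely for illustration, the Milne-type normalization $K(x)=\frac{1}{2}E_{1}(|x|)$ (consistent with unit total mass) and the source $G(x)=e^{-x}$; it discretizes the half-line problem with exact cell integrals of $K$ and checks the monotonicity of the iteration from Step 2 as well as the boundedness and positivity of the solution.

```python
# Illustrative numerics for u - K*u = G on x > 0, u = 0 on x < 0.
# Assumptions (for illustration only): K(x) = E_1(|x|)/2, G(x) = exp(-x).
import numpy as np
from scipy.special import exp1

def Kcum(t):
    """C(t) = int_{-inf}^t K, using int_a^inf K = e^{-a}/2 - a*K(a), a >= 0."""
    t = np.asarray(t, dtype=float)
    a = np.maximum(np.abs(t), 1e-300)
    tail = 0.5 * np.exp(-a) - 0.5 * a * exp1(a)
    return np.where(t >= 0.0, 1.0 - tail, tail)

L, N = 60.0, 1200
h = L / N
x = (np.arange(N) + 0.5) * h                   # cell midpoints in (0, L)
D = x[:, None] - x[None, :]
A = Kcum(D + h / 2) - Kcum(D - h / 2)          # exact cell integrals of K

G = np.exp(-x)
v, mono = np.zeros(N), True
for _ in range(50):                            # v_{k+1} = K*v_k + G, v_0 = 0
    v_new = A @ v + G
    mono &= bool(np.all(v_new >= v - 1e-12))   # monotone up to rounding
    v = v_new
u = np.linalg.solve(np.eye(N) - A, G)          # the bounded solution itself
print("monotone:", mono,
      " 0 <= v_k <= u:", bool(np.all((v >= 0) & (v <= u + 1e-9))),
      " sup u ~", u.max())
```

The cumulative-kernel discretization keeps every row sum of the matrix below $1$, mirroring the fact that $\int_{\mathbb{R}}K=1$ with some mass escaping to $\\{y<0\\}$.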
### 3.3 Asymptotic behavior of the bounded solution of the inhomogeneous
equation
We have shown that the equation for the boundary value in the Grey
approximation has a unique bounded solution, which is positive whenever $G>0$.
As anticipated at the beginning of this section, we now study the limit of the
solution $u(x)$ as $x\to\infty$. We will show that this limit exists and is
uniquely characterized by $g_{\nu}(n)$ and $N$. To this end we first prove
that the function $u$ is uniformly continuous.
###### Lemma 3.3.
Let $u$ be the unique bounded solution to the problem (2.7). Then $u$ is
uniformly continuous on $[0,\infty)$ and satisfies, for $x,y\in[0,\infty)$,
$\begin{split}\left|u(x)-u(y)\right|\leq&\left|G(x)-G(y)\right|\\\
+&\left\Arrowvert
u\right\Arrowvert_{\infty}\left[\frac{\left|e^{-x}-e^{-y}\right|}{2}+2\left(1-e^{-\frac{\left|x-y\right|}{2}}\right)+4\left|\frac{y-x}{2}\right|K\left(\frac{y-x}{2}\right)+\left|xK(x)-yK(y)\right|\right].\end{split}$
(3.11)
###### Proof.
This is a consequence of the uniform continuity of $G$ and $xK(x)$. Clearly,
since $u$ solves the problem (3.10), we have the estimate
$\left|u(x)-u(y)\right|\leq\left|G(x)-G(y)\right|+\int_{0}^{\infty}\left|K\left(\eta-x\right)-K\left(\eta-y\right)\right|u(\eta)\;d\eta.$
(3.12)
Since $G$ is continuous on the compact interval $[0,1]$, and hence uniformly
continuous there, and since $G$ is Lipschitz continuous on $[1,\infty)$, $G$
is uniformly continuous on $[0,\infty)$. The Lipschitz continuity holds since
$\sup_{x\geq 1}\left|G^{\prime}(x)\right|\leq\int_{0}^{\infty}d\nu\int_{n\cdot
N<0}dn\;g_{\nu}(n)\frac{e^{-\frac{1}{|n\cdot N|}}}{|n\cdot N|}<\infty,$
where the finiteness is due to the fact that $\lim\limits_{|n\cdot N|\to
0}\frac{e^{-\frac{1}{|n\cdot N|}}}{|n\cdot N|}=0$.
For the integral term in (3.12) we assume that $x<y$. Then we can calculate
using the fact that for positive arguments the kernel $K$ is decreasing
$\begin{split}\int_{0}^{\infty}&\left|K\left(\eta-x\right)-K\left(\eta-y\right)\right|u(\eta)\;d\eta\\\
=&\int_{0}^{\frac{x+y}{2}}\left(K\left(\eta-x\right)-K\left(\eta-y\right)\right)u(\eta)\;d\eta+\int_{\frac{x+y}{2}}^{\infty}\left(K\left(\eta-y\right)-K\left(\eta-x\right)\right)u(\eta)\;d\eta\\\
\leq&\left\Arrowvert
u\right\Arrowvert_{\infty}\left[\int_{0}^{\frac{x+y}{2}}\left(K\left(\eta-x\right)-K\left(\eta-y\right)\right)\;d\eta+\int_{\frac{x+y}{2}}^{\infty}\left(K\left(\eta-y\right)-K\left(\eta-x\right)\right)\;d\eta\right]\\\
\end{split}$
We can compute the last two integrals explicitly using the result of
Proposition 2.2; indeed, by a change of variables,
$\begin{split}\int_{0}^{\infty}&\left|K\left(\eta-x\right)-K\left(\eta-y\right)\right|u(\eta)\;d\eta\\\
\leq&\left\Arrowvert
u\right\Arrowvert_{\infty}\left[\int_{-x}^{\frac{y-x}{2}}K\left(\eta\right)\;d\eta-\int_{-y}^{\frac{x-y}{2}}K\left(\eta\right)\;d\eta+\int_{\frac{x-y}{2}}^{\infty}K\left(\eta\right)\;d\eta-\int_{\frac{y-x}{2}}^{\infty}K\left(\eta\right)\;d\eta\right]\\\
=&\left\Arrowvert
u\right\Arrowvert_{\infty}\left[\frac{e^{-y}-e^{-x}}{2}+2\left(1-e^{\frac{x-y}{2}}\right)+4\frac{y-x}{2}K\left(\frac{y-x}{2}\right)+xK(x)-yK(y)\right].\end{split}$
Recalling that $x<y$ we get the estimate (3.11). From the well-known estimates
$\left|e^{-x}-e^{-y}\right|\leq\left|x-y\right|$ and
$\left|1-e^{-\frac{\left|x-y\right|}{2}}\right|\leq\frac{\left|x-y\right|}{2}$ we see that
we only need to consider the function $f(x)=xK(x)$. Since $f(0)=0$ and $f$ is
continuous, $f$ is uniformly continuous on $[0,1]$; on the other hand, $f$ is
Lipschitz continuous on $[1,\infty)$. This is because
$\sup\limits_{x\geq 1}\left|f^{\prime}(x)\right|=\sup\limits_{x\geq
1}\left|K(x)-\frac{e^{-x}}{2}\right|\leq\frac{1}{2e}+K(1)<\infty.$
Therefore $f$ is uniformly continuous on $[0,\infty)$. By the continuity of $f$
at $0$ we also know that given an $\varepsilon>0$ there exists some $\delta$
such that $\frac{y-x}{2}K\left(\frac{y-x}{2}\right)<\varepsilon$ for all
$\left|x-y\right|<\delta$. Hence, we conclude that $u$ is uniformly continuous.
∎
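The kernel identities used in this proof, in particular the explicit tail integral from Proposition 2.2, can be verified numerically. The sketch below assumes the normalization $K(x)=\frac{1}{2}E_{1}(|x|)$ (an assumption consistent with all the displayed formulas); its last lines also check the Fourier symbol $1-\sqrt{2\pi}\hat{K}(\xi)=\frac{\xi-\arctan(\xi)}{\xi}$, which is recalled in the next proof.

```python
# Sanity checks for the kernel, assuming K(x) = E_1(|x|)/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

print(quad(exp1, 0, np.inf)[0])                      # int_R K = int_0^inf E_1 = 1

t = 0.7                                              # tail identity:
tail = quad(lambda z: 0.5 * exp1(z), t, np.inf)[0]   # int_t^inf K dz
print(tail, "vs", 0.5 * np.exp(-t) - 0.5 * t * exp1(t))

z = np.linspace(0.01, 10.0, 500)                     # |x| K(x) <= e^{-|x|}/2
print(bool(np.all(z * 0.5 * exp1(z) <= 0.5 * np.exp(-z))))

xi = 1.3                                             # cosine transform of K
Khat = quad(lambda s: exp1(s) * np.cos(xi * s), 0, np.inf)[0]
print(1.0 - Khat, "vs", (xi - np.arctan(xi)) / xi)   # = F(xi), cf. below
```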
We now want to show that the limit $\lim\limits_{y\to\infty}u(y)$ exists. To
this end we again use Fourier methods.
###### Theorem 3.3.
Let $u$ be the unique bounded solution to the problem (3.10). Then
$\lim\limits_{x\to\infty}u(x)$ exists and it is uniquely determined by $G$ and
$u$ itself. Moreover, the limit is positive if $\left\\{n\in\mathbb{S}:n\cdot
N<0\text{ and }\int_{0}^{\infty}d\nu\overline{g}_{\nu}(n)\not\equiv
0\right\\}$ is not a zero measure set.
###### Proof.
Since $u$ is the unique bounded solution, $u$ solves for all $x\in\mathbb{R}$
$u(x)-\int_{-\infty}^{\infty}K(y-x)u(y)\;dy=G(x)\;\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}-\int_{0}^{\infty}K(y-x)u(y)\;dy\;\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\equiv
W(x).$ (3.13)
Indeed, (3.13) is equivalent to (3.10). This can be seen easily: for $x<0$ the
function $u$ solves
$u(x)-\int_{-\infty}^{0}K(y-x)u(y)\;dy=0$
and since $u\equiv 0$ for $x<0$ is a possible solution, by uniqueness it is the
only one. Not only is $W\in
L^{1}\left(\mathbb{R}\right)\cap L^{2}\left(\mathbb{R}\right)$, but $W$ also
has all moments finite. This follows from the analogous property of $G$
(cf. Step 1 in Theorem 3.2) as well as from the inequality
$0\leq\int_{0}^{\infty}K(y-x)u(y)\;dy\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\leq\Arrowvert
u\Arrowvert_{\infty}\;\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\left(\frac{e^{-|x|}}{2}-|x|K(x)\right)$.
Notice that $|x|K(x)\leq\frac{e^{-|x|}}{2}$. Hence, finite moments and the
Riemann-Lebesgue theorem imply that $W$ has a Fourier transform $\hat{W}\in
C_{0}\left(\mathbb{R}\right)\cap C^{\infty}\left(\mathbb{R}\right)\cap
L^{2}\left(\mathbb{R}\right)$. Moreover, looking at the left hand side of
(3.13) we recall as in [28] that in distributional sense for all
$\phi\in\mathcal{S}\left(\mathbb{R}\right)$
$\langle\hat{u}-\mathcal{F}\left(u*K\right),\phi\rangle:=\langle
u-u*K,\hat{\phi}\rangle=\langle
u,\mathcal{F}\left((1-\sqrt{2\pi}\hat{K})\phi\right)\rangle,$
where the last equality is due to an elementary calculation involving the
convolution and we define $\langle f,g\rangle=\int_{\mathbb{R}}f(x)g(x)\;dx$.
We recall also that
$1-\sqrt{2\pi}\hat{K}(\xi)=\frac{\xi-\arctan(\xi)}{\xi}:=F(\xi)$. Hence, for
all $\phi\in\mathcal{S}\left(\mathbb{R}\right)$ we have
$\langle u,\mathcal{F}(\phi F)\rangle=\langle\hat{W},\phi\rangle.$ (3.14)
Now we consider for $\varepsilon>0$ the sequence of standard mollifiers
$\phi_{\varepsilon}(\xi):=\frac{1}{\varepsilon}\phi\left(\frac{\xi}{\varepsilon}\right)\in
C_{c}^{\infty}\left(\mathbb{R}\right)\subset\mathcal{S}\left(\mathbb{R}\right)$
such that in distributional sense $\phi_{\varepsilon}\rightharpoonup\delta$.
The smoothness of $\hat{W}$ implies $\langle\hat{W},\phi_{\varepsilon}\rangle\to\hat{W}(0)$
as $\varepsilon\to 0$. Our first aim is to show that $\hat{W}(0)$ is zero.
To this end we study the left hand side of (3.14). We calculate
$\begin{split}\langle
u,&\mathcal{F}(\phi_{\varepsilon}F)\rangle=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}dx\;u(x)\int_{\mathbb{R}}d\xi\;\phi_{\varepsilon}(\xi)F(\xi)e^{-i\xi
x}\\\
=&\frac{1}{\sqrt{2\pi}}\int_{0}^{1}dx\;u(x)\int_{\mathbb{R}}d\xi\;\phi_{\varepsilon}(\xi)F(\xi)e^{-i\xi
x}-\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2}}\int_{\mathbb{R}}d\xi\;\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}e^{-i\xi
x},\end{split}$
where for the last equality we integrated twice by parts in $\xi$. By a change
of coordinates and the dominated convergence theorem, since $F(0)=0$ and
$|F(\varepsilon\xi)\phi(\xi)|\leq|\phi(\xi)|$ we see for the first term as
$\varepsilon\to 0$
$\left|\frac{1}{\sqrt{2\pi}}\int_{0}^{1}dx\;u(x)\int_{\mathbb{R}}d\xi\;\phi_{\varepsilon}(\xi)F(\xi)e^{-i\xi
x}\right|\leq\frac{1}{\sqrt{2\pi}}\int_{0}^{1}dx\;u(x)\int_{\mathbb{R}}d\xi\;|F(\varepsilon\xi)\phi(\xi)|\to
0.$
Thus, we shall consider only the second term. We use the following well-known
estimate $\left|e^{-i\xi x}-1\right|\leq 2|\xi|^{\delta}|x|^{\delta}$ for
$0<\delta<1$ and $x\in\mathbb{R}$. Then using
$\int_{\mathbb{R}}\left(\phi_{\varepsilon}F\right)^{\prime\prime}=0$
$\begin{split}\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2}}\int_{\mathbb{R}}d\xi\;\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}e^{-i\xi
x}=&\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2}}\int_{\mathbb{R}}d\xi\;\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}\left(e^{-i\xi
x}-1\right),\end{split}$
and hence
$\begin{split}\left|\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2}}\int_{\mathbb{R}}d\xi\;\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}e^{-i\xi
x}\right|\leq\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2-\delta}}\int_{\mathbb{R}}d\xi\;\left|\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}\right|2|\xi|^{\delta}.\end{split}$
Now we notice that $\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2-\delta}}<\infty$,
and that as $\xi\to 0$ we have $F(\xi)\simeq\frac{\xi^{2}}{3}$,
$F^{\prime}(\xi)\simeq\frac{2}{3}\xi$ and
$F^{\prime\prime}(\xi)\simeq\frac{2}{3}$. Hence, with a change of variables we
see that
$\begin{split}\int_{\mathbb{R}}d\xi\;&\left|\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}\right||\xi|^{\delta}\\\
\leq&\int_{\mathbb{R}}d\xi\;\left[|\phi(\xi)||F^{\prime\prime}(\varepsilon\xi)|\varepsilon^{\delta}|\xi|^{\delta}+2|\phi^{\prime}(\xi)|\frac{|F^{\prime}(\varepsilon\xi)||\xi|^{\delta}}{\varepsilon^{1-\delta}}+|\phi^{\prime\prime}(\xi)|\frac{|F(\varepsilon\xi)||\xi|^{\delta}}{\varepsilon^{2-\delta}}\right].\end{split}$
With the consideration above about $F$ and since $\phi\in
C_{c}^{\infty}\left(\mathbb{R}\right)$ we see that there exists a constant
$C=2\Arrowvert\phi\Arrowvert_{C_{c}^{\infty}\left(\mathbb{R}\right)}\left(\max\limits_{\text{supp}\phi}|\xi|\right)^{2+\delta}<\infty$
such that
$|\phi(\xi)||F^{\prime\prime}(\varepsilon\xi)|\varepsilon^{\delta}|\xi|^{\delta}+|\phi^{\prime}(\xi)|\frac{|F^{\prime}(\varepsilon\xi)||\xi|^{\delta}}{\varepsilon^{1-\delta}}+|\phi^{\prime\prime}(\xi)|\frac{|F(\varepsilon\xi)||\xi|^{\delta}}{\varepsilon^{2-\delta}}\leq
C\varepsilon^{\delta}$
for any $\xi\in\text{supp}(\phi)$. Thus, again with the dominated convergence
theorem we conclude
$\left|\frac{1}{\sqrt{2\pi}}\int_{1}^{\infty}dx\;\frac{u(x)}{x^{2}}\int_{\mathbb{R}}d\xi\;\left(\phi_{\varepsilon}(\xi)F(\xi)\right)^{\prime\prime}e^{-i\xi
x}\right|\to 0,$
which implies the first claim, namely $\hat{W}(0)=0$.
As a next step we prove that the limit $\lim\limits_{x\to\infty}u(x)$ exists.
First of all we know that in distributional sense $\hat{u}$ solves the
equation
$F\hat{u}\overset{\mathcal{S}^{\prime}}{=}\hat{W}.$ (3.15)
Given any distributional solution $\hat{u}$ to (3.15) also
$\hat{u}+\hat{u}_{h}$ is a solution, where $\hat{u}_{h}$ is the homogeneous
solution to $F\hat{u}_{h}\overset{\mathcal{S}^{\prime}}{=}0$. Let us consider
the tempered distribution given by $\hat{u}_{h}$ and let
$\varphi\in\mathcal{S}(\mathbb{R})$ be any test function with support away from
zero, i.e. $\text{supp}(\varphi)\subset\mathbb{R}\setminus\\{0\\}$. Since
$F(\xi)=0$ if and only if $\xi=0$ and since $F$ is bounded, the function
$\frac{\varphi}{F}\in\mathcal{S}\left(\mathbb{R}\right)$. Hence,
$\int_{\mathbb{R}}\hat{u}_{h}\varphi=0$. This implies (see [29]) that
$\hat{u}_{h}\overset{\mathcal{S}^{\prime}}{=}\sum\limits_{0\leq\alpha<m}c_{\alpha}(D^{\alpha}\delta)$,
for constants $c_{\alpha}$ and a suitable $m\in\mathbb{N}$. Since
$F\,(D^{\alpha}\delta)\not\equiv 0$ for any $\alpha\geq 2$, the corresponding
coefficients must vanish and we conclude
$\hat{u}_{h}=c_{0}\delta+c_{1}\delta^{\prime}$
for suitable constants $c_{0},c_{1}$. Using the smoothness of $\hat{W}$ we can
write $\hat{W}(\xi)=\hat{W}^{\prime}(0)\xi+H(\xi)$ where
$\hat{W}^{\prime}(0)=\frac{m_{1}(W)}{\sqrt{2\pi}i}$ and $H\in
C^{\infty}\left(\mathbb{R}\right)$ with $H(0)=H^{\prime}(0)=0$. Let us
consider the behavior of $F$
$F(\xi)\simeq\begin{cases}\frac{\xi^{2}}{3}-\frac{\xi^{4}}{5}+\mathcal{O}(\xi^{6})&\xi\to 0,\\\
1-\frac{\pi}{2\xi}+\mathcal{O}\left(\frac{1}{\xi^{2}}\right)&\xi\to\infty.\end{cases}$
(3.16)
Hence,
$f(\xi):=\hat{W}(\xi)-\frac{3m_{1}(W)}{\sqrt{2\pi}i}\frac{F(\xi)}{\xi}\in
L^{2}\left(\mathbb{R}\right)$ (3.17)
and it also satisfies
$f(\xi)\simeq\frac{H^{\prime\prime}(0)}{2}\xi^{2}+\mathcal{O}(\xi^{3})\;\;\;\;\;\;\text{ as }\xi\to
0.$ (3.18)
By the boundedness of $F$ and given its behavior as in (3.16) we conclude that
the function $\hat{h}:=\frac{f}{F}\in L^{2}\left(\mathbb{R}\right)$, in
particular $\hat{h}$ is well-defined at zero. It is easy to see that $\hat{u}$
solves
$F(\xi)\hat{u}(\xi)\overset{\mathcal{S}^{\prime}}{=}\frac{3m_{1}(W)}{\sqrt{2\pi}i}\frac{F(\xi)}{\xi}+f(\xi).$
(3.19)
Therefore, since $\hat{h}\in L^{2}(\mathbb{R})$ we have that
$\hat{u}(\xi)=\frac{3m_{1}(W)}{\sqrt{2\pi}i}PV\left(\frac{1}{\xi}\right)+\hat{h}(\xi)$
is a solution to (3.19). We denote by $PV(\cdot)$ the principal value. Thus,
adding the homogeneous solution we conclude
$\hat{u}(\xi)\overset{\mathcal{S}^{\prime}}{=}c_{0}\delta+c_{1}\delta^{\prime}+\frac{3}{2i}m_{1}(W)\sqrt{\frac{2}{\pi}}PV\left(\frac{1}{\xi}\right)+\hat{h}(\xi),$
which yields
$u(x)\overset{\mathcal{S}^{\prime}}{=}\frac{c_{0}}{\sqrt{2\pi}}-\frac{c_{1}i}{\sqrt{\pi}}x+\frac{3}{2}m_{1}(W)\text{sgn}(x)+h(x),$
where $h\in L^{2}(\mathbb{R})$ is the inverse transform of $\hat{h}$. Since
$u$ is bounded and satisfies $u(x)=0$ for all $x<0$, we have in distributional
sense
$u(x)=\frac{3}{2}m_{1}(W)+\frac{3}{2}m_{1}(W)\text{sgn}(x)+h(x).$
Hence for $x>0$ also $u(x)=3m_{1}(W)+h(x)$ pointwise. Lemma 3.3 implies that
$h$ is uniformly continuous on the positive real line, and since $h\in
L^{2}(\mathbb{R})$ this yields $\lim\limits_{x\to\infty}h(x)=0$. Therefore the
limit of $u$ as $x\to\infty$ exists and is uniquely determined by
$g_{\nu}(n)$ and $N$. This is true since
$\lim\limits_{y\to\infty}u(y)=3m_{1}(W)=3\left(\int_{0}^{\infty}dx\;xG(x)-\int_{-\infty}^{0}dx\;x\int_{0}^{\infty}dy\;K(y-x)u(y)\right)\geq
0.$
Also the positivity of the limit is guaranteed when
$\left\\{n\in\mathbb{S}:n\cdot N<0\text{ and
}\int_{0}^{\infty}d\nu\overline{g}_{\nu}(n)\not\equiv 0\right\\}$ is not a
zero measure set. ∎
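The identity $u_{\infty}=3m_{1}(W)$ lends itself to a crude numerical test. The sketch below again assumes, purely for illustration, $K(x)=\frac{1}{2}E_{1}(|x|)$ and $G(x)=e^{-x}$; the half-line is truncated at $x=L$ with the constant-tail closure $u(y)\approx u(L)$ for $y>L$ (justified by the exponential convergence proved in Lemma 3.4 below), so the two printed values agree only up to discretization and truncation errors.

```python
# Crude numerical test of u_infinity = 3 m_1(W); all choices illustrative.
import numpy as np
from scipy.special import exp1

def Kcum(t):
    """C(t) = int_{-inf}^t K for K(x) = E_1(|x|)/2."""
    t = np.asarray(t, dtype=float)
    a = np.maximum(np.abs(t), 1e-300)
    tail = 0.5 * np.exp(-a) - 0.5 * a * exp1(a)
    return np.where(t >= 0.0, 1.0 - tail, tail)

L, N = 80.0, 1600
h = L / N
x = (np.arange(N) + 0.5) * h
D = x[:, None] - x[None, :]
A = Kcum(D + h / 2) - Kcum(D - h / 2)        # exact cell integrals of K
A[:, -1] += Kcum(x - L)                       # closure: u(y) ~ u(L) for y > L
u = np.linalg.solve(np.eye(N) - A, np.exp(-x))

# m_1(W) = int_0^inf x G dx - int_{-inf}^0 x (int_0^inf K(y-x) u(y) dy) dx;
# with G(x) = exp(-x) the first integral equals 1 exactly.
xm = -(np.arange(600) + 0.5) * h              # negative half-line, cut at -30
B = Kcum(x[None, :] - xm[:, None] + h / 2) - Kcum(x[None, :] - xm[:, None] - h / 2)
m1 = 1.0 - h * np.sum(xm * (B @ u))
print("far-field value of u:", u[-1], "  3*m_1(W):", 3.0 * m1)
```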
We define
$\overline{u}_{\infty}(p):=\lim\limits_{y\to\infty}\overline{u}(y,p)$ for
$p\in\partial\Omega$. We can also show that $\overline{u}$ converges to
$\overline{u}_{\infty}$ at an exponential rate.
###### Lemma 3.4.
Let $u$ be the unique bounded solution to the problem (3.10) and
$u_{\infty}=\lim\limits_{x\to\infty}u(x)$. Then there exists a constant $C>0$
such that for all $x\geq 0$
$|u(x)-u_{\infty}|\leq Ce^{-\frac{x}{2}}.$
###### Proof.
We use the same notation as in Theorem 3.3. Hence, we know that
$\hat{u}(\xi)\overset{\mathcal{S}^{\prime}}{=}\frac{u_{\infty}}{2}\sqrt{2\pi}\delta+\frac{u_{\infty}}{2}\sqrt{\frac{2}{\pi}}PV\left(\frac{1}{i\xi}\right)+\hat{h}(\xi),$
(3.20)
with
$F(\xi)\hat{h}(\xi)=\hat{W}(\xi)-\frac{3m_{1}(W)}{\sqrt{2\pi}i}\frac{F(\xi)}{\xi}$.
By the definition of $W$ we see
$\lim\limits_{x\searrow 0}W(x)-\lim\limits_{x\nearrow
0}W(x)=W(0^{+})-W(0^{-})=u(0).$ (3.21)
We recall that $W$ has exactly one discontinuity, at $x=0$, and that
$W\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\in
C^{\infty}\left(\mathbb{R}_{-}\right)$ and
$W\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}\in
C^{\infty}\left(\mathbb{R}_{+}\right)$. By the monotonicity of the two
functions $W\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}$ and
$W\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}$ and since $W\in
L^{\infty}\left(\mathbb{R}\right)$ we see that
$W^{\prime}\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\in L^{1}(\mathbb{R}_{-})$ and
$W^{\prime}\raisebox{2.0pt}{$\chi$}_{\\{x>0\\}}\in L^{1}(\mathbb{R}_{+})$.
Moreover, we have the asymptotics
$\hat{W}(\xi)\simeq\frac{u(0)}{\sqrt{2\pi}i\xi}+\mathcal{O}\left(\frac{1}{\xi^{1+\delta}}\right)$
as $|\xi|\to\infty$ for $0<\delta<1$. Indeed, integrating by parts and using
that $\lim\limits_{|x|\to\infty}W(x)=0$ we compute
$\begin{split}\sqrt{2\pi}\hat{W}(\xi)=&\int_{-\infty}^{0}W(x)e^{-i\xi
x}\;dx+\int_{0}^{\infty}W(x)e^{-i\xi x}\;dx\\\
=&\frac{u(0)}{i\xi}+\frac{1}{i\xi}\left(\int_{-\infty}^{0}W^{\prime}(x)e^{-i\xi
x}\;dx+\int_{0}^{\infty}W^{\prime}(x)e^{-i\xi x}\;dx\right)\\\
=&\frac{u(0)}{i\xi}-\frac{1}{i\xi}\left(\int_{-\infty}^{-1}dx\int_{0}^{\infty}dy\;\frac{e^{-(y-x)}u(y)}{2(y-x)}e^{-i\xi
x}+\int_{-\infty}^{0}dx\int_{1}^{\infty}dy\;\frac{e^{-(y-x)}u(y)}{2(y-x)}e^{-i\xi
x}\right)\\\
&-\frac{1}{i\xi}\left(\int_{-1}^{0}dx\int_{0}^{1}dy\;\frac{e^{-(y-x)}u(y)}{2(y-x)}\frac{d}{dx}\frac{e^{-i\xi
x}-1}{-i\xi}\right)\\\
&+\frac{1}{i\xi}\left(\int_{1}^{\infty}G^{\prime}(x)e^{-i\xi
x}\;dx+\int_{0}^{1}G^{\prime}(x)\frac{d}{dx}\frac{e^{-i\xi
x}-1}{-i\xi}\right)\end{split}$ (3.22)
We conclude integrating by parts and applying the Riemann-Lebesgue Theorem in
the following way. First of all, the function
$\partial_{x}\frac{e^{-(y-x)}}{(y-x)}$ is integrable on
$\left((-\infty,-1)\times\mathbb{R}_{+}\right)\cup\left(\mathbb{R}_{-}\times(1,\infty)\right)$ and also
$G^{\prime\prime}(x)$ is integrable in $(1,\infty)$. Moreover, using
$\left|e^{-i\xi x}-1\right|\leq 2|\xi|^{\delta}|x|^{\delta}$ for $0<\delta<1$
we have
$\int_{-1}^{0}dx\int_{0}^{1}dy\;\frac{e^{-(y-x)}u(y)}{(y-x)}|x|^{\delta}\leq
C\int_{-1}^{0}dx\;\left(|x|^{\delta-1}\right)<\infty$
and
$\int_{0}^{1}dx\;G^{\prime\prime}(x)|x|^{\delta}\leq
C\int_{0}^{1}\frac{e^{-x}}{x^{1-\delta}}\;dx<\infty.$
For this last estimate we also used that
$\frac{d}{d\theta}e^{\frac{x}{\cos(\theta)}}=x\frac{e^{\frac{x}{\cos(\theta)}}}{\cos^{2}(\theta)}\sin(\theta)$,
which implies $\left|G^{\prime\prime}(x)\right|\leq 2\pi\Arrowvert
g\Arrowvert_{\infty}\frac{e^{-x}}{x}$. Thus, by the definition of $\hat{h}$
and using (3.16) we have
$\hat{h}(\xi)\simeq\begin{cases}\mathcal{O}(1)&|\xi|\to 0,\\\
\frac{u(0)}{\sqrt{2\pi}}\frac{1}{i\xi}-\frac{u_{\infty}}{\sqrt{2\pi}}\frac{1}{i\xi}+\mathcal{O}\left(\frac{1}{\xi^{1+\delta}}\right)&|\xi|\to\infty.\end{cases}$
(3.23)
By the definition of $\hat{u}$ in (3.20) we see
$\hat{v}(\xi):=\hat{u}(\xi)-\frac{u_{\infty}}{2}\sqrt{2\pi}\delta-
PV\left(\frac{1}{i\xi}\right)\left(\frac{u_{\infty}}{2}\sqrt{\frac{2}{\pi}}\frac{1}{1+\xi^{2}}+\frac{u(0)}{\sqrt{2\pi}}\frac{\xi^{2}}{1+\xi^{2}}\right)\in
L^{2}(\mathbb{R}).$ (3.24)
We claim that
1. (i)
$\hat{v}$ is analytic in the strip
$S=\\{z\in\mathbb{C}:|\Im(z)|<\frac{3}{4}\\}$;
2. (ii)
$|\hat{v}(\xi)|\leq\frac{C}{1+|\xi|^{1+\delta}}$;
3. (iii)
$v(x)=u(x)-u_{\infty}+\frac{e^{-|x|}}{2}\left(u_{\infty}-u(0)\right)$ for
$x>0$ and $v(x)=\mathcal{F}^{-1}(\hat{v})(x)$.
A contour integral then implies the lemma. Indeed, for $x>0$ we can compute
$\begin{split}\sqrt{2\pi}|v(x)|=&\lim\limits_{R\to\infty}\left|\int_{-R}^{R}\hat{v}(\xi)e^{i\xi
x}\;d\xi\right|\\\
\leq&\lim\limits_{R\to\infty}\left|i\int_{0}^{\frac{1}{2}}\hat{v}(R+it)e^{iRx}e^{-tx}\;dt\right|+\lim\limits_{R\to\infty}\left|i\int_{0}^{\frac{1}{2}}\hat{v}(-R+it)e^{-iRx}e^{-tx}\;dt\right|\\\
&+\lim\limits_{R\to\infty}\left|\int_{-R}^{R}\hat{v}\left(t+\frac{1}{2}i\right)e^{itx}e^{-\frac{x}{2}}\;dt\right|\\\
\leq&e^{-\frac{x}{2}}\lim\limits_{R\to\infty}\int_{-R}^{R}\frac{C}{\frac{1}{2}+|t|^{1+\delta}}\;dt=\overline{C}e^{-\frac{x}{2}},\end{split}$
(3.25)
where for the first inequality we used the triangle inequality and the
analyticity of $\hat{v}$ from claim (i), the second inequality is due to
dominated convergence and claim (ii), and the last integral is finite. Equation
(3.25) and claim (iii) imply $|u(x)-u_{\infty}|\leq Ce^{-\frac{x}{2}}$ for
$x>0$.
We now prove the claims. To prove claim (i) it is enough to show that
$\hat{h}$ is analytic in $S$; then (3.20) and (3.24) imply (i). First of
all we recall that $W$ decays exponentially, $|W(x)|\leq Ce^{-|x|}$,
hence $|W(x)|e^{\frac{3}{4}|x|}\in L^{1}\left(\mathbb{R}\right)$ and therefore
the Paley-Wiener theorem implies that $\hat{W}$ is analytic in $S$. Since
$\arctan(z)=\frac{1}{2i}\ln(\frac{1+iz}{1-iz})$ is analytic in
$\\{z\in\mathbb{C}:|\Im(z)|<1\\}$ and since $F(z)=\frac{z-\arctan(z)}{z}$ has
exactly one zero in $z=0$, which is of degree $2$, the definition of
$\hat{h}=\frac{f}{F}$ together with (3.17) implies that $\hat{h}$ is analytic
in $S$ since (3.18) implies that $0$ is a removable singularity.
For claim (ii) we just put together equations (3.20), (3.23) and (3.24). We
notice also that the constant $C>0$ of claim (ii) depends only on $\hat{W}$.
Claim (iii) is more involved. We have to consider again two different contour
integrals in order to compute the inverse Fourier transform of $\hat{v}$. We
start by considering the function
$f(\xi)=\frac{1}{i\xi(1+\xi^{2})}$, understood in the principal-value sense.
Let first $x>0$ and let $\gamma^{+}_{1}$ be the path around $i$ sketched in
Figure 2.
Figure 2: sketch of $\gamma^{+}_{1}$.
Hence, we compute
$\begin{split}\mathcal{F}^{-1}&\left(PV\left(\frac{1}{i\xi}\frac{1}{1+\xi^{2}}\right)\right)(x)=\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{-R}^{-\frac{1}{R}}f(\xi)e^{i\xi
x}\;d\xi+\int_{\frac{1}{R}}^{R}f(\xi)e^{i\xi x}\;d\xi\right)\\\
=&\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{\gamma^{+}_{1}}f(\xi)e^{i\xi
x}d\xi\right)+\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{0}^{\pi}f\left(\frac{e^{i\theta}}{R}\right)\frac{ie^{i\theta}}{R}e^{-\frac{\sin(\theta)x}{R}}e^{\frac{i\cos(\theta)x}{R}}\;d\theta\right)\\\
&-\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{0}^{\pi}f\left(Re^{i\theta}\right)Rie^{i\theta}e^{-R\sin(\theta)x}e^{iR\cos(\theta)x}\;d\theta\right)\\\
=&\sqrt{\frac{\pi}{2}}\left(1-e^{-x}\right).\end{split}$
For the computation of these integrals we used Cauchy's residue theorem
together with $\text{Res}_{i}\,f(\xi)e^{i\xi x}=\frac{ie^{-x}}{2}$; the second
integral converges to $\pi$ as $R\to\infty$ and the third converges to zero,
both limits by the Lebesgue dominated convergence theorem. Denoting by
$\gamma^{-}_{1}$ the mirrored path to $\gamma^{+}_{1}$ with respect to the
real axis and arguing similarly we also get that for $x<0$ the inverse Fourier
transformation is
$\mathcal{F}^{-1}\left(PV\left(\frac{1}{i\xi}\frac{1}{1+\xi^{2}}\right)\right)(x)=-\sqrt{\frac{\pi}{2}}\left(1-e^{-|x|}\right)$.
Hence,
$\mathcal{F}^{-1}\left(PV\left(\frac{1}{i\xi}\frac{1}{1+\xi^{2}}\right)\right)(x)=\text{sgn}(x)\sqrt{\frac{\pi}{2}}\left(1-e^{-|x|}\right).$
(3.26)
For the function $g(\xi)=\frac{\xi}{i(1+\xi^{2})}$ we consider again first
$x>0$ and the path $\gamma^{+}_{2}$ around $i$ sketched in Figure 3.
Figure 3: sketch of $\gamma^{+}_{2}$.
Hence, the Cauchy’s residue theorem and the dominated convergence imply
$\begin{split}\mathcal{F}^{-1}&\left(\frac{\xi}{i(1+\xi^{2})}\right)(x)=\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\int_{-R}^{R}g(\xi)e^{i\xi
x}\;d\xi\\\
=&\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{\gamma^{+}_{2}}g(\xi)e^{i\xi
x}d\xi\right)-\frac{1}{\sqrt{2\pi}}\lim\limits_{R\to\infty}\left(\int_{0}^{\pi}g\left(Re^{i\theta}\right)Rie^{i\theta}e^{-R\sin(\theta)x}e^{iR\cos(\theta)x}\;d\theta\right)\\\
=&\sqrt{\frac{\pi}{2}}e^{-x},\end{split}$
where we also used that $\text{Res}_{i}\,g(\xi)e^{i\xi x}=\frac{e^{-x}}{2i}$.
Denoting similarly as before by $\gamma^{-}_{2}$ the mirrored path to
$\gamma^{+}_{2}$ with respect to the real axis we obtain
$\mathcal{F}^{-1}(g)(x)=-\sqrt{\frac{\pi}{2}}e^{-|x|}$ for $x<0$, and thus
$\mathcal{F}^{-1}\left(\frac{\xi}{i(1+\xi^{2})}\right)(x)=\text{sgn}(x)\sqrt{\frac{\pi}{2}}e^{-|x|}.$
(3.27)
Hence, the definition of $\hat{v}$ in (3.24) and equations (3.26), (3.27)
imply claim (iii) for $x>0$:
$v(x)=u(x)-u_{\infty}+\frac{e^{-|x|}}{2}\left(u_{\infty}-u(0)\right).$
∎
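Both residue computations can be cross-checked by direct quadrature, using the Fourier convention of the text, $\hat{f}(\xi)=\frac{1}{\sqrt{2\pi}}\int f(x)e^{-i\xi x}\,dx$; the values of $x$ and $\xi$ below are arbitrary.

```python
# Numerical cross-check of the inverse transforms (3.26) and (3.27).
import numpy as np
from scipy.integrate import quad

# (3.26): in the principal value only the even (sine) part survives, so the
# inverse transform reduces to a one-sided integral.
x = 1.7
pv = 2.0 * quad(lambda s: np.sin(s * x) / (s * (1.0 + s**2)), 0, np.inf,
                limit=400)[0]
print(pv / np.sqrt(2.0 * np.pi), "vs", np.sqrt(np.pi / 2.0) * (1.0 - np.exp(-x)))

# (3.27): forward transform of sgn(x)*sqrt(pi/2)*e^{-|x|} at frequency xi,
# which should come out as xi/(i*(1+xi^2)).
xi = 0.9
ft = (-2j / np.sqrt(2.0 * np.pi)) * np.sqrt(np.pi / 2.0) * \
     quad(lambda s: np.exp(-s) * np.sin(xi * s), 0, np.inf)[0]
print(ft, "vs", xi / (1j * (1.0 + xi**2)))
```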
There are two more important properties of $\overline{u}(y,p)$ that we will
need in the rest of the paper; they are established in the next two lemmas.
First of all, $\overline{u}(y,p)$ is uniformly bounded in both variables.
###### Lemma 3.5.
Let $\overline{u}(y,p)$ be the non-negative bounded solution to the problem
(2.8) for $g_{\nu}(n)$ satisfying the assumption as in Theorem 3.2. Then there
exists a constant $C$ such that
$\sup\limits_{y\in\mathbb{R},\;p\in\partial\Omega}\overline{u}(y,p)\leq
C<\infty.$
###### Proof.
By definition $\overline{u}$ satisfies $\mathcal{L}(\overline{u})(y)=G_{p}(y)$
for $y>0$ and $\overline{u}(y,p)=0$ for $y<0$. Moreover, recalling the norm
defined in (1.10), the source can be estimated by
$0\leq G_{p}(y)\leq\Arrowvert g\Arrowvert_{1}e^{-y},$
since $|n\cdot N_{p}|\leq 1$.
Theorem 3.2 ensures the existence of a unique bounded solution $v$, continuous
on the positive half-line, of $\mathcal{L}(v)(y)=\Arrowvert
g\Arrowvert_{1}e^{-y}$ for $y>0$ and $v(y)=0$ for $y<0$. Hence, we can apply
the maximum principle of Theorem 3.1 as we did in Lemma 3.2 to the function
$v-\overline{u}(\cdot,p)\in C\left([0,\infty)\right)$ and we conclude
$0\leq\overline{u}(y,p)\leq v(y)\leq\Arrowvert v\Arrowvert_{\infty}:=C<\infty$
for all $y\in\mathbb{R}$ and $p\in\partial\Omega$. ∎
Also, the rate of convergence of $\overline{u}(y,p)$ to
$\overline{u}_{\infty}(p)$ can be bounded independently of
$p\in\partial\Omega$.
###### Corollary 3.2.
There exists a constant $C>0$ independent of $p\in\partial\Omega$ such that
$|\overline{u}(y,p)-\overline{u}_{\infty}(p)|\leq Ce^{-\frac{y}{2}}.$
###### Proof.
This is a consequence of Lemma 3.4 and Lemma 3.5. From Lemma 3.5 we know that
there exists a constant $C>0$ independent of $p\in\partial\Omega$ such that
$\left|W(x)\right|\leq
C\left(e^{-|x|}+|x|K(x)\raisebox{2.0pt}{$\chi$}_{\\{x<0\\}}\right)\in
L^{1}(\mathbb{R})\cap L^{2}(\mathbb{R})\cap L^{\infty}(\mathbb{R}),$
where $W$ is the function defined in (3.13). Since
$|x|K(x)\leq\frac{e^{-|x|}}{2}$ all moments of $W$ are finite and for any
$n\in\mathbb{N}$ there exists a constant $C_{n}>0$ independent of
$p\in\partial\Omega$ such that
$\left|m_{n}\left(W\right)\right|\leq C_{n}<\infty.$
Hence, $\hat{W}\in C_{0}(\mathbb{R})\cap C^{\infty}(\mathbb{R})\cap
L^{2}(\mathbb{R})$ and also all derivatives are uniformly bounded in
$p\in\partial\Omega$ since
$\left|\hat{W}^{(n)}(\xi)\right|\leq\frac{C_{n}}{\sqrt{2\pi}}$. Thus, the
function $\hat{h}$, whose asymptotics (3.23) were obtained using (3.18), can
be bounded independently of $p\in\partial\Omega$.
Moreover, we notice that in (3.22) as $|\xi|\to\infty$ we can bound
$\left|\hat{W}(\xi)-\frac{u(0)}{\sqrt{2\pi}i\xi}\right|$ by
$\frac{C}{|\xi|^{1+\delta}}$ with a constant $C>0$ independent of
$p\in\partial\Omega$. Indeed, as we have seen in Lemma 3.4 we have
$\left|G^{\prime\prime}(x)\right|\leq 2\pi\Arrowvert
g\Arrowvert_{\infty}\frac{e^{-x}}{x}$ and by Lemma 3.5 we have also
$\left|\overline{u}(y,p)\right|\leq C$.
Hence, we conclude as in Lemma 3.4 that there exists a constant $C>0$
independent of $p\in\partial\Omega$ such that
$\left|\hat{v}(\xi)\right|\leq\frac{C}{1+|\xi|^{1+\delta}}$, where $\hat{v}$
was defined in (3.24).
Arguing now exactly as in Lemma 3.4 using also Lemma 3.5 we conclude that
there exists a constant $C>0$ independent of $p\in\partial\Omega$ such that
$|\overline{u}(y,p)-\overline{u}_{\infty}(p)|\leq Ce^{-\frac{y}{2}}$. ∎
Next, using again the maximum principle we can also show that
$\overline{u}(y,p)$ is Lipschitz continuous with respect to
$p\in\partial\Omega$ uniformly in $y$.
###### Lemma 3.6.
Let $g_{\nu}(n)$ be as in Theorem 3.2 and let $\overline{u}$ be the unique
bounded solution to (2.8). Then $\overline{u}$ is uniformly continuous with
respect to the variable $p\in\partial\Omega$, uniformly in $y$. More precisely, it
is Lipschitz continuous, i.e. there exists a constant $C>0$ such that for
every $p,q\in\partial\Omega$
$\sup_{y\geq 0}\left|\overline{u}(y,p)-\overline{u}(y,q)\right|\leq
C|p-q|:=\omega_{1}(|p-q|).$
###### Proof.
The proof is based on the maximum principle. We start by taking
$0<\tilde{\delta}<1$ sufficiently small and considering $p,q\in\partial\Omega$
with $|p-q|<\tilde{\delta}$. We denote by $S_{p}(q)$ the plane defined by the
vector $\overset{\rightharpoonup}{pq}$ and the unit vector $N_{p}$. Given that
$\partial\Omega$ is a $C^{3}$-surface we can define $\rho_{p}$ to be the
radius of curvature of the curve $C_{p}(q):=S_{p}(q)\cap\partial\Omega$ at
$p$. Since by assumption the curvature of $\partial\Omega$ is bounded from
below by a positive constant, for $\tilde{\delta}$ small enough we can
estimate
$\frac{1}{2}\rho_{p}\theta_{pq}\leq|p-q|\leq 2\rho_{p}\theta_{pq},$ (3.28)
where $\theta_{pq}$ is the angle between $N_{p}$ and $N_{q}$. This is true
because, for $\tilde{\delta}$ sufficiently small, the angle $\theta_{pq}$ is
nonzero and approximately equals the central angle between the rays connecting
$p$ and $q$ to the center of the circle of radius $\rho_{p}$ tangent at $p$.
We denote by $R$ the minimal radius of curvature of $\partial\Omega$, hence
$\rho_{p}\geq R$. Now we consider the operator $\mathcal{L}$ acting on the
difference $\overline{u}(y,p)-\overline{u}(y,q)$. We can estimate its absolute
value by the sum of the following six terms
$\begin{split}\left|\mathcal{L}\left(\right.\right.&\left.\left.\overline{u}(y,p)-\overline{u}(y,q)\right)\right|\leq\int_{A_{1}}\int_{0}^{\infty}g_{\nu}(n)e^{-\frac{y}{|n\cdot
N_{p}|}}\;d\nu\;dn+\int_{A_{2}}\int_{0}^{\infty}g_{\nu}(n)e^{-\frac{y}{|n\cdot
N_{q}|}}\;d\nu\;dn\\\
&+\int_{A_{3}}\int_{0}^{\infty}g_{\nu}(n)\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot
N_{q}|}}\right|\;d\nu\;dn+\int_{A_{4}}\int_{0}^{\infty}g_{\nu}(n)\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot N_{q}|}}\right|\;d\nu\;dn\\\
&+\int_{A_{5}}\int_{0}^{\infty}g_{\nu}(n)\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot
N_{q}|}}\right|\;d\nu\;dn+\int_{A_{6}}\int_{0}^{\infty}g_{\nu}(n)\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot N_{q}|}}\right|\;d\nu\;dn,\end{split}$ (3.29)
where we denote by $A_{i}$ the following sets
$\begin{split}A_{1}&:=\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;n\cdot
N_{q}\geq 0\right\\},\;\;\;\;A_{2}:=\left\\{n\in\mathbb{S}^{2}:n\cdot
N_{p}\geq 0,\;n\cdot N_{q}<0\right\\},\\\
A_{3}&:=\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;n\cdot N_{q}<0,\;|n\cdot
N_{p}|\geq|n\cdot N_{q}|,\;|n\cdot N_{p}|>\frac{4}{R}|p-q|\right\\},\\\
A_{4}&:=\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;n\cdot N_{q}<0,\;|n\cdot
N_{p}|\geq|n\cdot N_{q}|,\;|n\cdot N_{p}|\leq\frac{4}{R}|p-q|\right\\},\\\
A_{5}&:=\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;n\cdot N_{q}<0,\;|n\cdot
N_{q}|\geq|n\cdot N_{p}|,\;|n\cdot N_{q}|>\frac{4}{R}|p-q|\right\\}\text{ and
}\\\ A_{6}&:=\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;n\cdot
N_{q}<0,\;|n\cdot N_{q}|\geq|n\cdot N_{p}|,\;|n\cdot
N_{q}|\leq\frac{4}{R}|p-q|\right\\}.\\\ \end{split}$
By symmetry, we only need to estimate the first, the third and the fourth
terms. We start with the first line of equation (3.29). The set $A_{1}$ is
contained in the set of all $n$ whose angle with $N_{p}$ lies in the interval
$(\frac{\pi}{2},\frac{\pi}{2}+\theta_{pq})$. Using the fact
that $\frac{y}{|n\cdot N_{p}|}\geq y$, we estimate the exponential by $e^{-y}$ and
hence we see
$\int_{A_{1}}\int_{0}^{\infty}g_{\nu}(n)e^{-\frac{y}{|n\cdot
N_{p}|}}\;d\nu\;dn\leq\Arrowvert
g\Arrowvert_{\infty}2\pi\theta_{pq}e^{-y}\leq\frac{4\pi}{R}\Arrowvert
g\Arrowvert_{\infty}|p-q|e^{-y}.$ (3.30)
The second term in (3.29) is estimated similarly. For the third term of
equation (3.29) we estimate the difference of the exponentials as follows,
assuming $|n\cdot N_{p}|\geq|n\cdot N_{q}|$:
$\left|e^{-\frac{y}{|n\cdot N_{p}|}}-e^{-\frac{y}{|n\cdot N_{q}|}}\right|\leq
e^{-\frac{y}{|n\cdot N_{p}|}}y\left|\frac{1}{|n\cdot N_{q}|}-\frac{1}{|n\cdot
N_{p}|}\right|\leq e^{-\frac{y}{|n\cdot N_{p}|}}y\left|\frac{|n\cdot
N_{p}|-|n\cdot N_{q}|}{|n\cdot N_{q}||n\cdot N_{p}|}\right|,$
where we used for $x>0$ the inequality $1-e^{-x}\leq x$. Moreover,
$|n\cdot(N_{p}-N_{q})|\leq|N_{p}-N_{q}|\leq\theta_{pq}\leq\frac{2}{R}|p-q|$, which implies
$0\leq|n\cdot N_{p}|-|n\cdot
N_{q}|\leq|n\cdot(N_{q}-N_{p})|\leq\frac{2}{R}|p-q|.$
Since $|n\cdot N_{p}|>\frac{4}{R}|p-q|$ we see also that
$|n\cdot N_{q}|\geq|n\cdot N_{p}|-\frac{2}{R}|p-q|\geq\frac{|n\cdot
N_{p}|}{2}.$
Hence,
$\left|e^{-\frac{y}{|n\cdot N_{p}|}}-e^{-\frac{y}{|n\cdot N_{q}|}}\right|\leq
e^{-\frac{y}{|n\cdot N_{p}|}}y\frac{4|p-q|}{R|n\cdot N_{p}|^{2}}.$
Putting together these inequalities we compute
$\begin{split}\int_{A_{3}}\int_{0}^{\infty}g_{\nu}(n)&\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot
N_{q}|}}\right|\;d\nu\;dn\leq\frac{4|p-q|}{R}\Arrowvert
g\Arrowvert_{\infty}\int_{A_{3}}dn\;e^{-\frac{y}{|n\cdot
N_{p}|}}\frac{y}{|n\cdot N_{p}|^{2}}\\\ \leq&\frac{4|p-q|}{R}\Arrowvert
g\Arrowvert_{\infty}4\pi\int_{0}^{\frac{\pi}{2}}e^{-\frac{y}{\cos(\theta)}}\frac{y\sin(\theta)}{\cos^{2}(\theta)}\;d\theta=\frac{16\pi|p-q|}{R}\Arrowvert
g\Arrowvert_{\infty}e^{-y},\end{split}$ (3.31)
where we estimated the last integral over $A_{3}$ using polar coordinates on
$\mathbb{S}^{2}$ with polar axis $N_{p}$. It remains to estimate the
integral on $A_{4}$. For this term we use the inclusion
$\begin{split}A_{4}\subset&\left\\{n\in\mathbb{S}^{2}:n\cdot N_{p}<0,\;|n\cdot
N_{p}|\leq\frac{4}{R}|p-q|\right\\}\\\
\subset&\left\\{(\varphi,\theta)\in[0,2\pi]\times[0,\pi]:\theta\in\left(\frac{\pi}{2},\frac{\pi}{2}+C(R)|p-q|\right)\right\\},\end{split}$
where the last inclusion follows from the smallness of $\frac{4}{R}|p-q|<1$
and the expansion of the arccosine; $C(R)$ is a constant depending only
on $R$. Hence, as we estimated in (3.30), we have
$\int_{A_{4}}\int_{0}^{\infty}g_{\nu}(n)\left|e^{-\frac{y}{|n\cdot
N_{p}|}}-e^{-\frac{y}{|n\cdot N_{q}|}}\right|\;d\nu\;dn\leq C(R)4\pi\Arrowvert
g\Arrowvert_{\infty}|p-q|.$ (3.32)
Now, with equations (3.30),(3.31) and (3.32) we estimate the operator by
$\left|\mathcal{L}\left(\overline{u}(y,p)-\overline{u}(y,q)\right)\right|\leq
C(R)\Arrowvert g\Arrowvert_{\infty}|p-q|e^{-y},$
where $C(R)>0$ is a constant depending only on the minimal radius of curvature
$R$. Theorem 3.2 and the maximum principle imply the existence of a unique
non-negative bounded continuous function $V$ solving the equation
$\mathcal{L}(V)(y)=e^{-y}$ for $y\geq 0$. Hence, we apply the maximum
principle of Theorem 3.1 as in Lemma 3.2 to the continuous functions
$C(R)\Arrowvert
g\Arrowvert_{\infty}|p-q|V-\left(\overline{u}(y,p)-\overline{u}(y,q)\right)$
and $C(R)\Arrowvert
g\Arrowvert_{\infty}|p-q|V-\left(\overline{u}(y,q)-\overline{u}(y,p)\right)$.
We conclude the uniform continuity of $\overline{u}(y,p)$ in $p$, uniformly
in $y$:
$\left|\overline{u}(y,p)-\overline{u}(y,q)\right|\leq C(R)\Arrowvert
g\Arrowvert_{\infty}|p-q|.$
The modulus of continuity $\omega_{1}$ is hence defined by
$\omega_{1}(r)=C(R)\Arrowvert g\Arrowvert_{\infty}r$. ∎
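The closed-form angular integral used in (3.31) (and, earlier, behind the bound on $G^{\prime\prime}$ in Lemma 3.4) follows from the substitution $s=y/\cos(\theta)$; a quick check with an arbitrary value of $y$:

```python
# Check: int_0^{pi/2} e^{-y/cos(t)} * y * sin(t)/cos(t)^2 dt = e^{-y}.
import numpy as np
from scipy.integrate import quad

y = 0.8
val = quad(lambda t: np.exp(-y / np.cos(t)) * y * np.sin(t) / np.cos(t)**2,
           0, np.pi / 2)[0]
print(val, "vs", np.exp(-y))   # substitution s = y/cos(t) gives e^{-y} exactly
```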
###### Corollary 3.3.
The limit $\overline{u}_{\infty}$ is Lipschitz continuous in
$p\in\partial\Omega$.
###### Proof.
This is a direct consequence of Lemma 3.6: the modulus of continuity of
$\overline{u}_{\infty}$ is the same $\omega_{1}$ as for
$\overline{u}(y,p)$. ∎
Finally, we summarize all properties of $\overline{u}$ in the following
proposition.
###### Proposition 3.1.
Let $g_{\nu}(n)$ be as in Theorem 3.2 and $\Omega$ as in the assumption. For
every $p\in\partial\Omega$ there exists a unique non-negative bounded solution
$\overline{u}(y,p)$ to (2.8). For every $p\in\partial\Omega$ the function
$\overline{u}(\cdot,p)$ is uniformly continuous in $[0,\infty)$ and has a non-
negative limit
$\overline{u}_{\infty}(p)=\lim\limits_{y\to\infty}\overline{u}(y,p)$, which is
strictly positive if $\left\\{n\in\mathbb{S}:n\cdot N_{p}<0\text{ and
}\int_{0}^{\infty}d\nu g_{\nu}(n)\not\equiv 0\right\\}$ is not a zero measure
set. Moreover, $\overline{u}(y,p)$ is uniformly bounded in both variables and
it is Lipschitz continuous with respect to $p\in\partial\Omega$ uniformly on
$y\in\mathbb{R}_{+}$. Finally, $\overline{u}_{\infty}$ is Lipschitz continuous
and there exists a constant $C>0$ independent of $p\in\partial\Omega$ such
that $|\overline{u}(y,p)-\overline{u}_{\infty}(p)|\leq Ce^{-\frac{|y|}{2}}$.
## 4 Rigorous proof of the diffusion equilibrium approximation for constant
absorption coefficient
This section of the paper deals with the rigorous proof of the diffusion
equilibrium approximation for the constant absorption coefficient case. We
will show that the Stefan-Boltzmann law $u^{\varepsilon}(x)=4\pi\sigma
T_{\varepsilon}^{4}(x)$ for the temperature $T_{\varepsilon}$ associated to
the boundary value problem (1.5) converges pointwise as $\varepsilon\to 0$ to
$v$, the solution to the Dirichlet problem
$\begin{cases}-\Delta v=0&\text{ in }\Omega,\\\ v=\overline{u}_{\infty}&\text{
on }\partial\Omega,\end{cases}$ (4.1)
where $\overline{u}_{\infty}$ is defined as in Proposition 3.1.
### 4.1 Derivation of the equation for $u^{\varepsilon}$
Let us call $I^{\varepsilon}_{\nu}$ the solution to the initial boundary value
problem (1.5). We start with the derivation of the integral equation satisfied
by $u^{\varepsilon}=4\pi\sigma T_{\varepsilon}^{4}$. To this end we solve by
characteristics the equation
$n\cdot\nabla_{x}I_{\nu}\left(x,n\right)=\frac{1}{\varepsilon}\left(B_{\nu}\left(T\left(x\right)\right)-I_{\nu}\left(x,n\right)\right)$
Let $x\in\Omega$ and $n\in\mathbb{S}^{2}$. The convexity of $\Omega$ implies
the existence of a unique point $x_{\Omega}(x,n)\in\partial\Omega$ reached
from $x$ in direction $-n$. Hence,
$\frac{x-x_{\Omega}(x,n)}{\left|x-x_{\Omega}(x,n)\right|}=n$ and we define
$s(x,n)=\left|x-x_{\Omega}(x,n)\right|$. Then $x=x_{\Omega}(x,n)+s(x,n)n$.
Integrating equation (1.5) along the characteristics we get
$\begin{split}I^{\varepsilon}_{\nu}(x,n)=g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}+\frac{1}{\varepsilon}\int_{0}^{s(x,n)}e^{-\frac{t}{\varepsilon}}B_{\nu}\left(T\left({x-tn}\right)\right)\;dt.\end{split}$
Using the heat equation, i.e. $\nabla_{x}\cdot\mathcal{F}=0$ (see (1.5)), we
calculate
$0=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;n\cdot\nabla_{x}I^{\varepsilon}_{\nu}(x,n)=\frac{1}{\varepsilon}\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;\left(B_{\nu}(T_{\varepsilon}(x))-I^{\varepsilon}_{\nu}(x,n)\right).$
We define $u^{\varepsilon}(x)=4\pi\sigma
T_{\varepsilon}^{4}(x)=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dnB_{\nu}\left(T_{\varepsilon}(x)\right)$
according to (1.3). Hence also
$u^{\varepsilon}(x)=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;I_{\nu}^{\varepsilon}(x,n)$.
We now integrate the expression obtained for the intensity and conclude that
$u^{\varepsilon}$ satisfies
$\begin{split}u^{\varepsilon}(x)&=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}+\frac{1}{4\pi\varepsilon}\int_{\mathbb{S}^{2}}dn\int_{0}^{s(x,n)}e^{-\frac{t}{\varepsilon}}u^{\varepsilon}(x-tn)\;dt\\\
&=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}+\frac{1}{4\pi\varepsilon}\int_{\Omega}\frac{e^{-\frac{\left|x-\eta\right|}{\varepsilon}}}{\left|x-\eta\right|^{2}}u^{\varepsilon}(\eta)\;d\eta,\end{split}$
where the last equality is due to the change of variables
$\mathbb{S}^{2}\times(0,\infty)\to\Omega$ with $(n,t)\mapsto x-tn=\eta$. Hence
the sequence $u^{\varepsilon}$ of exact solutions solves
$u^{\varepsilon}(x)-\int_{\Omega}\frac{e^{-\frac{\left|x-\eta\right|}{\varepsilon}}}{4\pi\varepsilon\left|x-\eta\right|^{2}}u^{\varepsilon}(\eta)\;d\eta=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}.$
(4.2)
We define the kernel
$K_{\varepsilon}(x):=\frac{e^{-\frac{\left|{x}\right|}{\varepsilon}}}{4\pi\varepsilon\left|{x}\right|^{2}}$
and we notice that its integral in $\mathbb{R}^{3}$ is $1$.
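Two elementary properties of $K_{\varepsilon}$ are checked numerically below (the parameter values are arbitrary): the radial reduction behind its unit mass, and the fact that integrating $K_{\varepsilon}$ over a half-space at distance $d$ from $x$ reproduces the tail $\int_{d/\varepsilon}^{\infty}K(s)\,ds$ of the one-dimensional kernel $K$; this reduction is used again in Step 2 of Lemma 4.2.

```python
# K_eps: unit mass in R^3, and half-space mass equals the 1d-kernel tail.
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

eps = 0.3
mass = quad(lambda r: np.exp(-r / eps) / eps, 0, np.inf)[0]  # 4 pi r^2 cancels
print("int_R3 K_eps =", mass)                                # ~ 1.0

d = 0.2
t = d / eps
# A ray from x = (d,0,0) in direction n with n_1 < 0 leaves {x_1 > 0} after a
# length d/|n_1|; averaging exp(-d/(eps |n_1|)) over these directions gives
halfspace = quad(lambda mu: 0.5 * np.exp(-t / mu), 0, 1)[0]
print(halfspace, "vs", 0.5 * np.exp(-t) - 0.5 * t * exp1(t))  # = int_t^inf K
```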
###### Remark.
There exists a unique continuous and bounded solution $u^{\varepsilon}$. We
adapt the proof in [21]. The existence and uniqueness of a solution
$u^{\varepsilon}\in L^{\infty}\left(\Omega\right)$ can be shown with the
Banach fixed-point Theorem. We define for every given $g$ and $\varepsilon>0$
the self map $A_{g}^{\varepsilon}:L^{\infty}\left(\Omega\right)\to
L^{\infty}\left(\Omega\right)$ by
$A_{g}^{\varepsilon}(u)(x)=\int_{\Omega}K_{\varepsilon}\left(\eta-x\right)u(\eta)\;d\eta+\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}.$
Then since
$\int_{\Omega}K_{\varepsilon}(\eta-x)\;d\eta<\int_{\mathbb{R}^{3}}K_{\varepsilon}(\eta-x)\;d\eta=1$
we conclude that $A_{g}^{\varepsilon}$ is a contraction; hence there is a
unique fixed point, which is the desired unique solution. Moreover,
$G^{\varepsilon}_{x_{\Omega}}(x):=\int_{0}^{\infty}d\nu\int_{\mathbb{S}^{2}}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}$
is continuous and since $u^{\varepsilon}\in L^{\infty}\left(\Omega\right)$ and
$K_{\varepsilon}(x-\cdot)\in L^{{1}}\left(\mathbb{R}^{3}\right)$ we conclude
that the convolution
$\int_{\Omega}K_{\varepsilon}(\eta-x)u^{\varepsilon}(\eta)\;d\eta$ is
continuous and bounded. Hence, $u^{\varepsilon}$ is continuous and bounded. We
can also extend $u^{\varepsilon}$ continuously to the boundary
$\partial\Omega$ by defining $|x-x_{\Omega}(x,n)|=0$ for $x\in\partial\Omega$ and
$n\cdot N_{x}\leq 0$. Then using the generalized dominated convergence theorem
we see that both integral terms in (4.2) are continuous up to the boundary.
Hence, $u^{\varepsilon}\in C\left(\overline{\Omega}\right)$. Moreover,
$u^{\varepsilon}$ is non-negative. This is because of the maximum principle as
stated in the following theorem.
###### Theorem 4.1 (Maximum Principle).
Let $v$ be bounded and continuous, $v\in C\left(\overline{\Omega}\right)$. Let
$\mathcal{L}_{\Omega}^{\varepsilon}(v)(x)=v(x)-\int_{\Omega}K_{\varepsilon}(\eta-x)v(\eta)\;d\eta$.
Assume $v$ satisfies one of the following properties:
1. (i)
$\mathcal{L}_{\Omega}^{\varepsilon}(v)(x)\geq 0$ if $x\in\Omega$;
2. (ii)
$\mathcal{L}_{\Omega}^{\varepsilon}(v)(x)\geq 0$ if $x\in O\subset\Omega$ open
and $v(x)\geq 0$ if $x\in\Omega\setminus O$.
Then, $v\geq 0$.
###### Proof.
Let $y\in\overline{\Omega}$ such that $v(y)=\min_{x\in\overline{\Omega}}v(x)$.
Assume $v(y)<0$.
Assume that property $(i)$ holds. By continuity of the operator we have
that $\mathcal{L}^{\varepsilon}_{\Omega}(v)(x)\geq 0$ for all
$x\in\overline{\Omega}$. Then
$\begin{split}0\leq&\mathcal{L}^{\varepsilon}_{\Omega}(v)(y)=v(y)-\int_{\Omega}K_{\varepsilon}(\eta-y)v(\eta)\;d\eta\\\
=&\int_{\Omega}K_{\varepsilon}(\eta-y)\left(v(y)-v(\eta)\right)\;d\eta+v(y)\int_{\Omega^{c}}K_{\varepsilon}(\eta-y)\;d\eta<0,\end{split}$
(4.3)
where we used the normalization of the kernel $K_{\varepsilon}$. Hence, this
contradiction yields $v\geq 0$.
Assume now that $(ii)$ holds. In this case $y\in\overline{O}$, and again by
the continuity of the operator we obtain a contradiction exactly as in (4.3).
Thus the theorem is proved. ∎
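Since $K_{\varepsilon}$ is a probability density on $\mathbb{R}^{3}$ (uniform direction, exponentially distributed radius), the contraction constant of the map $A_{g}^{\varepsilon}$ from the remark above is $\sup_{x\in\Omega}\int_{\Omega}K_{\varepsilon}(\eta-x)\,d\eta<1$. The following Monte Carlo sketch estimates this mass for the unit ball (the domain and the value of $\varepsilon$ are purely illustrative); the constant approaches $1$ deep inside the domain as $\varepsilon\to 0$, which is why the uniform-in-$\varepsilon$ bounds of the next subsection require a barrier construction rather than the contraction argument.

```python
# Monte Carlo estimate of int_Omega K_eps(eta - x) d eta for the unit ball.
import numpy as np

rng = np.random.default_rng(0)
eps, M = 0.1, 400_000

def kernel_mass_inside(x):
    n = rng.normal(size=(M, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # uniform directions
    r = rng.exponential(scale=eps, size=(M, 1))     # radius ~ Exp(eps)
    eta = x + r * n                                 # eta - x has density K_eps
    return np.mean(np.linalg.norm(eta, axis=1) < 1.0)

for c in (0.0, 0.5, 0.9, 0.99):
    print("distance", c, "from center:",
          kernel_mass_inside(np.array([c, 0.0, 0.0])))
```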
### 4.2 Uniform boundedness of $u^{\varepsilon}$
In this section we show that the sequence $u^{\varepsilon}$ is uniformly
bounded in $\varepsilon$. We again use the maximum principle: we construct
uniformly bounded functions $\Phi^{\varepsilon}$ such that
$\mathcal{L}^{\varepsilon}_{\Omega}(\Phi^{\varepsilon})(x)\geq\Arrowvert
g\Arrowvert_{1}e^{-\frac{\text{dist}(x,\partial\Omega)}{\varepsilon}}$. This
yields
$\mathcal{L}^{\varepsilon}_{\Omega}\left(\Phi^{\varepsilon}-u^{\varepsilon}\right)(x)\geq
0$, which by the maximum principle implies $0\leq
u^{\varepsilon}\leq\Phi^{\varepsilon}$. The main result of this subsection is
the following.
###### Theorem 4.2.
There exist suitable constants $0<\mu<1$, $0<\gamma(\mu)<\frac{1}{3}$,
$C_{1},\;C_{2},\;C_{3}>0$ and some $\varepsilon_{0}>0$ such that
the function
$\Phi^{\varepsilon}(x)=C_{3}\Arrowvert
g\Arrowvert_{1}\left(C_{1}-\left|x\right|^{2}\right)+C_{2}\Arrowvert
g\Arrowvert_{1}\left[\left(1-\frac{\gamma}{1+\left(\frac{d(x)}{\varepsilon}\right)^{2}}\right)\wedge\left(1-\frac{\gamma}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}\right)\right],$
for $a\wedge b\>=\min\left(a,b\right)$, $R>0$ the minimal radius of curvature
$R=\min_{x\in\partial\Omega}R(x)$ and
$d(x):=\text{dist}\left(x,\partial\Omega\right)$, satisfies
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\Phi^{\varepsilon}\right)(x)\geq\Arrowvert
g\Arrowvert_{1}e^{-\frac{d(x)}{\varepsilon}}$ in $\Omega$ uniformly for all
$\varepsilon<\varepsilon_{0}$. Moreover, the solutions $u^{\varepsilon}$ of
(4.2) are uniformly bounded in $\varepsilon$.
We split the proof of this theorem in two lemmas.
###### Lemma 4.1.
Let
$C_{1}:=2\max_{x\in\overline{\Omega}}\left|x\right|^{2}+2\;\textnormal{diam}\left(\Omega\right)^{2}+4\;\textnormal{diam}\left(\Omega\right)+4$,
let $0<\varepsilon<1$. Then
$\mathcal{L}^{\varepsilon}_{\Omega}\left(C_{1}-\left|x\right|^{2}\right)\geq
2\varepsilon^{2}.$
###### Proof.
We start computing the action of $\mathcal{L}^{\varepsilon}_{\mathbb{R}^{3}}$
on $\left|x\right|^{2}$.
$\begin{split}\mathcal{L}^{\varepsilon}_{\mathbb{R}^{3}}\left[\left|\cdot\right|^{2}\right](x)=&\left|x\right|^{2}-\int_{\mathbb{R}^{3}}K_{\varepsilon}\left(\eta-x\right)\left|\eta\right|^{2}\;d\eta\\\
=&-\int_{\mathbb{R}^{3}}K_{\varepsilon}\left(\eta-x\right)\left|\eta-x\right|^{2}\;d\eta=-2\varepsilon^{2},\end{split}$
where we expanded $|\eta|^{2}=|x+(\eta-x)|^{2}$ and we used that
$\int_{\mathbb{R}^{3}}K_{\varepsilon}=1$ and the symmetry of the kernel
$K_{\varepsilon}$.
Let $D:=\textnormal{diam}(\Omega)$,
$B:=2\max_{x\in\overline{\Omega}}\left|x\right|^{2}$ and $\beta:=2D^{2}+4D+4$,
so that $C_{1}=B+\beta$. Then
$\begin{split}\mathcal{L}^{\varepsilon}_{\Omega}&\left(B+\beta-\left|\cdot\right|^{2}\right)(x)=(B+\beta)\int_{\Omega^{c}}K_{\varepsilon}\left(\eta-x\right)\;d\eta\;-\mathcal{L}^{\varepsilon}_{\mathbb{R}^{3}}\left[\left|\cdot\right|^{2}\right](x)-\int_{\Omega^{c}}K_{\varepsilon}\left(\eta-x\right)\left|\eta\right|^{2}\;d\eta\\\
\geq&\left(B+\beta\right)\int_{\Omega^{c}}K_{\varepsilon}\left(\eta-x\right)\;d\eta+2\varepsilon^{2}-2\left|x\right|^{2}\int_{\Omega^{c}}K_{\varepsilon}\left(\eta-x\right)\;d\eta-2\int_{\Omega^{c}}K_{\varepsilon}\left(\eta-x\right)\left|\eta-x\right|^{2}\;d\eta,\\\
\end{split}$
where we used $\left|\eta\right|^{2}\leq
2\left|x\right|^{2}+2\left|\eta-x\right|^{2}$. Moreover using that
$B-2|x|^{2}\geq 0$ and splitting for $x\in\Omega$ the complement of the domain
as $\Omega^{c}=\left(\Omega^{c}\cap B_{D}(x)\right)\cup B_{D}^{c}(x)$ we
obtain
$\begin{split}\mathcal{L}^{\varepsilon}_{\Omega}\left(B+\beta-\left|\cdot\right|^{2}\right)(x)\geq&2\varepsilon^{2}+\int_{B^{c}_{D/\varepsilon}(0)}K_{1}\left(\eta\right)\left(\beta-2\varepsilon^{2}\left|\eta\right|^{2}\right)\;d\eta\\\
=&2\varepsilon^{2}+\beta
e^{-\frac{D}{\varepsilon}}-e^{-\frac{D}{\varepsilon}}\left(2D^{2}+4D\varepsilon+4\varepsilon^{2}\right)\geq
2\varepsilon^{2},\end{split}$
where in the first inequality we used that $2\left|\eta-x\right|^{2}\leq
2D^{2}\leq\beta$ for $\eta\in\Omega^{c}\cap B_{D}(x)$, and for the integral
over $B^{c}_{D}(x)$ we changed variables $\frac{\eta-x}{\varepsilon}\mapsto\eta$
(so that $K_{\varepsilon}$ becomes $K_{1}$) and computed the resulting
integral, using also that $\varepsilon<1$. ∎
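Both computations in this proof reduce to one-dimensional radial integrals and can be checked directly (parameter values are arbitrary):

```python
# Second moment of K_eps and the tail integral from the last display.
import numpy as np
from scipy.integrate import quad

eps, D = 0.2, 3.0
m2 = quad(lambda r: np.exp(-r / eps) / eps * r**2, 0, np.inf)[0]
print(m2, "vs", 2.0 * eps**2)        # hence L_eps[|.|^2] = -2 eps^2 on R^3

tail = quad(lambda r: 2.0 * np.exp(-r / eps) / eps * r**2, D, np.inf)[0]
print(tail, "vs", np.exp(-D / eps) * (2 * D**2 + 4 * D * eps + 4 * eps**2))
```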
In order to proceed further with the construction of the supersolution, we
will use repeatedly the distance function and its relation to the curvature of
the domain’s boundary. All the properties of this function can be found in the
Appendix “Boundary curvatures and distance functions” in [9]. It is well-known
that if the boundary $\partial\Omega$ is $C^{3}$, then in a neighborhood of
the boundary the distance function can be expanded by Taylor as
$d(\eta)=d(x)+\nabla
d(x)\cdot(\eta-x)+\frac{1}{2}\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)+\mathcal{O}\left(\left|\eta-x\right|^{3}\right)$
(4.4)
Moreover, the following proposition holds.
###### Proposition 4.1.
For $x\in\Omega$ in a neighborhood of the boundary the gradient of the
distance function is the inner normal, so that $\left|\nabla d(x)\right|=1$.
Moreover, denoting $R=\min_{x\in\partial\Omega}R(x)>0$ the minimal radius of
curvature and letting $\mu\in(0,1)$ we have
$\xi^{\top}\nabla^{2}d(x)\xi\leq\frac{1}{(1-\mu)R}$ (4.5)
for every $x\in\left\\{y\in\Omega:\;d(y)<R\mu\right\\}$ and
$\Arrowvert\xi\Arrowvert=1$.
###### Proof.
See 14.6, Appendix “Boundary curvatures and distance functions” ([9]). ∎
Using these properties of the distance function we can prove the next lemma.
###### Lemma 4.2.
Let
$\psi(x):=\left(1-\frac{\gamma}{1+\left(\frac{d(x)}{\varepsilon}\right)^{2}}\right)\wedge\left(1-\frac{\gamma}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}\right)$. Then there exists some $0<\mu<1$ small
enough, $0<\gamma(\mu)<\frac{1}{3}$, $0<\varepsilon_{1}<1$ small enough and
constants $C_{0}:=C_{0}(R,\Omega,\mu,\gamma)>0$ and $c:=c(R,\mu,\gamma)>0$
such that for all $0<\varepsilon\leq\varepsilon_{1}$
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\psi\right)(x)\geq\begin{cases}C_{0}e^{-\frac{d(x)}{\varepsilon}}&0<d(x)\leq\frac{R\mu}{2}\\\
-c\varepsilon^{2}&\frac{R\mu}{2}<d(x)<R\mu\\\ 0&d(x)\geq R\mu\\\ \end{cases}$
(4.6)
###### Proof.
We start with some preliminary consideration on the distance function. We
define $\frac{d(\eta)}{\varepsilon}:=d_{\varepsilon}\left(\eta\right)$. For
every $x,\eta\in\left\\{y\in\Omega:\;d(y)<R\mu\right\\}$ we have using (4.4)
$\begin{split}d_{\varepsilon}\left(\eta\right)^{2}=&d_{\varepsilon}\left(x\right)^{2}+\frac{2d(x)\nabla
d(x)\cdot\left(\eta-x\right)}{\varepsilon^{2}}+\frac{d(x)\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)}{\varepsilon^{2}}\\\
&+\frac{\left(\nabla
d(x)\cdot(\eta-x)\right)^{2}}{\varepsilon^{2}}+\mathcal{O}\left(\frac{d(x)}{\varepsilon^{2}}\left|\eta-x\right|^{3}\right).\end{split}$
(4.7)
Then Taylor’s expansion shows
$\begin{split}\frac{1}{1+d_{\varepsilon}\left(\eta\right)^{2}}=&\frac{1}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)\left(1+\left[d_{\varepsilon}\left(\eta\right)^{2}-d_{\varepsilon}\left(x\right)^{2}\right]\frac{1}{1+d_{\varepsilon}\left(x\right)^{2}}\right)}\\\
=&Q_{\varepsilon}^{(1)}(x,\eta)+Q_{\varepsilon}^{(2)}(x,\eta)+Q_{\varepsilon}^{(3)}(x,\eta),\\\
\end{split}$ (4.8)
where the terms $Q_{\varepsilon}^{(i)}$ are defined as follows.
$Q_{\varepsilon}^{(1)}(x,\eta)=\frac{1}{1+d_{\varepsilon}\left(x\right)^{2}}-\frac{2d(x)\nabla
d(x)\cdot\left(\eta-x\right)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}},$
$Q_{\varepsilon}^{(2)}(x,\eta)=-\frac{d(x)\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}-\frac{\left(\nabla
d(x)\cdot(\eta-x)\right)^{2}}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}+\frac{4d^{2}(x)\left(\nabla
d(x)\cdot(\eta-x)\right)^{2}}{\varepsilon^{4}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}},$
$Q_{\varepsilon}^{(3)}(x,\eta)=\mathcal{O}\left(\frac{d(x)}{\varepsilon^{2}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\right)+\mathcal{O}\left(\frac{d(x)}{\varepsilon^{4}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}}\right).$
We consider now the function $\psi(x)$ defined in the statement of Lemma 4.2.
We take $M=\frac{1}{\mu^{2}}$ for $0<\mu<1$ small enough and $0<\varepsilon<1$
also small enough such that $0<M\varepsilon<\frac{R\mu}{2}$, i.e.
$0<\varepsilon<\frac{R\mu^{3}}{2}$, and we decompose $\Omega$ in four disjoint
sets
$\Omega=\left\\{d(x)\geq
R\mu\right\\}\cup\left\\{d(x)<M\varepsilon\right\\}\cup\left\\{M\varepsilon\leq
d(x)\leq\frac{R\mu}{2}\right\\}\cup\left\\{\frac{R\mu}{2}<d(x)<R\mu\right\\}.$
We proceed estimating $\mathcal{L}_{\Omega}^{\varepsilon}(\psi)(x)$ for $x$ in
each of these regions of $\Omega$.
Figure 4: Decomposition of $\Omega$.
For further reference we write
$\begin{split}\mathcal{L}^{\varepsilon}_{\Omega}\left(\psi\right)(x)=&\psi(x)-\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta
K_{\varepsilon}(\eta-x)\left(1-\frac{\gamma}{1+d_{\varepsilon}\left(\eta\right)^{2}}\right)\\\
&-\int_{\Omega\cap\\{d(\eta)\geq R\mu\\}}d\eta
K_{\varepsilon}(\eta-x)\left(1-\frac{\gamma}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}\right).\end{split}$ (4.9)
In order to estimate $\mathcal{L}_{\Omega}^{\varepsilon}(\psi)(x)$ in the
region $\\{d(x)\geq R\mu\\}$ we will use the fact that the minimum of
supersolutions is again a supersolution. In the region where
$d(x)<M\varepsilon$ we will use the explicit form of the kernel to see that
the main contribution has the right sign. Finally, in the region
$\\{M\varepsilon\leq d(x)<R\mu\\}$ the idea behind the arguments we present is
that $\mathcal{L}_{\Omega}^{\varepsilon}(\psi)(x)$ can be approximated by
$-\varepsilon^{2}\Delta\psi(x)$ via a Taylor expansion.
Step 1: $\\{d(x)\geq R\mu\\}$
First of all we notice that if $d(x)\geq R\mu$ then
$\mathcal{L}^{\varepsilon}_{\Omega}(\psi)(x)\geq 0$. Indeed,
$\psi(\eta)\leq\psi(x)=1-\frac{\gamma}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}$ in the first integral of (4.9) since
$d(\eta)<R\mu$ there. Hence
$\mathcal{L}^{\varepsilon}_{\Omega}\left(\psi\right)(x)\geq\mathcal{L}^{\varepsilon}_{\Omega}\left(1-\frac{\gamma}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}\right)\geq 0.$ (4.10)
Step 2: $\\{d(x)<M\varepsilon\\}$
We consider now the region $\\{d(x)<M\varepsilon\\}$. After a suitable rigid
motion we can assume $0\in\partial\Omega$ and $x=(d(x),0,0)$. Hence,
$\Omega\subset\mathbb{R}_{+}\times\mathbb{R}^{2}$ and
$\begin{split}\int_{\Omega^{c}}\frac{e^{-\frac{\left|{\eta-x}\right|}{\varepsilon}}}{4\pi\varepsilon\left|{\eta-x}\right|^{2}}\;d\eta&\geq\int_{-\infty}^{-d(x)/\varepsilon}K\left(\eta\right)\;d\eta\geq\int_{-\infty}^{-M}K\left(\eta\right)\;d\eta:=\nu_{M}>0.\end{split}$
Here $K$ is, as usual, the normalized exponential-integral kernel. On the other hand, using
that $\frac{1}{1+d_{\varepsilon}\left(x\right)^{2}}\leq 1$ and choosing
$\gamma<\frac{\nu_{M}}{2}$ we can conclude
$\begin{split}\mathcal{L}^{\varepsilon}_{\Omega}\left(\psi\right)(x)=&-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}+\int_{\Omega^{c}}d\eta
K_{\varepsilon}(\eta-x)+\gamma\int_{\Omega}d\eta
K_{\varepsilon}(\eta-x)\left(\frac{1}{1+d_{\varepsilon}\left(\eta\right)^{2}}\vee\frac{1}{1+\left(\frac{\mu
R}{\varepsilon}\right)^{2}}\right)\\\
\geq&\frac{\nu_{M}}{2}\geq\frac{\nu_{M}}{2}e^{-d_{\varepsilon}\left(x\right)},\end{split}$
(4.11)
where $a\vee b=\max(a,b)$.
Step 3: $\left\\{M\varepsilon\leq d(x)\leq\frac{R\mu}{2}\right\\}$
We consider now the set $\left\\{M\varepsilon\leq
d(x)\leq\frac{R\mu}{2}\right\\}$. As first step we plug (4.8) into the right
hand side of (4.9). To this end we define three integral terms
$J_{1},\;J_{2},\;J_{3}$ as
$\begin{split}J_{1}=&1-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}-\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right)\\\ &-\int_{\Omega\cap\\{d(\eta)\geq
R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\frac{\gamma}{1+\frac{R^{2}\mu^{2}}{\varepsilon^{2}}}\right),\end{split}$
(4.12)
$J_{2}=\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(\gamma
Q_{\varepsilon}^{(2)}(x,\eta)\right),$ (4.13)
$J_{3}=\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(\gamma
Q_{\varepsilon}^{(3)}(x,\eta)\right).$ (4.14)
Hence, we have
$\begin{split}\mathcal{L}_{\Omega}^{\varepsilon}(\psi)(x)=J_{1}+J_{2}+J_{3}.\end{split}$
(4.15)
The main contribution to these terms is due to $J_{2}$. Therefore we start
with this term and we show that for $0<\mu<1$ small enough there exists a
constant $\tilde{C}(\mu)>0$ independent of $\varepsilon$ such that
$\begin{split}J_{2}\geq\frac{\tilde{C}(\mu)\gamma}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}.\end{split}$
(4.16)
In order to prove this estimate we first notice that
$\begin{split}\frac{4d_{\varepsilon}\left(x\right)^{2}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}-1=&3-\frac{4}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\geq
3-\frac{4}{\left(1+M^{2}\right)}\geq 0.\end{split}$ (4.17)
Hence, multiplying this inequality by
$K_{\varepsilon}(\eta-x)\frac{\gamma\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}$
and integrating on $\\{d(\eta)<R\mu\\}$ we obtain
$\begin{split}\int_{\Omega\cap\\{d(\eta)<R\mu\\}}&d\eta\;K_{\varepsilon}(\eta-x)\left(-\frac{\gamma\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}+\frac{4\gamma
d^{2}(x)\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{4}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}}\right)\\\
\geq&\frac{\gamma\left(3-\frac{4}{1+M^{2}}\right)}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\int_{B_{M\varepsilon}(x)}d\eta\;K_{\varepsilon}(\eta-x)\frac{\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{2}}\\\
=&\frac{\gamma\left(3-\frac{4}{1+M^{2}}\right)}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\frac{1}{4\pi}\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}d\theta\sin(\theta)\cos^{2}(\theta)\int_{0}^{M}dr\>e^{-r}r^{2}=\frac{\gamma
C(M)\left(3-\frac{4}{1+M^{2}}\right)}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}},\end{split}$
(4.18)
where we used that $B_{M\varepsilon}(x)\subset\\{d(\eta)<R\mu\\}$ and we define
the constant
$C(M)=\frac{1}{3}\int_{0}^{M}dr\;e^{-r}r^{2}=\frac{1}{3}(2-2e^{-M}-2Me^{-M}-M^{2}e^{-M})$
which depends on $M=\frac{1}{\mu^{2}}$. Notice that $C(M)\to\frac{2}{3}$ as
$M\to\infty$ and hence for $M$ sufficiently large we have also
$C(M)\geq\frac{1}{2}$.
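For the reader's convenience, the closed form of $C(M)$ can be checked directly: the angular integral contributes a factor $\frac{1}{3}$ and the radial integral follows from one integration by parts,
$\frac{1}{4\pi}\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}d\theta\sin(\theta)\cos^{2}(\theta)=\frac{2\pi}{4\pi}\cdot\frac{2}{3}=\frac{1}{3},\qquad\int_{0}^{M}dr\;e^{-r}r^{2}=\left[-(r^{2}+2r+2)e^{-r}\right]_{0}^{M}=2-(M^{2}+2M+2)e^{-M},$
which, multiplied by $\frac{1}{3}$, reproduces the expression for $C(M)$ above.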
In order to conclude the estimate for $J_{2}$ we use the result (4.5) to
estimate the Hessian of the distance function, thus
$\frac{\gamma
d(x)\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\leq\frac{\gamma\mu\left|\eta-x\right|^{2}}{\varepsilon^{2}(1-\mu)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}$
(4.19)
and we conclude
$-\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\frac{\gamma
d(x)\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\geq-C\frac{\gamma\mu}{(1-\mu)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}},$
(4.20)
for some constant $C>0$.
Combining (4.18) and (4.20) we obtain (4.16).
We proceed now with the term $J_{1}$ in (4.12). Using the symmetry of the
scalar product in $\mathbb{R}^{3}$ we write
$\begin{split}J_{1}=&\int_{\Omega^{c}}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right)+\int_{\Omega\cap\\{d(\eta)\geq
R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(\frac{\gamma}{1+\frac{R^{2}\mu^{2}}{\varepsilon^{2}}}-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right)\\\ =&J_{1,1}+J_{1,2}.\end{split}$ (4.21)
We proceed with the estimate for $J_{1,1}$ in (4.21). By means of a suitable
coordinate system we can assume again $0\in\partial\Omega$ and $x=(d(x),0,0)$.
We notice that if $\eta\in\left(-\infty,d(x)\right)\times\mathbb{R}^{2}$ then
$\nabla d(x)\cdot\left(\eta-x\right)=\eta_{1}-d(x)\leq 0$, while if
$\eta\in\left(d(x),\infty\right)\times\mathbb{R}^{2}$ then $\nabla
d(x)\cdot\left(\eta-x\right)\geq 0$. Hence, we obtain
$\begin{split}J_{1,1}\geq&\int_{\Omega^{c}\cap\left(-\infty,d(x)\right)\times\mathbb{R}^{2}}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right).\\\ \end{split}$ (4.22)
We now decompose the set
$\Omega^{c}\cap\left(\left(-\infty,d(x)\right)\times\mathbb{R}^{2}\right)=\left(\left(-\infty,0\right)\times\mathbb{R}^{2}\right)\cup\left(\Omega^{c}\cap\left(\left(0,d(x)\right)\times\mathbb{R}^{2}\right)\right)$.
Using that
$\frac{d(x)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}=\frac{1}{d(x)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}-\frac{1}{d(x)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}$
(4.23)
and since $\gamma<\frac{1}{3}$ we have
$1-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}>0$ and therefore we
obtain
$\begin{split}\int_{\left(-\infty,0\right)\times\mathbb{R}^{2}}&d\eta\;K_{\varepsilon}(\eta-x)\left(1-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right)\geq\int_{\left(-\infty,0\right)\times\mathbb{R}^{2}}d\eta\;K_{\varepsilon}(\eta-x)\frac{2\gamma\nabla
d(x)\cdot\left(\eta-x\right)}{d_{\varepsilon}\left(x\right)\varepsilon\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\\\
=&-\frac{2\gamma}{d_{\varepsilon}\left(x\right)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\int_{d_{\varepsilon}\left(x\right)}^{\infty}dz\;K\left(z\right)z\geq-\frac{\gamma}{2d_{\varepsilon}\left(x\right)}\frac{1+d_{\varepsilon}\left(x\right)}{1+d_{\varepsilon}\left(x\right)^{2}}e^{-d_{\varepsilon}\left(x\right)}\\\
\geq&-\frac{\gamma
C}{M}\frac{1}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}},\end{split}$
(4.24)
where we also changed variables $(d_{\varepsilon}\left(x\right)-z)\mapsto z$, used the identity (2.11) for the normalized exponential integral in Proposition 2.2, estimated $d_{\varepsilon}\left(x\right)\geq M$ and finally denoted by $C$ the constant such that $\frac{(1+x^{2})^{2}}{2}e^{-|x|}\leq C$.
Concerning the integral in the set
$\Omega^{c}\cap\left(\left(0,d(x)\right)\times\mathbb{R}^{2}\right)$ we
proceed similarly, using again (4.23) and also the fact that if $\eta_{1}>0$ then $\eta_{1}-d(x)>-d(x)$. Hence, we have
$\begin{split}&\int_{\Omega^{c}\cap\left(\left(0,d(x)\right)\times\mathbb{R}^{2}\right)}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\gamma
Q_{\varepsilon}^{(1)}(x,\eta)\right)\\\
\geq&\int_{\Omega^{c}\cap\left(\left(0,d(x)\right)\times\mathbb{R}^{2}\right)}d\eta\;K_{\varepsilon}(\eta-x)\left(1-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}+\frac{2\gamma\nabla
d(x)\cdot\left(\eta-x\right)}{d_{\varepsilon}\left(x\right)\varepsilon\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\right)\\\
=&\int_{\Omega^{c}\cap\left(\left(0,d(x)\right)\times\mathbb{R}^{2}\right)}d\eta\;K_{\varepsilon}(\eta-d(x)e_{1})\left(1-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}+\frac{2\gamma(\eta_{1}-d(x))}{d(x)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\right)\geq 0.\end{split}$ (4.25)
Hence, for $M\varepsilon\leq d(x)<R\mu$ and $\gamma<\frac{1}{3}$ we can
summarize
$J_{1,1}\geq-\frac{\gamma}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\frac{C}{M}.$
(4.26)
###### Remark.
Notice that the estimates (4.22)-(4.26) are valid in the whole region
$\\{M\varepsilon\leq d(x)<R\mu\\}$.
We still have to consider the integral $J_{1,2}$ in (4.21). We notice that for
all $\eta\in\Omega$ with $d(\eta)\geq R\mu$ we have on the one hand
$\left|\eta-x\right|\geq\frac{R\mu}{2}$ and on the other hand $\nabla
d(x)\cdot(\eta-x)\geq 0$ since $d(\eta)>d(x)$. We recall that
$D:=\textnormal{diam}\left(\Omega\right)$ and that $\Omega\cap\\{d(\eta)\geq
R\mu\\}\subset B_{D}(x)$. Therefore, we estimate
$\begin{split}J_{1,2}\geq&-\int_{\Omega\cap\\{d(\eta)\geq
R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}\geq-\frac{\gamma
e^{-\frac{R\mu}{2\varepsilon}}}{1+d_{\varepsilon}\left(x\right)^{2}}\int_{B_{D}(0)}dz\;\frac{1}{4\pi\varepsilon|z|^{2}}\\\
\geq&-\gamma\frac{e^{-\frac{d_{\varepsilon}\left(x\right)}{2}}}{1+d_{\varepsilon}\left(x\right)^{2}}\frac{4D}{R\mu}\geq-\gamma
C\frac{D}{R}\frac{\mu}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\end{split}$
(4.27)
where we used the well-known estimate $xe^{-x}\leq e^{-1}$ combined with $e^{-\frac{R\mu}{4\varepsilon}}\leq e^{-\frac{d(x)}{2\varepsilon}}$, denoted by $C$ the constant such that $4x(1+x^{2})e^{-\frac{x}{2}}\leq C$ and finally used the relation $M=\frac{1}{\mu^{2}}$.
Finally we estimate the term $J_{3}$ in (4.14). Here we have to estimate the
integral term containing the error terms $Q_{\varepsilon}^{(3)}(x,\eta)$ of
the Taylor expansion (4.7). If $M\varepsilon\leq d(x)\leq\frac{R\mu}{2}$ and
if $\varepsilon<1$ we use
$\frac{x}{1+x^{2}}=\frac{1}{x}-\frac{1}{x\left(1+x^{2}\right)}$ and we
calculate
$\begin{split}\gamma\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;&K_{\varepsilon}(\eta-x)\left(\frac{d(x)}{\varepsilon^{2}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}+\frac{d(x)}{\varepsilon^{4}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}}\right)\\\
\leq&\int_{\mathbb{R}^{3}}d\eta\;\frac{\gamma
e^{-|\eta|}}{4\pi}\frac{\left|\eta\right|}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\left(d(x)\varepsilon+\frac{1}{\frac{d(x)}{\varepsilon}}-\frac{1}{\frac{d(x)}{\varepsilon}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)}\right)\\\
\leq&\frac{C\gamma}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\left(\frac{R\mu}{2}+\mu^{2}\right).\end{split}$
(4.28)
Hence, also
$J_{3}\geq-\frac{C\gamma}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\left(\frac{R\mu}{2}+\mu^{2}\right)$.
Putting together the estimates (4.16), (4.21), (4.26), (4.27) and (4.28), we conclude the existence of a constant $C(\Omega)>0$ independent of
$\mu,\gamma,\varepsilon$ such that
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\psi\right)(x)\geq\frac{\gamma}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\left[C(M)\left(3-\frac{4}{1+M^{2}}\right)-C(\Omega)\frac{\mu}{1-\mu}\right].$
(4.29)
Choosing $0<\mu<1$ small enough, depending only on $\Omega$, such that
$C(M)>\frac{1}{3}$ and $C(\Omega)\frac{\mu}{1-\mu}<\frac{1}{6}$ we obtain
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\psi\right)(x)\geq\frac{\gamma}{6\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\geq
Ce^{-\frac{d(x)}{\varepsilon}}$ (4.30)
for $M\varepsilon\leq d(x)\leq\frac{R\mu}{2}$ and some constant $C$ depending
on $\Omega$, $R$, $\gamma$, $\mu$ but independent of $\varepsilon$.
Step 4: $\left\\{\frac{R\mu}{2}<d(x)<R\mu\right\\}$
It remains to calculate the behavior of
$\mathcal{L}_{\Omega}^{\varepsilon}(\psi)$ when $\frac{R\mu}{2}<d(x)<R\mu$.
Here, we show that there exists a constant $c(R,\mu,\gamma)$ such that
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\psi\right)(x)\geq-c\varepsilon^{2}$.
We can use several results we obtained in Step 3. We decompose again the
operator $\mathcal{L}_{\Omega}^{\varepsilon}(\psi)(x)=J_{1}+J_{2}+J_{3}$
according to (4.15) using the integral terms defined in (4.12)-(4.14).
First of all (4.17) implies
$\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\left(-\frac{\gamma\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}+\frac{4\gamma
d^{2}(x)\left(\nabla
d(x)\cdot\left(\eta-x\right)\right)^{2}}{\varepsilon^{4}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}}\right)\geq
0$
and hence we estimate $J_{2}$ using (4.19) and (4.20)
$\begin{split}J_{2}\geq&-\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\frac{\gamma
d(x)\left(\eta-x\right)^{\top}\nabla^{2}d(x)\left(\eta-x\right)}{\varepsilon^{2}\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\\\
\geq&-C\frac{\gamma\mu}{(1-\mu)\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\geq-\frac{8\gamma
C}{(1-\mu)R^{3}}\varepsilon^{3},\end{split}$ (4.31)
where we used $1+d_{\varepsilon}\left(x\right)^{2}\geq
d_{\varepsilon}\left(x\right)^{2}\geq\left(\frac{R\mu}{2\varepsilon}\right)^{2}$
and $0<\varepsilon<\frac{R\mu^{3}}{2}$.
We now proceed to estimate $J_{1}$. To this end we use again the decomposition
(4.21). The estimate (4.26) for $J_{1,1}$ is also valid in the region
$\\{\frac{R\mu}{2}<d(x)<R\mu\\}$, as we indicated in the remark after (4.26).
Hence we have for $\varepsilon<\frac{R\mu^{3}}{2}$
$\begin{split}J_{1,1}\geq-\frac{\gamma\mu^{2}C}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}\geq-\frac{8\gamma
C}{R^{2}}\varepsilon^{3}.\end{split}$
Concerning the term $J_{1,2}$ we have to argue slightly differently than in Step
3. Using now the first inequality in (4.27) and $\int_{\mathbb{R}^{3}}d\eta
K_{\varepsilon}(\eta-x)=1$ we compute
$\begin{split}J_{1,2}\geq&-\int_{\Omega\cap\\{d(\eta)\geq
R\mu\\}}d\eta\;K_{\varepsilon}(\eta-x)\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}\geq-\frac{\gamma}{1+d_{\varepsilon}\left(x\right)^{2}}\geq-\frac{4\gamma}{\left(R\mu\right)^{2}}\varepsilon^{2}.\end{split}$
(4.32)
Finally, we estimate $J_{3}$ as defined in (4.14). Arguing as in (4.28) and
using $1+x^{2}\geq x^{2}$ and $0<\varepsilon<\frac{R\mu^{3}}{2}$ we compute
$\begin{split}\int_{\Omega\cap\\{d(\eta)<R\mu\\}}d\eta\;&K_{\varepsilon}(\eta-x)\left(\frac{d(x)}{\varepsilon^{2}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{2}}+\frac{d(x)}{\varepsilon^{4}}\frac{\left|\eta-x\right|^{3}}{\left(1+d_{\varepsilon}\left(x\right)^{2}\right)^{3}}\right)\\\
\leq&\frac{\gamma\left(d(x)^{2}+1\right)}{4\pi\left(d_{\varepsilon}\left(x\right)\right)^{5}}\int_{\mathbb{R}^{3}}d\eta\;e^{-|\eta|}\left|\eta\right|\leq\frac{2\gamma
C\left({R^{2}}+2\right)}{R^{3}}\varepsilon^{2}.\end{split}$ (4.33)
Thus, also $J_{3}\geq-\frac{2\gamma
C\left({R^{2}}+2\right)}{R^{3}}\varepsilon^{2}$.
Hence, (4.31), the estimate for $J_{1,1}$ above, (4.32) and (4.33) imply the existence of a constant
$c(R,\mu,\gamma)>0$ independent of $\varepsilon$ such that
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\psi\right)(x)\geq-c\varepsilon^{2}$
(4.34)
for all $\frac{R\mu}{2}<d(x)<R\mu$.
We now summarize the results. Equations (4.10), (4.11), (4.30) and (4.34) imply the claim in (4.6). We remark that $\mu$, $\gamma$ and $\varepsilon_{1}$ are chosen as follows: first $\mu$ is chosen according to Step 3 as in (4.29), then $\gamma$ is taken according to Step 2 such that $0<\gamma<\frac{\nu_{M}}{2}$, and finally $\varepsilon_{1}$ satisfies $0<\varepsilon_{1}<\frac{R\mu^{3}}{2}$. This concludes the proof of Lemma 4.2. ∎
Using Lemmas 4.1 and 4.2 we can now prove Theorem 4.2.
###### (Proof of Theorem 4.2).
Let $C_{1}$ be the constant defined in Lemma 4.1 and let
$\gamma,\;\mu,\;C_{0},\;c$ be as in Lemma 4.2. We define
$C_{2}:=\frac{1}{C_{0}}$ and $C_{3}:=\frac{C_{0}+c}{2C_{0}}>\frac{1}{2}$.
Notice that all these constants are independent of $\varepsilon$. Hence, Lemmas 4.1 and 4.2 imply
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\Phi^{\varepsilon}\right)(x)\geq\Arrowvert
g\Arrowvert_{1}\begin{cases}e^{-\frac{d(x)}{\varepsilon}}+2C_{3}\varepsilon^{2}&0<d(x)\leq\frac{R\mu}{2},\\\
\varepsilon^{2}&\frac{R\mu}{2}<d(x)<R\mu,\\\ 2C_{3}\varepsilon^{2}&d(x)\geq
R\mu,\\\ \end{cases}\geq\Arrowvert
g\Arrowvert_{1}\begin{cases}e^{-\frac{d(x)}{\varepsilon}}&0<d(x)\leq\frac{R\mu}{2},\\\
\varepsilon^{2}&\frac{R\mu}{2}<d(x)<R\mu,\\\ \varepsilon^{2}&d(x)\geq R\mu.\\\
\end{cases}$ (4.35)
We define now $\varepsilon_{0}:=\min\left\\{1,\;a,\varepsilon_{1}\right\\}$
with $a$ such that $2a\ln(\frac{1}{a})<\frac{R\mu}{2}$ and $\varepsilon_{1}>0$
as in Lemma 4.2. Then $\varepsilon^{2}\geq e^{-\frac{R\mu}{2\varepsilon}}\geq
e^{-\frac{d(x)}{\varepsilon}}$ for all $d(x)>\frac{R\mu}{2}$.
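Indeed, taking logarithms, the inequality $\varepsilon^{2}\geq e^{-\frac{R\mu}{2\varepsilon}}$ is equivalent to $2\varepsilon\ln\left(\frac{1}{\varepsilon}\right)\leq\frac{R\mu}{2}$, and since $s\mapsto s\ln\left(\frac{1}{s}\right)$ is increasing on $(0,e^{-1})$, the choice of $a$ (which we may take smaller than $e^{-1}$) guarantees this for every $0<\varepsilon\leq\varepsilon_{0}$.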
We now apply the maximum principle in Theorem 4.1 to the function
$\Phi^{\varepsilon}-u^{\varepsilon}$. This function satisfies the continuity
and boundedness assumptions. Indeed, for any $\varepsilon>0$ the function
$u^{\varepsilon}$ is continuous and bounded as we have seen at the beginning
of Section 4.1. Moreover, by construction $\Phi^{\varepsilon}$ is continuous
and it is easy to see that it is even uniformly bounded since
$0\leq\Phi^{\varepsilon}(x)\leq\Arrowvert
g\Arrowvert_{1}\left(2C_{3}C_{1}+C_{2}\right).$
We also have
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\Phi^{\varepsilon}-u^{\varepsilon}\right)(x)\geq\Arrowvert
g\Arrowvert_{1}e^{-\frac{d(x)}{\varepsilon}}-\int_{0}^{\infty}d\nu\int_{n\cdot
N_{x_{\Omega}}<0}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Omega}(x,n)\right|}{\varepsilon}}\geq
0,$
since $\left|x-x_{\Omega}(x,n)\right|\geq d(x)$. Hence, Theorem 4.1 implies
that $\Phi^{\varepsilon}-u^{\varepsilon}\geq 0$ and thus
$0\leq u^{\varepsilon}\leq\Phi^{\varepsilon}\leq\tilde{C}<\infty$
uniformly in $\varepsilon$ and $x\in\Omega$. ∎
### 4.3 Estimates of $u^{\varepsilon}-\overline{u}$ near the boundary
$\partial\Omega$
In this subsection we will prove that for each point $p\in\partial\Omega$ the
function $\overline{u}$ defined in (2.8) is a good approximation of
$u^{\varepsilon}$ in a neighborhood of size close to
$\varepsilon^{\frac{1}{2}}$. Notice that this neighborhood is much greater
than the region of size $\varepsilon$. We will do it by means of the maximum
principle in Theorem 4.1. Now we start estimating the action of the operator
$\mathcal{L}_{\Omega}^{\varepsilon}$ on $\overline{u}-u^{\varepsilon}$.
###### Lemma 4.3.
Let $p\in\partial\Omega$ and let $\mathcal{R}_{p}$ be the isometry defined in
(1.12). Then the following holds for $x\in\Omega$, for $\delta>0$ sufficiently small and independent of $\varepsilon$, and for a suitable $0<A<1$ and a constant $C>0$:
$\left|\mathcal{L}_{\Omega}^{\varepsilon}\left(\overline{u}\left(\frac{\mathcal{R}_{p}(\cdot)\cdot
e_{1}}{\varepsilon},p\right)-u^{\varepsilon}\right)(x)\right|\leq
Ce^{-\frac{Ad(x)}{\varepsilon}}\begin{cases}\varepsilon^{\delta}&\text{ if
}|x-p|<\varepsilon^{\frac{1}{2}+2\delta},\\\ 1&\text{ if
}|x-p|\geq\varepsilon^{\frac{1}{2}+2\delta}.\end{cases}$ (4.36)
###### Proof.
Let us denote by $\Pi_{p}$ the half space
$\Pi_{p}:=\mathcal{R}_{p}^{-1}\left(\mathbb{R}_{+}\times\mathbb{R}^{2}\right)$.
Then the function
$\overline{U}_{\varepsilon}(x,p):=\overline{u}\left(\frac{\mathcal{R}_{p}(x)\cdot
e_{1}}{\varepsilon},p\right)$ is a continuous bounded function which maps
$\Pi_{p}\times\partial\Omega$ to $\mathbb{R}_{+}$. Notice that
$\overline{U}_{\varepsilon}(x,p)$ is the solution to the planar equation (2.8)
before rescaling and rotating. Our plan is to approximate
$\mathcal{L}_{\Omega}^{\varepsilon}\left(\overline{U}_{\varepsilon}\right)$ by
$\mathcal{L}_{\Pi_{p}}^{\varepsilon}\left(\overline{U}_{\varepsilon}\right)$.
Let $x\in\Pi_{p}$ and $p\in\partial\Omega$. Using the definition of $\overline{u}$ in
(2.8) we can compute
$\begin{split}\int_{0}^{\infty}d\eta\;&K\left(\eta-\frac{\mathcal{R}_{p}(x)\cdot
e_{1}}{\varepsilon}\right)\overline{u}(\eta,p)=\int_{\mathbb{R}_{+}\times\mathbb{R}^{2}}d\eta\;\frac{e^{-\left|\eta-\frac{\mathcal{R}_{p}(x)}{\varepsilon}\right|}}{4\pi\left|\eta-\frac{\mathcal{R}_{p}(x)}{\varepsilon}\right|^{2}}\overline{u}\left(\eta_{1},p\right)\\\
&=\int_{\mathbb{R}_{+}\times\mathbb{R}^{2}}d\eta\;\frac{e^{-\frac{\left|\eta-\mathcal{R}_{p}(x)\right|}{\varepsilon}}}{4\pi\varepsilon\left|\eta-\mathcal{R}_{p}(x)\right|^{2}}\overline{u}\left(\frac{\eta_{1}}{\varepsilon},p\right)=\int_{\Pi_{p}}d\eta\;K_{\varepsilon}(\eta-x)\overline{U}_{\varepsilon}(\eta,p),\end{split}$
where the first equality uses the translation invariance of the integral with respect to the second and third variables and the definition of the planar kernel, the second equality uses the change of variables $\tilde{\eta}=\varepsilon\eta$, and in the last identity the change of variables $\tilde{\eta}=\mathcal{R}_{p}^{-1}(\eta)$ gives the result. In order to write the value of
$\mathcal{L}_{\Pi_{p}}^{\varepsilon}\left(\overline{U}_{\varepsilon}\right)$
we use once again equation (2.8) and we define $x_{\Pi_{p}}(x,n)$ as the point
on the boundary of $\Pi_{p}$ with
$\frac{x-x_{\Pi_{p}}(x,n)}{\left|x-x_{\Pi_{p}}(x,n)\right|}=n$, i.e.
$x=x_{\Pi_{p}}(x,n)+\left|x-x_{\Pi_{p}}(x,n)\right|n$ if $n\cdot N_{p}<0$. By
construction we see that $\frac{\mathcal{R}_{p}(x)\cdot e_{1}}{|n\cdot
N_{p}|}=\left|x-x_{\Pi_{p}}(x,n)\right|$. Hence,
$\mathcal{L}_{\Pi_{p}}^{\varepsilon}\left(\overline{U}_{\varepsilon}(\cdot,p)\right)(x)=\int_{0}^{\infty}d\nu\int_{n\cdot
N_{p}<0}dn\;g_{\nu}(n)e^{-\frac{\left|x-x_{\Pi_{p}}(x,n)\right|}{\varepsilon}}.$
# Subaru Hyper Suprime-Cam Survey of Cygnus OB2 Complex - I: Introduction,
Photometry and Source Catalog
Saumya Gupta1, Jessy Jose1, Surhud More2, Swagat R. Das1, Gregory J. Herczeg3,
Manash R. Samal4, Zhen Guo5, Prem Prakash1, Belinda Damian6, Michihiro
Takami7, Satoko Takahashi8,9, Katsuo Ogura10, Tsuyoshi Terai11, Tae-Soo
Pyo11,12
1Indian Institute of Science Education and Research (IISER) Tirupati, Rami
Reddy Nagar, Karakambadi Road, Mangalam (P.O.), Tirupati 517 507, India
2Inter University Centre for Astronomy and Astrophysics, Ganeshkhind, Pune
411007, India
3 Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He
Yuan Lu 5, Haidian Qu, Beijing 100871, China
4Physical Research Laboratory (PRL), Navrangpura, Ahmedabad 380 009, Gujarat,
India
5 Centre for Astrophysics Research, University of Hertfordshire, Hatfield AL10
9AB, UK
6 Christ (Deemed to be University), Bangalore, India
7 Institute of Astronomy and Astrophysics, Academia Sinica 11F of Astronomy-
Mathematics Building, National Taiwan University, Taiwan, R.O.C
8 Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura, Santiago, Chile
9 NAOJ Chile, National Astronomical Observatory of Japan, Alonso de Córdova
3788, Office 61B, Vitacura, Santiago, Chile, 7630492
10 Kokugakuin University, Higashi, Shibuya-ku, Tokyo 150-8440, Japan
11 Subaru Telescope, National Astronomical Observatory of Japan, National
Institutes of Natural Sciences, 650 North Aohoku Place Hilo, HI 96720, USA
12 School of Mathematical and Physical Science, SOKENDAI (The Graduate
University for Advanced Studies), Hayama, Kanagawa 240-0193, Japan
kcsaumya.gupta<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Low mass star formation inside massive clusters is crucial to understand the
effect of cluster environment on processes like circumstellar disk evolution,
planet and brown dwarf formation. The young massive association of Cygnus OB2,
with a strong feedback from massive stars, is an ideal target to study the
effect of extreme environmental conditions on its extensive low-mass
population.We aim to perform deep multi-wavelength studies to understand the
role of stellar feedback on the IMF, brown dwarf fraction and circumstellar
disk properties in the region. We introduce here, the deepest and widest
optical photometry of 1.5∘ diameter region centred at Cygnus OB2 in r2, i2, z
and Y-filters using Subaru Hyper Suprime-Cam (HSC). This work presents the
data reduction, source catalog generation, data quality checks and preliminary
results about the pre-main sequence sources. We obtain 713,529 sources in
total, with detection down to $\sim$ 28 mag, 27 mag, 25.5 mag and 24.5 mag in
r2, i2, z and Y-band respectively, which is $\sim$ 3 - 5 mag deeper than the
existing Pan-STARRS and GTC/OSIRIS photometry. We confirm the presence of a
distinct pre-main sequence branch by statistical field subtraction of the
central 18′ region. We find the median age of the region to be $\sim$ 5 $\pm$ 2 Myrs, with an average disk fraction of $\sim$ 9$\%$. At this age, combined with
AV $\sim$ 6 - 8 mag, we detect sources down to a mass range $\sim$ 0.01 - 0.17
M⊙. The deep HSC catalog will serve as the groundwork for further studies on
this prominent active young cluster.
###### keywords:
stars:low-mass – stars: pre-main-sequence – stars:imaging – methods:
observational – techniques: photometric – catalogues
††pubyear: 2021††pagerange: Subaru Hyper Suprime-Cam Survey of Cygnus OB2
Complex - I: Introduction, Photometry and Source Catalog–C
## 1 Introduction
The complete stellar life cycle is significantly shaped by the stellar mass, which is in turn determined by the less-understood evolutionary stages of star
formation and its related processes (Luhman 2012; Armitage 2015; Manara et al.
2017 and references therein). As low-mass stars ($<$ 1-2 M⊙) spend a comparatively longer time in the early evolutionary stages than their massive counterparts ($>$ 8 M⊙), comprehensive studies of low-mass star formation can
provide useful insight into the interesting underlying processes like
protoplanetary disk formation and evolution (Hartmann 2008; Williams & Cieza
2011; Armitage 2015), brown dwarf formation and the factors affecting them
(Basu 2017; Megeath et al. 2019). Moreover, since most stars form in clusters, the cluster environment plays a crucial role in stellar evolution
and related processes (Sicilia-Aguilar, Aurora et al. 2013; Samal et al. 2015;
Jose et al. 2016; Parker et al. 2021; Damian et al. 2021). For example, disk
evolution has been observed to be affected by various factors like viscous
accretion (Gorti et al. 2015; Ercolano & Pascucci 2017), stellar density
(Winter et al. 2018), and external photoevaporation in diverse harsh environments like the ONC (O’dell et al. 1993), NGC 1977 (Kim et al. 2016) and Cygnus OB2 (Wright et al. 2012; Guarcello et al. 2016; Winter et al. 2019). Another intriguing
question which requires further investigation is the debated uniformity of the Initial Mass Function (IMF) and its behavior in the low-mass and sub-stellar
regime. Although many recent and past studies suggest a uniform IMF across
various star forming regions in the Milky Way (Bastian et al. 2010; Offner et
al. 2014; Moraux 2016; Jose et al. 2017; Damian et al. 2021), variation has
been observed in the extreme environments like the Galactic Center (e.g. Lu et
al. 2013; Hosek et al. 2019), least luminous Milky Way satellites (Geha et al.
2013; Gennaro et al. 2018) and massive elliptical galaxies (van Dokkum &
Conroy 2010; Cappellari et al. 2012).
Since both Galactic and extragalactic star formation principally occurs in clusters and OB associations (e.g. Carpenter 2000; Lada & Lada 2003; Pfalzner et al. 2012), an empirical model for low-mass star formation, developed from inferences drawn from both Galactic and extragalactic studies, is a prerequisite to answer these fundamental questions. However, due to
observational constraints with the current technology, we can only start by
analysing the relatively distant young massive Galactic star forming regions
using powerful observing facilities. The nearby clusters (e.g. the Gould Belt
regions), which are the focus of most of the studies (Dunham et al. 2015; Dzib
et al. 2018; Bobylev & Bajkova 2020; Kubiak et al. 2021; Damian et al. 2021)
are not representative samples of extragalactic star-forming regions,
where most of the star formation occurs in the extreme cluster environments of
giant molecular complexes. Deep and wide-field surveys of distant young massive Galactic clusters are the need of the hour, as such clusters are less dynamically evolved and hence provide a robust sample of stars with a similar formation history in extreme environments (e.g. Portegies Zwart et al.
2010; Longmore et al. 2014). The primary goal of this work is to obtain good
quality deep observations and use them to carry out an elaborate study of
Cygnus OB2, a young massive Galactic cluster with extreme environmental
conditions analogous to that of extragalactic star forming regions.
Cygnus OB2 (Right Ascension: 20:33:15, Declination: +41:18:54), located at
$\sim$ 1.6 kpc (Lim et al. 2019) from the Sun, is a typical analogue of the
extragalactic massive star forming regions located outside the solar
neighborhood. It is the central massive OB association (2 – 10 $\times$ 10${}^{4}$ M⊙, as determined by Knödlseder (2000) and Wright et al. (2010)) embedded in the
giant Cygnus X molecular complex (Schneider et al. 2006; Reipurth & Schneider
2008) and harbors $\sim$ 220 OB-type stars (Comerón & Pasquali 2012; Berlanas
et al. 2020) along with tens of thousands of low mass stars (Albacete Colombo
et al. 2007; Drew et al. 2008; Wright & Drake 2009). The OB2 association has
an estimated age of $\sim$ 3 – 5 Myrs (Drew et al. 2008; Wright et al. 2010,
2015) and is affected by variable extinction, AV ranging between $\sim$ 4 - 8
mag (Wright et al., 2015). With a cluster environment impinged by high energy
radiation from massive OB-stars in the association, Cygnus OB2 is an ideal
laboratory to study the role of stellar feedback on the surrounding low-mass
stellar population in the region. The presence of globules and proplyds (see
Figure 20 in Appendix A for HSC r2-band images of the known proplyds from
Wright et al. (2012)) in the surrounding region (Schneider et al. 2012; Wright
et al. 2012; Schneider et al. 2016) and a reduced circumstellar disk fraction
in the vicinity of massive O-type stars (Guarcello et al., 2016) suggest the
effect of ongoing external photoevaporation on disk evolution. Approximately 1843 candidate young stellar objects (YSOs) have been identified based on their NIR excess properties (Guarcello et al. 2013) within an area of $\sim$ 1∘ $\times$ 1∘ of Cygnus OB2. The GTC-OSIRIS
optical study by Guarcello et al. (2012) covers the central 40′ $\times$ 40′
region of Cygnus OB2, with photometry of the sources reaching down to $\sim$ 25 mag in the r’-band; however, the photometric error exceeds 0.1 mag for $\sim$ 40$\%$ of the total sources in the catalog. Similarly, previous studies
regarding the kinematics, structure as well as mass function of Cygnus OB2 are
confined to stellar masses $\gtrsim$ 1 M⊙ (Wright et al. 2010, 2015; Comerón & Pasquali 2012; Arnold et al. 2020). However, the low-mass regime of the region, covered by $<$ 0.5 M⊙ stars, remains unexplored. Cygnus OB2 is
thus a promising young massive cluster for which deep and wide-field optical
and NIR studies are essential. This paper is a step towards a detailed study
of one of the most massive star forming regions outside the solar
neighbourhood with detections reaching down to the sub-stellar regime ($\leq$
0.07 M⊙).
We present here the deepest (r${}_{2}\sim$ 28 mag) and the widest (1.5∘ diameter; see Figure 1) optical catalog of one of the most massive Galactic star forming regions, i.e. Cygnus OB2, along with a preliminary analysis of a limited area using the presented HSC data. Thanks to the superb wide-field
imaging capabilities of Subaru Hyper Suprime-Cam (HSC), we have obtained high
quality deep optical photometry which gives insight into low-mass star formation, protoplanetary disk evolution and the effect of feedback from massive stars on cluster properties like the Initial Mass Function (IMF), the star formation efficiency and the star-to-brown-dwarf ratio.
This paper is organised as follows: Section 2 describes the Subaru Hyper Suprime-Cam observations, data reduction and catalog generation using the HSC pipeline. Section 3 presents the data quality in terms of the photometry, astrometry and completeness of the HSC data, along with a comparison to the already available optical photometry. In Section 4 we present the data analysis and results, aided by color-magnitude diagrams, age analysis and disk fraction analysis. We then discuss and interpret the results obtained with these data in Section 5, and finally summarise the entire work along with our future plans in Section 6.
## 2 Observations and Data Reduction
### 2.1 HSC Observations
Figure 1: RGB image of the 1.5∘ diameter region centred at Cygnus OB2 (RA: 20:33:15; Dec: +41:18:54) obtained with the r2, i2 and Y-bands of Subaru HSC. The inset white box covers the 40′ $\times$ 40′ (18.6 pc $\times$ 18.6 pc) region observed by the past GTC/OSIRIS observations (Guarcello et al. 2013). The inset green box covers a 1′ $\times$ 1′ region (RA: 20:32:12.7220; Dec: +41:06:58.778), further zoomed in at the right corner of the image, giving a vivid view of the abundance and high resolution of the point stellar sources achieved by our observations of the target region.
Subaru is an 8.2 m class optical-infrared telescope built and operated by the
National Astronomical Observatory of Japan (NAOJ). With an 870-megapixel mosaic CCD camera comprising 116 2k $\times$ 4k CCDs with a pixel scale of $\sim$ 0.17′′, the Hyper Suprime-Cam (HSC) instrument installed at the prime focus of the telescope provides excellent image quality over a wide field
of view (FOV; 1.8 $\deg^{2}$) (Miyazaki et al. 2012; Komiyama et al. 2017;
Furusawa et al. 2017; Miyazaki et al. 2018). We observed a region of 1.5∘
diameter centered at Cygnus OB2 (see Figure 1) with Subaru HSC in 4 broad-band
optical filters, namely r2, i2, z and Y (Kawanomoto et al. 2018), on 17 September 2017 (PI: J. Jose; Program ID: S17B0108N), using EAO (East Asian Observatory) time (this EAO time for the Cygnus OB2 observations was compensatory time given to us for the ToO event GW170817, which happened during our scheduled night). Several long exposure and short exposure frames
(details given in Table 1) were taken to enhance the photometric accuracy of both faint and bright stars. The excellent seeing conditions ($\sim$ 0.5′′ – 0.7′′) atop Mauna Kea during the observations (1.07 $\leq$ airmass $\leq$ 1.35) and the superb optics of the camera, with a focal length of $\sim$ 18320 mm, have effectively enabled the otherwise difficult pairing of a wide field of view with detailed spatial resolution (see Figure 1). The mean FWHM values achieved in the individual HSC filters are indicated in Table 1 and Figure 2 (Left). The achieved FWHM in individual filters is approximately uniform across the observed FOV (Figure 2, Right).
Figure 2: Left: Histograms of the FWHM in each HSC filter, i.e. Y, z,
i2 and r2. Right: Spatial distribution map of FWHM in z-band for the observed
region. The spatial map is obtained by binning the RA and Dec parameter space
into 10′ $\times$ 10′ bins across the entire observed region. The colorbar
indicates the mean FWHM of each bin.
Hitherto, HSC has primarily been used for extra-galactic observations (e.g.
Matsuoka et al. 2019; Ishikawa et al. 2020; Jaelani et al. 2020). However,
there is a dire lack of similar observations in Galactic stellar fields with
HSC. This study is a pioneering work to utilize the powerful and highly
sensitive imaging capabilities of Subaru Hyper Suprime-Cam for observations of
young Galactic star forming regions. A summary of the various procedures
followed and the modifications introduced in the default pipeline parameters
to reduce the observed HSC data is presented below.
Table 1: Details of the short and long exposure frames and the mean FWHM in individual filters.
Filters | HSC-Y | HSC-z | HSC-i2 | HSC-r2
---|---|---|---|---
Exposure Time (short) | 30s $\times$ 5 frames | 25s $\times$ 3 frames | 25s $\times$ 3 frames | 30s $\times$ 3 frames
Exposure Time (long) | 200s $\times$ 3 frames | 300s $\times$ 4 frames | 300s $\times$ 10 frames | 300s $\times$ 16 frames
Mean FWHM | 0.61′′ | 0.68′′ | 0.62′′ | 0.53′′
### 2.2 Data Reduction and Catalog Generation
The observed raw data was downloaded from STARS (Subaru Telescope Archive
System) and reduced with the help of HSC Pipeline version 6.7. The entire
process of data reduction by the HSC pipeline (hscPipe) can be broadly classified into (1) Single-visit Processing, (2) Joint Calibration, (3) Coaddition and (4) Coadd Processing/Multiband Analysis. For details regarding these processes, refer to Bosch et al. (2017) and Aihara et al. (2017, 2019).
The hscPipe initiates the data reduction with single-visit processing. The
detrending of the raw data includes overscan subtraction, bias correction,
dark current subtraction, flat-fielding, and fringe subtraction. The hscPipe
then performs Instrument Signature Removal (ISR) to mask and interpolate the
defects such as bad pixels, cross-talk, and saturated pixels. A few bright
sources short-listed using a 50$\sigma$ threshold value are used as reference
to model the Point Spread Function (PSF) using PSFEx software. The astrometric
and photometric calibration of these sources is performed with respect to the
Pan-STARRS DR1 PV3 reference catalog using the ‘Pessimistic B’ matching algorithm (see https://dmtn-031.lsst.io/#pessimism-of-the-algorithm for details). We discard the default ‘Optimistic B’ algorithm as it is well-suited for low-density fields like extragalactic realms but has failure modes in comparatively high-density Galactic regions (see https://dmtn-031.lsst.io/), which result in false matches and incorrect astrometry of the detected sources. After performing sky subtraction and source measurements (the source measurement step includes centroiding, shape measurements, aperture corrections, etc.), the previously generated PSF model is used to generate a deeper catalog of stars using a 5$\sigma$ threshold. The above process, including the source extraction with a 5$\sigma$ detection threshold, is performed for each CCD during single-visit processing. An internal calibration is then carried out across different observing shots, termed visits. The astrometric and photometric calibrations are carried out by matching the WCS and flux scale of each visit with the previously generated matched list of bright reference stars, and the corresponding corrections are then applied to each visit.
In the next step, the hscPipe coadds the images from various visits to create
a single deeper image and a PSF model is constructed for the coadded image.
The sky correction applied prior to the coadd process is turned off, as it contaminates our coadded images due to the high amount of nebulosity present in the region. The sky correction applied at this step merely writes a new
background model without modifying the photometry of detected sources. We
coadd the long exposure visits and short exposure visits separately for
individual filters, to obtain precise photometry for some of the bright
sources which get saturated in the long exposure images. Eventually, hscPipe
performs multiband analysis in order to generate the final photometric catalog
for each band. The source extraction is performed again, this time on the
coadded images using 5$\sigma$ threshold value to detect sources and
photometry is subsequently performed on the coadded images in each filter. As
a result of the source extraction, certain above-threshold regions called
footprints are generated each of which, comprises of one or more discrete
sources. These footprints, containing several peaks are merged together across
different filters. The overlapping footprints from different filters are then
combined. Within each of such combined footprints, the peaks close enough to
each other (that is, lying within 0.3′′ of the nearest peak) are merged as one
peak, otherwise are assigned as an individual new peak. This results in
consistent peaks and hence, footprints across individual filters. Each of the
peak corresponds to individual objects. The peaks within individual footprints
are further deblended in individual filters and the total flux is apportioned
into them.
The number of stellar sources detected during image coaddition relies upon the
footprint size as each footprint consists of several blended individual peaks.
The larger the size of the footprint, the more peaks or distinct objects it
can hold. As hscPipe is designed primarily for sparse regions, the default footprint size defined by the pipeline, i.e. 10${}^{6}$ pixels, is insufficient to detect all stellar point sources in a comparatively dense Galactic star forming region like Cygnus OB2. Hence, after performing rigorous checks with several footprint sizes, we finally increased it to 10${}^{10}$ pixels for the i2, z and Y filters. The footprint size is increased further to 10${}^{11}$ pixels for the r2 filter, however, to ensure maximum detection in spite of its high sensitivity to the extensive nebulosity present in the region. The modified footprint sizes in individual filters yield an exhaustive catalog of point sources detected in the images. Finally, hscPipe performs source measurements and photometry for the detected sources, and thus both long exposure and short exposure catalogs are obtained in the r2, i2, z and Y bands. However, these catalogs in individual filters are contaminated with plenty of spurious detections (i.e. detections with no visible source present). Hence, we have applied certain flags and external constraints to eradicate such spurious detections, which we explain in the following section.
### 2.3 Point Source Selection
Table 2: Various flags applied, with their descriptions.
Flagging Condition | Description
---|---
deblend_nchild != 0 | Objects which consist of multiple child${}^{a}$ peaks
deblend_skipped | Objects which contain multiple blended peaks
base_PixelFlags_flag_crCenter | Object overlaps cosmic-ray-contaminated pixels
base_PixelFlags_flag_suspectCenter | Object overlaps a pixel with unreliable linearity correction
base_PixelFlags_flag_saturatedCenter | Object with saturated pixels
base_PixelFlags_flag_interpolatedCenter | Object with pixels interpolated from surrounding pixels
* a
Each individual source peak obtained after deblending a footprint
We apply certain flags (see Table 2) and external constraints to remove the
spurious contamination from the obtained long and short exposure catalogs
(Section 2.2) with minimal loss of genuine point sources in individual
filters. For more details on catalog flags, please refer to Bosch et al.
(2017). Additionally, we select sources with photometric error $\leq$ 0.1 mag
in individual bands for both long and short exposure catalogs. We impose an
additional constraint of internal astrometric error $\leq$ 0.1′′ to remove
spurious sources without any loss of good point sources. This error in
astrometry of a source is with respect to its peak PSF position in different
visits (For more details please refer to Section 3.2). We consider only those
sources which have detection in at least two filters. Since the seeing conditions during our observations varied between 0.5′′ – 0.7′′, we have chosen the upper limit of the seeing, i.e. 0.7′′, as the maximum matching radius, with best match as the match selection criterion, to cross-match the sources between any two bands using the TOPCAT tool (http://www.star.bris.ac.uk/~mbt/topcat), in order to avoid the loss of any genuine counterparts (Mehta et al. 2018; Murata et al. 2020). The cross-matching radius, even if reduced (e.g. 0.5′′) or increased (e.g. 0.8′′ or 1′′), changes the census of genuine sources by at most a few hundred, which is negligible when compared to the total number of detected sources. Similarly, the specified constraints of 0.1 mag in photometric error and 0.1′′ in internal astrometric error have been chosen after checking and discarding several values ranging between 0.07 – 0.2 mag (in photometric error) and 0.08′′ – 0.5′′ (in astrometric error), as these result either in the loss of numerous faint point sources or in an increase of spurious detections by 5–10$\%$.
The availability of both short exposure and long exposure photometry for the
sources has enabled us to deal with the saturated sources effectively. We
consider long exposure photometry in all the bands for those sources with
magnitude in Y-band $>$ 18 mag. In a similar fashion, sources with Y $\leq$ 18
mag are incorporated from short-exposure catalog in all the filters. However,
in addition to this, we also include short exposure photometry for sources
with 18 mag $\leq$ Y $\leq$ 22 mag and without any long exposure photometry
available for them. This is specifically done in order to include the sources
which lie close to bright stars and have been missed in the photometry from
the long exposures. The particular threshold of Y $\leq$ 22 mag is chosen after verifying that the sources with only short exposure photometry and Y $>$ 22 mag are spurious detections, which are hence discarded. This merging of short and long exposure photometry can result in missing or repeated sources near the merging threshold, i.e. Y $=$ 18 mag, and its corresponding counterparts in other filters. Hence, to deal with this, we take an average of the long and short exposure magnitudes for the sources with 17.8 mag $\leq$ Y $\leq$ 18.2 mag and their corresponding counterparts in other filters. An important point to note here is that the long and short exposure photometry is merged on the basis of the threshold values of 18 mag or 22 mag taken in the Y-band and applied to the corresponding counterparts in the other bands. Finally, we perform an internal matching of the sources in the entire output catalog with the upper value of the astrometric uncertainty, i.e. 0.1′′, as the matching radius to avoid any repetition of sources. Any duplicates (0.5$\%$ of the total sources) of already detected sources in the catalog are removed in this step.
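The merging rule above can be summarised by the following minimal sketch, assuming per-source Y-band magnitude arrays aligned between the long and short exposure catalogs, with NaN marking a missing detection; edge cases (and the propagation of the thresholds to the other bands) are deliberately glossed over.

```python
import numpy as np

def merge_long_short_y(y_long, y_short):
    """Sketch of the Y-band long/short exposure merging rule."""
    y_long = np.asarray(y_long, dtype=float)
    y_short = np.asarray(y_short, dtype=float)
    merged = np.full(y_long.shape, np.nan)
    # long-exposure photometry for the fainter sources (Y > 18 mag)
    use_long = (~np.isnan(y_long)) & (y_long > 18.0)
    merged[use_long] = y_long[use_long]
    # short-exposure photometry for the brighter sources (Y <= 18 mag)
    use_short = (~np.isnan(y_short)) & (y_short <= 18.0)
    merged[use_short] = y_short[use_short]
    # keep short-exposure values for 18 < Y <= 22 when long exposure is missing
    rescue = np.isnan(y_long) & (y_short > 18.0) & (y_short <= 22.0)
    merged[rescue] = y_short[rescue]
    # average both exposures near the merging threshold (17.8 <= Y <= 18.2)
    both = (~np.isnan(y_long)) & (~np.isnan(y_short))
    mean = 0.5 * (y_long + y_short)
    near = both & (mean >= 17.8) & (mean <= 18.2)
    merged[near] = mean[near]
    return merged
```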
Figure 3: This flowchart summarizes the external conditions imposed after
applying flags mentioned in Table 2. These conditions ensure the maximum point
source detection and remove spurious sources from both long exposure and short
exposure catalogs separately, obtained after data reduction using hscPipe. The
short and long exposure photometry are then concatenated and merged based on
conditions mentioned above. For details please refer to Section 2.3.
To summarise, the output catalog thus procured includes only those sources which have a detection in at least any 2 filters, photometric error $\leq$ 0.1 mag in all the filters and internal astrometric uncertainty $\leq$ 0.1′′. To
avoid any saturation effect due to bright stars, we incorporate the short
exposure photometry in all the filters (r2, i2, z and Y) as explained above.
The key steps in this process of point source selection are briefly shown as a
flowchart in Figure 3. We have finally secured 713,529 point sources, all of which have at least a 2-band detection. Of these, 699,798 ($\sim 98\%$)
sources have Y-band photometry, 685,511 sources ($\sim$ 96$\%$) have z-band
photometry, 622,011 sources ($\sim$ 90$\%$) have $i_{2}$ band photometry and
358,372 sources ($\sim$ 50$\%$) have $r_{2}$ band photometry. Figure 4
presents a sample of our detections in different bands for a particular 2′ $\times$ 2′ region (RA: 20:34:10.4835; Dec: +40:57:48.783).
Almost all the visible sources, although faint, have been successfully
detected in the final HSC catalog. The adopted approach of selecting genuine
point sources as described in this section has yielded the deepest and the
widest comprehensive optical catalog of one of the most massive regions in the
Galaxy outside the solar neighborhood.
Figure 4: A 2′ $\times$ 2′ area (RA: 20:34:10.4835 Dec: +40:57:48.783) in
different filters is overplotted with sources detected in each individual band
i.e. Top Left: r2-band, Top Right: i2-band, Bottom Left: z-band and Bottom
Right: Y-band.
## 3 Data Quality
In the following sections, we discuss the data quality in terms of the
photometry, astrometry, limiting magnitude of detection, completeness of the
reduced HSC data with respect to the existing Pan-STARRS DR1777downloaded from
https://vizier.u-strasbg.fr/viz-bin/VizieR (Chambers et al. 2019) and
GTC/OSIRIS (Guarcello et al. 2012) optical data. We also perform a comparison
of the obtained HSC photometry with respect to Pan-STARRS DR1 photometry with
the help of magnitude offset plots and check the astrometric offset with
respect to Pan-STARRS DR1 and Gaia EDR3 data (Brown et al. 2016; Gaia
Collaboration et al. 2020).
### 3.1 Photometric Quality
Figure 5: Scatter plots of HSC magnitudes versus error in individual HSC
filters. All the sources have error $\leq$ 0.1 mag. The discontinuity at Y =
18 mag in magnitude-error plot of Y-band (Top Left) is due to the merging of
long and short exposure photometry. Y = 18 mag is taken as the threshold
magnitude for this merging (see Section 2.3 for details on the merging
procedure.). The multiple branches observed in these plots are due to the long
and short exposure photometry merged to obtain the final catalog. Figure 6:
A comparative magnitude versus error scatter plot for HSC (blue) with the
existing photometry from Pan-STARRS (Green) and GTC/OSIRIS (Guarcello et al.,
2012) (Red) in i2-band for an area of 30′ radius centred on Cygnus OB2. The
two branches observed in the HSC i2-band plot correspond to the long and short
exposure photometry.
The error versus magnitude plots shown in Figure 5 for the individual HSC
filters i.e r2, i2, z and Y-filter, summarize the accuracy of the obtained HSC
photometry. The plot illustrates that the photometric error is $\leq$ 0.05 mag
for sources with magnitudes down to $\sim$ 26.0 mag in i2-band, 27.5 mag in
r2-band, 24.7 mag in z and 24.0 mag in Y-band. Approximately 91$\%$ and 95$\%$
of the total sources have a photometric error $\leq$ 0.05 mag in Y and z-band
respectively. Similarly, 93$\%$ of the sources detected in i2-band and almost
90$\%$ of the detected sources in r2-band have an error $\leq$ 0.05 mag. A
comparative error versus magnitude plot is presented in Figure 6 for an area
of 30′ radius centred on Cygnus OB2 to juxtapose the accuracy of HSC
photometry with previous optical studies of the region, such as Pan-STARRS and GTC/OSIRIS. Since GTC/OSIRIS observations are available for a limited
FOV (40′ $\times$ 40′), the chosen area (30′ radius centred at Cygnus OB2)
allows a fair comparison of photometric accuracy among the HSC, Pan-STARRS and
GTC/OSIRIS sources. The maximum detection limit within a photometric error
$\leq$ 0.1 mag attainable with Pan-STARRS and GTC/OSIRIS photometry is $\sim$
22.5 mag–24.0 mag (i-band), which is at least 3 mag shallower when compared to
the high accuracy attained by the HSC photometry down to the very faint sub-
stellar regime (i${}_{2}\sim$ 27.0 mag ; $\leq$ 0.07 M⊙) (see Section 3.3 and
Section 4.1 for details).
Figure 7: Spatial distribution map generated by binning the entire observed
region into 10′ $\times$ 10′ bins in RA and Dec parameter space to signify the
variation of magnitude offset of HSC $i_{2}$-band photometry with respect to
Pan-STARRS DR1 i-band photometry across the area of observations. The colorbar
indicates the mean offset of sources in each bin. Figure 8: Scatter plots for
determining magnitude offset of HSC photometry with respect to Pan-STARRS
photometry in different individual bands. An offset of 0.03$\pm 0.06$ mag,
0.01$\pm 0.07$ mag, 0.01$\pm 0.03$ mag and 0.01$\pm 0.03$ mag is observed in
Y, z, $i_{2}$, $r_{2}$-band respectively for the range of magnitudes marked by
dashed black lines. The marked magnitude ranges have been selected to
calculate the mean magnitude offset in order to avoid the saturation of HSC
photometry towards the brighter end and unreliable photometry of Pan-STARRS
towards fainter end of sources. The blue sources lie within 3$\sigma$ range
from mean offset whereas grey sources lie beyond 3$\sigma$ range from mean
offset.
In order to assess the photometric quality, we check the offset between HSC
and the counterpart Pan-STARRS DR1 photometry in the individual filters. To
compare the photometry, we transformed the Pan-STARRS DR1 photometry from the Pan-STARRS filter system to the HSC system using the equations given in Appendix B. The sources with good quality Pan-STARRS photometry have been selected by applying an error cut-off of $\leq$ 0.05 mag and requiring the number of stack detections to be $>$ 2
(Chambers et al. 2019). We observe a moderate uniformity in the magnitude
offset across the entire region as presented in the spatial distribution map
in Figure 7. Figure 8 shows the scatter plots of the magnitude offset, i.e. HSC magnitude minus Pan-STARRS magnitude, versus HSC magnitude in all HSC filters. A
mean offset of 0.01$\pm$0.07 mag is observed in z-band with respect to the
Pan-STARRS magnitudes. Similarly, other bands i.e r2, i2, Y-band exhibit an
offset of 0.01$\pm$0.03 mag, 0.01$\pm$0.03 mag and 0.03$\pm$0.06 mag
respectively, which agrees well with the offset estimated in other studies
between HSC and Pan-STARRS (Komiyama et al. 2018; Aihara et al. 2019). The
mentioned mean offsets have been calculated for sources within a certain range of magnitudes (marked by dashed black lines in Figure 8) in individual bands, after iteratively discarding, over 5 iterations, the sources lying beyond the 3$\sigma$ level (represented by grey dots in Figure 8).
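The iterative clipping used for these offsets can be sketched as follows; the default magnitude range is an illustrative placeholder, not the range actually adopted in Figure 8.

```python
import numpy as np

def clipped_mean_offset(hsc_mag, ref_mag, bright=16.0, faint=20.0,
                        n_iter=5, nsig=3.0):
    """Iterative 3-sigma-clipped mean of (HSC - reference) magnitudes
    within a chosen magnitude range."""
    hsc_mag = np.asarray(hsc_mag, dtype=float)
    ref_mag = np.asarray(ref_mag, dtype=float)
    sel = (hsc_mag >= bright) & (hsc_mag <= faint)
    diff = hsc_mag[sel] - ref_mag[sel]
    keep = np.ones(diff.size, dtype=bool)
    for _ in range(n_iter):
        mu, sigma = diff[keep].mean(), diff[keep].std()
        keep = np.abs(diff - mu) <= nsig * sigma  # re-clip around new mean
    return diff[keep].mean(), diff[keep].std()
```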
### 3.2 Astrometric Quality
Figure 9: Spatial plots signifying the variation of internal error in Right
Ascension (Left) and Declination (Right) across the entire region. The spatial
maps are obtained by binning the RA and Dec parameter space into 10′ $\times$
10′ bins across the entire observed region. The colorbar indicates the mean
uncertainty in RA (Left) and Dec (Right) of each bin. The observed internal astrometric error ranges between 0.01′′ – 0.03′′, with an almost uniform distribution throughout the region.
We present a graphical interpretation of the high-precision astrometry of point sources in the HSC catalog in Figures 9 and 10. Due to our strict selection criteria (see Section 2.3), all the sources have both $\Delta$ RA and $\Delta$ Dec $\leq$ 0.1′′. This astrometric uncertainty of each source is attributed to the uncertainty in the position of its observed peak flux in different exposures. Hence, the mentioned astrometric error threshold of 0.1′′ is a quality measure of the internal astrometric calibration relative to different visits. The internal astrometric error, mainly ranging between 0.01′′ – 0.03′′, appears to be uniform across the observed region (see Figure 9), with a mean value of $\sim$ 0.016′′ for the detected sources. However, the census of sources decreases rapidly with increasing internal astrometric error (Figure 10, Top Left and Top Right).
Figure 10: Histograms of internal error in Right Ascension (Top Left) and in
Declination (Top Right). Histograms of the observed offset in astrometry of
HSC with respect to Pan-STARRS (blue), astrometry of Pan-STARRS with respect
to Gaia EDR3 (black) and astrometry of HSC with respect to Gaia EDR3 (red) in
Right Ascension is shown in the Bottom left panel and in Declination is shown
in the Bottom Right panel (See Section 3.2 for details).
We perform an additional check of the astrometry of the detected HSC sources
with respect to the external data such as Pan-STARRS DR1 and Gaia EDR3
available for the observed area of Cygnus OB2. The histograms in Figure 10
Bottom Left and Bottom Right show the offset between HSC, Pan-STARRS DR1 and
Gaia EDR3 astrometry in the HSC FOV (1.5∘ diameter region centred at Cygnus
OB2). The absence of any visible offset between HSC and Pan-STARRS astrometry
is attributed to the astrometric calibration performed with respect to Pan-
STARRS PV3 DR1 data to develop a PSF model during the single-visit processing
(refer Section 2.2). However, a mean offset of $\sim 1.9\pm
2^{\prime\prime}\times 10^{-5}$ in Right Ascension and $\sim 6.6\pm
8^{\prime\prime}\times 10^{-6}$ in Declination is observed with respect to the
Gaia EDR3 astrometry for both HSC and Pan-STARRS data and is well in
accordance with the astrometric accuracy estimated in Aihara et al. (2019)
with these two data sets. We also present the spatial distribution of the
astrometric offsets of HSC with respect to the GAIA EDR3 and Pan-STARRS DR1
astrometry in Figure 21 and observe an excellent uniformity throughout the
observed region.
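Schematically, such an external astrometry check amounts to the following sketch; the 0.5′′ match radius is an assumption for illustration, not a value quoted in this work.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def median_offsets_to_gaia(cat_ra, cat_dec, gaia_ra, gaia_dec,
                           radius_arcsec=0.5):
    """Median RA/Dec offsets of catalog positions relative to Gaia,
    computed from the best sky matches within the given radius."""
    c = SkyCoord(ra=cat_ra * u.deg, dec=cat_dec * u.deg)
    g = SkyCoord(ra=gaia_ra * u.deg, dec=gaia_dec * u.deg)
    idx, d2d, _ = c.match_to_catalog_sky(g)
    m = d2d < radius_arcsec * u.arcsec
    # on-sky offset components from each catalog position to its Gaia match
    dra, ddec = c[m].spherical_offsets_to(g[idx[m]])
    return np.median(dra.to(u.arcsec)), np.median(ddec.to(u.arcsec))
```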
### 3.3 Completeness
Figure 11: Top: Histograms representing the detection limit of individual HSC
bands with Black: r2-band; Green: i2-band; Red: z-band and Blue: Y-band. The
limiting magnitudes in individual HSC filters are mentioned in the legend. The
dashed lines and the corresponding magnitudes denote the 90$\%$ completeness
limit attained in individual filters as indicated by the turn-over point
method (see Section 3.3 and Table 3 for details). Bottom: Histogram depicting
the completeness of r2-band (Black) ; i2-band (Green) and z-band (Red) with
respect to the Y-band (Blue) of HSC. Figure 12: Histogram plot representing
the completeness of Pan-STARRS r-band (Green) and GTC/OSIRIS r-band (Red) with
respect to the HSC $r_{2}$-band (Blue) for a comparable common area of 30′
radius centred at Cygnus OB2. The dashed lines represent the corresponding
90$\%$ completeness limits which are found to be 21.5 mag for Pan-STARRS, 23.5
mag for GTC/OSIRIS and 26.5 mag for HSC.
The analysis of the final data gives the 5$\sigma$ limiting magnitude, i.e. the magnitude of the faintest star detectable with our observations in individual HSC filters. The histogram shown in Figure 11 (Top) indicates the detection
limit of HSC photometry in different bands. In-spite of the high amount of
nebulosity and moderate extinction prevalent in Cygnus OB2 (Drew et al. 2008;
Wright et al. 2010; Guarcello et al. 2012), the limiting magnitude reaches down to (with magnitude values rounded off to the nearest 0.2 mag) 28.0 mag in the r2-band, 27.0 mag in the i2-band, 25.5 mag in z and 24.5 mag in the Y-band. At a distance of 1600 pc, an age of $\sim$ 5 $\pm$ 2 Myrs (see Section 4.3) and an average extinction AV ranging between 6 – 8 mag (refer to Section 4.1), the mentioned detection limit of 27.0 mag in the i2-band corresponds to a stellar mass of 0.02 – 0.03 M⊙ (using the isochrones of Baraffe et al. (2015)), i.e. less than the lithium-burning
limit. The final HSC photometry is $\sim$ 90% complete down to 26.5 mag, 25.5
mag, 24.0 mag and 23.5 mag in r2, i2, z and Y-band respectively, as indicated
by the turn-over point in the histogram (denoted by dashed lines in Figure
11). The turnover point in the source counts, used to evaluate the 90$\%$ completeness limit, gives results similar to the artificial star-count method (Jose et al. 2016, 2017; Damian et al. 2021; Das et al. 2021). Since the Y-band
has the highest number of detections, we take it as reference and calculate
the number of counterpart sources in r2, i2 and z-band in each 0.5 mag bin to
assess the completeness of other HSC filters relative to Y-band. The
completeness of the photometry in various filters relative to Y-band attained
by this method is presented in Figure 11 Bottom. We provide a summary of the
useful quality parameters in individual HSC filters, for an age $\sim$ 5 $\pm$
2 Myrs and AV = 6 - 8 mag in the Table 3. The obtained HSC photometry is found
to be deeper by an order of 3 - 5 mag , when compared with the existing Pan-
STARRS and GTC/OSIRIS photometry (limited to $\sim$ 21.5 mag and 23.5 mag,
respectively in r-band), and thus provides a substantial sample of faint low
mass sources in Cygnus OB2.
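A minimal sketch of the turn-over point method, assuming a simple array of magnitudes, is:

```python
import numpy as np

def turnover_completeness(mags, bin_width=0.5):
    """90% completeness limit estimated as the centre of the magnitude bin
    where the source counts peak (the turn-over point of the histogram)."""
    mags = np.asarray(mags, dtype=float)
    mags = mags[np.isfinite(mags)]
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])
```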
Table 3: Details of final HSC catalog in individual filters. (For more
details of the given parameters, please refer to Sections 3.1, 3.2, 3.3 and
4.1)
Filters | HSC-Y | HSC-z | HSC-$i_{2}$ | HSC-$r_{2}$
---|---|---|---|---
Number of sources | 699,798 | 685,511 | 622,011 | 358,372
Fraction of sources $\leq$ 0.05 mag error | 91% | 95% | 93% | 90%
Brightness limit$^{a}$ (mag) | 14.0 | 14.2 | 15.3 | 15.6
Limiting magnitude$^{b,c}$ (mag) | 24.5 | 25.5 | 27.0 | 28.0
Limiting magnitude up to 90% completeness (mag) | 23.5 | 24.0 | 25.5 | 26.5
Limiting mass (in M⊙)$^{d}$ | 0.02-0.03 | 0.03-0.04 | 0.03-0.06 | 0.15-0.30
* a: Magnitude of the brightest object detected
* b: Magnitude of the faintest object detected
* c: Magnitudes rounded off to 0.2 mag
* d: Mass corresponding to the magnitude at 90% completeness, for AV: 6 – 8 mag and age: 5 $\pm$ 2 Myrs.
## 4 Data Analysis and Results
We present here some preliminary analysis based on the HSC data to illustrate
the significance of Cygnus OB2 as an ideal target for low-mass star formation
studies, with the help of a few color-magnitude diagrams (CMDs) presented in
this section. We also perform a statistical field decontamination using a
peripheral control field to obtain a statistical estimate of the member stars,
and use it to obtain the approximate median age and average disk fraction of
the central 18′ region of Cygnus OB2.
### 4.1 Color-Magnitude Diagrams
Figure 13: Hess plot of z-Y vs z Color-Magnitude Diagram (CMD) with HSC
sources detected in the entire area of 1.5∘ diameter centred at Cygnus OB2.
The Hess plot is obtained by binning the color and magnitude parameter space
into bins of size 0.01 mag and 0.03 mag respectively. The black arrow marks
the direction of reddening vector of $A_{V}$ = 6 mag.
Figure 14: Left: i2-Y vs i2 CMD within the central 18′ radius of Cygnus OB2.
Isochrones of age 0.5, 3 and 10 Myr and evolutionary tracks for various masses
(Baraffe et al., 2015), corrected for AV = 6 mag and a distance of 1600 pc,
are shown as solid curves. The previously known YSOs of the complex (Guarcello
et al. 2013) are overplotted as red dots. Right: r2-i2 vs r2 CMD for the same
18′ radius region. The black arrow marks the direction of the reddening vector
for AV = 6 mag.
Color-magnitude diagrams (CMDs) are integral to segregating the cluster
members from foreground and background contaminants (e.g. Jose et al. 2017;
Esplin & Luhman 2020; Damian et al. 2021) and to estimating the age,
temperature and spectral type of member stars in a star-forming cluster. We
present the Hess plot of the z-Y vs z CMD in Figure 13, plotted with our
optical catalog obtained for the entire 1.5∘ diameter area of Cygnus OB2. A
similar i2-Y vs i2 CMD in Figure 14 (Left) and r2-i2 vs r2 CMD in Figure 14
(Right) have been plotted for the sources lying in the central region of 18′
radius. This area has been selected because of the high concentration ($\sim$
50$\%$ of the total) of YSOs (identified previously by Guarcello et al.
(2013)) present in this region. Cygnus OB2 exhibits a distinct pre-main
sequence branch, which is a prominent feature observed in CMDs of young
clusters (Jose et al. 2013; Jose et al. 2017; Panwar et al. 2018; Damiani et
al. 2019; Biazzo et al. 2019; Ksoll et al. 2020; Damian et al. 2021). In order
to analyse the approximate age of the cluster, we over-plot isochrones of age
0.5, 3 and 10 Myr and evolutionary tracks for various masses from Baraffe et
al. (2015) on the i2-Y vs i2 CMD. As per past studies, an extinction of AV =
4 – 5 mag has been observed towards the north-west of Cygnus OB2, along with
AV = 5.5 – 7.0 mag towards the centre and south of the association (Wright et
al. 2015). Hence, we choose a mean extinction of AV = 6.0 mag in order to
redden our isochrones. The isochrones have been reddened using the extinction
laws of Wang & Chen (2019) for the Pan-STARRS filter system, taking AV = 6.0
mag and 1600 pc as the distance of Cygnus OB2 from the Sun (Lim et al. 2019).
The transformation equations (given in Appendix B) have then been used to
convert the obtained magnitudes of the Baraffe isochrones from the Pan-STARRS
filter system to the HSC filter system.
The majority ($\sim 88\%$) of the previously detected YSOs (Guarcello et al.
2013), overplotted as red circles, are located within the 10 Myr isochrone on
the i2-Y vs i2 CMD in Figure 14 (Left) and thus occupy the characteristic
pre-main sequence branch. The source population occupying the young pre-main
sequence branch consists of both cluster members and background contaminants.
We obtain a statistical estimate of the membership in the central 18′ using
the field decontamination process in Section 4.2. The color of these sources
(i.e. i${}_{2}-Y\geq$ 2) reinforces the claim that they constitute the
pre-main sequence population present in the central 18′ radius region of
Cygnus OB2.
Figure 15: The comparative Hess diagrams of r2-Y vs r2 CMDs to emphasize
cluster membership for sources located within (Left) the inner 18′ radius of
Cygnus OB2 (RA: 308.2785; Dec: 41.7477) and (Right) a rectangular region of
the same area towards the outskirts of Cygnus OB2 (RA: 308.2655; Dec:
41.7497). The black arrow marks the direction of reddening vector for AV = 6
mag.
We emphasize the cluster membership of the sources in the pre-main sequence
branch with the aid of a comparative study between an 18′ radius circular
region towards the centre and a rectangular area of equal size towards the
periphery of Cygnus OB2 (RA: 308.2665; Dec: 41.7497), as shown in Figure 15.
The Hess plot of the r2-Y vs r2 CMD (Figure 15 (Left)) is plotted for the
sources in the central 18′ radius region, which is prolific in pre-main
sequence cluster members, and a similar Hess plot is shown in Figure 15
(Right) for the sources lying towards the outskirts of Cygnus OB2. The absence
of a distinct pre-main sequence branch in the CMD of the sources towards the
periphery, as compared to the central region, suggests that the periphery is
mainly populated by non-cluster members in the foreground or background.
Hence, in accord with the literature (Knödlseder 2000; Wright et al. 2010;
Guarcello et al. 2013; Wright et al. 2015; Guarcello et al. 2016), our optical
data analysis indicates that Cygnus OB2 is an active young star formation site
rich in pre-main sequence, low-mass as well as sub-stellar populations, with a
suggested age $\leq$ 10 Myrs.
### 4.2 Field Star Decontamination
The background and foreground contaminants, also termed as field star
contaminants, generally lie in the line of sight of the observed target region
and can overlap with the young pre-main sequence population in the CMDs as
mentioned in Section 4.1. Hence, the identification of cluster members is
particularly crucial for an accurate estimation of various cluster parameters
like age, distance, disk fraction which can otherwise be biased by the
presence of field stars. Although, kinematic parameters like proper motion,
radial velocity and other methods such as spectroscopy and SED analysis
provide the most precise membership identification (Panwar et al. 2017; Dutta
et al. 2018; Herczeg et al. 2019; Bhardwaj et al. 2019; Jose et al. 2020; Das
et al. 2021), such data is available only for a handful of the sources with
Gaia eDR3 counterparts complete down to $\sim$ 20 mag, which is inadequate for
the low mass pre-main sequence members in Cyngus OB2. Hence, a statistical
field star subtraction using an appropriate control field is useful to obtain
a statistical estimate of the probable cluster members down to faint low mass
limits (r2 $\sim$ 28 mag) (eg. Jose et al. 2017; Kaur et al. 2020; Damian et
al. 2021).
We perform the statistical field decontamination for a cluster field of 18′
radius centred at Cygnus OB2, which encloses $\sim$ 50$\%$ of the known YSOs
in the region. In the absence of a control field external to the observed
region, we choose a rectangular control field located towards the outskirts of
the Cygnus OB2 (centred at RA: 308.2655; Dec: 41.7497) of an area equal to
that of the cluster field. This control field is the same as used above for
Figure 15 (Right). We observe a higher source density in the control field as
compared to the cluster field, which may either be due to differences in the
stellar density or be attributed to the different extinction observed in the
two directions. Although the CO maps and mid-IR MSX images from Schneider et
al. (2006) and Reipurth & Schneider (2008) suggest an approximately uniform
extinction across Cygnus OB2, the extinction mapping performed by us using
deep near-IR UKIDSS data (to be discussed in forthcoming work) reveals
moderate differential reddening across the region, with the control field
being less extincted than the cluster field by 1 - 1.5 mag. To address the
stellar density fluctuation, we choose a box in the color-magnitude diagram
where we do not expect to see any pre-main sequence stars in the cluster field
(such as the one shown in Figure 16 (Left)). We scale down the counts in the
color-magnitude diagram of the control field by a constant factor $f$, such
that the number of detected objects in this box is consistent between the
cluster and the control field within Poisson fluctuations. We infer the
posterior distribution of the parameter $f$ using Markov chain Monte Carlo
sampling with the package emcee (Foreman-Mackey et al., 2013). We performed
multiple iterations over several smaller box areas (located over the entire r2
magnitude range and r2 - i2 color $\leq$ 2) in the CMD of the control field,
and obtain a median value of $f$ = 0.73, which is used to scale the bin counts
of the control field in the entire color-magnitude diagram. This scaling
removes the overdensity of sources in the control field, which could otherwise
result in an over-subtraction of sources while performing the field
decontamination of the cluster field.
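To make this step concrete, a minimal sketch of such an inference with emcee
is given below; the box counts, the flat prior range on $f$ and the sampler
settings are hypothetical illustrations, not the values used in our analysis:

```python
# Sketch: infer the scale factor f such that f x (control counts in PMS-free
# CMD boxes) matches the cluster-field counts within Poisson fluctuations.
import numpy as np
import emcee
from scipy.stats import poisson

n_cluster = np.array([120, 95, 143])   # hypothetical counts in PMS-free boxes
n_control = np.array([165, 131, 196])  # matching control-field counts

def log_prob(theta):
    f = theta[0]
    if not 0.0 < f < 2.0:              # flat prior on the scale factor
        return -np.inf
    # cluster counts modelled as Poisson with mean f * control counts
    return poisson.logpmf(n_cluster, f * n_control).sum()

nwalkers, ndim = 16, 1
p0 = 0.7 + 0.05 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
f_samples = sampler.get_chain(discard=500, flat=True)[:, 0]
print(f"median f = {np.median(f_samples):.2f}")
```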
We then perform the field subtraction using the r2-i2 versus r2 CMD, dividing
the color and magnitude parameter space into 0.1 and 0.2 mag bins
respectively. For each bin, we first scale down the count of sources in the
control field and then subtract the control-field count from the cluster-field
count. The resultant count thus obtained is a floating point number which
represents the average number of sources to be selected randomly as the
field-subtracted sources in each bin. Hence, in order to obtain an integer
count, we randomly select an integer value within the Poisson fluctuations of
the average count obtained as a result of the subtraction. The derived integer
count is taken as the number of sources to be selected as field-subtracted
sources in the cluster field per bin. We emphasize here that this field
decontamination is purely statistical and the resultant field-subtracted
sources may not be confirmed members of the cluster. Figure 16 shows the Hess
plots of the r2-i2 versus r2 CMD for the cluster and control fields, along
with that for the field-subtracted sources. We observe that the
field-subtracted sources distinctly occupy the pre-main sequence branch in the
CMD, with a few scattered sources which can be attributed to the statistical
uncertainty in the field decontamination process. We repeated the field
subtraction with another control field located in the outskirts of Cygnus OB2,
and find that the statistics remain comparable within 10$\%$ uncertainty.
Hence, we consider the field-subtracted sources for further analysis to
estimate the median age and disk fraction of the chosen cluster field area, as
described in the following sections.
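A minimal sketch of the per-bin subtraction (with hypothetical toy CMD
histograms; not our exact code) is:

```python
# Sketch: scale the control-field CMD counts by f = 0.73, subtract from the
# cluster-field counts, and draw an integer number of retained sources per bin
# from the Poisson fluctuations of the residual.
import numpy as np

rng = np.random.default_rng(42)
f = 0.73

def field_subtract(cluster_hist, control_hist):
    """cluster_hist, control_hist: 2D arrays of counts per (color, mag) bin."""
    residual = cluster_hist - f * control_hist   # float counts per bin
    residual = np.clip(residual, 0.0, None)      # no negative memberships
    members = rng.poisson(residual)              # integer count per bin
    return np.minimum(members, cluster_hist)     # cannot exceed observed

cluster = rng.poisson(5.0, size=(40, 30))        # toy CMD histograms
control = rng.poisson(4.0, size=(40, 30))
print(field_subtract(cluster, control).sum(), "member candidates")
```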
Figure 16: Hess plots of the r2-i2 versus r2 CMD for (Left) the cluster field,
(Middle) the control field and (Right) the field-subtracted sources. For the
Hess plot of the control field (Middle), the control-field data count per bin
is scaled by the median value of the scale factor, i.e. 0.73. A sample box
area chosen to calculate this scale factor is shown as the white box in the
Hess plot of the cluster field (Left). Several such box areas are considered
to calculate the median value. The white arrow marks the direction of the
reddening vector for AV = 6 mag.
### 4.3 Age distribution of Cygnus OB2
The information about the age of the sources, combined with an estimate of the
disk bearing sources (YSOs) in a cluster is helpful in constraining the star
formation history of the region. However, the age estimation can be biased if
the sample is contaminated with field stars. Hence, we use the statistically
subtracted sources obtained after the field decontamination process, described
above in Section 4.2, to estimate the age of the chosen cluster field area.
However, to eliminate any leftover contaminants due to statistical error in
the field decontamination process which may bias our age estimation, we
consider only those sources with 20.5 mag $\leq$ r2 $\leq$ 26.5 mag, in
accordance with the completeness limit of r2-band. The upper limit of 20.5 mag
corresponds to 1.4 M⊙ source (the upper mass limit in Baraffe isochrones) at
an age $\sim$ 5 Myrs. Since, approximately 90$\%$ of the total field
subtracted sources have mass less than the considered upper limit, it will not
modify our results significantly. To further refine our selection, we define
an empirical pre-main sequence (PMS) locus and select only those sources which
are within 1 $\sigma$ limits of this empirical locus. We refer to these
sources as the selected sources. The PMS locus is obtained by dividing the r2
magnitude range into 0.5 mag bins. For each bin then, we take the mean of the
r2 magnitude and median of the r2 \- i2 color of the sources inside the bin.
This mean magnitude and the median r2 \- i2 color in each magnitude bin thus,
defines the empirical PMS locus (see Damian et al. (2021) for details). The
Figure 17 (Left) shows the Hess plot of r2 \- i2 versus r2 CMD overplotted
with the finally selected sources (red sources) and the empirical PMS locus
(green solid curve) along with the 20 Myr Baraffe isochrone (black dashed
curve). We also present the color distribution in each magnitude bin which
defines the PMS locus in Figure 17 (Right).
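A minimal sketch of the locus construction (with hypothetical array names; not
our exact code) is:

```python
# Sketch: in each 0.5 mag r2 bin take the mean r2 magnitude and the median
# r2 - i2 color; the per-bin color spread gives the 1-sigma selection width.
import numpy as np

def pms_locus(r2, r2i2, bin_width=0.5):
    edges = np.arange(20.5, 26.5 + bin_width, bin_width)
    locus_mag, locus_col, col_sig = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r2 >= lo) & (r2 < hi)
        if sel.sum() == 0:
            continue
        locus_mag.append(r2[sel].mean())        # mean magnitude of the bin
        locus_col.append(np.median(r2i2[sel]))  # median color of the bin
        col_sig.append(r2i2[sel].std())         # 1-sigma color spread
    return np.array(locus_mag), np.array(locus_col), np.array(col_sig)
```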
Figure 17: Left: Hess plot of the r2 - i2 vs r2 CMD of the field-subtracted
members in the central cluster field of 18′ radius of Cygnus OB2. This is
overplotted with the selected sources (red dots), i.e. those within the 1
$\sigma$ limits of the empirical pre-main sequence (PMS) locus (green solid
curve) and with 20.5 mag $\leq$ r2 $\leq$ 26.5 mag. These selected sources are
considered for the age estimation. Also, the 20 Myr Baraffe isochrone
corrected for AV = 6 mag and a distance of 1600 pc is shown as the black
dashed curve. The white arrow marks the direction of the reddening vector for
AV = 6 mag. Right: Histograms of the r2 - i2 color distribution in each r2
magnitude bin of 0.5 mag (the legend in each histogram shows the respective
magnitude bin for which the color distribution is plotted).
We determine the age of these selected sources by fitting the Baraffe
isochrones of various ages (available at intervals of log(t) = 0.01). An age
is then assigned to each source based on its distance to the different
isochrones. Since for any particular age the available isochrone is a set of a
few discrete points (color and magnitude values), an age estimation based on
the distance to these few points can be biased. Hence, we fit these discrete
points with a linear regression model using a fifth-order polynomial to
interpolate the isochrones. This interpolation generates a larger set of
discrete points for any particular age, and the accuracy of the predicted
values (color and magnitude) is $\geq$ 99$\%$ for all the isochrones of
different ages. The interpolation of the isochrones thus helps in improving
the overall accuracy of this age estimation method. We then proceed to find,
for each source, the two nearest isochrones, with ages, say, t1 and t2 and
distances D1 and D2 respectively from the source. The age is then calculated
as the weighted average of the two ages t1 and t2, using the inverses of the
distances D1 and D2 as weights:
$t=\frac{t_{2}D_{1}+t_{1}D_{2}}{D_{1}+D_{2}}$
The weighted average t is thus assigned as the age of the source, and the
process is repeated for all the selected sources. The median age of the
field-decontaminated sources within 18′ is thereby obtained to be 6.5 $\pm$ 5
Myrs. We further converge this distribution to within the 2 $\sigma$ limits
from the mean age of the entire distribution after performing 8 iterations.
The median age of the 2 $\sigma$ converged sample turns out to be 5 $\pm$ 2
Myrs. Figure 18 shows the histogram of the age distribution of the unconverged
sample. Although for the above age calculation we have reddened the Baraffe
isochrones for AV = 6 mag, we derive similar results (median age within 4 – 6
Myrs) for an extinction variation between AV = 4.5 - 7.5 mag (Wright et al.
2010, 2015). This is expected because the reddening vector stays parallel to
the isochrones at optical wavelengths; a variation in the extinction simply
shifts the sources along the isochrones without introducing any significant
modification in the derived ages. Also, the derived age of the region remains
within 4 - 6 Myrs for a distance variation ranging between $\sim$ 1500 - 1700
pc (distance to Cygnus OB2 = 1600 $\pm$ 100 pc (Lim et al. 2019)). Other
possible factors like binarity and optical variability, although they add to
the broadening of the color in CMDs of young star-forming regions, may not
significantly affect the true age spread or cluster parameters like the IMF
(Jose et al. 2017; Damian et al. 2021). The above analysis thus confirms the
median age of the central 18′ region to be consistent with the $\leq$ 10 Myrs
estimated by previous studies (Drew et al. 2008; Wright et al. 2015; Berlanas
et al. 2018).
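A minimal sketch of the age assignment step (with a hypothetical isochrone
data structure; not our exact code) is:

```python
# Sketch: find the two nearest (interpolated) isochrones in the
# (color, magnitude) plane and average their ages with inverse-distance
# weights, t = (t2*D1 + t1*D2) / (D1 + D2).
import numpy as np

def assign_age(color, mag, isochrones):
    """isochrones: dict mapping age -> (N, 2) array of (color, mag) points."""
    dists = {t: np.hypot(pts[:, 0] - color, pts[:, 1] - mag).min()
             for t, pts in isochrones.items()}
    (t1, D1), (t2, D2) = sorted(dists.items(), key=lambda kv: kv[1])[:2]
    return (t2 * D1 + t1 * D2) / (D1 + D2)  # inverse-distance weighted age
```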
Figure 18: Histogram representing the age distribution of the selected sources
(shown as red dots in Figure 17).
### 4.4 Disk Fraction
Circumstellar disk evolution sets the timescale for planet formation and
hence, measuring the disk fraction, that is, the fraction of stars surrounded
by circumstellar disks for a certain cluster age, is an important parameter to
give an insight into the star and planet formation in a young cluster (Haisch
et al. 2001; Williams & Cieza 2011; Helled et al. 2014; Ribas et al. 2014).
Although in a young cluster, disk fraction depends upon various factors such
as the metallicity, stellar density, environmental factors like external and
internal photoevaporation (Yasui et al. 2016; Yasui 2021; Thies et al. 2010;
Guarcello et al. 2016; Reiter & Parker 2019), a general trend of disk fraction
declining with age is observed. It ranges between 60$\%$ \- 80$\%$ for
clusters like NGC 1333 (Ribas et al. 2014), NGC 2023, RCW36 (Richert et al.
2018) with an age $<$ 1 Myr (e.g ) to 5$\%$ \- 10$\%$ for clusters like
LowCent-Crux (Hernández et al. 2007), 25 Orionis (Pecaut & Mamajek 2016) with
age $\sim$ 10 Myrs. In this section we calculate the disk fraction for the
central 18′ region of Cygnus OB2.
In order to calculate the disk fraction, we consider the YSOs previously
identified by Guarcello et al. (2013) within the cluster field area of 18′
radius. These YSOs are complete between 0.7 M⊙ – 2 M⊙ (Guarcello et al. 2013),
which corresponds to 18.5 mag $\leq$ r2 $\leq$ 22.5 mag at a distance of
$\sim$ 1600 pc and AV $\sim$ 6 mag. Hence, for estimating the disk fraction,
we consider only those YSOs with optical counterparts within the mentioned
r2-band magnitude completeness range. The sample used to calculate the disk
fraction thus consists of only those field-subtracted member sources which lie
within the 1 $\sigma$ limit of the pre-main sequence locus (Section 4.3) and
have 18.5 mag $\leq$ r2 $\leq$ 22.5 mag. Figure 19 shows the Hess plot of the
r2 - i2 versus r2 CMD for the field-subtracted sources, overplotted with the
YSOs (red circles) and with the sample selected to calculate the disk fraction
(i.e. the total number of candidate members; white crosses). We find that the
ratio of the number of YSOs to the total number of sources, also termed the
disk fraction, turns out to be $\sim$ 9$\%$. This is, however, a lower limit
on the disk fraction, as the previously identified YSOs are limited by the
Spitzer IRAC Channel 2 sensitivity. This accounts for the lower disk fraction
($\sim$ 9$\%$) obtained by our analysis as compared to the 18$\%$ - 40$\%$
estimated by Guarcello et al. (2016). Cygnus OB2 has a lower disk fraction in
comparison to other young clusters like NGC 2264 and CepOB3-East and West,
which could be a result of external photoevaporation of circumstellar disks by
the massive stars in the vicinity.
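For reference, the fraction and a simple binomial uncertainty can be computed
as below (the counts are hypothetical placeholders, used only to illustrate
the arithmetic):

```python
# Sketch: disk fraction = N(YSOs with disks) / N(candidate members),
# with a binomial error estimate on the ratio.
import numpy as np

n_yso, n_members = 45, 500   # hypothetical counts, not our measured values
frac = n_yso / n_members
err = np.sqrt(frac * (1 - frac) / n_members)
print(f"disk fraction = {100*frac:.0f}% +/- {100*err:.0f}%")
```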
Figure 19: Hess plot of the r2 - i2 versus r2 CMD for the field-subtracted
sources. This Hess diagram is overplotted with the YSOs (red circles) along
with the sample selected to calculate the disk fraction (i.e. the total number
of candidate members; white crosses).
## 5 Discussion
Rigorous studies of low-mass star formation in young massive Galactic clusters
using multi-wavelength data sets are crucial to understand and solve some
important yet unanswered questions, such as the nature of the IMF for stellar
masses $<$ 0.5 M⊙, the role of a feedback-driven cluster environment in the
evolution of circumstellar disks, the proportion of sub-stellar objects, etc.
The young massive association Cygnus OB2 is a promising target for this
purpose with its substantial massive as well as pre-main sequence population
(Albacete Colombo et al. 2007; Wright & Drake 2009). This paper presents the
deepest and widest optical photometry of Cygnus OB2 available to date. We
detect a total of 713,529 sources with reliable data quality down to the faint
low-mass end (Section 3). The preliminary data analysis performed with the
deep HSC catalog suggests the presence of two sequences in the various CMDs
(Section 4.1), with the rightward sequence occupied by PMS cluster members
along with background contaminants. The previously identified YSOs overplotted
on the i2-Y vs i2 CMD in Figure 14 (Left) occupy the pre-main sequence branch
of the CMD, mostly towards the right side of the isochrones of age $<$ 10
Myrs, as expected for a young association like Cygnus OB2 (e.g. Jose et al.
2017; Damian et al. 2021). We observe that the pre-main sequence segregation
in the various CMDs (Figure 15) for the central region is consistent with most
of the star formation being significantly clustered around the centre of this
dynamically unevolved region (Wright et al. 2016; Arnold et al. 2020). The
isochrone fitting in Figure 14 (Left) suggests that $\sim$ $45\%$ of the total
713,529 sources detected in the region lie within an age less than 10 Myrs,
and a significant fraction of these sources ($\sim 12\%$) lie below the 0.08
M⊙ evolutionary track. However, we caution the reader that this is an upper
limit on the candidate pre-main sequence population in the region, as the
estimated fraction is likely to be contaminated by reddened background
sources. A more detailed identification and classification of the YSOs in the
entire HSC FoV of Cygnus OB2, both disk-bearing and diskless, will be carried
out in a future follow-up study using multi-wavelength photometry.
We perform the field decontamination of the central 18′ region to get a
statistical estimate of the membership of the sources, using a control field
located towards the periphery, which is expected to be mostly contaminated
with foreground and background stars. Approximately 70$\%$ of the
field-decontaminated sources distinctly occupy the PMS branch with age less
than 10 Myrs (Figure 16). Since these statistically decontaminated members are
further used to calculate the age and disk fraction in the cluster field, we
refine the membership with the help of an empirical PMS locus (see Section 4.3
for details). The median age of the central 18′ region is $\sim$ 5 $\pm$ 2
Myrs. The age obtained by our analysis agrees quite well with that estimated
by several other studies of the region. For example, Drew et al. (2008)
analyse 200 A-type stars across Cygnus OB2 using IPHAS photometry and find the
age to be $\sim$ 5 Myrs. Similarly, Wright et al. (2015) used a list of 169
massive OB stars to derive an age of $\sim$ 4 - 5 Myrs using the rotating
stellar evolutionary models of Ekström et al. (2012), while Wright et al.
(2010) use X-ray sources to obtain 3.5 - 5.2 Myrs as the average age of the
region. Recent studies by Berlanas et al. (2018) and Comerón et al. (2020)
perform spectroscopy of $\sim$ 60 OB-type stars (observed with the INT, ISIS
and OSIRIS instruments) and find that the age of the region ranges between 1 -
6 Myrs irrespective of the stellar model used for the age estimation. We
corroborate this result by verifying our age estimation with the Parsec
isochrone models (Bressan et al. 2012) in addition to the Baraffe models, for
a mass range of 0.3 M⊙ - 1.4 M⊙, and derive a median age of $\sim$ 4.5 $\pm$ 2
Myrs. Cygnus OB2 is part of the larger Cygnus X giant molecular cloud, which
formed approximately 40 - 50 Myrs ago. The star formation towards the Cygnus
OB2 region, however, has mainly taken place in the last 10 - 20 Myrs, with the
last star formation activity peaking around 3 - 5 Myrs ago (Reipurth &
Schneider 2008; Comerón & Pasquali 2012; Comerón et al. 2016; Berlanas et al.
2018; Comerón et al. 2020). This is consistent with the substantial pre-main
sequence population with median age $\sim$ 5 Myrs obtained in our data
analysis.
We obtain a disk fraction of $\sim$ 9$\%$ for this cluster field using the
already known YSOs in the region. There is a wide variety of disk fractions
measured in young clusters. An average disk fraction of 30$\%$ - 50$\%$ is
observed in several young clusters (with age $\sim$ 3 – 6 Myrs) such as NGC
2264 (Sung et al. 2009), CepOB3b-East and West (Allen et al. 2012), AFGL
333/W3 (Jose et al. 2016), IC348/U (Richert et al. 2018) and NGC 2282 (Dutta
et al. 2015). However, recent studies of some nearby young clusters (Hernández
et al. 2010; Guarcello et al. 2016; Richert et al. 2018) show considerably
smaller disk fractions. For example, the recent study by Richert et al. (2018)
of 69 MYStIX and SFiNCs young clusters reveals that the disk fraction can drop
to values $\leq$ 15$\%$ for a cluster age $\geq$ 4 Myrs, which is consistent
with our results. The particularly low disk fraction obtained for the central
region of Cygnus OB2, and for such other clusters lying at the lower end of
the disk-fraction distribution, may be attributed to either evolutionary
effects or the feedback from massive OB-type stars in the vicinity (Guarcello
et al. 2016). In this work we cannot conclusively pinpoint the exact reason;
however, evolutionary effects and external photoevaporation are among the
possible explanations for the observed low disk fractions. The significant
census of low-mass and sub-stellar sources detected with the deep HSC
photometry (r2 $\sim$ 28 mag) will serve as an excellent statistical sample
for further studies testing the effect of the feedback-driven environmental
conditions of Cygnus OB2 on the low-mass population across the region. To
conclude, we find from our preliminary analysis that, in accordance with the
literature, Cygnus OB2 is a young active star-forming region (age $<$ 10 Myr)
with a substantial pre-main sequence population. Deep multi-wavelength studies
are essential to understand low-mass star formation in the region and will be
the focus of our future works.
## 6 Summary and Future Works
This paper presents the deepest and widest optical catalog of the young
feedback-driven OB association Cygnus OB2.
1) A 1.5∘ diameter area of Cygnus OB2 was observed with the Subaru Hyper
Suprime-Cam (HSC) in 4 filters, namely r2, i2, z and Y. The observations were
taken under excellent seeing conditions ranging between 0.5′′–0.7′′. The
observed raw data were reduced using HSC pipeline version 6.7.
2) The final HSC catalog contains only those point sources which have at least
a 2-band detection and, additionally, an internal astrometric error $\leq$
0.1′′ along with a photometric error $\leq$ 0.1 mag in the individual bands. A
total of 713,529 sources are detected, with 699,798 sources detected in the
Y-band, 685,511 sources in the z-band, 622,011 in i2 and 358,372 sources in
the r2-band.
3) We detect sources down to 28.0 mag, 27.0 mag, 25.5 mag and 24.5 mag in the
r2, i2, z and Y-band respectively. Adopting a distance of 1600 pc, an age of 5
$\pm$ 2 Myrs and an extinction AV $\sim$ 6 – 8 mag, we achieve $\sim$ 90%
completeness down to stellar masses of $\sim$ 0.03 – 0.06 M⊙ and $\sim$ 0.03 –
0.04 M⊙, i.e. below the lithium-burning limit, in the i2 and z-band
respectively. The corresponding mass completeness limits are $\sim$ 0.02-0.03
M⊙ and $\sim$ 0.15-0.30 M⊙ in the Y and r2-bands, respectively.
4) The median age of the central region of Cygnus OB2 ranges between 4 – 6
Myrs for an AV ranging between 4.5 – 7.5 mag and a distance between 1500 –
1700 pc. We obtain a disk fraction of $\sim$ 9$\%$ in the central cluster,
which is however a lower limit given the restricted completeness of the
already known YSOs.
As the next step, we plan to adopt a multi-wavelength approach by combining
the presented HSC optical data with other existing data from the UKIDSS, 2MASS
and Spitzer surveys to carry out a detailed analysis of the YSOs present in
the region. In addition, we will use the deep optical photometry presented in
this paper, coupled with other data sets, to evaluate cluster parameters like
the IMF for very low-mass stars ($<$ 0.1 M⊙), to identify and characterize
sub-stellar objects like brown dwarfs, and to understand the role of the
feedback-driven environment of Cygnus OB2 on such parameters.
## 7 Data Availability
A sample table of the HSC catalog is presented in Table 4. The complete
catalog is provided as online material.
Table 4: Sample table of HSC catalog data. The complete table is available as online material.
Source | RA | Dec | r2 | r${}_{2_{err}}$ | i2 | i${}_{2_{err}}$ | z | zerr | Y | Yerr
---|---|---|---|---|---|---|---|---|---|---
 | (deg) | (deg) | (mag) | (mag) | (mag) | (mag) | (mag) | (mag) | (mag) | (mag)
1 | 308.69298 | 41.86609 | 25.728 | 0.019 | 23.568 | 0.006 | 22.090 | 0.008 | 21.434 | 0.008
2 | 308.83647 | 41.86581 | 24.790 | 0.010 | 22.666 | 0.003 | 21.175 | 0.004 | 20.515 | 0.004
3 | 308.70283 | 41.86674 | 26.425 | 0.044 | 24.641 | 0.018 | 22.859 | 0.015 | 22.154 | 0.016
4 | 308.84554 | 41.86651 | 25.894 | 0.028 | 22.267 | 0.002 | 20.231 | 0.002 | 19.183 | 0.001
5 | 308.79625 | 41.86680 | 24.398 | 0.007 | 22.314 | 0.002 | 21.279 | 0.005 | 20.026 | 0.002
## Acknowledgements
The authors thank the referee for the useful constructive comments which have
refined the overall structure and quality of this paper. This research is
based on data collected at the Subaru Telescope with Hyper Suprime-Cam, which
is operated by the National Astronomical Observatory of Japan. We are honored
and grateful for the opportunity of observing the Universe from Mauna Kea,
which has cultural, historical and natural significance in Hawaii. We are
grateful to the East Asian Observatory, which is supported by the National
Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and
Astrophysics; the Korea Astronomy and Space Science Institute; the Operation,
Maintenance and Upgrading Fund for Astronomical Telescopes and Facility
Instruments, budgeted from the Ministry of Finance (MOF) of China and
administrated by the Chinese Academy of Sciences (CAS); as well as the
National Key R&D Program of China (No. 2017YFA0402700). The authors thank the
entire HSC staff and HSC helpdesk for their help. We would like to thank
S. Mineo, H. Furusawa, Y. Yamada and M. Kubo of the HSC helpdesk team for
useful discussions regarding the data reduction. We thank NAOJ for providing
access to the hanaco account, which was used to perform some initial stages of
the data reduction. We gratefully acknowledge the use of the high performance
computing facilities at IUCAA, Pune for the HSC data reduction. We thank
I. Baraffe for providing us with isochrone models at intervals of log(Age) =
0.01 through personal communication. We use Pan-STARRS and Gaia EDR3 data for
data quality checks. The Pan-STARRS1 Surveys (PS1) and the PS1 public science
quality checks. The Pan-STARRS1 Surveys (PS1) and the PS1 public science
archive have been made possible through contributions by the Institute for
Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-
Planck Society and its participating institutes, the Max Planck Institute for
Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial
Physics, Garching, The Johns Hopkins University, Durham University, the
University of Edinburgh, the Queen’s University Belfast, the Harvard-
Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global
Telescope Network Incorporated, the National Central University of Taiwan, the
Space Telescope Science Institute, the National Aeronautics and Space
Administration under Grant No. NNX08AR22G issued through the Planetary Science
Division of the NASA Science Mission Directorate, the National Science
Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand
University (ELTE), the Los Alamos National Laboratory, and the Gordon and
Betty Moore Foundation. This work has made use of data from the European Space
Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis
Consortium (DPAC: https://www.cosmos.esa.int/web/gaia/dpac/consortium). PP and
JJ acknowledge the DST-SERB, Govt. of India, for the start-up research grant
(No: SRG/2019/000664).
## References
* Aihara et al. (2017) Aihara H., et al., 2017, Publications of the Astronomical Society of Japan, 70
* Aihara et al. (2019) Aihara H., et al., 2019, PASJ, 71, 114
* Albacete Colombo et al. (2007) Albacete Colombo J. F., Flaccomio E., Micela G., Sciortino S., Damiani F., 2007, A&A, 464, 211
* Allen et al. (2012) Allen T. S., et al., 2012, ApJ, 750, 125
* Armitage (2015) Armitage P. J., 2015, arXiv e-prints, p. arXiv:1509.06382
* Arnold et al. (2020) Arnold B., Goodwin S. P., Wright N. J., 2020, MNRAS, 495, 3474
* Baraffe et al. (2015) Baraffe I., Homeier D., Allard F., Chabrier G., 2015, A&A, 577, A42
* Bastian et al. (2010) Bastian N., Covey K. R., Meyer M. R., 2010, ARA&A, 48, 339
* Basu (2017) Basu S., 2017, Perspectives on Low-Mass Star Formation (arXiv:1703.01542)
* Berlanas et al. (2018) Berlanas S. R., Herrero A., Comerón F., Pasquali A., Bertelli Motta C., Sota A., 2018, A&A, 612, A50
* Berlanas et al. (2020) Berlanas S. R., et al., 2020, A&A, 642, A168
* Bhardwaj et al. (2019) Bhardwaj A., Panwar N., Herczeg G. J., Chen W. P., Singh H. P., 2019, A&A, 627, A135
* Biazzo et al. (2019) Biazzo K., Beccari G., De Marchi G., Panagia N., 2019, ApJ, 875, 51
* Bobylev & Bajkova (2020) Bobylev V. V., Bajkova A. T., 2020, Astrophysical Bulletin, 75, 267
* Bosch et al. (2017) Bosch J., et al., 2017, Publications of the Astronomical Society of Japan, 70
* Bressan et al. (2012) Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, MNRAS, 427, 127
* Brown et al. (2016) Brown A. G. A., et al., 2016, Astronomy & Astrophysics, 595, A2
* Cappellari et al. (2012) Cappellari M., et al., 2012, Nature, 484, 485
* Carpenter (2000) Carpenter J. M., 2000, AJ, 120, 3139
* Chambers et al. (2019) Chambers K. C., et al., 2019, The Pan-STARRS1 Surveys (arXiv:1612.05560)
* Comerón & Pasquali (2012) Comerón F., Pasquali A., 2012, A&A, 543, A101
* Comerón et al. (2016) Comerón F., Djupvik A., Schneider N., Pasquali A., 2016, Astronomy and Astrophysics, 586
* Comerón et al. (2020) Comerón F., Djupvik A. A., Schneider N., Pasquali A., 2020, A&A, 644, A62
* Damian et al. (2021) Damian B., Jose J., Samal M. R., Moraux E., Das S. R., Patra S., 2021, MNRAS, 504, 2557
* Damiani et al. (2019) Damiani F., Prisinzano L., Pillitteri I., Micela G., Sciortino S., 2019, A&A, 623, A112
* Das et al. (2021) Das S. R., Jose J., Samal M. R., Zhang S., Panwar N., 2021, MNRAS, 500, 3123
* Drew et al. (2008) Drew J. E., Greimel R., Irwin M. J., Sale S. E., 2008, MNRAS, 386, 1761
* Dunham et al. (2015) Dunham M. M., et al., 2015, The Astrophysical Journal Supplement Series, 220, 11
* Dutta et al. (2015) Dutta S., Mondal S., Jose J., Das R. K., Samal M. R., Ghosh S., 2015, MNRAS, 454, 3597
* Dutta et al. (2018) Dutta S., Mondal S., Joshi S., Jose J., Das R., Ghosh S., 2018, MNRAS, 476, 2813
* Dzib et al. (2018) Dzib S. A., Loinard L., Ortiz-León G. N., Rodríguez L. F., Galli P. A. B., 2018, The Astrophysical Journal, 867, 151
* Ekström et al. (2012) Ekström S., et al., 2012, A&A, 537, A146
* Ercolano & Pascucci (2017) Ercolano B., Pascucci I., 2017, Royal Society Open Science, 4, 170114
* Esplin & Luhman (2020) Esplin T. L., Luhman K. L., 2020, The Astronomical Journal, 159, 282
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Furusawa et al. (2017) Furusawa H., et al., 2017, Publications of the Astronomical Society of Japan, 70
* Gaia Collaboration et al. (2020) Gaia Collaboration Brown A. G. A., Vallenari A., Prusti T., de Bruijne J. H. J., Babusiaux C., Biermann M., 2020, arXiv e-prints, p. arXiv:2012.01533
* Geha et al. (2013) Geha M., et al., 2013, ApJ, 771, 29
* Gennaro et al. (2018) Gennaro M., et al., 2018, ApJ, 855, 20
* Gorti et al. (2015) Gorti U., Hollenbach D., Dullemond C. P., 2015, ApJ, 804, 29
* Guarcello et al. (2012) Guarcello M. G., Wright N. J., Drake J. J., García-Alvarez D., Drew J. E., Aldcroft T., Kashyap V. L., 2012, ApJS, 202, 19
* Guarcello et al. (2013) Guarcello M. G., et al., 2013, The Astrophysical Journal, 773, 135
* Guarcello et al. (2016) Guarcello M. G., et al., 2016, arXiv e-prints, p. arXiv:1605.01773
* Haisch et al. (2001) Haisch Karl E. J., Lada E. A., Lada C. J., 2001, ApJ, 553, L153
* Hartmann (2008) Hartmann L., 2008, Physica Scripta, T130, 014012
* Helled et al. (2014) Helled R., et al., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. p. 643 (arXiv:1311.1142), doi:10.2458/azu_uapress_9780816531240-ch028
* Herczeg et al. (2019) Herczeg G. J., et al., 2019, ApJ, 878, 111
* Hernández et al. (2007) Hernández J., et al., 2007, ApJ, 671, 1784
* Hernández et al. (2010) Hernández J., Morales-Calderon M., Calvet N., Hartmann L., Muzerolle J., Gutermuth R., Luhman K. L., Stauffer J., 2010, ApJ, 722, 1226
* Hosek et al. (2019) Hosek Matthew W. J., Lu J. R., Anderson J., Najarro F., Ghez A. M., Morris M. R., Clarkson W. I., Albers S. M., 2019, ApJ, 870, 44
* Ishikawa et al. (2020) Ishikawa S., et al., 2020, The Astrophysical Journal, 904, 128
* Jaelani et al. (2020) Jaelani A. T., et al., 2020, Monthly Notices of the Royal Astronomical Society, 495, 1291
* Jose et al. (2013) Jose J., et al., 2013, MNRAS, 432, 3445
* Jose et al. (2016) Jose J., Kim J. S., Herczeg G. J., Samal M. R., Bieging J. H., Meyer M. R., Sherry W. H., 2016, The Astrophysical Journal, 822, 49
* Jose et al. (2017) Jose J., Herczeg G. J., Samal M. R., Fang Q., Panwar N., 2017, ApJ, 836, 98
* Jose et al. (2020) Jose J., et al., 2020, ApJ, 892, 122
* Kaur et al. (2020) Kaur H., Sharma S., Dewangan L. K., Ojha D. K., Durgapal A., Panwar N., 2020, ApJ, 896, 29
* Kawanomoto et al. (2018) Kawanomoto S., et al., 2018, PASJ, 70, 66
* Kim et al. (2016) Kim J. S., Clarke C. J., Fang M., Facchini S., 2016, ApJ, 826, L15
* Knödlseder (2000) Knödlseder J., 2000, A&A, 360, 539
* Komiyama et al. (2017) Komiyama Y., et al., 2017, Publications of the Astronomical Society of Japan, 70
* Komiyama et al. (2018) Komiyama Y., et al., 2018, The Astrophysical Journal, 853, 29
* Ksoll et al. (2020) Ksoll V. F., et al., 2020, arXiv e-prints, p. arXiv:2012.00524
* Kubiak et al. (2021) Kubiak K., Mužić K., Sousa I., Almendros-Abad V., Köhler R., Scholz A., 2021, arXiv e-prints, p. arXiv:2102.05589
* Lada & Lada (2003) Lada C. J., Lada E. A., 2003, Annual Review of Astronomy and Astrophysics, 41, 57
* Lim et al. (2019) Lim B., Nazé Y., Gosset E., Rauw G., 2019, Monthly Notices of the Royal Astronomical Society, 490, 440
* Longmore et al. (2014) Longmore S. N., et al., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. p. 291 (arXiv:1401.4175), doi:10.2458/azu_uapress_9780816531240-ch013
* Lu et al. (2013) Lu J. R., Do T., Ghez A. M., Morris M. R., Yelda S., Matthews K., 2013, The Astrophysical Journal, 764, 155
* Luhman (2012) Luhman K. L., 2012, Annual Review of Astronomy and Astrophysics, 50, 65
* Manara et al. (2017) Manara C., Prusti T., Voirin J., Zari E., 2017, Proceedings of the International Astronomical Union, 12
* Matsuoka et al. (2019) Matsuoka Y., et al., 2019, ApJ, 883, 183
* Megeath et al. (2019) Megeath T., et al., 2019, BAAS, 51, 333
* Mehta et al. (2018) Mehta V., et al., 2018, ApJS, 235, 36
* Miyazaki et al. (2012) Miyazaki S., et al., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84460Z, doi:10.1117/12.926844
* Miyazaki et al. (2018) Miyazaki S., et al., 2018, PASJ, 70, S1
* Moraux (2016) Moraux E., 2016, in EAS Publications Series. pp 73–114 (arXiv:1607.00027), doi:10.1051/eas/1680004
* Murata et al. (2020) Murata R., Sunayama T., Oguri M., More S., Nishizawa A. J., Nishimichi T., Osato K., 2020, PASJ, 72, 64
* O’dell et al. (1993) O’dell C. R., Wen Z., Hu X., 1993, ApJ, 410, 696
* Offner et al. (2014) Offner S. S. R., Clark P. C., Hennebelle P., Bastian N., Bate M. R., Hopkins P. F., Moraux E., Whitworth A. P., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. p. 53 (arXiv:1312.5326), doi:10.2458/azu_uapress_9780816531240-ch003
* Panwar et al. (2017) Panwar N., et al., 2017, MNRAS, 468, 2684
* Panwar et al. (2018) Panwar N., Pandey A. K., Samal M. R., Battinelli P., Ogura K., Ojha D. K., Chen W. P., Singh H. P., 2018, The Astronomical Journal, 155, 44
* Parker et al. (2021) Parker R. J., Nicholson R. B., Alcock H. L., 2021, MNRAS, 502, 2665
* Pecaut & Mamajek (2016) Pecaut M. J., Mamajek E. E., 2016, MNRAS, 461, 794
* Pfalzner et al. (2012) Pfalzner S., Kaczmarek T., Olczak C., 2012, A&A, 545, A122
* Portegies Zwart et al. (2010) Portegies Zwart S. F., McMillan S. L., Gieles M., 2010, Annual Review of Astronomy and Astrophysics, 48, 431
* Reipurth & Schneider (2008) Reipurth B., Schneider N., 2008, Star Formation and Young Clusters in Cygnus. p. 36
* Reiter & Parker (2019) Reiter M., Parker R. J., 2019, MNRAS, 486, 4354
* Ribas et al. (2014) Ribas Á., Merín B., Bouy H., Maud L. T., 2014, A&A, 561, A54
* Richert et al. (2018) Richert A. J. W., Getman K. V., Feigelson E. D., Kuhn M. A., Broos P. S., Povich M. S., Bate M. R., Garmire G. P., 2018, MNRAS, 477, 5191
* Samal et al. (2015) Samal M. R., et al., 2015, A&A, 581, A5
* Schneider et al. (2006) Schneider N., Bontemps S., Simon R., Jakob H., Motte F., Miller M., Kramer C., Stutzki J., 2006, A&A, 458, 855
* Schneider et al. (2012) Schneider N., et al., 2012, A&A, 542, L18
* Schneider et al. (2016) Schneider N., et al., 2016, A&A, 591, A40
* Sicilia-Aguilar et al. (2013) Sicilia-Aguilar A., Kim J. S., Sobolev A., Getman K., Henning T., Fang M., 2013, A&A, 559, A3
* Sung et al. (2009) Sung H., Stauffer J. R., Bessell M. S., 2009, AJ, 138, 1116
* Thies et al. (2010) Thies I., Kroupa P., Goodwin S. P., Stamatellos D., Whitworth A. P., 2010, The Astrophysical Journal, 717, 577
* Wang & Chen (2019) Wang S., Chen X., 2019, The Astrophysical Journal, 877, 116
* Williams & Cieza (2011) Williams J. P., Cieza L. A., 2011, ARA&A, 49, 67
* Winter et al. (2018) Winter A. J., Clarke C. J., Rosotti G., Ih J., Facchini S., Haworth T. J., 2018, MNRAS, 478, 2700
* Winter et al. (2019) Winter A. J., Clarke C. J., Rosotti G. P., 2019, MNRAS, 485, 1489
* Wright & Drake (2009) Wright N. J., Drake J. J., 2009, ApJS, 184, 84
* Wright et al. (2010) Wright N. J., Drake J. J., Drew J. E., Vink J. S., 2010, ApJ, 713, 871
* Wright et al. (2012) Wright N. J., Drake J. J., Drew J. E., Guarcello M. G., Gutermuth R. A., Hora J. L., Kraemer K. E., 2012, ApJ, 746, L21
* Wright et al. (2015) Wright N. J., Drew J. E., Mohr-Smith M., 2015, Monthly Notices of the Royal Astronomical Society, 449, 741
* Wright et al. (2016) Wright N. J., Bouy H., Drew J. E., Sarro L. M., Bertin E., Cuillandre J.-C., Barrado D., 2016, MNRAS, 460, 2593
* Yasui (2021) Yasui C., 2021, arXiv e-prints, p. arXiv:2104.11764
* Yasui et al. (2016) Yasui C., Kobayashi N., Saito M., Izumi N., 2016, AJ, 151, 115
* van Dokkum & Conroy (2010) van Dokkum P. G., Conroy C., 2010, Nature, 468, 940
## Appendix A
We present in Figure 20 HSC $r_{2}$-band images of a few
proplyds/globules/globulettes identified by Wright et al. (2012), with the
centres of the regions given in the caption below.
Figure 20: Images of proplyds/globules/globulettes in Cygnus OB2 in the
$r_{2}$-band, with their central coordinates. Upper Left: RA 20:34:46.28, Dec
+40:52:36.9; Upper Right: RA 20:34:14.4438, Dec +41:07:39.961; Bottom Left: RA
20:33:12, Dec +40:41:48.657; Bottom Middle: RA 20:34:47, Dec +41:14:45; Bottom
Right: RA 20:34:53.6, Dec +40:48:14.
## Appendix B Transformation Equations
The transformation equations used to convert magnitudes from the Pan-STARRS
system to the Subaru HSC system in the individual bands (in order to plot the
magnitude offsets) are given below; during the data reduction, the
coefficients used by the pipeline are the same as in these equations. The
subscript PS denotes Pan-STARRS magnitudes:
$Y_{HSC}=Y_{PS}-0.001952+0.19957\,(Y-z)_{PS}+0.216821\,(Y-z)^{2}_{PS}$ (1)
$z_{HSC}=z_{PS}-0.005585-0.220704\,(z-Y)_{PS}-0.298211\,(z-Y)^{2}_{PS}$ (2)
$i_{2,HSC}=i_{2,PS}+0.001653-0.206313\,(i_{2}-z)_{PS}-0.016085\,(i_{2}-z)^{2}_{PS}$ (3)
$r_{2,HSC}=r_{2,PS}+0.000118-0.00279\,(r_{2}-i_{2})_{PS}-0.014363\,(r_{2}-i_{2})^{2}_{PS}$ (4)
The reddening laws (Wang & Chen 2019) adopted by us to correct the Baraffe
isochrones for extinction in the Pan-STARRS system are listed below:
$\frac{A_{r}}{A_{V}}=0.843\pm 0.006$
$\frac{A_{i}}{A_{V}}=0.628\pm 0.004$
$\frac{A_{z}}{A_{V}}=0.487\pm 0.003$
$\frac{A_{y}}{A_{V}}=0.395\pm 0.003$
These reddening laws were used, together with a distance of 1600 pc and AV = 6
mag, to convert the absolute Pan-STARRS magnitudes to apparent magnitudes. The
transformation equations above are then applied to convert the appropriately
reddened isochrones to the HSC photometric system.
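A minimal sketch applying these transformations (the function name is a
hypothetical helper; the coefficients are copied from eqs. 1-4 above) is:

```python
# Sketch: convert Pan-STARRS magnitudes to the HSC system, e.g. for the
# reddened Baraffe isochrones. Inputs are apparent Pan-STARRS magnitudes,
# i.e. after adding the distance modulus for 1600 pc
# (5*log10(1600) - 5 ~ 11.0 mag) and the A_V = 6 mag reddening using the
# Wang & Chen (2019) coefficients quoted above.
def panstarrs_to_hsc(r, i, z, y):
    y_hsc  = y - 0.001952 + 0.19957  * (y - z) + 0.216821 * (y - z)**2
    z_hsc  = z - 0.005585 - 0.220704 * (z - y) - 0.298211 * (z - y)**2
    i2_hsc = i + 0.001653 - 0.206313 * (i - z) - 0.016085 * (i - z)**2
    r2_hsc = r + 0.000118 - 0.00279  * (r - i) - 0.014363 * (r - i)**2
    return r2_hsc, i2_hsc, z_hsc, y_hsc
```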
## Appendix C
We present here the spatial distribution of astrometric offset of HSC data
with respect to Pan-STARRS DR1 and Gaia EDR3 data.
Figure 21: Spatial plots signifying the variation of astrometric offset in
Right Ascension (Upper Left) and Declination (Bottom Left) between HSC and
Pan-STARRS data as well as HSC and Gaia EDR3 data (Upper Right and Bottom
Right) across the entire region. The spatial maps are obtained by binning the
RA and Dec parameter space into 10′ $\times$ 10′ bins across the entire
observed region. The colorbar indicates the mean uncertainty in RA (Left) and
Dec (Right) of each bin.
# On the road(s) to the Standard Model
Rodrigo Alonso and Mia West
Institute for Particle Physics Phenomenology, Durham University, South Road, Durham, DH1 3LE
###### Abstract
Experimental measurements point at the Standard Model (SM) as the theory of
electroweak symmetry breaking, but as we close in on our characterization the
question arises of which limits in theory space lead to the SM. The class of
theories with this property cannot be ruled out, only constrained to an ever
smaller neighbourhood of the SM. In contrast, which classes of theories do not
possess this limit and can therefore potentially be ruled out experimentally?
In this work we study both classes and find evidence supporting the Standard
Model Effective Field Theory as the single road to the Standard Model, with
theories that fall outside this class keeping a ‘minimum distance’ from the
SM, characterized by a cut-off of at most $4\pi v/g_{\rm SM}$.
Preprint: IPPP/21/29
## I Introduction
Nowadays, particle physics finds itself in the midst of the exploration of
Electroweak Symmetry Breaking (EWSB); the outcome of this endeavour will chart
Nature’s theory of elementary particles. Experimental data, collated and
compared with the predictions of theories of EWSB, has narrowed down the range
of possibilities; many a casualty now lies discarded, disproven by the
progress in our measurements. The Higgs boson discovery, coming up on a decade
old, was the main stroke on our map, with subsequent data giving a profile
that resembles the one heralded by the Standard Model (SM). Theory
considerations have long pointed out that the SM case for EWSB is unstable
under higher-scale corrections and indicated that new physics should lie in
wait at the electroweak scale. Whether these considerations should be
revisited and our theory perspective profoundly changed, or whether instead
patience is all that is needed, the pressing question currently posed by
experimental data is to characterize the theory ‘neighbourhood’ of the SM. The
claim that one observes nothing but the SM at the LHC is indeed only as good
as our characterization of what else we could observe; it is here that we find
value in the aforementioned casualties. The aim of this work is to explore the
consistent theory neighbourhood of the Standard Model.
A long known and studied approach, or ‘trajectory’, to the SM is the linearly
realized Effective Field Theory (SMEFT), see [1] for a review, this road being
pointed at by the decoupling theorem [2]. The integration of any heavy
particle whose mass can be arbitrarily larger than the EWSB vev ($M>v$) in a
perturbative linear realization will yield the SMEFT; supersymmetry or
composite Higgs models fall into this category. Is this the only road to the
Standard Model, i.e. are there other consistent limits that yield the SM
couplings for the known spectrum of elementary particles? As fundamental as
this topic is, in its present formulation the candidate preceded the question;
Higgs Effective Field Theory (HEFT) [3, 4] is an EFT that encompasses the
SMEFT but extends beyond it and might offer new roads. In HEFT, a linear
realization is not assumed (though it is admissible in a certain limit); it is
indeed the most general Lorentz and gauge invariant theory with the known
spectrum of particles (which suggests it should be possible to formulate it in
terms of amplitudes). The theories that this EFT describes but which fall
outside SMEFT, called here theories of the quotient space HEFT/SMEFT or simply
quotient EFTs (in other works these are called, with a slight abuse of
notation, HEFT), could contain a path to the SM other than via SMEFT. This
quotient space is characterized by missing a point in field space which is
left invariant under an $O(4)$ transformation [5, 6], be it because the point
is not present or because the would-be invariant point is singular [7]. A
geometric formalism was used to derive this result; it also aids in exploring
the properties of theories without field redundancies, as introduced in [5, 6]
and followed up in [7, 8, 9], and is adopted here as well. Some theories in
the HEFT/SMEFT quotient space have been formulated while having a perturbative
expansion [7]; they have been found to have a cut-off of $\sim 4\pi v$, and no
limit can be taken within them that yields the SM. It has been suggested that
all of this quotient space shares this property of a finite $v$-bound cut-off
[10], with further evidence provided in [9], which means in turn that these
theories could all be casualties of our exploration with present and future
machines. This question has so far been explored with perturbative unitarity
bounds, while here it is examined with semi-classical arguments.
This letter is structured as follows. Section II introduces geometry from
amplitudes, and Sec. II.1 presents the basis in Riemann normal coordinates.
This first part has been rendered a review, rather than new results, by virtue
of [9], although all results here were derived independently. Section II.2
presents theory and experimental bounds on the curvature plane, while Sec. III
characterizes SMEFT on this plane. In Sec. IV, example models of SMEFT and of
the quotient space are presented and characterized in the curvature plane.
Sec. V presents theories in quotient space arising from geometry rather than
explicit models and finds candidate quotient theories that seem to approach
the SM. A semi-classical argument for the finite cut-off of theories in
quotient space is given in Sec. VI.
## II Geometry and Amplitudes
For simplicity, $O(4)\supset SU(2)\times U(1)$ invariance in the EWSB sector
is assumed. We take the high energy limit and make use of the equivalence
theorem. The Higgs singlet field is denoted $h$, and the Goldstones swallowed
by the $W$ and $Z$ bosons are denoted $\varphi^{a}$, $a=1,2,3$.
Let us start by defining our geometry from the scattering matrix $S$, so as to
depart from a commonplace, basis-invariant quantity in particle physics.
Following the line-integral definition for general amplitudes, valid also in
the UV, we have ($S=1-i\mathcal{A}$):
$-R_{h+h-}=\frac{1}{2\pi i}\oint\frac{1}{s_{12}^{2}}\mathcal{A}_{W^{+}_{1}W^{-}_{2}\to hh}$ (1)
$-R_{+-+-}=\frac{1}{2\pi i}\oint\frac{1}{s_{12}^{2}}\mathcal{A}_{W^{+}_{1}W^{+}_{2}\to W^{+}W^{+}}$ (2)
$-\nabla_{h}R_{+h-h}=\frac{1}{2\pi i}\oint\frac{1}{s_{12}^{2}}\mathcal{A}_{W_{1}^{+}W_{2}^{-}\to hhh}$ (3)
$-\nabla_{h}R_{+-+-}=\frac{1}{\pi i}\oint\frac{1}{s_{12}^{2}}\mathcal{A}_{W_{1}^{+}W_{2}^{+}\to W_{3}^{+}W_{4}^{+}h}$ (4)
$\phantom{-\nabla_{h}R_{+-+-}}=\frac{1}{\pi i}\oint\frac{1}{s_{34}^{2}}\mathcal{A}_{W_{1}^{+}W_{2}^{+}\to W_{3}^{+}W_{4}^{+}h}$ (5)
where $s_{ij}=(p_{i}+p_{j})^{2}$. Indices of the Riemann tensor run over $h$
and $a=1,2,3$, and the $\pm$ entries are obtained by contracting an $a$-index
with the projector $(\delta^{a}_{\,1}\pm i\delta^{a}_{\,2})/\sqrt{2}$; for
example
$R_{h+h-}=R_{hahb}\,\frac{(\delta^{a}_{\,1}+i\delta^{a}_{\,2})}{\sqrt{2}}\,\frac{(\delta^{b}_{\,1}-i\delta^{b}_{\,2})}{\sqrt{2}}$ (6)
While the above definition is useful to include UV models and derive
positivity bounds [11], in practice we will work with the low-energy EFT, in
which case the correspondence amounts to reading our geometry off the
$\mathcal{O}(s)$ coefficients of a Taylor expansion. What is more, these
coefficients capture all terms at this order. Explicitly,
$\mathcal{A}_{W_{1}^{+}W_{2}^{-}\to hh}=-s_{12}R_{+h-h}$ (7)
$\mathcal{A}_{W^{+}_{1}W^{+}_{2}\to WW}=-s_{12}R_{+-+-}$ (8)
$\mathcal{A}_{W_{1}^{+}W_{2}^{-}\to hhh}=-s_{12}\nabla_{h}R_{+h-h}$ (9)
$\mathcal{A}_{W^{+}_{1}W^{+}_{2}\to W_{3}^{+}W_{4}^{+}h}=-\frac{s_{12}+s_{34}}{2}\nabla_{h}R_{+-+-}$ (10)
where we have neglected masses, assuming $s\gg M_{W}^{2},M_{Z}^{2},m_{h}^{2}$.
This starting point makes it evident that our tensor $R$ and its derivatives
are physical and field-redefinition (coordinate) invariant. Even if intuitive,
this last statement should be qualified. On the geometry side, having defined
tensor entries rather than invariants, one has that these change under
coordinate transformations, albeit with well defined properties. They are
nonetheless the same for local transformations of our fields (defined around
the vacuum) which leave the amplitudes unchanged [12]:
$\hat{\phi}^{i}=\left(\delta^{i}_{j}+\sum_{k=1}c^{k}_{j}\phi^{k}\right)\phi^{j}$ (11)
so that after quantization both fields produce a particle out of the vacuum,
$\langle p|\phi^{i}|0\rangle=\langle p|\hat{\phi}^{i}|0\rangle$ (12)
with $|p\rangle$ the state associated with the field. It is for this type of
transformation that the $S$ matrix is left invariant, and tensors evaluated at
the vacuum transform trivially, since:
$\left.\frac{\partial\phi^{i}}{\partial\hat{\phi}^{j}}\,\right|_{\phi=0}=\delta^{i}_{j}$ (13)
Still, from where we stand, the definition of the Riemann tensor components in
terms of amplitudes seems arbitrary and potentially inconsistent. So let us
now turn to the Lagrangian theory which yields such relations.
### II.1 Riemann Normal Coordinates
Take the metric from which the Riemann tensor in eqs. (1-5) derives as $G_{ij}(\phi)$, with $i,j=h,1,2,3$, $\phi=(h,\varphi^{a})$, $a=1,2,3$. The amplitudes in eqs. (1-5) follow from the action
$\displaystyle S=$ $\displaystyle\frac{1}{2}\int d^{4}x\,\partial_{\mu}\phi^{i}G_{ij}\partial^{\mu}\phi^{j}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\int d^{4}x\left(\partial_{\mu}h\partial^{\mu}h+F(h)^{2}g_{ab}\partial^{\mu}\varphi^{a}\partial_{\mu}\varphi^{b}\right)$ (14)
In matrix notation, our parametrization of the metric reads
$\displaystyle G_{ij}=\left(\begin{array}{cc}1&0\\ 0&F^{2}g_{ab}\end{array}\right)$ (17)
where off-diagonal entries are forbidden by symmetry and $g_{ab}$ is the
metric on the 3-sphere which we find useful to represent via the unit vector
$u(\varphi)$:
$\displaystyle g_{ab}=$ $\displaystyle\frac{\partial
u(\varphi)}{\partial\varphi^{a}}\frac{\partial
u(\varphi)}{\partial\varphi^{b}}$ $\displaystyle u\cdot u$ $\displaystyle=1$
(18)
with $u$ transforming as a vector under $O(4)$. It follows that the non-
vanishing elements of the Riemann tensor and its first covariant derivative
are
$\displaystyle R_{abcd}$
$\displaystyle=\left(\frac{1}{v^{2}}-(F^{\prime})^{2}\right)F^{2}g_{a[c}g_{bd]}$
(19) $\displaystyle R_{ahbh}$ $\displaystyle=-F^{\prime\prime}Fg_{ab}$
(20) $\displaystyle\nabla_{h}R_{ahbh}$
$\displaystyle=F^{2}\left(-\frac{F^{\prime\prime}}{F}\right)^{\prime}g_{ab}$
(21) $\displaystyle\nabla_{h}R_{abcd}$
$\displaystyle=F^{4}\left(\frac{1}{v^{2}F^{2}}-\frac{(F^{\prime})^{2}}{F^{2}}\right)^{\prime}g_{a[c}g_{bd]}$
(22) $\displaystyle\nabla_{a}R_{hbcd}$
$\displaystyle=\frac{F^{4}}{2}\left(\frac{1}{v^{2}F^{2}}-\frac{(F^{\prime})^{2}}{F^{2}}\right)^{\prime}g_{a[c}g_{bd]}$
(23)
where prime denotes differentiation with respect to $h$ and it is useful to
define
$\displaystyle R_{h}$ $\displaystyle\equiv-\frac{F^{\prime\prime}}{F}$
$\displaystyle R_{\varphi}$
$\displaystyle\equiv\frac{1}{v^{2}F^{2}}-\frac{(F^{\prime})^{2}}{F^{2}}$ (24)
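As a cross-check of these expressions, the geometry side can be computed symbolically. The following is a minimal sympy sketch (our own illustration, not part of the original computation), restricted to a single Goldstone direction so that the field-space metric is ${\rm diag}(1,F(h)^{2})$; it reproduces $R_{\varphi h\varphi h}=-F^{\prime\prime}F$ of eq. (20) and the definition $R_{h}=-F^{\prime\prime}/F$ of eq. (24):

```python
import sympy as sp

h, phi = sp.symbols('h phi')
F = sp.Function('F')(h)

# Toy two-dimensional field space: ds^2 = dh^2 + F(h)^2 dphi^2
g = sp.Matrix([[1, 0], [0, F**2]])
ginv = g.inv()
coords = [h, phi]

def Gamma(i, j, k):
    # Christoffel symbols Gamma^i_{jk} of the metric g
    return sum(ginv[i, l]*(sp.diff(g[l, j], coords[k])
                           + sp.diff(g[l, k], coords[j])
                           - sp.diff(g[j, k], coords[l]))/2 for l in range(2))

def Riemann(i, j, k, l):
    # R^i_{jkl} = d_k Gamma^i_{lj} - d_l Gamma^i_{kj} + Gamma*Gamma terms
    R = sp.diff(Gamma(i, l, j), coords[k]) - sp.diff(Gamma(i, k, j), coords[l])
    R += sum(Gamma(i, k, m)*Gamma(m, l, j) - Gamma(i, l, m)*Gamma(m, k, j)
             for m in range(2))
    return sp.simplify(R)

# Lower the first index to get R_{phi h phi h}
R_phihphih = sp.simplify(sum(g[1, m]*Riemann(m, 0, 1, 0) for m in range(2)))
print(R_phihphih)                    # -F(h)*Derivative(F(h), (h, 2)) = -F''F
print(sp.simplify(R_phihphih/F**2))  # R_h = -F''/F, the definition of eq. (24)
```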
Verifying that these tensor entries appear as coefficients in the 4- and 5-point amplitudes is a matter of explicit computation: expanding our metric around the vacuum and summing over the various diagrams (see e.g. fig. 1 for those contributing to $WW\to hhh$), relations (1-5) are recovered. The $O(4)$ symmetry of our system reduces the number of independent components and amplitudes to $R_{h}$, $R_{\varphi}$ and their derivatives.
Figure 1: Diagrams for the $\mathcal{O}(s)$ contribution to the $WWhhh$
amplitude in the basis of eq. (14).
Geometry does tell us, however, that there is a frame where this computation is particularly simple: the frame where our coordinates follow geodesics, i.e. Riemann normal coordinates (RNC).
Let us then give a brief outline of RNC. One can solve the geodesic equation iteratively:
$\displaystyle\frac{d^{2}\phi^{i}}{d\sigma^{2}}+\Gamma^{i}_{jk}(\phi)\frac{d\phi^{j}}{d\sigma}\frac{d\phi^{k}}{d\sigma}=0$ (25)
in an expansion which assumes that the dependence of $\Gamma$ on $\phi$ admits a Taylor expansion, and introduce new coordinates $\phi^{\prime}$ defined to second order as
$\phi^{\prime i}=\phi^{i}+\frac{1}{2}\Gamma^{i}_{jk}(0)\phi^{j}\phi^{k}+\mathcal{O}(\phi^{3})$
together with the metric in the new coordinates, to order $\phi^{\prime 3}$ [13]:
$G(\phi^{\prime})_{ij}=G(0)_{ij}+\phi^{\prime k}\phi^{\prime
l}\frac{1}{3}R_{iklj}+\frac{1}{6}\phi^{\prime k}\phi^{\prime l}\phi^{\prime
m}\nabla_{m}R_{iklj}$
For concreteness, one can work out this transformation for our metric to find:
$\displaystyle\left(\begin{array}[]{c}h^{\prime}\\\
\varphi^{\prime}\end{array}\right)=\left(\begin{array}[]{c}h-FF^{\prime}\varphi^{2}/2\\\
\varphi^{a}+F^{\prime}h\varphi^{a}/F+\Gamma^{a}_{bc}\varphi^{b}\varphi^{c}/2\end{array}\right)+\mathcal{O}(\phi^{3})$
(30)
The virtue of RNC is the reduction to parametrization-independent magnitudes, i.e. the Riemann tensor and its derivatives, the Christoffel symbols being absent in this frame. In an analogy with general relativity, this is the free-falling frame where tidal effects reveal the geometry of the space-time manifold. In practice, there are no 3-point amplitudes (they are reinstated, however, once we account for massive states) and the interacting Lagrangian at 4-point reads:
$\displaystyle\mathcal{L}_{4}^{\rm RNC}=$
$\displaystyle\frac{1}{6}R_{hahb}\left(2h\partial
h\varphi^{a}\partial\varphi^{b}-(\partial
h)^{2}\varphi^{a}\varphi^{b}-h^{2}\partial\varphi^{a}\partial\varphi^{b}\right)$
$\displaystyle+\frac{1}{6}R_{abcd}\partial\varphi^{a}\varphi^{b}\varphi^{c}\partial\varphi^{d}$
(31)
The first line gives the Feynman rule for the $\varphi^{a}(p_{1})\,\varphi^{b}(p_{2})\,h(p_{3})\,h(p_{4})$ vertex,
$\displaystyle\frac{iR_{ahbh}}{3}\left((p_{1}+p_{2})\cdot(p_{3}+p_{4})+2p_{1}\cdot p_{2}+2p_{3}\cdot p_{4}\right)$ (34)
which, evaluated on-shell, is the sole diagram needed to compute $\mathcal{A}_{WW\to hh}$ in this frame. For 5-point vertices, we have
$\displaystyle\mathcal{L}_{5}^{\rm RNC}=$
$\displaystyle\frac{1}{12}(\nabla_{h}R_{\partial\varphi
hh\partial\varphi}+\nabla_{h}R_{\partial h\varphi\varphi\partial
h}+2\nabla_{h}R_{\partial h\varphi h\partial\varphi})$
$\displaystyle+\frac{1}{12}(\nabla_{h}R_{\partial\varphi\varphi\varphi\partial\varphi}+2\nabla_{\varphi}R_{\partial\varphi
h\varphi\partial\varphi})$ (35)
where the term $\nabla_{\varphi}R_{\partial h\varphi\varphi\partial\varphi}$ cancels due to the antisymmetry of the Riemann tensor; and, with an abuse of notation, $V_{\varphi}=V_{a}\varphi^{a}$ and similarly for $h$. For the 5-point amplitude, again due to the absence of 3-point vertices, evaluating the Feynman rule that follows from the 5-point action yields the result (i.e. in this frame only the last diagram in fig. 1 is to be computed). Amplitudes with six or more particles in total do require a sum over diagrams and contain, in addition, poles, which can nevertheless be derived from lower-point amplitudes, see [9].
### II.2 Experimental and theory constraints on curvature
Unitarity constrains the magnitude of the curvature, and of its derivatives, for a given c.m. energy $s$. At the 4-point level, symbolically,
$\displaystyle 2\,{\rm Im}\,\mathcal{A}_{2\to 2}+\int d\Pi_{\rm LIPS}\,\mathcal{A}\,\mathcal{A}^{*}+\cdots=0$
(rendered diagrammatically in the original: twice the imaginary part of the 4-point amplitude, plus amplitude blobs sewn together by the Lorentz-invariant phase-space integral $\int\times d\Pi_{\rm LIPS}$, at tree level and at one loop, summing to zero)
where the first partial wave for $W^{+}W^{-}$ gives
$\displaystyle\left(\frac{R_{\varphi}s}{16\pi}\right)^{2}+\frac{1}{2}\left(\frac{R_{h}s}{8\pi}\right)^{2}\leq
1$ (36)
where we have accounted for the amplitude being real. One can also select the $W^{+}W^{+}$ channel, but the emphasis here is on bounds sensitive to both curvatures simultaneously, which helps to close some corners of the curvature plane.
One can use these constraints to determine the theory cut-off in terms of curvature; here, however, we turn this around to note that, given that we have explored energies up to $s\sim v^{2}$ and no new states have shown up, we can set an upper limit on the curvature.
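To make the limit concrete, here is a small numeric illustration (ours, with hypothetical reference energies) of eq. (36) solved for the largest Goldstone curvature when $R_{h}=0$, i.e. $|R_{\varphi}|\leq 16\pi/s$:

```python
import numpy as np

v = 0.246  # TeV, the electroweak scale

def max_Rphi(sqrt_s):
    """Largest |R_phi| allowed by eq. (36) at c.m. energy sqrt_s (in TeV),
    setting R_h = 0: (R_phi*s/(16*pi))^2 <= 1  =>  |R_phi| <= 16*pi/s."""
    return 16*np.pi/sqrt_s**2

for rs in (0.25, 1.0, 3.0):  # illustrative energies, in TeV
    print(f"sqrt(s) = {rs:4.2f} TeV  ->  v^2 |R_phi| <= {v**2*max_Rphi(rs):6.3f}")
```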
This limit is superseded by experimental bounds from the LHC on Higgs couplings. In the conventional parametrization, one has:
$\displaystyle
F(h)^{2}=1+2a\frac{h}{v}+b\frac{h^{2}}{v^{2}}+\mathcal{O}(h^{3})$ (37)
which gives a curvature around the origin
$\displaystyle
v^{2}\left(R_{\varphi}(0),R_{h}(0)\right)=\left(1-a^{2},-(b-a^{2})\right)$
(38)
itself related to amplitudes, after substituting (24,19,20) into (7,8),
$\displaystyle\mathcal{A}_{W^{+}_{1}W^{+}_{2}\to WW}=$ $\displaystyle
s_{12}R_{\varphi}$ (39) $\displaystyle\mathcal{A}_{W_{1}^{+}W_{2}^{-}\to hh}=$
$\displaystyle-s_{12}R_{h}$ (40)
Translating bounds on the coefficients from present and future measurements into curvature, we present the plot in fig. 2. The value of both sets of constraints is to put into context how much of the theory-consistent curvature space we have explored experimentally.
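As a sketch of this translation (our own illustration; the inputs below are hypothetical, not the quoted experimental bounds), eq. (38) maps any values of the coupling modifiers $a,b$ directly onto the curvature plane:

```python
def curvature_at_origin(a, b, v=0.246):
    """Map the Higgs coupling modifiers (a, b) of eq. (37) to curvatures at
    the origin via eq. (38): v^2 (R_phi, R_h) = (1 - a^2, -(b - a^2))."""
    return ((1 - a**2)/v**2, -(b - a**2)/v**2)

print(curvature_at_origin(1.0, 1.0))    # SM point: flat field space, (0, 0)
print(curvature_at_origin(0.95, 1.10))  # hypothetical deviation from the SM
```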
From the outer-most to inner-most region of fig. 2: the (outer-most) grey
region is excluded due to unitarity; up to the blue region is excluded by
current LHC bounds (the region is translated from bounds on $a$ in [14], and
$b$ in [15]); finally, up to the green and orange (inner-most) regions we
present expected exclusion limits for HL-LHC and FCC respectively. The
projected bounds on $R_{\varphi},R_{h}$ are derived using sensitivity
predictions of $a$ (HL-LHC, [16]; FCC-ee, [17]); and $b$ ([18] for both HL-LHC
and FCC-hh), around their SM values. All uncertainties and projected
sensitivities are displayed at the $95\%$ confidence level; where multiple
sensitivity estimates are given, the most conservative is selected. Note that the HL-LHC projections used here predate the current LHC bounds, so the seemingly marginal improvement is likely an underestimate.
Figure 2: Theoretically (grey) and experimentally (up to blue) excluded regions, at 95% confidence level, of the curvatures $R_{h},R_{\varphi}$, which are related to electroweak amplitudes as in eqs. (40,39); and sensitivity limits of future colliders (HL-LHC, up to green; FCC, up to orange), also at 95% confidence level. See text for details. The plot scales linearly within the dashed box, and logarithmically outside.
## III Correlation of curvature in SMEFT
In the linear realization and to first order (with our assumption of $O(4)$
invariance) we have:
$\displaystyle R_{\varphi}=R_{h}$ (41)
which is to say that the coefficients of $s$ in the 4-point amplitudes for $W^{+}W^{+}$ scattering and $W^{+}W^{-}\to hh$ in eqs. (40,39) are anti-correlated. Correlations do appear in the linear parametrization of SMEFT within HEFT [19], in line with what we find here; nonetheless, in this section we go to some length over how this can be derived, to display the utility of the geometric language.
A simple, if somewhat abstract, argument to show there is a correlation is to use Riemann normal coordinates and custodial symmetry around the $O(4)$-symmetric point, which admits Cartesian coordinates. In this frame, the metric reads
$\displaystyle
G_{ij}(\phi)=\delta_{ij}+\frac{1}{3}R_{iklj}\phi^{k}\phi^{l}+\mathcal{O}(\phi^{3})$
(42)
and a linear realization of $O(4)$ symmetry dictates that the Riemann tensor
be of the form $R(\delta_{il}\delta_{kj}-\delta_{kl}\delta_{ij})$, with a
single unknown $R$. A transformation from Cartesian to polar coordinates then
reveals $R_{h}=R_{\varphi}$.
The collapse of the two curvatures into a single one can also be derived by matching the two EFTs:
$\displaystyle\frac{\left(\partial h^{2}+F^{2}\partial\varphi^{2}\right)}{2}=$
$\displaystyle K\left(\frac{H^{\dagger}H}{M^{2}}\right)(\partial
H^{\dagger}H)^{2}$
$\displaystyle+G\left(\frac{H^{\dagger}H}{M^{2}}\right)D_{\mu}H^{\dagger}D^{\mu}H$
(43)
where it should be understood that, starting from a general SMEFT action, we have transformed to a basis where the Higgs singlet is canonically normalized.
This exercise yields, to order $M^{-4}$
$\displaystyle R_{\varphi}$
$\displaystyle=-3\frac{G^{\prime}(0)}{M^{2}}+\frac{H^{\dagger}H}{M^{4}}\left(2(G^{\prime}(0))^{2}-\frac{5}{2}G^{\prime\prime}(0)\right)$
(44) $\displaystyle R_{h}$
$\displaystyle=-3\frac{G^{\prime}(0)}{M^{2}}+\frac{H^{\dagger}H}{M^{4}}\left(4(G^{\prime}(0))^{2}-5G^{\prime\prime}(0)\right)$
(45)
which also reveals the correlation is lost at order $M^{-4}$.
Finally, and in direct connection with observables, one can compute the amplitudes which have been used to define our curvature, the computation itself disposing of any field redundancy. Take the non-canonically normalized action
$\displaystyle\mathcal{L}=\frac{1}{2}\frac{c_{H\Box}}{M^{2}}(\partial_{\mu}H^{\dagger}H)^{2}+\frac{c_{HDD}}{M^{2}}H^{\dagger}HD_{\mu}H^{\dagger}D^{\mu}H$
(46)
After canonically normalizing the theory, computing diagrams such as those shown in fig. 3 (noting that in this frame there is an $h^{3}$ coupling that scales with $s$ and must be accounted for) yields
$\displaystyle{\mathcal{A}}_{W^{+}W^{+}\to
W^{+}W^{+}}=\frac{s}{M^{2}}\left(c_{H\Box}-c_{HDD}\right)$ (47)
$\displaystyle{\mathcal{A}}_{W^{+}W^{-}\to
hh}=-\frac{s}{M^{2}}\left(c_{H\Box}-c_{HDD}\right)$ (48)
and hence the direct connection with SMEFT geometry as
$\displaystyle\left(R_{\varphi}\,,\,R_{h}\right)=\frac{1}{M^{2}}\left(c_{H\Box}-c_{HDD},c_{H\Box}-c_{HDD}\right)\,.$
(49)
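As a compact restatement (an illustrative helper of our own, not code from the original), eq. (49) makes the leading-order SMEFT correlation explicit:

```python
def curvatures_from_wilson(c_Hbox, c_HDD, M):
    """Eq. (49): (R_phi, R_h) = (c_Hbox - c_HDD, c_Hbox - c_HDD)/M^2 at
    leading order in SMEFT; the two entries coincide, realizing eq. (41)."""
    R = (c_Hbox - c_HDD)/M**2
    return (R, R)

print(curvatures_from_wilson(1.0, 0.2, 2.0))  # hypothetical inputs, M in TeV
```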
Figure 3: A selection of diagrams for the $WWhh$ and $WWWW$ amplitudes with
the action in eq. (46)
## IV Models as probes into HEFT
Recent studies of EFTs have shown that a UV completion might impose extra constraints on an otherwise seemingly valid EFT, as is the case for positivity constraints [20]. It should be said that these constraints do not restrict the sign of the curvatures $R_{h}$ and $R_{\varphi}$ themselves, but reveal the need for doubly-charged states if the curvature is negative [11]. It is for these reasons that this section looks at models, introducing two new representations under $O(4)$:
$\displaystyle{\bf h}$ $\displaystyle:\quad 4\quad{\rm of}\quad O(4)$ (50)
$\displaystyle\Phi$ $\displaystyle:\quad 9\quad{\rm of}\quad O(4)\,\,({\rm
traceless\,\,symmetric})$ (51) $\displaystyle S$ $\displaystyle:\quad
1\quad{\rm of}\quad O(4)$ (52)
with the results of positivity constraints suggesting $S$ and $\Phi$ will
produce positive and negative curvature respectively. Note that ${\bf h}$ is
the Higgs doublet $H$ in a real representation as
$\displaystyle\left(\tilde{H},H\right)$
$\displaystyle=\hat{\sigma}_{I}\frac{{\bf h}^{I}}{\sqrt{2}}$ (53)
with $\tilde{H}=\epsilon H^{*}$ and $\hat{\sigma}^{I}=(\sigma^{i},1)$, $\sigma^{i}$ being the Pauli matrices. We consider the addition of a 9 and a 1 separately, with respective actions
$\displaystyle\mathcal{L}_{S}=\frac{1}{2}D_{\mu}{\bf h}^{T}D^{\mu}{\bf h}+\frac{1}{2}(\partial S)^{2}-V({\bf h},S^{2})$ (54)
$\displaystyle\mathcal{L}_{\Phi}=\frac{1}{2}D_{\mu}{\bf h}^{T}D^{\mu}{\bf h}+\frac{1}{2}{\rm Tr}\left(D_{\mu}\Phi D^{\mu}\Phi\right)-V({\bf h},\Phi)$ (55)
The key distinction is whether ${\langle{\Phi}\rangle}=0$ or not, which
depends on the sign of its mass term and its mixing as induced by the
potential.
### IV.1 Only $h$ acquires a vev, SMEFT case
In this subsection we momentarily restrict the $O(4)$ symmetry to $SO(4)$ to allow for tri-linear couplings. First, for the singlet $S$ case, we take the potential
$\displaystyle V=-\frac{g_{*}m_{S}}{2}S\,{\bf
h}^{2}+\frac{m_{S}^{2}}{2}S^{2}+\frac{m_{\bf h}^{2}}{2}{\bf h}^{2}$ (56)
where extra terms allowed by the symmetry would give controlled corrections to the result, and we neglect them. Integrating out the field $S$ at tree level returns
$\displaystyle\mathcal{L}_{\rm eff}$
$\displaystyle=\frac{1}{2}\frac{g_{*}m_{S}}{2}{\bf
h}^{2}\frac{1}{\partial^{2}+m_{S}^{2}}\frac{g_{*}m_{S}}{2}{\bf h}^{2}$ (57)
$\displaystyle=\frac{g_{*}^{2}}{2}(H^{\dagger}H)^{2}+\frac{g_{*}^{2}}{2m_{S}^{2}}(\partial(H^{\dagger}H))^{2}+\mathcal{O}\left(\partial^{4}\right)$
(58)
then via eq. (49)
$\displaystyle\left(R_{\varphi},R_{h}\right)=\left(\frac{g_{*}^{2}}{m_{S}^{2}},\frac{g_{*}^{2}}{m_{S}^{2}}\right)$
(59)
i.e. positive curvature for the singlet case, as expected.
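The expansion step from eq. (57) to eq. (58) can be reproduced symbolically; a minimal sympy sketch of ours, treating $\partial^{2}$ as a formal expansion parameter:

```python
import sympy as sp

d2, mS, g = sp.symbols('d2 m_S g_*')  # d2 stands in for the operator d^2

# S-exchange of eq. (57): expand the propagator 1/(d^2 + m_S^2) in 1/m_S^2
prop = sp.series(1/(d2 + mS**2), d2, 0, 2).removeO()

# (1/2)(g m_S/2) h^2 [prop] (g m_S/2) h^2; with h^2 = 2 H^dag H the two terms
# below reproduce the (H^dag H)^2 and (d(H^dag H))^2 operators of eq. (58)
coeff = sp.expand((g*mS/2)**2*prop/2)
print(sp.collect(coeff, d2))  # g_*^2/8 - g_*^2*d2/(8*m_S^2)
```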
Along the same lines, the potential for the symmetric representation is
$\displaystyle V=-\frac{g_{*}m_{\Phi}}{2}{\bf h}^{T}\Phi{\bf
h}+\frac{m_{\Phi}^{2}}{2}\Phi^{2}+\frac{m_{\bf h}^{2}}{2}{\bf h}^{2}$ (60)
The integration now returns, to dimension six:
$\displaystyle\mathcal{L}_{\rm eff}=$ $\displaystyle\frac{g_{*}^{2}}{8}{\rm Tr}\left[\left({\bf h}{\bf h}^{T}-\frac{{\bf h}^{2}}{4}\right)\frac{m_{\Phi}^{2}}{\Box+m_{\Phi}^{2}}\left({\bf h}{\bf h}^{T}-\frac{{\bf h}^{2}}{4}\right)\right]$ (61) $\displaystyle=$ $\displaystyle\frac{3g_{*}^{2}}{8}(H^{\dagger}H)^{2}+\frac{g_{*}^{2}}{m^{2}_{\Phi}}\left(H^{\dagger}H\,D_{\mu}H^{\dagger}D^{\mu}H+\frac{(\partial H^{\dagger}H)^{2}}{8}\right)$
where $\Box=D_{\mu}D^{\mu}$, and one finds that the operator does yield negative curvature:
$\displaystyle\left(R_{\varphi},R_{h}\right)=\left(-\frac{3g_{*}^{2}}{4m_{\Phi}^{2}},-\frac{3g_{*}^{2}}{4m_{\Phi}^{2}}\right)\,.$
(62)
### IV.2 Both $\Phi$ and ${\bf h}$ break the symmetry, HEFT/SMEFT quotient space

As we will show, this case does not belong in SMEFT and stands as a representative of the quotient space. We take the extension of a Mexican-hat potential to two fields as:
$\displaystyle V(\Phi)=$
$\displaystyle-\frac{\vec{m}^{2}}{2}\cdot\left(\begin{array}[]{c}{\bf
h}^{2}\\\ \Phi^{2}\end{array}\right)+\left(\begin{array}[]{c}{\bf h}^{2}\\\
\Phi^{2}\end{array}\right)^{T}\frac{\lambda}{8}\,\left(\begin{array}[]{c}{\bf
h}^{2}\\\ \Phi^{2}\end{array}\right)$ (69)
$\displaystyle-\frac{\tilde{\lambda}}{8}{\bf h}^{T}\Phi\Phi\,{\bf
h}+\frac{\tilde{\lambda}_{\Phi}}{8}{\rm Tr}\left(\Phi\Phi\Phi\Phi\right)$ (70)
with ${\vec{m}}^{2}$ a 2-vector and $\lambda$ a $2\times 2$ symmetric matrix. Since $\Phi$ acquires a vev, we take $\tilde{\lambda}>0$, which triggers $O(4)\to O(3)$ and preserves custodial symmetry. Linear terms in the fields are absent, contrary to the previous case, since we restore $O(4)$ in place of $SO(4)$. The key point, as will be shown, is to consistently compute particle couplings and masses from an explicit potential.
The Goldstone boson Lagrangian and couplings to the radial singlet modes
$\delta h$, $\delta\Phi$ read:
$\displaystyle\mathcal{L}=$ $\displaystyle\frac{1}{2}\left((v_{\bf h}+\delta
h)^{2}+C_{9}(v_{\Phi}+\delta\Phi)^{2}\right)\frac{g_{ab}}{v^{2}}D^{\mu}\varphi^{a}D_{\mu}\varphi^{b}$
(71)
where
$\displaystyle C_{9}$ $\displaystyle=\frac{2\times 4}{4-1}\,,$ $\displaystyle
v^{2}$ $\displaystyle=v_{{\bf h}}^{2}+C_{9}v_{\Phi}^{2},$
$\displaystyle\sin\beta$ $\displaystyle=\sqrt{C_{9}}\frac{v_{\Phi}}{v},$ (72)
and
$\displaystyle{\langle{{\bf h}}\rangle}$
$\displaystyle=\left(\begin{array}[]{c}0\\\ 0\\\ 0\\\ v_{\bf
h}\end{array}\right)$ $\displaystyle{\langle{\Phi}\rangle}$
$\displaystyle=\frac{v_{\Phi}}{2\sqrt{3}}\left(\begin{array}[]{cccc}1&&&\\\
&1&&\\\ &&1&\\\ &&&-3\end{array}\right)$ (81)
the generalization of $C_{9}$ to $SO(N)$ being $C_{N(N+1)/2-1}=2N/(N-1)$. Take the mixing of the singlet radial modes $\delta{\bf h}$ and $\delta\Phi$ as (note that no other field in $\Phi$ or ${\bf h}$ is a singlet of $SO(3)$, so we know these two mix only with each other):
$\displaystyle\left(\begin{array}[]{c}\delta{\bf h}\\\
\delta\Phi\end{array}\right)=\left(\begin{array}[]{cc}\cos\omega&-\sin\omega\\\
\sin\omega&\cos\omega\end{array}\right)\left(\begin{array}[]{c}h\\\
\tilde{h}\end{array}\right)$ (88)
Putting the above back into the Lagrangian for the Goldstones and taking $h$ to be the lightest singlet, one obtains, in our basis of eqs. (14,37),
$\displaystyle a$
$\displaystyle=c_{\omega}c_{\beta}+\sqrt{C_{9}}s_{\beta}s_{\omega}$
$\displaystyle b=$ $\displaystyle c_{\omega}^{2}+C_{9}s_{\omega}^{2}$ (89)
Note that the limit of no mixing gives $b=1$ and a curvature with $R_{h}=-R_{\varphi}$, a direction orthogonal to the SMEFT one and a potential new road to the SM. The question to be answered is then: can one take $\omega=\beta=0$ while keeping $m_{\tilde{h}}\gg m_{h}$ and maintaining perturbativity?
To answer this question we should express $\omega$ and $\beta$ in terms of physical masses and couplings, then substitute into eq. (38) to find the curvature as a function of them. In practice, we have to solve the potential: the values of the fields that minimize $V$ can be read off after rearranging it as
$\displaystyle V(v_{\bf
h},v_{\Phi})=\left({\vec{v}}^{2}-2\hat{\lambda}^{-1}{\vec{m}}^{2}\right)^{T}\frac{\hat{\lambda}}{8}\left(\vec{v}^{2}-2\hat{\lambda}^{-1}{\vec{m}}^{2}\right)$
(90)
with
$\displaystyle{\vec{v}}^{2}$ $\displaystyle=2\hat{\lambda}^{-1}{\vec{m}}^{2}$
$\displaystyle\hat{\lambda}$
$\displaystyle=\lambda+\left(\begin{array}[]{cc}&-3\tilde{\lambda}/8\\\
-3\tilde{\lambda}/8&7\tilde{\lambda}_{\Phi}/12\end{array}\right)$ (93)
Next, expanding around the vevs we find the mass matrix for the singlets
$\delta{\bf h},\delta\Phi$ as
$\displaystyle M^{2}=$ $\displaystyle{\rm Diag}(v)\,\hat{\lambda}\,{\rm
Diag}(v)=U\,{\rm Diag}(m_{h}^{2},m_{\tilde{h}}^{2})\,U^{T}$ (94)
with ${\rm Diag}(v)=\delta^{ij}v_{j}$. The aim is to express $\omega,\beta$ as
$\omega(m_{h},m_{\tilde{h}},\hat{\lambda},v),\beta(m_{h},m_{\tilde{h}},\hat{\lambda},v)$,
which can be done by taking the determinant of the mass matrix
$\displaystyle{\rm det}(M^{2})$ $\displaystyle=v_{\bf h}^{2}v_{\Phi}^{2}{\rm
det}(\hat{\lambda})=m_{h}^{2}m_{\tilde{h}}^{2}$ (95)
and combining the eigenvector equations into
$\displaystyle\sin(2\omega)=\frac{2v_{\bf
h}v_{\Phi}}{m_{h}^{2}-m_{\tilde{h}}^{2}}\hat{\lambda}_{{\bf h}\Phi}$ (96)
to obtain
$\displaystyle\sin(2\omega)$
$\displaystyle=\frac{2m_{h}m_{\tilde{h}}}{m_{h}^{2}-m_{\tilde{h}}^{2}}\frac{\hat{\lambda}_{{\bf
h}\Phi}}{\sqrt{\det(\hat{\lambda})}}$ (97) $\displaystyle\sin(2\beta)$
$\displaystyle=\sqrt{C_{9}}\frac{2m_{h}m_{\tilde{h}}}{v^{2}\sqrt{\det\hat{\lambda}}}$
(98)
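Equations (94)-(97) are straightforward to check numerically; below is a small sketch of ours (with arbitrary, illustrative inputs) that diagonalizes $M^{2}={\rm Diag}(v)\,\hat{\lambda}\,{\rm Diag}(v)$ and compares the mixing angle with eq. (96):

```python
import numpy as np

vh, vPhi = 1.0, 0.3               # hypothetical vevs (arbitrary units)
lhat = np.array([[0.50, -0.10],
                 [-0.10, 0.80]])  # hypothetical \hat{lambda} matrix

D = np.diag([vh, vPhi])
M2 = D @ lhat @ D                 # eq. (94)
m2, U = np.linalg.eigh(M2)        # ascending: (m_h^2, m_htilde^2)

sin2w_direct = 2*U[0, 0]*U[1, 0]  # sin(2w) read off the mass eigenvectors
sin2w_formula = 2*vh*vPhi*lhat[0, 1]/(m2[0] - m2[1])  # eq. (96)
print(sin2w_direct, sin2w_formula)          # agree

# det(M^2) = v_h^2 v_Phi^2 det(lhat) = m_h^2 m_htilde^2, eq. (95)
print(np.linalg.det(M2), m2[0]*m2[1])
```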
No obstacle prevents taking $\omega\to 0$ with $\hat{\lambda}_{{\bf h}\Phi}\to 0$, but it is evident that $\beta$ cannot be arbitrarily close to zero while keeping $\tilde{h}$ massive and respecting unitarity. Qualitatively, then, the minimum attainable curvature is:
$\displaystyle\left(v^{2}R_{\varphi}\geq\frac{3m_{h}^{2}m_{\tilde{h}}^{2}}{8\pi^{2}v^{4}}\,,\,\,v^{2}R_{h}\leq-\frac{3m_{h}^{2}m_{\tilde{h}}^{2}}{8\pi^{2}v^{4}}\right)$
(99)
where we took the unitarity bound on $\hat{\lambda}$ that follows from the 4-point amplitude for $\delta h$ and $\delta\Phi$, see e.g. [21]. This result, being proportional to the extra state's mass, yields a naive cut-off $R=\frac{4\pi}{\Lambda^{2}}$ with an inverse dependence on the new-physics scale:
$\displaystyle\frac{\Lambda^{2}}{v^{2}}\sim\frac{(4\pi)^{3}}{\lambda_{\rm
SM}}\frac{v^{2}}{m_{\tilde{h}}^{2}}$ (100)
so that the largest cut-off, i.e. the closest one can get to the SM couplings, is attained for the lowest new-physics scale. How low this scale can be while still assuming an EFT description applies can be estimated from the amplitude for $W$ scattering mediated by the singlets in the full theory,
$\displaystyle-\mathcal{A}=\frac{s}{v^{2}}\left(1-c_{\beta}^{2}\frac{s}{s-m_{h}^{2}}-s_{\beta}^{2}\frac{s}{s-m_{\tilde{h}}^{2}}\right)+(s\to t)$ (101)
The plot in fig. 4 shows the region of the curvature plane that the models discussed in this section cover. In particular, for the minimum mass of the extra singlet we take as reference the limit $m_{\tilde{h}}\gtrsim 350$ GeV from [22].
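For orientation, evaluating the lower bound of eq. (99) at a few reference masses (our own numerical illustration):

```python
import numpy as np

mh, v = 125.0, 246.0                 # GeV: Higgs mass and electroweak vev
for mht in (350.0, 500.0, 1000.0):   # GeV: reference heavy-singlet masses
    bound = 3*mh**2*mht**2/(8*np.pi**2*v**4)
    print(f"m_htilde = {mht:6.0f} GeV  ->  v^2 R_phi >= {bound:.3f}")
```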
Figure 4: Range of curvature for SMEFT and quotient theories, on the same
background as Fig. 2. Two quotient theories are plotted: the yellow region
shows curvature for the symmetric representation with
${\langle{\Phi}\rangle}\neq 0$, and the dark-grey region shows a hyperbolic
manifold (see sec. V). The black line shows SMEFT curvature; on which the
purple and red dots represent the singlet and the symmetric representation
with ${\langle{\Phi}\rangle}=0$ examples from sec. IV respectively. The outer-
most to inner-most dots are evaluated with coupling $g_{*}=1$ and heavy
singlet mass: 500 GeV, 1 TeV, 1.5 TeV, 2 TeV and 4 TeV.
## V Manifolds
The above HEFT cases fall into the category of manifolds with a singularity,
as one can see by integrating out heavy states [7]. In contrast, one can also
have that no $O(4)$-symmetric point is present and the manifold is smooth at
every point. This section visualizes both types of manifolds, together with
those that admit a SMEFT description. Consider (higher-dimensional) cylindrical coordinates: the gauge symmetry acts by rotating about the axis, and orthogonal to this rotation we have a cylindrical radial coordinate $\rho$ and a ‘height’ $z$. Our manifolds are hypersurfaces within this 5d space, parametrized by $h$ and $\varphi^{a}$:
$\displaystyle(\rho(h)u(\varphi),z(h))$ (102)
with line element
$\displaystyle
d\ell^{2}=\left(\left(\frac{d\rho}{dh}\right)^{2}\pm\left(\frac{dz}{dh}\right)^{2}\right)dh^{2}+\rho(h)^{2}du^{2}$
(103)
which defines the 4d metric; the plus sign is for Euclidean 5d space and the minus sign for a metric of signature $(-1,1,1,1,1)$. In our basis, eq. (14), $dh^{2}$ has unit coefficient, which can always be attained by a field redefinition. In terms of geometry, the singlet Higgs field $h$ measures distance in field space at fixed $u$. From the equation above and our basis it also follows that $F(h)=\rho(h)/v$, with $F(0)=1$ giving $\rho(0)=v$. For convenience let us define $\theta=(h+h_{0})/f$, with $f$ a new-physics scale.
The most symmetric manifolds are $S^{4}$, $R^{4}$ and $\mathcal{H}^{4}$, which are parametrized in our basis as
$\displaystyle S^{4}$ $\displaystyle(f\sin(\theta)u,f\cos(\theta))$ (104)
$\displaystyle R^{4}$ $\displaystyle((h+v)u,0)$ (105)
$\displaystyle\mathcal{H}^{4}$ $\displaystyle(f\sinh(\theta)u,f\cosh(\theta))$
(106)
and yield constant (field-independent) curvature:
$\displaystyle S^{4},\,\mathcal{H}^{4}:\qquad R_{\varphi}=R_{h}=\pm\frac{1}{f^{2}}$ (107)
while the $f\to\infty$ limit yields $R^{4}$, which corresponds to the SM. Indeed, these manifolds can be described in SMEFT and correspond to composite Higgs models [23] or negative-curvature models [24].
### V.1 Quotient space theories with a singularity
A one-parameter deformation of the manifolds above takes us into quotient
space with a singularity at the origin:
$\displaystyle{\rm deformed\,}S^{4}$
$\displaystyle\left(fs_{\gamma\theta}u,\int
dh\sqrt{1-\gamma^{2}c^{2}_{\gamma\theta}}\right)$ (108) $\displaystyle{\rm
deformed\,}\mathcal{H}^{4}$ $\displaystyle\left(fsh_{\gamma\theta}u,\int
dh\sqrt{\gamma^{2}ch^{2}_{\gamma\theta}-1}\right)$ (109)
where $s_{\gamma\theta}=\sin(\gamma\theta)$, $c_{\gamma\theta}=\cos(\gamma\theta)$, and analogously $sh$, $ch$ for the hyperbolic functions; the singularity is made evident by the curvature:
$\displaystyle{\rm deformed\,}S^{4}:\quad R_{\varphi}=\frac{1-\gamma^{2}}{f^{2}s^{2}_{\gamma\theta}}+\frac{\gamma^{2}}{f^{2}},\qquad R_{h}=\frac{\gamma^{2}}{f^{2}}$ (110)
$\displaystyle{\rm deformed\,}\mathcal{H}^{4}:\quad R_{\varphi}=\frac{1-\gamma^{2}}{f^{2}sh^{2}_{\gamma\theta}}-\frac{\gamma^{2}}{f^{2}},\qquad R_{h}=-\frac{\gamma^{2}}{f^{2}}$ (111)
since the origin and would-be $O(4)$-invariant point, $\theta=0$, returns $R_{\varphi}=\infty$. This singularity is present for any $\gamma\neq\pm 1$, which seemingly presents a way to approximate the SM: send first $f\to\infty$ while keeping $fs_{\gamma\theta_{0}}$ (respectively $fsh_{\gamma\theta_{0}}$) $=v$ constant, then $\gamma\to 1$. Indeed, in this limit $\partial^{n}R\propto(1-\gamma^{2})$ and the contributions to amplitudes of an arbitrary number of particles cancel. Nonetheless, and quite relevantly, in this limit the singularity sits just a field distance $v/\gamma$ from the vacuum $h=0$. The model in the section above with a symmetric representation taking a vev also belongs to the quotient theories with singularities, yet it showed that the SM point cannot be reached. So it could be that the deformed manifolds have no UV completion, yet from low energy we see no indication of it. This highlights the need for a bound based purely on the EFT perspective, comprising all possibilities.
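The curvature entries of eq. (110), including the divergence at $\theta=0$, follow from the general formulas of eq. (24); a short sympy check of ours:

```python
import sympy as sp

h, h0, f, v, gam = sp.symbols('h h_0 f v gamma', positive=True)
theta = (h + h0)/f

# Deformed S^4 of eq. (108): rho(h) = f sin(gamma*theta), arc-length normalized
F = f*sp.sin(gam*theta)/v

R_h = sp.simplify(-sp.diff(F, h, 2)/F)                     # eq. (24)
R_phi = sp.simplify(1/(v**2*F**2) - sp.diff(F, h)**2/F**2)

print(R_h)    # gamma**2/f**2, as in eq. (110)
print(R_phi)  # (1 - gamma^2 cos^2)/(f^2 sin^2)
#             = (1 - gamma^2)/(f^2 s_{gamma theta}^2) + gamma^2/f^2:
#             divergent as theta -> 0 unless gamma^2 = 1
```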
Figure 5: Examples of manifolds which belong in SMEFT (a) or in quotient space (b,c,d), with the gauge symmetry acting as rotation around the $z$ axis. SMEFT manifolds in (a) correspond to: composite models (yellow), the SM (green), and negative-curvature models (blue). Quotient manifolds (b,d) are smooth, while (c) presents a singularity; both (c,d) are in a class which resembles the SM around the vacuum. For (d), part of the manifolds has been cut out for better visualization.
### V.2 Smooth quotient theories
On the other hand, one could have smooth manifolds in quotient space,
$\rho\neq 0\,\forall\,h$; we take here as examples a torus and a hyperbola (in
Euclidean space)
$\displaystyle{\rm torus}:\quad((\rho_{0}+fc_{\theta})u,\,fs_{\theta})$ (112)
$\displaystyle{\rm hyperbola}:\quad((\rho_{0}+fch_{\hat{\theta}})u,\,fsh_{\hat{\theta}})$ (113)
where $\hat{\theta}=(\hat{h}(h)+\hat{h}_{0})/f$ with
$(dh/d\hat{h})^{2}=sh_{\hat{\theta}}^{2}+ch_{\hat{\theta}}^{2}$ as follows
from our normalization in Euclidean 5d. In terms of curvature, these manifolds
give:
$\displaystyle{\rm Torus}:\quad R_{\varphi}=\frac{\cos(\theta)^{2}}{v^{2}},\qquad R_{h}=\frac{\cos(\theta)}{fv}$ (114)
$\displaystyle{\rm Hyperbola}:\quad R_{\varphi}=\frac{ch_{\hat{\theta}}^{2}}{(ch_{\hat{\theta}}^{2}+sh_{\hat{\theta}}^{2})v^{2}},\qquad R_{h}=\frac{-ch_{\hat{\theta}}}{(ch_{\hat{\theta}}^{2}+sh_{\hat{\theta}}^{2})^{2}fv}$ (115)
We see that the hyperbola does not pass through the zero-curvature point for any value of $f,\theta$, always keeping a finite distance from it, as the explicit model in the previous section did. The torus, however, does have both curvatures vanish at $\theta=\pi/2$, yet by construction the manifold is not $R^{4}$. Visually, at this point we are sitting atop the torus: up to its first two derivatives it resembles a plane, but its third derivative is non-vanishing, and indeed $R^{\prime}_{h}=1/f^{2}v$, which is bounded from below given $\rho_{0}>f$ and, for $\theta=\pi/2$, $v=\rho_{0}$.
This nonetheless illustrates the possibility of manifolds that look locally like the SM up to the $n$th derivative, yet do not go through the origin. Let us take one such set of manifolds, labelled by $n$,
$\displaystyle F_{(n)}(h)=1+\frac{h}{v}+c_{n}\left(\frac{h}{v}\right)^{n}\,,\qquad|c_{n}|>\frac{(n-1)^{n-1}}{n^{n}}$ (116)
The manifolds associated with these $F_{(n)}$ for $n=3,4,5$ are plotted in fig. 5; around $h=0$ they resemble a plane, and hence the SM, ever more accurately as $n$ increases.
## VI Obstacles in the road to the SM
We have encountered HEFT/SMEFT quotient theories which either come from smooth manifolds with no $O(4)$-invariant point, or from manifolds which get arbitrarily close to the would-be $O(4)$-invariant point, while the point itself is singular.
A number of UV-complete theories yield quotient theories with singularities at the origin. From working out an explicit example, we have seen that these can only get within a finite distance of the SM point. This explicit computation relied on knowledge of the full theory, but here we attempt to give an argument, on purely low-energy grounds, as to why quotient theories are not a road to the SM.
Let us turn to semi-classical arguments. Consider the Higgs field as sourced by a probe particle $i$ localized in a region $\sigma_{x}$ and with a mass $m_{i}>m_{h}$. This configuration is, of course, short-lived, yet for times shorter than the inverse decay rate one may consider such a system. The renormalizable linear realization gives the equation of motion (for spin 0 and 1 there is an extra $h/v$ times the source, which we drop),
$\displaystyle(-\Box-m^{2}_{h})h(x)=\frac{m_{i}}{v}J_{i}(x)$ (117)
where
$\displaystyle{\rm Spin\,1/2}:\quad J_{i}=\langle i|\bar{\psi}\psi|i\rangle$ (118)
$\displaystyle{\rm Spin\,1}:\quad J_{i}=-\langle i|m_{i}V_{\mu}V^{\mu}|i\rangle$ (119)
and the particle state is
$\displaystyle|i\rangle=\int\frac{d^{3}p}{(2\pi)^{3}}\Psi(p)\frac{a_{i,p}^{\dagger}}{\sqrt{2E_{p}}}|0\rangle$ (120)
Away from the localized source the field is
$\displaystyle h(r>\sigma_{x})=\frac{m_{i}}{v}\int\frac{d^{4}x\,d^{4}q}{(2\pi)^{4}}\frac{e^{iq(x-y)}\hat{J}_{i}(\vec{x})}{q^{2}-m_{h}^{2}}$ (121)
$\displaystyle\simeq-\frac{m_{i}}{v}\frac{e^{-m_{h}r}}{4\pi r}$ (122)
where in the second line we assumed that the current $J_{i}$ equals the probability density, an assumption we shall see justified in the non-relativistic limit.
Consider now the candidate quotient theories that resemble the Standard Model to a high degree; examples given in the previous section are the functions $F_{(n)}$ of eq. (116) and the deformed $S^{4},\mathcal{H}^{4}$ theories of eqs. (110, 111). The solution above should certainly be a good first approximation at large distances $r>1/m_{h}$, where the field value is exponentially close to the vacuum value. At shorter distances, if our candidate theories truly present a limit in which the SM couplings are recovered, the solution should still be a good approximation. The field value nonetheless increases with decreasing distance, and if there is a singularity it is, in this SM limit, just a distance $v/\gamma\simeq v$ away in field space. Conversely, for smooth quotient theories, even if our series example $F_{(n)}$ resembles the SM locally around the vacuum, the corrections in eq. (117) read $1+nc_{n}(h/v)^{n}$ with $nc_{n}\sim 1$ for $n\gg 1$, and would dominate over the SM for $h\sim v$. This is indeed the same condition for both types of theories, and it yields a naive minimum distance, or cut-off,
$\displaystyle\frac{h(\sigma_{x}<r<m_{h}^{-1})}{v}\simeq\frac{m_{i}}{v}\frac{1}{4\pi vr}$ (123)
$\displaystyle\frac{h(r_{0})}{v}\sim 1\quad{\rm for}\quad\frac{1}{r_{0}}\equiv\Lambda\sim 4\pi v\frac{v}{m_{i}}$ (124)
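As a rough numerical illustration of eqs. (123, 124), take a top-like probe with the measured values $v\simeq 246$ GeV and $m_{i}\simeq 173$ GeV (a sketch; the profile used is the Yukawa solution of eq. (122)):

```python
import numpy as np

v, m_i = 246.0, 173.0                  # GeV: electroweak vev, top-like probe

# field profile of eq. (123) in the window sigma_x < r < 1/m_h (r in GeV^-1)
h_over_v = lambda r: (m_i / v) / (4 * np.pi * v * r)

# naive cut-off of eq. (124): inverse distance at which h ~ v
Lam = 4 * np.pi * v * (v / m_i)
print(f"Lambda ~ {Lam / 1e3:.1f} TeV,  r0 ~ {1 / Lam:.2e} GeV^-1")
print(f"check: h(r0)/v = {h_over_v(1 / Lam):.2f}")   # -> 1.00 by construction
```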
This points at a cut-off an inverse coupling factor higher than other estimates based on perturbative unitarity. Nevertheless, quantum mechanics has something to say about our implicit assumption $\sigma_{x}<r_{0}$. Indeed, $r_{0}\sim(m^{2}_{i}/4\pi v^{2})m_{i}^{-1}$ is smaller than the inverse mass of the particle for perturbative couplings (which is the case in the SM), but in order to localize the particle within a distance smaller than its inverse mass, the uncertainty principle dictates a range of momenta that extends into the relativistic regime. In this high-energy limit, our current $J_{i}$ suffers a relativistic $m/E$ suppression, as explicit evaluation of the matrix elements shows when going beyond the non-relativistic approximation. For a fermion, one has
fermion, one has
$\displaystyle
J_{i}(x)=\int\frac{d^{3}pd^{3}k}{(2\pi)^{6}}\frac{\bar{u}(k)u(p)}{\sqrt{2E_{p}2E_{k}}}e^{i(p-k)x}\Psi^{*}(k)\Psi(p)$
(125)
which implies that the space integral over the source $J_{i}$ is suppressed and the field value at a distance $r>\sigma_{x}$ is
$\displaystyle\frac{h(\sigma_{x}<r)}{v}=\frac{N(m_{i}\sigma_{x})}{4\pi vr}\frac{m_{i}}{v}=\frac{N(m_{i}\sigma_{x})}{rm_{i}}\alpha_{i}$ (126)
$\displaystyle N(m_{i}\sigma_{x})=\frac{\int d^{3}p\,(m_{i}/E_{p})|\Psi(p)|^{2}}{\int d^{3}p\,|\Psi(p)|^{2}}\,,\qquad\alpha_{i}=\frac{m_{i}^{2}}{4\pi v^{2}}$ (127)
which is the same result for spin $1/2$ and $1$. This suppression implies that the pre-factor of $\alpha_{i}$ in eq. (126) is at most of order one, which would then require an order-one $\alpha_{i}$ to probe $(h/v)\sim 1$. Note that such an $\alpha_{i}$ is at the edge of perturbative unitarity, although loop corrections will be suppressed by $\sim 1/(4\pi)$.
As an estimate, we take a Gaussian distribution $\Psi\sim e^{-(p\sigma_{x})^{2}/2}$ and evaluate the potential at a distance $r=2\sigma_{x}$, which encloses 95% of the probability density, to find that with $\alpha_{i}\sim 2$ the cut-off, or inverse distance, at which we would probe $h\sim v$ is $r_{0}=0.6\,m_{i}^{-1}$,
$\displaystyle\Lambda\sim\sqrt{\frac{8\pi\sigma_{x}m_{i}}{N(m_{i}\sigma_{x})}}\Bigg|_{m_{i}\sigma_{x}\sim 0.3}v\simeq 2{\,\rm TeV}\,.$ (128)
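The suppression factor $N$ of eq. (127) is straightforward to evaluate numerically for the Gaussian profile above; a sketch (the quadrature grid and momentum cut-off are arbitrary choices of ours):

```python
import numpy as np

def N(m_sigma, pmax=40.0, n=200_000):
    """Relativistic suppression factor of eq. (127) for
    Psi(p) ~ exp(-(p sigma_x)^2 / 2), working in units sigma_x = 1."""
    p = np.linspace(1e-6, pmax, n)
    w = p**2 * np.exp(-p**2)           # d^3p |Psi(p)|^2, angles integrated
    E = np.sqrt(p**2 + m_sigma**2)     # E_p with m_i sigma_x = m_sigma
    return np.trapz(w * m_sigma / E, p) / np.trapz(w, p)

for ms in (0.3, 1.0, 3.0):
    print(f"m_i sigma_x = {ms}:  N = {N(ms):.2f}")
```

For $m_{i}\sigma_{x}\sim 0.3$ this gives $N\sim 0.3$, i.e. a sizeable suppression of the sourced field, in line with the discussion above.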
The nature of EWSB and the question of whether a symmetric $O(4)$ point exists should be independent of the introduction of our probe particle $i$, although admittedly the fact that one would require couplings at the perturbative edge makes the above a rough estimate.
The naive scaling from eq. (123) does, however, point towards the typical
scale for non-perturbative effects. This is indeed the natural scale for
answering non-local questions about our theory. While the detailed study of
this effect will be presented elsewhere [25] here we sketch the modifications
in a well known non-perturbative effect, sphalerons, whose energy is
$\displaystyle E_{\rm sph}\sim\frac{4\pi v}{g}$ (129)
In particular, the topological argument by Manton [26] involves a loop (parametrized by $\mu$) of mappings from the sphere at spatial infinity to the vacuum manifold, characterized by our unit vector $u$, i.e. $u(\theta,\phi;\mu)$, and holds regardless of the Higgs singlet's role. Nonetheless, the boundary conditions used to find the energy of the potential barrier have to be drastically changed in quotient theories. Indeed, the proposed field at the top of the barrier, $\mu=\pi/2$, in [26] is (${\bf h}=h(r)u$)
$\displaystyle{\bf h}=h(r)\begin{pmatrix}s_{\mu}s_{\theta}c_{\phi}\\ s_{\mu}s_{\theta}s_{\phi}\\ s_{\mu}c_{\mu}(c_{\theta}-1)\\ s_{\mu}^{2}c_{\theta}+c_{\mu}^{2}\end{pmatrix}\,,\qquad{\rm B.C.}\;\left\{\begin{array}{l}h(0)=0\,,\\ h(\infty)=v\,.\end{array}\right.$ (136)
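As a consistency check, the four-component vector multiplying $h(r)$ in eq. (136) is a unit vector for all $\mu,\theta,\phi$; a short symbolic computation confirms it:

```python
import sympy as sp

mu, th, ph = sp.symbols('mu theta phi', real=True)
s, c = sp.sin, sp.cos

# unit-vector part of Manton's barrier configuration, eq. (136)
u = sp.Matrix([s(mu) * s(th) * c(ph),
               s(mu) * s(th) * s(ph),
               s(mu) * c(mu) * (c(th) - 1),
               s(mu)**2 * c(th) + c(mu)**2])

print(sp.simplify(u.dot(u)))   # -> 1, so |h| = h(r) everywhere
```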
In particular, the condition at the origin, that the Higgs field go to its symmetry-preserving $O(4)$ point, is demanded in order to remove the dependence of the Higgs doublet on the angular variables $\theta,\phi$ at the origin, where they are ill-defined. For quotient theories it is clear that this does not apply, given that an $O(4)$ point is absent or singular. One can instead introduce a radial dependence in $u$ itself such that
$\displaystyle u(\theta,\phi,r\to\infty)=u_{\infty}\,,\qquad u(\theta,\phi,r\to 0)\to u_{0}\,.$ (137)
The boundary condition on $h$ would then naively be $h^{\prime}(0)=0$. In either case, the quotient-theory effect is an order-one modification, which serves as a handle to tell quotient theories apart from the Standard Model.
## VII Summary
This work studied quotient-space HEFT/SMEFT and the potential limits that recover the SM other than via SMEFT, with the use of a geometric formulation. Explicit examples, which include perturbative UV-complete models, can and will be told apart from the SMEFT case by future experiments via the projection of measurements onto the curvature plane defined from the $WW$ scattering and $WW\to hh$ amplitudes (see fig. 4). These examples of quotient-space HEFT/SMEFT theories do not offer a limit in which the SM is recovered and possess a finite cut-off. In contrast to these, quotient theories were formulated in sec. V which resemble the SM amplitudes to arbitrary precision and for any number of particles. While these theories look like the SM around the vacuum, at a Higgs-singlet distance of $\sim v$ they reveal their quotient-space nature. Making use of semi-classical arguments to displace the Higgs field by $\sim v$, we find an argument for general theories in quotient space to be distinguishable from the SM when probing the theory at an energy (inverse distance) of at most $4\pi v/g_{\rm SM}$. Our discussion applies to quotient theories both with and without singularities (non-analyticities). The most pressing outstanding question is the characterization of the experimental signatures that follow from the semi-classical arguments given here.
## VIII Acknowledgements
R. A. and M.W. are supported by the STFC under Grant No. ST/P001246/1.
## References
* Brivio and Trott [2019] I. Brivio and M. Trott, The Standard Model as an Effective Field Theory, Phys. Rept. 793, 1 (2019), arXiv:1706.08945 [hep-ph] .
* Appelquist and Carazzone [1975] T. Appelquist and J. Carazzone, Infrared Singularities and Massive Fields, Phys. Rev. D 11, 2856 (1975).
* Feruglio [1993] F. Feruglio, The Chiral approach to the electroweak interactions, Int. J. Mod. Phys. A 8, 4937 (1993), arXiv:hep-ph/9301281 .
* Grinstein and Trott [2007] B. Grinstein and M. Trott, A Higgs-Higgs bound state due to new physics at a TeV, Phys. Rev. D 76, 073002 (2007), arXiv:0704.1505 [hep-ph] .
* Alonso _et al._ [2016a] R. Alonso, E. E. Jenkins, and A. V. Manohar, A Geometric Formulation of Higgs Effective Field Theory: Measuring the Curvature of Scalar Field Space, Phys. Lett. B 754, 335 (2016a), arXiv:1511.00724 [hep-ph] .
* Alonso _et al._ [2016b] R. Alonso, E. E. Jenkins, and A. V. Manohar, Geometry of the Scalar Sector, JHEP 08, 101, arXiv:1605.03602 [hep-ph] .
* Cohen _et al._ [2021a] T. Cohen, N. Craig, X. Lu, and D. Sutherland, Is SMEFT Enough?, JHEP 03, 237, arXiv:2008.08597 [hep-ph] .
* Helset _et al._ [2020] A. Helset, A. Martin, and M. Trott, The Geometric Standard Model Effective Field Theory, JHEP 03, 163, arXiv:2001.01453 [hep-ph] .
* Cohen _et al._ [2021b] T. Cohen, N. Craig, X. Lu, and D. Sutherland, Unitarity Violation and the Geometry of Higgs EFTs, (2021b), arXiv:2108.03240 [hep-ph] .
* Falkowski and Rattazzi [2019] A. Falkowski and R. Rattazzi, Which EFT, JHEP 10, 255, arXiv:1902.05936 [hep-ph] .
* Falkowski _et al._ [2012] A. Falkowski, S. Rychkov, and A. Urbano, What if the Higgs couplings to W and Z bosons are larger than in the Standard Model?, JHEP 04, 073, arXiv:1202.1532 [hep-ph] .
* Coleman _et al._ [1969] S. Coleman, J. Wess, and B. Zumino, Structure of phenomenological lagrangians. i, Phys. Rev. 177, 2239 (1969).
* Hatzinikitas [2000] A. Hatzinikitas, A Note on Riemann normal coordinates, (2000), arXiv:hep-th/0001078 .
* Aad _et al._ [2020] G. Aad _et al._ (ATLAS), Combined measurements of Higgs boson production and decay using up to $80$ fb-1 of proton-proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment, Phys. Rev. D 101, 012002 (2020), arXiv:1909.02845 [hep-ex] .
* CMS [2021] Search for Higgs boson pair production via vector boson fusion with highly Lorentz-boosted Higgs bosons in the four b quark final state at $\sqrt{s}=13$ TeV, (2021).
* Cepeda _et al._ [2019] M. Cepeda _et al._ , Report from Working Group 2: Higgs Physics at the HL-LHC and HE-LHC, CERN Yellow Rep. Monogr. 7, 221 (2019), arXiv:1902.00134 [hep-ph] .
* Abada _et al._ [2019] A. Abada _et al._ (FCC), FCC-ee: The Lepton Collider: Future Circular Collider Conceptual Design Report Volume 2, Eur. Phys. J. ST 228, 261 (2019).
* Bishara _et al._ [2017] F. Bishara, R. Contino, and J. Rojo, Higgs pair production in vector-boson fusion at the LHC and beyond, Eur. Phys. J. C 77, 481 (2017), arXiv:1611.03860 [hep-ph] .
* Brivio _et al._ [2014] I. Brivio, T. Corbett, O. J. P. Éboli, M. B. Gavela, J. Gonzalez-Fraile, M. C. Gonzalez-Garcia, L. Merlo, and S. Rigolin, Disentangling a dynamical Higgs, JHEP 03, 024, arXiv:1311.1823 [hep-ph] .
* Adams _et al._ [2006] A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis, and R. Rattazzi, Causality, analyticity and an IR obstruction to UV completion, JHEP 10, 014, arXiv:hep-th/0602178 .
* Lee _et al._ [1977] B. W. Lee, C. Quigg, and H. B. Thacker, Weak Interactions at Very High-Energies: The Role of the Higgs Boson Mass, Phys. Rev. D 16, 1519 (1977).
* Aaboud _et al._ [2018] M. Aaboud _et al._ (ATLAS), Search for heavy resonances decaying into $WW$ in the $e\nu\mu\nu$ final state in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, Eur. Phys. J. C 78, 24 (2018), arXiv:1710.01123 [hep-ex] .
* Panico and Wulzer [2016] G. Panico and A. Wulzer, _The Composite Nambu-Goldstone Higgs_, Vol. 913 (Springer, 2016) arXiv:1506.01961 [hep-ph] .
* Alonso _et al._ [2016c] R. Alonso, E. E. Jenkins, and A. V. Manohar, Sigma Models with Negative Curvature, Phys. Lett. B 756, 358 (2016c), arXiv:1602.00706 [hep-ph] .
* [25] R. Alonso and M. West, _In preparation_.
* Manton [1983] N. S. Manton, Topology in the Weinberg-Salam Theory, Phys. Rev. D 28, 2019 (1983).
¹ Ben-Gurion University of the Negev
² Academic College of Tel Aviv-Yafo
# Unsupervised learning of text line segmentation by differentiating coarse
patterns
Berat Kurar Barakat(✉)¹, Ahmad Droby¹, Raid Saabni², Jihad El-Sana¹
###### Abstract
Despite recent advances in the field of supervised deep learning for text line
segmentation, unsupervised deep learning solutions are beginning to gain
popularity. In this paper, we present an unsupervised deep learning method
that embeds document image patches to a compact Euclidean space where
distances correspond to a coarse text line pattern similarity. Once this space
has been produced, text line segmentation can be easily implemented using
standard techniques with the embedded feature vectors. To train the model, we
extract random pairs of document image patches with the assumption that
neighbour patches contain a similar coarse trend of text lines, whereas if one
of them is rotated, they contain different coarse trends of text lines. Doing
well on this task requires the model to learn to recognize the text lines and
their salient parts. The benefit of our approach is zero manual labelling
effort. We evaluate the method qualitatively and quantitatively on several
variants of text line segmentation datasets to demonstrate its effectiveness.
###### Keywords:
Text line segmentation · Text line extraction · Text line detection · Unsupervised deep learning.
Figure 1: The proposed method learns an embedding space in an unsupervised
manner such that the distances between the embedded image patches correspond
to the similarity of the coarse text line pattern they include.
## 1 Introduction
Text line segmentation is a central task in document image analysis. Basically, text line segmentation comprises text line detection and text line extraction. Text line detection is a coarse representation of text lines, in terms of baselines or blob lines. Text line extraction is a fine-grained representation of text lines, in terms of pixel labels or bounding polygons. Once text line detection is achieved, text line extraction is trivial using standard tools. However, text line detection is challenging due to the prevalence of irregular texture regions in handwriting.
A document image patch contains a coarse trend of text lines. The human visual system can easily track these trend lines (Figure 2), but a computer algorithm cannot, due to the textured structure of each text line at fine detail. Inspired by this fact, we hypothesize that a convolutional network can be trained in an unsupervised manner to map document image patches to some vector space such that patches with the same coarse text line pattern are proximate and patches with different coarse text line patterns are distant. We can assume that two neighbouring patches contain the same coarse text line pattern, and different coarse text line patterns if one of them is rotated 90 degrees. Doing well on this task requires the model to learn to recognize the text lines and their salient parts. Hence the embedded features of document patches can also be used to discriminate differences in the horizontal text line patterns they contain. Clustering the patches of a document page by projecting their vectors onto the three principal directions yields a pseudo-rgb image where coarse text line patterns correspond to similar colours (Figure 1). The pseudo-rgb image can then be thresholded into blob lines that strike through the text lines and guide an energy minimization function for extracting the text lines.
The proposed method has been evaluated on two publicly available handwritten document datasets. The results demonstrate that this unsupervised learning method provides interesting text line segmentation results on handwritten document images.
Figure 2: The human visual system can easily perceive the coarse trend of handwritten text lines. Children aged 5, 7, 10 and 11 can segment the text lines even though they are written in a language they cannot read.
## 2 Related work
The recent trend in solving the handwritten text line segmentation problem is
to employ deep networks that learn the representation directly from the pixels
of the image rather than using engineered features [7]. These methods use a
large dataset of labelled text lines to acquire the texture variances due to
different font types, font sizes, and orientations.
Early attempts formulated text line segmentation as a supervised binary dense prediction problem: given a document image, a Fully Convolutional Network (FCN) [17] is trained to densely predict whether or not a pixel is a text line pixel. However, the question that arises here is: which pixels belong to a text line? Foreground pixels alone cannot discriminate one text line from the others, because the FCN output is a semantic segmentation in which multiple instances of the same object are not separated. Very recently, text line segmentation has been formulated as an instance segmentation problem using Mask-RCNN [10], with results available in [14]. In contrast, when using FCN, each text line is represented as a single connected component. This component can be either a blob line [22, 20, 16, 18, 14] that strikes through the main body area of the characters belonging to a text line, or a baseline [9] that passes through the bottom part of the main body of those characters. FCNs are very successful at detecting handwritten text lines [7]. However, the scarcity of labelled data causes rarely occurring curved text lines to be poorly detected. This problem has been handled via augmentation [9] or learning-free detection [13].
Both text line representations, blob line and baseline, are coarse-grained: they do not fully label all the pixels of a text line but only detect its spatial location. There are metrics that can evaluate detected baselines [7, 20, 18] or blob lines [16]. Alternatively, the detected spatial locations of text lines are used to further extract the pixels of the text lines. Some of these extraction methods assume horizontal text lines [22], whereas others can extract text lines at any orientation, with any font type and font size [14]. Text line extraction is evaluated by classical image segmentation metrics [7].
Deep networks have markedly increased handwritten text line segmentation performance through their ability to learn comprehensive visual features. However, they need to leverage large labelled datasets, which in turn brings a costly human annotation effort. Learning-free algorithms would be a natural solution, but they still do not achieve the state of the art [12] except when used in hybrid with deep networks [1]. Another solution would be unsupervised learning methods; the main concern is then to find an objective function that leads a representation to capture text lines even though they are not labelled. Kurar et al. [15] formulated this concern as the question of whether a given document patch contains a text line or a space line, with the answer based on a human-adjusted score. In this paper, we propose an unsupervised text line segmentation method that trains a deep network to answer whether two document image patches contain the same coarse text line pattern or different ones. The network is urged to learn the salient features of text lines in order to answer this question.
## 3 Method
Unsupervised learning of text line segmentation is a three-stage method (Figure 3). The first stage relies on a deep convolutional network that can predict a relative similarity for a pair of patches and embed the patches into feature vectors. The similarity of two patches in a document image correlates with their text line orientation, assuming that neighbouring patches contain the same orientation. The second stage generates a pseudo-rgb image using the first three principal components of the feature vectors obtained from the first stage. The pseudo-rgb image is then thresholded to detect the blob lines that strike through the text lines. The final stage performs pixel labelling of the text lines using an energy minimization function guided by the detected blob lines.
Figure 3: Given a handwritten document image (a), the first stage extracts feature vectors of image patches such that patches with similar text line trends are close in the space. The second stage clusters the patches of a document image according to the first three principal components of their feature vectors; this stage outputs a pseudo-rgb image (b), which is then thresholded into blob lines (c) that strike through the text lines. Energy minimization with the assistance of the detected blob lines extracts the pixel labels of the text lines (d).
### 3.1 Deep convolutional network
Convolutional networks are well known to learn complex image representations
from raw pixels. We aim for the convolutional network to learn the coarse trend of text lines, training it to predict the similarity of a pair of patches in terms of text line orientation. In a given document image, neighbouring patches
would contain the same coarse trend of text lines. Therefore, the network is
expected to learn a feature embedding such that the patches that contain the
same text line pattern would be close in the space.
To achieve this we use a pair of convolutional networks with shared weights
such that the same embedding function is computed for both patches. Each
convolutional branch processes only one of the patches hence the network
performs most of the semantic reasoning for each patch separately.
Consequently, the feature representations are concatenated and fed to fully
connected layers in order to predict whether the two image patches are similar
or different.
The architecture of the branches is based on AlexNet [11] and through
experiments we tune the hyperparameters to fit our task. Each of the branches
has five convolutional layers as presented in Figure 4. Dotted lines indicate
identical weights, and the numbers in parentheses are the number of filters,
filter size and stride. All convolutional and fully connected layers are
followed by ReLU activation functions, except fc5, which feeds into a sigmoid
binary classifier.
Figure 4: Convolutional network architecture for pair similarity. Dotted lines
stand for identical weights, conv stands for convolutional layer, fc stands
for fully connected layer and pool is a max pooling layer.
#### 3.1.1 Pair generation
Given a document image, we sample the first patch uniformly from regions
containing foreground pixels. Given the position of the first patch we sample
the second patch randomly from the eight possible neighbouring locations. We
include a gap and jitter between patches in order to prevent cues like
boundary patterns or lines continuing between patches. Neighbouring patches in
a document image can be assumed to contain the same text line orientation and
are labeled as similar pairs. Different pairs are generated by rotating the
second patch $90$ degrees. Additionally for both, the similar pairs and the
different pairs, the second patches are randomly rotated $0$ degrees or
rotated $180$ degrees or flipped. Pair generation is demonstrated in Figure 5.
In the case of fluctuating or skewed text lines, similarity does not correlate with proximity. However, in a document image where almost all text lines are horizontal, such dissimilar but close patches are rare.
Figure 5: The pairs are generated with the assumption that neighbouring
patches contain similar text line trends. Different pairs are generated by
rotating one of the patches 90 degrees. Both, the similar and different, pairs
are augmented by randomly rotating one of the patches $0$ degrees or $180$
degrees or flipping.
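A sketch of this sampling scheme is given below, assuming a binarized page array with foreground pixels set to 1; the gap and jitter magnitudes are illustrative:

```python
import numpy as np

def sample_pair(page, p=350, gap=24, jitter=8, rng=np.random):
    """Return (patch1, patch2, label): label 1 = similar neighbours,
    label 0 = different (second patch rotated 90 degrees)."""
    h, w = page.shape
    # first patch: sampled until it contains foreground pixels
    while True:
        y, x = rng.randint(0, h - p), rng.randint(0, w - p)
        if page[y:y + p, x:x + p].any():
            break
    # second patch: one of the 8 neighbour locations, with gap and jitter
    dy, dx = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
    if dy == 0 and dx == 0:
        dx = 1
    y2 = int(np.clip(y + dy * (p + gap) + rng.randint(-jitter, jitter + 1), 0, h - p))
    x2 = int(np.clip(x + dx * (p + gap) + rng.randint(-jitter, jitter + 1), 0, w - p))
    a, b = page[y:y + p, x:x + p], page[y2:y2 + p, x2:x2 + p]

    label = rng.randint(0, 2)
    if label == 0:                       # different pair: rotate 90 degrees
        b = np.rot90(b)
    aug = rng.randint(0, 3)              # augment both classes:
    b = np.fliplr(b) if aug == 2 else np.rot90(b, 2 * aug)  # 0, 180 or flip
    return a, b, label
```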
#### 3.1.2 Training
For each dataset we train the model from scratch using $n_{p}$ pairs:
$n_{p}=\frac{h_{a}\times w_{a}}{p\times p}\times n_{d}$ (1)
where $h_{a}$ and $w_{a}$ are the average document image height and width in
the dataset, $p$ is the patch size, and $n_{d}$ is the number of document
images in the set. The learning rate is $0.00001$, the batch size is $8$ and
the optimizing algorithm is Adam. We continue training until there is no
improvement on the validation accuracy with a patience of $7$ epochs and save
the model with the best validation accuracy for the next stage.
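Eq. (1) in code form, together with the training settings quoted above (the page dimensions in the example call are invented):

```python
def num_pairs(h_avg, w_avg, p, n_docs):
    """Number of training pairs n_p, eq. (1)."""
    return int(h_avg * w_avg / p**2 * n_docs)

# training configuration used above: Adam, learning rate 1e-5, batch size 8,
# early stopping on validation accuracy with patience 7
print(num_pairs(h_avg=3000, w_avg=2000, p=350, n_docs=30))  # -> 1469 pairs
```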
### 3.2 Pseudo-rgb image
The convolutional network performs most of the semantic reasoning for each
patch separately because only three layers receive input from both patches.
Hence we can use a single branch to extract the significant features of
patches. This embeds every patch into a feature vector of $512$ dimensions. To
visualize the features of a complete document image, a sliding window of the
size $p\times p$ is used, but only the inner window of the size $w\times w$ is
considered to increase the representation resolution. We also pad the document
image with background pixels at its right and bottom sides if its size is not
an integer multiple of the sliding window size. An additional padding is added
at the four sides of the document image so that only the central part of the sliding window is considered. As a result, a document image of size $h_{d}\times w_{d}$ is mapped to a representation matrix of size $\frac{h_{d}}{w}\times\frac{w_{d}}{w}\times 512$. We project the $512D$ vectors onto their first three principal components and use these components to construct a pseudo-rgb image in which similar patches are assigned similar colours (Figure 3(b)). The binary blob line image is obtained by thresholding the pseudo-rgb image (Figure 3(c)).
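A sketch of this stage, reusing the `embed` branch of the pair-similarity network sketched in Section 3.1; for brevity it omits the padding and inner-window bookkeeping described above:

```python
import numpy as np
import torch
from sklearn.decomposition import PCA

@torch.no_grad()
def embed_page(page, model, p=350, w=20):
    """Slide a p x p window with stride w (the central window size) and
    embed each patch with a single branch of the trained network."""
    H, W = page.shape
    feats = [[model.embed(torch.from_numpy(page[y:y + p, x:x + p])
                          .float()[None, None]).squeeze(0).numpy()
              for x in range(0, W - p + 1, w)]
             for y in range(0, H - p + 1, w)]
    return np.asarray(feats)                        # shape (H', W', 512)

def pseudo_rgb(feats):
    """First three principal components of the embeddings, scaled to [0, 1]."""
    h, w, d = feats.shape
    pcs = PCA(n_components=3).fit_transform(feats.reshape(-1, d))
    pcs = (pcs - pcs.min(0)) / (pcs.max(0) - pcs.min(0) + 1e-9)
    return pcs.reshape(h, w, 3)
```

The pseudo-rgb image can then be thresholded (e.g. with Otsu's method on its intensity) to obtain the binary blob line image.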
### 3.3 Energy minimization
We adopt the energy minimization framework [4] that uses graph cuts to
approximate the minimum of an arbitrary function. We adapt the energy function to work with connected components for extracting the text lines. The minimum of the adapted function corresponds to a good extraction, which urges assigning components to the label of the closest blob line while straining to assign closer components to the same label (Figure 3(d)). A touching component $c$ shared among different blob lines is split by assigning each pixel in $c$ to the label of the closest blob line.
Let $\mathcal{L}$ be the set of binary blob lines, and $\mathcal{C}$ be the
set of components in the binary document image. Energy minimization finds a
labeling $f$ that assigns each component $c\in\mathcal{C}$ to a label
$l_{c}\in\mathcal{L}$, where energy function $\textbf{E}(f)$ has the minimum.
$\textbf{E}(f)=\sum_{c\in{\mathcal{C}}}D(c,\ell_{c})+\sum_{\{c,c^{\prime}\}\in\mathcal{N}}d(c,c^{\prime})\cdot\delta(\ell_{c}\neq\ell_{c^{\prime}})$
(2)
The term $D$ is the data cost, $d$ is the smoothness cost, and $\delta$ is an
indicator function. Data cost is the cost of assigning component $c$ to label
$l_{c}$. $D(c,\ell_{c})$ is defined to be the Euclidean distance between the
centroid of the component $c$ and the nearest neighbour pixel in blob line
$l_{c}$ for the centroid of the component $c$. Smoothness cost is the cost of
assigning neighbouring elements to different labels. Let $\mathcal{N}$ be the
set of nearest component pairs. Then $\forall\{c,c^{\prime}\}\in\mathcal{N}$
$d(c,c^{\prime})=\exp({-\beta\cdot d_{c}(c,c^{\prime})})$ (3)
where $d_{c}(c,c^{\prime})$ is the Euclidean distance between the centroids of
the components $c$ and $c^{\prime}$, and $\beta$ is defined as
$\beta=(2\left<d_{c}(c,c^{\prime})\right>)^{-1}$ (4)
$\left<\cdot\right>$ denotes expectation over all pairs of neighbouring
components [5] in a document page image.
$\delta(\ell_{c}\neq\ell_{c^{\prime}})$ is equal to $1$ if the condition
inside the parentheses holds and $0$ otherwise.
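The full function is minimized with graph cuts [4]; the sketch below keeps only the data term $D$, assigning each connected component to its nearest blob line, which already conveys the construction (scipy is our choice of tooling):

```python
import numpy as np
from scipy import ndimage

def extract_lines(binary_doc, blob_lines):
    """Assign each connected component of the binary document to the label
    of the nearest blob line (the data term D); the paper additionally
    minimizes the smoothness term of eq. (2) with graph cuts [4]."""
    comps, n = ndimage.label(binary_doc)
    blob_ids, _ = ndimage.label(blob_lines)
    # for every pixel: coordinates of the nearest blob-line pixel
    _, (iy, ix) = ndimage.distance_transform_edt(
        blob_lines == 0, return_indices=True)
    out = np.zeros_like(comps)
    for c in range(1, n + 1):
        cy, cx = ndimage.center_of_mass(comps == c)   # component centroid
        cy, cx = int(round(cy)), int(round(cx))
        out[comps == c] = blob_ids[iy[cy, cx], ix[cy, cx]]
    return out
```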
## 4 Experiments
In this section we first introduce the datasets used in the experiments. We then define the parameters of the baseline experiment and investigate the influence of the patch size and the central window size on the results. Next, we visualize patch saliency to gain insight into the unsupervised learning of text line segmentation. Finally, we discuss the limitations of the method.
### 4.1 Data
The experiments cover five datasets that differ in the challenges they pose. The VML-AHTE dataset [14] consists of Arabic handwritten documents with crowded diacritics and cramped text lines. The Pinkas dataset [3] contains slightly edge-rounded and noisy images of Hebrew handwritten documents; its ground truth is provided in the PAGE XML format [19, 6]. The Printed dataset is our private, synthetic dataset created using various font types and sizes. The ICFHR2010 dataset [8] contains modern handwriting that is heterogeneous in document resolution, text line height and skew. The ICDAR2017 dataset [21] includes three books: CB55, CSG18, and CSG863; for this dataset we run our algorithm on the main text regions presegmented according to the given ground truth. The VML-MOC dataset [2] is characterized by multiply oriented and curved handwritten text lines.
Figure 6: Train and validation logs on the VML-AHTE and ICDAR2017 datasets.
### 4.2 Baseline experiment
We choose to experiment on five datasets with different challenges in order to verify that the method generalizes. Therefore, we define a baseline experiment that sets the parameter values. There is no single best set of parameters that fits all challenges, and one can always boost the performance on a particular dataset by ad-hoc adjustment, but we wish to propose a baseline experiment that fits all challenges as well as possible. The baseline experiment sets the input patch size $p=350$ and the sliding central window size $w=20$. The results are shown in Figure 7. The convolutional network easily learns the embedding function; the validation accuracy almost always reaches over $99\%$ (Figure 6). We have preliminary experiments which suggest that increasing the number of layers, up to VGG-16 and then VGG-19, leads to blob detection as successful as with AlexNet. However, a deeper network such as ResNet does not detect blobs, probably because the receptive field of its last convolutional layer is larger.
Figure 7: The results of the baseline experiment, shown overlapped with the input images, for the Pinkas, ICFHR, AHTE, MOC, Printed, CSG-863, CB-55 and CSG-18 datasets. The result on the VML-MOC dataset is poor because the method assumes almost horizontal text lines when labelling the similar and different pairs.
### 4.3 Effect of patch size ($p$)
We have experimented with different patch sizes and found that $350\times 350$ performs well while keeping the memory overhead manageable. Figure 8 shows results using patches of various sizes. Larger patch sizes lead to compact and well-separated clusters of blob lines. At some point the performance is expected to decrease if the patch size is increased further, because the assumption that neighbouring patches are similar gradually breaks down. On the other hand, small patches do not contain a coarse trend of text line patterns, and therefore the blob lines fade out.
Figure 8: Patch size comparison by qualitative results. Each row shows an example output from each dataset (Pinkas, ICFHR, AHTE, Printed, CSG-863, CB-55, CSG-18) for patch sizes of 400, 300, 200 and 100 pixels. A patch size larger than 400 pixels could not be tried due to memory overhead. Vertical inspection shows that the method is insensitive to small variations in the patch size. Very small patches cause the blob lines to fade out because they do not contain a coarse trend of text line patterns.
### 4.4 Effect of central window size ($w$)
The input document, downsampled by a factor of the central window size, should still contain the text lines in a separable form. The document image is effectively downsampled by the central window size of the sliding window, so this factor governs how well text lines can be represented in the pseudo-rgb image. It has to be small enough that the text lines in the downsampled image are not scrambled; otherwise it is impossible to represent the detected blob lines that strike through the scrambled text lines (Figure 9). On the other hand, the computation time is inversely proportional to the central window size. We have experimented with several central window sizes and found $w=20$ to be sufficiently efficient and effective.
Figure 9: Visualization of the effect of the central window size on the ICFHR dataset, with $w=20$, $10$ and $5$ from left to right. The central window has to be small enough that the text lines in the downsampled image are not scrambled; otherwise the blob lines that strike through the text lines will be scrambled.
### 4.5 Patch saliency visualization
We visualize the features from the last convolutional layer of a single branch to gain insight into the regions the network attends to when classifying. The output of the last convolutional layer is a matrix of size $m\times m\times 512$, where $m$ is determined by the number of pooling layers and the input patch size $p$. We consider this matrix as $n=m\times m$ vectors, each with $512$ dimensions. We then take the first three principal components of these multidimensional vectors and visualize them as a pseudo-rgb image. No matter the transformation applied to the patch, the network recognizes similar salient features on every patch (Figure 10). As a result, it can segment the text lines in a document image that has been entirely transformed (Figure 11).
Figure 10: Visualization of the features from the last convolutional layer for normal, flipped, $180$-degree rotated and $90$-degree rotated patches from the Pinkas, AHTE, CSG18 and ICFHR datasets. No matter the transformation applied to the patch, the network recognizes similar salient features on every patch.
Figure 11: The trained model can segment an input document image that is entirely rotated by $90$ degrees (input and output shown for the Pinkas and AHTE datasets).
### 4.6 Limitations
Extracting the features of a document image at the patch level is computationally intensive and time-consuming. In particular, the consumed time is inversely proportional to the central window size, which has to be small enough to represent well-separated blob lines. Severely skewed or curved text lines do not comply with the assumption that neighbouring patches contain similar coarse trends of text lines. Therefore, the method cannot segment a multiply oriented and curved dataset such as VML-MOC.
## 5 Results
This section provides quantitative results on the VML-AHTE dataset and the ICDAR2017 dataset. The results are compared with other supervised and unsupervised methods. Note that the proposed method uses the same baseline-experiment parameters on all the datasets. The performance is measured using the text line segmentation evaluation metrics LIU and PIU of the ICDAR2017 competition on layout analysis [21].
### 5.1 Results on the VML-AHTE dataset
We compare our results with those of two supervised learning methods, Mask-RCNN [14] and FCN+EM [14], and an unsupervised deep learning method, UTLS [15]. Mask-RCNN is an instance segmentation algorithm which is fully supervised using the pixel labels of the text lines. The FCN+EM method [14] is fully supervised by human-annotated blob lines and uses energy minimization to extract the pixel labels of the text lines. The comparison in terms of LIU and PIU is reported in Table 1. On the VML-AHTE dataset, the proposed method outperforms the compared methods in terms of the LIU metric and is competitive in terms of the PIU metric. The error cases arise from a small number of touching blob lines. Such errors could easily be eliminated, but this is outside the focus of this paper. The advantage of the proposed method over the supervised methods is zero labelling effort. UTLS [15] also has zero labelling effort, but it requires adjusting a heuristic formula; the proposed method eliminates this formula by assuming that neighbouring patches contain the same text line patterns.
Table 1: LIU and PIU values on the VML-AHTE dataset.
 | LIU | PIU
---|---|---
Unsupervised | |
UTLS [15] | 90.94 | 83.40
Proposed method | 98.55 | 88.95
Supervised | |
Mask-RCNN [14] | 93.08 | 86.97
FCN+EM [14] | 94.52 | 90.01
### 5.2 Results on the ICDAR2017 dataset
The second evaluation is carried out on the ICDAR2017 dataset [21]. We run our algorithm on the text block areas presegmented according to the given ground truth. Hence, we can compare our results with the unsupervised System-8 and System-9, which are based on a layout analysis prior to text line segmentation. The comparison in terms of LIU and PIU is reported in Table 2. The main challenge in this dataset for the proposed method is text line parts that stand alone, with no other text lines above or below them; since this is a rare case, the learning system treats them as insignificant noise. The performance of the proposed method on the ICDAR2017 dataset is on par with the performances of the two unsupervised systems, but these methods would probably need to be readjusted for each new dataset, whereas the proposed method has been tested using the same parameters on all the considered datasets.
Table 2: LIU and PIU values on the ICDAR2017 dataset.
 | CB55 LIU | CB55 PIU | CSG18 LIU | CSG18 PIU | CSG863 LIU | CSG863 PIU
---|---|---|---|---|---|---
Unsupervised | | | | | |
UTLS [15] | 80.35 | 77.30 | 94.30 | 95.50 | 90.58 | 89.40
System-8 | 99.33 | 93.75 | 94.90 | 94.47 | 96.75 | 90.81
System-9+4.1 | 98.04 | 96.67 | 96.91 | 96.93 | 98.62 | 97.54
Proposed method | 93.45 | 90.90 | 97.25 | 96.90 | 92.61 | 91.50
## 6 Conclusion
We presented a novel method for unsupervised deep learning of handwritten text line segmentation. It is based on the assumption that in a document image with almost horizontal text lines, neighbouring patches contain a similar coarse pattern of text lines; hence, if one of the neighbouring patches is rotated by $90$ degrees, they contain different coarse patterns of text lines. A network trained to embed similar patches close together and different patches far apart can extract interpretable features for text line segmentation. The method is insensitive to small variations in the input patch size but requires a careful selection of the central window size. We also demonstrated that entirely rotated document images can be segmented with the same model. The method is effective at detecting cramped, crowded and touching text lines and can surpass supervised learning methods in detection, while achieving comparable results in text line extraction.
## Acknowledgment
The authors would like to thank Gunes Cevik and Hamza Barakat for helping in
data preparation. This research was partially supported by The Frankel Center
for Computer Science at Ben-Gurion University.
## References
* [1] Alberti, M., Vögtlin, L., Pondenkandath, V., Seuret, M., Ingold, R., Liwicki, M.: Labeling, cutting, grouping: An efficient text line segmentation method for medieval manuscripts. In: ICDAR. pp. 1200–1206. IEEE (2019)
* [2] Barakat, B.K., Cohen, R., Rabaev, I., El-Sana, J.: VML-MOC: Segmenting a multiply oriented and curved handwritten text line dataset. In: ICDARW. vol. 6, pp. 13–18. IEEE (2019)
* [3] Barakat, B.K., El-Sana, J., Rabaev, I.: The pinkas dataset. In: ICDAR. pp. 732–737. IEEE (2019)
* [4] Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
* [5] Boykov, Y.Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in nd images. In: ICCV. vol. 1, pp. 105–112. IEEE (2001)
* [6] Clausner, C., Pletschacher, S., Antonacopoulos, A.: Aletheia-an advanced document layout and text ground-truthing system for production environments. In: ICDAR. pp. 48–52. IEEE (2011)
* [7] Diem, M., Kleber, F., Fiel, S., Grüning, T., Gatos, B.: cbad: Icdar2017 competition on baseline detection. In: ICDAR. vol. 1, pp. 1355–1360. IEEE (2017)
* [8] Gatos, B., Stamatopoulos, N., Louloudis, G.: ICFHR 2010 handwriting segmentation contest. In: ICFHR. pp. 737–742. IEEE (2010)
* [9] Grüning, T., Leifert, G., Strauß, T., Michael, J., Labahn, R.: A two-stage method for text line detection in historical documents. IJDAR 22(3), 285–302 (2019)
* [10] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: ICCV. pp. 2961–2969 (2017)
* [11] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. pp. 1097–1105 (2012)
* [12] Kurar Barakat, B., Cohen, R., Droby, A., Rabaev, I., El-Sana, J.: Learning-free text line segmentation for historical handwritten documents. Applied Sciences 10(22), 8276 (2020)
* [13] Kurar Barakat, B., Cohen, R., El-Sana, J.: Vml-moc: Segmenting a multiply oriented and curved handwritten text line dataset. In: ICDARW. vol. 6, pp. 13–18. IEEE (2019)
* [14] Kurar Barakat, B., Droby, A., Alaasam, R., Madi, B., Rabaev, I., El-Sana, J.: Text line extraction using fully convolutional network and energy minimization. In: PatReCH. pp. 3651–3656. IEEE (2020)
* [15] Kurar Barakat, B., Droby, A., Alasam, R., Madi, B., Rabaev, I., Shammes, R., El-Sana, J.: Unsupervised deep learning for text line segmentation. In: ICPR. pp. 3651–3656. IEEE (2020)
* [16] Kurar Barakat, B., Droby, A., Kassis, M., El-Sana, J.: Text line segmentation for challenging handwritten document images using fully convolutional network. In: ICFHR. pp. 374–379. IEEE (2018)
* [17] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR. pp. 3431–3440 (2015)
* [18] Mechi, O., Mehri, M., Ingold, R., Amara, N.E.B.: Text line segmentation in historical document images using an adaptive u-net architecture. In: ICDAR. pp. 369–374. IEEE (2019)
* [19] Pletschacher, S., Antonacopoulos, A.: The page (page analysis and ground-truth elements) format framework. In: ICPR. pp. 257–260. IEEE (2010)
* [20] Renton, G., Soullard, Y., Chatelain, C., Adam, S., Kermorvant, C., Paquet, T.: Fully convolutional network with dilated convolutions for handwritten text line segmentation. IJDAR 21(3), 177–186 (2018)
* [21] Simistira, F., Bouillon, M., Seuret, M., Würsch, M., Alberti, M., Ingold, R., Liwicki, M.: Icdar2017 competition on layout analysis for challenging medieval manuscripts. In: ICDAR. vol. 1, pp. 1361–1370. IEEE (2017)
* [22] Vo, Q.N., Kim, S.H., Yang, H.J., Lee, G.S.: Text line segmentation using a fully convolutional network in handwritten document images. IET Image Processing 12(3), 438–446 (2017)
# Nonlinear flexural-gravity waves due to a body submerged in a uniform stream
Y.A. Semenov
Institute of Hydromechanics of the National Academy of Sciences of Ukraine, 8/4 Maria Kapnist Street, 03680 Kiev, Ukraine
###### Abstract
The two-dimensional nonlinear problem of steady flow past a body submerged
beneath an elastic sheet is considered. The mathematical model is based on the
velocity potential theory with fully nonlinear boundary conditions on the
fluid boundary and on the elastic sheet, which are coupled throughout the
numerical procedure. The integral hodograph method is employed to derive the
complex velocity potential of the flow which contains the velocity magnitude
on the interface in explicit form. The coupled problem has been reduced to a
system of nonlinear equations with respect to the unknown magnitude of the
velocity on the interface, which is solved using a collocation method. Case
studies are undertaken for both subcritical and supercritical flow regimes.
Results for the interface shape, bending moment and pressure distribution are presented for wide ranges of Froude numbers and depths of submergence. According to the dispersion equation, two waves may exist on the interface: the first, longer wave is caused by gravity, and the second, shorter wave by the elastic sheet. The obtained solution exhibits
strongly nonlinear interaction of these waves above the submerged body. It is
found that near the critical Froude number, there is a range of submergences
in which the solution does not converge.
## I INTRODUCTION
The problem of interaction between a fluid and an elastic boundary is a
classical problem of fluid mechanics which is of interest in offshore and
polar engineering, medicine and many industrial applications. In the last two
decades, this topic has received much attention due to the melting of ice in
the Arctic regions, opening new routes for ships and new regions for resource
exploration Squire et al. (1988, 1996); Korobkin et al. (2011). Most of the
studies are devoted to wave propagation along the ice sheet, its response on a
moving load, and effects of heterogeneous properties of the ice sheet such as
a floe, polynya, cracks, etc. Guyenne and Părău (2012, 2012, 2017); Li_2018 .
The works studying the interaction between the flow bounded by the ice sheet
and the body submerged in the fluid started relatively recently. Das and
Mandal Das_2006 considered oblique wave scattering by a circular cylinder
submerged beneath a uniform ice sheet of infinite extent and determined the transmission and reflection coefficients. To solve the problem, they employed
the multipole expansion method. Sturova Sturova_2015 applied the method of
matched eigenfunction expansions and studied the interaction of a submerged
cylinder and an inhomogeneous ice sheet including a floe or polynya. Tkacheva
Tkacheva_2015 considered oscillations of a cylindrical body submerged in a
fluid beneath an ice sheet and solved the problem through the Wiener-Hopf
technique. Savin and Savin Savin_2012 considered the ice perturbation by a
dipole submerged in water of infinite depth. They applied the complex variable
technique and the Fourier transform to solve the Laplace equation. Shishmarev
at al. Shishmarev_2019 studied the strains in an ice cover of a frozen
channel which are caused by a three-dimensional dipole moving under the ice at
a constant speed. Li et al. Li_2019 considered a circular cylinder submerged
below an ice sheet in water of finite depth. The solution method is based on
the derived Green function which satisfies the boundary conditions on the
ice/water interface. All the works mentioned above regarding submerged bodies
are based on linear potential flow theory, and the boundary value problem is
usually formulated in the frequency domain.
The nonlinear theory of hydroelasticity is currently under development. The unknown shape of the ice/fluid interface and its higher-order derivatives, which have to satisfy the dynamic boundary condition, are the main challenge in deriving analytical solutions or developing computational approaches. As the dynamic boundary condition becomes more complicated, e.g. when it includes gravity, surface tension and/or the elasticity of the sheet covering the fluid, the level of the mathematical challenge which has to be addressed increases.
The simplest form of the dynamic boundary condition corresponds to free
streamline flows for which the velocity magnitude on the free streamline is
assumed to be constant. This class of flows is well developed and presented in
classical books by Milne-Thomson Milne-Thomson , Birkhoff and Zarantonello
Birkhoff , and Gurevich Gurevich .
For free-surface flows, gravity leads to an additional term in the dynamic
boundary condition which relates the velocity magnitude and the vertical
coordinate of the free surface. This kind of problem can be reduced to a
singular integrodifferential equation whose form depends on the solution
method and the choice of the governing functions. Various forms of the
integro-differential equation were derived by Forbes and Schwartz Forbes_1982
, King and Bloor King_Bloor , and Faltinsen and Semenov Faltinsen_Semenov .
For capillary free-surface flows, the dynamic boundary condition comprises the
curvature of the free surface which involves the first and second derivatives
of the free surface. Few analytical solutions for purely capillary waves are available in the literature. Crapper Crapper developed a closed-form solution for
a fluid of infinite depth, and Kinnersley Kinnersley extended his method to a
fluid sheet. Crowdy Crowdy developed a method based on complex function theory, retrieved Kinnersley’s solution in a much simpler form, and obtained
new solutions for steady capillary waves on a fluid annulus. However, the
extension of the method to fluid/structure interaction problems with surface
tension seems to be nontrivial. Alternatively, several numerical methods have
been developed to solve the capillary and capillary-gravity flows. Schwartz
and Vanden-Broeck Schwartz_Vanden-Broeck proposed a method based on a
boundary-integral formulation and finite difference approximation of the
derivatives and applied it to the purely capillary and capillary-gravity
waves. Vanden-Broeck and Miloh Vanden-Broeck_Miloh proposed numerical methods
based on the Fourier-series expansion and studied steep gravity waves. Later,
the method was adopted by Blyth and Vanden-Broeck Blyth_Vanden-Broeck and
Blyth and Părău Blyth_Parau to compute nonlinear capillary waves on fluid
sheets of finite thickness. Yoon and Semenov Yoon_Semenov considered a cavity
flow past a circular cylinder in the presence of surface tension and derived
the solution using the integral hodograph method. They derived a singular
integral equation with respect to the velocity magnitude, which is solved by
the method of successive approximations. The method can be applied to problems with a more complicated form of the dynamic boundary condition comprising higher-order derivatives of the free surface. However, the higher-order derivatives of the interface which appear in the dynamic boundary condition result in a higher-order hypersingular integral equation, and a special numerical treatment is required to solve this type of integral equation.
The nonlinear theory of hydroelastic waves, for which the dynamic boundary
condition gets more complicated, has been studied intensively in recent
decades with emphasis on waves generated by a moving load. Most of the studies
are focused on the analysis and simulation of hydroelastic waves, which
account for the nonlinearity of the potential flow and elastic sheet
deformations. Părău and Dias Parau_Dias derived a forced nonlinear Schrödinger equation for the ice sheet deflection and studied the weakly nonlinear
effects. Fully nonlinear computations based on the boundary integral method
were presented by Guyenne and Părău Guyenne and Părău (2012). The nonlinear
response of an infinite ice sheet in the time domain has been studied by
Bonnefoy et al. Bonnefoy using a higher-order spectral method; they found that at the critical speed, at which the linear response is infinite, the nonlinear solution remains bounded. Despite the progress in the development of numerical methods to solve nonlinear hydroelastic wave problems, their extension to a body submerged in a fluid beneath an elastic sheet is not straightforward, since the flow potential has to satisfy an additional boundary condition on the body surface.
In the present paper, we study a fully nonlinear problem of the hydroelastic
waves generated by a body submerged in the fluid beneath an elastic sheet. We
shall use the model of potential flow with the fully nonlinear kinematic and
dynamic boundary conditions on the submerged rigid body and the elastic sheet
which is modelled using the Cosserat theory of hyperelastic shells suggested
by Plotnikov and Toland Plotnikov . To solve the nonlinear problem, we adopt
the solution for a cylindrical body moving beneath a free surface Semenov_Wu, which was obtained using the integral hodograph method. An expression for the flow potential which includes the velocity magnitude on the free surface in explicit form has been derived. This makes it possible to adapt the solution to the present problem, because the velocity magnitude on the interface between the fluid and the elastic sheet appears explicitly in the dynamic boundary condition. The coupling of the elastic-sheet and fluid problems is based on the condition that the pressure on the interface is the same, whether computed from the flow dynamics or from the equilibrium of the elastic sheet.
The derivation of the flow potential, which contains the velocity magnitude on the interface in explicit form, and the numerical method used to solve the coupled fluid/elastic-sheet interaction problem are presented in Section 2. Extended numerical results are discussed in Section 3.
## II THEORETICAL ANALYSIS
We consider a two-dimensional steady flow past a cylindrical body submerged beneath an elastic sheet which models the ice cover. The characteristic
length of the body is $L$, and the thickness of the sheet is $B_{i}$. A
Cartesian coordinate system $XY$ is defined with the origin at a point inside
the body and the $X$-axis along the velocity direction of the incoming flow
with a constant speed $U$. The $Y-$axis points vertically upwards. The fluid
is assumed to be inviscid and incompressible, and the flow is irrotational.
The elastic sheet is modelled by the Cosserat theory of hyperelastic shells
Plotnikov . The submerged rigid body is assumed to have an arbitrary shape
which can be defined by the slope of the body as a function of the arc length
coordinate $S$, or $\beta_{b}=\beta_{b}(S)$. The interface between the elastic
sheet and the liquid is defined by function $Y(X)$. The interactions of the
submerged body, flow and the elastic sheet may generate waves extending to
infinity in both upstream and downstream directions. On the other hand, the
flow is uniform at infinity, $Y\rightarrow\infty$ and $-\infty<X<\infty$, and
the velocity is constant there. In order to provide the same value of the flow
velocity at infinity in all directions, we introduce damping regions
$P_{1}P_{2}$ and $T_{1}T_{2}$ upstream and downstream, respectively, where a
term providing the wave damping is added in the dynamic boundary condition to
provide the same velocity magnitude at points $P_{2}$ and $T_{2}$, or
$V_{P2}=V_{T2}=U$. Outside the interval $P_{2}T_{2}$, the flow velocity on the
interface $V(X)\equiv U$ including infinity. Thus, the fluid surface has a
limit $Y(x)_{|x|\rightarrow\infty}=H$ which is defined as the submergence of
the cylinder measured from the origin of the coordinate system.
Figure 1: ($a$) Physical plane and ($b$) the parameter $\zeta-$plane.
We will derive the complex potential of the flow, $W=W(Z)$, with $Z=X+iY$. For
the steady flow, the kinematic conditions on the body surface and the
interface mean that the stream function is constant, or
$\Im\{W(Z)\}=\mathrm{const}$, as both are streamlines. The dynamic boundary
condition on the interface is obtained from the Bernoulli equation assuming
that the hydrodynamic pressure on the interface equals the pressure
associated with the bending of the elastic sheet:
$\rho\frac{V^{2}}{2}+\rho gY+P_{ice}=\rho\frac{U^{2}}{2}+\rho gH+P_{a},$ (1)
where $U$ is the speed of the incoming flow, $\rho$ is the liquid density,
$V=|dW/dZ|$ is the magnitude of the complex velocity, $g$ is the acceleration
due to gravity, $H$ is the depth of submergence, and $P_{a}$ is the
atmospheric pressure. The pressure due to the bending of the elastic sheet is
Guyenne and Părău (2012); Plotnikov
$P_{ice}=D^{\prime}B^{3}_{i}\left(\frac{d^{2}\kappa}{dS^{2}}+\frac{1}{2}\kappa^{3}\right)+P_{a},$
(2)
where $D^{\prime}=E/(12(1-\nu^{2}))$ is the flexural rigidity coefficient of the
elastic sheet, $E$ is Young's modulus, $\nu$ is Poisson's ratio, $\kappa$ is the
curvature of the interface, and $B_{i}$ is the thickness of the elastic sheet.
Equation (2) relies on the assumptions that the elastic sheet is inextensible
and not prestressed Blyth_Parau .
Two different Froude numbers can be defined based on the characteristic length
$L$ or the depth of submergence $H$, respectively:
$F=\frac{U}{\sqrt{gL}},\qquad F_{h}=\frac{U}{\sqrt{gH}}.$ (3)
Using nondimensionalisation based on $U$, $L$ and $\rho$, we have $v=V/U$,
$x=X/L$, $y=Y/L$, $h=H/L$, $s=S/L$, $b_{i}=B_{i}/L$, and $W(Z)=ULw(z)$.
Substituting (2) into (1), the dynamic boundary condition takes the form
$v^{2}=1-\frac{2(y-h)}{F^{2}}-2D\left(\frac{d^{2}\kappa}{ds^{2}}+\frac{1}{2}\kappa^{3}\right),$
(4)
where
$D=\frac{D^{\prime}b_{i}^{3}}{\rho gLF^{2}},\qquad\kappa=\frac{d\delta}{ds}.$
The angle $\delta=\arcsin(dy/ds)=\pi+\beta$ is the angle between the $X-$axis
and the unit tangential vector $\mathbf{\tau}$, which is opposite to the
velocity direction given by the angle $\beta$. The normal vector $\mathbf{n}$
is directed out of the liquid region, while along the interface the arc length
coordinate increases in the direction of the vector $\mathbf{\tau}$, such that
the liquid region is on the left (see Fig. 1a).
Equation (4) contains the velocity magnitude along the interface and the wave
elevation $y$ with its derivatives; these will be related in the following
through the derived expression for the flow potential.
### II.1 Hodograph method.
Finding the function $w=w(z)$ directly is a complicated problem since the
boundary of the flow region is unknown. Instead, Joukovskii Joukovskii_1890
and Michell Michell proposed to introduce an auxiliary parameter plane, or
$\zeta-$plane, which was typically chosen as the upper half-plane. They then
considered two functions, the complex potential $w$ and the
function $\omega=-\ln(dw/dz)$, both as functions of the parameter variable
$\zeta$. When $w(\zeta)$ and $\omega(\zeta)$ are derived, the velocity and the
flow region can be obtained in parameter form as follows:
$\frac{dw}{dz}=\exp[-\omega(\zeta)],\qquad
z(\zeta)=z_{0}+\int_{0}^{\zeta}\frac{dw}{d\zeta^{\prime}}/\frac{dw}{dz}\,d\zeta^{\prime},$
(5)
where the function $z=z(\zeta)$ is called the mapping function.
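As an illustration of Eq. (5), the following minimal sketch (Python; the two
governing functions are assumed to be available as callables, and the toy
example uses a uniform stream) reconstructs the mapping function by accumulating
the complex trapezoidal rule along a path in the $\zeta-$plane:

```python
import numpy as np

def mapping_function(dwdzeta, dwdz, zeta_path, z0=0.0 + 0.0j):
    """Reconstruct z(zeta) along a path via Eq. (5): dz/dzeta = (dw/dzeta)/(dw/dz)."""
    f = dwdzeta(zeta_path) / dwdz(zeta_path)        # integrand dz/dzeta on the path
    dzeta = np.diff(zeta_path)
    # complex trapezoidal rule, accumulated point by point along the path
    z = z0 + np.concatenate(([0.0], np.cumsum(0.5 * (f[:-1] + f[1:]) * dzeta)))
    return z

# toy check with a uniform stream, dw/dzeta = dw/dz = 1:
# the result must be z(zeta) = z0 + (zeta - zeta_path[0])
path = np.linspace(0.0, np.pi, 201) + 0.25j * np.pi
z = mapping_function(lambda s: np.ones_like(s), lambda s: np.ones_like(s), path)
```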
The flow region beneath the interface and outside the body is a doubly
connected domain. A canonical region of a doubly connected domain is an
annulus. By making a cut connecting the external and the internal circles of
the annulus, the doubly connected region becomes simply connected. As shown in
Fig. 1a, $O_{-}D_{+}$ and $O_{+}D_{-}$ are the two sides of the cut, which
may have an arbitrary shape but form right angles with the flow boundary at
both the body surface (points $O_{-}$ and $O_{+}$) and the liquid/ice
interface (points $D_{-}$ and $D_{+}$).
The simply connected flow region $C_{-}D_{-}O_{+}BAO_{-}D_{+}C_{+}$ is then
transformed into the rectangular domain $O_{-}D_{+}CD_{-}BAO_{+}$ in the
parameter plane. An upper half-plane or unit circle is usually chosen as the
parameter plane. However, the flow region may have corner points at which the
mapping $z(\zeta)$ will not be conformal. Chaplygin (see chapter 1(5) in the
book Gurevich ) pointed out that there is flexibility in the choice of the
region of the parameter variable. It should be composed of straight lines and
arcs of circles in such a way that, by means of image transformations of these
regions, it is possible to cover the whole complex plane in a simple manner.
When solving boundary problems, the shape of the auxiliary parameter region is
usually chosen with the aim of obtaining the solution of a problem in the
simplest form, with a minimal number of singular points at which the
transformation of the parameter region onto the complex potential region, $w$,
and the region of the function $dw/dz$ is not conformal. In the present case
of the doubly connected flow region, additional corner points appear at
the intersections of the two sides of the cut with the flow boundary. In order to
provide a conformal mapping at these corner points, we have chosen the
rectangle as the parameter domain, which also has right angles at points
$O_{-},O_{+},D_{-},D_{+}$, i.e. the same angles as in the physical plane.
When the parameter region is chosen as a half-plane or the first quadrant,
polynomial functions are usually used to construct the mapping function
$z(\zeta)$. Here, for the rectangular domain, the polynomial functions are
replaced by Jacobi's theta functions Birkhoff , which are quasi-doubly-
periodic functions. Due to their periodicity, they naturally satisfy the same
conditions on both sides of the cut. Jacobi's theta functions have been used to
solve free surface problems involving doubly connected flow regions in the
books Milne-Thomson ; Birkhoff ; Gurevich ; Terentiev .
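For reference, the quasi-double-periodicity that makes these functions
convenient here can be checked numerically; a minimal sketch using mpmath's
built-in jtheta (the value of $\tau$ is an arbitrary assumption for the test):

```python
from mpmath import jtheta, exp, pi, mpc

tau = mpc(0, 0.6)              # purely imaginary tau, as in the present problem
q = exp(1j * pi * tau)         # nome q = exp(i*pi*tau), |q| < 1
z = mpc(0.3, 0.1)

th1 = lambda u: jtheta(1, u, q)
th4 = lambda u: jtheta(4, u, q)

# periodicity in the real direction: theta1(z+pi) = -theta1(z), theta4(z+pi) = theta4(z)
assert abs(th1(z + pi) + th1(z)) < 1e-12
assert abs(th4(z + pi) - th4(z)) < 1e-12

# quasi-periodicity in the imaginary direction:
# theta_n(z + pi*tau) = -exp(-2*i*z)/q * theta_n(z) for n = 1, 4
assert abs(th1(z + pi * tau) + exp(-2j * z) / q * th1(z)) < 1e-12
assert abs(th4(z + pi * tau) + exp(-2j * z) / q * th4(z)) < 1e-12
```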
We may choose the coordinates of the rectangle vertices $O_{-}O_{+}D_{-}D_{+}$
as $(0,0)$, $(\pi,0)$, $(\pi,\pi\tau/4)$ and $(0,\pi\tau/4)$, respectively, as
shown in Fig. 1b. Here, $\tau$ is a purely imaginary number. The horizontal length
of the rectangle is then equal to $\pi$, and its vertical length is equal to
$\pi|\tau|/4$.
In the flow region, there are two stagnation points, marked as $A$, where two
streamlines merge into one, and $B$, where a streamline splits into two
branches. Their positions in the parameter plane are $\zeta=a$ and $\zeta=b$;
the position of point $C$, $\zeta=c+\pi\tau/4$, corresponds to infinity in the
physical plane. The parameters $a$, $b$ and $c$ have to be determined from
additional conditions as part of the solution of the problem.
The interval $0\leq\xi\leq\pi$ on the real axis corresponds to the body
boundary. The interval $c<\xi\leq\pi$, $i\eta=\pi\tau/4$ corresponds to part
of the interface $D_{-}C_{-}$, and the interval $0\leq\xi<c$,
$i\eta=\pi\tau/4$ corresponds to the other part of the interface $D_{+}C_{+}$.
It should be noted that the points $C_{-}$ at $x\rightarrow-\infty$ and $C_{+}$
at $x\rightarrow+\infty$ in the physical plane are mapped to the same point
$C$ of the parameter region $\zeta$.
### II.2 Integral hodograph method: derivation of the governing functions $dw/dz$ and $dw/d\zeta$.
At this stage we denote the angle of the velocity direction along the body as
$\beta_{b}(\xi)$ and the velocity magnitude on the free surface as $v(\xi)$.
With these notations, we have the following boundary-value problem for the
function of complex velocity, $dw/dz$:
$\left|\frac{dw}{dz}\right|=v(\xi),\qquad 0\leq\xi\leq\pi,\quad i\eta=\pi\tau/4.$ (6)
$\chi(\xi)=\arg\left(\frac{dw}{dz}\right)=\begin{cases}-\beta_{b}(\xi), & 0\leq\xi<a,\quad\eta=0,\\ -\beta_{b}(\xi)-\pi, & a<\xi<b,\quad\eta=0,\\ -\beta_{b}(\xi)-2\pi, & b<\xi\leq\pi,\quad\eta=0.\end{cases}$ (7)
$\frac{dw}{dz}(\xi=0,i\eta)=\frac{dw}{dz}(\xi=\pi,i\eta),\qquad 0\leq
i\eta\leq\pi\tau/4.$ (8)
In (7) the argument of the complex velocity has jumps of $-\pi$ at the
stagnation points $A$ ($\zeta=a$) and $B$ ($\zeta=b$), due to the jump of the
velocity direction when passing through a stagnation point. The two vertical
sides of the rectangle in the parameter plane correspond to the two sides of
the cut in the physical plane. The velocities on both sides of the cut are the
same, and therefore the condition of periodicity can be applied on the vertical
sides of the rectangle. The solution of the boundary-value problem (6)-(8) can
be obtained by applying the integral formulae derived in Sem_Wu2020 ,
$\frac{dw}{dz}=v(\pi)\exp\left[-\frac{1}{\pi}\int_{0}^{\pi}\frac{d\chi}{d\xi}\ln\left(\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{1}(\zeta-\xi-\pi\tau/2)}\right)d\xi+\frac{i}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi}\ln\left(\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{1}(\zeta-\xi+\pi\tau/4)}\right)d\xi+i\chi(\pi)\right].$ (9)
It can easily be verified that for $0<\xi<\pi$, $\eta=0$, the argument satisfies
$\arg[(dw/dz)_{\zeta=\xi,\eta=0}]=\chi(\xi)$, while for $0<\xi<\pi$,
$i\eta=\pi\tau/4$, the modulus satisfies $|dw/dz|_{\zeta=\xi,i\eta=\pi\tau/4}=v(\xi)$,
i.e. the boundary conditions (6) and (7) are satisfied. The boundary condition
(8) is satisfied due to the periodicity of the function $\vartheta_{1}(\zeta)$. By
substituting the boundary conditions (6) and (7) into (9) and evaluating the
first integral over the step changes of the function $\chi(\xi)$ at points
$\zeta=a$ and $\zeta=b$, we obtain the expression for the complex velocity in
the rectangle $O_{-}O_{+}D_{-}D_{+}$ Sem_Wu2020 ,
$\frac{dw}{dz}=v_{D}\frac{\vartheta_{1}(\zeta-a)\vartheta_{1}(\zeta-b)}{\vartheta_{4}(\zeta-a)\vartheta_{4}(\zeta-b)}\exp\left[\frac{1}{\pi}\int_{0}^{\pi}\frac{d\beta_{b}}{d\xi}\ln\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{4}(\zeta-\xi)}d\xi+\frac{i}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi}\ln\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{4}(\zeta-\xi-\pi\tau/4)}d\xi-i\beta_{O}\right],$ (10)
where $\beta_{O}$ is the angle at point $O_{-}$, which is zero if point $O_{-}$
is the highest point of the body. The constant $v_{D}$, i.e. the velocity
magnitude at point $D_{+}$, is determined from the condition that the velocity
at infinity, $\zeta=c+\pi\tau/4$, equals $1$, since it has been chosen as the
reference velocity:
$\left|\frac{dw}{dz}\right|_{\zeta=c+\pi\tau/4}=1.$ (11)
For steady flows, the stream function $\psi=\Im(w)$ takes constant values
along the body and the interface. According to Chaplygin's special point
method Gurevich , to determine the function $w=w(\zeta)$ it is sufficient to
analyse all special points where the mapping is not conformal. These are the
stagnation points $A(\zeta=a)$ and $B(\zeta=b)$ and the point
$C(\zeta=c+\pi\tau/4)$ corresponding to infinity in the $w-$plane. The order
of the function $w=w(\zeta)$ at these points can be determined by analysing
the behaviour of the argument of $w(\zeta)$ in the vicinity of these points.
Then, the derivative of the complex potential is obtained in the form
Sem_Wu2020
$\frac{dw}{d\zeta}=K\frac{\vartheta_{1}(\zeta-a)\vartheta_{4}(\zeta-a)\vartheta_{1}(\zeta-b)\vartheta_{4}(\zeta-b)}{\vartheta_{1}^{2}(\zeta-c-\pi\tau/4)\vartheta_{1}^{2}(\zeta-c+\pi\tau/4)}.$ (12)
Dividing (12) by (10), we obtain the derivative of the mapping function as
$\frac{dz}{d\zeta}=\frac{K}{v_{D}}\frac{\vartheta_{4}^{2}(\zeta-a)\vartheta_{4}^{2}(\zeta-b)}{\vartheta_{1}^{2}(\zeta-c-\pi\tau/4)\vartheta_{1}^{2}(\zeta-c+\pi\tau/4)}\exp\left[-\frac{1}{\pi}\int_{0}^{\pi}\frac{d\beta_{b}}{d\xi}\ln\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{4}(\zeta-\xi)}d\xi-\frac{i}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi}\ln\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{4}(\zeta-\xi-\pi\tau/4)}d\xi+i\beta_{O}\right],$ (13)
whose integration along the intervals $0\leq\xi<c$ and $c<\xi\leq\pi$ at
$i\eta=\pi\tau/4$ in the $\zeta-$plane provides the parts $D_{+}C_{+}$ and
$D_{-}C_{-}$ of the interface $C_{-}C_{+}$ in the physical plane, respectively.
The parameters $a,b,c,\tau$ and $K$ and the functions $\beta_{b}(\xi)$ and
$d(\ln v)/d\xi$ have to be determined from physical considerations, the
kinematic boundary condition on the body surface, and the dynamic boundary
condition on the interface.
### II.3 System of equations for parameters $a,b,c,\tau$ and $K$.
At infinity, point $C$ ($\zeta=c+\pi\tau/4$), the velocity approaches
unity (since this velocity is chosen as the reference velocity), and its
direction is along the $X$-axis. Therefore, the argument of the complex
velocity (10) at point $\zeta_{C}=c+\pi\tau/4$ should be equal to zero,
$\arg\left(\frac{dw}{dz}\right)_{\zeta=\zeta_{C}}=0.$ (14)
The scale factor $K$ is determined by the length $S_{b}$, which is the
perimeter of the body cross-section:
$\int_{0}^{\pi}\frac{ds_{b}}{d\xi}d\xi=S_{b},$ (15)
where
$\frac{ds_{b}}{d\xi}=\left|\frac{dz}{d\zeta}\right|_{\zeta=\xi}.$
The free surface on the left- and right-hand sides at infinity has the same
value of the $y-$coordinate. This is equivalent to the stream function
$\psi=\Im(w)$ being continuous across the cut, or
$\Im(w_{D_{-}})-\Im(w_{D_{+}})=0$. By integrating $\Im(dw/d\zeta)$ along
$D_{-}D_{+}$, passing the point $\zeta_{C}$ along a semi-circle $C^{\prime}$ of
infinitesimal radius $\varepsilon$, at which $dw/d\zeta$ in Eq. (12) has a
second-order singularity, we have
$\Im\left(\int_{\pi}^{c+\varepsilon}\frac{dw}{d\zeta}d\zeta+\oint_{C^{\prime}}\frac{dw}{d\zeta}d\zeta+\int_{c-\varepsilon}^{0}\frac{dw}{d\zeta}d\zeta\right)=\Im\left(\oint_{C^{\prime}}\frac{dw}{d\zeta}d\zeta\right)=\Im\left(i\pi\mathop{\mathrm{Res}}_{\zeta=\zeta_{C}}\frac{dw}{d\zeta}\right)=\Im\left\{i\pi\frac{d}{d\zeta}\left[\frac{dw}{d\zeta}(\zeta-\zeta_{C})^{2}\right]_{\zeta=\zeta_{C}}\right\}=0.$
Here the first and third terms on the left-hand side are zero because
$\Im(w)=\mathrm{const}$ on the free surface. From this equation it follows that
$a+b=2c.$ (20)
The depth of submergence, $h$, and the flowrate, $Q$, between the body and the
free surface are related. Therefore, instead of a condition for the depth $h$,
we can use the following condition for the given flowrate $Q$, which is the
integral of the derivative of the complex potential along the side
$O_{-}D_{+}$
$\Im\left(\int_{0}^{\pi\tau/4}\frac{dw}{d\zeta}d\zeta\right)=Q.$ (21)
We may place a vortex with circulation $\Gamma$ at the centre of the cylinder,
which can be nondimensionalized as $\gamma=\Gamma/(2\pi UL)$. For a circular
cylinder, this does not affect the impermeable body surface boundary
condition, but does change the positions of the stagnation points and also
affects the free surface boundary. For a hydrofoil, $\gamma$ should be
determined through the Kutta condition at the trailing edge.
Integrating $dw/d\zeta$ along the body surface in the parameter plane, we have
$\Re\left(\int_{0}^{\pi}\frac{dw}{d\zeta}d\zeta\right)=2\pi\gamma.$ (22)
In the case $\gamma\neq 0$, the real part of the potential, $\phi=\Re(w)$,
has a jump across the sides $O_{-}D_{-}$ and $O_{+}D_{+}$ of the cut, while the
complex velocity $dw/dz$ and the stream function $\psi=\Im(w)$ are still
continuous across the cut.
Equations (14) - (22) allow us to determine the unknown parameters
$a,b,c,\tau$ and $K$, which appear in the governing equations (10), (12) and
(13), once the functions $v(\xi)$ and $\beta_{b}(\xi)$ are specified.
### II.4 Kinematic boundary conditions on the body surface.
By integrating the modulus of the derivative of the mapping function (13)
along the side $O_{-}O_{+}$ in the parameter plane, we can obtain the spatial
coordinate along the body as a function of the parameter variable,
$s_{b}(\xi)=\int_{0}^{\xi}\frac{ds_{b}}{d\xi^{\prime}}d\xi^{\prime},$ (23)
where $ds_{b}/d\xi=|dz/d\zeta|_{\zeta=\xi,\eta=0}$. Since the function
$\beta_{b}(s_{b})$ is known, the function $\beta_{b}(\xi)$ can be determined
from the following integro-differential equation:
$\frac{d\beta_{b}}{d\xi}=\frac{d\beta_{b}}{ds_{b}}\frac{ds_{b}}{d\xi}.$ (24)
By substituting $dz/d\zeta$ from (13), this equation takes the form
$\frac{d\beta_{b}}{d\xi}=\kappa[s_{b}(\xi)]\frac{K}{v_{D}}\left|\frac{\vartheta_{4}^{2}(\xi-a)\vartheta_{4}^{2}(\xi-b)}{\vartheta_{1}^{2}(\xi-c-\pi\tau/4)\vartheta_{1}^{2}(\xi-c+\pi\tau/4)}\right|\exp\left[-\frac{1}{\pi}\int_{0}^{\pi}\frac{d\beta_{b}}{d\xi^{\prime}}\ln\frac{\vartheta_{1}(\xi-\xi^{\prime})}{\vartheta_{4}(\xi-\xi^{\prime})}d\xi^{\prime}-\frac{i}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi^{\prime}}\ln\frac{\vartheta_{1}(\xi-\xi^{\prime}-\pi\tau/4)}{\vartheta_{4}(\xi-\xi^{\prime}-\pi\tau/4)}d\xi^{\prime}\right],$ (25)
where $\kappa(s_{b})=d\beta_{b}/ds_{b}$ is the curvature of the body.
### II.5 Nonlinear dynamic boundary condition.
The dynamic boundary condition (4) includes the interface elevation $y(s)$ and
its derivatives. Each branch of the interface, $C_{-}D_{-}$ ($c<\xi<\pi$) and
$C_{+}D_{+}$ ($0<\xi<c$), is evaluated by integration of the derivative of the
mapping function (13). The parameter form of the interface is as follows:
$s(\xi)_{\{c<\xi\leq\pi,\;0\leq\xi<c\}}=\int_{\{\pi,0\}}^{\xi}\frac{ds}{d\xi^{\prime}}d\xi^{\prime},$ (26)
$y(\xi)_{\{c<\xi\leq\pi,\;0\leq\xi<c\}}=y_{D}+\Im\left(\int_{\{\pi,0\}}^{\xi}\left.\frac{dz}{d\zeta}\right|_{\zeta=\xi^{\prime}+\pi\tau/4}d\xi^{\prime}\right),$ (27)
where
$\frac{ds}{d\xi}=\left|\frac{dz}{d\zeta}\right|_{\zeta=\xi+\pi\tau/4}=\frac{K}{v(\xi)}\left|\frac{\vartheta_{4}^{2}(\xi-a+\pi\tau/4)\vartheta_{4}^{2}(\xi-b+\pi\tau/4)}{\vartheta_{1}^{2}(\xi-c)\vartheta_{4}^{2}(\xi-c)}\right|$ (28)
and $y_{D}$ is the vertical coordinate of points $D_{-}D_{+}$ and can be
obtained from
$y_{D}=\Im\left(i\int_{0}^{\pi|\tau|/4}\left.\frac{dz}{d\zeta}\right|_{\zeta=i\eta}d\eta\right).$
(29)
The curvature of the interface is
$\kappa[s(\xi)]=\frac{d\beta}{ds}=\frac{d\beta}{d\xi}/\frac{ds}{d\xi},$ (30)
where $d\beta/d\xi$ is determined by taking the argument of the complex
velocity from equation (10) at $\zeta=\xi+\pi\tau/4$,
$\beta(\xi)=\arg\left(\frac{dw}{dz}\right)=-\frac{1}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi^{\prime}}\ln\left|\frac{\vartheta_{1}(\xi-\xi^{\prime})}{\vartheta_{4}(\xi-\xi^{\prime})}\right|d\xi^{\prime}-P_{1}(\xi),$ (31)
$P_{1}(\xi)=-\beta_{O}+\Im\left\{\ln\frac{\vartheta_{1}(\xi-a+\pi\tau/4)\vartheta_{1}(\xi-b+\pi\tau/4)}{\vartheta_{4}(\xi-a+\pi\tau/4)\vartheta_{4}(\xi-b+\pi\tau/4)}\right\}+\frac{1}{\pi}\int_{0}^{\pi}\frac{d\beta_{b}}{d\xi^{\prime}}\Im\left\{\ln\frac{\vartheta_{1}(\xi-\xi^{\prime}+\pi\tau/4)}{\vartheta_{4}(\xi-\xi^{\prime}+\pi\tau/4)}\right\}d\xi^{\prime},$ (32)
and differentiating it with respect to the variable $\xi$:
$\frac{d\beta}{d\xi}=-\frac{1}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi^{\prime}}\left(\frac{\vartheta_{1}^{\prime}(\xi-\xi^{\prime})}{\vartheta_{1}(\xi-\xi^{\prime})}-\frac{\vartheta_{4}^{\prime}(\xi-\xi^{\prime})}{\vartheta_{4}(\xi-\xi^{\prime})}\right)d\xi^{\prime}-P_{1}^{\prime}(\xi).$ (33)
Here, prime denotes the derivatives of the functions with respect to $\xi$.
The integrand of the above equation has a first-order singularity at point
$\xi^{\prime}=\xi$, since
$\vartheta_{1}(\xi-\xi^{\prime})\sim\xi-\xi^{\prime}$.
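One standard way to handle this singularity, sketched below, is singularity
subtraction: since the kernel $\vartheta_{1}^{\prime}/\vartheta_{1}=d\ln\vartheta_{1}/du$
is $\pi$-periodic, its principal-value integral over a full period vanishes
(because $|\vartheta_{1}|$ is $\pi$-periodic), and only the regularised part
with $f(\xi^{\prime})-f(\xi)$ has to be integrated numerically. This is a
minimal Python sketch; the nome $q$ and the density $f$ are placeholder
assumptions:

```python
import numpy as np
from mpmath import jtheta, diff, mpf

q = mpf('0.15')                                  # nome, |q| < 1 (assumed value)

def K(u):
    """Kernel K(u) = theta1'(u)/theta1(u): pi-periodic, simple pole at u = 0."""
    return float(diff(lambda t: jtheta(1, t, q), u) / jtheta(1, u, q))

def pv_integral(f, xi, n=2000):
    """PV of int_0^pi f(u) K(xi - u) du by singularity subtraction.

    Since |theta1| is pi-periodic, the PV of K alone over the full period
    vanishes, leaving only the bounded integrand [f(u) - f(xi)] K(xi - u)."""
    u = (np.arange(n) + 0.5) * np.pi / n         # midpoint nodes, never u = xi exactly
    h = np.pi / n
    return sum((f(uk) - f(xi)) * K(xi - uk) * h for uk in u)

# example with a smooth density f(u) = sin(u); the value converges as n grows
val = pv_integral(np.sin, xi=1.0)
```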
The derivatives of the curvature, $d\kappa/ds$ and $d^{2}\kappa/ds^{2}$, can
be obtained by differentiating (30). They include higher-order derivatives of
the function $\beta(\xi)$ and, correspondingly, higher-order singularities in
the integrands. By substituting the derivatives of the curvature into the
dynamic boundary condition (4), we could derive a hypersingular integral
equation in terms of the function $d\ln v/d\xi$, whose solution requires
special treatment. Instead, we determine the function $v(\xi)$ numerically
using a collocation method.
### II.6 Numerical method to determine the function $v(\xi)$.
If a tentative function $v(\xi)$ is given and the system of equations
(14)-(22) and the integro-differential equation (25) are solved, then the
interface $z=z(\xi)$ depends only on the given function $v(\xi)$. We can choose
a fixed set of points $\hat{\xi}_{k}$, $k=1,\bar{K}$, distributed on the side
$D_{-}D_{+}$ of the parameter region corresponding to the interface. Then, the
nodes $s_{k}=s(\hat{\xi}_{k})$ and the coordinates $y_{k}=y(\hat{\xi}_{k})$ in
the physical plane are obtained using Eqs. (26) and (27). The curvature of the
interface and its derivatives are determined numerically by applying spline
interpolation to the nodes $y_{k}$ in the intervals $(s_{k-1},s_{k})$. We
choose a fifth-order spline, which provides continuous derivatives along the
interface up to the fourth derivative:
$y(s)=y_{k}+a_{1,k}(s-s_{k-1})+\ldots+a_{n,k}(s-s_{k-1})^{n},\quad s_{k-1}<s<s_{k},\quad k=1,\ldots,\bar{K},\quad n=5.$ (34)
The curvature and its derivatives are obtained by differentiating Eq. (34):
$\beta=\arcsin y^{\prime},\quad\kappa=\frac{y^{\prime\prime}}{\sqrt{1-y^{\prime 2}}},\quad\frac{d\kappa}{ds}=\frac{y^{\prime}y^{\prime\prime 2}-y^{\prime\prime\prime}(y^{\prime 2}-1)}{(1-y^{\prime 2})^{3/2}},\;\cdots.$
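A minimal sketch of this reconstruction (Python, with SciPy's quintic B-spline
standing in for the spline (34); the synthetic node data in the test are an
assumption):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def interface_curvature(s_nodes, y_nodes, s_eval):
    """Quintic-spline reconstruction of beta, kappa and d(kappa)/ds from the
    interface nodes (s_k, y_k), following Eq. (34) and the formulas above."""
    spl = make_interp_spline(s_nodes, y_nodes, k=5)   # C^4-continuous quintic spline
    y1 = spl.derivative(1)(s_eval)                    # y'
    y2 = spl.derivative(2)(s_eval)                    # y''
    y3 = spl.derivative(3)(s_eval)                    # y'''
    beta = np.arcsin(y1)
    kappa = y2 / np.sqrt(1.0 - y1**2)
    dkappa = (y1 * y2**2 - y3 * (y1**2 - 1.0)) / (1.0 - y1**2) ** 1.5
    # d^2(kappa)/ds^2, needed in Eq. (4), via one further numerical derivative
    d2kappa = np.gradient(dkappa, s_eval)
    return beta, kappa, dkappa, d2kappa

# synthetic test wave of small slope, so that |y'| < 1
s = np.linspace(0.0, 20.0, 201)
beta, kappa, dkappa, d2kappa = interface_curvature(s, 0.05 * np.sin(s), s)
```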
The system of nonlinear equations is obtained by applying the dynamic
boundary condition (4) at the points $s_{k}=s(\hat{\xi}_{k})$; it is written
in the form
$G_{k}(\bar{V})=c_{pk}(\bar{V})-c_{pk}^{ice}(\bar{V})=0,\quad k=1,\ldots,\bar{K},$ (35)
where $\bar{V}=(v_{1},\ldots,v_{\bar{K}})^{T}$ is the vector of unknown
velocities $v_{k}$ on the interface; the pressure coefficient on the interface
due to the flow is
$c_{pk}(\bar{V})=1-v_{k}^{2}-\frac{2[y_{k}(\bar{V})-h]}{F^{2}};$ (36)
and the pressure coefficient determining the bending of the elastic sheet is
$c_{pk}^{ice}(\bar{V})=2D\left[\left(\frac{d^{2}\kappa}{ds^{2}}\right)_{k}+\frac{1}{2}\kappa_{k}^{3}\right].$ (37)
The system of equations (35) is solved using Newton's method. The Jacobian of
the system is evaluated numerically using central differences with $\Delta
v_{k}=10^{-5}$, $k=1,\bar{K}$. At each evaluation of the function
$G_{k}(\bar{V})$, the system of equations (14)-(22) and the
integro-differential equation (25) are solved. From $5$ to $20$ iterations are
necessary to reach convergence of the solution. All solutions, say
$\bar{V}^{\ast}$, reported here satisfy the condition
$\sum_{k=1}^{\bar{K}}|G_{k}(\bar{V}^{\ast})|<10^{-7},$ (38)
which is regarded as giving a sufficiently accurate solution of the nonlinear
equations. However, the accuracy within the intervals
$(\hat{\xi}_{k-1},\hat{\xi}_{k})$ is somewhat lower. This is discussed in
detail in Section III.1.
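The structure of this solver can be sketched as follows (Python; the residual
callable G, which hides the re-solution of Eqs. (14)-(22) and (25) at every
evaluation, is a placeholder):

```python
import numpy as np

def newton_collocation(G, V0, dv=1e-5, tol=1e-7, max_iter=20):
    """Newton iteration for G(V) = 0, Eq. (35). The Jacobian is built
    column-by-column from central differences with step dv, as in the text;
    G is assumed to re-solve Eqs. (14)-(22) and (25) at every evaluation."""
    V = np.asarray(V0, dtype=float).copy()
    for _ in range(max_iter):
        g = G(V)
        if np.sum(np.abs(g)) < tol:            # convergence test, Eq. (38)
            return V
        J = np.empty((len(g), len(V)))
        for k in range(len(V)):
            Vp, Vm = V.copy(), V.copy()
            Vp[k] += dv
            Vm[k] -= dv
            J[:, k] = (G(Vp) - G(Vm)) / (2.0 * dv)
        V = V - np.linalg.solve(J, g)          # Newton update
    return V
```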
For supercritical flow regimes, the wavy interface may extend to infinity.
However, due to the finite length of the computational region and the condition
that the flow is uniform at infinity in all directions (upstream, downstream
and at infinite depth, $y\rightarrow-\infty$), we need to introduce the damping
regions $P_{2}P_{1}$ upstream and $T_{1}T_{2}$ downstream. In these regions,
we add an artificial term to the boundary condition (4), which may be treated
as an externally applied pressure:
$c_{p}=1-v^{2}-\frac{2(y-h)}{F^{2}}+C_{d}v\frac{dv}{ds},$ (39)
where the damping coefficient $C_{d}$ increases from $0$ at points $P_{1}$ and
$T_{1}$ to the values $C_{dL}$ and $C_{dR}$ at points $P_{2}$ and $T_{2}$,
respectively. The lengths of the damping regions are chosen to be about
$2\lambda_{0}$, where $\lambda_{0}$ is the wave length of the free-surface
progressive waves according to the first-approximation theory Lamb .
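A possible form of the damping coefficient is sketched below (Python; the text
fixes only the endpoint values, such as $C_{dL}=2$ and $C_{dR}=10$ used in
Section III.2, and the approximate region lengths, so the linear ramp is an
assumption):

```python
import numpy as np

def damping_coefficient(s, s_P1, s_P2, s_T1, s_T2, CdL=2.0, CdR=10.0):
    """Damping coefficient C_d(s) for the extra term C_d*v*dv/ds in Eq. (39):
    zero in the physical region s_P1 < s < s_T1, ramping up to CdL at s_P2
    (upstream) and CdR at s_T2 (downstream). The linear ramp is an assumed
    choice; only the endpoint values are prescribed in the text."""
    Cd = np.zeros_like(s)
    up = s <= s_P1                      # upstream damping region P1 -> P2
    dn = s >= s_T1                      # downstream damping region T1 -> T2
    Cd[up] = CdL * np.clip((s_P1 - s[up]) / (s_P1 - s_P2), 0.0, 1.0)
    Cd[dn] = CdR * np.clip((s[dn] - s_T1) / (s_T2 - s_T1), 0.0, 1.0)
    return Cd
```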
### II.7 Dispersion equation.
Differentiating Eq. (4) with respect to the arc length coordinate along the
interface and taking into account that the slope of the interface
$\delta=\arcsin(dy/ds)$ is the angle between the unit tangential vector
$\mathbf{\tau}$ and the $x-$axis, we obtain
$F^{2}v^{2}\frac{d\ln v}{ds}=-\sin\delta-F^{2}D\left[\frac{d^{4}\beta}{ds^{4}}+\frac{3}{2}\left(\frac{d\beta}{ds}\right)^{2}\frac{d^{2}\beta}{ds^{2}}\right].$ (40)
For a free surface without an elastic sheet ($D=0$) and for small disturbances
such that $\sin\delta\approx\delta$, the above equation can be written as
$v^{2}\frac{d\ln v}{ds}=-\frac{2\pi}{\lambda}\delta,$ (41)
where the wave length $\lambda=\lambda_{0}=2\pi F^{2}$ is known from the
linear theory of gravity waves Lamb . In the presence of the sheet ($D\neq
0$), waves of small amplitude approach a sinusoidal curve. Therefore,
their slope can be written as
$\delta(s)=\delta_{max}\cos\left(\frac{2\pi s}{\lambda}+\phi\right),$
where $\delta_{max}$ is the amplitude, and $\phi$ is the phase of the slope
relative to point $D_{+}$, at which $s=0$. Then, neglecting the square of the
curvature, i.e. the second term in the brackets in (40), we obtain
$v^{2}\frac{d\ln v}{ds}=-\delta\left[\frac{1}{F^{2}}+D\left(\frac{2\pi}{\lambda}\right)^{4}\right].$ (42)
According to (41), the coefficient multiplying the angle $\delta$ is equal to
$-2\pi/\lambda$, where $\lambda$ is the wave length of the interface.
Therefore, the following equation with respect to the wave number
$k=2\pi/\lambda$ is obtained:
$k=\frac{1}{F^{2}}+Dk^{4}.$ (43)
This equation may have one, two or no real roots, depending on the constant
$D$, which is determined by the thickness of the elastic sheet, and on the
Froude number $F$. The latter case corresponds to the subcritical flow regime,
for which the perturbation of the interface decays in both directions. The case
of one root corresponds to the critical Froude number, $F_{cr}$, for which
waves of the same length $\lambda_{cr}=2\pi/k_{cr}$ extend to infinity in both
directions. Differentiating (43) with respect to $k$ and equating the result
to zero, after some manipulations we obtain
$k_{cr}=\sqrt[4]{4}\left(\frac{\rho gL(1-\nu^{2})}{Eb_{i}^{3}}\right)^{\frac{1}{4}},\quad F_{cr}=\left(\frac{64}{81}\,\frac{Eb_{i}^{3}}{\rho gL(1-\nu^{2})}\right)^{\frac{1}{8}}.$ (44)
For $F>F_{cr}$, Eq. (43) has two roots, $k_{w}<k_{cr}$ due to gravity and
$k_{ice}>k_{cr}$ due to the elastic sheet. We note that Eq. (42) is valid
along the whole interface, $-\infty<x<\infty$, so both waves associated with
the wave numbers $k_{w}$ and $k_{ice}$ may appear upstream and downstream of
the submerged cylinder. We note that the depth of submergence of the body does
not appear in the dispersion equation, and therefore it does not influence the
wave numbers. However, we expect that the depth of submergence influences the
wave amplitude, similar to what is observed for free-surface flows Sem_Wu2020 .
The roots of equation (43) for different Froude numbers and different
thicknesses of the elastic sheet are shown in Fig. 2.
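For a given $F$ and the corresponding value of $D$, the two supercritical roots
of (43) can be bracketed by the minimiser of $g(k)=1/F^{2}+Dk^{4}-k$; a short
sketch (Python; the value $D\approx 1.42$ below is back-fitted from the wave
numbers quoted in Section III.2 and is only an assumption):

```python
import numpy as np
from scipy.optimize import brentq

def dispersion_roots(F, D):
    """Real positive roots of Eq. (43), k = 1/F**2 + D*k**4.

    Returns (k_w, k_ice) for a supercritical F, or None in the subcritical case."""
    g = lambda k: 1.0 / F**2 + D * k**4 - k
    k_min = (4.0 * D) ** (-1.0 / 3.0)        # minimiser of g(k), where g'(k) = 0
    if g(k_min) > 0.0:
        return None                           # no real roots: subcritical regime
    k_w = brentq(g, 1e-12, k_min)             # gravity branch, k_w < k_cr
    k_ice = brentq(g, k_min, 10.0 * k_min)    # elastic-sheet branch, k_ice > k_cr
    return k_w, k_ice

# with the assumed D ~ 1.42 and F = 2 this returns roots close to the values
# k_w = 0.256 and k_ice = 0.782 quoted in Section III.2
print(dispersion_roots(2.0, 1.42))
```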
Figure 2: Wave number vs. Froude number for different thicknesses of the
elastic sheet: $b_{i}=0$ (solid line), $0.05$ (dashed line), $0.1$ (dotted
line), $0.2$ (dot-dashed line), $0.5$ (short dashed line).
Without an elastic sheet, the constant $D=0$, and Eq. (43) becomes $k=1/F^{2}$,
which corresponds to the hyperbola in Fig. 2. For thickness $b_{i}>0$, the
critical Froude number appears as the minimal Froude number, to which correspond
a single root and the vertical slope of the curve. A larger relative thickness
of the elastic sheet results in a larger critical Froude number.
The dispersion equation predicts only the possible waves; the contribution of
each wave to the shape of the interface has to be determined from the
solution of the nonlinear problem.
## III Results
### III.1 Numerical approach.
In discrete form, the solution is sought on a fixed set of points
$\xi_{j}$, $j=1,N$, distributed along the side $O_{-}O_{+}$,
$0\leq\xi\leq\pi$, $\eta=0$, and a fixed set of points $\hat{\xi}_{i}$,
$i=1,M$, distributed along the side $D_{-}D_{+}$, $i\eta=\pi\tau/4$, of the
parameter region. The points $\xi_{j}$ are distributed so as to provide a
higher density of the points $s_{j}=s_{b}(\xi_{j})$ near the stagnation points
$A(\zeta=a)$ and $B(\zeta=b)$. The distribution of the points $\hat{\xi}_{i}$
is chosen so as to provide a higher density of the points
$s_{i}=s(\hat{\xi}_{i})$ closer to the body and a uniform distribution for
$|s_{i}|>\lambda$.
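A node distribution of this kind can be generated by inverting the cumulative
integral of a density that peaks near the stagnation points; the sketch below
is one hypothetical choice (the text prescribes only the clustering, not a
specific stretching function):

```python
import numpy as np

def clustered_nodes(n, a, b, strength=5.0, width=0.1):
    """Hypothetical node distribution on [0, pi] with higher density near the
    stagnation points xi = a and xi = b."""
    t = np.linspace(0.0, np.pi, n)
    # density weight: large near a and b, of order one elsewhere
    w = 1.0 + strength * (np.exp(-((t - a) / width) ** 2)
                          + np.exp(-((t - b) / width) ** 2))
    cdf = np.cumsum(w)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])    # normalised CDF of the density
    # invert the CDF: uniform samples in [0,1] map to nodes clustered where w is large
    return np.interp(np.linspace(0.0, 1.0, n), cdf, t)
```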
The numbers of nodes on the body and the interface are chosen in the ranges
$N=100$ to $200$ and $M=1000$ to $5000$, respectively, based on the requirement
to provide at least $80$ nodes within the shorter waves in order to obtain
convergence and reasonable accuracy of the solution. The nodes $\hat{\xi}_{k}$,
$k=1,\bar{K}$, used for the interpolation of the interface,
$y_{k}=y(\hat{\xi}_{k})$, $s_{k}=s(\hat{\xi}_{k})$, are chosen from the set of
points $\hat{\xi}_{i}$ such that $i=4k$. Then, $\bar{K}=M/4=250$ to $1000$
provides 20 nodes within the shorter wave length, at which the system of
nonlinear equations (35) is solved.
The integrals appearing in Eq. (10) are evaluated based on the linear
interpolation of the functions $\beta_{b}(\xi)$ and $\ln v(\xi)$ on the
segments $(\xi_{j-1},\xi_{j})$ and $(\hat{\xi}_{i-1},\hat{\xi}_{i})$,
respectively. They are evaluated as follows:
$\frac{1}{\pi}\int_{0}^{\pi}\frac{d\beta_{b}}{d\xi}\ln\left(\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{4}(\zeta-\xi)}\right)d\xi=\sum_{j=1}^{N}\Delta\beta_{bj}\,[r_{j}(\zeta)+iq_{j}(\zeta)],$ (45)
$\frac{i}{\pi}\int_{\pi}^{0}\frac{d\ln v}{d\xi}\ln\left(\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{4}(\zeta-\xi-\pi\tau/4)}\right)d\xi=\sum_{i=1}^{M}\Delta\ln v_{i}\,[\hat{r}_{i}(\zeta)+i\hat{q}_{i}(\zeta)],$ (46)
where $\Delta\beta_{bj}=\beta_{b}(\xi_{j})-\beta_{b}(\xi_{j-1})$, $\Delta\ln
v_{i}=\ln v(\hat{\xi}_{i})-\ln v(\hat{\xi}_{i-1})$,
$r_{j}(\zeta)=\frac{1}{\pi\Delta\xi_{j}}\int_{\xi_{j-1}}^{\xi_{j}}\ln\left|\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{4}(\zeta-\xi)}\right|d\xi,$ (47)
$q_{j}(\zeta)=\frac{1}{\pi\Delta\xi_{j}}\int_{\xi_{j-1}}^{\xi_{j}}\arg\left(\frac{\vartheta_{1}(\zeta-\xi)}{\vartheta_{4}(\zeta-\xi)}\right)d\xi,$ (48)
$\hat{r}_{i}(\zeta)=-\frac{1}{\pi\Delta\hat{\xi}_{i}}\int_{\hat{\xi}_{i-1}}^{\hat{\xi}_{i}}\arg\left(\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{4}(\zeta-\xi-\pi\tau/4)}\right)d\xi,$ (49)
$\hat{q}_{i}(\zeta)=\frac{1}{\pi\Delta\hat{\xi}_{i}}\int_{\hat{\xi}_{i-1}}^{\hat{\xi}_{i}}\ln\left|\frac{\vartheta_{1}(\zeta-\xi-\pi\tau/4)}{\vartheta_{4}(\zeta-\xi-\pi\tau/4)}\right|d\xi.$ (50)
The integrals (47) - (50) are evaluated using the $8$-point Gauss-Legendre
quadrature formula. The error of the solution within the intervals of
interpolation $(\hat{\xi}_{k-1},\hat{\xi}_{k})$ satisfies the relation
$\sum_{i=4k-4}^{4k}|G_{i}(\bar{V}^{\ast})|<10^{-3},$ (51)
which is regarded as giving a sufficiently accurate solution of the problem.
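As an illustration, the coefficient $r_{j}(\zeta)$ of Eq. (47) can be evaluated
per segment as follows (a Python sketch; for the segment containing a real
$\zeta$ the logarithmic singularity is integrable, and the interior
Gauss-Legendre nodes avoid it):

```python
import numpy as np
from mpmath import jtheta, log, fabs

nodes8, weights8 = np.polynomial.legendre.leggauss(8)   # 8-point Gauss-Legendre rule

def r_j(zeta, xi_lo, xi_hi, q):
    """Coefficient r_j(zeta) of Eq. (47) for one segment [xi_lo, xi_hi]."""
    mid, half = 0.5 * (xi_hi + xi_lo), 0.5 * (xi_hi - xi_lo)
    total = 0.0
    for x, w in zip(nodes8, weights8):
        xi = mid + half * x                              # map node to the segment
        ratio = jtheta(1, zeta - xi, q) / jtheta(4, zeta - xi, q)
        total += w * float(log(fabs(ratio)))
    return half * total / (np.pi * (xi_hi - xi_lo))      # prefactor 1/(pi*dxi_j)
```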
The method of successive approximations is adopted to solve the
integro-differential equation (25), which in discrete form becomes
$\frac{(\Delta\beta_{b})^{(k+1)}_{j}}{\Delta\xi_{j}}=\frac{\beta_{b}[s_{b}^{(k)}(\xi_{j})]-\beta_{b}[s_{b}^{(k)}(\xi_{j-1})]}{\Delta\xi_{j}},\qquad j=1,\ldots,N,$ (52)
where the arc length along the body, $s_{b}^{(k)}(\xi)$, is evaluated using
(23) with $(\Delta\beta_{bj}/\Delta\xi_{j})^{(k)}$ known at the $k$-th
iteration. The iteration process converges very quickly: after $5$ to $10$
iterations the error is below a prescribed tolerance of $10^{-6}$.
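The iteration (52) amounts to alternating between updating the arc length
$s_{b}(\xi)$ and re-sampling the known body slope; a minimal sketch (Python;
the routine dsb_dxi standing for the evaluation of $|dz/d\zeta|$ on the body
is a placeholder):

```python
import numpy as np

def solve_beta_b(beta_of_s, dsb_dxi, xi, S_b, tol=1e-6, max_iter=20):
    """Successive approximations for beta_b(xi), Eq. (52).

    beta_of_s : known body shape, slope as a function of arc length s_b
    dsb_dxi   : callable returning |dz/dzeta| on the body for the current
                beta_b(xi) (it hides the evaluation of the mapping function)
    """
    s_b = S_b * xi / np.pi                    # initial guess: uniform arc length
    beta = beta_of_s(s_b)
    for _ in range(max_iter):
        d = dsb_dxi(beta)                     # ds_b/dxi on the body, Eq. (23)
        s_new = np.concatenate(([0.0], np.cumsum(
            0.5 * (d[1:] + d[:-1]) * np.diff(xi))))
        if np.max(np.abs(s_new - s_b)) < tol:
            break
        s_b = s_new
        beta = beta_of_s(s_b)                 # update the slope, Eq. (52)
    return beta
```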
The derivative of the mapping function (13) has a second-order singularity
at the point $\zeta=c+\pi\tau/4$. Therefore, the points $\hat{\xi}_{i}$, $i=1,M$,
along the side $D_{-}D_{+}$ of the parameter region are distributed within the
two intervals $c+\varepsilon_{1}<\hat{\xi}_{i}\leq\pi$, $i=1,M_{1}$, and
$0\leq\hat{\xi}_{i}<c-\varepsilon_{2}$, $i=M_{1}+1,M$. These intervals
correspond to the parts $D_{-}C_{-}$ and $D_{+}C_{+}$ of the interface
$C_{-}C_{+}$ in the physical plane. The values $\varepsilon_{1}$ and
$\varepsilon_{2}$ are chosen to provide the required lengths of the parts
$D_{-}C_{-}$ and $D_{+}C_{+}$.
### III.2 Convergence study of the numerical method.
The formulation of the problem allows us to consider free-surface flow
around a submerged circular cylinder if we choose zero thickness of the
elastic sheet. This case was investigated in Sem_Wu2020 using the method
of successive approximations. The results based on the present collocation
method and those based on successive approximations are shown in Fig. 3.
Figure 3: Verification of the numerical procedure comparing the shape of the
free surface using different methods: present collocation method (solid line);
successive approximation Sem_Wu2020 (dashed line); numerical solution Scullen
(symbols). Froude number $F=2.75$, depth of submergence $h=7.55$.
Without an elastic sheet, the submerged body generates a progressive wave
downstream only. The free surface upstream, for $x<-\lambda$, tends to be
parallel to the $x-$axis. This property was used in the model of Semenov & Wu
Sem_Wu2020 to determine a parameter which affects the velocity magnitude in
the damping region. Both the present numerical method and the computational
procedure of Sem_Wu2020 predict the same shape of the free surface in the
region $s_{P1}<s<s_{T1}$. However, the present damping model provides the
gradual decay of the wave that is necessary to couple the pressure coefficients
due to the flow and the bending of the elastic sheet in the numerical
procedure. The truncation of the computational region does not affect the
shape of the free surface in the interval $s_{P1}<s<s_{T1}$, as seen in Fig. 3.
Figure 4: Flexural-gravity waves generated by the submerged circular cylinder.
Thickness of elastic sheet $b_{i}=0.05$, Froude number $F=2$, critical Froude
number $F_{cr}=1.65$, depth of submergence $h=6.35$.
In order to investigate the effect of truncation in the presence of the sheet,
we consider an example of supercritical flow for the thickness $b_{i}=0.05$
and the depth of submergence $h=6.35$. From Eq. (43), we obtain the critical
Froude number $F_{cr}=1.65$. For Froude number $F=2$, the flow is
supercritical, so there are two wave numbers, $k_{w}=0.256$ and $k_{ice}=0.782$,
determined from the dispersion equation (43). These wave numbers correspond to
the wave lengths $\lambda_{w}=24.5R$ and $\lambda_{ice}=8.03R$. The wave
length corresponding to the linear theory without an elastic sheet is
$\lambda_{0}=2\pi F^{2}R$. The interface is shown in Fig. 4 for two
computational regions, $-8<x/\lambda_{0}<9$ (solid line) and
$-6<x/\lambda_{0}<7$ (dashed line). In the region $-5<x/\lambda_{0}<5$, where
the damping is absent ($C_{d}=0$) in both cases, the interfaces overlap.
Outside this region, the solid lines and the dashed lines start to diverge.
These results show that, similar to free-surface flows, the truncation of the
computational region does not affect the part of the interface without
damping. The computations in Fig. 4 are carried out with an upstream damping
region of length $x_{P1}-x_{P2}=\lambda_{0}$ and a downstream damping region
of length $x_{T2}-x_{T1}=2\lambda_{0}$. The values $C_{dL}=2$ and
$C_{dR}=10$ were used, and the numbers of nodes are $N=100$ and $M=4000$.
### III.3 Subcritical flows.
For Froude numbers $F<F_{cr}$, Eq. (43) has only complex roots, which
correspond to decaying perturbations of the interface caused by the submerged
cylinder. In Fig. 5, we show the interface profiles for the Froude number
$F=1.5$ and the relative thickness of the elastic sheet $b_{i}=0.05$ for
different depths of submergence. The interface shape is symmetric about the
$y-$axis. The shape is different from that observed for the free-surface flow
without an elastic sheet, for which the free surface is almost flat upstream
and exhibits a wave downstream. Thus, the elastic sheet supresses the waves
downstream and perturbs the flow upstream near the cylinder. As the thickness
of the elastic sheet tends to zero, the critical Froude number $F_{cr}$
decreases and become smaller than $F$. In this case, the flow becomes
supercritical, which drastically changes the interface shape. It will be
studied in the following subsection.
Figure 5: Free surface shape (a) and pressure coefficient (b) for Froude
number $F=1.5$, $F_{cr}=1.65$ and depth of submergence $h=5.37$ (solid line),
$5.60$ (dashed line), $6.35$ (dotted line) and $11.58$ (dash-dotted line).
It is expected that if the cylinder is closer to the elastic sheet, i.e. the
depth of submergence is smaller, then the interaction between the cylinder and
the elastic sheet is stronger. This is observed in Fig. 5. The deflection of
the sheet above the cylinder exhibits a trough, making the gap between the
sheet and the cylinder smaller. It is found that there is a minimal, or
critical, depth of submergence, $h_{cr}$, below which a numerical solution
cannot be obtained. For $h$ slightly larger than $h_{cr}$, a few iterations of
the system of nonlinear equations (35) are required to get a converged
solution, while for $h$ slightly smaller than $h_{cr}$, the elastic sheet
starts to oscillate, and the iterations do not converge.
Figure 6: Free surface shape (a) and pressure coefficient (b) for Froude
number $F=1.5$, $F_{cr}=1.65$ and depth of submergence $h=5.37$ (solid line),
$5.60$ (dashed line), $6.35$ (dotted line) and $11.58$ (dash-dotted line).
The interface profiles for different Froude numbers are shown
in Fig. 6a. It is seen that the critical depth becomes larger as the Froude
number approaches the critical value. The corresponding distributions of the
pressure coefficient (left axis) and the velocity magnitude (right
axis) along the interface are shown in Fig. 6b. For free-surface
flows, there is also a depth of submergence below which a steady free-surface
flow does not exist. In that case, the velocity magnitude at the crest of the
waves tends to zero, and the free surface forms a corner. The
mechanism restricting the existence of the steady flow in the presence of the
elastic sheet is different. As shown in Fig. 6b, the velocity magnitude on the
interface remains much larger than zero. The present results show that there
is a consistency condition on the interaction between the fluid, the elastic
sheet and the submerged cylinder.
### III.4 Supercritical flows.
We begin the computational analysis with a relatively high Froude number,
$F=3$, or $F/F_{cr}=1.82$, and the ice thickness $b_{i}=0.05$, for which the
dispersion equation (43) has two real roots. The corresponding wave numbers
are $k_{ice}=1.13$ and $k_{w}=0.1113$. The second wave number almost coincides
with that obtained from the linear theory of gravity waves without an elastic
sheet, $k_{0}=1/F^{2}=0.1111$. The ratio of the wave lengths of the waves
generated by the submerged cylinder and by the elastic sheet is
$\lambda_{w}/\lambda_{ice}=10.11$.
The interfaces near the cylinder are shown in Fig. 7 for different depths of
submergence. The wave generated by the elastic sheet is clearly seen upstream
at the smaller depths, and its amplitude decays as the depth of submergence
increases. For $h=6.13$ the wave upstream almost disappears. The wave
generated by the cylinder downstream is not completely seen, since its length
exceeds the length of the interface shown in Fig. 7. A longer portion of the
interface is shown in Fig. 8. For submergence $h=6.13$, the interface
coincides with that obtained without an elastic sheet, which is shown in
Fig. 8 by the dashed line.
Figure 7: Interface shape at different depths of submergence for Froude number
$F=3$ and thickness of elastic sheet $b_{i}=0.05$.
As the submergence decreases, the amplitude of the wave downstream increases
and reaches its maximal value at $h=2.95$; it then starts to decrease.
This feature was studied in Sem_Wu2020 . As the cylinder approaches the
free surface, it affects the free surface at smaller distances from the
cylinder, and the flow tends to be symmetric about the $y-$axis, similar to
the case $F\rightarrow\infty$.
The elastic sheet weakly influences the interface downstream but generates a
wave upstream. As the cylinder approaches the free surface, the bending of the
elastic sheet near the cylinder increases, as can be seen from Figs. 7 and 8.
This increases the amplitude of the wave upstream.
Figure 8: Effect of submergence on the interface shape for the circular
cylinder at Froude number $F=3$. The thickness of elastic sheet $b_{i}=0.05$
(solid lines) and $b_{i}=0$ (dashed lines).
The bending moment and the pressure coefficient are shown in Figs. 9 and 10
for the depths of submergence $h=6.13$ and $h=1.94$, respectively. Although
the wave due to the elastic sheet is invisible (Fig. 9$c$), the pressure
coefficient and the bending moment oscillate in both directions, upstream
(left axis in Fig. 9$b$) and downstream of the cylinder (right axis in Fig.
9$b$). The frequency of the oscillations upstream is set by the elastic sheet,
while the frequency of the oscillations downstream is due to gravity. This
qualitatively agrees with results based on the linear theory Savin_2012 ;
Li_2019 .
Figure 9: Bending moment ($a$), pressure coefficient ($b$) and interface shape
($c$) at Froude number $F=3.0$. The thickness of elastic sheet $b_{i}=0.05$
and depth of submergence $h=6.13$.
Figure 10: The same as in Fig. 9 at the depth of submergence $h=1.94$.
The results for Froude number $F=2.5$, or $F/F_{cr}=1.52$, and ice thickness
$b_{i}=0.05$ are shown in Figs. 11-14. The wave numbers are as follows:
$k_{ice}=0.97$, $k_{w}=0.1606$ and $k_{0}=0.1600$. The ratio of the wave
lengths is $\lambda_{w}/\lambda_{ice}=6.04$. The interface shapes for
different depths of submergence are shown in Figs. 11 and 12. They are similar
to those in Figs. 7 and 8 for $F=3.0$. However, the pressure coefficient and
the bending moment shown in Fig. 13$a$,$b$ exhibit behaviour corresponding to
a superposition of the gravity wave (longer wave) and the elastic-sheet wave
(shorter wave). The amplitude of the bending moment corresponding to the
gravity wave is larger than that corresponding to the elastic-sheet wave. The
latter becomes largest at the trough of the gravity wave, and it almost
disappears at the crest, as seen in Figs. 13$b$ and 14$b$. Such behaviour of
the bending moment and the pressure coefficient demonstrates the nonlinear
interaction of the elastic sheet and the flow, which is still not visible in
the interface profiles in Figs. 13$c$ and 14$c$.
Figure 11: Interface shape at different depths of submergence for Froude
number $F=2.5$ and thickness of elastic sheet $b_{i}=0.05$.
Figure 12: Effect of submergence on the interface shape for the circular
cylinder at Froude number $F=2.5$.
Figure 13: Bending moment ($a$), pressure coefficient ($b$) and interface
shape ($c$) at Froude number $F=2.5$ and the submergence $h=4.4$.
Figure 14: The same as in Fig. 13 at the depth of submergence $h=1.94$.
For Froude number $F=2$, or $F/F_{cr}=1.21$, the wave numbers are as follows:
$k_{ice}=0.782$, $k_{w}=0.256$ and $k_{0}=0.250$. The ratio of the wave
lengths is $\lambda_{w}/\lambda_{ice}=3.05$. The interface shapes for depths
of submergence in the range from $4.6$ to $11.6$ are shown in Figs. 15 and 16.
In the upstream direction, the wave caused by the elastic sheet becomes
visible even for the relatively large depth of submergence $h=11.6$. Its
shape is close to a sinusoid of constant amplitude. The amplitude increases as
the depth of submergence decreases. At the depth of submergence $h=4.61$, the
elastic sheet interacts with the flow in such a way that the wave due to
gravity extends in the upstream direction, and the interface exhibits a
superposition of both waves.
In the downstream direction, the interface shape differs from a wave of
constant amplitude. The shape near the body corresponds to the superposition
of the gravity and elastic-sheet waves, and then gradually approaches the
pure gravity wave. This also occurs for larger submergence but is less
visible. The contribution of the elastic-sheet wave decays downstream, and
the interface approaches the pure gravity wave far downstream.
Figure 15: Interface shape at different depths of submergence for Froude
number $F=2.0$.
Figure 16: Effect of submergence on the interface shape for the circular
cylinder at Froude number $F=2.0$.
Figure 17: Bending moment ($a$), pressure coefficient ($b$) and interface
shape ($c$) at Froude number $F=2.0$ and the submergence $h=11.6$.
Figure 18: The same as in Fig. 17 at the depth of submergence $h=4.61$.
The pressure coefficient and the bending moment along the interface are shown
in Fig. 17$a,b$ for $h=11.6$ and in Fig. 18$a,b$ for $h=4.61$. They
demonstrate the interaction between the gravity and the elastic-sheet waves.
The wave due to the elastic sheet dominates near the cylinder. It keeps a
constant amplitude in the upstream direction and gradually decays in the
downstream direction. For $x/\lambda_{0}>8$, the amplitude of the bending
moment in Fig. 18$a$ caused by the elastic sheet becomes smaller than that
caused by the gravity wave, and the oscillations become qualitatively similar
to those in Figs. 13$a$ and 14$a$. It is seen from Fig. 18$a$ that the
distance over which the bending moment caused by the elastic sheet decays is
much larger than that for $F=2.5$.
Figure 19: Interface shape at different depths of submergence for Froude
number $F=1.7$.
Figure 20: Effect of submergence on the interface shape for the circular
cylinder at Froude number $F=1.7$.
Figure 21: Bending moment ($a$), pressure coefficient ($b$) and interface
shape ($c$) at Froude number $F=1.7$ and the submergence $h=11.6$.
Figure 22: The same as in Fig. 21 at the depth of submergence $h=6.8$.
The results for Froude number $F=1.7$, which is quite close to the critical
Froude number, $F/F_{cr}=1.03$, are shown in Figs. 19-23. The wave numbers
$k_{ice}$ and $k_{w}$ approach each other, and the ratio of the wave lengths,
$\lambda_{w}/\lambda_{ice}$, becomes smaller. The interfaces are shown in
Figs. 19 and 20 for depths of submergence from $11.6$ to $6.8$. By
comparing the interfaces at the depth $h=11.6$ for the Froude numbers $2.5$
and $2$ in Figs. 12 and 16, respectively, we find that the amplitude of
the interface wave upstream increases while the amplitude downstream of the
cylinder decreases. The latter agrees with the free-surface gravity flow past
the submerged circular cylinder Sem_Wu2020 . The former indicates the larger
effect of the elastic sheet at the smaller Froude number.
The shapes of the interface for different depths of submergence in Fig. 20 are
similar to each other, but the amplitudes are different. In the upstream
direction, $x/\lambda_{0}<0$, the shapes are periodic with a period of about
$2\lambda_{0}$, which corresponds to the superposition of the two sinusoidal
gravity and elastic-sheet waves.
In the downstream direction, $x/\lambda_{0}>0$, the shape of the interface is
not exactly periodic because the amplitude of the elastic-sheet wave decays
slowly downstream. By comparing the shapes of the interfaces and the behaviour
of the bending moment and the pressure coefficient for different Froude
numbers, we can see that the wave due to the elastic sheet decays more slowly
as the Froude number approaches the critical value. Similar behaviour of the
bending moment and the pressure coefficient can be seen in Figs. 21 and 22.
Figure 23: Onset of the solution convergence.
The interaction between the fluid and the elastic sheet and between the fluid
and the submerged body is not always consistent for a steady flow. This
situation has some analogy with the free-surface flow past a submerged
cylinder, for which there are combinations of the Froude number and depth
of submergence at which a steady solution does not exist. As the depth of
submergence increases, the region of non-existence of the solution shrinks
and then disappears at large enough submergence. There is a maximal possible
deflection of the free surface at which the dynamic boundary condition can be
satisfied. In the presence of the elastic sheet, the situation becomes more
complicated because the dynamic boundary condition includes not only the
deflection of the interface but also its derivatives. The term due to gravity
in Eq. (36) increases as the Froude number decreases. This may produce such a
combination of the deflection, its derivatives and the velocity magnitude on
the interface that the pressure distribution for the fluid (36) and for the
elastic sheet (37) cannot be the same, i.e. the dynamic boundary condition (4)
cannot be satisfied. For a supercritical flow, the fluid forces dominate the
elastic forces, while for subcritical flows it is the other way around. This
changes the flow configuration near the critical Froude number. The interface
shown in Fig. 19 for $F=1.7$ corresponds to the supercritical flow closest to
the onset beyond which a converged solution can be obtained. It forms a hill
over the cylinder, while for the subcritical flow in Fig. 6, for $F=1.5$, the
interface forms a trough. The different limits of the flow configurations for
sub- and supercritical regimes indicate an inconsistency of the fluid and
elastic forces in some range near the critical Froude number. As the
submergence increases, the deflection of the interface decreases, as does the
range of Froude numbers in which the steady flow and the elastic sheet are not
consistent. This can be seen in Fig. 23, where the onset of the region of
existence of the steady solution is shown in the plane of the parameters
Froude number vs. depth of submergence.
## IV Conclusions
A fully nonlinear numerical solution for the problem of steady gravity flow
past a body submerged beneath an elastic sheet is presented. It combines a
nonlinear analytical solution for the fluid part of the problem with the
nonlinear Cosserat plate model applied to the elastic sheet, which are coupled
through the numerical procedure. The solution of the fluid part of the
problem is based on the integral hodograph method, employed to construct the
complex potential of the flow, and on Jacobi's elliptic theta functions to
deal with the doubly connected fluid domain. The curvature and higher-order
derivatives of the fluid boundary involved in the nonlinear Cosserat plate
model have been evaluated using spline interpolation. The coupled problem has
been reduced to a system of nonlinear equations with respect to the unknown
magnitude of the velocity on the interface, which is solved using a
collocation method.
Steady solutions of the fully nonlinear problem were computed for sub- and
supercritical regimes. For subcritical regimes, the dispersion equation has
no real roots. The elastic sheet exhibits its largest deflection above the
cylinder, and the deflection rapidly decays away from it. The deflection forms
a curve symmetric about the $y-$axis with a trough above the cylinder. The
trough becomes larger as the depth of submergence decreases. These results
qualitatively agree with linear solutions Savin_2012 ; Li_2019 . At the same
time, the present nonlinear solution revealed a critical depth below which the
deflection of the interface cannot provide a balance between the bending and
hydrodynamic forces in a steady flow. The critical submergence increases as
the Froude number approaches the critical Froude number.
For supercritical regimes, the dispersion equation has two positive real
roots, which correspond to two wave numbers. The smaller wave number is close
to that corresponding to the gravity wave behind the cylinder without an
elastic sheet, and the larger wave number appears due to the presence of the
elastic sheet. The dispersion equation does not restrict the flow regions in
which the waves may occur, i.e. upstream or downstream of the cylinder. From
the linear theories Savin_2012 ; Li_2019 , it was found that the gravity wave
occurs downstream of the cylinder, while the wave due to the elastic sheet
occurs upstream. The present nonlinear solution revealed that the waves may
occur in both directions, but their amplitudes in each direction significantly
depend on the perturbation of the interface and the ratio $F/F_{cr}$.
The calculations are presented for the thickness of the elastic sheet
$b_{i}=0.05$, which corresponds to the critical Froude number $F_{cr}=1.65$.
For a larger thickness of the sheet, the critical Froude number increases. We
expect that the flow configurations for larger thicknesses of the ice sheet
will be similar to those for $b_{i}=0.05$ at the same ratio $F/F_{cr}$.
At high Froude numbers, $F/F_{cr}>1.8$, and depths of submergence $h>6$, the
interface shape is almost the same as without the elastic sheet.
However, the effect of the elastic sheet can be seen in the behaviour of the
bending moment and the pressure coefficient upstream. They oscillate with the
wave number corresponding to the elastic sheet. The amplitude of these
oscillations is larger than that corresponding to the gravity wave downstream.
At smaller submergence, the perturbation of the interface increases, and it
becomes visible upstream. At very small depths of submergence, the elastic
sheet starts to affect the whole interface: downstream, the amplitude of the
gravity wave slightly decreases, while upstream the gravity wave is excited in
addition to the elastic-sheet wave. The interface is then a superposition of
these two waves.
As the Froude number decreases and approaches the critical value $F_{cr}$, the
wave caused by the elastic sheet can be observed in both directions. Its
amplitude is constant in the upstream direction. In the downstream direction,
the contribution of the elastic wave to the resulting shape decays. The closer
the Froude number $F$ is to the critical value $F_{cr}$, the slower the decay.
Similar to subcritical regimes, there is a critical submergence below which
a steady supercritical solution cannot be obtained. The closer the Froude
number is to the critical value $F_{cr}$, the larger the critical depth of
submergence. This may be caused by the interaction of the two waves of
similar wave length, which may produce a resonance phenomenon.
## DATA AVAILABILITY
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Squire et al. (1988) V. A. Squire, W. H. Robinson, P. J. Langhorne, and T. G. Haskell, "Vehicles and aircraft on floating ice," Nature 333, 159-161 (1988).
* Squire et al. (1996) V. A. Squire, R. J. Hosking, A. D. Kerr, and P. J. Langhorne, "Moving Loads on Ice Plates" (Kluwer, 1996).
* Korobkin et al. (2011) A. Korobkin, E. I. Părău, and J.-M. Vanden-Broeck, "The mathematical challenges and modelling of hydroelasticity," Phil. Trans. R. Soc. Lond. A 369, 2803-2812 (2011).
* Guyenne and Părău (2012) P. Guyenne and E. I. Părău, "Computations of fully nonlinear hydroelastic solitary waves on deep water," J. Fluid Mech. 713, 307-329 (2012).
* Guyenne and Părău (2017) P. Guyenne and E. I. Părău, "Numerical study of solitary wave attenuation in a fragmented ice sheet," Phys. Rev. Fluids, 2, 034002 (2017).
* (6) Z. F. Li, G. X. Wu and C. Y. Ji, "Wave radiation and diffraction by a circular cylinder submerged below an ice sheet with a crack," J. Fluid Mech. 845, 682-712 (2018).
* (7) D. Das and B. N. Mandal, "Oblique wave scattering by a circular cylinder submerged beneath an ice-cover," Int. J. Eng. Sci. 44, 166-179 (2006).
* (8) I. V. Sturova, "Radiation of waves by a cylinder submerged in water with ice floe or polynya," J. Fluid Mech. 784, 373-395 (2015).
* (9) L. A. Tkacheva, "Oscillations of a cylindrical body submerged in a fluid with ice cover," J. Appl. Mech. Tech. Phys. 56, 1084-1095 (2015).
* (10) A. A. Savin and A. S. Savin, "Ice cover perturbation by a dipole in motion within a liquid," Fluid Dyn. 47, 139-146 (2012).
* (11) K. Shishmarev, T. Khabakhpasheva, A. Korobkin, "Ice response to an underwater body moving in a frozen channel," Applied Ocean Research 91, 101877 (2019).
* (12) Z. F. Li, G. X. Wu, and Y. Y. Shi, "Interaction of uniform current with a circular cylinder submerged below an ice sheet," Appl. Ocean Res. 86, 310-319 (2019).
* (13) L. M. Milne-Thomson, "Theoretical Hydrodynamics," 5th Edition (Dover Publications, 1968).
* (14) G. Birkhoff and E. H. Zarantonello, "Jets, Wakes and Cavities", (Academic Press, 1957).
* (15) M. I. Gurevich, Theory of Jets in Ideal Fluids (Academic Press, 1965).
* (16) L. K. Forbes and L. W. Schwartz, "Free-surface flow over a semicircular obstruction," J. Fluid Mech. 114, 299-314 (1982).
* (17) A. C. King and M. I. G. Bloor, "A semi-inverse method for free-surface flow over a submerged body," Q. J. Mech. Appl. Maths 42, 183-202 (1989).
* (18) O. M. Faltinsen and Y.A. Semenov, "The effect of gravity and cavitation on a hydrofoil near the free surface," J. Fluid Mech. 597, 371-394 (2008).
* (19) G. D. Crapper, "An exact solution for progressive capillary waves of arbitrary amplitude," J. Fluid Mech. 2, 532-540 (1957).
* (20) W. Kinnersley, "Exact large amplitude capillary waves on sheets of fluid," J. Fluid Mech., 77, 229-241 (1976).
* (21) D. G. Crowdy, "Exact solutions for steady capillary waves on a fluid annulus," J. Nonlinear Sci. 9, 615-640 (1999).
* (22) L. W. Schwartz and J.-M. Vanden-Broeck, "Numerical solution of the exact equations for capillary-gravity waves," J. Flud Mech. 95, 119-139 (1979).
* (23) J.-M. Vanden-Broeck and T. Miloh, "Computations of steep gravity waves by a refinement of Davies-Tulin’s approximation," SIAM J. Appl. Math. 55 (4), 892-903 (1995).
* (24) M. G. Blyth and J.-M. Vanden-Broeck, "New solutions for capillary waves on fluid sheets," J. Fluid Mech. 507, 255-264 (2004).
* (25) M. G. Blyth, E. I. Părău, and J.-M. Vanden-Broeck, "Hydroelastic waves on fluid sheets," J. Fluid Mech. 689, 541-551 (2011).
* (26) B-S. Yoon and Y. A. Semenov, "Capillary cavity flow past a circular cylinder," Eur. J. Mech. B Fluids 28, 670-676 (2009).
* (27) E. I. Părău and F. Dias, "Nonlinear effects in the response of a floating ice plate to a moving load, "J. Fluid Mech. 460, 281-305 (2002).
* (28) F. Bonnefoy, M. H. Meylan, and P. Ferrant, Nonlinear higher-order spectral solution for a two-dimensional moving load on ice, J. Fluid Mech. 621, 215-242 (2009).
* (29) P. I. Plotnikov and J. F. Toland, "Modelling nonlinear hydroelastic waves," Phil. Trans. R. Soc. Lond. A 369, 2942-2956 (2011).
* (30) Y. A. Semenov and G. X. Wu, "Free-surface gravity flow due to a submerged body in uniform current," J. Fluid Mech. 883, A60 (2020).
* (31) N. E. Joukovskii, "Modification of Kirchhoff’s method for determination of a fluid motion in two directions at a fixed velocity given on the unknown streamline", Math. Sbornik. 15 (1), 121-278 (1890).
* (32) J. H. Michell, "On the theory of free stream lines", Phil. Trans. R. Soc. Lond. A 181, 389-431 (1890).
* (33) J. S. Terentiev, A. G. Kirschner, and I. N. Uhlman, "The Hydrodynamics of Cavitating Flow." (Backbone Publishing Company, 2011).
* (34) Y. A. Semenov and G. X. Wu, "Free-surface gravity flow due to a submerged body in uniform current", J. Fluid Mech. 883, A60 (2020).
* (35) H. Lamb, _Hydrodynamics._ Cambridge University Press, (1932).
* (36) D. Scullen and E. O. Tuck, "Nonlinear free-surface flow computations for submerged cylinders", J. Ship Res. 39, 185-193 (1995).
* Smith et al. (2018) F. Smith, A. Korobkin, E. Părău, D. Feltham, and V. Squire, "Modelling of sea-ice phenomena," Phil. Trans. R. Soc. Lond. A 376 , 20180157 (2018).